Manual Calibration

This wiki will help you with a manual calibration assignment. Please let the DRM know if you foresee difficulties in completing your assignment, so that the dataset can be re-assigned if necessary. Data reducers are not allowed to reduce data from projects where they are a PI or co-I; if you are assigned data from such a project, please let your DRM know.

If this is a Total Power dataset, please use the Total Power manual calibration wiki instead: SingleDish

Cycle 4 Solar SD

Cycle 4 Solar INT

ExecutionBlockLifeCycle


Environment Requirements

All manual calibration should be performed on a cvpost node.

Required Project Information

  1. You should have received an email with a manual calibration assignment. That email contains the project code, project ID, SB name, MOUS, and ASDMs.
    • For DAs: Go to bugs.nrao.edu – search for your NADR ticket, or look for tickets assigned to you.
    • For all: current assignments can be found at: DRMWhiteboard
  2. If you have received a JIRA-SCOPS notification email, the number of the JIRA-SCOPS ticket associated with your project can be found in that email. Otherwise, search for your project code on the ALMA JIRA page. In the results, locate the SCOPS ticket associated with your project with a summary description of "Science Project Data Reduction". This is where you are going to communicate with the DRMs.
  3. Open the ALMA Observing Tool (OT) and load the file associated with your project from the ALMA archive.
  4. Familiarize yourself with the project goals; these will influence your reduction methods.
  5. If this dataset is a pipeline failure, the weblog can be used while working on the manual calibration. Contact the DRM to obtain the weblog.

Reference Materials

Topics and links to references:

  • flux = bandpass = phase: http://jira.alma.cl/browse/CSV-3277
  • Tsys: https://safe.nrao.edu/wiki/bin/view/ALMA/TroubleTsys ; https://wikis.alma.cl/bin/view/Main/Cycle1Redux#Tsys
  • Bandwidth switching reduction: https://safe.nrao.edu/wiki/bin/view/ALMA/NarrowBandWidthSwitchingReduction ; https://wikis.alma.cl/bin/view/Main/Cycle1Redux#Bandwidth_Switching
  • Data weights and combination: https://casaguides.nrao.edu/index.php/DataWeightsAndCombination
  • Ephemeris: https://staff.nrao.edu/wiki/bin/view/NAASC/NAImaging_attachephem
  • WVR (including WVR crashes): https://safe.nrao.edu/wiki/bin/view/ALMA/TroubleWVR ; https://wikis.alma.cl/bin/view/Main/Cycle1Redux#WVR
  • Low signal-to-noise reduction (faint calibrators, narrow spws): https://safe.nrao.edu/wiki/bin/view/ALMA/LowSignalToNoiseReduction
  • Bandpass: https://wikis.alma.cl/bin/view/Main/Cycle1Redux#Bandpasses
  • Incorrect baseline solutions: https://wikis.alma.cl/bin/view/Main/Cycle1Redux#Incorrect_baseline_solutions_ant
  • 12-m datasets with some 7-m antennas: https://wikis.alma.cl/bin/view/Main/Cycle1Redux#12_m_datasets_with_some_7_m_ante
  • 7-m datasets with some 12-m antennas: https://wikis.alma.cl/bin/view/Main/Cycle1Redux#7_m_datasets_with_some_12_m_ante
  • Atmospheric lines in bandpass: https://wikis.alma.cl/bin/view/Main/Cycle1Redux#Issues_that_we_think_have_been_f
  • Long baseline: LBC
  • Polarization: 3C286 Polarization Guide; Polarization Reduction Tips
  • Imaging team wiki: General Tips

Manual Calibration

Staging
  • In your lustre area (/lustre/naasc/sciops/qa2/your_username), the following directory structure has been created for you:
    • 2013.1.00661.S_uid_A001_X???_X??? (note that slashes in the MOUS code are replaced with underscores)
    • 2013.1.00661.S_uid_A001_X???_X???/Calibration_X??1 (where X??1 is the final extension of the first ASDM)
    • 2013.1.00661.S_uid_A001_X???_X???/Calibration_X??N (where X??N is the final extension of the Nth ASDM)
    • 2013.1.00661.S_uid_A001_X???_X???/Imaging
  • Do not edit the names of directories and files created by the scripts. The data packager expects certain file and directory names.

  • If you ran the staging script as described in the previous Setup instructions, it should have already placed an ASDM in each Calibration_Xxxxx directory. If it has not, download each ASDM (also known as an execution block, or EB) using asdmExportLight and put it in the corresponding Calibration_X00N directory.
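The slash-to-underscore naming convention described above can be illustrated with a small Python helper (mous_dirname is a hypothetical name for illustration, not a staging-script function; the uid below is an example value):

```python
def mous_dirname(project_code, mous_uid):
    """Build the staging directory name used on this wiki:
    the '://' and '/' in the MOUS uid become underscores."""
    return project_code + '_' + mous_uid.replace('://', '_').replace('/', '_')

# Hypothetical example:
mous_dirname('2013.1.00661.S', 'uid://A001/X13e/X1fe')
# -> '2013.1.00661.S_uid_A001_X13e_X1fe'
```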

Calibration

Calibration should be done with: CASA 4.6

  1. In each Calibration_X00N directory:
    • Generate the standard script to reduce ALMA data by running the following CASA task: es.generateReducScript('asdmname').
    • For Long baseline Campaign data (Oct/Nov 2015), the script generator must be called with the option lbc=True, in CASA 4.6.
    • Before you run the data reduction script generated by the previous step you must declare a Python list called "mysteps". This controls which steps are executed when you start the script. For example, setting mysteps = [2,3,4] will execute only steps 2, 3, and 4. Setting mysteps = [] will execute all steps. You can run the script ( how to run scripts in CASA) one step at a time or run all the steps at once and review the output afterwards.
    • If you already have a good idea of modifications to implement (see "Here are some suggestions to evaluate the quality of the calibration" below) you could start to implement those edits in the script even before running it.
    • Analyze the visibilities, models and calibration tables to identify problems (use your experience from the CASA Guides to help you determine what's problematic).

      • Tip: Every task, including plotcal (which is run inside the various checkCalTable scripts), can be run directly from the CASA command line; this is helpful when troubleshooting, to better understand and check the effects of changing individual parameters. In the CASA window, type (for instance) "help plotcal" to read about the task and its inputs, "inp plotcal" to see the task's current settings, and "go" to run it once the inputs are correct.
      • Check for error/warning messages sent to the CASA window or logger
      • Look for bad data sections in the calibrated dataset (split.cal), using plotms to plot amplitude and phase as a function of frequency, time, and uv-distance. You can look for noisy time ranges, unexpected spectral behavior, large differences of gains between antennas, large temporal gain variations, etc. While it may be hard to check the data quality on the source (if it is faint, or has spectral lines and spatial structure), the phase and bandpass calibrators should be point sources with a flat spectrum in each spw. Once you have identified bad data sections, you may want to flag them in the script, or find ways to improve the calibration.
      • Check the solution tables (Tsys, bandpass, gain) directly using the plotcal and plotbandpass commands, or through the png files automatically generated by the calibration script. You may want to note down data sections which appear to have no or poor solutions, large phase offsets, phase drifts as a function of time or frequency, or spectral features in the Tsys or bandpass. Note that even if the solutions look ratty, they may still be acceptable if they do a good job of calibrating the sources.
      • Check the before/after WVR solutions for antennas. This is done by comparing the blue (before) and green (after) plots in the "wvr_plot.png" file in the qa2 directory, and also by looking at the tables at the end of the uid*.ms.wvrgcal directory in the calibration directory.
      • It is good practice to verify whether the fluxes in the setjy calls are sensible. For grid source fluxes, the calibrator database can be queried using au.getALMAFluxForMS or au.getALMAFlux. For Solar System flux calibrators, the expected value can be found using https://safe.nrao.edu/wiki/bin/view/ALMA/PlanetFlux .
      • You can get a sense of the flagging rate at different stages of the calibration by using amc.getFlagStatistics: https://safe.nrao.edu/wiki/bin/view/ALMA/GetFlagStatistics
      • A few other suggested general modifications to the data reduction script:
        • In step 0, a measurement set is created. Add os.system('rm -rf [ASDMuid].ms.flagversions') to delete the .flagversions directory just before the call to importasdm.
        • Before the very first flag commands, it can be useful to add an additional flagmanager call to save a snapshot of the flag state before any flagging is done. You will then be able to 'reload' the original flag state without having to re-import the ASDM. In particular, this can save you time if you are trying several different Tsys application calls.

    • If some antennas/spws/timeranges have bad data which calibration does not appear to correct, you can flag them before deriving calibration solutions (in the 'initial flagging' step of the calibration script).
    • If the WVR corrections are not good for a given antenna, interpolated WVR corrections from neighboring antennas can be used. To do this, add the following parameter to the wvrgcal command: wvrflag=['DA59'] (if DA59 is the antenna with the bad WVR). Note that if WVR is applied to the data, it must be applied to all antennas; otherwise differential delays between the corrected and uncorrected antennas may show up. Additional references: WVR and the Reference table above.
    • If the bandpass corrections have large amplitude corrections (greater than 5% channel-to-channel or so), you may want to re-do the bandpass solution with a larger solint (for frequency, not time), for example solint='inf,1MHz'. A suggestion for the solint to use for a given dataset can be obtained from aU.bandpassPreAverage.
    • If the bandpass solutions are not satisfactory (for example, solutions with a large amount of flagged data), the issue may actually be in the first 'gaincal' call in the bandpass step. The role of this first gaincal is to flatten the phases. Look at the corresponding tables. To improve those solutions, you can try to increase the spw selection in the gaincal call, or change the solint to 'inf'. If the bandpass calibrator is also the phase calibrator, combine='scan' could be used. Note that using the LowSNR=True option when generating the calibration script will also result in the spw selection being increased.
    • if you see a spectral feature in the calibrators: follow the instructions here https://docs.google.com/document/d/1aM_hlGJnADMraBVmaYP4jiWHOKVuOGYLSdPmcsixXWA/edit.
      • In particular, if there is a feature in the bandpass calibrator which affects the bandpass calibration solutions and which it is not desirable to apply to the source, the feature can be flagged in the bandpass calibrator before the bandpass solutions are derived. The 'fillgaps' parameter may be used in the bandpass call to interpolate over the flagged section.
      • If there is a strong feature on the gain calibrator, the gain solutions may be affected. One can flag the affected spectral regions in the gain calibrator before deriving the solutions.
    • If the gain solutions are not satisfactory (lots of data flagged, scatter), there are different ways to improve them: increase the integration time (solint parameter), play with the minimum SNR (minsnr parameter), change the gaintype to 'T', or change the reference antenna. Combining spws could also be used (see LowSnrNarrowBandReduction and CAS-7400). As a rule, if there is a very narrow (~60 MHz) FDM window, it is a good idea to pre-emptively remap the phases for that window.
    • If the phase_int gain solutions are not satisfactory, you could use the phase_inf solutions instead of the phase_int solutions to derive the amplitude gain solutions.
    • If the observation switched between a broad bandwidth for the calibrators and very narrow bandwidth for the science targets (bandwidth switching), the auto-generated script will require some significant modifications which are described here. This presentation describes the rationale behind this technique: https://staff.nrao.edu/wiki/pub/NAASC/NAASCSep30Meeting2015/bwsw_reduction.pdf
    • Datasets with low SNR on the calibrators in one or more spws (typically because of narrow bandwidth, low phase calibrator flux, high Tsys, etc.) are likely to require spw-mapping or spw-merging for gain calibration. These modifications are discussed at https://safe.nrao.edu/wiki/bin/view/ALMA/LowSignalToNoiseReduction (in the process of being updated).
    • Band 8 and 9 datasets can also often be affected by low SNR on the calibrators. Todd's suggestions on how to handle those are gathered in CAS-7400. Note that the PhaseDiff option in generateReducScript will add the phase offset calculation step to the script, but you can also add it manually to the default script.
    • With Cycle 3 long-baseline (LB) data, you may need several low-SNR tricks: spw combining for the gain calibration, smoothing of the bandpass calibration, and increasing the spw selection for the pre-bandpass phase calibration.
  2. If you made changes to the script, be sure to annotate them in the script. You can now execute the modified script; repeat this cycle (editing, running) as many times as needed, since new commands may surface other problems. The modified script may need to be re-run from the beginning. When re-running the script entirely, please remove the .ms and .ms.flagversions directories, so that the ASDM is re-imported from scratch.
    • If an entire antenna, spw, or other large amount of data was flagged and the data is less than 1 month old, file a Problem Report. In addition, please make Catherine aware of any problems you find during data reduction and/or any tickets you create (for PI data or your own data).
  3. Once you are satisfied with the result, re-run the data from scratch with your updated script, setting mysteps=[]. This ensures the script runs without errors for the PI later and that there is only one clean CASA log file. Please delete all other log files.
  4. Generate a Quality Assessment (QA2) report of the dataset by running the following task: es.generateQA2Report('uid___whatever.ms', 'uid___whatever.ms.split', refAnt='refant_used_in_script'). The report will be located in a new directory called 'qa2'.
  5. Review the png and txt files produced by the script and placed in the qa2 directory; these will be sent to the PI. The textfile.txt file contains a lot of useful information, including information useful for choosing imaging parameters.
  6. Copy the .ms.split.cal directory into the /Imaging directory. Remember to copy using "cp -r", making the command recursive, to include all of the subdirectories under the .ms.split.cal directory.
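The mysteps mechanism described in step 1 works roughly as sketched below. This is a pure-Python illustration: steps_to_run and the step titles are placeholders for this example, not code copied from the generated scripts.

```python
# Placeholder step titles; the generated scriptForCalibration.py defines
# its own step_title dictionary with the real calibration steps.
step_title = {0: 'Import of the ASDM',
              1: 'A priori flagging',
              2: 'Bandpass calibration'}

def steps_to_run(mysteps, step_title=step_title):
    """Mimic the step selection of the generated scripts:
    an empty mysteps list means 'run every step'."""
    if not mysteps:
        return sorted(step_title.keys())
    return [s for s in mysteps if s in step_title]
```

So setting mysteps = [] before running the script executes every step, while mysteps = [2] would execute only the step numbered 2.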


Suggestions to evaluate the quality of the calibration

  • Check for error/warning messages sent to the CASA window or logger.
    • Before re-running a task, reload the flag state from before the task was first run; several tasks (in particular applycal calls) modify the flag state. The script contains many calls to 'flagmanager', which take a snapshot of the flag state at a given point in the calibration workflow. To reload one of these flag 'snapshots', use flagmanager with mode='restore'. Before re-running the whole script, please remove the .ms and .ms.flagversions directories, so that the ASDM is re-imported from scratch.
    • Check the png files automatically generated by the calibration script. These are usually plots of calibration tables, so check that they look continuous and have low scatter.
      • Check the before/after WVR solutions for all antennas by looking at the tables at the end of the uid*.ms.wvrgcal directory in the calibration directory. For antennas whose phases are worse after WVR correction, you may need to interpolate the WVR corrections in the 'Generation and time averaging of the WVR cal table' step. To do this, add the following parameter to the wvrgcal command: wvrflag=['DA59'].
      • Check the *.ms.split.phase_int.plots and *.ms.split.bandpass.plots that are generated in the script.
      • Check the solutions (Tsys, bandpass, gain) directly using the plotcal and plotbandpass commands.
    • It is good practice to verify whether the fluxes in the setjy calls are sensible. For grid source fluxes, the calibrator database can be queried using au.getALMAFluxForMS or au.getALMAFlux. For datasets with a non-Solar System object as an absolute flux calibrator, you can use these tools to determine a reasonable value of the fluxes for one or more quasars in the measurement set. If you use this value to set the flux of the flux calibrator, please use the field number (not name) for easier comparison with pipeline reductions.
    • At the end of calibration, use plotms to check that plots of the corrected amplitude and phase vs frequency and time are flat (on the phase and bandpass calibrators). The MS most useful to look at is uid_XXX.ms.split.cal.
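As a toy illustration (not CASA code) of the kind of per-stage summary amc.getFlagStatistics provides, flagged fractions can be computed like this. The flagged_fraction helper and the flag lists are made up for the example:

```python
def flagged_fraction(flags):
    """flags: list of booleans, True = flagged. Returns the flagged fraction."""
    return sum(flags) / float(len(flags))

# Hypothetical flag states at two stages of the calibration:
stages = {
    'initial':    [True, False, False, False],
    'after_tsys': [True, True, False, False],
}
fractions = {stage: flagged_fraction(f) for stage, f in stages.items()}
# fractions['initial'] -> 0.25, fractions['after_tsys'] -> 0.5
```

A jump in the flagged fraction between two stages points at the calibration step that introduced the extra flagging.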

Imaging

  • Use the same version of CASA that was used for calibration (currently 4.6).
  • Follow the Cycle 1, 2 and 3 Imaging Reduction from Step 5 to Step 9.

Final checklist and wrap-up

Move back up to the top-level directory (e.g. cd /lustre/naasc/sciops/qa2/uname/ProjectID_uid). You should find a README template there, which needs to be filled out:
    • Put a summary of the requested rms (with bandwidth used for sensitivity) from the OT.
    • For the 'configuration' entry, put the longest baseline.
    • Specify how the data was calibrated
    • Describe any significant issues with data (antennas flagged, large portions of data flagged, spws flagged)
    • If continuum-only data, describe the quality of the continuum images ( representative beam + rms, whether self-cal was applied or not). Specify the bandwidth used for sensitivity.
    • If line data, describe the quality of the line images at the representative frequency (representative beam + rms, whether self-cal was applied or not, continuum subtracted or not). Specify the bandwidth used for sensitivity.
    • Compare the achieved beam and rms to the requested beam and rms.
    • If this is an SB from a multi-array dataset (12m + 7m, several 12-m arrays, ...), mention that the Science Goal will be complete when combined with the other SBs, and, if you are dealing with the most compact component, that the sensitivity and resolution cannot be determined from this dataset alone.
    • Put any suggested improvements to the imaging script here. These include:
      • The use of self-calibration, if appropriate
      • The use of uvcontsub, if appropriate
      • Any additional imaging not needed for QA2 but useful for science, such as imaging additional sources, spws, spectral lines, etc.
    • For example (from a real case): "This data set was calibrated using the pipeline. The pipeline calibration appears to be reasonable, although a large amount of data (50%) has been online flagged as is typical for 7M data sets. I imaged the continuum and the HNC and HC3N lines: all were detected. The beam size is ~6.4 by 3.8 arcsec and the native resolution was 1.13km/s. The continuum RMS is 3.4mJy/beam over ~8 GHz BW while the line RMS is 56 mJy/beam in a 1.13km/s channel. I have not attempted to clean deeply since this data will be combined with 12M data and the improved uv-sampling will greatly improve the recovery of the emission. The final sensitivity of the combined 7M+12M data cannot be determined from this data set alone. However, the 7M data has the appropriate number of executions, has been successfully calibrated, and does not appear to be flagged more than usual. The central continuum source is strong enough to self-calibrate, but I have not attempted this because of the poor uv-sampling. However, I've included the necessary commands to self-calibrate the data in case the PI would like to try."

  • At the end of basic calibration you need to have for EACH execution (each ASDM):
    • A "clean" casa log file that contains all the steps run. These need to be in the individual Calibration_X*** subdirectories or they will not be picked up by the packaging script. If you have additional log files in the directory it is best to delete them at this stage.
    • A complete script with a name like uid___A002_X3c7a84_X443.ms.scriptForCalibration.py that contains all the steps you ran and any important brief notes you think the PI would need.
      • Make sure that all steps of the script will run (e.g., thesteps=[])
    • a qa2 directory

  • You need to have for the assigned MOUS:
    • A "clean" casa log file that contains imaging steps run (if it is doable to run all the imaging steps in one go), and a log file that contains all the imaging preparation steps. These need to be in the Imaging directory, or they will not be picked up by the packaging script.
    • a README file
    • a scriptForImaging.py file and a scriptForImagingPrep.py in the Imaging directory
    • .pbcor.fits and .flux.fits files in the Imaging directory
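Before posting on the SCOPS ticket, it can help to verify that the expected MOUS-level deliverables are in place. A minimal sketch, assuming the layout described above (README at the top level, scripts in the Imaging directory); missing_deliverables is a hypothetical helper, not part of the packaging tools:

```python
import os

# Deliverables named in the checklist above; extend with the fits products
# (*.pbcor.fits, *.flux.fits) as needed.
REQUIRED = ['README',
            'Imaging/scriptForImaging.py',
            'Imaging/scriptForImagingPrep.py']

def missing_deliverables(root, required=REQUIRED):
    """Return the required files that are absent under the top-level directory."""
    return [f for f in required if not os.path.exists(os.path.join(root, f))]
```

Running missing_deliverables on the top-level project directory should return an empty list before you hand the MOUS over for review.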

  • Add a note to the SCOPS data reduction ticket telling the DRM that your reduction is ready for review and telling her/him where to look for the data / QA2 plots / scripts
    • Note on the ticket any issues with the data or the data reduction, and whether the sensitivity and spatial resolution achieved reach the proposal requests. Note that in the case of multi-array SG, you cannot directly compare the achieved rms to the SG request.
    • copy the README on the ticket, and attach the calibration scripts.
    • if necessary, inform the contact scientist of additional information to be communicated to the PI or P2G group

  • If the DRM approves the MOUS as QA2 pass/ QA2_Semipass, the data will be assigned to DAs for packaging and delivery.
    • Keep the imaging, calibration and combination scripts in a safe place - they may be still useful years later
    • Information on the data delivery date can be found in the data reduction spreadsheets: Cycle 1, Cycle 2, Cycle 3
    • after the data is delivered, please move your entire data reduction package (including the README) to the /lustre/naasc/sciops/deliveries directory. For example, mv manual_uid_A001_X13e_X1fe /lustre/naasc/sciops/deliveries
  • For data that is not delivered (QA2_FAIL), please attach the imaging script to the SCOPS ticket
    • you can move the data to /lustre/naasc/sciops/qa2fails

Help

Scientific staff will have the data placed into their lustre area (/lustre/naasc/sciops/qa2). You will be notified by a comment on the project's SCOPS data reduction ticket. Since the data must first arrive from JAO, it may take some time to be staged to your lustre area.

Data analysts will stage their own data, following the instructions on their wiki page.


Arielle Moullet is the data reduction manager (DRM), Mark Lacy and Catarina Ubach are deputy data reduction managers. They should be your primary sources of advice. Crystal Brogan and Todd Hunter are also available for questions.


The primary method of communication is the SCOPS data reduction ticket through which you received the original notification about your project. Sometimes the JIRA system can be slow at sending out email notifications, in which case you might want to email the DRMs directly or talk to any of the DRMs in person.


The latest Best Practice Manual is in CSV-2809, which also tracks changes in the script generator. The NA Imaging page also has tips on best practices for imaging.


Criteria for passing QA2 for Cycle 1 and Cycle 2 are given here. For Cycle 3, the threshold on sensitivity is 10% and on beam area 20%. If the project is mainly continuum, continuum sensitivity will be the quantity taken into consideration; if it is mainly a line project, line rms is what matters. The DRMs have the final say on whether your data will be QA2_Fail, QA2_Pass, or QA2_Semipass.
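The Cycle 3 numeric thresholds quoted above can be encoded as a quick self-check. This is only a sketch: meets_cycle3_thresholds is a hypothetical helper, the beam criterion is assumed here to be symmetric around the request, and the final QA2 verdict always rests with the DRMs.

```python
def meets_cycle3_thresholds(achieved_rms, requested_rms,
                            achieved_beam_area, requested_beam_area):
    """Encode only the Cycle 3 numbers quoted above: achieved rms no more
    than 10% above the request, beam area within 20% of the request
    (symmetry of the beam criterion is an assumption of this sketch)."""
    rms_ok = achieved_rms <= 1.10 * requested_rms
    beam_ok = abs(achieved_beam_area - requested_beam_area) <= 0.20 * requested_beam_area
    return rms_ok and beam_ok
```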

The task au.gaincalSNR can be used to check if your ms has the required signal-to-noise ratio to meet QA2 or if you might need to combine spws to reach the requested goal.

The script generator will sometimes fail with ACA data because it cannot find a suitable reference antenna. You can fix this by specifying the reference antenna in the es.generateReducScript command, e.g. es.generateReducScript('uid_XXX', refant='CM02'). If the script generator still fails, comment on CSV-3013 for help.


Report the problem on CSV-2903.


Use the data reducers email list and send an email to: science_datared @ alma.cl . Subscribe by going to: https://lists.alma.cl/mailman/listinfo/science_datared


If necessary, inform the contact scientist of additional information to be communicated to the PI or P2G group. The contact scientist is listed on the P2G Project Preparation ticket linked at the top of the SCOPS ticket.

The list of known issues for the different versions of CASA can be found here.

This document shows an example calibration in CASA 4.2.2, complete with plots of calibration table solutions, Tsys, WVR and bandpass plots, and the commands that produce them.

You can file a bug report at bugs.nrao.edu. Please include specific descriptions of the command or actions that caused the crash.

Look at the ALMA project tracker here.

  1. Run wvrgcal with the questionable antennas in the wvrflag parameter. That will make wvrgcal interpolate from nearby antennas.
  2. Compare the phase rms and coherence from the original wvrgcal run to the new run with interpolated solutions
  3. Decide which, if either, version of wvr solutions you want to apply to the data.
One should not try to circumvent applycal's flagging and apply wvr solutions to only some antennas and not to others. This can result in a differential delay between the corrected and uncorrected antennas.
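Step 2 above, comparing phase rms between the two wvrgcal runs, amounts to a comparison like the following. This is a pure-Python sketch with made-up residual phases; phase_rms is an illustrative helper, not a CASA task:

```python
import math

def phase_rms(phases_deg):
    """Root-mean-square scatter of residual phases (degrees) about their mean."""
    mean = sum(phases_deg) / len(phases_deg)
    return math.sqrt(sum((p - mean) ** 2 for p in phases_deg) / len(phases_deg))

# Hypothetical residual phases on one baseline, before and after interpolating
# the flagged antenna's WVR solution:
original     = [10.0, -10.0, 10.0, -10.0]
interpolated = [5.0, -5.0, 5.0, -5.0]
use_interpolated = phase_rms(interpolated) < phase_rms(original)
```

If the interpolated run gives the lower phase rms (and better coherence), apply it; otherwise keep the original solutions.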

Go to the ALMA calibrator catalog and search for your source. For grid source fluxes, the calibrator database can be queried using au.getALMAFluxForMS or au.getALMAFlux. For Solar System flux calibrators, the expected value can be found using https://safe.nrao.edu/wiki/bin/view/ALMA/PlanetFlux .

First of all, check that you are looking at the right SG tab in the OT, and whether you are dealing with one MOUS of a multi-array SG. For example, if you are dealing with the _TC component of an SG which has both _TC and _TE components, it is probably OK if the observations were performed in a configuration too compact to reach the OT-requested resolution.

Otherwise, it is possible that the originally requested SB specs were further modified by P2G before observations (due to change requests, for example), so that the observations do not completely match the original request. There are multiple ways to check this. In the SCOPS-P2G ticket of the project, each change to the original SB is documented (look for comments containing 'Incremented version to'). If you have advanced privileges in the OT, those comments are conveniently gathered in the 'project notes': in the proposal tab, click on the proposal title in the left-side column; the project notes can be found below 'Main Project Information'. Advanced privileges may be enabled in the OT by going to File, Preferences, Advanced, and clicking on 'Enable privileged operations'. If you cannot obtain these privileges, you can ask your DRM to send you the project notes.

Finally, the exact specifications which were used for the observations can be found in the OT, within 'SG OUS', which is the last component (below Technical Justification) in the SG tree structure. Check the 'instrument setup' section for spectral setup changes.

Look at the table at the top of the SCOPS-P2G ticket. If several MOUS are listed with the same trunk name as your assigned MOUS but different endings (typically _TE, _TC, _7m), then several MOUS belong to the same Science Goal. An MOUS name ending in _TC corresponds to a 12-m compact MOUS, one ending in _TE to a 12-m extended MOUS, and one ending in _7m to an ACA MOUS.

Additional Links
  • NA Imaging Scripts Wiki page

-- CatarinaUbach - 21 Sep 2015

This topic: ALMA > Cycle2DataReduction
Topic revision: 2019-01-09, EricaKeller