Imaging Cycle 1-4 data using pipeline products

NOTIFICATIONS: READ THIS FIRST

  • Assignment timeline: Please look at the visibilities and the weblog within 2 working days of assignment. The DRM will then contact you to ask whether there are obvious issues with the dataset that would require re-processing or re-assignment of the dataset to the pipeline working group. The deadline for completing the assignment is two weeks from the date you are provided with the correct data set. Please let the DRM know if you foresee difficulties in completing your assignment in the expected timeframe, so that the dataset can be re-assigned if necessary.

  • Feel free to contact the DRMs at any stage of your imaging work with concerns and questions, either in person, by e-mail, or through the SCOPS data reduction ticket. Arielle Moullet is the data reduction manager (DRM); J Mark Lacy and Catarina Ubach are deputy data reduction managers. They should be your primary source of advice if questions come up. Crystal Brogan, Todd Hunter, and Brenda Matthews are our data reduction specialists in Charlottesville and Victoria, and have also agreed to be available for questions.

  • Data reducers are not allowed to reduce data from projects on which they are a PI or co-I. If you are assigned data from such a project, please let your DRM know and they will reassign it.

  • Data imaging is done under the same CASA version used during pipeline calibration. Currently 4.5.1, 4.5.2, and 4.5.3 are being used for pipeline calibration, and 4.6 for manual calibration. Please check in the weblog which version was used.

Instructions

  1. Once the data analysts have placed the restored pipeline products into your Lustre area, they will notify you through a comment on the project's SCOPS data reduction ticket. Communication with the DRMs is done through that ticket.
    • Each project has a "Data Reduction" SCOPS JIRA (DR) ticket and a "Data Taking" SCOPS JIRA (P2G) ticket (linked from the top of the DR ticket). The DR ticket is where you will communicate with the DRMs about progress and problems associated with the data reduction. Ask your supervisor for access to the JIRA system if you do not already have it.
    • Your assignment will be given to you via a comment (which you should get by email) on the JIRA SCOPS data reduction (DR) ticket of the project. The comment will include the project code and the MOUS name (MOUS = section of the project to be individually processed).
    • Data reducers are not allowed to reduce data from projects on which they are a PI or co-I. If you are assigned data from such a project, please let your DRM know.
    • The ALMA Project Tracker contains details about the corresponding observations (weather, technical issues). Use the "project" search tool to find your data set, click on the project code to select it, then click on the "Project Report" button to generate the PDF summary of your project (requires special privileges).
    • The detailed description of the observational setup of your MOUS can be accessed through the ALMA's Observing Tool (Cycle 3). Note that some discrepancies in the spectral setup may appear when using the Cycle 3 OT to look at Cycle 1 and 2 projects. If you have user privileges, you can open the full description of the project directly in the OT, by searching the project code in the project archive. Otherwise you can download the latest .aot file attached in the project's P2G ticket, and open it with the OT.
  2. Look for the project in the OT archive. Note that some discrepancies in the spectral setup may appear when using the Cycle 3 OT to look at Cycle 1 and 2 projects. Familiarize yourself with the science goal (SG), as well as the SG requested rms noise and resolution (from the Control and Performance OT tab, not the Technical Justification tab). Note that, for a 7m dataset, the SG requested rms and resolution do not directly apply, since those requests are intended for the combined 12m + 7m dataset. The same holds for multi-array SGs.
    Look in the SCOPS project preparation ticket (P2G ticket, linked from the top of the SCOPS data reduction ticket) in the SB table at the top of the ticket. If several SBs are listed with the same SG trunk name but different endings (typically _TE, _TC), there are several components to the SG. The SB ending in _TC corresponds to the compact component.
    Looking at the spatial setup in the weblog should give a good indication of whether the different fields overlap (and which ones do). To be sure that the data is meant to be processed as a mosaic, look at the OT. In the 'Field Setup' section of the OT, the tabs at the top of the window (below the Spectral, Spatial, and Field Setup tabs) each indicate one independent source (to be imaged on its own). Each independent source may be a mosaic, or one or several single pointings. To determine which, look at the target type toggle in the 'Source' section: if 'rectangular field' is selected, all fields corresponding to this source are intended to be processed as a mosaic. If 'custom mosaic' is selected, the fields corresponding to the source may be processed as a mosaic (if the 'custom mosaic' box in the 'Field Center Coordinates' section is checked), or as individual sources (if the box is not checked).
  3. Go to the member_ouss_id directory of the package. There should be sub-directories named calibration, calibrated, log, product, qa, and script, and possibly a text file README_JAO, which contains useful information from the JAO analyst who already looked at the weblog.
  4. Within two days of your assignment, you need to perform a first review. The goal is to identify bad data which needs flagging, and to look out for issues which will affect the data quality.
    1. Review the weblog, which is in the qa directory, following the instructions on NAWebLogReview. Note outlying antennas, which should be flagged later at the imaging stage. You can compare your findings to the notes from the JAO analyst in the README_JAO file.
      First of all, check that you are looking at the right SG tab in the OT, and whether you are dealing with one MOUS of a multi-array SG. For example, if you are dealing with a _TC component of an SG which has both _TC and _TE components, it is probably OK if the observations were performed in a configuration too compact to reach the OT-requested resolution. Otherwise, it is possible that the originally requested SB specs were further modified by P2G before observations (due to change requests, for example), and hence that the observations do not completely match the original request. There are multiple ways to check this. In the SCOPS-P2G ticket of the project, each change to the original SB is documented (look for comments containing 'Incremented version to'). If you have advanced privileges in the OT, those comments are conveniently gathered in the 'project notes': in the proposal tab, click on the proposal title in the left-side column; the project notes can be found below 'Main Project Information'. Advanced privileges may be enabled in the OT by going to File, Preferences, Advanced, and clicking on 'Enable privileged operations'. If you cannot get these privileges, you can ask your DRM to send you the project notes. Finally, the exact specifications that were used for the observations can be found in the OT, within 'SG OUS', which is the last component (below Technical Justification) in the SG tree structure. Look in the 'Instrument Setup' section for spectral setup changes.
    2. Compare the beam size in the calibrator images of the weblog to the requested MOUS beam size. If it is very different, alert the DRM. The data may fail QA2, in which case there is no need to proceed to imaging.
    3. Go to the calibrated directory. This is where the calibrated measurement set(s) will have been created by the DAs running the scriptForPI. The calibrated measurement sets should have the science spectral windows split out already ('science' spws are the spectral windows recording visibilities corresponding to the spectral setup of the SB; they contain information from the target as well as the calibrators).
    4. Do a quick sanity check of the calibrated visibilities using plotms.
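      As a rough illustration, a quick plotms inspection might look like the following CASA pseudocode sketch (not runnable outside CASA; the measurement set name and averaging values are placeholders, not from any project):

```
# Pseudocode sketch for CASA 4.x -- placeholder vis name and averaging values.
# Amplitude vs. time, colored by spw, one plot per field:
plotms(vis='uid___A002_Xxxxxxx_Xxxx.ms.split.cal', xaxis='time', yaxis='amp',
       avgchannel='3840', coloraxis='spw', iteraxis='field')
# Amplitude vs. frequency, time-averaged, to spot bad channels or spws:
plotms(vis='uid___A002_Xxxxxxx_Xxxx.ms.split.cal', xaxis='frequency', yaxis='amp',
       avgtime='1e8', avgscan=True, coloraxis='baseline')
```

      Look for discrepant antennas, spws, or time ranges; anything suspicious should go into your report to the DRM.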
    5. Report to the DRM on the SCOPS-DR ticket if you saw something worrisome in your review or have any concerns.
    6. If you think that there is something wrong or odd related to the pipeline behavior (for example, excessive flagging), check whether what you see corresponds to a known pipeline issue gathered on https://wikis.alma.cl/bin/view/Main/Cycle1Redux. Newly discovered pipeline issues can be filed under parent ticket SCOPS-1338.
    7. If you think that a portion of your data is bad, check whether it corresponds to a known issue on https://wikis.alma.cl/bin/view/Main/Cycle1Redux, or search alma.jira.cl. If your issue is a known telescope issue, you may find a suggestion there on how to handle the bad data.
    8. If you think that a fraction of the data is obviously bad, and that the calibration must be re-run with that fraction of data flagged, you can ask for a re-processing of the data. List on the DR ticket which fractions should be flagged, and why. Your dataset will then be reprocessed by the DAs using your newly determined flags. See the list of flag keywords to use in flagtemplate here. If your data is young (<3 months), JAO is interested in knowing about any bad data that you may have found (flagged antenna or spw). Check whether your issue is a known telescope issue on https://wikis.alma.cl/bin/view/Main/Cycle1Redux or by searching alma.jira.cl. Otherwise, please file a JIRA PRTSPR ticket to report the issue with Remy as a watcher, or ask the DRM to do it (see ProblemReport).
  5. Now on to imaging! Imaging should be done in the calibrated data directory, using a cvpost machine: type %ssh cvpost-master; %nodescheduler --request [number of days] [number of nodes]; %exit, then log into the cvpost node that you are assigned in the notification email. Imaging tips and general directions can be found in the Imaging group wiki and the script templates.
      • In Cycle 3 LBC, we have added antennas on baselines longer than 10 km: those antennas were put on distant pads for test purposes. The antennas are left in the array during the calibration to improve the SNR, but please either flag them before doing the imaging, or use tapering in clean, so as to reach the requested resolution (suggestion: try 2 Mlambda for Band 3, 4-5 Mlambda for Band 6).
      • Continuum imaging: the effect of bandwidth smearing can appear if the measurement set used for continuum imaging is too heavily averaged in frequency. Specifically, the individual averaged channels should not be wider than ~125 MHz in Bands 3/4/6 and ~250 MHz in Band 7. For example, for a 2 GHz TDM window, the width parameter used in split when creating a continuum measurement set should be set such that we average up to 8 channels in Bands 3/4/6 and up to 16 channels in Band 7.
      • Since LBC images often combine large and small spatial scales, using multiscale in the clean call may be helpful. See these resources for help on multiscale cleaning: https://casaguides.nrao.edu/images/0/00/Juno_Band6_Imaging.py and https://casaguides.nrao.edu/index.php?title=ALMA2014_LBC_SVDATA#Juno_-_Asteroid . The following paragraph is from the LBC SV Imaging section of the ALMA2014_LBC_SVDATA wiki (https://casaguides.nrao.edu/index.php?title=ALMA2014_LBC_SVDATA): "Extended emission at high angular resolution can best be imaged with multiscale: Cleaning diffuse emission on size scales significantly larger than the beam with the standard delta-function deconvolution method results in the 'clean instability', giving the diffuse emission a 'pointilated' or 'cotton-candy' like morphology. This can be ameliorated by using a range of scales to do the cleaning. In CASA, this technique is accessed through the clean multiscale parameter. The scales are given as multipliers of the cell (pixel) size. It is essential to always use 0 for the first scale as this corresponds to the normal clean beam; multiscale=[0,5,15] is usually a good starting point for experimentation. For more details see Cornwell 2008 (IEEE Journal of Sig. Proc., 2, 793)." See also the online CASA Cookbook and User Reference Manual (page 301 in the PDF version): http://casa.nrao.edu/Doc/Cookbook/casa_cookbook.pdf , http://casa.nrao.edu/Release4.1.0/doc/UserMan/UserMansu270.html , http://casa.nrao.edu/docs/TaskRef/clean-task.html#x13-120000.1.10
      • Several tips and issues are included here: https://wikis.alma.cl/bin/view/Main/Cycle1Redux#Cycle_3_Long_baseline_Campaign_d
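      To make the bandwidth-smearing guidance above concrete, here is a small Python helper (an illustration only, not an official tool; the function name is invented here, and the per-band limits are taken from the bullet above):

```python
# Illustrative helper: largest number of channels that split() may average
# before the averaged channel exceeds the bandwidth-smearing limit.
# Per-band limits (~125 MHz for Bands 3/4/6, ~250 MHz for Band 7) are taken
# from the guidance above.

def max_averaging_width(channel_width_mhz, band):
    limits_mhz = {3: 125.0, 4: 125.0, 6: 125.0, 7: 250.0}
    return max(1, int(limits_mhz[band] // channel_width_mhz))

# A 2 GHz TDM window has 128 channels of 15.625 MHz each:
print(max_averaging_width(15.625, 3))  # 8  (Bands 3/4/6)
print(max_averaging_width(15.625, 7))  # 16 (Band 7)
```

      The result is what you would pass as the width parameter in split when creating the continuum measurement set.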
    1. The imaging process is composed of two main steps, data preparation and imaging the emission, each of which has its own script. The scripts should have been placed for you in the directory where imaging is performed.
    2. Modify the scriptForImagingPrep.py script to prepare your data for imaging by:
      • erasing the pointing table (for mosaic projects) -- DO NOT DO THIS FOR CYCLE 3 DATA
      • performing flux equalization if you deem it necessary - see instructions here https://safe.nrao.edu/wiki/bin/view/ALMA/Cycle1and2ImagingReduction#Fluxscale
      • flagging additional antennas / spws / timeranges if needed (provided a full data re-calibration including these additional flags is not necessary)
      • splitting off your science sources,
      • creating a combined dataset, including the case of multiple ASDMs with different Doppler settings. In particular if you are dealing with a Solar System line project, look here for instructions on velocity corrections: https://staff.nrao.edu/wiki/bin/view/NAASC/NAImaging_attachephem
      • creating a backup visibility set in case clean corrupts the original file
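      The preparation steps above might translate into a CASA-style pseudocode sketch like the following (all file names and selections are placeholders; the scriptForImagingPrep.py template is the authoritative reference):

```
# Pseudocode sketch only (CASA 4.x task names) -- placeholder names throughout.
flagdata(vis='uid___A002_Xxxx.ms.split', mode='manual', antenna='DA44')  # extra flags
split(vis='uid___A002_Xxxx.ms.split', outputvis='target_eb1.ms',
      field='MyTarget', datacolumn='corrected')                          # science source only
concat(vis=['target_eb1.ms', 'target_eb2.ms'], concatvis='calibrated_final.ms')
os.system('cp -r calibrated_final.ms calibrated_final.ms.backup')        # backup before clean
```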
    3. Run scriptForImagingPrep.py using the CASA version used for the calibration (check the weblog; currently CASA 4.5.1, 4.5.2, or 4.5.3).
    4. Modify scriptForImaging.py so as to produce the desired imaging products, based on the main scientific objectives (in the OT). You do not need to image every line or every source, but you should produce images that are relevant to the scientific goals.
      This varies significantly depending on the objectives of the project, and how much time cleaning takes. Here are some guidelines for the minimum and maximum number of expected products for different types of project. In most cases, delivering at least 3-6 image cubes will be appropriate.
        • If the sensitivity is defined for the continuum (bandwidth for sensitivity >= a spw):
          • Best: continuum maps of all sources, using as many line-free channels as possible (whether they come from SPWs labeled "continuum" or "line").
            • If line spws are present in the spectral setup, line cubes of one spw on a representative set of sources. If the representative spw corresponds to a line spw, use that one; otherwise use a line mentioned in the TJ boxes. Image the area around the primary named line, or the entire spectral window for line searches, with a channel width smaller than the expected line width.
          • Minimum: continuum maps on a representative set of sources, using as many line-free channels as possible (whether they come from SPWs labeled "continuum" or "line").
            • If line spws are present in the spectral setup, a line cube on one source for one spw. If the representative spw corresponds to a line spw, use that one; otherwise use a line mentioned in the TJ boxes. Image the area around the primary named line, or the entire spectral window for line searches, with a channel width smaller than the expected line width.
        • If the sensitivity is defined for line data (bandwidth for sensitivity < a spw):
          • Best: continuum maps of all sources using as many line-free channels as possible (whether they come from SPWs labeled "continuum" or "line"). Line cubes of the representative spw of all sources. Line cubes of other spws for one source.
            • If the representative spw is a continuum spw, choose instead a nearby line spw or a line mentioned in the TJ boxes. Image the area around the primary named lines, or the entire spectral windows for line searches.
          • Minimum: continuum maps of one source using as many line-free channels as possible (whether they come from SPWs labeled "continuum" or "line"). Line cubes of the representative spw of a representative set of sources.
            • If the representative spw is a continuum spw, choose instead a nearby line spw or a line mentioned in the TJ boxes. Image the area around the primary named lines, or the entire spectral window for line searches.
        • Here are some examples
          • Sensitivity defined for continuum, 3 continuum spw (one of which contains a line), 1 line spw, 4 targets: continuum image x 4 targets (using line-free channels from all windows, including the line window if necessary to reach the requested bandwidth), line image for the line spw x 1 target
          • Sensitivity defined for line data, 4 line windows (two of which appear empty), 0 cont windows, 1 target : line images x 2 spws (one for each line spw with line emission) x 1 target, image continuum x 1 target (using line free channels from all four windows)
          • Sensitivity defined for line data: 12 line windows (each containing 4-8 lines), 0 cont windows, 10 targets: line image of six spws x 2 of the targets, continuum image x 1-2 targets (using line-free channels from all windows)
      Which frequency should be used in the clean call? It depends on the velocity of the source, defined by the z parameter (found in the Field Setup tab of the OT). For low-z projects (z ≲ 0.2), the sky frequency of the cleaned spw (or the rest frequency of the targeted line if it is not placed at the center of the spw) should be used. Note that for high-z projects containing several sources with different z values, each source yields a different sky frequency, which should be used in the clean call. The sky frequencies for each source (calculated for the representative spw only) are listed at the bottom of the Spectral Setup tab.
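      The sky-frequency bookkeeping above amounts to nu_sky = nu_rest / (1 + z); a minimal Python illustration (the line rest frequency and redshifts are example values, not from any project):

```python
# Example values only: CO(1-0) rest frequency, two illustrative redshifts.

def sky_frequency_ghz(rest_freq_ghz, z):
    """Observed (sky) frequency of a line emitted at rest_freq_ghz
    by a source at redshift z."""
    return rest_freq_ghz / (1.0 + z)

print(round(sky_frequency_ghz(115.2712018, 0.0), 4))  # 115.2712 (local source)
print(round(sky_frequency_ghz(115.2712018, 2.0), 4))  # 38.4237 (same line at z = 2)
```

      For multi-source high-z projects, repeat this per source and use each source's sky frequency in its clean call.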
      What is the representative window? It is the spectral window used to define the sensitivity requirement of the SG. To find out which window it is, look at the Spectral Setup tab in the OT: the spectral window for which the 'Representative window' button is checked (at the right end of the table) is the representative window.
    5. Run the scriptForImaging.py script in the CASA version used for calibration, and iteratively modify it so as to reach the SG requested quality, within the bounds defined in this knowledge base article. Note that those requirements do not apply to 7m-only data, or to data from a compact-configuration dataset of a multi-array SG. Many imaging tips can be found in the Imaging team wiki.
      • The SG requested quality corresponds to the sensitivity and resolution listed on the Control and Performance tab. I.e., if the sensitivity is defined for line data, the continuum rms is not the criterion (one may still want to improve the continuum image, but it is not strictly necessary).
        You can measure the rms on the non-primary beam corrected image, away from the region of emission, but also close to the phase center. Draw a shape with the tool of your choice in the viewer, click inside and read the rms in the statistics tab of the 'region' box. You should use an image with a channel width matching the bandwidth used for sensitivity in the OT (or scale the noise to the requested bandwidth). For line projects, measure the noise in line-free channels.
        The spatial resolution corresponds to the synthesized beam size. You can find it in the CASA viewer: it is listed in the right panel of the directory browsing window which appears when clicking on 'open'. One can also use the imhead CASA command to get the beam size.
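        When the image channel width does not match the OT's bandwidth for sensitivity, the measured rms can be rescaled assuming purely thermal noise (rms proportional to 1/sqrt(bandwidth)); a hedged Python sketch with example numbers:

```python
import math

# Sketch: scale an rms measured over one channel width to the OT's
# bandwidth-for-sensitivity, assuming purely thermal noise.

def scale_rms(rms_measured, bw_measured, bw_requested):
    return rms_measured * math.sqrt(bw_measured / bw_requested)

# 5 mJy/beam measured in a 0.5 km/s channel, request defined over 2 km/s:
print(scale_rms(5.0, 0.5, 2.0))  # 2.5 mJy/beam
```

        Any consistent bandwidth units (MHz, km/s, ...) work, since only the ratio enters.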
      • If the achieved synthesized beam is not consistent with the resolution request (based on the QA2 pass criteria for Cycles 1 and 2; Cycle 3 criteria are 10% on sensitivity for Bands 3, 4, 6, 15% for Bands 7 and 8, 20% for Band 9, and 20% on beam area), first check whether the science goals are actually met with the achieved resolution. If the data would fail based on resolution, one can play with the weighting in the clean calls: 'natural' for a larger beam (the rms should decrease), or a briggs robust parameter varying between -2 and 0.5 to obtain a smaller beam (at the cost of an increased rms).
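      A quick way to apply the Cycle 3 beam-area tolerance quoted above (the 20% figure comes from this document; the Gaussian beam-area formula is standard, and the function name and example beams are illustrative):

```python
import math

# Sketch of the Cycle 3 beam-area check (20% tolerance on beam area);
# beam axes in arcsec, requested beam assumed circular.

def beam_area_ok(achieved_bmaj, achieved_bmin, requested_beam, tol=0.20):
    gauss = math.pi / (4.0 * math.log(2))       # Gaussian beam-area factor
    achieved = gauss * achieved_bmaj * achieved_bmin
    requested = gauss * requested_beam ** 2
    return abs(achieved - requested) / requested <= tol

print(beam_area_ok(0.55, 0.45, 0.50))  # True  (area within 20% of the request)
print(beam_area_ok(0.80, 0.70, 0.50))  # False (beam far too large)
```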
      • If the achieved rms is much higher than expected, that does not necessarily mean that the data is bad. Here are a few other possible reasons, and possible solutions.
        • The first thing to check is whether your MOUS is actually only the compact component of a multi-array SG (usually an MOUS name ending in _TC). In a multi-array science goal, the bulk of the sensitivity is provided by the extended dataset, so the sensitivity request of the OT is not relevant for compact components.
        • The flux calibration may be off. Compare the flux of the phase / bandpass calibrators to the calibrator catalogue, and the solar system flux calibrators to the expected value using https://safe.nrao.edu/wiki/bin/view/ALMA/PlanetFlux. For grid source fluxes, the calibrator database can be queried using au.getALMAFluxForMS or au.getALMAFlux.
        • The imaging may be dynamic-range limited (for bright sources). This sets in at around a signal-to-noise ratio of 100 or more, so it typically affects continuum images. Remember that if the sensitivity is defined for line data, the continuum rms is not the criterion. Attempting self-calibration may be a good idea for strong continuum sources which do not meet the criteria defined in this knowledgebase article; see the self-calibration notes under the next step.
        • It may be difficult to find an emission-free region, or some spatial scales may be filtered out by the interferometer, making the image messy.
        • Check that you have included all the data: if there are multiple executions, concat will place each spw of each execution into its own spw in the concatenated dataset, so you need to be sure to list all the relevant spws in the clean command.
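      Assuming concat simply appends each execution's spws in order (an assumption about the ordering; verify against listobs for your dataset), the full spw selection string for clean can be generated like this:

```python
# Assumption: concat appends each execution's spws in order, so execution k's
# spw j becomes spw k * n_spw + j in the concatenated MS (check with listobs).

def concat_spw_selection(n_executions, spws_per_execution):
    return ','.join(str(i) for i in range(n_executions * spws_per_execution))

# Three executions with four science spws each:
print(concat_spw_selection(3, 4))  # '0,1,2,3,4,5,6,7,8,9,10,11'
```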
    6. If the achieved rms is much higher than expected, and it is clear that the data quality is the issue, there are still some solutions:
      • You may try imaging with 'natural' weighting in clean: this downweights the longest baselines, and typically decreases the rms. Note that using natural weighting also increases the size of the synthesized beam.
      • Self-calibration. Instructions for self-calibration are included in the imaging template. Self-calibration solutions can be determined when the source exhibits strong continuum emission, preferably near the phase center. Self-calibration can be attempted when the expected rms is not reached on the continuum (for projects where the sensitivity is defined for the continuum, i.e., bandwidth for sensitivity >= a spw) or on the line data (for projects where the sensitivity is defined for line data, i.e., bandwidth for sensitivity < a spw). In any case, when attempting self-cal, check carefully whether solutions are found for most antennas and scans; otherwise applying the solutions will result in too much data being flagged.
      • If you see striping in some channels, this may be indicative of some incorrect data (to be flagged), bad uv-sampling (which should be evident from an odd-looking PSF; look for outlier antennas or data with incorrect weights to flag), or unresolved large-scale emission. You can try to increase the briggs robust parameter (to go towards natural weighting) to better sample large scales.
    7. If the rms is not met, or if the imaging quality is not sufficient, you may find that you want to flag additional antennas or spws before imaging. Write those flags in scriptForImagingPrep.py, which will need to be run before re-running all the steps of scriptForImaging.py.
    8. In the case of mosaics, in addition to rms and resolution, plot the .flux image (primary beam maps) to verify whether all fields were sufficiently observed (there should be no gaps).
  6. Once you are happy with your images, run the last section of the scripts, which generates the FITS images that will be delivered to the PI. These are the primary-beam-corrected images (.pbcor), the primary beams (.flux), and the clean masks (.mask).
  7. The final scripts will be used by the PI as a series of commands to copy and paste into CASA. Your scripts should be sufficiently annotated that they can be easily understood by the PI, and should include any plotting commands you used to decide what to image, but you should remove the instructions for the imager ('NAASC-only instructions') from the scripts. These instructions are designated by lines that start with "#>>>". Any lines that begin with only "#" will be passed on to the PI. You can manually remove those lines, or do the following:
    • Download the simple python script to remove these lines from here
    • In CASA,
      • execfile('strip_instructions.py')
      • strip_instructions('scriptForImaging.py')
      • strip_instructions('scriptForImagingPrep.py')
      • Remove 'strip_instructions.py' from your directory
    • These commands will create a backup of your scripts and then create new script files that do NOT include any line that starts with "#>>>".
    • Avoid using "triple quotes" to comment out long sections of the script.
      For gedit: Highlight a block of code and press ctrl+m to comment and shift+ctrl+m to uncomment
    • Delete section(s) not relevant to the imaging project (e.g. concatenation of multiple MS files, self-cal, etc.). Note: if self-cal is appropriate but not used, comment out the section and add a note to the README file clearly stating that self-cal was not applied but was left in, in case the PI would like to try it.
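    For reference, the "#>>>" stripping convention can be implemented in a few lines of plain Python. This is a sketch of what the downloadable strip_instructions.py does, not the script itself (the real script may differ in detail, e.g. in how the backup is named):

```python
import shutil

# Sketch (not the official strip_instructions.py): back up the script,
# then rewrite it without the '#>>>' NAASC-only lines.

def strip_instructions(scriptname):
    backup = scriptname + '.backup'
    shutil.copy(scriptname, backup)            # keep a backup copy
    with open(backup) as f:
        lines = f.readlines()
    with open(scriptname, 'w') as f:
        for line in lines:
            if not line.startswith('#>>>'):    # drop NAASC-only instructions
                f.write(line)                  # keep everything meant for the PI
```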
  8. Remove all .log files but the two corresponding to the last execution of the two scripts, and only leave the masks which you used.
  9. Compare the final achieved rms noise and resolution to the SG requested rms noise and resolution. The guidelines for the QA2_PASS/QA2_FAIL decision are gathered here: https://help.almascience.org/index.php?/na/Knowledgebase/Article/View/285 for Cycles 1 and 2. For Cycle 3 the thresholds are 10% on sensitivity and 20% on beam area. Note that in the case of 7m data, you cannot directly compare the achieved rms to the SG request; in this case, you should only verify whether the expected on-source integration time was obtained and whether high flagging rates were applied, and the DRM will carry out the QA2 assessment. The same is true for multi-array datasets.
  10. If you are imaging a manually-calibrated dataset, go back to the manual calibration wiki here: https://safe.nrao.edu/wiki/bin/view/ALMA/Cycle2DataReduction#checklist
  11. Move back up to the top-level directory (e.g. cd /lustre/naasc/sciops/qa2/uname/XXXX-analysis/sg_ouss_id/group_ouss_id/member_ouss_id/). You should find there a README file template which needs to be filled in. (If the README file is not there: cp /users/thunter/AIV/science/qa2/README.header.cycle2.txt ./README.header.txt for both Cycle 1 and Cycle 2 datasets, or cp /users/thunter/AIV/science/qa2/README.header.cycle3.txt ./README.header.txt for Cycle 3.)
    • Put a summary of the requested rms (with bandwidth used for sensitivity) from the OT.
    • For the 'configuration' entry, put the longest baseline.
    • Specify that the data has been pipeline calibrated
    • Describe any significant issues with data (antennas flagged, large portions of data flagged, spws flagged)
    • If continuum-only data, describe the quality of the continuum images ( representative beam + rms, whether self-cal was applied or not). Specify the bandwidth used for sensitivity.
    • If line data, describe the quality of the line images at the representative frequency (representative beam + rms, whether self-cal was applied or not, continuum subtracted or not). Specify the bandwidth used for sensitivity.
    • Compare the achieved beam and rms to the requested beam and rms.
    • If this is an SB from a multi-array dataset (12m + 7m, several 12m arrays, ...), mention that the Science Goal is complete only when combined with the other SBs, and, if you are dealing with the most compact component, that the sensitivity and resolution cannot be assessed with this dataset alone.
    • Add this sentence to remind PIs to verify that the continuum subtraction and selection of line-free channels are accurate: "The PI may wish to modify the channels which were identified as "line-free" based on a dirty image or inspection of the calibrated visibilities."
    • Put any suggested improvements to the imaging script here. These include:
      • The use of self-calibration, if appropriate
      • The use of uvcontsub, if appropriate
      • Any additional imaging not needed for QA2 but useful for science, such as imaging additional sources, spws, spectral lines, etc.
    • For example (from a real case): "This data set was calibrated using the pipeline. The pipeline calibration appears to be reasonable, although a large amount of data (50%) has been online flagged, as is typical for 7M data sets. I imaged the continuum and the HNC and HC3N lines: all were detected. The beam size is ~6.4 by 3.8 arcsec and the native resolution was 1.13 km/s. The continuum RMS is 3.4 mJy/beam over ~8 GHz BW, while the line RMS is 56 mJy/beam in a 1.13 km/s channel. I have not attempted to clean deeply since this data will be combined with 12M data and the improved uv-sampling will greatly improve the recovery of the emission. The final sensitivity of the combined 7M+12M data cannot be determined from this data set alone. However, the 7M data has the appropriate number of executions, has been successfully calibrated, and does not appear to be flagged more than usual. The central continuum source is strong enough to self-calibrate, but I have not attempted this because of the poor uv-sampling. However, I've included the necessary commands to self-calibrate the data in case the PI would like to try."
  12. Note the results and the path to your imaging assignment on the data reduction SCOPS ticket, and copy the README text there.
  13. If the DRM approves the MOUS as QA2_Pass or QA2_Semipass, the data will be assigned to DAs for packaging and delivery.
  14. Final cleanup
    • Please attach the scriptForImaging_SBNAME.py and scriptForImagingPrep_SBNAME.py to the SCOPS data reduction ticket
    • Keep the Imaging and ImagingPrep scripts in a safe place - they may be still useful years later
    • Rename the scripts to scriptForImaging.py and scriptForImagingPrep.py - this is necessary for Archive ingestion, standard across all ARCs
    • Information on the data delivery date can be found in the data reduction spreadsheets: Cycle 1, Cycle 2, Cycle3
    • After the data are delivered, please move your entire data reduction package (including the README) to the /lustre/naasc/sciops/deliveries directory. For example, mv 2013.1.00XXX._SBNAME-analysis /lustre/naasc/sciops/deliveries
  15. For data that is not delivered (QA2_FAIL), please attach the imaging script and imaging prep script to the SCOPS ticket
    • move your entire data reduction package (including the README) to the /lustre/naasc/sciops/qa2_fails directory. For example, mv 2013.1.00XXX._SBNAME-analysis /lustre/naasc/sciops/qa2_fails

Help

Scientific staff will have the data placed into their lustre area (/lustre/naasc/sciops/qa2). You will be notified by a comment on the project's SCOPS data reduction ticket. Since the data must first arrive from JAO, it may take some time to be staged to your lustre area.

Data analysts will stage their own data, following the instructions on their wiki page.

  • The following steps should be performed in the Imaging directory (for manual calibration) or the calibrated directory (for pipeline calibration). All uid*.ms.split.cal directories and all uid*.ms.split.fluxscale files need to be in that directory.
    • Type es.generateReducScript(['uid_FIRST-EB.ms.split.cal','uid_SECOND-EB.ms.split.cal',(etc)], step='fluxcal')
    • This produces two files: allFluxes.txt, scriptForFluxCalibration.py
      • allFluxes.txt lists the measured and weighted mean fluxes in each spw for each phase calibrator in each dataset. The first flux value is the value calculated from the individual ms; the second flux value is the weighted mean flux from all ms's.
      • The python script, by default, uses the flux values from allFluxes.txt to scale the individual datasets based on the weighted means, and concatenates them into a single MS.
    • Check that the values in scriptForFluxCalibration.py are correct with respect to the .fluxscale directories and that they are sensible (i.e., windows at nearly the same observing frequency should have similar values, or they should follow the phase calibrator's expected spectrum as derived from the fluxscale task).
      • If they are not correct or if you think you know the flux better (based on the results for an ASDM which you trust better for example), you should edit the values in scriptForFluxCalibration.py to put in the correct expected flux for each SPW for each ASDM
    • When you have the correct scriptForFluxCalibration.py, copy the setjy and applycal commands in scriptForFluxCalibration.py to scriptForImagingPrep.py, then remove scriptForFluxCalibration.py
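    The weighted-mean flux reported in allFluxes.txt can be illustrated with a small Python sketch (inverse-variance weighting is assumed here; the actual analysisUtils weighting may differ in detail, and the numbers are invented examples):

```python
# Sketch: inverse-variance weighted mean of per-execution calibrator fluxes
# (assumed weighting scheme; the real allFluxes.txt computation may differ).

def weighted_mean_flux(fluxes, errors):
    weights = [1.0 / e ** 2 for e in errors]
    return sum(f * w for f, w in zip(fluxes, weights)) / sum(weights)

# Two executions measuring the phase calibrator at 1.20 and 1.00 Jy:
print(round(weighted_mean_flux([1.20, 1.00], [0.05, 0.10]), 3))  # 1.16
```

    The better-constrained measurement dominates the mean, which is why a single discrepant execution should be investigated rather than blindly averaged in.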

Go to the ALMA calibrator catalog and search for your source. For grid source fluxes, the calibrator database can be queried using au.getALMAFluxForMS or au.getALMAFlux. For solar system flux calibrators, the expected value can be found using https://safe.nrao.edu/wiki/bin/view/ALMA/PlanetFlux .


Criteria for passing QA2 are given here for Cycles 1 and 2. For Cycle 3, the thresholds are 10% on sensitivity and 20% on beam area. If the project is mainly continuum, the continuum sensitivity is the quantity taken into consideration; if it is mainly a line project, the line rms is what matters. The DRMs have the final say on whether your data will be QA2_Fail, QA2_Pass, or QA2_Semipass.

The task au.gaincalSNR can be used to check if your ms has the required signal-to-noise ratio to meet QA2 or if you might need to combine spws to reach the requested goal.


Use the data reducers email list and send an email to: science_datared @ alma.cl . Subscribe by going to: https://lists.alma.cl/mailman/listinfo/science_datared


If necessary, inform the contact scientist of additional information to be communicated to the PI or the P2G group. The contact scientist is listed on the P2G Project Preparation ticket linked at the top of the SCOPS ticket.

-- MarkLacy - 2014-09-25

This topic: ALMA > Cycle1and2ImagingReduction
Topic revision: 2017-09-14, CatarinaUbach