
The information below is in approximate time order of doing things. Even so, it is strongly recommended you read the whole thing before you begin.

Getting Started - Communication

  • Unless told otherwise by your supervisor, your data reduction assignment is your primary functional responsibility and should be completed as quickly as possible.
  • Your assignment will be given to you via an ALMA Helpdesk ticket.
  • Each project has a "Data Taking" SCOPS JIRA (P2G) ticket and a "Data Reduction" SCOPS JIRA (DR) ticket.
    • The P2G and DR tickets can be found on this Google document
    • The DR ticket is where you will communicate with Scott about progress/problems associated with the data reduction
    • When you first begin work on the reduction, post a note to the reduction ticket saying that you have begun.
    • Each week, please post a brief (sentence or two) update on your progress to the JIRA data reduction ticket
  • You can find, and you should update, important information about your reductions on this Google doc
  • Use the Project Tracker to download a PDF summary of your project.
    • Use the "project" search tool to find your data set, then...
    • Click on the project code to select it, then...
    • Click on the "Project Report" button to generate the PDF
  • Report problems with data or script generator:
    • CSV-2809 keeps track of changes to the data reduction script
    • CSV-3013 report bugs in the Cycle 1 script generator (replacing CSV-2902)
    • CSV-3014 request improvements to the Cycle 1 script generator (replacing CSV-2902)
    • CSV-2903 bugs in the Cycle 1 QA2 report generator
    • Make yourself a watcher on these tickets
  • JAO requests the following procedure for issues:
    • First check this wiki (see FAQ at the end of the manual reduction section and tips at the bottom of this page) and the tickets listed above to see if your question is already answered.
    • Check the JAO's FAQ
    • For questions that will be of general interest to data reducers, use the data reducers email list
    • For questions of interest to one project in particular, ask Scott Schnee via email, phone, in-person, or on the data reduction SCOPS ticket.
  • Scott is the data reduction coordinator, so he should be your primary source of advice if questions come up.
  • Mark Lacy, Crystal Brogan, Todd Hunter, and Brenda Matthews are our data reduction specialists in Charlottesville and Victoria, and have also agreed to be available for questions.

Getting Started - Computer related information

  • For those reducing their data in Charlottesville: you will reduce your data on the cluster using the Lustre file system
  • For those reducing their data in Socorro:
    • Make note of the host you are assigned in the nodescheduler email (e.g. multivac21)
    • ssh multivac21
    • cd /lustre/naasc/your_username
    • The data you have been assigned to reduce should be there
  • The data packer expects certain file names so do not edit file names
  • The data packaging script expects a certain file structure.
    • Do all of the calibration script steps for each dataset in a separate directory, called something like Calibration_X### (where X### is the final segment of the uid, e.g. X2cd)
    • For multiple executions for a given SB
      • Run the combination script in a directory called Combination, at the same level as the individual Calibration_X### folders
      • Do the combined imaging in a directory called Imaging, at the same level as the individual Calibration_X### folders
      • Here is an example directory structure
        • ./Reduce_00031
          • ./Reduce_00031/Calibration_X001 --> This has one ASDM and its associated scriptForCalibration.py
          • ./Reduce_00031/Calibration_X002 --> This has the other ASDM and its associated scriptForCalibration.py
          • ./Reduce_00031/Combination --> This is where you put the scriptForFluxCalibration.py
          • ./Reduce_00031/Combination/calibrated --> This is where you put the .split.cal measurement sets (note lower case)
          • ./Reduce_00031/Imaging --> This is where you put the scriptForImaging.py
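A minimal sketch of setting up this layout in Python; the helper name is hypothetical and the root/uid values are just the example values from this page:

```python
import os

def make_reduction_tree(root, uid_extensions):
    """Create the directory layout the packaging script expects.

    root           -- e.g. 'Reduce_00031'
    uid_extensions -- final uid segments, e.g. ['X001', 'X002']
    """
    for ext in uid_extensions:
        # one calibration directory per ASDM
        os.makedirs(os.path.join(root, 'Calibration_' + ext), exist_ok=True)
    # Combination (with its calibrated/ subdirectory) and Imaging sit
    # alongside the individual calibration directories
    os.makedirs(os.path.join(root, 'Combination', 'calibrated'), exist_ok=True)
    os.makedirs(os.path.join(root, 'Imaging'), exist_ok=True)

make_reduction_tree('Reduce_00031', ['X001', 'X002'])
```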
  • Set up your CASA variables if you haven't before: Setting up your CASA environment
  • Use CASA 4.2.1 ('casapy -r 4.2.1' at the Linux command line). If the script fails to generate, try CASA 4.1 ('casapy -r 4.1.0' at the Linux command line).
  • You'll also need analysisUtils v1.408 or later (type 'au.version()' in CASA to check)

Data Reduction - Manual Calibration

  • Move the ASDM(s) into the appropriate ./Calibration_XXXX directories
    • See the Getting Started - Computer related information section above
  • Generate the data reduction script at the CASA prompt: type es.generateReducScript('asdmname')
    • This will generate a python file '<msname>.scriptForCalibration.py' in your current directory
    • This will also make a CASA measurement set
  • Suggested modifications to the data reduction script:
    • In steps 0, 8, and 18 a measurement set is created - add an os.system('rm -rf [ms-name].flagversions') call to delete the .flagversions file just before the call to importasdm or split
      • In step 0, ms name will be [ASDM uid].ms. In step 8, it will be [ASDM uid].ms.split, and in step 18 it will be [ASDM uid].ms.split.cal.
      • The idea here is that a new measurement set should get a new flagversions table
    • In step 15, by default only the phase calibrator has its flux scale corrected - it would be better to also correct the flux of the bandpass calibrator
      • in the for loop change
        for phaseCalName in ['phasecal']:
        to
         for phaseCalName in ['phasecal','bandpasscal']: 
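The two suggested edits might look like the following sketch; the MS name and calibrator names are placeholders, and the surrounding generated-script content is elided:

```python
import os

# Steps 0/8/18: before the importasdm/split call that creates a new MS,
# remove any stale .flagversions table so the new MS starts fresh.
# The MS name below is a placeholder.
msname = 'uid___A002_Xxxxx_Xxxx.ms'

os.makedirs(msname + '.flagversions', exist_ok=True)  # stand-in stale table
os.system('rm -rf ' + msname + '.flagversions')       # the suggested line
# ... the importasdm (step 0) or split (steps 8/18) call follows here ...

# Step 15: also correct the flux of the bandpass calibrator; 'phasecal'
# and 'bandpasscal' are placeholders for your actual source names.
for phaseCalName in ['phasecal', 'bandpasscal']:
    pass  # the fluxscale bookkeeping from the generated script goes here
```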
  • Run the data reduction script in CASA: execfile('<msname>.scriptForCalibration.py')
    • The script is separated into several steps. The steps can be run all at once or piece by piece by setting the global variable "mysteps" at the CASA command line. So to run only steps 0, 1, and 2, type mysteps = [0,1,2].
    • If you run the steps piece by piece, then once you are satisfied, run the whole script in one go to generate a clean log file; to do so, use mysteps = [].
    • Annotate your scripts with information that may be helpful (calibrator fluxes, reasons for flagging baselines/antennas), but do not provide personal commentary of any kind. Be sure to note any deviations from the original script
    • The pipeline will grep your script for relevant parameters, such as your reference antenna. Therefore, if you want to assign a parameter from a named variable, give the variable the same name as the parameter
      • i.e. refant = 'DV04' followed by refant = refant later is OK, but ref_ant = 'DV04' followed by refant = ref_ant is NOT OK
    • If the observation switched between a broad bandwidth for the calibrators and very narrow bandwidth for the science targets, the auto-generated script will require some significant modifications which are described at NarrowBandWidthSwitchingReduction
    • If the SNR on the phase (or other) calibrator for one or more narrow band SPWs in your dataset is too low, use the approach described at LowSnrNarrowBandReduction
    • You must follow as closely as possible the script produced by Eric's script generator
  • Move the .split.cal measurement sets into the Combination/calibrated directory
    • See the Getting Started - Computer related information section above
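As a hypothetical illustration of the variable-naming rule above (not the actual pipeline code), a grep-style pattern can only recover the reference antenna when the literal value is assigned directly to a variable named refant:

```python
import re

# A pipeline-style grep can only find the reference antenna if the
# literal antenna name is assigned to a variable called 'refant'.
# This regex is an illustration, not the real pipeline implementation.
pattern = re.compile(r"^\s*refant\s*=\s*'([A-Z]{2}\d{2})'", re.M)

good_script = "refant = 'DV04'\n# later...\nrefant = refant\n"
bad_script = "ref_ant = 'DV04'\n# later...\nrefant = ref_ant\n"

print(pattern.findall(good_script))  # ['DV04']
print(pattern.findall(bad_script))   # []
```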

Data Reduction - Troubleshooting

  • Here are some suggestions for making sure that calibration is proceeding well
    • Check for error/warning messages sent to the CASA window or logger
    • Check the png files automatically generated by calibration script
      • These are usually calibration tables, so check that these look continuous and have low scatter
      • The *.ms.tsys.plots.overlayTime plots are generally useful to look at (the *.ms.tsys.plots, less so)
      • the *.ms.split.phase_int.plots and *.ms.split.bandpass.plots are useful to look at. If the bandpass corrections have large amplitude corrections (greater than 5% channel-to-channel or so), then you may want to try to re-do the bandpass solution with a larger solint (for frequency, not time) or smooth the bandpass solution.
    • At the end of calibration, use plotms to check that plots of the corrected amplitude and phase vs frequency and time are flat (on the phase and bandpass calibrators). The MS most useful to look at is *.ms.split.cal.
    • Use the QA2 plots (see below) as a further check.
    • Every task -- including plotcal, which is run inside the various checkCalTable scripts -- can be run directly from the CASA command line, which can be helpful when troubleshooting to better understand and check the effects of changing individual parameters. In the CASA window, type (for instance) "help plotcal" to read about the task and its inputs, "inp plotcal" to see the task's current settings, and "go" once the inputs are correct.
  • If you find some problem with the array performance or antennas, please open up a "Problem Report" in the JAO JIRA system. See ProblemReport
  • Here are some frequently found problems and solutions
    • When you create an FDM Tsys table, many hundreds of error messages are generated that dump to your terminal.
      • These can be ignored.
    • For datasets with a Solar System absolute flux calibrator, it is common to use a nearby quasar for the pointing. When the script flags the POINTING intents data, this quasar is completely flagged (as it should be). However the final applycal in the script typically includes this calibrator. Since it has no data, an error message of the form "SEVERE applycal::Calibrater::selectvis (file /var/rpmbuild/BUILD/casapy/casapy-33.0.16856/code/synthesis/implement/MeasurementComponents/Calibrater.cc, line 396) Caught exception: Specified selection selects zero rows!" will appear.
      • Remove the field id of the pointing calibrator from the applycal.
    • For datasets without a Solar System object as an absolute flux calibrator, you can use au.getALMAFluxForMS or au.getALMAFlux to determine reasonable flux values for one or more quasars in the measurement set.
      • Please use the field number (not name), when setting the flux of the flux calibrator in this case (for easier comparison with pipeline reductions)
    • Tsys problems
      • There is a new parent ticket to report crazy Tsys values (please use it): CSV-2814
        • Some of these Tsys problems can be fixed by you. Follow the instructions here to correct the Tsys table.
      • If your dataset is missing a Tsys measurement that ought to be there (according to listobs), this can often be fixed offline.
        • Request that the JAO regenerate the Tsys on this Google document
        • If you don't want to wait, you can try to regenerate Tsys yourself by following the instructions here
    • Mosaic problems
      • If a dataset appears to be missing mosaic pointings (compare listobs output with field setup in the .aot file), see CSV-2999 and this wiki for instructions
    • Very narrow bandwidth science observations are likely to require modifications discussed at NarrowBandWidthSwitchingReduction
    • If es.generateReducScript did not successfully create a measurement set from the ASDM, then here are the instructions for importing data.
      • At the CASA prompt: importasdm(asdm='asdm_name', asis='Antenna Station Receiver Source CalWVR CalAtmosphere ')
      • run this command in the directory that contains the ASDM. In the context of the example above, that will be ./Reduce_00031/Calibration_X001
      • this will generate a measurement set named, in the example, uid___A...X001.ms
    • If python bombed at some point, or if your script dies during various plotting commands (complaining about latex or variables or some such), then something is wrong with the files in your ~/.casa directory. Quit CASA and do the following: cp ~/.casa/init.py ~/.; rm -r ~/.casa. Then get back into CASA, exit again, and type cp ~/init.py ~/.casa/. Get back into CASA and proceed as you were.
    • Some recent (summer 2014) projects have an issue with narrow spectral windows near band edges being wrong in frequency. If you see this, please report it to your DRM; these executions will likely be marked QA0 fail.

Quality Assurance 2 (QA2)

  • For each ASDM assigned to you:
    • cd to the directory where calibration is
    • In CASA: es.generateQA2Report('uid___whatever.ms','uid___whatever.ms.split',refAnt='antenna_name') (note the capital "A" in refAnt)
      • Choose the same reference antenna as you actually used for your calibration
      • This will run QA2 commands and place the results in a subdirectory called "qa2"
      • If it crashes in target_spectrum() due to no data found, then it is probably due to the default uvrange limit ('0~30m') being too small. This can be overridden with:
        • es.generateQA2Report('uid___whatever.ms',uvrange='0~300m')
    • Review the png and txt files produced by the script - these will be sent to the PI

Data Reduction - Combination (if more than one execution per SB)

  • NOTE: If you do not have more than one execution per SB, then simply go into the imaging directory and have the script point to the uid_*.ms.split.cal file in the Calib_X* directory
  • Copy the .fluxscale file from each Calib_X* directory into the Combination/calibrated directory with a suffix that matches the uid_*.ms.split.cal file format
    • e.g. >cp uid___A002_X7fffd6_X11d7.ms.split.fluxscale ../Combination/calibrated/uid___A002_X7fffd6_X11d7.ms.split.cal.fluxscale
  • Copy the uid*ms.split.cal files from each Calib_X* directory into the Combination/calibrated directory
  • Move into the Combination directory and type es.generateReducScript(['uid_FIRST-EB.ms.split.cal','uid_SECOND-EB.ms.split.cal',(etc)], step='fluxcal')
    • You can combine any number of measurement sets this way, but there's no need for this step if your SB was only executed once.
  • This produces two files: allFluxes.txt, scriptForFluxCalibration.py
    • allFluxes.txt lists the measured and weighted mean fluxes in each spw for each phase calibrator in each dataset.
    • The python script, by default, uses these fluxes to scale the individual datasets based on the weighted means and concatenates them into a single MS.
  • If in doubt, don't scale the fluxes in your individual datasets, just delete the setjy, gaincal, and applycal commands from the script, leaving just the concat command at the end
    • One good reason for doing this: the time between each EB is long enough (more than a few days) that the phasecal flux may have varied significantly
  • If you do want to scale the fluxes, check that the values in allFluxes.txt are correct.
    • If they are not correct, you should edit the file to put in the correct average flux for each SPW.
      • You can find the flux for each individual EB in the .ms.split.fluxscale file
      • Average these together to get the mean flux for each SPW
      • Enter these values into the allFluxes.txt file. It should look like this:
        "J1626-2951" 0 230.56 1.02 1.02 2014-03-23T06:42:25 root_path/Combination/calibrated/uid___A002_X7d44e7_X13d1.ms.split.cal "J1626-2951" 0 230.56 1.02 1.02 2014-03-24T06:20:02 root_path/Combination/calibrated/uid___A002_X7d6d46_X38a.ms.split.cal "J1626-2951" 1 232.63 1.02 1.02 2014-03-23T06:42:25 root_path/Combination/calibrated/uid___A002_X7d44e7_X13d1.ms.split.cal "J1626-2951" 1 232.63 1.02 1.02 2014-03-24T06:20:02 root_path/Combination/calibrated/uid___A002_X7d6d46_X38a.ms.split.cal "J1626-2951" 2 245.43 1.01 1.01 2014-03-23T06:42:25 root_path/Combination/calibrated/uid___A002_X7d44e7_X13d1.ms.split.cal "J1626-2951" 2 245.43 1.00 1.00 2014-03-24T06:20:02 root_path/Combination/calibrated/uid___A002_X7d6d46_X38a.ms.split.cal "J1626-2951" 3 247.43 1.00 1.00 2014-03-23T06:42:25 root_path/Combination/calibrated/uid___A002_X7d44e7_X13d1.ms.split.cal "J1626-2951" 3 247.43 1.00 1.00 2014-03-24T06:20:02 root_path/Combination/calibrated/uid___A002_X7d6d46_X38a.ms.split.cal 
      • Regenerate scriptForFluxCalibration.py using the new allFluxes.txt - just type es.generateReducScript(['calibrated/MS1.split.cal','calibrated/MS2.split.cal'], step='fluxcal') again
    • When you have the correct allFluxes.txt and scriptForFluxCalibration.py, type execfile('scriptForFluxCalibration.py')
  • You now have calibrated.ms in the Combination directory.
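The per-SPW averaging described above can be sketched as follows; the average_fluxes helper and the example numbers are illustrative, not part of the script generator:

```python
from collections import defaultdict

def average_fluxes(rows):
    """Average per-EB fluxes for each (calibrator, spw) pair.

    rows -- (calibrator, spw, flux_jy) tuples, one per EB measurement,
    e.g. values read from the individual .ms.split.fluxscale files.
    """
    per_key = defaultdict(list)
    for name, spw, flux in rows:
        per_key[(name, spw)].append(flux)
    return {key: sum(v) / len(v) for key, v in per_key.items()}

# Hypothetical example: two EBs measured slightly different fluxes
# for spws 0 and 1 of the same phase calibrator.
rows = [('J1626-2951', 0, 1.04), ('J1626-2951', 0, 1.00),
        ('J1626-2951', 1, 1.02), ('J1626-2951', 1, 1.02)]
means = average_fluxes(rows)
print(round(means[('J1626-2951', 0)], 3))  # 1.02
```

The resulting per-SPW means are what you would enter by hand into allFluxes.txt before regenerating scriptForFluxCalibration.py.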

Data Reduction - IMAGING

  • Be sure to image the data in a directory structure such that the packager can find the relevant files. In the above example you would run this from the "Imaging" subdirectory.
    • See the Getting Started - Computer related information section above
  • Retrieve your project from the OT, and look at the "Control and Performance" node for the corresponding SB. Note the requested resolution, Largest Angular Scale (LAS), requested sensitivity, and bandwidth for sensitivity.
  • Generate the data reduction script
    • In theory, you can generate an imaging script automatically, but this rarely works well. If you want to try, here's the command:
      • If you had more than 1 execution per SB: es.generateReducScript('../Combination/calibrated.ms', step='imaging', chanWid=some_integer, angScale=some_number)
      • If you had 1 execution per SB: es.generateReducScript('../Calib_X*/uid_*.ms.split.cal', step='imaging', chanWid=some_integer, angScale=some_number)
  • It is easier to just make your own scriptForImaging.py with the necessary clean commands. The page https://staff.nrao.edu/wiki/bin/view/NAASC/NAImagingScripts contains example scripts and instructions on what is expected for NA reductions.
    • You only need to make images from the combined data (the qa2 process will have made test images for the individual ASDMs).
    • You do not need to image every line or every source; a representative set of maps/cubes is sufficient (up to ~4 images of different spw and/or sources, including the representative spectral window identified in the OT).
    • You need to look at the proposal (in the OT) to determine what exactly to image (line vs continuum), frequency width to bin together, etc.
  • Tips for making your own imaging script
    • The image to give to the PI should be the primary beam corrected image, but the noise is easier to measure on the data without the primary beam correction. Here's my suggestion:
      • In CLEAN, set pbcor = False (this is actually the default)
      • Measure the noise in a line-free channel, or far from the continuum in a MFS map
      • To make the primary beam corrected image: impbcor(imagename = 'science.image', pbimage = 'science.flux', outfile = 'science.pbcor') where
        • science.image is the output cleaned image from your call to clean
        • science.flux is the output primary beam map from your call to clean
        • science.pbcor is what you will create, it is the primary-beam corrected image that we will send to the PI
    • You can get a rough idea of how big to make your cell size from the outputs of the QA2 report
    • For the best signal to noise, I suggest using weighting = 'natural' in CLEAN
    • If there is strong continuum and it is a spectral line project, try to do at least a crude continuum subtraction before making the line cube(s).
  • Provide fits images (use exportfits in CASA) to the PI. These files will be automatically picked up by the packaging script.
    • Provide the primary beam corrected image as .pbcor.fits
    • Provide the .flux as .flux.fits
  • You should assess the rms noise on the combined image and compare the results to the proposal request.
    • The sensitivity listed in the "Control and Performance" tab of the OT is the reference for the sensitivity request, regardless of what the proposal text says.
    • If you have multiple executions this should be for the combined images.
    • Be sure to use the bandwidth specified in the OT.
    • Be sure to measure the noise away from emission/absorption
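Because thermal noise scales as 1/sqrt(bandwidth), an rms measured over a different bandwidth than the one specified in the OT must be rescaled before comparing with the request. A minimal sketch (the helper name and the numbers are hypothetical):

```python
import math

def scaled_rms(measured_rms_mjy, measured_bw_ghz, requested_bw_ghz):
    """Rescale a measured rms to the bandwidth specified in the OT.

    Thermal noise goes as 1/sqrt(bandwidth), so an rms measured over
    a wider bandwidth than requested must be scaled up (and vice
    versa) before comparison with the requested sensitivity.
    """
    return measured_rms_mjy * math.sqrt(measured_bw_ghz / requested_bw_ghz)

# Hypothetical numbers: 0.2 mJy/beam measured over 2 GHz, compared
# against a sensitivity request quoted over 0.5 GHz.
print(scaled_rms(0.2, 2.0, 0.5))  # 0.4
```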
  • Add a note to the SCOPS data reduction ticket telling Scott that your reduction is ready for review and telling him where to look for the data / QA2 plots / scripts
    • Once Scott has approved your reduction, the data can be packaged and delivered to the PI
  • (Tip) If dealing with a large number of SPWs due to different days (i.e. different doppler settings), you may greatly improve your imaging speed by adding some mstransform steps (with combinespws=True and setting the spw to those you want combined) before the ms concat command in the scriptForFluxCalibration.py file (see "Combination" subsection above) .

Checklist after completion of QA2

  • At the end of basic calibration you need to have for EACH execution:
    • A "clean" casa log file that contains all the steps run. These need to be in the individual EB subdirectories or they will not be picked up by the packaging script. If you have additional log files in the directory, it's best to delete them at this stage.
    • A complete script with a name like uid___A002_X3c7a84_X443.ms.scriptForCalibration.py that contains all the steps you ran and any important brief notes you think the PI would need.
      • Make sure that all steps of the script will run (e.g., thesteps=[])
    • A filled out version of the Cycle 1 calibration/QA2 checklist:
      • The latest version of the calibration/QA2 checklist can be found here: /users/thunter/AIV/science/qa2/checklists/Cycle1_QA2_Checklist.txt
        • The checklist is only required for inexperienced reducers. If you are an experienced reducer, it is optional.
      • The analysisUtils version number given in the Checklist can be found in: /users/thunter/AIV/science/analysis_scripts/README or by typing 'au.version()'
      • The PWV value in the Checklist can be found using au.getMedianPWV
      • Fill out one calibration checklist for each EB, prepended with the UID name. Put in same directory as uid*.scriptForCalibration.py.
  • And for each SB
    • A "clean" casa log file that contains the data combination and imaging steps run. These need to be in the individual subdirectories or they will not be picked up by the packaging script.
    • A filled out version of the Cycle 1 combination & imaging checklist:
      • The latest version of the combination and imaging checklist can be found here: /users/thunter/AIV/science/qa2/checklists/Cycle1_QA2_SB_Checklist.txt
        • The checklist is only required for inexperienced reducers. If you are an experienced reducer, it is optional.
      • The time on source can be calculated using au.timeOnSource
      • Fill out one combination and imaging checklist for each SB, prepended with the SB name. Put in the top-level directory.
  • Attach the filled-out checklists (1 per EB and 1 per SB) to the DR JIRA ticket with a note saying the data are ready for delivery to PI

Packaging and Posting the Data

  • The data reducers are now responsible for packaging and testing the package and scripts
  • We will deliver, in addition to the archived package, two additional measurement sets (to be made available to the PI via FTP)
    • Create a measurement set of all calibrated data. This should have the name all_calibrated.ms
    • Create a measurement set of the calibrated science target data. This should have the name science_calibrated.ms
    • In the case of a reduction of a single ASDM
      • Go to the directory with your .ms.split.cal measurement set
      • In CASA: split(vis='XXX.ms.split.cal',outputvis='all_calibrated.ms',datacolumn='corrected')
        • If you don't have a 'corrected' column (i.e., the split command fails), then split out the 'data' column instead
      • In CASA: split(vis='XXX.ms.split.cal',outputvis='science_calibrated.ms',datacolumn='corrected',intent='*OBSERVE_TARGET*')
        • If you don't have a 'corrected' column (i.e., the split command fails), then split out the 'data' column instead
    • In the case of a reduction of multiple ASDMs
      • Go to the directory with your calibrated.ms measurement set (the Combination directory)
      • In CASA: split(vis='calibrated.ms',outputvis='all_calibrated.ms',datacolumn='corrected')
        • If you don't have a 'corrected' column (i.e., the split command fails), then split out the 'data' column instead
      • In CASA: split(vis='calibrated.ms',outputvis='science_calibrated.ms',datacolumn='corrected',intent='*OBSERVE_TARGET*')
        • If you don't have a 'corrected' column (i.e., the split command fails), then split out the 'data' column instead
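The corrected-column fallback above can be written as a small wrapper. This is a sketch: the CASA split task is passed in as an argument so the logic can be shown (and tested) without a CASA installation, and it assumes a failed split surfaces as an exception; depending on the CASA version a failed task may instead return False, in which case check the return value.

```python
def split_with_fallback(split_task, vis, outputvis, **kwargs):
    """Try splitting the 'corrected' column; fall back to 'data'.

    split_task -- the CASA split task (injected here so the fallback
    logic can be demonstrated outside CASA).
    """
    try:
        return split_task(vis=vis, outputvis=outputvis,
                          datacolumn='corrected', **kwargs)
    except Exception:
        # No corrected column: split out the raw data column instead.
        return split_task(vis=vis, outputvis=outputvis,
                          datacolumn='data', **kwargs)
```

In CASA you would pass the real task, e.g. split_with_fallback(split, 'calibrated.ms', 'science_calibrated.ms', intent='*OBSERVE_TARGET*').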
  • Go to the directory with your reduction; in the case described in this wiki, this is the ./Reduce_00031/ directory
  • Copy the scriptForPI to this directory: cp /users/thunter/AIV/science/qa2/scriptForPI.py .
  • Copy the README file to this directory: cp /users/thunter/AIV/science/qa2/README.header.cycle2.txt ./README.header.txt
  • Fill out the latest template for the README file
    • Put a summary of the requested rms (and parameters assumed) from the OT and the achieved rms just after the project info in the README.
    • Put any deviations from the normal reduction path into the README
    • Put any suggested improvements to the imaging script here. These include:
      • The use of self-calibration, if appropriate
      • The use of uvcontsub, if appropriate
      • Any additional imaging not needed for QA2 but useful for science, such as imaging additional sources, spws, spectral lines, etc.
  • Launch CASA
    • from QA2_Packaging_module import *
    • QA_Packager(origpath='./',readme='./README.header.txt',packpath='./2012.1.00XXX.S',PIscript='./scriptForPI.py',append="",mode='hard',noms=True)
      • This will create a new directory, with several subdirectories, called 2012.1.00XXX.S (be sure to fill in the XXX with your project code).
      • Exit CASA
  • Test the package
    • We need to test that the scriptForPI.py and package work.
    • Copy the 2012.1.00XXX.S directory somewhere outside of your calibration path, for instance, in /lustre/naasc/user_name/Testing
      • mkdir /lustre/naasc/user_name/Testing
      • cp -r 2012.1.00XXX.S /lustre/naasc/user_name/Testing
    • create a 'raw' directory and copy the ASDMs there
      • cd /lustre/naasc/user_name/Testing/2012.1.00XXX.S/sg_ouss_id/group_ouss_id/member_ouss_id/
      • mkdir raw
      • cp -rf path_to_each_ASDM/ASDM_NAME ./raw
    • Change the names of the ASDMs to the names expected by the scriptForPI
      • cd raw
      • mv ASDM_NAME ASDM_NAME.asdm.sdm
    • Execute the scriptForPI
      • cd ../scripts
      • casapy -r 4.2.1 (or whatever version you used for your reduction)
      • execfile('scriptForPI.py')
    • If this works (i.e., no crashes and you produced calibrated data in the ../calibrated/ directory):
      • Post to your JIRA Data reduction ticket to tell Scott that you have tested your package and scriptForPI and everything is good. Tell him the path to:
        • The original package directory
        • all_calibrated.ms
        • science_calibrated.ms
    • If it fails, you need to fix the problem, which is probably either a python formatting issue or an improper directory structure. Each time you try to test your fix, follow these steps:
      • Exit CASA
      • Delete the 'calibrated' directory, which was probably created by scriptForPI
      • Start CASA again, and execfile('scriptForPI.py')
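When there are several ASDMs, the rename step above (ASDM_NAME to ASDM_NAME.asdm.sdm) can be scripted. This helper is a sketch, assuming each ASDM is a directory sitting directly under raw/:

```python
import os

def rename_asdms(raw_dir):
    """Append the '.asdm.sdm' suffix that scriptForPI expects to every
    ASDM directory in raw_dir that does not already carry it."""
    for name in sorted(os.listdir(raw_dir)):
        if not name.endswith('.asdm.sdm'):
            os.rename(os.path.join(raw_dir, name),
                      os.path.join(raw_dir, name + '.asdm.sdm'))
```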
  • Post a note on the data reduction SCOPS ticket to the contact scientist (who is a watcher on the ticket)
    • This note should contain information that will be relayed by the contact scientist to the PI via the ALMA Helpdesk.
    • This note should include the additional steps that the PI may want to take to make additional images or improve the images already present. This information is the same as the note you added to the README.header.txt file.

Delivering the Data

  • In Cycle 1, the DAs will handle tarring and data delivery
    • The tar command is /users/thunter/AIV/science/DSO/tarsplit.py -o MOUS_name project_code
      • e.g., /users/thunter/AIV/science/DSO/tarsplit.py -o uid___A002_X5ce05d_X162 2012.1.00610.S
  • To see the delivery workflow carried out by the DAs, see this wiki
  • To see which DA is responsible for data delivery of a particular project, see this wiki

Data Reduction - Pipeline Calibration

  • The following is only for certain projects selected for Pipeline comparison. There is no need to carry out these next steps unless specifically asked by Remy/Scott.
  • At least for the beginning of Cycle 1, we will be using Cycle 1 data sets to test the data reduction pipeline
    • The DSO wiki page for Pipeline Cycle 1 Parallel Testing is here. It contains links to official pipeline documentation for staff, but these are only of relevance for DAs, not yet for data reducers.
  • A pipeline reduction will be run by the DAs and can help inform your manual calibration
    • To see the output from the pipeline, find the 'analyzemscal URL' for your project in the "(plausible) PI data" worksheet in this Google document (~column AB)
  • You will be asked (via a Helpdesk ticket) to compare the results of the pipeline calibration with your manual reduction.
    • The link to the analyzemscal output will be in the Helpdesk ticket. To see it you need to login. The user name is "pipetesters". Ask the DAs for the password.
    • Fill out Section 2 of the checklist given here
      • Attach the checklist to the ALMA Helpdesk data reduction ticket
      • Assign the ticket to Remy
    • YOU ARE DONE!

Troubleshooting and Plotting Tricks

-- ScottSchnee - 2013-01-30
Topic attachments
  • BestPracticesDRMCycle1v1.7.pdf (1 MB, uploaded 2014-06-15 by JohnHibbard): Best Practices for Cycle 1 Imaging
Topic revision: r123 - 2015-01-07, BrianMason