This wiki provides a procedure to repackage Cycle 1 data that has previously been delivered in the Cycle 0 style. Such data needs to be repackaged and ingested into the archive so that it is available to PIs, delegated co-Is, and eventually (after the end of the proprietary period), to astronomers everywhere.

Here is the procedure to package and ingest the data, scripts, and other products associated with a single MOUS (equivalent to an SB). Note that each MOUS should be packaged and delivered separately.

Starting Point

The files you need to begin this process are:
  1. For each EB
    1. the ASDM
    2. the scriptForCalibration.py
    3. the QA2 products (these can be regenerated if necessary)
  2. For each SB
    1. the scriptForFluxCalibration.py (only required for SBs composed of multiple EBs)
    2. the scriptForImaging.py
    3. the .fits and .mask files created by the imaging script (these can be regenerated by running the imaging script, if necessary)
    4. the scriptForPI.py
    5. the README.header.txt (previously delivered version and the new version)

Directory structure

The packaging script and the scriptForPI.py both require a certain directory structure. Here is an example for an SB composed of 2 EBs (EB1 and EB2):
  • ./Reduce_00031
    • ./Reduce_00031/Calibration_X001 → This is where you put EB1 and its associated scriptForCalibration.py
      • ./Reduce_00031/Calibration_X001/qa2 → This contains the qa2 plots and text files for EB1
    • ./Reduce_00031/Calibration_X002 → This is where you put EB2 and its associated scriptForCalibration.py
      • ./Reduce_00031/Calibration_X002/qa2 → This contains the qa2 plots and text files for EB2
    • ./Reduce_00031/Combination → This is where you put scriptForFluxCalibration.py (if required)
      • ./Reduce_00031/Combination/calibrated → This is where you put the .split.cal measurement sets for EB1 and EB2
    • ./Reduce_00031/Imaging → This is where you put the scriptForImaging.py and the .fits and .mask files
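
This layout can be set up in advance with a short script. Below is a minimal sketch in plain Python 3 (run outside CASA); the MOUS label Reduce_00031 and the X001/X002 suffixes are just the example names from the listing above, so substitute your own:

```python
import os

# Example names from the listing above; substitute your own MOUS label
# and execution-block suffixes.
mous = "Reduce_00031"
eb_suffixes = ["X001", "X002"]

dirs = []
for suffix in eb_suffixes:
    # One Calibration_* directory per EB, each with a qa2 subdirectory
    dirs.append(os.path.join(mous, "Calibration_" + suffix, "qa2"))
# Combination/calibrated holds the .split.cal measurement sets;
# Imaging holds scriptForImaging.py and the .fits/.mask files.
dirs.append(os.path.join(mous, "Combination", "calibrated"))
dirs.append(os.path.join(mous, "Imaging"))

for d in dirs:
    os.makedirs(d, exist_ok=True)
```

After this runs, you only need to drop the EBs, scripts, and qa2 products into the matching directories.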

Edits to scriptForCalibration.py

  1. Make sure the last line of the calibration script is
               split(vis='uid ....ms.split', outputvis='uid ....ms.split.cal')
  2. Furthermore, edit the calibration script and turn
       if applyonly != True:
         try:
           print 'List of steps to be executed ...', mysteps
           thesteps = mysteps
         except:
           print 'global variable mysteps not set.'
         if (thesteps==[]):
           thesteps = range(0,len(step_title))
           print 'Executing all steps: ', thesteps
     into
       try:
         print 'List of steps to be executed ...', mysteps
         thesteps = mysteps
       except:
         print 'global variable mysteps not set.'
       if (thesteps==[]):
         thesteps = range(0,len(step_title))
         print 'Executing all steps: ', thesteps
     I.e. delete the first "if" line and shift the other lines one indentation step to the left.
  3. Make sure that in front of each call to aU... and es..., there is a
      if applyonly != True:
  4. If you have a scriptForFluxCalibration.py, move the .split.cal file to the ../Combination/calibrated directory
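
If you have many EBs, the de-indent in step 2 is tedious to do by hand. The following hypothetical helper (not part of the delivered scripts) sketches the transformation in plain Python 3, assuming the script uses a consistent two-space indent step:

```python
def unwrap_first_if(lines, wrapper="if applyonly != True:", indent="  "):
    """Drop the first line equal to `wrapper` and shift the indented
    block under it one indentation step to the left."""
    out, inside = [], False
    for line in lines:
        if not inside and line.strip() == wrapper:
            inside = True        # drop the wrapper line itself
            continue
        if inside and line.startswith(indent):
            out.append(line[len(indent):])   # de-indent one step
        else:
            inside = inside and not line.strip()  # blank lines stay inside
            out.append(line)
    return out
```

Read the calibration script into a list of lines, pass it through this helper, and write it back out; spot-check the result before running it, since this sketch makes no attempt to handle unusual indentation.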

Edits to scriptForFluxCalibration.py

  1. Remove the initial calls to split that create the .split.cal files (they are now included at the end of the calibration scripts)
  2. Make sure that the right paths to the .ms.split.cal measurement sets are used, i.e. simply "calibrated/uid___....ms.split.cal"
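
The path fix in step 2 can be sketched with a regular expression. This is an illustrative helper, not part of any delivered script; it assumes the uid___....ms.split.cal naming used above and that the measurement sets appear in vis='...' arguments:

```python
import re

def repoint_vis(text):
    """Prefix bare uid___*.ms.split.cal names in vis= arguments with
    calibrated/, leaving already-prefixed paths untouched."""
    pattern = re.compile(r"vis='(?!calibrated/)(uid___[^']*\.ms\.split\.cal)'")
    return pattern.sub(r"vis='calibrated/\1'", text)
```

Because of the negative lookahead, running the helper twice is harmless: paths that already start with calibrated/ are left alone.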

Edits to scriptForImaging.py

Neither the scriptForPI.py nor the packaging script requires any changes to the imaging script. You may choose to add a note to the scriptForImaging.py that the measurement set to be imaged will be in the 'calibrated' directory.

Edits to README.header.txt

The contents of a delivery, and the instructions to turn an ASDM into a calibrated measurement set, have changed between Cycle 0 style deliveries and Cycle 1 style deliveries. The README.header.txt, therefore, needs to be updated.

  1. Get a new README.header.txt template (this is kept up to date in a CVS repository)
    1. In NA, this is found in /users/thunter/AIV/science/qa2/README.header.txt
  2. Edit the new template with the comments (provided by the data reducer) from the previously-delivered README file


Running the packaging script

Once you have edited the scripts and README.header.txt and made sure that all the files are in the proper directory structure, you are ready to run the packaging script.

  1. Copy the scriptForPI.py and README.header.txt to the ./Reduce_00031 directory (FILL IN THE CORRECT DIRECTORY NAME FOR YOUR PACKAGE)
    1. cd ./Reduce_00031
    2. e.g., in NA: cp /users/thunter/AIV/science/qa2/scriptForPI.py .
  2. Launch CASA
    1. from QA2_Packaging_module import *
    2. QA_Packager(origpath='./',readme='./README.header.txt',packpath='./2012.1.0XXXX.S',PIscript='./scriptForPI.py',append='',mode='hard',noms=True)
      1. Be sure to fill in the XXXX with your project code
  3. Exit CASA
  4. You now have a new directory, with several subdirectories, called 2012.1.0XXXX.S
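
It may be worth confirming that the packager produced the expected hierarchy before moving on. A minimal sketch, assuming the sg_ouss_id/group_ouss_id/member_ouss_id layout shown in the testing steps below (the project code is the usual placeholder, and the helper name is my own):

```python
import os

def check_package(root="2012.1.0XXXX.S"):
    """Raise if the packaged OUS hierarchy is missing; return its path.
    The sg_ouss_id/group_ouss_id/member_ouss_id layout follows the
    testing steps on this page."""
    member = os.path.join(root, "sg_ouss_id", "group_ouss_id",
                          "member_ouss_id")
    if not os.path.isdir(member):
        raise RuntimeError("packaged hierarchy not found under " + root)
    return member
```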

Testing the package and scriptForPI.py

Until we are experienced with the procedure, it probably makes sense to test each package and the scriptForPI.py. To test the package, here are the steps:

  • Copy the 2012.1.0XXXX.S directory somewhere, for instance, to /lustre/naasc/user_name/Testing
    • mkdir /lustre/naasc/user_name/Testing
    • cp -r 2012.1.0XXXX.S /lustre/naasc/user_name/Testing
  • create a 'raw' directory and copy the ASDMs there
    • cd /lustre/naasc/user_name/Testing/2012.1.0XXXX.S/sg_ouss_id/group_ouss_id/member_ouss_id/
    • mkdir raw
    • cp -rf path_to_each_ASDM/ASDM_NAME ./raw
  • Change the names of the ASDMs to the names expected by the scriptForPI.py
    • cd raw
    • mv ASDM_NAME ASDM_NAME.asdm.sdm
  • Execute the scriptForPI
    • cd ../scripts
    • casapy -r 4.2.0 (or whatever version you used for your reduction)
    • execfile('scriptForPI.py')
  • If this works (i.e., no crashes, and calibrated data was produced in the ../calibrated/ directory), the package has passed the test
  • If it fails, you need to fix the problem, which is probably either a python formatting issue or an improper directory structure. Each time you try to test your fix, follow these steps:
    • Exit CASA
    • Delete the 'calibrated' directory, which was probably created by scriptForPI.py
    • Start CASA again, and execfile('scriptForPI.py')
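
The copy, mkdir, and rename steps above can be bundled into one sketch. This is a hypothetical helper in plain Python 3 (the function name is my own); the OUS path and the .asdm.sdm suffix follow the steps above:

```python
import os
import shutil

def stage_for_test(package, asdm_paths, testdir):
    """Copy a package into a scratch area and stage renamed copies of
    the ASDMs under raw/, where scriptForPI.py expects them."""
    dest = os.path.join(testdir, os.path.basename(package))
    shutil.copytree(package, dest)
    raw = os.path.join(dest, "sg_ouss_id", "group_ouss_id",
                       "member_ouss_id", "raw")
    os.makedirs(raw, exist_ok=True)
    for asdm in asdm_paths:
        # scriptForPI.py expects each ASDM to carry the .asdm.sdm suffix
        target = os.path.join(raw, os.path.basename(asdm) + ".asdm.sdm")
        shutil.copytree(asdm, target)
    return raw
```

After staging, cd into the package's scripts directory, start the matching CASA version, and execfile('scriptForPI.py') as described above.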

Creating tar files

The archive wants us to upload our packages as one or more tar files. The tar files have a specific naming convention, and can be created using tarsplit.py.

  • The path to tarsplit.py is
    • In NA: /users/thunter/AIV/science/DSO/tarsplit.py
    • In EA: /remote/home/skomugi/AIV/science/DSO/tarsplit.py
    • In EU:
  • The call to tarsplit.py takes the form tarsplit_path/tarsplit.py -o MOUS_name project_code
    • An example for NA: /users/thunter/AIV/science/DSO/tarsplit.py -o uid___A002_X5ce05d_X162 2012.1.00610.S

Ingesting the data into the archive

Here is an outline of the steps involved in getting the packaged data into the ALMA archive:

  1. Open an APO ticket
  2. Copy data to JAO and make note on APO ticket
  3. The JAO APO team ingests the data into the archive and makes a note on the APO ticket once ingestion is complete and the data have been replicated to all the ARCs
  4. The metadata for the delivery is then entered by ARC staff on the Data Delivery Email wiki: https://ictwiki.alma.cl/twiki/bin/view/ZLegacy/Archive/DataDeliveryEmail Please take care to enter the metadata in exactly the same format as prior entries, as the ingest script is sensitive to the exact format of the uids and filenames. (Note that the JAO APO team may take over this role and perform the metadata ingestion as part of step 3.)
  5. After a few hours, check that the metadata has correctly propagated to your ARC by attempting to search for the data in the archive query tool (remember to turn off the default selection of public data only). You should see a checkbox on a red background by the results row corresponding to the delivery. If you do not, please inform your local archive scientist.

-- ScottSchnee - 2014-02-12