CASA Analysis Cookbook

Version: 2006-12-18


Introduction -- Getting Started

This document describes how to calibrate and image interferometric data using tasks from the CASA (Common Astronomy Software Application) package (single-dish analysis will be incorporated in this document at a later date). CASA is a suite of astronomical data reduction tools and tasks that can be run via the IPython interface to Python. Tools in CASA provide the full capability of the package. Tasks are essentially IPython interface scripts to the tools, but with specific, limited access to them. The idea behind tasks is that they are simpler to use, provide a more familiar interface, and are easier to learn for most astronomers who are familiar with radio interferometric data reduction (and hopefully for novice users as well). Currently, CASA is in an alpha release, and this should be taken into account as users begin to learn the package.

For the moment, the audience is assumed to have some basic grasp of the fundamentals of synthesis imaging, so details of how a radio interferometer works and why the data needs to undergo calibration in order to make synthesis images are left to other documentation --- a good place to start might be Synthesis Imaging in Radio Astronomy II (1999, ASP Conference Series Vol. 180, eds. Taylor, Carilli & Perley).

Finally, this document provides some examples of executing tasks in CASA and what might be expected during and at the end of such execution. These examples are found in Appendix II.

Current CASA documentation consists of:

  • CASA Task Analysis Cookbook - This document
  • CASA in-line help
  • A CASA "tool-based" Cookbook - useful when the new tasks do not have everything you want and you need more power/functionality.

If you have experience with CLIC, AIPS, or MIRIAD software packages to reduce interferometric data, Appendices 2--4 provide CLIC-CASA, AIPS-CASA, and MIRIAD-CASA dictionaries which map common CLIC, AIPS and MIRIAD tasks/adverbs to their counterparts in CASA.

Besides this cookbook, you may also want to access CASA documentation on the web at:

http://casa.nrao.edu

CASA Basics --- Useful Information for First-Time Users

Starting CASA

This section assumes that CASA has been installed on your Linux or OSX system. See Appendix 1 for instructions on how to install CASA. For NRAO-AOC testers, you should do the following on an AOC RHE4 machine:

  > . /home/casa/casainit.sh          (for bash/sh shells)
or
  > source /home/casa/casainit.csh    (for csh/tcsh shells)

After having run the appropriate casainit script, CASA is started by typing casapy on the command line. After startup information, you should see the casapy command prompt: CASA <1>:

Ending CASA

You can exit CASA by typing CTRL-D, %exit, or quit. If you don't want to see the question "Do you really want to exit [y]/n?", type Exit instead and CASA will stop immediately.



Python

Within CASA, you use Python to interact with the system. This does not mean an extensive Python course is necessary - basic interaction with the system (assigning parameters, running tasks) is straightforward. At the same time, the full potential of Python is at the more experienced user's disposal. Some further details about Python, IPython, and the interaction between Python and CASA can be found in Appendix II.

Tools and tasks

Tools

Originally, CASA consisted of a collection of tools, combined in the so-called toolkit. Since the majority of prospective users are far more familiar with the concept of tasks, an effort is underway to replace most - if not all - toolkit functionality with tasks.

Tasks

While running CASA, you will have access to and be interacting with tasks, either indirectly by providing parameters to a task, or directly by running a task. Each task has a well-defined purpose and a number of associated parameters, the values of which are supplied by the user. Technically speaking, tasks are built on top of tools - when you run a task, you are running tools in the toolkit, though this should be transparent.

What does this mean for the normal user?

As more tasks are being written, and the functionality of each task is enhanced, there will be less and less reason to run tools in the toolkit. We are working toward a system in which direct access to the underlying toolkit is unnecessary for all standard data processing.

Data

Measurement Sets

Interferometric data are filled into a so-called measurement set (or MS), which is a generalized description of data from any interferometric (visibility or "uv" data) or single-dish telescope. The MS consists of several tables in a directory on disk. The actual data are in a large table that is organized in such a way that you can access different parts of the data easily. When you create an MS called, say, "new-data.ms", all of its tables are stored in a directory on disk named new-data.ms. See Appendix XX for a detailed description of the Measurement Set.

Where are my data?

The data that you originally get from a telescope can be put in any directory that is convenient to you. Once you "fill" the data into a measurement set that can be accessed by CASA, it is generally best to keep that MS in the same directory where you started CASA so you can access it easily (rather than constantly having to specify a full path name).

Also, when you generate calibration solutions or images (again, these are in table format), these will also be written to disk. It is a good idea to keep them in the directory in which you started CASA. Note that when you delete a measurement set, calibration table, or image, you must delete the top-level directory, and all underlying directories and files, using the file deletion method of the operating system from which you started CASA. For example, when running CASA on a Linux system, in order to delete the measurement set named AM675 type:

  CASA <5>: !rm -r AM675

from within CASA. The ! tells CASA that a system command follows, and the -r makes sure that all subdirectories are being deleted recursively. Any system command may be executed in this manner from within CASA.
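
For example, since a measurement set is just a directory of tables, you can look inside it the same way (a sketch; the sub-table names shown are typical and vary with the data):

  CASA <6>: !ls new-data.ms
  ANTENNA  FIELD  OBSERVATION  POLARIZATION  SPECTRAL_WINDOW  table.dat ...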

Further Details About Tasks

As mentioned in the introduction, tasks in CASA are python interfaces to the more basic toolkit. Tasks are executed to perform a single job, such as loading, plotting, flagging, calibrating, and imaging the data.

Basic information on tasks, including the parameters used and their defaults, can be obtained by typing help taskname at the CASA prompt, where taskname is the name of a given task. help taskname provides a description of the task and then lists all parameters, a brief description of each parameter, the parameter default, an example of setting the parameter, and any options if there are limited allowed values for the parameter.

*NOTE: Your PAGER environment variable determines how help is displayed in the terminal window where you start CASA. If you set your environment variable PAGER=less, then typing 'help task' will show you the help, but the text will vanish and return you to the command line when you are done viewing it. Setting PAGER=more will scroll the help onto your command window and then return you to your prompt (but leaving it on display). Setting PAGER=cat will give you the 'more' equivalent without some extra formatting baggage and is the recommended choice.
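
For example, to select the recommended setting before starting CASA (a sketch; the first line is for bash/sh shells, the second for csh/tcsh):

  > export PAGER=cat     # bash/sh
  > setenv PAGER cat     # csh/tcsh
  > casapy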

To see what tasks are available in CASA, type 'tasklist', e.g.
  CASA <48>: tasklist
  ---------> tasklist()
  Available tasks: 

  Import/Export   Information     Editing         Display/Plot
  -------------   -----------     -------         ------------
  importvla       listhistory     flagautocorr    plotants
  importasdm      listobs         flagger         plotcal 
  importfits                      flagxy          plotxy
  importuvfits                                    viewer
  exportfits
  exportuvfits

  Calibration     Imaging         Modelling       Utility
  -----------     -------         ---------       -------
  accum           clean           contsub         help task
  bandpass (B)    feather         setjy           help par.parameter
  blcal (M,MF)    ft              uvmodelfit      taskhelp
  correct         invert                          tasklist
  gaincal (G)     makemask                        browsetable
  fluxscale       mosaic                          split
  fringecal (K)                                   restore
  clearcal                                                
  pointcal 
  smooth

  Image Analysis  Simulation
  --------------  ----------
  imcontsub       almasim
  regrid

Typing taskhelp provides a one-line description of all available tasks, e.g.
  CASA <49>: taskhelp
  ---------> taskhelp()
  Available tasks: 

  accum        : Accumulate calibration solutions into a cumulative table
  bandpass     : Calculate a bandpass calibration solution
  blcal        : Calculate a baseline-based calibration solution
  browsetable  : Browse a visibility data set or calibration table
  clean        : Calculate a deconvolved image with selected clean algorithm
  clearcal     : Re-initialize visibility data set calibration data
  contsub      : Continuum fitting and subtraction in the uv plane
  correct      : Apply calculated calibration solutions
  exportuvfits : Export MS to UVFITS file
  feather      : Feather together an interferometer & a single dish image in the Fourier plane
  flagautocorr : Flag autocorrelations (typically in a filled VLA data set)
  fluxscale    : Bootstrap the flux density scale from standard calibrators
  fringecal    : Calculate a baseline-based fringe-fitting solution (phase, delay, delay-rate)
  ft           : Fourier transform the specified model (or component list)
  gaincal      : Calculate gain calibration solutions
  importvla    : Convert VLA archive file(s) to a CASA visibility data set (MS)
  importasdm   : Convert an ALMA Science Data Model directory to a CASA visibility data set (MS)
  importfits   : Convert a FITS image to a CASA image
  importuvfits : Convert a UVFITS file to a CASA visibility data set (MS)
  imhead       : Print image header to logger
  invert       : Calculate a dirty image and dirty beam
  listhistory  : List the processing history of a data set
  listobs      : List the observations in a data set
  flagxy       : Plot points for flagging selected X and Y axes
  makemask     : Calculate mask from image or visibility data set
  mosaic       : Calculate a multi-field deconvolved image with selected clean algorithm
  plotants     : Plot the antenna distribution in local reference frame
  plotcal      : Plot calibration solutions
  plotxy       : Plot points for selected X and Y axes
  pointcal     : Calculate pointing error calibration
  setjy        : Compute the model visibility for a specified source flux density
  split        : Create a new data set (MS) from a subset of an existing data set (MS)
  smooth       : Produce a smoothed calibration table
  uvmodelfit   : Fit a single component source model to the uv data
  viewer       : View an image or visibility data set

Typing startup displays the startup page shown when entering CASA.

Setting Parameters and Invoking Tasks

Tasks require input parameters (sometimes called keywords). One can either: 1) assign parameters, one at a time, from the CASA prompt and then execute the task; 2) execute the task in one line, explicitly assigning each parameter within the line; or 3) use the position of each value within the task call to specify the parameters. For example, the following are equivalent:

  # Specify parameter names for each keyword input: 
    plotxy(vis='ngc5921.ms',xaxis='channel',yaxis='amp',datacolumn='data')
  # when specifying the parameter name, order doesn't matter, e.g.:
    plotxy(xaxis='channel',vis='ngc5921.ms',datacolumn='data',yaxis='amp')
  # use parameter order for invoking tasks
    plotxy('ngc5921.ms','channel','amp','data')

You can also set parameters by performing the assignment within the CASA shell and then inspecting them using the inp command.

  CASA <5>: inp 'listobs'       # type this to see the inputs needed for the listobs task
  --------> inp('listobs')      # CASA tells you that it is interpreting your input into
                                #  the correct format with parentheses.  

  listobs -- List the observations in a data set: # This is what is returned: 
  vis         = "False"         # A list of input parameters and what they are set to
  verbose     =  False          # If you have not set a parameter, then the default is shown. 

  CASA <6>: vis='ngc5921.ms'    # Set the parameter vis to be the visibility dataset ngc5921.ms
  CASA <7>: inp 'listobs'       # Now ask to see the input list for the listobs task
  --------> inp('listobs')

  listobs -- List the observations in a data set:
  vis         = "ngc5921.ms"    # Now the keyword vis is set to the MS file you selected.  
  verbose     =  False

  CASA <8>: listobs             # now run the task with these inputs to get a summary list
  --------> listobs()           # of the observation in the logger

All task parameters have global scope within CASA, i.e., the parameter values are common to all tasks and at the CASA command line. This allows the convenience of not changing parameters that are shared between tasks, but does require care when chaining together sequences of task invocations (to ensure proper values are provided); see Specifying parameters to tasks below for a demonstration.

If you want to reset all input keywords for all tasks, type:

  CASA <10>: restore

If you want to reset the input keywords for a single task, say clean, type:

  CASA <12>: default('clean')

To inspect a single parameter value just type it at the command line:
  CASA <16>: alg          # type 'alg' to see what the algorithm keyword is set to
    Out[16]: 'clark'      # CASA tells you it is set to use the Clark algorithm

Help on a specific parameter can be obtained by typing 'help par.parameter_name', e.g.,

  CASA <17>: help par.alg   # Ask for help on parameter 'alg'
  ---------> help(par.alg)

  Help on function alg in module parameter_dictionary: # CASA returns this: 
  alg()
    Imaging algorithm to use. Options: 'clark', 'hogbom', 'csclean', 'multiscale',
    'mfclark', 'mfhogbom', 'mfmultiscale':
    Default: 'hogbom'

Saving and Recalling Task Parameters

Parameters for a given task can be saved by using the saveinputs command and restored using execfile 'filename'. For example:

  CASA <1>: vis='ngc5921.ms'          # set the parameter vis to ngc5921.ms

  CASA <2>: inp 'listobs'             # See the inputs to the listobs task
  --------> inp('listobs')

  listobs -- List the observations in a data set:
  vis         = "ngc5921.ms"          # yep, it's set.  
  verbose     =  False

  CASA <3>: saveinputs 'listobs'      # save listobs inputs to a file on disk called 
  --------> saveinputs('listobs')     # 'listobs.saved'

  CASA <4>: !more 'listobs.saved'     # view the listobs.saved file on disk.  

  vis         = "ngc5921.ms"          # yep, it's here and the inputs are all nicely saved.  
  verbose     =  False

  CASA <5>: vis='someotherfile.ms'    # Now, set the keyword vis to someotherfile.ms

  CASA <6>: inp 'listobs'             # See the inputs to the listobs task
  --------> inp('listobs')

  listobs -- List the observations in a data set:
  vis         = "someotherfile.ms"    # OK, vis is set to this new file name
  verbose     =  False

  CASA <7>: execfile 'listobs.saved'  # now execute the listobs.saved file to restore the 
  --------> execfile('listobs.saved') # inputs to the previous values. 

  CASA <8>: inp 'listobs'             # Look at the listobs inputs now
  --------> inp('listobs')

  listobs -- List the observations in a data set:
  vis         = "ngc5921.ms"          # Yep, they are back to the original values that you 
  verbose     =  False                # saved before.  

  CASA <9>: saveinputs 'listobs','ngc5921_listobs.par'  # Now save parameters to a file with 
  --------> saveinputs('listobs','ngc5921_listobs.par') # a name that you choose

  CASA <10>: !more ngc5921_listobs.par  # Look at that file on disk
  vis         = "ngc5921.ms"            # you can run this anytime with the command 
  verbose     =  False                  # execfile('ngc5921_listobs.par')

Other help navigating the system is available natively within Python/Ipython; see the next section for details on the interface.

You should get an IPython command prompt in the xterm window where you started CASA. CASA will take approximately 10 seconds to initialize at startup in a new working directory; subsequent startups are faster. CASA is active when you get a ''CASA <1>'' prompt in the command line interface.


To test the installation, please download the following file and place it into your local directory (where you will execute casapy):

http://casa.nrao.edu/ngc5921_regression.py

The only line that must be changed is the importuvfits command, which should point to your data directory:

  • AOC: /home/casa/data.
  • MAC OSX install: /opt/casa/data
  • RHE4 install: /usr/lib/casapy/data

Now execute the following command from the CASA prompt to run an end-to-end regression/benchmark test on a small VLA data set:

CASA <1>: execfile 'ngc5921_regression.py'
--------> execfile('ngc5921_regression.py')
--Import--
--Flagautocorr--
Whole data set has been selected or use mode argument to select specific subset 

0%....10....20....30....40....50....60....70....80....90....100%

0%....10....20....30....40....50....60....70....80....90....100%

0%....10....20....30....40....50....60....70....80....90....100%

0%....10....20....30....40....50....60....70....80....90....100%

0%....10....20....30....40....50....60....70....80....90....100%

0%....10....20....30....40....50....60....70....80....90....100%
--Setjy--

0%....10....20....30....40....50....60....70....80....90....100%
--Gaincal--
--Bandpass--
--Fluxscale--
--Correct--
--Split (cal data)--
--Continuum Subtract--
mode now set to subtract
Processing 2 slots.
 Slot  1 2
--Split (src data)--
--Clean--

0%....10....20....30....40....50....60....70....80....90....100%

0%....10....20....30....40....50....60....70....80....90....100%

0%....10....20....30....40....50....60....70....80....90....100%
Cal max amp  32.1873168945
Src max amp  1.38156664371

0%....10....20....30....40....50....60....70....80....90....100%
Warning no plotter attached.  Attach a plotter to get plots
Image max  [0.053987175226211548]
Image rms  [0.0019960671197623014]


********** Regression ***********
*                               *
* Passed cal max amplitude test *
* Passed src max amplitude test *
* Passed image max test         *
* Passed image rms test         *
---
Passed Regression test for NGC5921
---
*********************************


********* Benchmarking *****************
*                                      *
Total wall clock time was:  33.5700910091
Total CPU        time was:  23.86
Processing rate MB/s  was:  1.04557357293
* Breakdown:                           *
*   import       time was:  3.29027104378
*   flagautocorr time was:  1.05761194229
*   setjy        time was:  3.13838410378
*   gaincal      time was:  1.32955789566
*   bandpass     time was:  1.32256412506
*   fluxscale    time was:  0.225440979004
*   correct      time was:  1.02406787872
*   split-cal    time was:  0.58132815361
*   contsub      time was:  1.49871182442
*   split-src    time was:  1.3791179657
*   clean        time was:  16.9850730896
*****************************************
basho (test cpu) time was: 55 seconds

Interface

CASA uses Python, IPython and matplotlib within the package.

"Python is an interpreted, interactive, object-oriented programming language" (from www.python.org); it is used as the underlying command line interface/scripting language to CASA . Note: since Python is inherently 0-based in its indexing of arrays, vectors, etc, CASA is also 0-based; any Index inputs (e.g., start (for start channel), fieldIndex, antennaID, etc) will start with 0.

IPython is an enhanced, interactive shell to Python which provides many features for efficient command line interaction.

matplotlib is a Python 2-D plotting library for publication quality figures in different hardcopy formats.

Each of the features of these components behaves in the standard way within CASA. In the following sections, we outline the key elements for analysis interactions; see the IPython documentation for the full suite of functionality.

In-line help

  • TAB key

At any time, hitting the TAB key will complete any available commands or variable names and show you a list of the possible completions if there's no unambiguous result. It will also complete filenames in the current directory if no CASA or Python names match.

For example, it can be used to list the available functionality using minimum match; once you have typed enough characters to make the command unique, TAB will complete it.
CASA <15>: cle<TAB>
clean     clear     clearcal  

  • help task

Basic information on an application, including the parameters used and their defaults, can be obtained by typing help task (pdoc task and task? are equivalent commands with some additional programming information returned). help task provides a one-line description of the task and then lists all parameters, a brief description of each parameter, the parameter default, an example of setting the parameter, and any options if there are limited allowed values for the parameter.

*NOTE: The behavior of the help display is controlled by the user's PAGER environment variable. If you have this set to 'more' or to 'less', the display of 'help task' is fine but the display of 'taskname?' will often have confusing formatting content at the beginning (lots of 'ESC' surrounding the text). This can be remedied by exiting casapy and doing an 'unset PAGER' at the unix command line.

CASA <45>: help contsub
Help on function contsub in module contsub:

contsub(vis=None, fieldid=None, spwid=None, channels=None, solint=None, fitorder=None, fitmode=None)
    Continuum fitting and subtraction in the uv plane:

    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: <unset>; example: vis='ngc5921.ms'
    fieldid -- Field index identifier
            default=unset; example: fieldid=1
    spwid -- Spectral window index identifier
            default=0; example: spwid=1
    channels -- Range of channels to fit
            default:; example: channels=range(4,7)+range(50,60)
    solint -- Averaging time (seconds)
            default: 0.0 (scan-based); example: solint=10
    fitorder -- Polynomial order for the fit
            default: 0; example: fitorder=1
    fitmode -- Use of the continuum fit model
            default: 'subtract'; example: fitmode='replace'
            <Options:
            'subtract'-store continuum model and subtract from data,
            'replace'-replace vis with continuum model,
            'model'-only store continuum model>
  • help par.parameter - provides a brief description of a given parameter
CASA <46>: help par.robust
Help on function robust in module parameter_dictionary:

robust()
    Briggs' robustness parameter.

    Options: -2.0 (close to uniform) to 2.0 (close to natural)

  • help (with no arguments) - native Python help facility

Typing 'help' will give you the default Python help and give you the help> prompt for further information; hitting RETURN at the help prompt returns you to the CASA prompt. You can also get the short help for a CASA method by typing 'help tool.method' or 'help task'.

CASA [2]: help
---------> help()

Welcome to Python 2.5!  This is the online help utility.

If this is your first time using Python, you should definitely check out
the tutorial on the Internet at http://www.python.org/doc/tut/.

Enter the name of any module, keyword, or topic to get help on writing
Python programs and using Python modules.  To quit this help utility and
return to the interpreter, just type "quit".

To get a list of available modules, keywords, or topics, type "modules",
"keywords", or "topics".  Each module also comes with a one-line summary
of what it does; to list the modules whose summaries contain a given word
such as "spam", type "modules spam".
help> keywords

Here is a list of the Python keywords.  Enter any keyword to get more help.

and                 else                import              raise
assert              except              in                  return
break               exec                is                  try
class               finally             lambda              while
continue            for                 not                 yield
def                 from                or
del                 global              pass
elif                if                  print

help>

# hit <RETURN> to return to CASA prompt

You are now leaving help and returning to the Python interpreter.
If you want to ask for help on a particular object directly from the
interpreter, you can type "help(object)".  Executing "help('string')"
has the same effect as typing a particular string at the help> prompt.

CASA <3>: help gaincal
--------> help(gaincal)
Help on function gaincal in module gaincal:

gaincal(vis=None, caltable=None, mode=None, nchan=None, start=None, step=None, msselect=None, gaincurve=None, 
opacity=None, tau=None, solint=None, refant=None)
    Solve for gain calibration:
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: <unset>; example: vis='ngc5921.ms'
    caltable -- Name of output Gain calibration table
            default: <unset>; example: caltable='ngc5921.gcal'
    mode -- Type of data selection
            default: 'none' (all data); example: mode='channel'
            <Options: 'channel', 'velocity', 'none'
    nchan -- Number of channels to select (when mode='channel')
            default: 1; example: nchan=45 (note: nchan=0 selects all)
    start -- Start channel, 0-relative;active when mode='channel','velocity'
            default=0; example: start=5,start='20km/s'
    step -- Increment between channels;active when mode='channel','velocity'
            default=1; example: step=1, step='1km/s'
    msselect -- Optional subset of the data to select (field,spw,time,etc)
            default:'';example:msselect='FIELD_ID==0', 
            msselect='FIELD_ID IN [0,1]', msselect='FIELD_ID <= 1'
            <Options: see: http://aips2.nrao.edu/docs/notes/199/199.html>
    gaincurve -- Apply VLA antenna gain curve correction
            default: True; example: gaincurve=False
            <Options: True, False>
    opacity -- Apply opacity correction
            default: True; example: opacity=False
            <Options: True, False>
    tau -- Opacity value to apply (if opacity=True)
            default:0.0001; example: tau=0.0005
    solint --  Solution interval (sec)
            default: -1 (scan based); example: solint=60.
    refant -- Reference antenna
            default: -1 (none); example: refant=13

  • startup - provides the CASA startup screen (notes available tasks, tools and help methods).

  • User Reference Manual

The User Reference Manual has not yet been brought up to date for the Python interface. It can be useful for obtaining additional information on a given method (though the examples remain in Glish).

It can be reached at:

http://casa.nrao.edu/docs

Automatic parentheses

Automatic parentheses are enabled for calling functions with argument lists; this feature is intended to allow less typing for common situations. IPython will display its interpretation of the line, beneath the one typed, as indicated by the '-------->'. Typing 'autocall 2' enables this mode, which is assumed throughout this cookbook.

CASA <12>: inp 'listobs'
---------> inp('listobs')

listobs -- List the observations in a data set:

vis         = "False"
verbose     =  False

CASA <13>: vis='ngc5921.ms'

CASA <14>: inp 'listobs'
---------> inp('listobs')

listobs -- List the observations in a data set:

vis         = "ngc5921.ms"
verbose     =  False

CASA <15>: listobs
---------> listobs() # logger contains the observing summary

*NOTE: Parentheses should not be omitted in scripts which are executed through 'execfile'; IPython's autoparentheses feature only works at the command line and not in batch/scripting mode.*
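
For example, a line that works interactively must carry its parentheses when placed in a script (a minimal sketch; the script name is arbitrary):

  # in a file run via execfile('myscript.py'):
  inp('listobs')     # explicit parentheses required
  listobs()          # typing bare 'listobs' would not invoke the task here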

Specifying parameters to tasks

All task parameters have global scope within the CASA shell, i.e., the parameter values are common to all tasks and at the CASA command line (see Setting Parameters and Invoking Tasks above). This allows the convenience of not changing parameters that are shared between tasks, but does require care when chaining together sequences of task invocations (to ensure proper values are provided).

Because parameters are global, a value set once at the command line is seen by every task that uses that parameter. For example:

CASA <4>: vis
  Out[4]: False

CASA <5>: inp 'listobs'
--------> inp('listobs')

listobs -- List the observations in a data set:

vis         = "False"
verbose     =  False

CASA <6>: vis='ngc5921.ms'

CASA <7>: inp 'listobs'
--------> inp('listobs')

listobs -- List the observations in a data set:

vis         = "ngc5921.ms"
verbose     =  False

CASA <8>: inp 'listhistory'
--------> inp('listhistory')

listhistory -- List the processing history of a data set:

vis         = "ngc5921.ms"
# Note: other tasks that use the same parameters will have them changed as well.

CASA <9>: listhistory
---------> listhistory()
# file history is printed in the logger

Parameter sets for a given task can be saved with the saveinputs command and restored with execfile 'filename', exactly as shown in Saving and Recalling Task Parameters above.

System shell access

Any input line beginning with a '!' character is passed verbatim (minus the '!', of course) to the underlying operating system (*the sole exception to this is the 'cd' command, which must be executed without the '!'*).

Several common commands (ls, pwd, cd, less) may be executed with or without the '!'.

CASA [1]: pwd
/export/home/corsair-vml/jmcmulli/data
CASA [2]: ls n*
ngc5921.ms ngc5921.py
CASA [3]: !cp -r ../test.py .

In addition, filesystem navigation is aided through the use of bookmarks to simplify access to frequently-used directories:

CASA [4]: cd /home/ballista/jmcmulli/other_data
CASA [4]: pwd
/home/ballista/jmcmulli/other_data
CASA [5]: bookmark other_data
CASA [6]: cd /export/home/corsair-vml/jmcmulli/data
CASA [7]: pwd
/export/home/corsair-vml/jmcmulli/data
CASA [8]: cd -b other_data
(bookmark:data) -> /home/ballista/jmcmulli/other_data

Output from shell commands can be captured in two ways:

  1. sx shell_command, !!shell_command - this captures the output to a list

CASA [1]: sx pwd # stores output of 'pwd' in a list
  Out[1]: ['/home/basho3/jmcmulli/pretest']

CASA [2]: !!pwd  # !! is a shortcut for 'sx'
  Out[2]: ['/home/basho3/jmcmulli/pretest']

CASA [3]: sx ls v* # stores output of 'ls v*' in a list
  Out[3]:
['vla_calplot.jpg',
 'vla_calplot.png',
 'vla_msplot_cals.jpg',
 'vla_msplot_cals.png',
 'vla_plotcal_bpass.jpg',
 'vla_plotcal_bpass.png',
 'vla_plotcal_fcal.jpg',
 'vla_plotcal_fcal.png',
 'vla_plotvis.jpg',
 'vla_plotvis.png']

CASA [4]: x=_ # remember '_' is a shortcut for the output from the last command

CASA [5]: x
  Out[5]:
['vla_calplot.jpg',
 'vla_calplot.png',
 'vla_msplot_cals.jpg',
 'vla_msplot_cals.png',
 'vla_plotcal_bpass.jpg',
 'vla_plotcal_bpass.png',
 'vla_plotcal_fcal.jpg',
 'vla_plotcal_fcal.png',
 'vla_plotvis.jpg',
 'vla_plotvis.png']

CASA [6]: y=Out[2] # or just refer to the enumerated output

CASA [7]: y
  Out[7]: ['/home/basho3/jmcmulli/pretest']

  2. sc - captures the output to a variable; options are '-l' and '-v'

CASA [1]: sc x=pwd # capture output from 'pwd' to the variable 'x'

CASA [2]: x
  Out[2]: '/home/basho3/jmcmulli/pretest'

CASA [3]: sc -l x=pwd # capture the output from 'pwd' to the variable 'x' but
                      # split newlines into a list (similar to sx command)

CASA [4]: x
  Out[4]: ['/home/basho3/jmcmulli/pretest']

CASA [5]: sc -v x=pwd # capture output from 'pwd' to a variable 'x' and
                      # show what you get (verbose mode)
x ==
'/home/basho3/jmcmulli/pretest'

CASA [6]: x
  Out[6]: '/home/basho3/jmcmulli/pretest'

Logging

There are two components to logging within CASA. Logging of all command line inputs is done via IPython.

Upon startup, CASA will log all commands to a file called ipython.log. This file can be changed via the use of the ipythonrc file.

The following line sets up the logging for CASA. There are four options following the specification of the logging file: 1) append, 2) rotate (each session of CASA will create a new log file with an incrementing counter: ipython.log.1~, ipython.log.2~, etc.), 3) over (overwrite the existing file), and 4) backup (rename the existing log file to log_name~).

logfile ./ipython.log append
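
The other modes are specified the same way (sketches; only one logfile line should be active in your ipythonrc):

logfile ./ipython.log rotate    # new numbered log file each session
logfile ./ipython.log over      # overwrite the existing log file
logfile ./ipython.log backup    # rename the existing log file to ipython.log~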

The command logstate will provide details on the current logging setup:

CASA [12]: logstate

File:   ipython.log
Mode:   append
State:  active

Logging can be turned on and off using the logon, logoff commands.

The second component is the output from applications which is directed to the file ./casapy.log.

Your command line history is automatically maintained and stored in the local directory as ipython.log; this file can be edited and re-executed as appropriate using the execfile 'filename' feature. In addition, logging of output from commands is sent to the file casapy.log, also in your local directory; this is brought up automatically.

The logger has a range of features including the ability to filter messages, sort by Priority, Time, etc, and the ability to insert additional comments.

* CASA logger:
%BEGINFIGURE{label='fig:logger', caption='CASA Logger GUI window'}% casalogger1.jpg%ENDFIGURE%

* CASA logger - Search facility: Specify a string in the entry box to have all instances of the found string highlighted.
%BEGINFIGURE{label='fig:logger_search',caption='CASA Logger - Search example'}% casalogger select.jpg%ENDFIGURE%

* CASA logger - Filter facility: The log output can be sorted by Priority, Time, Origin. One can also filter for a string found in the Message.
%BEGINFIGURE{label='fig:logger_filter',caption='CASA Logger - Filter example'}% casalogger filter.jpg%ENDFIGURE%

* CASA logger - Insert facility: The log output can be augmented by adding notes or comments during the reduction. The file should then be saved to disk to retain these changes.
%BEGINFIGURE{label='fig:logger_insert',caption='CASA Logger - Insert example'}% casalogger insert.jpg%ENDFIGURE%

History and Searching

Numbered input/output history is provided natively within IPython. Command history is also maintained on-line.

CASA [11]: x=1

CASA [12]: y=3*x

CASA [13]: z=x**2+y**2

CASA [14]: x
  Out[14]: 1

CASA [15]: y
  Out[15]: 3

CASA [16]: z
  Out[16]: 10

CASA [17]: Out[14]   # Note: The 'Out' vector contains command output
  Out[17]: 1

CASA [18]: _15       # Note: The return value can be accessed by _number
  Out[18]: 3

CASA [19]: ___       # Note: The last three return values can be accessed as:
  Out[19]: 10        #       _, __, ___

Command history can be accessed via the 'hist' command. The history is reset at the beginning of every CASA session; that is, typing 'hist' when you first start CASA will not show any commands from the previous session. However, all of the commands are still available at the command line and can be accessed through the up/down arrow keys, and through searching.

CASA [22]: hist
1 : __IP.system("vi temp.py")  # Note:shell commands are designated in this way
2 : ipmagic("%run -i temp.py") # Note:magic commands are designated in this way
3 : ipmagic("hist ")
4 : more temp.py
5 : __IP.system("more temp.py")
6 : quickhelp()                # Note: autoparenthesis are added in the history
7 : im.open('ngc5921.ms')
8 : im.summary()
9 : ipmagic("pdoc im.setdata")
10: im.close()
11: quickhelp()
12: ipmagic("logstate ")
13: x=1
14: y=3*x
15: z=x**2+y**2
16: x
17: y
18: z
19: Out[16]
20: _17
21: ___

The history can be saved as a script or used as a macro for further use:

CASA [24]: save script.py 13:16
File `script.py` exists. Overwrite (y/[N])? y
The following commands were written to file `script.py`:
x=1
y=3*x
z=x**2+y**2
CASA [25]: !more script.py
x=1
y=3*x
z=x**2+y**2

CASA [26]:

Note that the history commands will be saved up to, but not including, the last value (i.e., 'save script.py 13:16' saves commands 13, 14, and 15).
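
This follows Python's usual half-open slice convention, e.g.:

CASA [27]: range(13, 16)   # the end point is excluded
  Out[27]: [13, 14, 15]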

There are two mechanisms for searching command history:

  1. Begin typing and use the Ctrl-p (previous,up) and Ctrl-n (next,down) to search through only
the history items that match what you've typed so far. If you use Ctrl-p,Ctrl-n at a blank prompt, they behave just like the normal arrow keys.

  2. Hit Ctrl-r; this opens a search prompt. Begin typing and the system searches your history for
lines that contain what you've typed so far, completing what it can.

For example:

CASA [37]:

(reverse-i-search)`':

Typing anything after the colon will provide you with the last command matching the characters, for example, typing 'op' finds:

(reverse-i-search)`op': im.open('ngc5921.ms')

Subsequent hitting of Ctrl-r will search for the next command matching the characters.

Macros

Macros can be made for easy re-execution of previous commands. For example, to store the commands 13-15 in the macro 'example':

CASA [31]: macro example 13:16
Macro `example` created. To execute, type its name (without quotes).
Macro contents:
x=1
y=3*x
z=x**2+y**2

CASA [32]: z
Out[32]: 6

CASA [33]: z=10

CASA [34]: example
Out[34]: Executing Macro...

CASA [35]: z
Out[35]: 6

CASA [36]:

On-line editing

You can edit files on-line in two ways:

  1. Using the shell access via '!vi'
  2. Using the ed function; this will edit the file but upon closing, it will try to
execute the file; using the 'script.py' example above:

CASA [13]: ed script.py # this will bring up the file in your chosen editor;
                        # when you are finished editing the file, it will automatically
                        # execute it (as though you had done execfile 'script.py')
Editing... done. Executing edited code...

CASA [14]: x
  Out[14]: 1

CASA [15]: y
  Out[15]: 3

CASA [16]: z
  Out[16]: 6

Executing Python scripts

Python scripts are simple text files containing lists of commands as if typed at the keyboard. Note: the autoparentheses feature of IPython cannot be used in scripts; that is, you should make sure all function calls have their opening and closing parentheses.

  # file is script.py
  # My script to plot the observed visibilities
  plotxy('ngc5921.ms','uvdist')   # yaxis defaults to amplitude

This can be done by:

  1. execfile: this will execute the script as though you had typed the lines at the CASA prompt.

CASA [5]: execfile 'script.py'
--------> execfile('script.py')

How do I exit from CASA?

Hit CTRL-D, or type at the CASA command line prompt:

CASA <1>: %Exit

and press return.

Tasks in CASA

There is currently a very limited selection of tasks within CASA, providing a streamlined interface to a subset of the CASA applications; this subset should cover the basic reduction sequence for continuum or spectral line analysis.

Please see the tool-based CASA Cookbook for more extensive information on the full suite of functionality.

The tasks available as of the Alpha Release are:

Available tasks: 

Import/Export   Information     Editing         Display/Plot
-------------   -----------     -------         ------------
importarchive   listhistory     flagautocorr    plotants
importasdm      listobs         flagger         plotxy 
importfits                      flagxy          viewer
importuvfits
exportfits
exportuvfits

Calibration     Imaging         Modelling       Utility
-----------     -------         ---------       -------
bandpass (B)    clean           contsub         help task
blcal (M,MF)    feather         setjy           help par.parameter
correct         ft              uvmodelfit      taskhelp
gaincal (G)     invert                          tasklist
fluxscale       mosaic                          browsetable
fringecal (K)                                   split
clearcal
polcal (D)

Image Analysis  Simulation
--------------  ----------
imcontsub       almasim
regrid

Where:

bandpass     : Calculate a bandpass calibration solution
blcal        : Calculate a baseline-based calibration solution
browsetable  : Browse a visibility data set or calibration table
clean        : Calculate a deconvolved image with selected clean algorithm
clearcal     : Re-initialize visibility data set calibration data
contsub      : Continuum fitting and subtraction in the uv plane
correct      : Apply calculated calibration solutions
feather      : Feather together an interferometer and a single dish image in the Fourier plane
flagautocorr : Flag autocorrelations (typically in a filled VLA data set)
fluxscale    : Bootstrap the flux density scale from standard calibrators
fringecal    : Calculate a baseline-based fringe-fitting solution (phase, delay, delay-rate)
ft           : Fourier transform the specified model (or component list)
gaincal      : Calculate gain calibration solutions
importarchive: Convert VLA archive file(s) to a CASA visibility data set (MS)
importasdm   : Convert an ALMA Science Data Model directory to a CASA visibility data set (MS)
importfits   : Convert a FITS image to a CASA image
importuvfits : Convert a UVFITS file to a CASA visibility data set (MS)
invert       : Calculate a dirty image and dirty beam
listhistory  : List the processing history of a data set
listobs      : List the observations in a data set
flagxy       : Plot points for flagging selected X and Y axes
mosaic       : Calculate a multi-field deconvolved image with selected clean algorithm
plotants     : Plot the antenna distribution in local reference frame
plotxy       : Plot points for selected X and Y axes
setjy        : Compute the model visibility for a specified source flux density
split        : Create a new data set (MS) from a subset of an existing data set (MS)
uvmodelfit   : Fit a single component source model to the uv data
viewer       : View an image or visibility data set

What happens if something goes wrong?

First, always check that your inputs are correct; use the help task or help par.parameter_name to review the inputs/output.

You can submit a question/bug/enhancement via the site:

http://bugs.aoc.nrao.edu

Login (or register yourself if you don't have a login/password); click the 'Create New Issue' along the top tabs to file a question/bug/enhancement.

%BEGINFIGURE{label='fig:jira', caption='Issue/Defect Tracking system. Left: http://bugs.aoc.nrao.edu page showing login entry fields. Right: Screen after selecting the "Create New Issue" tab along the top.'}% jira.jpgjira2.jpg%ENDFIGURE%

If something has gone wrong and you want to stop what is executing, then typing 'Control-C' will usually cleanly abort the application.

Measurement Sets

Visibility data are stored in a CASA table known as a Measurement Set (MS). An MS consists of a main table containing the visibility data and associated sub-tables containing auxiliary or secondary information. The first table below (Common Data Selection Parameters) lists data selection parameters which may be used during a typical data reduction session, while the second (Commonly accessed MAIN Table components) identifies the commonly accessed components of the MAIN table in data sets.

Each row in a data column in the MS (e.g. DATA, ALMA_PHAS_CORR, CORRECTED_DATA) contains a matrix of observed complex visibilities at a single time stamp, for a single baseline in a single spectral window. The shape of the data matrix is given by the number of channels and the number of correlations (voltage-products) formed by the correlator for an array. The Measurement Set structure figure below illustrates the MAIN table and sub-tables of an MS. Details about the contents of each column, and if or how they change during the calibration procedure, are described in the Synthesis Calibration chapter.
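
As a schematic illustration (plain Python, not a CASA call), a single DATA cell for a 2-polarization, 63-channel spectral window is a 2 x 63 complex matrix (the sizes here are hypothetical):

  import numpy as np

  n_corr, n_chan = 2, 63    # e.g. RR,LL correlations and 63 channels
  # one MAIN-table row: one time stamp, one baseline, one spectral window
  data_cell = np.zeros((n_corr, n_chan), dtype=complex)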

All CASA data files, including Measurement Sets, are written into the current working directory by default, with each CASA table represented as a separate sub-directory. MS names therefore need only comply with UNIX file or directory naming conventions, and can be referred to from within CASA directly, or via full path names.

%BEGINTABLE{label="tbl:tabselect" caption="Common Data Selection Parameters"}%
Parameter Contents
ANTENNA1 First antenna in baseline
ANTENNA2 Second antenna in baseline
FIELD_ID Field (source no.) identification
DATA_DESC_ID Spectral window number, polarization identifier pair (IF no.)
ARRAY_ID Subarray number
OBSERVATION_ID Observation identification
POLARIZATION_ID Polarization identification
SCAN_NUMBER Scan number
TIME Integration midpoint time
UVW UVW coordinates
%ENDTABLE%

%BEGINTABLE{label="tbl:maintable" caption="Commonly accessed MAIN Table components for ALMA data"}%
Column Format
  Comments
DATA Complex(N_c, N_f)
  Complex visibility matrix
  ~=ALMA_PHASE_CORR by default
WEIGHT_SPECTRUM Float(N_c)
  Weight for whole data matrix
ALMA_PHASE_CORR Complex(N_c, N_f)
  On-line phase corrected complex visibility matrix
  (Not in VLA data)
ALMA_NO_PHAS_CORR Bool(N_c, N_f)
  Complex visibility matrix that has not been phase corrected
  (Not in VLA data)
ALMA_PHAS_CORR_FLAG_ROW Bool(N_c, N_f)
  Flag to use phase-corrected data or not, Default=F
  (not in VLA data)
CORRECTED_DATA Complex(N_c, N_f)
  Corrected data created by calibrater or imager tools
MODEL_DATA Complex(N_c, N_f)
  Model data created by calibrater or imager tools
IMAGING WEIGHT Float(N_c)
  created by calibrater or imager tools
FLAG Bool(N_c, N_f)
  cumulative data flags
%ENDTABLE%

*Note: when you examine table entries like FIELD_ID or DATA_DESC_ID with the table browser, you will see 0-based numbers.*

%BEGINFIGURE{label="fig:tablekeywords" caption="The structure of a Measurement Set. The tables which compose a Measurement Set named ngc5921.ms on disk (Left); a few of the MAIN table columns in a Measurement Set (Right)."}% jbrowser ms.jpgjbrowser1.jpg%ENDFIGURE%

%BEGINFIGURE{label="fig:matplotlib" caption="matplotlib plotter. The buttons on the lower left are: 1,2,3) Home, Back and Forward. Click to navigate between previously defined views (akin to web navigation), 4) pan. Click and drag to pan to a new position, 5) zoom. Click to define a rectangular region for zooming, 6) Subplot Configuration. Click to configure the parameters of the subplot and spaces for the figures, 7) {\bf Save*. Click to launch a file save dialog box. The cursor readout is on the bottom right."}% B0319 uvcoverage.jpg%ENDFIGURE%

Basic Calibration Fundamentals

Calibration and imaging responsibilities are divided between the calibrater and imager tools. The calibrater tool handles visibility-plane calibration, while the imager tool deals with image formation, deconvolution, and image-plane calibration.

In a typical synthesis observation, a number of calibrators of known structure (usually point sources) and at least one of known flux density are observed in addition to the scientific target(s), in order to obtain a measure of the systematic properties of the instrument and, if relevant, the corrupting media above it. The effects of these systematic properties are then removed from the data for the scientific target. This technique is often referred to as cross-calibration, and its success depends implicitly on the relationship between: 1) the true brightness distribution of the observed radio sources (what we seek); 2) the formal response of the instrument and its variety of imperfections of varying importance (what we are calibrating); and 3) the data that are actually recorded (what we have measured).

The Measurement Equation, originally defined by Hamaker, Bregman, & Sault (1996, A&AS, 117, 137; 1996, A&AS, 117, 149), is an explicit description of this relationship. The Measurement Equation (in the visibility plane) for a spectral line polarization calibration is given by:

\vec{V}_{ij} = B_{ij} \, G_{ij} \, D_{ij} \, P_{ij} \, T_{ij} \, \vec{V}_{ij}^{\mathrm{IDEAL}}

where:

  • \vec{V}_{ij} = the cross-correlations between two polarizations for each of two feeds (i,j) which characterize an individual baseline (i.e. the measured visibilities).
  • \vec{V}_{ij}^{\mathrm{IDEAL}} = the visibilities that would be measured by an ideal interferometer (no instrumental errors or calibration effects); these are directly related to the Fourier transform of the true polarized sky brightness, \vec{I} = (I,Q,U,V).
  • T_{ij} = complex gain effects due to the troposphere which are polarization-independent, e.g. high-frequency opacity corrections.
  • P_{ij} = parallactic angle rotation correction.
  • D_{ij} = instrumental polarization response. "D-terms" describe the polarization leakage between feeds (e.g. how much the R-polarized feed picked up L-polarized emission, and vice versa).
  • G_{ij} = polarization-dependent complex gain effects, which include the scale factor for absolute flux density calibration and may include phase and amplitude corrections due to changes in the atmosphere (in lieu of T_{ij}).
  • B_{ij} = bandpass response.

Note that the Measurement Equation is a matrix equation: \vec{V}_{ij} and \vec{V}_{ij}^{\mathrm{IDEAL}} are 4-vectors with elements for each correlator product, and terms like B_{ij} are 4\times 4 matrices representing products of antenna pairs that 'corrupt' the correlator products. CASA hardwires the Measurement Equation order and all of the specific algebra associated with the individual terms. The equation written above contains only the most commonly considered calibration terms; many others are possible, and several additional types are described in the Synthesis Calibration chapter of this cookbook.
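
Schematically, in the Hamaker-Bregman-Sault formalism each baseline matrix is built from antenna-based 2\times 2 Jones matrices via an outer (Kronecker) product; taking the gain term as an example (a sketch of the formalism, not a CASA expression):

G_{ij} = G_i \otimes G_j^{*}

so that a pair of 2\times 2 antenna matrices combines into the 4\times 4 matrix that acts on the 4-vector of correlator products.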

In practical calibration, it is often best to begin the calibration process by determining solutions for those terms which affect the data most. Thus, the user would normally determine gain solutions (G) first (applying pre-computed parallactic angle corrections P if doing polarization calibration), then bandpass solutions (B) and/or polarization solutions (D). Errors which are polarization-independent (T) can be determined at any point in the calibration process. T has nearly identical form to G but is located at a different point in the Measurement Equation. A practical advantage to this factorization is the ability to store polarization-independent effects which are characterized by a certain timescale or a particular parametrization (e.g., tropospheric opacity) as a separate calibration factor, thus isolating them from electronic gain effects which may vary on a different timescale and/or with different systematics. Although most data reduction sequences will probably proceed in this order, the user has complete freedom and flexibility to determine the order in which calibration effects are solved for. Self-calibration, the process by which an improved model of the target source is used to obtain improved solutions for the calibration, is also not limited to just the electronic gain (G) corrections, as has been traditionally the case. Improved solutions for, say, the instrumental polarization (D) may be obtained by self-calibration as well. In general, decision-making along the calibration path is a process of determining which effects are dominating the errors in the data and deciding how best to converge upon the optimal set of calibration solutions (for all relevant components) and source models which best describe the observed data.

The observed visibilities \vec{V}_{ij} are stored in the DATA column in the MAIN table of the MS; these are the data as loaded by a filler tool (e.g. fitstoms). Associated with the DATA column are scratch columns which hold the most recent version of the calibrated data (CORRECTED_DATA) and the latest visibility-plane representation of the source or field model, \vec{V}_{ij}^{\mathrm{IDEAL}} (MODEL_DATA). The latter two columns are filled in by the calibrater (cb) and imager (im) tools, respectively. The actual calibration information is stored in separate calibration tables which are written to the directory in which you started CASA. The observed DATA column does not change during reduction, but the scratch columns do. When plotting the data using a tool such as msplot, the data sub-types (observed, corrected, and model) can be selected individually.

Summary of Data Reduction Steps

Listed below are the basic steps for typical continuum and spectral line data reduction sessions which solve for and apply the standard complex gain (G) and the relative bandpass (B). The basic reduction sequence is:

  1. Translate the data into a Measurement Set (the CASA internal data format).
    • VLA data: Data may be translated from either a VLA archive file using the importarchive task, or from a UVFITS file (e.g., as written by AIPS) using the importuvfits task.
  2. Examine and edit calibrator data using the flagger and flagautocorr tasks for automatic data editing, and the flagxy task for interactive editing.
  3. Calibrate the data using the calibration tasks: gaincal, bandpass, fluxscale, setjy, correct, blcal, fringecal. As an example, VLA, PdBI and BIMA calibration differ as described below:
    • High SNR data calibration: VLA data generally has a high signal-to-noise ratio (SNR); thus, one can determine the gain calibration with relative ease. The calibration solutions for each observation have high fidelity, and the solution transfer to the target data is a straightforward interpolation between points. Calibration procedures applicable for VLA data will be appropriate for ALMA 3 and 1 mm data and some 3 mm BIMA data. The specific steps required to calibrate high SNR data for each frequency group are:
      • Set the flux density of the absolute flux calibrators (e.g. the initial calibration model).
      • Determine the complex gain (G) calibration (phase & amplitude); apply parallactic angle (P) correction if doing polarization.
      • Determine a bandpass calibration (B).
      • Obtain the instrumental polarization calibration (D).
      • Transfer the flux density scale derived from the absolute flux calibrator(s).
      • Apply all calibrations to the target source data.
      • Split out calibrated source data.

    • Low SNR data calibration: PdBI & BIMA data can have a relatively low SNR, and individual gain solutions in phase and amplitude can have significant errors. Thus, solutions are fitted with a polynomial before application to the target data. Further, to minimize errors in the determination of the solutions, one first obtains a rough phase correction so that the data can be averaged in time to increase the SNR before determining the bandpass solution. Once the bandpass is determined, the data can be averaged in frequency to get a good phase solution. Then the absolute flux scale is determined and an amplitude calibration is done. Using this series of procedures, error propagation along the calibration path is minimized. Calibration procedures applicable for PdBI/BIMA data will be appropriate for ALMA submillimeter data. Thus:
      • Get a rough time-dependent phase calibration to be used in next step.
      • Determine a bandpass calibration (B) (apply a rough time-dependent phase calibration to average the data in time before determining B). Note: BIMA data typically does not require bandpass calibration.
      • Determine a phase-only gain calibration (apply the B calibration).
      • Determine an absolute flux density calibration (apply calibrations obtained so far).
      • Determine a residual amplitude calibration (apply calibrations obtained so far).
      • Apply all calibrations to the target source data.
      • Split out calibrated source data.

  4. Examine and edit the calibrated target data.
  5. Subtract the continuum emission from the line emission in the uv-plane, if necessary.
  6. Image the target source.

Loading Data into CASA

VLA: Filling data from VLA archive format

VLA data in on-line archive format are read into CASA from disk using the importarchive task.

importarchive(archivefiles=None, vis=None, bandname=None, freqtol=None)
    Convert VLA archive file(s) to a CASA visibility data set (MS):
    
    Keyword arguments:
    archivefiles -- Name of input VLA archive file(s)
            default: 
            example: archivefiles=['AP314_A950519.xp1','AP314_A950519.xp2']
    vis -- Name of output visibility file (MS)
            default: ; example: vis='NGC7538.ms'
    bandname -- VLA Frequency band
            default:  - all bands; example: bandname='K'
            
    #projectname -- Observing project name
    #       default: ; example='AP314'
    freqtol -- Tolerance in frequency shift in naming spectral windows
            default: channel width of current spectral window in Hz
            example: 150000.0

Note: autocorrelations are filled automatically into the data set. Autocorrelation data is not needed for standard interferometric data; furthermore, the imaging routine will try to image the autocorrelation data (it assumes it is single-dish data), which will swamp any real signal. Thus, it is necessary to flag the autocorrelation data at some point before imaging.
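
For example, to flag the autocorrelations in the data set created in the example below:

flagautocorr('ngc7538.ms')        # flag all autocorrelation data in the MS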

Downloaded archive format data are available with names like: AS500_A030501.xp1, AS500_A030501.xp2, ..., AS500_A030501.xpN.

The importarchive task allows selection on frequency band. Suppose that you have 1.3 cm line observations in 'K' band and you have copied the archive data to disk. To run importarchive on the archive files:

importarchive(archivefiles=['/home/rohir2/jmcmulli/ALMATST1/Data/N7538/AP314_A950519.xp1',
'/home/rohir2/jmcmulli/ALMATST1/Data/N7538/AP314_A950519.xp2',
'/home/rohir2/jmcmulli/ALMATST1/Data/N7538/AP314_A950519.xp3'],vis='ngc7538.ms',bandname='K',
freqtol=10000000.0)

Filling data from UVFITS format

For UVFITS format, use importuvfits:

importuvfits(fitsfile=None, vis=None)
    Convert a UVFITS file to a CASA visibility data set (MS):
    
    Keyword arguments:
    fitsfile -- Name of input UV FITS file
            default: <unset>; example='3C273XC1.fits'
    vis -- Name of output visibility file (MS)
            default: <unset>; example: vis='3C273XC1.ms'
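
For example, using the file names from the in-line help:

importuvfits(fitsfile='3C273XC1.fits',      # input UVFITS file (e.g., as written by AIPS)
             vis='3C273XC1.ms')             # output visibility data set (MS)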

Determine opacity corrections for VLA data

For VLA data, zenith opacity can be measured at the frequency and during the time observations are made using a VLA tipping scan in the observe file. Historical tipping data are available at:

http://www.vla.nrao.edu/astro/calib/tipper

Choose a year, and click Go to get a list of all tipping scans that have been made for that year.

If a tipping scan was made for your observation, then select the appropriate file. Go to the bottom of the page and click on the button that says Press here to continue. The results of the tipping scan will be displayed. Go to the section called 'Overall Fit Summary' to find the fit quality and the fitted zenith opacity in percent. If the zenith opacity is reported as 6%, then the actual zenith opacity is 0.060 nepers (this is the input needed for CASA corrections).
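
For example, a reported zenith opacity of 6% converts to 0.060 nepers; this value is passed to the calibration tasks through the tau parameter, as described in the opacity corrections section below (the file names here are illustrative):

tau = 6.0/100.0                                 # convert the reported percentage to nepers
gaincal('data.ms','cal.G0',solint=0.,refant=11,tau=tau)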

Filling ALMA Science Data Model (ASDM) observations

The importasdm task will fill an ASDM into a CASA visibility data set (MS). Currently there are no options for filling the data (you get the whole data set!).

CASA <1>: importasdm '/home/basho3/jmcmulli/ASDM/ExecBlock3'
--------> importasdm('/home/basho3/jmcmulli/ASDM/ExecBlock3')
Parameter: asdm is: /home/basho3/jmcmulli/ASDM/ExecBlock3 and has type .
Taking the dataset /home/basho3/jmcmulli/ASDM/ExecBlock3 as input.
Time spent parsing the XML medata :1.16 s.
The measurement set will be filled with complex data
About to create a new measurement set '/home/basho3/jmcmulli/ASDM/ExecBlock3.ms'
The dataset has 4 antennas...successfully copied them into the measurement set.
The dataset has 33 spectral windows...successfully copied them into the measurement set.
The dataset has 4 polarizations...successfully copied them into the measurement set.
The dataset has 41 data descriptions...successfully copied them into the measurement set.
The dataset has 125 feeds...successfully copied them into the measurement set.
The dataset has 2 fields...successfully copied them into the measurement set.
The dataset has 0 flags...
The dataset has 0 historys...
The dataset has 1 execBlock(s)...successfully copied them into the measurement set.
The dataset has 12 pointings...successfully copied them into the measurement set.
The dataset has 3 processors...successfully copied them into the measurement set.
The dataset has 72 sources...successfully copied them into the measurement set.
The dataset has 3 states...
The dataset has 132 calDevices...
The dataset has 72 mains...
Processing row # 0 in MainTable
Entree ds getDataCols
About to clear
About to getData
About to new VMSData
Exit from getDataCols
ASDM Main table row #0 transformed into 40 MS Main table rows
Processing row # 1 in MainTable
Entree ds getDataCols
About to clear
About to getData
About to new VMSData
Exit from getDataCols
ASDM Main table row #1 transformed into 40 MS Main table rows
...
ASDM Main table row #71 transformed into 40 MS Main table rows

...successfully copied them into the measurement set.
About to flush and close the measurement set.
Overall time spent in ASDM methods to read/process the ASDM Main table : cpu = 5.31 s.
Overall time spent in AIPS methods to fill the MS Main table : cpu = 1.3

Data Examination and Editing

Plot the Data

pylab - matplotlib library interface

The principal tool for X-Y plots of visibility data is plotxy. Plots are displayed using the matplotlib plotting library which is also provided to the user at the command line through the pl tool.

pylab functions allow one to annotate, alter, or add to any plot displayed in the matplotlib plotter. matplotlib commands can be found at:

http://matplotlib.sourceforge.net/pylab_commands.html

pl.clf() is perhaps the most useful native command as it will clear the contents of the plotter frame.

A quick synopsis of relevant commands for altering plots is given below.

pl.axhline - draw a horizontal line across axes
pl.axvline - draw a vertical line across axes
pl.axhspan - draw a horizontal bar across axes
pl.axvspan - draw a vertical bar across axes
pl.clf     - clear the current figure
pl.close   - close the plotter (subsequent plots will reopen a plotter window)
pl.ion     - turn interaction mode on (default - this indicates that any plotting command
             will be seen immediately after the command)
pl.ioff    - turn off interaction mode (a 'show' command is required to see commands after
             this mode has been enabled)
pl.savefig - save the current figure
pl.subplot - make a subplot (numrows, numcols, axesnum)
pl.text    - add some text at location (x,y)
pl.title   - add/change the title
pl.xlim    - set/get the xlimits
pl.ylim    - set/get the ylimits
pl.xlabel  - add/change the xlabel
pl.ylabel  - add/change the ylabel

In addition, there are a range of mathematical functions provided (trigonometric, matrix algebra, logs, etc). Again, see the pylab documentation for help.
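
For example, after a plot has been drawn, one might annotate the current figure and save it to disk (the labels, channel value, and file name are illustrative):

pl.title('NGC 5921: amplitude vs. channel')   # put a title above the plot
pl.xlabel('Channel')                          # label the x axis
pl.axvline(31)                                # mark a channel of interest with a vertical line
pl.savefig('ngc5921_ampchan.png')             # save the current figure to disk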

%BEGINFIGURE{label="fig:matplotlib" caption="matplotlib plotter. The buttons on the lower left are: 1,2,3) Home, Back and Forward. Click to navigate between previously defined views (akin to web navigation), 4) pan. Click and drag to pan to a new position, 5) zoom. Click to define a rectangular region for zooming, 6) Subplot Configuration. Click to configure the parameters of the subplot and spaces for the figures, 7) {\bf Save*. Click to launch a file save dialog box. The cursor readout is on the bottom right."}% B0319 uvcoverage.jpg%ENDFIGURE%

plotxy is set up as:

    Plot points for selected X and Y axes:
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: <unset>; example: vis='ngc5921.ms'
    xaxis -- Visibility file (MS) data to plot along the x-axis
            <Options: 'azimuth','baseline','channel','elevation',
            'hourangle','parallacticangle','time','u','uvdist','x'>
    yaxis -- Visibility data to plot along the y-axis
            <Options: 'amp','phase','v'>
    datacolumn -- Visibility file (MS) data column.
            default: 'corrected'; example: datacolumn='model'
            <Options: 'data' (raw),'corrected','model','weight'>
    antennaid -- Antenna index identifier
            default: -1 (all); example: antennaid=[13]
    spwid -- Spectral window index identifier
            default: -1 (all); example: spwid=0
    fieldid -- Field index identifier
            default: -1 (all); example: fieldid=0
    field -- Field name(s); this will use a minimum match on the strings
            that you provide; to use multiple strings, enter a single string
            with spaces separating the field names
            default: ''(will use fieldid); example: field='133',field='133 N5921'
    timerange -- Time range selection
    correlations -- Correlations to plot
            default: '' (all); example: correlations='RR'
            <Options: '','RR','LL','RR LL','XX','YY','XX YY'>
    --- Spectral Information ---
    nchan -- Number of channels to select
            default: -1 (all); example: nchan=55
    start -- Start channel (0-relative)
            default=0; example: start=5
    width -- Channel width (value>0 indicates averaging)
            default=1; example: width=5
    --- Plot Options ---
    subplot -- Panel number on the display screen
            default: 111 (full screen display); example:
               subplot=221 (upper left panel of 4 on frame)
               subplot=223 (lower left panel of 4 on frame)
               subplot=212 (lower panel of 2 on frame)
            <Options: subplot=yxn; where y=number of rows,
            x=number of columns, n=panel number (numbered starting
            at 1 from top left)>
    overplot -- Overplot these values on current plot (if possible)
            default: False; example: overplot=True
    plotsymbol -- pylab plot symbol
            default: ','; example: plotsymbol='bo' (blue circles)
            <Options: '-' = solid line
                      'o' = filled circles
                      '.' = points
                      '-.ro' = combined dash dot line with red circles
    title -- Plot title (above plot)
            default: ''; example: title='This is my title'
    xlabels -- Label for x axis
            default: ''; example: xlabels='X Axis'
    ylabels -- Label for y axis
            default: ''; example: ylabels='Y Axis'
    iteration -- Plot each value of this data axis on a separate plot
            default: '' (no iteration); example: iteration='antenna'
            <Options: 'antenna1','field_id','baseline','time'>
    fontsize -- Font size for labels
            default: 1; example: fontsize=2
    windowsize -- Window size
            default: 1.0; example: windowsize=0.5

NOTE: Several tasks have both a 'fieldid' and a 'field' parameter. This allows selection by either index number or name. For 'field', the selection is a minimum match on a space-separated list of names. You can use selectfield(vis=filename,minstring='string') to test which field names and indices you are matching. Similarly, selectantenna(vis=filename,minstring='antname') will give you this information for antenna names.
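
For example (the data set and match strings are illustrative):

selectfield(vis='ngc5921.ms',minstring='N5921')    # report field names/indices matching 'N5921'
selectantenna(vis='ngc5921.ms',minstring='1')      # report antenna names/indices matching '1'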

plotoptions

The plotoptions method works in concert with the native matplotlib functionality to enable flexible representations of data displays. In particular, the arguments overplot, subplot, and plotsymbol enable visual comparisons of data results for analysis during the reduction process.

The plotxy plotoptions argument plotsymbol

The plotsymbol parameter defines both the line style or symbol for the data being drawn and the color; from the matplotlib online documentation (e.g., type pl.plot?):

    The following line styles are supported:
        -     : solid line
        --    : dashed line
        -.    : dash-dot line
        :     : dotted line
        .     : points
        ,     : pixels
        o     : circle symbols
        ^     : triangle up symbols
        v     : triangle down symbols
        <     : triangle left symbols
        >     : triangle right symbols
        s     : square symbols
        +     : plus symbols
        x     : cross symbols
        D     : diamond symbols
        d     : thin diamond symbols
        1     : tripod down symbols
        2     : tripod up symbols
        3     : tripod left symbols
        4     : tripod right symbols
        h     : hexagon symbols
        H     : rotated hexagon symbols
        p     : pentagon symbols
        |     : vertical line symbols
        _     : horizontal line symbols
        steps : use gnuplot style 'steps' # kwarg only
    The following color strings are supported
        b  : blue
        g  : green
        r  : red
        c  : cyan
        m  : magenta
        y  : yellow
        k  : black
        w  : white
    Line styles and colors are combined in a single format string, as in
    'bo' for blue circles.

Examples are shown in the following sections.

iteration

There are currently four iteration options available: 'antenna1','time','baseline', and 'field_id'. If one of these options is chosen, the data will be split into separate plot displays for each value of the iteration axis (e.g., for the VLA, the 'antenna1' option will get you 27 displays, one for each antenna).

An example iteration session:

CASA <1>: plotxy('ngc5921.ms','channel',iteration='antenna1')
Reading data...
Reading data...
Reading data...
Number of points being plotted : 39186
Python Plotting time :  1.92055296898  sec.
Number of points being plotted : 37674
Python Plotting time :  1.92539596558  sec.
Number of points being plotted : 36162
Python Plotting time :  2.1625828743  sec.
Type "n" to show the next set of plots 
 Type "s" to stop iterating 
 Iteration option: 

%BEGINFIGURE{label="fig:msplot_iteration" caption="plotxy iteration plot: The first set of plots from the example. Each time you press 'n', you will get the next series of plots. Note that the subplot parameter is currently ignored and it defaults to a 3x1 panel display."}% msplot iteration.jpg%ENDFIGURE%

subplot

The plotxy plotoptions argument subplot takes three numbers. The first is the number of y panels (stacking vertically), the second is the number of x panels (stacking horizontally), and the third is the number of the panel you want to draw into. For example, subplot=212 will draw into the lower of two panels on the figure.

An example use of subplot capability (Figure \ref{multiplot}):

plotxy('ngc5921.ms','channel',                  # plot channels for the ngc5921.ms data set
       plotsymbol='go',                         # use green circles
       subplot=211)                             # plot to the top of two panels
plotxy('ngc5921.ms','u',                        # plot uv coverage for ngc5921.ms data set
       subplot=224)                             # plot to the fourth panel (lower right) in 2x2 grid
plotxy('ngc5921.ms','x',                        # plot antenna plot for ngc5921.ms data set
       subplot=223)                             # plot to the lower left in a 2x2 grid
                                                # NOTE: You can change the gridding and panel size
                                                # by manipulating the ny x nx grid.

%BEGINFIGURE{label="fig:multiplot" caption="Multi-panel display of visibility versus channel (Top), antenna array configuration (bottom left) and the resulting uv coverage (bottom right). The script to make this plot is three commands: 1) plotxy('ngc5921.ms',xaxis='channel',fieldid=0,subplot=211,plotsymbol='go'), 2) plotxy(xaxis='x',subplot=223,plotsymbol='bo'), 3) plotxy(xaxis='u',subplot=224,plotsymbol='bo') "}% ngc5921 multiplot.jpg%ENDFIGURE%

Browse the Data

The browsetable task is available for viewing the data directly (a CASA MeasurementSet, calibration table, or CASA image).

    Browse a table (visibility data set, calibration table, or image):
    
    Keyword arguments:
    tablename -- Name of table file on disk (MS, calibration table, image)
            default: ; example: tablename='ngc5921.ms'

An example is:

CASA <2>: browsetable 'ngc5921.ms'
--------> browsetable('ngc5921.ms')

%BEGINFIGURE{label="fig:tablebrowser1" caption="browsetable: The browser displays the main table within a frame. Hit the expand button to fill the browser frame (this has been done for this figure). You can scroll through the data (x=columns of the MAIN table, and y=the rows) or select a specific page or row as desired."}% tablebrowser2.jpg%ENDFIGURE%

%BEGINFIGURE{label="fig:tablebrowser3" caption="browsetable: You can use the Menu option View to look at other tables within an MS. If you select on View:Table Keywords you get the image displayed. You can then select on a table to view its contents."}% tablebrowser3.jpg%ENDFIGURE%

%BEGINFIGURE{label="fig:tablebrowser4" caption="browsetable: View the SOURCE table of the MS"}% tablebrowser4.jpg%ENDFIGURE%

Data Flagging

flagdata task

flagdata will flag the visibility data set based on the specified data selections; most of the selection information can be obtained from a run of the listobs task (with or without verbose=True). Currently you can select:

  • antenna identifiers (antennaid)
  • baseline pairs (baseline)
    • baseline expects a baseline composed of a single pair list [ant1,ant2]
  • channels (chans)
    • chans will take a list of channels (e.g., [0,1,2,55,56,57]) or anything that will produce such a list of integer channel values (e.g., range(0,4)+range(55,62)).
  • clip data (uses: clipfunction, clipcorr, and clipminmax)
    • clipfunction can be: 'ABS', 'ARG', 'RE', 'IM', or 'NORM'
    • clipcorr can be: 'I','XX','YY','RR','LL'
    • clipminmax is a list containing the minimum and maximum values for the data, outside of which the data will be flagged.
  • field identifiers (fieldid)
  • spectral window identifiers (spwid)
  • time range (timerange)
    • example: timerange=['26-APR-2003/02:45:00.0','26-APR-2003/02:49:00.0']
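
For example, to flag a single baseline over a short time range (the values are illustrative):

flagdata(vis='ngc5921.ms',
         baseline=[0,1],                    # flag the baseline between antennas 0 and 1
         timerange=['26-APR-2003/02:45:00.0','26-APR-2003/02:49:00.0'])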

Flag Antenna/Channels

plotxy('ngc5921.ms','channel',iteration='antenna1')
restore()
flagdata(vis='ngc5921.ms',antennaid=[0],chans=[10,11,12,13,14,15])
restore()
plotxy('ngc5921.ms','channel',iteration='antenna1')

%BEGINFIGURE{label="fig:flagdata_antchan" caption="flagdata: Example showing before and after displays using a selection of one antenna and a range of channels; note that each invocation of the flagdata task represents a cumulative selection, i.e., running antennaid=[0] will flag all data with antenna 0, while antennaid=[0],chans=[10,11,12,13,14,15] will flag only those channels on antenna 1."}% msplot channelants.pngmsplot flagantchan.png%ENDFIGURE%

Clip

plotxy('ngc5921.ms','uvdist')
flagdata(vis='ngc5921.ms',clipcorr='LL',clipminmax=[0.0,1.6])
plotxy('ngc5921.ms','uvdist')

%BEGINFIGURE{label="fig:flagdata" caption="flagdata: Flagging example using the clip facility."}% msplot clipbefore.pngmsplot clipbefore.png%ENDFIGURE%

Interactive flagging

flagxy

Interactive flagging (i.e., "see it - flag it") is possible on the X-Y displays of the data.

The key task is flagxy. flagxy enables the use of the matplotlib cursor to mark a rectangular region (if no region is set), or it can flag a specified region=[xmin,xmax,ymin,ymax]. The diskwrite parameter determines whether the flags are written to disk or just displayed. The short help on flagxy is listed below; notice that it is identical to the plotxy method with two additional parameters: 1) region, which sets the [xmin,xmax,ymin,ymax] for non-interactive region flagging (set this to [0.0] for interactive flagging); 2) diskwrite, which specifies whether the flags will be just shown or written to disk and saved. In addition, when region=[0.0] for interactive use, you can set multiple region boxes; typing either 's' for show or 'w' for write in the command window will then take the appropriate action.

    Plot points for flagging selected X and Y axes; if region is not specified, it enables interactive setting of flag boxes using the mouse. Use the task flagdata to write these flags to disk.
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: <unset>; example: vis='ngc5921.ms'
    xaxis -- Visibility file (MS) data to plot along the x-axis
            <Options: 'azimuth','baseline','channel','elevation',
            'hourangle','parallacticangle','time','u','uvdist','x'>
    yaxis -- Visibility data to plot along the y-axis
            <Options: 'amp','phase','v'>
    datacolumn -- Visibility file (MS) data column.
            default: 'CORRECTED_DATA'; example: datacolumn='DATA'
            <Options: 'DATA' (raw),'CORRECTED_DATA','MODEL'>
    antennaid -- Antenna index identifier
            default: -1 (all); example: antennaid=[13]
    spwid -- Spectral window index identifier
            default: -1 (all); example: spwid=0
    fieldid -- Field index identifier
            default: -1 (all); example: fieldid=0
    timerange -- Time range selection
    correlations -- Correlations to plot
            default: '' (all); example: correlations='RR'
            <Options: '','RR','LL','RR LL','XX','YY','XX YY'>
    --- Spectral Information ---
    nchan -- Number of channels to select
            default: -1
    start -- Start channel (0-relative)
    width -- Channel width (value>0 indicates averaging)
    --- Plot Options ---
    subplot -- Panel number on the display screen
            default: 111 (full screen display); example:
               subplot=221 (upper left panel of 4 on frame)
               subplot=223 (lower left panel of 4 on frame)
               subplot=212 (lower panel of 2 on frame)
            <Options: subplot=yxn; where y=number of rows,
            x=number of columns, n=panel number (numbered starting
            at 1 from top left)>
    overplot -- Overplot these values on current plot (if possible)
            default: False; example: overplot=True
    plotsymbol -- pylab plot symbol
            default: ','; example: plotsymbol='bo' (blue circles)
            <Options: '-' = solid line
                      'o' = filled circles
                      '.' = points
                      '-.ro' = combined dash dot line with red circles
    title -- Plot title (above plot)
            default: ''; example: title='This is my title'
    xlabels -- Label for x axis
            default: ''; example: xlabels='X Axis'
    ylabels -- Label for y axis
            default: ''; example: ylabels='Y Axis'
    iteration -- Plot each value of this data axis on a separate plot
            default: '' (no iteration); example: iteration='antenna'
            <Options: 'antenna', 'baseline','time'>
    fontsize -- Font size for labels
            default: 1; example: fontsize=2
    windowsize -- Window size
            default: 1.0; example: windowsize=0.5
    --- Flag Options ---
    region -- Flagging region specified as [xmin,xmax,ymin,ymax]
            default: [0.0] = mark region interactively
            example: region=[53,60,0.0,2.0]
    diskwrite -- Write flags to disk or just display them
            default: False; example: diskwrite=True

%BEGINFIGURE{label="fig:markflags" caption="Plot of amplitude versus uv distance with both the cursor determined and command line specified flagging regions removed (Right). Example scripts would be: 1) plotxy(xaxis='uvdist',plotsymbol='b,',subplot=111) #plot the data first, 2) flagxy(region=[400,450,1.0,1.2],datacolumn='DATA') #plot the data again and show the flags in a region that you've specified, 3) flagxy(region=[0.0]) # interactively flag the data with the cursor. "}% markflags.jpgmarkflags2.jpg%ENDFIGURE%

Synthesis Calibration

Basic Synthesis Calibration Fundamentals

The visibilities measured by an instrument must be calibrated before formation of an image. This is because the wavefronts received and processed by the observational hardware have been corrupted by a variety of effects. These include the effects of transmission through the atmosphere, the imperfect details of reception by antennas, the effects of conversion to an amplified electronic (digital) signal and transmission through the signal processing system, and the effects of formation of the cross-power spectra by a correlator. Calibration is the process of reversing these effects to arrive at corrected visibilities which resemble as closely as possible the visibilities that would have been measured in vacuum by a perfect system. The subject of this chapter of the cookbook is the determination of these effects by using the visibility data itself.

The relationship between the observed and ideal (desired) visibilities on the baseline between antennas i and j may be expressed by the Measurement Equation:

\vec{V}_{ij}~=~J_{ij}~\vec{V}_{ij}^{\mathrm{~IDEAL}}

where \vec{V}_{ij} represents the observed visibility, \vec{V}_{ij}^{\mathrm{~IDEAL}} represents the corresponding ideal visibilities, and J_{ij} represents the accumulation of all corruptions affecting baseline ij. The visibilities are indicated as vectors spanning the four correlation combinations which can be formed from dual-polarization signals. These four correlations are related directly to the Stokes parameters which fully describe the radiation. The J_{ij} term is therefore a 4\times4 matrix.

Most of the effects contained in J_{ij} (indeed, the most important of them) are antenna-based, i.e., they arise from measurable physical properties of (or above) individual antenna elements in a synthesis array. Thus, adequate calibration of an array of N_{ant} antennas forming N_{ant}(N_{ant}-1)/2 baseline visibilities is usually achieved through the determination of only N_{ant} factors, such that J_{ij} = J_i \otimes J_j^{*}. For example, the 27-antenna VLA yields 351 baseline visibilities which are calibrated by determining only 27 antenna-based factors. For the rest of this chapter, we will usually assume that J_{ij} is factorable in this way, unless otherwise noted.

As implied above, J_{ij} may also be factored into the sequence of specific corrupting effects, each having their own particular (relative) importance and physical origin, which determines their unique algebra. Including the most commonly considered effects, the Measurement Equation can be written:

\vec{V}_{ij}~=~M_{ij}~B_{ij}~G_{ij}~D_{ij}~E_{ij}~P_{ij}~T_{ij}~\vec{V}_{ij}^{\mathrm{~IDEAL}}

where:

  • T_{ij}~=~ Polarization-independent multiplicative effects introduced by the troposphere, such as opacity and path-length variation.
  • P_{ij}~=~ Parallactic angle, which describes the orientation of the polarization coordinates on the plane of the sky. This term varies according to the type of the antenna mount.
  • E_{ij}~=~ Effects introduced by properties of the optical components of the telescopes, such as the collecting area's dependence on elevation.
  • D_{ij}~=~ Instrumental polarization response. "D-terms" describe the polarization leakage between feeds (e.g. how much the R-polarized feed picked up L-polarized emission, and vice versa).
  • G_{ij}~=~ Electronic gain response due to components in the signal path between the feed and the correlator. This complex gain term G_{ij} includes the scale factor for absolute flux density calibration, and may include phase and amplitude corrections due to changes in the atmosphere (in lieu of T_{ij}). These gains are polarization-dependent.
  • B_{ij}~=~ Bandpass (frequency-dependent) response, such as that introduced by spectral filters in the electronic transmission system.
  • M_{ij}~=~ Baseline-based correlator (non-closing) errors. By definition, these are not factorable into antenna-based parts.

Note that the terms are listed in the order in which they affect the incoming wavefront (G and B represent an arbitrary sequence of such terms depending upon the details of the particular electronic system). Note that M differs from all of the rest in that it is not antenna-based, and thus not factorable into terms for each antenna.

As written above, the measurement equation is very general; not all observations will require treatment of all effects, depending upon the desired dynamic range. E.g., bandpass need only be considered for continuum observations if observed in a channelized mode and very high dynamic range is desired. Similarly, instrumental polarization calibration can usually be omitted when observing (only) total intensity using circular feeds. Ultimately, however, each of these effects occurs at some level, and a complete treatment will yield the most accurate calibration. Modern high-sensitivity instruments such as ALMA and EVLA will likely require a more general calibration treatment than similar observations with older arrays in order to reach the advertised dynamic ranges on strong sources.

In practice, it is usually far too difficult to adequately measure most calibration effects absolutely (as if in the laboratory) for use in calibration. The effects are usually far too changeable. Instead, the calibration is achieved by making observations of calibrator sources on the appropriate timescales for the relevant effects, and solving the measurement equation for them using the fact that we have N_{ant}(N_{ant}-1)/2 measurements and only N_{ant} factors to determine (except for M which is only sparingly used). (Note: By partitioning the calibration factors into a series of consecutive effects, it might appear that the number of free parameters is some multiple of N_{ant}, but the relative algebra and timescales of the different effects, as well as the multiplicity of observed polarizations and channels, compensate, and it can be shown that the problem remains well-determined until, perhaps, the effects are direction-dependent within the field of view. Limited solvers for such effects are under study; the calibrater tool currently only handles effects which may be assumed constant within the field of view. Corrections for the primary beam are handled in the imager tool.) Once determined, these terms are used to correct the visibilities measured for the scientific target. This procedure is known as cross-calibration (when only phase is considered, it is called phase-referencing).

The best calibrators are point sources at the phase center (constant visibility amplitude, zero phase), with sufficient flux density to determine the calibration factors with adequate SNR on the relevant timescale. The primary gain calibrator must be sufficiently close to the target on the sky so that its observations sample the same atmospheric effects. A bandpass calibrator usually must be sufficiently strong (or observed with sufficient duration) to provide adequate per-channel sensitivity for a useful calibration. In practice, several calibrators are usually observed, each with properties suitable for one or more of the required calibrations.

Synthesis calibration is inherently a bootstrapping process. First, the dominant calibration term is determined, and then, using this result, more subtle effects are solved for, until the full set of required calibration terms is available for application to the target field. The solutions for each successive term are relative to the previous terms. Occasionally, when several calibration terms are not sufficiently orthogonal, it is useful to re-solve for earlier types using the results for later types, in effect reducing the effect of the later terms on the solution for earlier ones, and thus better isolating them. This idea is a generalization of the traditional concept of self-calibration, where initial imaging of the target source supplies the visibility model for a re-solve of the gain calibration (G or T). Iteration tends toward convergence to a statistically optimal image. In general, the quality of each calibration and of the source model are mutually dependent. In principle, as long as the solution for any calibration component (or the source model itself) is likely to improve substantially through the use of new information (provided by other improved solutions), it is worthwhile to continue this process.

In practice, these concepts motivate certain patterns of calibration for different types of observation, and the calibrater tool in CASA is designed to accommodate these patterns in a general and flexible manner. For a spectral line total intensity observation, the pattern is usually (a schematic task-level sketch follows the list):

  1. Solve for G on the bandpass calibrator
  2. Solve for B on the bandpass calibrator, using G
  3. Solve for G on the primary gain (near-target) and flux density calibrators, using B solutions just obtained
  4. Scale G solutions for the primary gain calibrator according to the flux density calibrator solutions
  5. Apply G and B solutions to the target data
  6. Image the calibrated target data
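
In task form, this pattern might look like the following sketch. All file, field, and table names are illustrative, and the bandpass and correct parameter lists shown here are assumptions that should be checked against the in-line help:

# Assume field 0 = bandpass/flux calibrator (3C286), field 1 = gain calibrator
# (0234+285), and field 2 = the target; all names/indices are illustrative.
setjy(vis='data.ms',fieldid=0)                           # set the flux calibrator model
gaincal('data.ms',caltable='cal.G',                      # steps 1 & 3: G on the calibrators
        msselect='FIELD_ID IN [0,1]',solint=90,refant=3)
bandpass('data.ms',caltable='cal.B',                     # step 2: B on the bandpass calibrator,
         msselect='FIELD_ID IN [0]',gaintable='cal.G')   #   pre-applying G (assumed parameters)
fluxscale(vis='data.ms',caltable='cal.G',                # step 4: scale G by the flux calibrator
          fluxtable='cal.Gflx',reference=['3C286'],transfer=['0234+285'])
correct('data.ms',msselect='FIELD_ID IN [2]',            # step 5: apply solutions to the target
        gaintable='cal.Gflx')                            #   (applying B as well is analogous;
                                                         #    consult the correct in-line help)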

If opacity and gain curve information are relevant and available, these types are incorporated in each of the steps (in the future, an actual solve for opacity from appropriate data may be folded into this process):

  1. Solve for G on the bandpass calibrator, using T (opacity) and E (gain curve) solutions already derived.
  2. Solve for B on the bandpass calibrator, using G, T (opacity), and E (gain curve) solutions.
  3. Solve for G on primary gain (near-target) and flux density calibrators, using B, T (opacity), and E (gain curve) solutions.
  4. Scale G solutions for the primary gain calibrator according to the flux density calibrator solutions
  5. Apply T (opacity), E (gain curve), G, and B solutions to the target data
  6. Image the calibrated target data

For continuum polarimetry, the typical pattern is:

  1. Solve for G on the polarization calibrator, using (analytical) P solutions.
  2. Solve for D on the polarization calibrator, using P and G solutions.
  3. Solve for G on primary gain and flux density calibrators, using P and D solutions.
  4. Scale G solutions for the primary gain calibrator according to the flux density calibrator solutions.
  5. Apply P, D, and G solutions to target data.
  6. Image the calibrated target data.

For a spectro-polarimetry observation, these two examples would be folded together.

In all cases the calibrator model must be adequate at each solve step. At high dynamic range and/or high resolution, many calibrators which are nominally assumed to be point sources become slightly resolved. If this has biased the calibration solutions, the offending calibrator may be imaged at any point in the process and the resulting model used to improve the calibration. Finally, if sufficiently strong, the target may be self-calibrated as well.

General Calibrater Mechanics

The calibration tasks and the underlying calibrater tool are designed to solve and apply solutions for all of the solution types listed above (and more are in the works). This leads to a single basic sequence of execution for all solves, regardless of type:

  1. Set the calibrator model visibilities
  2. Select the visibility data which will be used to solve for a calibration type
  3. Arrange to apply any already-known calibration types (the first time through, none may yet be available)
  4. Arrange to solve for a specific calibration type, including specification of the solution timescale and other specifics
  5. Execute the solve process
  6. Repeat 1-5 for all required types, using each result, as it becomes available, in step 3, and perhaps repeating for some types to improve the solutions

By itself, this sequence doesn't guarantee success; the data provided for the solve must have sufficient SNR on the appropriate timescale, and must provide sufficient leverage for the solution (e.g., D solutions require data taken over a sufficient range of parallactic angle in order to separate the source polarization contribution from the instrumental polarization).

Calibration Tasks

The calibration tasks are as follows:

  • accum - Accumulate incremental calibration solutions into a cumulative cal table
  • bandpass - B calibration solving; supports pre-apply of other calibrations
  • blcal - baseline-based G (or B) calibration; supports pre-apply of other calibrations
  • clearcal - Re-initialize visibility data set calibration data
  • correct - Apply calculated calibration solutions
  • fluxscale - Bootstrap the flux density scale from standard calibrators
  • fringecal - baseline-based fringe-fitting calibration solving; supports pre-apply of other calibrations
  • gaincal - G calibration solving; supports pre-apply of other calibrations
  • pointcal - Solve for pointing error calibration
  • plotcal - Plot calibration solutions
  • setjy - Compute the model visibility for a specified source flux density
  • smooth - Smooth calibration solutions derived from one or more sources
  • uvmodelfit - Fit a single component source model to the uv data
The following sections outline the use of these tasks in standard calibration processes.

Calibration models for absolute flux density (setjy)

When solving for visibility-plane calibration, CASA calibration applications compare the observed DATA column with the MODEL_DATA column. The first time that an imaging or calibration task is executed for a given MS, the MODEL_DATA column is created and initialized with unit point source flux density visibilities (unpolarized) for all sources (e.g. AMP=1, phase=0^{\circ}). The setjy task is then used to set the proper flux density for flux calibrators. For sources that are recognized flux calibrators (listed in Table \ref{fluxcal.table}), setjy will calculate the flux densities, Fourier transform the data and write the results to the MODEL_DATA column. For the VLA, the default source models are customarily point sources defined by the Baars or Perley-Taylor flux density scales, or point sources of unit flux density if the flux density is unknown. The MODEL_DATA column can also be filled with a model generated from an image of the source (e.g. the Fourier transform of an image generated after initial calibration of the data).

%BEGINTABLE{label="tbl:fluxcal.table" caption="Recognized Flux Density Calibrators."}%
3C Name B1950 Name J2000 Name
3C286 1328+307 1331+305
3C48 0134+329 0137+331
3C147 0538+498 0542+498
3C138 0518+165 0521+166
3C295 1409+524 1411+522
-- 1934-638 --
%ENDTABLE%

By default the setjy task will cycle through all fields and spectral windows, setting the flux density either to 1 Jy (unpolarized), or if the source is recognized as one of the calibrators in the above table, to the flux density (assumed unpolarized) appropriate to the observing frequency. To run setjy on a measurement set called data.ms:

setjy(vis='data.ms')                # This will set all fields and spectral windows

To limit this operation to certain fields and spectral windows, use the fieldid and/or spwid parameters, which take single values:

setjy(vis='data.ms',fieldid=0)         # only set the flux density of
                                       # the first field (all spectral windows)

setjy(vis='data.ms',fieldid=1,spwid=17)# only set the flux density of
                                       # the second field in spectral window 17

The full-polarization flux density (I,Q,U,V) may also be explicitly provided:

setjy(vis='data.ms',
     fieldid=1,spwid=16,               # Run setjy on field 2, spwid 17
     fluxdensity=[3.5,0.2,0.13,0.0])   #  and set I,Q,U,V explicitly

If the flux density calibrator is resolved at the observing frequency, the point source model generated by setjy will not be appropriate. If available, a model image of the resolved source at the observing frequency may be used to generate the appropriate visibilities with the ft task. Model images for some flux density calibrators are provided with CASA:
  • RPM RHE4: located in /usr/lib/casapy/data/nrao/VLA/CalModels
  • MAC OSX .dmg: located in /opt/casa/data/nrao/VLA/CalModels
  • NRAO-AOC stable: /home/casa/data/nrao/VLA/CalModels
  • NRAO-AOC daily: /home/ballista/casa/daily/data/nrao/VLA/CalModels

The models available are:
3C138_C.im/  3C138_Q.im/  3C147_K.im/  3C286_C.im/  3C286_Q.im/  3C48_C.im/  3C48_Q.im/  README
3C138_K.im/  3C138_U.im/  3C147_Q.im/  3C286_K.im/  3C286_U.im/  3C48_K.im/  3C48_U.im/
3C138_L.im/  3C138_X.im/  3C147_U.im/  3C286_L.im/  3C286_X.im/  3C48_L.im/  3C48_X.im/

These are all un-reconvolved images of AIPS CC lists, properly scaled
to the Perley-Taylor 1999 flux density for the frequencies at which 
they were observed.

It is important that the model image not be one convolved with a finite beam; it must have units of Jy/pixel (not Jy/beam). Also, the frequency range of the image must cover the frequencies in the dataset. Finally, the amplitude scale in the image must be correct (beware of variation due to non-zero spectral index).

Copy the model image to the working directory; the following illustrates its use.

# Import the data
importvla(archivefiles='AS776_A031015.xp2',vis='ngc7538_XBAND.ms',freqtol=10000000.0,bandname='X')

# Flag the ACs
flagautocorr('ngc7538_XBAND.ms')


# METHOD 1:  Use point source model for 3C48, plus uvrange in solve

# Use point source model for 3C48
setjy(vis='ngc7538_XBAND.ms',fieldid=0)

# Limit 3C48 (fieldid=0) solutions to uvrange = 0-40 klambda
gaincal(vis='ngc7538_XBAND.ms', caltable='cal.G',msselect='FIELD_ID IN [0]',solint=60.0,refant=10, nchan=0,uvrange=[0,40],append=False,gaincurve=False,opacity=False)

# Append phase-calibrator's solutions (no uvrange) to the same table
gaincal(vis='ngc7538_XBAND.ms', caltable='cal.G',msselect='FIELD_ID IN [2]',solint=60.0,refant=10, nchan=0,uvrange=[0],append=True,gaincurve=False,opacity=False)

# Fluxscale
fluxscale(vis='ngc7538_XBAND.ms',caltable='cal.G',reference=['0137+331'],
       transfer=['2230+697'], fluxtable='cal.Gflx',append=False)



# METHOD 2: use a resolved model copied from the data respository
#   for 3C48, and no uvrange
#   (NB: detailed freq-dep flux scaling TBD)

ft(vis='ngc7538_XBAND.ms',fieldid=0,model='3C48_X.im')

# Solutions on both calibrators with no uvrange
gaincal(vis='ngc7538_XBAND.ms', caltable='cal.G2',msselect='FIELD_ID IN [0,2]',solint=60.0,refant=10, nchan=0,uvrange=[0],append=False,gaincurve=False,opacity=False)

# Fluxscale
fluxscale(vis='ngc7538_XBAND.ms',caltable='cal.G2',reference=['0137+331'],
          transfer=['2230+697'], fluxtable='cal.G2flx',append=False)


# Both methods give 2230 flux densities ~0.7 Jy, in good agreement with
#   AIPS

Types of calibrations to solve for and apply

The following sections describe the basics of solving for the various types of calibration solutions. The examples given here are very general. The assembly of these building blocks into calibration patterns for several use cases is demonstrated in the real examples provided at the end of this cookbook.

A priori calibrations

This section describes how to apply _a priori_ calibration information that may be available.

Gain curves

Gain curve calibration involves compensating for the effects of elevation on the amplitude of the received signals at each antenna. Antennas are not absolutely rigid, and so their effective collecting area and net surface accuracy vary with elevation as gravity deforms the surface. This calibration is especially important at higher frequencies where the deformations represent a greater fraction of the observing wavelength. By design, this effect is usually minimized (i.e., gain maximized) for elevations between 45 and 60 degrees, with the gain decreasing at higher and lower elevations. Gain curves are most often described as 2nd- or 3rd-order polynomials in zenith angle.

At this writing, gain curve calibration has been implemented in CASA for the VLA, with gain curve polynomial coefficients available directly from the CASA data repository. To make gain curve corrections for VLA data, set the gaincurve parameter to True for any of the calibration tasks, e.g.,:

gaincal('data.ms','cal.G0',gaincurve=True,solint=0.,refant=11)
#Note gaincurve=True is the default so this parameter can be omitted for VLA data
gaincal('data.ms','cal.G0',solint=0.,refant=11) # is equivalent

NOTE: Set gaincurve=False if you are not using VLA data.

The gain curve will be calculated per timestamp. Upon execution of a calibration task (e.g., gaincal, bandpass, correct, etc), the gain curve data appropriate to the observing frequencies will be automatically retrieved from the data repository and applied.

Opacity corrections

The troposphere is not completely transparent. At high radio frequencies (>15 GHz), water vapor and molecular oxygen begin to have a substantial effect on radio observations. According to the physics of radiative transmission, the effect is threefold. First, radio waves from astronomical sources are absorbed (and therefore attenuated) before reaching the antenna. Second, since a good absorber is also a good emitter, significant noise-like power will be added to the overall system noise. Finally, the optical path length through the troposphere introduces a time-dependent phase error. In all cases, the effects become worse at lower elevations due to the increased air mass through which the antenna is looking. In CASA, the opacity correction described here compensates only for the first of these effects, tropospheric attenuation, using a plane-parallel approximation for the troposphere to estimate the elevation dependence.

Opacity corrections are a component of calibration type 'T'. To make opacity corrections in CASA, an estimate of the zenith opacity is required (see observatory-specific chapters for how to measure zenith opacity). With a value for zenith opacity in hand (0.1 nepers, say), use the following parameters:

gaincal('data.ms','cal.G0',solint=0.,refant=11,tau=0.1)
#Note: the opacity parameter is True by default and so only the 
#tau parameter needs to be specified if you want to apply this.

If opacity=True (default), the calibration tasks will apply an elevation-dependent opacity correction (scaled to 0.1 nepers at the zenith for all antennas for this example) calculated at each timestamp (t=-1).

Generalizations to antenna- and time-dependent opacities, including derivation (from weather information) and solving (directly from the visibility data) capabilities, will be made available in the future.

Other a priori Calibrations

Other a priori calibrations will be added to the calibrater tool in the near future. These will include antenna-position (phase) corrections, system temperature normalization (amplitude) corrections, tropospheric phase corrections derived from Water Vapor Radiometry (WVR) measurements, instrumental line-length corrections, etc. Where appropriate, solving capabilities for these effects will also be added.

Time-dependent complex gain calibration (gaincal: G, T, GSPLINE)

G solutions

Systematic time-dependent complex gain errors are almost always the dominant calibration effect, and a solution for them is almost always necessary before proceeding with any other calibration. Traditionally, this calibration type has been a catch-all for a variety of similar effects, including: the relative amplitude and phase gain for each antenna, phase and amplitude drifts in the electronics of each antenna, amplitude response as a function of elevation (gain curve), and tropospheric amplitude and phase effects. In CASA, it is possible to handle many of these effects separately, as available information and circumstances warrant, but it is still possible to solve for the net effect using calibration type G.

Generally speaking, type G can represent any per-spectral window multiplicative polarization- and time-dependent complex gain effect downstream of the polarizers. (Polarization independent effects upstream of the polarizers may also be treated with G.) Multi-channel data (per spectral window) will be averaged in frequency before solving (use calibration type B to solve for frequency-dependent effects within each spectral window).

To solve for G on, say, fields 1 & 2, on a 90s timescale, and apply, e.g., gain curve corrections:

gaincal('data.ms',
        caltable='cal.G',                 # Write solutions to disk file 'cal.G'
        msselect='FIELD_ID IN [0,1]',     # Restrict data selection
        solint=90,                        # Solve for phase and amp on a 90s timescale
        refant=3)                         #
                                          # Note: gaincurve=True by default

plotcal('cal.G','amp')                    # Inspect solutions

These G solutions will be referenced to antenna 4. Choose a well-behaved antenna that is located near the center of the array for the reference antenna. For non-polarization datasets, reference antennas need not be specified, although you can specify one if you want. If no reference antenna is specified, an effective phase reference that is an average over the data will be calculated and used. For data that requires polarization calibration, you must choose a reference antenna that has a constant phase difference between the right and left polarizations (e.g. no phase jumps or drifts). If no reference antenna (or a poor one) is specified, the phase reference may have jumps in the R--L phase, and the resulting polarization angle response will vary during the observation, thus corrupting the polarization imaging.

To apply this solution to the calibrators and the target source (field 3, say):

correct('data.ms',
        msselect='FIELD_ID IN [0,1,2]',     # Restrict data selection (cals + src)
        opacity=False,                      # Don't apply opacity correction
        gaintable='cal.G')                  # Apply G solutions and correct data
                                            # (written to the CORRECTED_DATA column)
                                            # Note: calwt=True by default
plotxy('data.ms',xaxis='channel',datacolumn='data',subplot=211)
plotxy('data.ms',xaxis='channel',datacolumn='corrected',subplot=212)

T solutions

At high frequencies, it is often the case that the most rapid time-dependent gain errors are introduced by the troposphere, and are polarization-independent. It is therefore unnecessary to solve for separate time-dependent solutions for both polarizations, as is the case for G. Calibration type T is available to calibrate such tropospheric effects, differing from G only in that a single common solution for both polarizations is determined. In cases where only one polarization is observed, type T is adequate to describe the time-dependent complex multiplicative gain calibration.

In the following example, we assume we have a 'G' solution obtained on a longish timescale (> a few minutes, say), and we want a residual T solution to track the polarization-independent variations on a very short timescale:

gaincal('data.ms',                       # Visibility dataset
        caltable='cal.T',                # Specify output table name
        gaintype='T',                    # Solve for T
        msselect='FIELD_ID IN [0,1]',    # Restrict data selection to calibrators
        solint=3.,                       # Obtain solutions on a 3s timescale
        gaintable='cal120.G')            # Pre-apply prior G solution

For dual-polarization observations, it will always be necessary to obtain a G solution to account for differences and drifts between the polarizations (which traverse different electronics), but solutions for rapidly varying polarization-independent effects such as those introduced by the troposphere will be optimized by using T. Note that T can be used in this way for self-calibration purposes, too.

GSPLINE solutions

At high radio frequencies, where tropospheric phase fluctuates rapidly, it is often the case that there is insufficient SNR to obtain a robust G or T solution on timescales short enough to track the variation. In this case it is desirable to solve for a best-fit functional form for each antenna using the GSPLINE solver. The GSPLINE solver fits time-series of cubic B-splines to the phase and/or amplitude of the calibrator visibilities. Unlike ordinary G, a single common GSPLINE solution will be determined from data for all selected spectral windows and fields specified in msselect, and the resulting solution will be applicable to any field or spectral window in the same Measurement Set. An important consequence of this is that all fields used to obtain a GSPLINE amplitude solution must have models with accurate relative flux densities. (Use of incorrect relative flux densities will introduce spurious variations in the GSPLINE amplitude solution.)

The GSPLINE solver requires a number of unique additional parameters, compared to ordinary G and T solving. The duration of each spline segment is controlled by splinetime. The actual splinetime will be adjusted such that an integral number of equal-length spline segments will fit within the overall range of data.

Phase splines require that cycle ambiguities be resolved prior to the fit; this operation is controlled by npointaver and phasewrap. The npointaver parameter controls how many contiguous points in the time-series are used to predict the cycle ambiguity of the next point in the time-series, and phasewrap sets the threshold phase jump (in degrees) that would indicate a cycle slip. Large values of npointaver improve the SNR of the cycle estimate, but tend to frustrate ambiguity detection if the phase rates are large. The phasewrap parameter may be adjusted to influence when cycles are detected. Generally speaking, large values (>180^\circ) are useful when SNR is high and phase rates are low. Smaller values for phasewrap can force cycle slip detection when low SNR conspires to obscure the jump, but the algorithm becomes significantly less robust. More robust algorithms for phase-tracking are under development (including fringe-fitting).

To solve for GSPLINE phase and amplitudes, with splines of duration 600s:

gaincal('data.ms',
        caltable='cal.spline.ap',
        gaintype='GSPLINE',              # Solve for GSPLINE
        calmode='ap',                    # Solve for amp & phase
        msselect='FIELD_ID IN [0,1]',    # Restrict data selection to calibrators
        splinetime=600.)                 # Set spline timescale to 10min

The GSPLINE solutions cannot yet be plotted using plotcal.

Currently, several caveats should be kept in mind when using the GSPLINE solver:

TBD

Flux density scale calibration (fluxscale)

The 'G' or 'T' solutions obtained from calibrators for which the flux density was unknown and assumed to be 1 Jy are correct in a time- and antenna-relative sense, but are mis-scaled by a factor equal to the inverse of the square root of the true flux density. This scaling can be corrected by enforcing the constraint that mean gain amplitudes determined from calibrators of unknown flux density should be the same as those determined from calibrators with known flux densities. The fluxscale task exists for this purpose. Given a 'G' table, 'cal.G', containing solutions for a flux density calibrator (3C286, say) and for one or more secondary calibrator sources with unknown flux densities (0234+285 and 0323+022, say), use the fluxscale task as follows:

fluxscale(vis='data.ms',
          caltable='cal.G',                  # Select input table
          fluxtable='cal.Gflx',              # Write scaled solutions to cal.Gflx
          reference='3C286',                 # 3C286 = flux calibrator
          transfer=['0234+285', '0323+022']) # Select calibrators to scale

The output table, 'cal.Gflx', contains solutions that are properly scaled for all calibrators.

Note that the assumption that the gain solutions are independent of the calibrator includes the assumption that the gain amplitudes are not systematically time dependent. While synthesis antennas are designed as much as possible to achieve this goal, in practice, a number of effects conspire to frustrate it. When relevant, it is advisable to pre-apply gain curve and opacity corrections when solving for the 'G' solutions that will be fluxscaled. When the G solutions are essentially constant for each calibrator separately, the fluxscale operation is likely to be robust.
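For example, a minimal sketch of solving for such a pre-fluxscale 'G' table with these corrections pre-applied (this assumes the gaincal parameters gaincurve and opacity are available in your version; the opacity value is purely illustrative):

gaincal(vis='data.ms',
        caltable='cal.G',                # Write solutions to cal.G
        msselect='FIELD_ID IN [0,1]',    # Flux and secondary calibrators
        solint=90.,                      # 90s solutions
        gaincurve=True,                  # Pre-apply elevation gain curve
        opacity=0.08)                    # Pre-apply zenith opacity (nepers)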

The fluxscale task can be executed on either 'G' or 'T' solutions, but it should only be used on one of these types if solutions exist for both. fluxscale does not yet support GSPLINE solutions.

If the reference and transfer fields were observed in different spectral windows, the refspwmap parameter may be used to achieve the scaling calculation across spectral window boundaries. The refspwmap parameter takes a vector of indices indicating the spectral window mapping for the reference fields, such that refspwmap(i)=j means that reference field amplitudes from spectral window j will be used for spectral window i.

fluxscale(vis='data.ms',
          caltable='cal.G',                  # Select input table
          fluxtable='cal.Gflx',              # Write scaled solutions to cal.Gflx
          reference=['3C286'],               # 3C286 = flux calibrator
          transfer=['0234+285', '0323+022'], # Select calibrators to scale
          refspwmap=[0,0,0])                 # Use spwid 1 scaling for spwids 2 & 3

fluxscale(vis='data.ms',
          caltable='cal.G',                  # Select input table
          fluxtable='cal.Gflx',              # Write scaled solutions to cal.Gflx
          reference=['3C286'],               #  3C286 = flux calibrator,
          refspwmap=[0,0,1,1],               #  select spwids for scaling,
          transfer=['0234+285','0323+022'])  #  select calibrators to scale,

In this example, reference amplitudes from spectral window 1 will be used for spectral windows 1 and 2, and reference amplitudes from spectral window 2 will be used for spectral windows 3 and 4. (The spectral windows are counted from 1 here; the refspwmap values themselves are 0-based indices.)

Resolved flux density calibrators

If the flux density calibrator is resolved, the assumption that it is a point source will cause solutions on outlying antennas to be biased in amplitude, and the flux density scaling step will in turn be biased on these antennas as well. In general, it is best to use a resolved model for the calibrator as described in Section \ref{cal.models}, but if such a model is not available, it is important to limit the solution on the flux density calibrator to the subset of antennas having baselines short enough that the point-source assumption is valid. This can be done with antenna and uvrange selection when solving for the flux density calibrator. For example, suppose antennas 1 through 8 are the antennas among which the baselines are short enough for the point-source assumption to hold, and we also want to limit the solutions to baselines shorter than 15000 wavelengths. Then we can assemble properly scaled solutions for the other calibrator as follows. (Note: specifying both an antenna and a uvrange constraint prevents antennas with only a small number of baselines within the specified uvrange from being included in the solution; such antennas would have poorly constrained solutions.)

# first solve for gain solutions for the flux density calibrator
# (3C286 observed in field 1) using a subset of antennas
gaincal(vis='data.ms',
        caltable='cal.G',              # Write solutions to cal.G
        msselect='FIELD_ID IN [0] && ANTENNA1 IN [0:7] && ANTENNA2 IN [0:7]',
                                       # Select the flux density calibrator
                                       #  on baselines among antennas 1-8 only
        uvrange=[0,15],                # Limit uvrange to 0-15 klambda
        solint=90)                     # Solve on 90s timescales

# now solve for other calibrator (0234+285 in field 2) using all antennas
#  (implicitly) and append these solutions to the same table
gaincal(vis='data.ms',
        caltable='cal.G',              # write solutions to cal.G
        msselect='FIELD_ID IN [1]',
        solint=90,
        append=T)                      # Set up to write to the same table

# now run fluxscale to adjust scaling
fluxscale(vis='data.ms',
          caltable='cal.G',        # Input table with unscaled cal solutions
          fluxtable='cal.Gflx',    # Write scaled solutions to cal.Gflx
          reference='3C286',       # Use 3c286 as ref with limited uvrange
          transfer='0234+285')     # Transfer scaling to 0234+285

The fluxscale calculation will be performed using only the antennas common to both fields, but the result will be applied to all antennas on the transfer field.

Bandpass calibration (bandpass: B, BPOLY)

B solutions

For channelized data, it is often desirable to solve for the gain variations in frequency as well as in time. Variation in frequency arises as a result of non-uniform filter passbands or other dispersive effects in signal transmission. It is usually the case that these frequency-dependent effects vary on timescales much longer than the time-dependent effects handled by types 'G' and 'T'. Thus, it makes sense to solve for them as a separate term: 'B'.

Calibration type 'B' differs from 'G' only in that it is determined for each channel in each spectral window. It is possible to solve for it as a function of time, but it is most efficient to keep the 'B' solving timescale as long as possible, and use 'G' or 'T' for rapid timescale variations.

'B' solutions are limited by the SNR available per channel, which may be quite small. It is therefore important that the data be coherent over the time-range of the 'B' solutions. As a result, 'B' solutions are almost always preceded by a 'G' or 'T' solve. In turn, if the 'B' solution improves the frequency domain coherence significantly, a 'G' or 'T' solution following it will be better than the original.

To solve for 'B' (on a very long timescale, i.e., constant for an observation shorter than 1 day), using a prior G solution:

bandpass(vis='data.ms',               # input data set
         caltable='cal.B',            #
         mode='channel',              # Use channels 3-57 (avoid end channels)
         nchan=55,start=2,            #
         msselect='FIELD_ID==0',      # Select bandpass calibrator (field 0)
         gaintable='cal.G',           # Pre-apply gain solutions derived previously
         solint=86400.,               # Setup a long timescale (assumes bandpass
         refant=14)                   #               is constant over this length).

Note that the solution has only been obtained for the subset of 55 channels starting with channel 3. Explicit channel selection like this is only necessary if it is desired that some channels be avoided (e.g., end channels that may not be well-behaved). The default is to obtain a solution for every channel.
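If the 'B' solution significantly improves the frequency-domain coherence, the 'G' solution can then be re-derived with 'B' pre-applied. A minimal sketch using the underlying calibrater (cb) tool, with the table names of this example (the output table name cal.G1 is illustrative):

cb.reset()                                # Reset the calibrater tool
cb.setdata(msselect='FIELD_ID IN [0,1]')  # Restrict selection to calibrators
cb.setapply(type='B',table='cal.B')       # Pre-apply the B solution
cb.setsolve(type='G',table='cal.G1',      # Re-solve for G on a 90s timescale,
    t=90)                                 #  writing solutions to cal.G1
cb.solve()                                # Solve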

BPOLY solutions

For some observations, it may be the case that the SNR per channel is insufficient to obtain a usable per-channel 'B' solution. In this case it is desirable to solve instead for a best-fit functional form for each antenna using the BPOLY solver. The 'BPOLY' solver fits polynomials to the amplitude and phase of the calibrator visibilities as a function of frequency. Unlike ordinary 'B', a single common BPOLY solution will be determined for all spectral windows specified (or implicit) in the data selection. As such, it is usually most meaningful to select individual spectral windows for BPOLY solves, unless groups of adjacent spectral windows are known a priori to share a single continuous bandpass response over their combined frequency range (e.g., PdBI data). Currently, BPOLY solutions cannot be solved for in a time-dependent manner.

The 'BPOLY' solver requires a number of unique parameters. The degamp and degphase parameters indicate the polynomial degree desired for the amplitude and phase solutions. The maskcenter parameter is used to indicate the number of channels in the center of the band to avoid passing to the solution (e.g., to avoid Gibbs ringing in central channels for PdBI data).

For example, to solve for a BPOLY (5th order in amplitude, 7th order in phase), using data from field 0, with G corrections pre-applied:

bandpass(vis='data.ms',               # input data set
         caltable='cal.BPOLY',        #
         mode='channel',              # Use channels 3-57 (avoid end channels)
         nchan=55,start=2,            #
         msselect='FIELD_ID==0',      # Select bandpass calibrator (field 0)
         bandtype='BPOLY',            # Select bandpass polynomials
         degamp=5,                    # 5th order amp
         degphase=7,                  # 7th order phase
         gaintable='cal.G',           # Pre-apply gain solutions derived previously
         refant=14)                   #   

Note that all available spectral windows will be used to obtain a single solution spanning them all. If separate solutions for each spectral window are desired, solve for each separately, e.g., if there are 3 spectral windows (0,1,2):

bandpass(vis='data.ms',            
         caltable='cal.BPOLY.0',
         mode='channel',
         nchan=55,start=2,
         msselect='FIELD_ID==0 && DATA_DESC_ID==0', 
         bandtype='BPOLY',
         degamp=5,
         degphase=7, 
         gaintable='cal.G',
         refant=14)

bandpass(vis='data.ms',            
         caltable='cal.BPOLY.1',
         mode='channel',
         nchan=55,start=2,
         msselect='FIELD_ID==0 && DATA_DESC_ID==1', 
         bandtype='BPOLY',
         degamp=5,
         degphase=7, 
         gaintable='cal.G',
         refant=14)

bandpass(vis='data.ms',            
         caltable='cal.BPOLY.2',
         mode='channel',
         nchan=55,start=2,
         msselect='FIELD_ID==0 && DATA_DESC_ID==2', 
         bandtype='BPOLY',
         degamp=5,
         degphase=7, 
         gaintable='cal.G',
         refant=14)

Each solution is stored in a separate table. As a result, subsequent calibration operations must be undertaken for each spectral window separately.

The BPOLY solutions cannot yet be plotted using plotcal.

Instrumental Polarization Calibration (D)

TBD

Calibration Interpolation (accum)

Calibration solutions (most notably G or T) must be interpolated onto the timestamps of the science target observations. Currently, three time-dependent interpolation options are available for specification in the interp parameter of the accum task: nearest, linear, and aipslin. The nearest option applies the calibration factor nearest in time to each datum. The linear option applies to each datum an amplitude and phase which are (separately) interpolated (or extrapolated) from the two nearest (in time) calibration solutions. For phase, it is assumed that there is no cycle ambiguity to consider, i.e., the direction of the smaller phase difference (necessarily always < 180 degrees) between the two solutions is taken as the correct direction for interpolation. The aipslin option emulates the on-the-fly calibration interpolation in classic AIPS, where the amplitude is interpolated linearly (like linear), but the interpolated phase is determined from linear interpolation of the real and imaginary parts. This mode avoids the complication of determining the minimum phase ambiguity, but the result is decidedly non-linear in phase for interpolations over more than a few tens of degrees (as the phase difference between the interpolating solutions approaches 180 degrees, aipslin tends toward nearest for the phase interpolation).
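The difference between linear and aipslin phase interpolation can be illustrated in a few lines of plain Python (this only demonstrates the arithmetic; it is not how interpolation is invoked in CASA):

import math

ph1, ph2 = 0.0, 150.0             # bracketing phase solutions (degrees)
f = 0.25                          # fractional time between the two solutions

# 'linear': interpolate the phase directly along the smaller difference
lin = ph1 + f*(ph2 - ph1)

# 'aipslin': interpolate real and imaginary parts, then recover the phase
re = (1-f)*math.cos(math.radians(ph1)) + f*math.cos(math.radians(ph2))
im = (1-f)*math.sin(math.radians(ph1)) + f*math.sin(math.radians(ph2))
aips = math.degrees(math.atan2(im, re))

print lin, aips                   # -> 37.5 vs ~13.2 degrees; the two agree
                                  #    only when the phase difference is small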

An example using accum to interpolate an existing table onto a new time grid.

accum(vis='ngc5921.ms',            #
      incrtable='ngc5921.gcal',    # this is the table that will be interpolated
      caltable='ngc5921_20s.gcal', # this will be the interpolated output table
      accumtime=20.)               # timescale for sampling the output table
                                   # Note: interp not specified; defaults to 'linear'

%BEGINFIGURE{label="fig:plotcal_G" caption="plotcal: Display of the amplitude solutions for NGC 5921; original (top), interpolated solutions-20s sampling (bottom)."}% plotcal G.jpgplotcal interp.jpg%ENDFIGURE%

Calibration Smoothing (smooth)

Calibration solutions (most notably G or T) may be smoothed over a longer time interval to reduce noise and outliers. The smoothing will use the smoothtime and smoothtype parameters to determine the new data points which will replace the previous points on the same time sampling grid as for the tablein solutions. Two smoothtype options are currently supported: 1) median, 2) mean.

An example using the smooth task to smooth an existing table:

gaincal(vis='ngc5921.ms',
        caltable='ngc5921_05s.gcal',
        mode='channel',
        nchan=55,start=2,step=1,
        msselect='FIELD_ID <= 1',
        solint=0.5)                #
                                   # Note: we use the defaults for other parameters
plotcal('ngc5921_05s.gcal','amp')  # Plot the amplitudes
smooth(vis='ngc5921.ms',
       tablein='ngc5921_05s.gcal', # input calibration table
       caltable='ngc5921_sm.gcal', # output calibration table (smoothed)
       smoothtime=1000.)           # use 1000 seconds for the smoothing time and
                                   # the default smoothtype='median'
plotcal('ngc5921_sm.gcal','amp')   # Plot the amplitudes

%BEGINFIGURE{label="fig:plotcal_smooth" caption="plotcal: Display of the amplitude solutions for short solution interval table (0.5 seconds: top) and the smoothed table using a smoothtime of 1000 seconds"}% plotcal 05s.jpgplotcal smoothed.jpg%ENDFIGURE%

Incremental Calibration (accum)

It is occasionally desirable to solve for and apply calibration incrementally. This is the case when a calibration table of a certain type already exists (from a previous solve), an incremental solution of the same type and relative to the first is required, and it is not possible to recover the cumulative solution by a single solve.

Much of the time, it is, in fact, possible to recover the cumulative solution. This is because the equation describing the solution for the incremental solution (using the original solution), and that describing the solution for their product, are fundamentally the same equation: the cumulative solution, if unique, must always be the same no matter what the initial solution is. One circumstance where an incremental solution is necessary is the case of phase-only self-calibration relative to a full amplitude and phase calibration already obtained (from a different field).

For example, a phase-only ``G'' self-calibration on a target source may be desired to tweak the full amplitude and phase ``G'' calibration already obtained from a calibrator. The initial calibration (from the calibrator) contains amplitude information, and so must be carried forward, yet the phase-only solution itself cannot (by definition) recover this information, as a full amplitude and phase self-calibration would. In this case, the initial solution must be applied while solving for the phase-only solution, then the two solutions combined to form a cumulative calibration embodying the net effect of both. In terms of the Measurement Equation, the net calibration is the product of the initial and incremental solutions.

The analog of accum in classic AIPS is the use of CLCAL to combine a series of (incremental) SN calibration tables to form successive (cumulative) CL calibration tables.

Cumulative calibration tables also provide a means of generating carefully interpolated calibration, on variable user-defined timescales, that can be examined prior to application to the data with correct. The solutions for different fields and/or spectral windows can be interpolated in different ways, with all solutions stored in the same table.

The only difference between incremental and cumulative calibration tables is that incremental tables are generated directly from the calibration solving tasks (gaincal, bandpass, etc.), and cumulative tables are generated from other cumulative and incremental tables via accum. In all other respects (internal format, application to data with correct, plotting with plotcal, etc.), they are the same, and therefore interchangeable. Thus, accum and cumulative calibration tables need only be used when circumstances require it.

The accum task represents a generalization of the classic AIPS CLCAL model of cumulative calibration in that its application is not limited to accumulation of ``G'' solutions (SN/CL tables in classic AIPS are the analog of ``G'' (and, implicitly, ``T'') in aips++). In principle, any basic calibration type can be accumulated (onto itself), as long as the result of the accumulation (matrix product) is of the same type. This is true of all the basic types, except ``D''. Accumulation is currently supported for ``B'', ``G'', and ``T'', and, in future, ``F'' (ionospheric Faraday rotation), ``J'' (generic full-polarization calibration), fringe-fitting, and perhaps others. Accumulation of certain specialized types (e.g., ``GSPLINE'', ``TOPAC'', etc.) onto the basic types will be supported in the near future. The treatment of various calibrations from ancillary data (e.g., system temperatures, weather data, WVR, etc.), as they become available, will also make use of accum to achieve the net calibration.

Note that accumulation only makes sense if treatment of a uniquely incremental solution is required (as described above), or if a careful interpolation or sampling of a solution is desired. In all other cases, re-solving for the type in question will suffice to form the net calibration of that type. For example, the product of an existing ``G'' solution and an amplitude and phase ``G'' self-cal (solved with the existing solution applied), is equivalent to full amplitude and phase ``G'' selfcal (with no prior solution applied), as long as the timescale of this solution is at least as short as that of the existing solution.

The tablein parameter is used to specify the existing cumulative calibration table to which an incremental table is to be applied. Initially, no such table exists, and accum will generate one from scratch (on-the-fly), using the timescale (in seconds) specified by the parameter accumtime. These nominal solutions will be unit-amplitude, zero-phase (i.e., unit matrix) calibration, ready to be adjusted by accumulation according to the settings of other parameters. When accumtime is negative (the default), the table specified in tablein must exist and will be used.

The incrtable parameter is used to specify the incremental table that should be applied to tablein. The calibration type of incrtable sets the type assumed in the operation, so tablein (if specified) must be of the same type. If it is not, accum will exit with an error message. (Certain combinations of types and subtypes will be supported by accum in the future.)

The caltable parameter is used to specify the name of the output table to write. If un-specified (or ``''), then tablein will be overwritten. Use this feature with care, since an error here will require building up the cumulative table from the most recent distinct version (if any).

The field parameter specifies those field names in tablein to which the incremental solution should be applied. The solutions for other fields will be passed to caltable unaltered. If the cumulative table was created from scratch in this run of accum, then the solutions for these other fields will be unit-amplitude, zero-phase, as described above.

The calfield parameter is used to specify the fields to select from incrtable to use when applying to tablein. Together, use of field and calfield permit completely flexible combinations of calibration accumulation with respect to fields. Multiple runs of accum can be used to generate a single table with many combinations. In future, a ``self'' mode will be enabled that will simplify the accumulation of field-specific solutions.

The interp parameter is used to specify the interpolation type to use on the incremental solutions. The currently available interpolation types are ``nearest'', ``linear'', and ``aipslin''.

Here is an example:

# obtain G solutions from calibrator
gaincal(vis='ap366.sim',
        caltable='cal.G0',
        msselect='FIELD_ID IN [9,11]',
        solint=300)


# obtain proper flux density scale
fluxscale(vis='ap366.sim',
          caltable='cal.G0',
          fluxtable='cal.G1',
          reference='1328+307')

# generate cumulative table for target source on 20s timescale
accum(vis='ap366.sim',
      tablein='',
      incrtable='cal.G1',
      caltable='cal.cG0',
      field='0957+561',        #  calibrate target...
      calfield='0917+624',     #  ...with calibrator
      interp='linear',
      accumtime=20)

# apply this calibration to target
correct(vis='ap366.sim',
        gaintable='cal.cG0')

#    (image target with imager tool)

# phase-selfcal target on (faster) 60s timescale
gaincal(vis='ap366.sim',
        caltable='cal.G2',
        msselect='FIELD_ID IN [10]',
        calmode='p',                   # phase-only
        solint=60,
        gaintable='cal.cG0')


# accumulate new solution onto existing one
accum(vis='ap366.sim',
      tablein='cal.cG0',             # existing cumulative (input)
      incrtable='cal.G2',            # new incremental (input)
      caltable='cal.cG1',            # new cumulative (output)
      field='0957+561',              # calibrate target...
      calfield='0957+561',           #   ...with its own solutions
      interp='linear')

# apply new cumulative solution to data
correct(vis='ap366.sim',
        gaintable='cal.cG1')

#   (another round of imaging, etc.)

Solution Examination (plotcal)

The plotcal task is available for examining solutions of all of the basic solvable types (G,T,B,D,M,MF,K). To plot amplitude or phase as a function of time for G (or T,M) solutions:

plotcal('cal.G','amp')
plotcal('cal.G','phase')

%BEGINFIGURE{label="fig:plotcal_G" caption="plotcal: Display of the amplitude and phase gain solutions (for all data)."}% plotcal G.jpgplotcal Gp.jpg%ENDFIGURE%

Similarly, to plot amplitude or phase as a function of channel for B solutions:

plotcal('cal.B','amp')
plotcal('cal.B','phase')

%BEGINFIGURE{label="fig:plotcal_B" caption="plotcal: Display of the amplitude and phase bandpass solutions (for all data)."}% plotcal Ba.jpgplotcal Bp.jpg%ENDFIGURE%

Note that if there is more than one timestamp in the B table, the user will be queried to interactively advance the plot to each timestamp. Other plot types are also available:

  • [AMP] -- Amplitude vs. time or chan
  • [PHASE] -- Phase vs. time or chan
  • [RLPHASE] -- RL phase vs. time
  • [XYPHASE] -- XY phase vs. time
  • [DELAY] -- Delay vs. time
  • [DELAYRATE] -- Delay rate vs. time

Several selection parameters are available to obtain plots of subsets of solutions:

  • [antennas] -- A list of antenna indices to plot (default: all antennas)
  • [caldescids] -- A list of description ids (default: all)

The previous examples will show the solutions for all antennas and spectral windows on a single plot. If per-antenna solution plots are desired, use multiplot=True, and specify the number of plots to appear on each page using nxpanel and nypanel. For example, to show 5 plots per page of B amplitude solutions, arranged vertically:

plotcal('cal.B','amp',antennaid=[5,6])    # show antenna 5,6 bandpass
plotcal('cal.B','amp',nypanel=5,          # default nxpanel=1
        multiplot=True)                   # draw separate plots
#Note: On the command line you are prompted as:
* Please use: cp.next() to see the next plot
*             cp.stopiter() will stop the plots

To show 6 plots per page of B amplitudes on a 3x2 grid:

plotcal('cal.B','amp',nxpanel=3,nypanel=2,# set up 3x2 grid
        multiplot=True)

%BEGINFIGURE{label="fig:plotcal_Bmulti" caption="plotcal: Display of a 3x2 grid of bandpass solutions, iterating over antenna identifier index."}% plotcal Bmulti.jpg%ENDFIGURE%

For B, if there is more than one timestamp and multiplot=T, the antenna plots will be cycled through for each timestamp in turn.

Application of the final calibration (correct)

After all relevant calibration types have been determined, they must be applied to the target source. For example, to apply G and B solutions to source fields 2,3,4:

correct(vis='ngc5921.ms',               # Visibility set to correct
                                        # data will be written to the CORRECTED_DATA
                                        # column
        msselect='FIELD_ID IN [2,3,4]', # restrict correction to specified fields
        gaintable='cal.Gflx',           # gain solutions to apply (time dependent)
        bptable='cal.B')                # bandpass solutions to apply (freq dependent)

Different detailed combinations of calibration application can be achieved by running this sequence more than once, and including specific field and/or spectral window selections in msselect, as appropriate. For example, to limit the G solutions applied to field 3 to those obtained from field 1 (otherwise, same as above):

correct(vis='ngc5921.ms',
        msselect='FIELD_ID IN [2,4]',
        gaintable='cal.Gflx',
        bptable='cal.B')
correct(vis='ngc5921.ms',
        msselect='FIELD_ID IN [3]',
        gaintable='cal.Gflx',
        gainselect='FIELD_ID==1',
        bptable='cal.B')

It is important to remember the relative nature of each calibration term. A term solved for in the presence of others is, in effect, residual to the others, and so must be used in combination with them (or new versions of them) in subsequent processing. At the same time, it is important to avoid isolating the same calibration effects in more than one term, e.g., by solving for both G and T separately (without applying the other), and then using them together. It is always a good idea to examine the corrected data after calibration (using plotxy to compare the raw ('data') and corrected ('corrected') visibilities).
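For example, a quick sketch of such a comparison with plotxy (the dataset name and plot symbol choice are illustrative):

plotxy('ngc5921.ms',
       xaxis='uvdist',            # Plot amplitude vs. uv distance
       datacolumn='data')         # Raw data
plotxy('ngc5921.ms',
       xaxis='uvdist',
       datacolumn='corrected',    # Corrected data
       overplot=True,             # Overplot on the previous plot
       plotsymbol='bo')           # Use blue circles for corrected data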

Visibility Model Fitting (uvmodelfit)

It is often desirable to fit simple analytic source component models directly to visibility data. Such fitting has its origins in early interferometry, especially VLBI, where arrays consisted of only a few antennas and the calibration and deconvolution problems were poorly constrained. These methods overcame the calibration uncertainties by fitting the models to calibration-independent closure quantities and the deconvolution problem by drastically limiting the number of free parameters required to describe the visibilities. Today, even with larger and better calibrated arrays, it is still desirable to use visibility model fitting in order to extract geometric properties such as the positions and sizes of discrete components in radio sources. Fits for physically meaningful component shapes such as disks, rings, and optically thin spheres, though idealized, enable connecting source geometry directly to the physics of the emission regions.

Visibility model fitting is controlled entirely by the uvmodelfit task, which allows fits for a single component point or Gaussian. The user specifies the number of non-linear solution iterations (niter), the component type (comptype), an initial guess for the component parameters (sourcepar), and optionally, a vector of Booleans selecting which component parameters should be allowed to vary (fixpar), and a filename in which to store a CASA componentlist for use in other applications (file). The function returns a vector containing the resulting parameter list. This vector can be edited at the command line, and specified as input (sourcepar) for another round of fitting.

The sourcepar parameter is currently the only way to specify the starting parameters for the fit. For points, there are three parameters: I (total flux density), and relative direction (RA, Dec) offsets (in arcsec) from the observation's phase center. For Gaussians, there are three additional parameters: the Gaussian's semi-major axis width (arcsec), the aspect ratio, and position angle (degrees). It should be understood that the quality of the result is very sensitive to the starting parameters provided by the user. If this first guess is not sufficiently close to the global \chi^2 minimum, the algorithm will happily converge to an incorrect local minimum. In fact, the \chi^2 surface, as a function of the component's relative direction parameters, has a shape very much like the inverse of the absolute value of the dirty image of the field. Any peak in this image (positive or negative) corresponds to a local \chi^2 minimum that could conceivably capture the fit. It is the user's responsibility to ensure that the correct minimum does the capturing.

Currently, uvmodelfit relies on the likelihood that the source is very near the phase center (within a beamwidth) and/or the user's savvy in specifying the starting parameters. This fairly serious constraint will soon be relieved somewhat by enabling a rudimentary form of uv-plane weighting to increase the likelihood that the starting guess is on a slope in the correct \chi^2 valley.

Improvements in the works for visibility model fitting include:

  • User-specifiable uv-plane weighting
  • Additional component shapes, including elliptical disks, rings,
and optically thin spheroids.
  • Optional calibration pre-application
  • Multiple components. The handling of more than one component
depends mostly on efficient means of managing the list itself (not easy in command line options), which are currently under development.
  • Combined component and calibration fitting.

Example (See Fig \ref{fig:modelfit}):

#
# Note: It's best to channel average the data before running a modelfit
#

split('data.ms',                # split data from your ms
      outputvis='1445_avg.ms',  # split to a new file containing just the gain cal
      fieldid=1,                # 2nd source = 1445
      nchan=1,                  # get a single channel that is the
      start=5,                  # average of 55 channels starting
      step=55,                  # at channel 6
      datacolumn='CORRECTED_DATA')# split the corrected data

uvmodelfit('1445_avg.ms',       # use averaged data
           niter=5,             # Do 5 iterations
           comptype='P',        # P=Point source, G=Gaussian, D=Disk
           sourcepar=[2.0,.1,.1],# Source parameters for a point source
                                # [flux, long offset, lat offset]
           nchan=-1,start=0,step=1, # reset to defaults
           file='gcal.cl')      # Output component list file
                                # Initial guess is that it's close to the phase center
                                # and has a flux of 2.0 (a priori we know it's 2.47)
#Output looks like:
CASA <25>: uvmodelfit('1445_avg.ms',niter=5,comptype='P',sourcepar=[2.0,.1,.1],file='gcal.cl',nchan=-1,step=1,start=0)
Tue Dec 12 23:02:05 2006      WARN Calibrater::setdata:
Selection is empty: reverting to sorted MeasurementSet
There are 19656 - 3 = 19653 degrees of freedom.
 iter=0:   reduced chi2=0.0413952:  I=2,  dir=[0.1, 0.1] arcsec
 iter=1:   reduced chi2=0.0011285:  I=2.48495,  dir=[-0.0265485, -0.0189735] arcsec
 iter=2:   reduced chi2=0.00112653:  I=2.48547,  dir=[-0.00196871, 0.00409329] arcsec
 iter=3:   reduced chi2=0.00112653:  I=2.48547,  dir=[-0.00195744, 0.00411176] arcsec
 iter=4:   reduced chi2=0.00112653:  I=2.48547,  dir=[-0.00195744, 0.00411178] arcsec
 iter=5:   reduced chi2=0.00112653:  I=2.48547,  dir=[-0.00195744, 0.00411178] arcsec
If data weights are arbitrarily scaled, the following formal errors
 will be underestimated by at least a factor sqrt(reduced chi2). If 
 the fit is systematically poor, the errors are much worse.
I = 2.48547 +/- 0.0172627
x = -0.00195744 +/- 0.159619 arcsec
y = 0.00411178 +/- 0.170973 arcsec
Writing componentlist to file: /Users/jmcmulli/ALMA/TST5/Regression/Scripts/gcal.cl

#Looks reasonable - got the right flux around the phase center
#chi2 went down and converged (its absolute value depends on the data weights)
# A point source has 3 free parameters (flux and long,lat offsets), hence
#  the 19656 - 3 degrees of freedom reported above
# Now use the component list to generate model data

ft('1445_avg.ms',  
   complist='gcal.cl')           # Fourier transform the component list -
                                 # this writes it into the MODEL_DATA column
                                 # of the MS
#
plotxy('1445_avg.ms',            # use the averaged dataset (it holds the
       xaxis='uvdist',           #  corrected data and the model from ft)
       datacolumn='corrected')   # Plot corrected data vs. uv distance
plotxy('1445_avg.ms',            #
       xaxis='uvdist',           #
       datacolumn='model',       # Plot the model data written by ft
       overplot=True,            # Specify overplot
       plotsymbol='bo')          # Specify blue circles for model data

%BEGINFIGURE{label="fig:modelfit" caption="Use of plotxy to display corrected data (red points) and uv model fit data (blue circles)."}% modelfit.jpg%ENDFIGURE%

Prepare uv data for imaging

Examine calibrated source data

Once the source data is calibrated you should examine the $uv$ data and flag anything that looks bad. If you find source data that is not bracketed in time by calibration scans, delete it (it will not be calibrated). For example:

flagxy('data.ms','uvdist','amp')              # Display data for flagging

See Chapter on Data Editing for descriptions of how to display/edit the data in xy-plot format.

uv Continuum subtraction (contsub)

At this point, consider whether you are likely to need continuum subtraction. If there is significant continuum emission present in what is intended as a spectral line observation, continuum subtraction may be desirable. You can estimate and subtract continuum emission in the $uv$-plane prior to imaging, or wait and subtract an estimate of it in the image-plane. Note that neither method is ideal, and the choice depends primarily upon the distribution and strength of the continuum emission. Subtraction in the uv-plane is desirable if continuum emission dominates the source, since deconvolution of the line emission will be more robust if it is not subject to errors in deconvolution of the brighter continuum. There is also a performance benefit, since the continuum is probably the same in each channel of the observation, and it is desirable to avoid duplication of effort. However, the main drawback of subtraction in the uv-plane is that it is strictly correct only for the phase center: the continuum estimate is formed per baseline, without a Fourier transform, and so exactly describes only emission at the phase center. Thus, uv-plane continuum subtraction will be increasingly poor for emission distributed further from the phase center. If the continuum emission is relatively weak, it is usually adequate to subtract it in the image plane; this is described in the Image Analysis section of this cookbook. Here, we describe how to do continuum subtraction in the uv-plane.

uv-plane continuum subtraction is performed by the contsub task. First, determine which channels in your data cube do not have line emission, perhaps by forming a preliminary image as described in the next chapter. This image will also help you decide whether or not you need to come back and do uv-plane continuum subtraction at all.

For each baseline, and over the timescale specified in solint, contsub will provide a simple linear fit to the real and imaginary parts of the (continuum-only) channels specified in channels, and subtract this model from all channels. Choose the timescale to be shorter than the timescale for changes in the visibility function of the continuum, but be careful not to make it so short that the SNR of the estimated continuum suffers substantially. For example:

contsub(vis='data.ms',                   # Select visibility data set
        fieldid=2,                       # Choose the source data (field 3),
        spwid=0,                         #  spectral window 1,
        chans=range(4,7)+range(50,60),   #  line free channels 4-6 & 50-59,
        solint=0.0,                      #  estimate/subtract per scan
        mode='subtract');                #  & subtract cont. from line data.
                                         # Line-only data will be written into
                                         # the  CORRECTED_DATA column.

Running contsub with mode='subtract' will replace the CORRECTED_DATA column in the MS with continuum-subtracted line data and the MODEL_DATA column with the continuum model. You can use mode='replace' to replace the CORRECTED_DATA column with the continuum model; however, it is probably better to use mode='subtract' and then use split to select the MODEL_DATA and form a dataset appropriate for forming an image of the estimated continuum. Note that a continuum image formed from this model will only be strictly correct near the phase center, for the reasons described above.

Note that it is currently the case that contsub will overwrite the CORRECTED_DATA column. Therefore, it is desirable to first split the relevant corrected data into a new Measurement Set. Also, contsub requires that the MODEL_DATA and CORRECTED_DATA columns exist, and split does not generate them. These columns are generally created when filling to an MS but can also be generated by the imager tool when forming the exploratory image described above. If you run contsub on the original dataset, you will have to re-apply the calibration as described in the previous chapter.

So, the recommended procedure is as follows:

  • Finish calibration as described in the previous chapter
  • Use split to form a separate dataset
  • Use the imager tool on the split result to form an
exploratory image that is useful for determining the line-free channels
  • Use contsub with mode='subtract' to subtract
the continuum from the CORRECTED_DATA in the split dataset (and write the continuum model in the MODEL_DATA column)
  • Image the line-only emission with the imager tool
  • If an image of the estimated continuum is desired, run split
again (on the *contsub*'d dataset) and select the MODEL_DATA; then run imager to image it, as in the sketch below
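A minimal sketch of that last step, assuming the contsub'd dataset is called data_line.ms (all names here are illustrative):

split(vis='data_line.ms',              # the contsub'd dataset
      outputvis='data_cont.ms',        # continuum-only output
      fieldid=0,                       # the (only) field in this dataset
      spwid=0,                         # first spectral window
      datacolumn='MODEL_DATA')         # select the continuum model
                                       #  written by contsub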

Optional: Split out calibrated uv data (split)

split(vis='ngc5921.ms',                 # input MS
      outputvis='ngc5921_src.split.ms', # Output just the source data
      fieldid=2,                        # Select the third source (0-based)
      spwid=0,                          # Select the first spectral window
      nchan=46,                         # Select a subset of the data channels
      start=5,                          # Start with channel 6 
      step=1,                           # 
      datacolumn='CORRECTED_DATA')      # Take the calibrated data column

Baseline-based Calibration (M, MF, K) -- Tool-based

Some calibration cannot be factored into antenna-based factors. Such effects occur physically after the signals from each antenna have been combined in the correlator. These may occur as a consequence of finite time- and frequency-averaging over other calibration factors before they are corrected. Any loss of coherence in such averaging gives rise to residual baseline-based errors which depend on the details of the combination of antenna-based errors on the baseline. Also, if only a single baseline is available, a single calibration solution for that baseline is obtainable, and short of additional information, it simply cannot be factored into unique terms for each antenna. Therefore, such situations necessarily require baseline-based calibration solutions.

It is important to note that source structure also introduces baseline-based information in the visibilities, and so it can be difficult to distinguish baseline-based errors from the information in which we are interested, namely, the astronomical visibilities (which are imaged). Therefore, baseline-based calibration factors should be used with great care, to avoid changing the desired astrophysical information in ways that cannot be determined. As indicated below, there are some calibration circumstances where such extreme measures are warranted (e.g., resolved bandpass calibrators), but the careful observer should always consider how different calibration factors affect the data, and this is especially important for baseline-based factors.

M, MF solutions (Generic Baseline-based Gain)

The calibration types M and MF provide the baseline-based analogs of G and B, respectively. M provides for (per-spectral window) time-dependent gain calibration for each baseline independently. MF provides for a frequency-dependent (within each spectral window) version of M. One or the other of these types can be used to compensate for any residual closure errors, usually after all antenna-based calibrations have converged. Since these types can absorb legitimate visibility information from the data, they should be used with great care, and usually only on sources for which there is no doubt about the source structure. It is therefore largely meaningless to use them in self-calibration; they will only reinforce the current source model, and can even artificially absorb thermal noise in the data.

To solve for M on an hourly timescale, using previously determined G and B solutions:

cb.reset()                               # Reset the calibrater tool
cb.setdata(msselect='FIELD_ID IN [0,1]') # Restrict data selection
cb.setapply(type='G',table='cal.G')      # Apply existing G solution
cb.setapply(type='B',table='cal.B')      # Apply existing B solution

cb.setsolve(type='M',table='cal.M',      # Setup to solve for M on
    t=3600)                              #  an hourly timescale,
                                         #  write solutions to a table on
                                         #  disk called 'cal.M'
cb.solve()                               # Solve

Note that refant is, of course, meaningless for baseline-based calibration types.

To apply these solutions, along with G and B:

cb.reset()                                  # Reset calibrater tool
cb.setdata(msselect='FIELD_ID IN [0,1,2]')  # Restrict data selection
cb.setapply(type='G',table='cal.G')         # Apply G solutions
cb.setapply(type='B',table='cal.B')         # Apply B solutions
cb.setapply(type='M',table='cal.M')         # Apply M solutions
cb.correct()                                # Correct data and write to
                                            #  CORRECTED_DATA column in MS

Use of MF is essentially identical, except that it will probably be used on even longer timescales than M (as for B, compared to G).
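For example, a sketch of an MF solve on a daily timescale, directly analogous to the M solve above (the timescale is illustrative):

cb.reset()                               # Reset the calibrater tool
cb.setdata(msselect='FIELD_ID IN [0,1]') # Restrict data selection
cb.setapply(type='G',table='cal.G')      # Apply existing G solution
cb.setapply(type='B',table='cal.B')      # Apply existing B solution
cb.setsolve(type='MF',table='cal.MF',    # Setup to solve for MF on
    t=86400)                             #  a daily timescale
cb.solve()                               # Solve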

An M solution can be especially useful for obtaining normalized bandpass solutions from significantly resolved calibrators where a source model adequate for pre-B G-solving is not available. Ordinarily, if the bandpass calibrator is nearly point-like, we first solve for G (using a point-like model):

\vec{V}_{ij}~=~G_{ij}~\vec{V}_{ij}^{\mathrm{~point}}

Then, use this G to solve for B (using the same model):

\vec{V}_{ij}~=~B_{ij}~\left(G_{ij}~\vec{V}_{ij}^{\mathrm{~point}}\right)

However, we will get (at best) awkwardly scaled and/or poorly converged B solutions if our source model is not sufficient for either of these solutions. In this circumstance, we can use M to absorb both time-dependent gain variations during the bandpass calibration scan and the source structure. First solve for M, using a point-like model:

\vec{V}_{ij}~=~M_{ij}~\vec{V}_{ij}^{\mathrm{~point}}

Then, use this solution to solve for B:

\left(M_{ij}^{-1}~\vec{V}_{ij}\right)~=~B_{ij}~\vec{V}_{ij}^{\mathrm{~point}}

The resulting B solution is nearly as good as one obtained with a good, resolved source model and G. It is somewhat less sensitive because more free parameters have been introduced, but this technique can often be useful. Note that the data for the bandpass calibrator, after correction by B and M, will be that of a point source, since M has absorbed the source structure information. This information has been sacrificed for a decent bandpass calibration applicable to other sources.
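In calibrater-tool terms, this two-step recipe might look like the following sketch (the table names are illustrative; t=0 requests per-scan solutions, as in the K example below):

# Solve for M on the (resolved) bandpass calibrator with a point model
cb.reset()
cb.setdata(msselect='FIELD_ID==0')        # Select the bandpass calibrator
cb.setsolve(type='M',table='cal.Mbp',     # M absorbs gain variations and
    t=0)                                  #  source structure, per scan
cb.solve()

# Now solve for B with the M solution pre-applied
cb.reset()
cb.setdata(msselect='FIELD_ID==0')
cb.setapply(type='M',table='cal.Mbp')     # Pre-apply M
cb.setsolve(type='B',table='cal.B',       # One B solution for the whole
    t=86400)                              #  observation
cb.solve()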

K solutions (Baseline-based fringe-fitting)

Fringe-fitting is essentially a generalization of ordinary phase calibration (the phase part of G, B, M, and/or MF) in which coefficients of the Taylor expansion of the phase in time and frequency are solved for. Usually, the expansion is only taken to first order in time and frequency. In this case, the phase can be written:

\phi = \phi(t_o,\nu_o) + 2\pi(\nu - \nu_o)\tau + 2\pi\nu(t - t_o)\dot{\tau}

In this equation, t_o and \nu_o are a fiducial time and frequency within the solution interval (where the phase is \phi(t_o,\nu_o)), and \tau and \dot{\tau} are the delay (phase slope in frequency) and delay-rate (phase slope in time), respectively.
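As a quick numeric illustration of this expansion (all values below are invented for the example):

import math

phi0   = math.radians(10.)      # phase at (t_o, nu_o)
tau    = 2.0e-9                 # delay: 2 ns
taudot = 1.0e-13                # delay-rate: 0.1 ps/s
nu0    = 8.4e9                  # fiducial frequency: 8.4 GHz
nu     = nu0 + 25.0e6           # evaluate 25 MHz above nu_o
t      = 30.                    # evaluate 30 s after t_o

phi = phi0 + 2*math.pi*(nu - nu0)*tau + 2*math.pi*nu*t*taudot
print math.degrees(phi)         # ~37 deg: 10 + 18 (delay term)
                                #  + ~9.1 (delay-rate term)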

Note: As written, this equation defines the group delay (by virtue of referring the frequency to the fiducial value) where the delay-rate is not necessarily the time derivative of the delay. For most current observations, this is the case simply because the noise on the delay solution far exceeds the variation implied by the delay-rates. The equation can also be written in terms of the phase delay, where the \nu_o is dropped from the second term, and the delay-rate is exactly the derivative of the delay. This mode will be useful for the wide-bandwidth instruments of the near future, and will be an option available for users of K.

Evidently, fringe-fitting permits more accurate phase tracking in both time and frequency when delay and delay-rate errors are large. Such errors originate in independent clock and position errors of antennas and in errors in the direction to radio sources. Fringe-fitting is normally associated with VLBI because these clock and geometry errors are more significant there than for smaller connected-element arrays, but the wide bandwidths and high frequencies of future instruments will likely find fringe-fitting useful.

The current implementation of fringe-fitting in CASA is baseline-based and quite simple. It solves the above equation for \phi(t_o,\nu_o), \tau, and \dot{\tau} for each baseline independently, assuming less than one cycle of phase over the bandwidth and duration of the solution interval.

Note: Solutions outside the first ambiguity require implementation of a first-guess mechanism (an FFT) to reduce the phase to within the first ambiguity. This is still under development.

A separate phase and delay is determined for each polarization, but only one rate is determined for both.

Note: This is almost always justified since the phase and delay correspond to signal path lengths, and the delay rate corresponds to the speed of the signal along the path. The phase and delay are likely to be different for different polarizations since these signals traverse different electronics with different path lengths, but the speed at which the signal travels is the same. In any case, this constraint will be relaxable in a future version.

To obtain a scan-based K solution, with an existing MF solution applied:

cb.setdata(msselect='FIELD_ID==0')        # select data
cb.setapply(type='MF',table='cal.MF')     # arrange apply of MF
cb.setsolve(type='K',table='cal.K',t=0)   # arrange solve for K
cb.solve()                                # solve

Application is like any other type:

cb.setdata(msselect='FIELD_ID IN [0,1]')  # select data
cb.setapply(type='MF',table='cal.MF')     # arrange apply of MF
cb.setapply(type='K',table='cal.K')       # arrange apply of K
cb.correct()                              # correct

Note that only nearest interpolation is available for application of K. This will be remedied in the near future.

Imaging Data

Overview

This chapter describes how to make and deconvolve images starting from a calibrated MeasurementSet.

Imaging Tasks

The current imaging tasks are:

  • invert - create a dirty image and psf
  • clean - calculate a deconvolved image with a selected clean algorithm
  • mosaic - calculate a multi-field deconvolved image with selected clean algorithm
  • feather - combine a single dish and synthesis image in the Fourier plane

Fine grained imaging: use of the imaging tool, im

The tool for imaging and deconvolution is imager (im). It has been designed to do gridding/degridding of visibility data, Fourier transforms, deconvolution using various methods, etc. A complete overview of the capabilities of imager can be found in the Reference Manual; note: at this time the Reference Manual still refers to the Glish syntax.

There are two sorts of tool functions in im: passive and active. The passive functions, like setimage and setdata, set the state of the im tool. Then, active functions, such as makeimage and clean, act on the data according to the previously set state.

To use an im tool, one must first construct it from a MeasurementSet and then configure it (set its ``state''). When you are finished, you destroy the tool (which of course does not affect the actual MeasurementSet on disk). To construct an imager tool for a MeasurementSet located at /home/data/data.ms:

im.open('/home/data/data.ms')            # Load data into imager tool
im.summary()                             # Summarize state of tool
im.close()                               # End the tool

The summary function is always useful to summarize the current state of the imager tool.

Setting Imaging Parameters

Data Selection

To make an image from the visibilities one needs to select the part of the data from the MeasurementSet that is going to be used (e.g., which pointing, channels, or spectral window is going to be imaged). This is done with the imager.setdata function. If you don't run this function, you get all of the data.

Here are some examples (refer to the Reference Manual at http://aips2.nrao.edu/stable/docs/user/SynthesisRef/node151.html for more details).

im.setdata(mode='channel', nchan=15,      # Select 15 channels, third
             spwid=2, fieldid=3)          #  spectral window  and fourth field
                                          # REMEMBER: everything is 0-based!

im.setdata(mode='channel', nchan=256,     # Select every second channel
             step=2, spwid=2, fieldid=3)  #  to make 256 total channels

im.setdata(mode='velocity', nchan=63,     # Select data based on
             mstart=quantity(20.,'km/s'), #  radio velocity
             mstep=quantity(-100.,'m/s'),
             spwid=2,
             fieldid=3)
im.setdata(mode='opticalvelocity',        # Select data based on
             nchan=63,
             mstart=quantity(20.,'km/s'), #  optical velocity
             mstep=quantity(-100.,'m/s')) #

# Consider a more complex data selection with different numbers of channels and
# different spectral windows:

im.setdata(mode='channel', spwid=[1,2],   # Select spectral windows 2 & 3,
             start=[3,4],                 #  spwid 1: start at 4, select 10 chans
             nchan=[10,12], step=[1,1])   #  spwid 2: start at 5, select 12 chans
                                          #  step by 1 channel for each spwid

Other functions can also further reduce the data selected, for example, imager.filter and imager.uvrange.

Setting Image Parameters

To set the actual imaging parameters use the imager.setimage function. This is a required function in imager (in fact, the only one presently). This function is passive; it just sets state so that when you tell imager to do something (like make an image), it knows what to do.

This function controls things like image size, cell size, spectral and polarization selection, sampling, phase center, and number of facets (for wide-field imaging). Images constructed afterwards (e.g., using the imager.clean function) will have these parameters.

One important point to note is that the image size does NOT have to be a power of two. Just choose some reasonably composite even number close to the number that you want. Do not use a prime number, since the Fourier transform will then be very slow.
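For instance, a small helper along these lines can pick a nearby FFT-friendly size (this function is purely illustrative and is not part of CASA):

def good_imsize(n):
    # Return the smallest even number >= n whose prime factors
    # are all 2, 3, or 5 (such sizes transform quickly).
    def smooth(m):
        for p in (2, 3, 5):
            while m % p == 0:
                m //= p
        return m == 1
    m = n + (n % 2)              # start from an even number
    while not smooth(m):
        m += 2                   # stay even
    return m

print good_imsize(600)           # -> 600  (600 = 2^3 * 3 * 5^2)
print good_imsize(1013)          # -> 1024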

Another important point is that the data selection between setdata and setimage should be aligned; that is, there must be some overlap (e.g., don't select fieldid=1 in setdata and fieldid=2 in setimage). If you aren't doing any averaging, then the data selection fields (fieldid, mode, nchan, start, spwid, etc.) should be the same in both function calls; see the Channel Selection and Combination section below for more details.

As an example, to set the image parameters for a simple image cube:

im.setimage(nx=600, ny=600,                # Define image to be 600x600 pixels
     cellx=quantity(0.5,'arcsec'),
     celly=quantity(0.5,'arcsec'),         #  with 0.5'' pixels for
     fieldid=2, mode='channel',            #  field 3 and image 20 channels
     nchan=20, start=10)                   #  starting at channel 11.

The phase center is that of the specified field center. In addition, each selected channel will be imaged as one channel of the output image.

Channel Selection and Combination

The functions imager.setdata and imager.setimage both have arguments nchan, start, and step. setdata is used to select channels. setimage is used to specify how many output channels the image will have, and how the input channels should be combined (averaging, gridding).

You should call setdata first. After you have called this function, subsequent active imager function calls only have available to them the data that you selected.

Note, however, that when you call setimage, the specification of the channels is still in an absolute sense. Thus, if you set nchan=10, start=20 in setdata, and you wish to image all of these channels (let us say one image plane per channel), you must also set nchan=10, start=20 in setimage.

You might think that you have already selected the 10 channels you want with setdata and that therefore setting start=1 in setimage would select the first of those channels. This is not the way it works. If you asked for, say, channel 1 with setimage when you did not select channel 1 with setdata, that channel would be imaged but empty in the output image.

You use the channel selection parameters in setimage to specify how the selected channels will be combined and how many output channels the image will have. Basically, there are two sorts of ways that you might like to use the channels that you have selected.

Firstly, in a multi-frequency synthesis image, all of the selected channels are gridded onto the $uv$ plane to reduce bandwidth smearing (which would be incurred if you averaged the channels and then gridded). In this case, the {\tt step} argument is not generally relevant; leave it at 1 when {\tt mode='mfs'} is used. For example:

im.setdata(mode='channel',      # Select 32 channels and start at channel 11
     nchan=32, start=10, step=1)
im.setimage(mode='mfs',         # In mfs mode, create the average of
     nchan=1, start=0, step=1)    #  the 32 channels selected above

Or, if you want a straight average:

im.setdata(mode='channel',      # select 32 channels starting at channel 11
     nchan=32, start=10, step=1)
im.setimage(mode='channel',     # Select 1 channel - this will be the
     nchan=1, start=10, step=32)  #  average of the 32 channels selected above

Secondly, when you make a spectral-line cube, you may wish to select/combine channels in a variety of ways according to the science you want to do. Here are some examples.

im.setdata(mode='channel',       # Select 200 consecutive channels starting
     nchan=200, start=10, step=1)  #  with channel 11
im.setimage(mode='channel',      # Form an image plane for each
     nchan=200, start=10, step=1)  #  selected channel

im.setdata(mode='channel',       # Select 200 channels starting with channel
     nchan=200, start=10, step=5)  #  11, pick every fifth channel
im.setimage(mode='channel',      # Form an image plane for each
     nchan=200, start=10, step=5)  #  selected channel

im.setdata(mode='channel',       # Select 200 channels starting with channel
     nchan=200, start=10, step=5)  #  11, pick every fifth channel
im.setimage(mode='channel',      # Each channel of the image is formed by
     nchan=100, start=10, step=10) #  averaging 2 successively selected channels

In the above example, channels 11, 16, 21, 26, 31, and so on are selected. During imaging, channels 11 and 16 will be averaged to form output image channel 0; channels 21 and 26 are averaged to form output channel 1, and so on.

Of course you could also use {\tt mode='mfs'} when combining groups of channels if you want an output image of more than one channel. In this case the combination is done by gridding rather than averaging.

Now to an example when one wants to make a cube image from multiple spectral windows:

im.setdata(mode='channel',               # Select 600 data channels from
     spwid=[0,1,2], nchan=[200,200,200],   #  3 spectral windows(200 from each)
     start=[0,0,0], step=[1,1,1],          #  start at channel 1, step by 1,
     fieldid=1)                            #  for field 2
im.setimage(mode='channel',              # Create 150 channels, average
     spwid=[0,1,2], nchan=150,             #  4 channels for each image
     start=0, step=4,                      #  plane.
     fieldid=1)                            #

In the above example we select 600 data channels from 3 spectral windows (200 from each spectral window). Then in the {\tt setimage} step we have {\tt imager} combine 4 channels into each image channel. Note that the spectral windows should have some overlapping channels for this procedure; {\tt imager} will figure out what the overlap is and create a continuous image cube from all 3 spectral windows.

Now consider an example in which all data in spectral windows 1 and 2 are selected. Then define an image in terms of velocity values:

im.setdata(fieldid=1, spwid=[0,1])       # Select field 2 and sp windows 1 & 2
im.setimage(nx=800, ny=800,              # Image will have 800x800 pixels
     cellx=quantity(0.5,'arcsec'),
     celly=quantity(0.5,'arcsec'),       # 0.5'' on a side, create 30 channels,
     mode='velocity', nchan=30,          #  start at -10km/s and step by
     mstart=quantity(-10.,'km/s'),
     mstep=quantity(1.8,'km/s'),         #  1.8km/s.  Do this for field 2
     spwid=[0,1], fieldid=1)             #  and use both spectral windows

Examples for mosaics are given in Section \ref{mosaic}.

Weighting

The above steps show how to set up the {\tt Imager} tool as desired. In addition, before imaging one may wish to set some values in the MeasurementSet itself. This is necessary for visibility weighting. The visibility imaging weights are computed prior to the actual imaging and stored in a column of the MeasurementSet called ``IMAGING\_WEIGHT''.

The first time an {\tt Imager} tool is attached to a MeasurementSet it initializes the imaging weights to the natural weighting scheme. Thereafter, whatever weighting scheme you last used is the one you will get if you don't explicitly run one of the weighting functions.

The values in the IMAGING\_WEIGHT column are set, changed, and examined by the following functions.

  • im.weight -- sets the column using one of natural, uniform, or Briggs (robust) weighting. For the latter two methods, one can specify the field of view over which the minimization of the sidelobes is done (thus achieving what is often called super-uniform weighting). Below are some examples of how to set imaging weights:
im.weight(type='natural')                # Natural weighting
im.weight(type='uniform')                # Uniform over entire field of view
im.weight(type='uniform', npixels=300)   # Uniform over specified size
im.weight(type='briggs', rmode='norm',   # An example of Briggs
     robust=0.5)                           #  weighting

  • im.filter -- applies an optimal filter for a given shape (often called 'tapering'). In the following example we apply a Gaussian filter:

im.filter(type='gaussian',                 # Apply a Gaussian taper
     bmaj=quantity(4.0,'arcsec'),
     bmin=quantity(2.5,'arcsec'),          #  that is 4x2.5 arcsec in size
     bpa=quantity(60.,'deg'))              #  at a position angle of 60deg

  • im.plotweights -- plots the imaging weights either as a function of uv distance or on a gridded plane.

  • im.sensitivity -- calculates and returns the sensitivity of the resulting weighting both absolutely and relative to natural weighting.

  • im.fitpsf -- calculates the dirty point spread function and returns the best fitting Gaussian. A short sketch combining these diagnostic functions follows this list.
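
A hedged sketch of using these diagnostics together (the exact argument names and return values may differ in your CASA version):

im.weight(type='briggs', rmode='norm',  # set robust weighting as above
     robust=0.5)
im.plotweights()                        # inspect the resulting weights
im.sensitivity()                        # report the sensitivity and the
                                        #  penalty relative to natural weighting
im.fitpsf(psf='dirty.beam')             # fit a Gaussian to a PSF image
                                        #  (see im.makeimage below)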

WARNING: All the weighting schemes modify the MeasurementSet; the weights thus persist even after the Imager tool is destroyed and may no longer be suitable for subsequent imaging. You can of course reset the weighting scheme with the im.weight function.

Creating and Deconvolving Images

Creating Images

  1. It may be helpful to use the im.advise function to determine the cell size, as well as the number of pixels for a given field of view.

Imagine we want to make an image over a field of view of 10 arcmin. The im.advise function will return the maximum pixel size required and number of pixels. If the number of facets required is larger than 1 then one needs a wide-field imaging algorithm as described in the \ahlink{wide-field}{GRwidefield} chapter of Getting Results. The recommendations should always be considered in the context of your imaging goals before being used. For example, to use the advise function on a split MeasurementSet that contains only calibrated source data:

im.open('source.ms')               # Create the imager tool
im.advise(fieldofview=quantity(10.,'arcmin'))  # Provide advice if FOV=10'

 # Logger will report something like:
   Maximum uv distance = 31068.1 wavelengths
   Recommended cell size < 3.31956 arcsec
   Recommended number of pixels = 192
   Dispersion in uv, w distance = 16011.2, 6277.93 wavelengths
   Best fitting plane is w = -0.0543279 * u + -0.418078 * v
   Dispersion in fitted w = 4882.22 wavelengths
   Wide field cleaning is not necessary

  2. It is often useful to make a dirty image of the field before deconvolution. Use the im.makeimage function to make a dirty image and the point spread function:

im.makeimage(type='corrected', # Make a dirty image and store
     image='dirty.image')      #  it in the file called 'dirty.image'

im.makeimage(type='psf',       # Make a PSF image and store it
     image='dirty.beam')        #  in the file on disk called 'dirty.beam'

Deconvolution

At this point, you are ready to deconvolve the point spread function (usually called the dirty beam) from the dirty image. The goal of this step is to correct the dirty image for the sidelobes in the point spread function which arise from the limited Fourier plane coverage in your observation.

The available deconvolution algorithms are:

  • CLEAN; H\"ogbom, Clark and multi-scale variants of the CLEAN algorithm are available using \ahlink{im.clean}{imager:imager.clean.function}.

If you use a multi-scale algorithm, you should also set up the scale sizes via the im.setscales function.

CLEAN hints

For data using a PSF with high sidelobes, data that hasn't been properly calibrated, or mosaics in which the PSF differs strongly between fields, the clean residuals may begin to diverge (rather than converge to zero). In these cases, there are a number of controls to help rein in the cleaning process:

  • Return to the visibilities often, regenerating the residuals from the residual visibilities (i.e., force frequent major cycles).
    • If using one of the multi-field algorithms (mf*), the setmfcontrol parameters provide additional controls on the process.
      • cyclefactor: The threshold in a major cycle by default is 1.5 times the product of the peak residual and the maximum outer sidelobe. This can be increased to raise the threshold.
      • cyclespeedup: the number of iterations after which the major-cycle threshold is doubled.
    • Manually set niter to a small value and restart clean multiple times. im subtracts the model visibilities every time it is started with a model image; this way, one can force a major cycle every niter iterations.
  • Use a mask around the region where one believes there is real emission. In confused regions or with strong sources, use interactive masking (\ref{regionmask}) to adjust the mask by looking at the residual after some iterations of clean.
  • Use a very low gain.
  • Re-calibrate, edit the data.

Example: Force major cycles:

im.open("orion.ms" )                       # load data into imager (im) tool
im.setdata(mode="none" ,                   # set all channels
        spwid=[0, 1] ,                     # set spectral windows 0 and 1
        fieldid=6)                         # set field 7
im.setimage(nx=500, ny=500,                # set 500 x 500
        cellx=quantity(2.0,'arcsec'),      # set cells to be 2"x2"
        celly=quantity(2.0,'arcsec'),
        stokes="I" ,                       # set Stokes I
        spwid=[0,1],                       # image spectral windows 1 and 2
        fieldid=6)                         # image field 7
im.weight(type="briggs",                   # Briggs weighting parameters
        robust=0,
        fieldofview=quantity(10.,'arcsec'))
im.make('field_7c')                        # make a blank image to use as starting model
im.setscales(scalemethod='uservector',     # set multiscale clean scales
        uservector=[0,3,10,50])
im.setmfcontrol(stoplargenegatives=-1)     # continue component search even if
                                           # we've found negative components

  for k in range(10):                         # do a loop to force major cycles based
     im.clean(algorithm='mfmultiscale',    #  on the number of iterations
     niter=2000,                           # in the initial trial, 10000 iterations
                                           # diverged, so we choose 2000 iterations
                                           # (a fraction of 10000) to force the major cycle
     gain=0.1, threshold='0.005Jy',        #  force major cycles every 2000 iterations
     model=['field_7c'],
     residual=['field_7c.residual'],
     image=['field_7c.restored'],
     mask=['orion.mask2'])                 # use a tight mask around emission
                                           # you can generate this with interactivemask

  • Maximum Entropy; maximum entropy and maximum emptiness methods are available. Functions that implement MEM are im.mem and dc.mem.

im.setdata(mode='channel',       # Select 200 consecutive channels starting
     nchan=200, start=10, step=1)  #  with channel 11
im.setimage(mode='channel',      # Form an image plane for each
     nchan=200, start=10, step=1)  #  selected channel
im.mem(algorithm='entropy',      # Select maximum entropy algorithm
     niter=10,                     # niter is smaller than that used in CLEAN as
                                   # it is not the components but the number of
                                   # iterations to maximize the entropy
     sigma=quantity(0.1,'Jy'),     # target noise to achieve in residual image
     targetflux=quantity(10.,'Jy'),# an estimate of the total flux in the image -
                                   # if uncertain, start with 1/10th to 1/2 of the
                                   # expected source flux
     constrainflux=F,              # set this to 'T' if you want the total flux fixed
                                   # to the target flux specified above
     prior='priorimage',           # if available, an image that gives some knowledge of
                                   # how the flux is distributed. This is not necessary.
     image=['maxen.image'],        # output images
     model=['maxen.model'],
     residual=['maxen.resid'],
     mask=['mask.im'])             # If needed you can specify a region to constrain the
                                   # flux

MEM hints

How to set up MEM so it can converge; how to decide when to stop MEM

  • Run MEM with only a few iterations (niter=5-10) and no mask. The image will be poor but you can use it to define a mask. This is critical as MEM can easily diverge if you have complicated emission and no mask. Go through this procedure, increasing niter a bit until you have a good mask (a sketch of this two-pass procedure follows these hints).

  • Start with about half of the flux density expected in the final image. This allows MEM adequate cycles to achieve convergence. If you start with the full known flux density, MEM may not converge to the correct flux. If you start with too little flux density, MEM may diverge. If in doubt, start with a small value and watch the convergence. If you get no convergence on the final map flux within 20 to 50 iterations, increase the input target flux until you see MEM behaving reasonably. Note: when using a single-dish image as the starting model, the target flux should be set to the value of the single-dish flux since this is a known, rigid constraint.

  • Use a sigma input that is about equal to, or lower than, what you expect for the final image. This is not too critical. If it is clearly too high, MEM will stop before it has obtained a good image. You want sigma low enough that MEM continues to deconvolve the image until it has converged to a good solution. You will probably stop MEM manually anyway based on the displayprogress plot.

  • When you have arrived at good initial inputs to MEM and you have a good mask,
the displayprogress plot will look something like Figure \ref{fig:displayprogress}. You will get discontinuities between each major cycle. Here you see 6 major cycles (1st one has only 1 iteration). Up through major cycle 5 (iterations 10-32), the peak flux found and residual sigma decrease steadily while flux increases. During cycle 5, all parameters stabilize. In cycle 6, the peak flux and sigma show signs of instability (they get a small bump). This is the first sign that MEM is digging too deep. In this example, stop MEM before the peak and sigma become unstable.

  • If you compare images: stopping the iterations after cycle 5 will get you
a nice flat residual map and a good deconvolved image with low background. If you go beyond cycle 5, you will get increased background, stripes and a worse residual image.

  • When in doubt, make lots of images and compare. Also, compare MEM with CLEAN
or multi-scale CLEAN.
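
Putting these hints together, a sketch of the two-pass procedure (the image names are hypothetical):

im.mem(algorithm='entropy', niter=5,    # first pass: shallow, no mask
     sigma=quantity(0.1,'Jy'),
     targetflux=quantity(5.,'Jy'),      # roughly half the expected flux
     image=['trial.image'],
     model=['trial.model'],
     residual=['trial.resid'])
                                        # inspect trial.image, build a mask around
                                        #  the real emission (e.g. interactivemask),
                                        #  then run deeper with the mask in place:
im.mem(algorithm='entropy', niter=30,
     sigma=quantity(0.1,'Jy'),
     targetflux=quantity(5.,'Jy'),
     displayprogress=T,                 # watch for instability between major cycles
     image=['maxen.image'],
     model=['maxen.model'],
     residual=['maxen.resid'],
     mask=['maxen.mask'])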

The deconvolution methods do not keep a list of CLEAN components in sequence. Instead the CLEAN components are immediately added to the model image. This allows interoperability of the CLEAN and MEM algorithms as well as the deconvolution functions in the \ahlink{Deconvolver}{deconvolver:deconvolver} tool. It also allows editing of the resulting images using the Image tool.

The deconvolution functions can return the residual and restored images, as well as the updated model image. In addition, there are \ahlink{imager.residual}{imager:imager.residual.function} and \ahlink{imager.restore}{imager:imager.restore.function} functions that also compute residual and restored images. The distinction is that in the former, the residual is that calculated by the deconvolution method at the end of iteration, whereas the latter returns the residual calculated by going back to the original visibility data (except for the cases of multi-field and wide-field algorithms).

An example of using Clark CLEAN deconvolution with the {\tt imager} tool is given below:

im.clean(algorithm='clark',     # Clean a single field with Clark clean
     niter=2000, gain=0.2,      #  using 2000 iterations, set loop gain=0.2,
     model=['model_2000'],      #  The model image is called model_2000,
     residual=['residual_2000'],#  the residual image is residual_2000,
     image=['restored_2000'])   #  the final restored image is restored_2000.
                                #  All images are written to disk.

If the model image does not pre-exist, it will be created and filled with the CLEAN delta function components found. If it does pre-exist and is non-empty (perhaps the result of a prior deconvolution), its Fourier transform is first subtracted. This is also how you would continue to CLEAN deeper from a prior run of {\tt clean} or {\tt mem}.

Note that the model image is updated when this function finishes. So if it was non-empty on input, you should save a copy first, if you wish to preserve that model.
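
Since CASA images are stored as directories on disk, a simple way to preserve the model before cleaning deeper is to copy it first; a minimal sketch (file names hypothetical):

import shutil
shutil.copytree('model_2000', 'model_2000.bak')  # keep a copy of the current model
im.clean(algorithm='clark',     # continue cleaning; model_2000 is updated in place
     niter=2000, gain=0.2,
     model=['model_2000'],
     residual=['residual_4000'],
     image=['restored_4000'])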

Multi-scale CLEAN

General

Delta function CLEAN algorithms, such as the Clark or Hogbom algorithms, must use a modest gain factor (traditionally 0.1) in order to image extended emission in a reasonable manner. Otherwise, emission and sidelobes get confused and error striping can result.

Multi-scale CLEAN algorithms attempt to recognize that the emission in the sky usually comes in a variety of scale sizes; it decomposes the image into Gaussians of the specified scale sizes. This means it does not spend huge amounts of time CLEANing large extended sources pretending that they are really collections of delta functions. However, note: because you are CLEANing multiple scale sizes, multi-scale CLEAN generally takes longer than CLEAN to run.

The \ahlink{im.setscales}{imager:imager.setscales.function} function allows you to choose these scales, either with an automatic method (you say how many scales you want) or by direct specification of the scale sizes in pixels.

Multi-scale CLEAN does not suffer from the same problems as delta function CLEAN and can use a larger value of the gain factor. Sometimes an oscillatory pattern will occur in the total cleaned flux with the peak residuals bouncing back and forth between positive and negative. If this occurs, try reducing the gain factor or try reducing the size of the largest scale.

As an example:

im.setdata(mode='channel', nchan=15,   # Select 15 channels,
     start=10, step=1,                 #  starting with channel 11, use
     spwid=2, fieldid=3)               #  spectral window 3 and field 4
im.setimage(nx=600, ny=600,            # Set up the imaging parameters
     cellx=quantity(0.5,'arcsec'),     #  Combine all channels into a single
     celly=quantity(0.5,'arcsec'),
     mode='mfs',                       #  plane using multi-frequency synthesis.
     spwid=2, fieldid=3)
im.setscales(scalemethod='nscales',    # Set up multi-scale components
     nscales=3)                        # Specify number of scales, allow imager
                                       #  to determine the scale sizes.
im.clean(algorithm='msclean',          # Deconvolve using multi-scale CLEAN
           model=['model'],            #  Call the model image 'model' and
           image=['restored'],         #  restored image 'restored'
           niter=100, gain=0.3)        #  clean for 100 iterations using a loop
                                       #  gain of 0.3.

Multi-scale CLEAN hints

  • Use im.setscales to set scalemethod='uservector'. Look at the general
distribution of emission and choose scales that are appropriate for the emission (e.g., start with 0 and 3 pixels; add scale sizes until the largest scale is about the size of the largest diffuse clump in the map). For example, assume you have emission that covers a 4 arcminute area, and there is significant diffuse emission that looks like it has a size scale of about 30 - 40 arcseconds. With 1 arcsecond pixels, choose deconvolution scales of [0,3,10,30] pixels. You can go smaller (e.g., [0,2,5,10]), but don't go much larger.

  • The more scale sizes you choose, the more memory it will take to deconvolve
the image. Be conservative.

  • The largest scale you choose should fit in any deconvolution masks you set
(or multi-scale CLEAN will not be able to use the largest scale).

  • If you have a mosaic where the fields are not sampled very quickly (e.g., field uv coverages differ somewhat) then the PSF will also differ between fields, sometimes significantly. \casa\ finds an average PSF that is common to all fields and cleans down to a certain level in each major cycle. In almost any mosaic taken with current instrumentation, fields are observed at different times and can be observed at slightly different frequencies. Thus, the PSFs between fields will differ enough that convergence in multi-scale CLEAN can be challenging. Try starting with cyclefactor=3 and cyclespeedup=500 in imager.setmfcontrol (see the sketch after these hints).

  • While multi-scale CLEAN is running, watch the progress. If there is
significant large-scale emission, initial major cycles will only get flux on the largest scale. As the number of cycles increase, more scales will begin to accumulate flux. Eventually, smaller scales will go negative; don't panic. The smaller scales are compensating for errors in the large scale flux distribution. Stop the iterations when you see minimal or negative flux on all scales. We recommend using the imager.setmfcontrol method with the stoplargenegatives=-1 setting. See the example (\ref{fig:displayprogress2}). Notice as the number of iterations increases, the final threshold obtained in the residual image decreases.
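
Combining the hints above, a reasonable starting setup for a mosaicked multi-scale CLEAN might look like the following sketch (the scale sizes and cycle controls are the example values from these hints; adjust them to your data):

im.setscales(scalemethod='uservector',  # scales matched to the emission; the
     uservector=[0,3,10,30])            #  largest fits the largest diffuse clump
im.setmfcontrol(cyclefactor=3,          # force earlier major cycles when the
     cyclespeedup=500,                  #  per-field PSFs differ
     stoplargenegatives=-1)             # don't stop on a negative component
                                        #  at the largest scale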

Mosaicing

The Fourier transform relationship between the Fourier plane and the image plane must include the primary beam:

V(u) = \int A(x) I(x) e^{-i2 \pi u x} dx

where V(u) is the measured visibility, A(x) is the primary beam response, and I(x) is the true brightness distribution on the sky.

Hence, given the image, one can simulate the corresponding $uv$ data. However, given the data and desiring the image, we have an inverse problem to solve.

Early attempts at mosaicing treated each field independently, deconvolving and self-calibrating each, and then sewing the overlapping fields' images together via the basic mosaicing equation:

I(x) = \frac{ \sum_f I_{f}(x)  A_{f}(x) } { \sum_f A^{2}_{f}(x) }.

where the subscript f refers to each field in the mosaic. However, Cornwell (1988) demonstrated that better results can be obtained via a simultaneous deconvolution of the data from all the fields. This simultaneous deconvolution was achieved by using maximum entropy (MEM) or maximum emptiness as a solution engine to solve the inverse problem. Total power could be added as additional fields with their own primary beam. However, MEM's positivity bias, which is detrimental to low-SNR imaging, led to a search for other algorithms to image multi-field data.

Sault et al. (1996) have implemented mosaicing algorithms which can use either CLEAN or MEM for simultaneous deconvolution.

Mosaicing Algorithm

Cornwell, Holdaway, and Uson (1994) proposed the following mosaicing algorithm: generate the mosaic of the dirty images and a single approximate point-spread function (PSF), and then proceed with any conventional single-field deconvolution algorithm. For high-quality Fourier-plane coverage and similar PSFs for all fields in the mosaic, this approach is not limited by the differences between the approximate PSF and each field's actual PSF until the attainable image dynamic range exceeds a few hundred to one.

\casa\ takes this approach to mosaicing a step further: perform an incremental deconvolution of the residuals with the approximate PSF, with an exact subtraction of the cumulative model brightness distribution at the end of each incremental ``major cycle'' (similar in concept to the major cycles of the Clark CLEAN).

If all of the fields are observed with many short snapshots over and over again (this is the preferred way to make a mosaic observation) then each field will have similar Fourier coverage and hence similar synthesized beams. An approximate PSF can be created which is a fairly good match to the actual PSF of each of the fields. Also, if the sky-coverage of the observed fields is Nyquist or better, then the approximate, shift-invariant PSF will be a reasonable match to the actual PSF of sources at various locations across the mosaic. The residual visibilities from each field can be transformed and mosaiced to make a single residual mosaic image. This mosaic image can be deconvolved with the deconvolution method of your choice; for example, with Clark CLEAN, Multiscale CLEAN, maximum entropy, or maximum emptiness.

The deconvolution algorithm cannot deconvolve arbitrarily deeply, because at some level the discrepancies between our approximate shift-invariant PSF and the true PSF at any location in the image will become apparent, and we will start ``cleaning'' error flux. Hence, we need to stop deconvolving when we have gotten down to the level of these PSF discrepancies. At this point, we take the part of the model brightness distribution we have just deconvolved and calculate model visibilities (using the measurement equation) and subtract them from the (corrected) data visibilities. To the extent that the primary beam and sky pointing are exact, the visibility subtraction is also exact. The residual visibilities can then be re-mosaiced, but the peak residual is at a much lower level. The process of deconvolving with the approximate, shift-invariant PSF then continues, and another increment to the model brightness distribution is formed, removed from the remaining residual visibilities, and added to the cumulative model brightness distribution. Borrowing from the Clark CLEAN's terminology, we call each cycle of incremental deconvolution and exact visibility subtraction a ``major cycle''.

How to Create a Mosaic Image

Set the Data Fields

Mosaicing can be a time consuming process, so it may be worthwhile to make a restricted version of the mosaic first. For example, you may want to image a few fields at lower resolution to reduce the number of pixels you are imaging. Eventually, you will want to image most or all of the observed fields. Use the {\tt setdata} function to restrict the data over which you will generate your image, e.g.,

im.setdata(fieldid=range(0,4))       # Select first 4 fields

Set the Image

One of the fields must be specified in im.setimage to provide the direction of the resultant image's reference pixel. For example, with a 25 pointing (5 x 5 raster) observation, field 13 could be the central field:

im.setdata(fieldid=range(0,25))      # Select all 25 pointings
im.setimage(nx=256, ny=256,          # Create a 256x256 resultant image
     cellx=quantity(3.,"arcsec"),    # Choose a cell size of 3x3 arcseconds
     celly=quantity(3.,"arcsec"),
     fieldid=12)                      # Select field 13 as the center

Setting the Voltage pattern (primary beam)

We must account for the primary beam pattern when imaging the same region but from different pointings. CASA currently has primary beam patterns stored for:

  1. ATCA
  2. GBT
  3. GMRT
  4. HATCREEK
  5. NMA
  6. NRAO12M
  7. NRAO140FT
  8. OVRO
  9. VLA
  10. WSRT
  11. OTHER

In addition, there are specific beams available for each of the following:

  1. ATCA_L1, ATCA_L2, ATCA_L3, ATCA_S, ATCA_C, ATCA_X, GBT, GMRT, HATCREEK, NRAO12M, NRAO140FT, OVRO, VLA, VLA_INVERSE, VLA_NVSS, VLA_2NULL, VLA_4, VLA_P, VLA_L, VLA_C, VLA_X, VLA_U, VLA_K, VLA_Q, WSRT, WSRT_LOW

im.setvp(dovp=T)  # dovp=do the voltage pattern correction
                  # this will default to the voltage pattern for the
                  # telescope used and frequency observed. If no such
                  # default exists, a warning is issued.

If a default voltage pattern is not available, you can specify your own voltage pattern and bind it to the telescopes in a MeasurementSet by using the \href{http://aips2.nrao.edu/stable/docs/user/SynthesisRef/node228.html}{vpmanager} (voltage pattern manager). The vpmanager will produce a table describing the different telescopes' voltage patterns, and this table can be used by Imager. See the \href{http://aips2.nrao.edu/stable/docs/user/SynthesisRef/node228.html}{vpmanager} section in the User Reference Manual for details. This is generally not necessary.

Weighting

If you use ``uniform'' or ``briggs'' weighting, the weighting details will depend upon the way the data are gridded. However, if all fields are specified in function setdata, then the weights from all fields will be gridded onto a single grid for the purposes of calculating the weights. This is valid for natural weighting but will not be valid for other types of weighting. You probably want to weight the data on a field-by-field basis:

im.weight(type="uniform",mosaic=T)  # This will weight each field separately

Options

For mosaicing, there is one key option: the gridding can use the Fourier transform of the primary beam as the convolution function. This is achieved using the option ftmachine="mosaic".

im.setoptions(ftmachine='mosaic')   # do a mosaic gridding

The normal gridder uses a spheroidal function to grid the data. For mosaics, using the Fourier transform of the primary beam as the gridding function has some advantages.

After a major cycle, the primary beam correction using the mosaic gridder leaves the (uncleaned) noise floor flat while properly scaling the flux of the deconvolved components. The spheroidal gridding function, after correction for the primary beam, tends to lift the noise at the edge of the beams and is thus non-optimal for images with emission extending to the edge of the mosaic.

In addition, controls are available using the setmfcontrol function to help adjust the cycle parameters (the conditions for ending the deconvolution cycles) for multi-field (and wide-field) imaging.

# Set the image plane flux scale type to "SAULT"
# Set a cutoff for the minimum primary beam level to use
# constpb is used for SAULT weighting to set the flux scale constant above
# this level.
im.setmfcontrol(scaletype='SAULT', # This selects Sault weighting which uses a
                                     # primary beam function that is flat across most
                                     # of the mosaiced image but is attenuated at
                                     # the edges of the mosaic so that the noise level
                                     # does not become excessive there
                  minpb=0.07,        # minimum primary beam level to use in each field
                  constpb=0.4)       # The noise is constant above this primary
                                     # beam level

Deconvolving

To image and deconvolve, use either imager's CLEAN or MEM functions. Only algorithms with the ``mf'' prefix will perform multi-field imaging correctly (e.g., algorithm ``clark'' will grid the data from all specified fields onto the same grid, resulting in a very confused image). CLEAN's mosaicing methods include mfclark, mfhogbom, and mfmultiscale, while MEM's mosaicing methods include mfentropy and mfemptiness. Some hints for CLEAN and MEM are provided at \ref{cleanhints} and \ref{memhint} respectively. A full example of how to create a mosaic image is given below. A detailed script is given at the end of this chapter.

im.open('data.ms')  # load data into imager
                                                # choose all fields (1-10) and all channels
im.setdata(mode='channel', nchan=62, start=0,   # restrict the data to the
     step=1,fieldid=range(0,10), spwid=[0,1])   # first two spectral windows
im.setimage(nx=640, ny=256,                     # select the output image size
     cellx=quantity(0.5,'arcsec'),              # and cell size properties
     celly=quantity(0.5,'arcsec'),stokes='I',
     mode='channel',
     nchan=18,start=2,step=5,fieldid=0,
     spwid=[0,1])                               # do 18 channels starting
                                                # at channel 3 and averaging 5
                                                # channels; use field 1 as the mosaic
                                                # field center
im.setvp(dovp=T)                                # do primary beam correction
im.weight(type="uniform",mosaic=T)              # use uniform weighting for each field
im.setoptions(padding=1.0, ftmachine='mosaic')  # do mosaic gridding
im.clean(algorithm="mfclark" , niter=1000,      # use mfclark algorithm
     gain=0.1, threshold=quantity(0.005,'Jy'),
     model=["src.all.cln.model"],               # stop cleaning at a threshold
                                                # of 0.005 Jy
     image=["srcmos"], residual=["src.all.cln.resid"])

Mosaic Details

Controlling the Major Cycles

The key to making the incremental deconvolution in \casa\ multi-field imaging successful lies in controlling just how deeply to deconvolve in the major cycles. The control parameters discussed here can be set with \href{http://aips2.nrao.edu/stable/docs/user/SynthesisRef/node191.html\#imager:imager.setmfcontrol.function}{im.setmfcontrol}. Deconvolving too deeply with the approximate PSF will waste computation time as slightly different PSF sidelobes add and subtract components. Not deconvolving deeply enough also causes problems, as it necessitates more major cycles, again slowing down the computation. The cycle depth can be adjusted in two ways:

  • By increasing the cyclefactor. The major-cycle cleaning threshold is a fraction of the peak image residual; that fraction is the cyclefactor multiplied by the peak negative sidelobe of the approximate PSF. Stopping the major-cycle cleaning sooner can therefore be accomplished by increasing the cyclefactor. Values between 2 and 4 are common.

  • By decreasing the cyclespeedup. What if the cyclefactor is set too low? The cycle threshold calculated above can be made to drift upwards by setting the cyclespeedup: the threshold doubles every cyclespeedup iterations in the major cycle until the major cycle stops. If cyclespeedup is less than or equal to 0.0, no adjustment to the calculated cycle threshold is made. (A sketch follows this list.)
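
For example (a sketch using typical values from the discussion above):

im.setmfcontrol(cyclefactor=3.0,   # raise the major-cycle threshold
     cyclespeedup=100.0)           # and double it every 100 iterations if the
                                   #  major cycle has not yet finished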

In addition to these cycle control parameters, which are applicable to mosaicing with CLEAN, Multi-Scale CLEAN, MEM, or Maximum Emptiness, there are two more control arguments set by im.setmfcontrol which are applicable only to Multi-Scale CLEAN. These are stoplargenegatives and stoppointmode, which are discussed below.

Mosaic Details with Multi-Scale CLEAN

See the Section \ref{msclean} for multi-scale basics.

Sometimes in the first few iterations of multi-scale CLEAN in the mosaicing context, the largest scale will be dominated by large negative residuals (i.e., the negative bowl, integrated over the area of the largest clean-component scale). One way to fix this is to make the largest scale smaller. Another is to use a tighter mask which excludes finding large-scale components in the bowl region. A third, ad hoc way is to stop the major cycle when a negative component is found on the largest scale. This allows the exact subtraction to proceed, often resulting in a reduced bowl level. Stopping the major cycle upon encountering a negative component on the largest scale should only be performed during the first few cycles, as during subsequent cycles small-amplitude large-scale components may be required to make adjustments in the image. The number of cycles during which the major cycle stops on finding a negative component at the largest scale is controlled by the parameter stoplargenegatives in imager.setmfcontrol. As smaller scales may require negative components to correct for errors made by ``over-cleaning'' in the earlier cycles, no restriction should be placed on negative components from smaller size scales.

Details with MEM

If there are bright unresolved or barely resolved sources in the field, it may be advantageous to perform a Clark Clean down to the level of the peak extended emission, or include a component list in the model, because MEM does not work well on bright point-like sources.

The maximum entropy/emptiness algorithm has been modified to fit into the incremental deconvolution/major cycle framework adopted for mosaicing in \casa. These algorithms deal with both the incremental brightness distribution, which they solve for given the approximate PSF and the residual mosaic image, and the cumulative brightness distribution, used for the calculation of the entropy function and its gradients. When maximum entropy starts up, it typically takes the algorithm four or five iterations to ``find itself'' and get the control parameters set to achieve the proper balance between the gradients of the entropy and the $\chi^2$. Once a good balance is struck, the algorithm makes marked progress towards convergence. At the end of a major cycle, the relevant control parameters are saved and used for the next major cycle.

An example similar to the single field case but with the algorithm changed to 'mfentropy' for multiple fields:

im.setdata(mode='channel',         # Select 200 consecutive channels starting at 11
     fieldid=range(0,10),          # Select 10 mosaic fields
     nchan=200, start=10, step=1)  #
im.setimage(mode='channel',        # Form an image plane for each selected channel
     nx=300,ny=300,                # Create a 300x300 pixel image with
     cellx=quantity(2.,'arcsec'),
     celly=quantity(2.,'arcsec'),  # 2"x2" pixels
     nchan=200,start=10,step=1,    # use all channels selected in setdata
     fieldid=5)                    # Use field 6 as the mosaic center
im.mem(algorithm='mfentropy',      # Select maximum entropy algorithm for mosaic
     niter=10,                     # niter is smaller than that used in CLEAN as
                                   # it is not the components but the number of
                                   # iterations to maximize the entropy
     sigma=quantity(0.1,'Jy'),     # target noise to achieve in residual image
     targetflux=quantity(10.,'Jy'),# an estimate of the total flux in the image -
                                   # if uncertain, use 1/10 of the expected flux
     constrainflux=F,              # set this to 'T' if you want the total flux fixed
                                   # to the target flux specified above
     prior='priorimage',           # if available, an image that gives some knowledge of
                                   # how the flux is distributed. This is not necessary.
     image=['maxen.image'],        # output images
     model=['maxen.model'],
     residual=['maxen.resid'],
     mask=['mask.im'])             # If needed you can specify a region to constrain the
                                   # flux

Flux Scale Images

When correcting for the effects of the primary beam to achieve an accurate and uniform flux scale across the image (i.e. by dividing by the primary beam in the case of a single field observation), the noise and errors at the edge of the mosaic sky coverage are amplified. The noise amplification distracts from the visual beauty of the image and may complicate the image's display and interpretation.

Sault et al. (1996) endorse a different image-plane weighting in the mosaicing which results in a constant noise level across the image but a variable flux scale; e.g., for Nyquist-sampled mosaics, the flux scale decreases outside the overlap region of the mosaic fields, just as the single-field flux scale decreases toward the edge of the primary beam.

In CASA an image-plane weighting similar to Sault's scheme has been implemented, but the noise goes to zero outside the region covered by the primary beams. The flux scale is position dependent, but it is flat over most of the mosaic sky coverage (depending on how constpb is set within the setmfcontrol function and how much overlap there is between pointing centers). In order to get a constant-flux image across the mosaic, one must divide by the flux-scale image (see Section \ref{chapter:analysis} for details on manipulating images). The flux-scale images can be created by setting the fluxscale argument in Imager's setmfcontrol function (in case an inverse-Sault weighting is needed to correct the image). Regions outside the multi-field primary beam pattern will have a zero value.
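
For example, to have Imager write the flux-scale image alongside the mosaic (the image name is hypothetical):

im.setmfcontrol(scaletype='SAULT',       # Sault image-plane weighting
     constpb=0.4,                        # flux scale is constant above this level
     fluxscale=['mosaic.fluxscale'])     # write the flux-scale image; divide the
                                         #  mosaic by it for a constant flux scale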

Masks

Routinely in single-field deconvolution, only the inner quarter of an image is deconvolved so that the sidelobes from this region can be correctly subtracted from the entire image. However, in the multi-field case, such a restriction usually does not exist. The major cycles only deconvolve down to a certain level, fixed by the sidelobe characteristics of the PSF. After that, the exact subtraction of the deconvolved flux is carried out. Typically, the exact subtraction is performed by multiplying the brightness distribution by a field's primary beam, convolving by that field's exact PSF, multiplying by the primary beam again, and subtracting from the previous cycle's residual mosaic. The two primary beam multiplications ensure that the far-out effects of the PSF, which will not be correct due to the full-image deconvolution, will not affect the model brightness distribution.

If no mask is used, the major cycle deconvolution algorithms create a mask from the generalized primary beam pattern of all observed fields, with zero outside the outermost fields' primary beam main lobes. If you don't want this mask for some reason, you should supply your own mask image (as in Section \ref{masks}).
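
To supply your own mask, pass it to the deconvolution function, as in this sketch (image names hypothetical):

im.clean(algorithm='mfclark', niter=1000,
     model=['mosaic.model'],
     image=['mosaic.restored'],
     residual=['mosaic.residual'],
     mask=['mymosaic.mask'])        # user-supplied mask image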

An Example Mosaic Script

The following script makes an interferometer-only mosaic image from the multi-field test MeasurementSet which is distributed with CASA:

im.open('XCAS.ms')
# Use all the data for the mosaic
im.setdata(mode="none",
             spwid=[1:2], fieldid=[1:7])

# Use the first field as the image center
im.setimage(nx=256, ny=256, cellx="3arcsec",
              celly="3arcsec", stokes="I", doshift=F,
              mode="mfs", spwid=[0,1], fieldid=0, facets=1)

# Weight each field individually
im.weight(type="uniform",mosaic=T)

# Use all the data for the mosaic
im.setdata(mode="none", nchan=1, start=1, step=1,
             spwid=[1:2], fieldid=[1:7])

# Using the mosaic gridder that uses the FT of the Primary Beam as the gridding function
im.setoptions(ftmachine='mosaic')

# Use the voltage pattern (primary beam) - it will lookup the telescope and use the
# appropriate PB for the observing frequency
im.setvp(dovp=T, usedefaultvp=T, dosquint=F)

# Make a MEM image  (using  mask image) - set names for results
maskname = ['mem.mask']               # maskname will be mem.mask
modname = ['mem.model']              # modname will be mem.model
imgname = ['mem.image']              # imgname will be mem.image
resname = ['mem.resid']              # resname will be mem.resid
scalename = ['mem.scale']            # scalename will be mem.scale

# set some multi-field control parameters.
im.setmfcontrol(cyclefactor=3.0,  # set to 3.0* max sidelobe * max residual
     cyclespeedup=20.0,             # threshold doubles in this many iterations
     fluxscale=scalename)           # name of fluxscale image
im.mem(algorithm='mfentropy',     # use multi-field Maximum Entropy algorithm
     niter=80, sigma='0.001Jy',     # number of iterations and target sigma
     targetflux='10.0Jy',           # target flux for final image
     constrainflux=F,               # F: don't fix the total flux to targetflux
     displayprogress=T,  fixed=F,   # display progress, don't keep model fixed
     complist='', prior='',         # set these to blank
     mask=maskname,                 # setup output file names
     model=modname,
     image=imgname,
     residual=resname)

# Make a multi-scale CLEAN image  for comparison
modname = ['msclean.model']          # modname will be msclean.model
imgname = ['msclean.image']          # imgname will be msclean.image
resname = ['msclean.resid']          # resname will be msclean.resid
scalename = ['msclean.scale']        # scalename will be msclean.scale

im.setscales(scalemethod='uservector',  # method by which scales are set
     uservector=[0.0, 3.0, 10.0, 20.0])   # vector of scale sizes in pixels

### please note the use of the parameter stoplargenegatives,
### which is set to -1 to let multi-scale clean continue despite hitting
### a negative component on the largest scale

im.setmfcontrol(cyclefactor=3.0,  # set to 3.0 * max sidelobe * max residual
     stoplargenegatives=-1,         # continue despite negatives
     fluxscale=scalename)           # name of fluxscale image
im.clean(algorithm='mfmultiscale',# use multi-field multi-scale algorithm
     niter=1000, gain=0.6,          # number of iterations and loop gain
     threshold='0Jy',               # stop cleaning at this threshold
     displayprogress=T, fixed=F,    # display progress, don't keep model fixed
     complist='',                   # name of component list
     mask=maskname,                 # set names for produced files
     model=modname,
     image=imgname,
     residual=resname)

# unload data from tool
im.close()

Imaging combined single dish/synthesis data

Methods

Below is a list of current methods used to combine single dish and synthesis data (Stanimirovic 2002).

  • Feathering in the image domain (IMMERGE in miriad, IMERG in AIPS,
im.feather in CASA). Images are feathered together in the Fourier plane. Intermediate size scales are down-weighted to give interferometer resolution while preserving single-dish total flux density.

_Fastest and least computer intensive way to add short-spacings. Also the most robust way relative to the other 3 methods which all require a non-linear deconvolution at the end._

  • Linear combination - the single-dish image and the interferometer dirty image
are linearly combined in the image plane, a composite beam is constructed, and the resulting composite image is deconvolved with the composite beam. Existing packages don't directly support this method but it can be done via image manipulation and subsequent deconvolution.

_Straight linear combination is not optimal but this method has the advantage that it does not require either Fourier transformation of the single-dish data which can suffer severely from edge effects, nor deconvolution of the single-dish data which is especially uncertain and leads to amplification errors._

  • Model Image - use the single-dish image as the starting model and subtract this model from the uv data. At the shortest uv spacings, the structure sampled by the single dish will be preserved in the model. In the region of uv overlap, the source structure can be modified from the single-dish flux density distribution during deconvolution using the interferometer uv data. In the interferometer-only region of the uv-plane, the deconvolution will proceed as usual with no single-dish constraints.

Works effectively with good uv overlap (i.e., large single-dish).

  • Full Joint Deconvolution - single-dish and interferometer data are gridded
together. All regions of the uv-plane are jointly deconvolved.

_Theoretically the best way to do short spacing correction. This method depends heavily on a good estimate of the interferometer and single-dish noise variances._

Note: These techniques rely on the images (data and masks) having the same number of axes. If errors are encountered with complaints about image shapes, use the image.summary function to look at the input images' axes.

Examples are provided for adding an axis to an image (\ref{reorder}) and also for regridding an existing image to another coordinate system (\ref{exregrid}).

Feathering

In the image feathering technique, first the calibrated single-dish image is corrected for the beam response and the calibrated interferometric data are Fourier transformed and deconvolved to create an interferometer-only image.

The feathering technique does the following:

  • The single-dish and interferometer images are Fourier transformed.
  • The beam from the single-dish image is Fourier transformed (FTSDB(u,v)).
  • The Fourier transform of the interferometer image is multiplied by (1-FTSDB(u,v)).
This basically down weights the shorter spacing data from the interferometer image.
  • The Fourier transform of the single-dish image is scaled by the volume ratio of the
interferometer restoring beam to the single dish beam.
  • The results from the two previous steps are added and Fourier transformed back to the image plane.

The term feathering derives from the tapering or down-weighting of the data in this technique; the overlapping, shorter spacing data from the deconvolved interferometer image is weighted down compared to the single dish image while the overlapping, longer spacing data from the single-dish are weighted down compared to the interferometer image.

The tapering uses the transform of the low resolution point spread function. This can be specified as an input image or the appropriate telescope beam for the single-dish. The point spread function for a single dish image may also be calculated using makeimage.
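
For example, a sketch of generating the single-dish PSF with makeimage, assuming the single-dish data are available as a MeasurementSet (the file names are hypothetical):

im.open('singledish.ms')          # attach to the single-dish MeasurementSet
im.makeimage(type='psf',          # compute the single-dish point spread function;
     image='sd.psf')              #  pass this as lowpsf to im.feather
im.close()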

Advice: If you are feathering large images, make sure the number of pixels along the X and Y axes are composite numbers, and definitely not prime numbers; in general, FFTs work much faster on even, composite sizes. You may use the subimage function of the image tool to trim the number of pixels to something suitable.

Note: This method is analogous to the AIPS IMERG task and the MIRIAD immerge task with option 'feather'.

Feathering Example

im.setvp(dovp=T)                    # Do primary beam correction; it will use the default
                                    # primary beam for the single-dish telescope
                                    # and frequency
im.feather(image='feathered.image', # Resulting, combined image
    highres='synthesis.image',      # Synthesis image
    lowres='singledish.image')      # Single-dish image
                                    # If the beam response of the single-dish telescope
                                    # is not stored in AIPS++, one can optionally
                                    # specify the 'lowpsf' image if available.

The feather task works on both single-plane images and on multi-plane images (data cubes). The synthesis and single-dish images must have the same number of axes, however; for example, default images in \casa\ have axes of: 1) Direction (e.g., RA), 2) Direction (e.g., Dec), 3) Stokes (e.g., I), and 4) Spectral (e.g., frequency or velocity). These axes can be manipulated (added or deleted) as necessary using the {\bf image} tool; this tool has not yet been ported to CASA.

uv-plane Combination

There are two principal methods for the uv-plane combination of single-dish and synthesis data.

  • Use the single-dish image as a starting model for the deconvolution (CLEAN or MEM).
  • Full joint deconvolution of the datasets.

Use single-dish image as starting model

In this first technique, a non-linear combination of the synthesis and single-dish data is performed in the uv plane. In the \casa\ implementation, the single-dish image is used to make a model (clean component) image which is then used as the starting point for the deconvolving algorithm to interpolate the missing short baselines.

During the deconvolution step in the CLEAN process, the interferometric data are extrapolated to shorter spacings. When starting CLEAN with a model from a single-dish image, the single-dish image is Fourier transformed and the data in the area of uv overlap are compared in a major cycle. The uv spacings provided by the single-dish image thus provide information (constraints) that helps to prevent CLEAN from over-extrapolating in this region of the uv-plane.

The imager function setsdoptions can be used to set a factor by which to scale the single-dish image, if necessary (typically to convert from K to Jy/beam).

The sdpsf parameter (optional) should be used if an external PSF image of the single dish is needed to calculate the beam parameters of the primary beam of the dish. This is usually needed if the single-dish image is from a non-standard telescope or the beam is not in the CASA system; see the example in feather (\ref{feather}) for generating a beam in this fashion.

A mask image can be provided to the clean algorithm. This mask image helps guide the clean (and mem) algorithms and should be chosen carefully based on the 'known' emission. A mask image can be created from an existing image via {\tt interactivemask} (\ref{regionmask}).

Information on basic imaging is found in \ref{chapter:image}. Information on mosaic imaging is found in \ref{mosaic}.

There are two ways to get a model from a single-dish image:

  • Directly convert the single-dish image from Jy/beam to Jy/pixel.
  • Deconvolve the single-dish image with MEM or multiscale-CLEAN (using large scales only) to obtain a model image (Jy/pixel).

Example

Example: Directly convert the single-dish image to a model and use this as a starting model for the deconvolution.

im.open(filename='n4826_both.ms')    # Create imager tool using synthesis data
im.setdata(fieldid=range(0,7),       # Select mosaic fields 1-7
           spwid=[0,1,2])            # Select spectral windows 1-3
im.setimage(nx=256, ny=256,          # Resultant image will be 256x256
   cellx=quantity(1.,'arcsec'),
   celly=quantity(1.,'arcsec'),      #   with 1" pixels
   stokes='I',                       # Resultant image will be Stokes I
   mode='channel',                   # Define image planes by channel
   nchan=30,                         #   30 planes
   start=46,                         #   Starting with channel 47
   step=4,                           #   Averaging 4 channels
   fieldid=0,                        #   Use fieldid=1 as the phase center reference
   spwid=[0,1,2])                    #   Use spectral windows 1-3
im.setmfcontrol(scaletype="NONE",    # Set some multi-field processing parameters
                                     #   NONE indicates no scaling
   minpb=0.1)                        #   Level at which the primary beam will be applied
im.setvp(dovp=T)                     # Do primary beam correction

                                     # Make starting model image from single-dish image
im.makemodelfromsd(sdimage='n4826_12mchan.im', # specify single-dish image
   modelimage='n4826_joint1',        # specify name of output model image
   maskimage='n4826.mask')           # specify mask image

                                     # joint deconvolution and clean
im.clean(algorithm='mfclark',        # Use multi-field clark algorithm
   model=['n4826_joint1'],           # Use model image generated above as the initial model
   gain=0.2, niter=500)              # set gain and iterations
                                     # you can specify a clean mask if desired - see
                                     # Section 5.3.4.1 on interactivemask
                                     # Note: if the output image name isn't specified, the
                                     # restored image name will be the model image name,
                                     # 'n4826_joint1' appended with '.restored'

im.close()                           # Close tool

When combining single-dish images (in feather or uv-plane combination), the single-dish image can be scaled by a factor (typically to convert from K to Jy/beam) using the setsdoptions function.

im.setsdoptions(scale=0.5)

Full Joint Deconvolution

This is currently under construction.

References (Single-Dish and Interferometric Data Combination)

For a review of techniques see: Stanimirovic, S. 2002, astro-ph/0205329, "Short-Spacings Correction from the Single Dish perspective".

  • Emerson, D. T. 1974, MNRAS, 169, 607

  • Helfer et al. 2003, ApJS, 145, 259, Appendix B

  • Holdaway, M. A. 1999, in ASP Conf. Ser. 180, Synthesis Imaging in Radio Astronomy II, ed. G. B. Taylor, C. L. Carilli, & R. A. Perley (San Francisco: ASP), 401

  • Stanimirovic, S., Staveley-Smith, L., Dickey, J. M., Sault, R. J., & Snowden, S. 1999, MNRAS, 302, 417

  • Vogel, S. N., Wright, M. C. H., Plambeck, R. L., & Welch, W. J. 1984, ApJ, 283, 655

Wide Field Imaging (non-coplanar effects)

Under Construction

The problem of imaging interferometric data for wide fields involves correcting for the non-co-planarity of the array when inverting the visibilities in the Fourier basis. The distortions in the image domain increase as a function of distance from the phase center. This distortion, if not corrected, limits the imaging dynamic range for fields with significant emission away from the phase center. The measurement equation for such an observation is

%$ V(u,v,w)=\int\int I(l,m) e^{2\pi\iota(u l + v m + w \sqrt{1-l^2-m^2})} \frac{dl dm}{\sqrt{1-l^2-m^2}} $%

with the usual meaning for the various symbols. The two algorithms in CASA that correct for non-co-planarity are the ``w-projection'' and ``faceted imaging'' algorithms.

w-projection

The w-projection algorithm is a matched-filter approach. A set of filters corresponding to various values of w for the %$e^{2\pi\iota w \sqrt{1-l^2-m^2}}$% term is constructed and applied to the 2D co-planar visibility function to evaluate the 3D non-co-planar visibilities as:

V(u,v,w)=G(u,v,w)*V(u,v)

where G(u,v,w) are the filters. For fast evaluation of the left-hand side, G is pre-computed for a discrete set of w values with uniform sampling in \sqrt{w}, and the filter corresponding to the nearest value of w is used.
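As an illustration of this sampling (a sketch, not actual CASA code), the index of the nearest pre-computed filter for a given w could be found as follows, where wmax is taken to be the largest |w| in the data and nplanes the number of filters:

import math

def nearest_wplane(w, wmax, nplanes):
    # the planes sample sqrt(w) uniformly, so map |w| through a square root
    frac = math.sqrt(abs(w) / wmax)
    return int(round(frac * (nplanes - 1)))

print nearest_wplane(1200.0, 10000.0, 256)   # -> 88, i.e. plane 88 of 0..255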

This effectively extends the limit beyond which the errors due to the non-coplanar array dominate. The errors in the PSF computed using the w-projection approach are smaller than those incurred in using the approximate PSF in the minor cycle of Clark-Clean. Effectively, therefore, Clark-Clean (and its variants) uses one form of approximate PSF in the minor cycle, while w-projection uses another. The accuracy of the major cycle in w-projection depends on the resolution at which the w-axis is sampled by the filters. For VLA L-band imaging, 256 samples along the w-axis is probably sufficient (set via the {\tt facets} argument of the setimage() method of the imager).

Faceted imaging

The traditional approach to wide-field imaging approximates the (curved) sky by a set of tangent planes. The size of the facets is set such that the 2D approximation of the measurement equation is valid on each facet. The part of the sky covered by each facet is then imaged and deconvolved in the usual manner by phase rotating the visibilities to the center of each facet. The Clean-ed images for each of the facets are then appropriately rotated and projected onto a single 2D tangent image. The accuracy of the major cycle for this algorithm also depends on the number (and hence the size) of the facets, but the dependence is different from that of the w-projection algorithm.
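The facet size required for the 2D approximation can be estimated from the neglected w-term: to first order in %$l^2+m^2$% its phase is %$\pi w (l^2+m^2)$% in magnitude, so a rough back-of-the-envelope bound (not from the CASA documentation) is

%$ \pi w_{max} \theta_f^2 \lesssim 1 \;\;\Rightarrow\;\; \theta_f \lesssim 1/\sqrt{\pi w_{max}} $%

where %$w_{max}$% is the largest |w| in wavelengths and %$\theta_f$% is the facet half-width in radians.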

It can be shown that imaging each facet in the image domain is equivalent to a shear operation in the visibility domain. The CASA implementation of the faceted imaging algorithm uses this technique, effectively doing the faceting in the visibility domain rather than in the image domain. The practical advantage of this approach is that the user sees (and manages) a single 2D image rather than multiple facet images, which is a significant convenience when using faceted imaging.

Comparison of the two methods

The w-projection algorithm can be shown to be about 10x faster than the faceted algorithm. The major cost of major-minor cycle based deconvolution algorithms is the major cycle, and the dominant cost of the major cycle is the prediction of the visibilities given a model image. In the faceted imaging approach, each major cycle typically involves predicting the visibilities for each facet. Although facets without any emission need not be predicted, saving some compute time, this saving disappears for typical P- and L-band fields, which invariably have emission in most facets.

The disadvantage of the w-projection algorithm is that it needs the entire image for the minor cycle. Since a single image covers the entire field of view, this approach requires more run-time memory. This requirement can, however, be relaxed by placing smaller outlier fields around the dominant flanking sources in the field of view.

Example

Single Field Imaging

A typical script for imaging a Measurement Set named MS using the w-projection algorithm follows. The number of facets in this case corresponds to the number of discrete values of $w$ for which the gridding convolution functions are evaluated. This ultimately determines the imaging dynamic range. However, since the runtime is only weakly dependent on the number of facets, setting it to a high number like 256 is sufficient for VLA L-band imaging.

    #
    # Set the various parameters of imaging
    #
    MS       = ''                    # Name of the MS

    ALGO     = 'mfclark'             # The algorithm to use for
                                       # deconvolution

    IMSIZE   = [4096,4096]           # The image size

    CELLSIZE = ['2arcsec','2arcsec'] # The cellsize

    SPWID    = [1,2]                 # Spectral windows to image

    FIELDID  = 1                     # Field IDs to image

    NFACETS  = 256                   # No. of w-planes.

    STOKES   = 'I'                   # Stokes value

    MODEL_IMAGE_NAME ='1046.im'      # Name of the image files.
                                     # The restored Clean-ed image
                                     # and the residual images are
                                     # stored in images with
                                     # ``.clean'' and ``.res''
                                     # appended to this name.

    NITER    = 10000                 # No. of Clean components

    THRESHOLD= '100e-3mJy'           # The Cleaning threshold.  The
                                     # iterations are stopped when
                                     # the magnitude of the
                                     # strongest components is lower
                                     # than this value.

    DOINTERACTIVE = F                # Set it to T to do interactive
                                     # box setting in-between iterations.

    im.open(MS)

    im.setdata(spwid=SPWID,fieldid=FIELDID,mode='channel')

    im.setimage(nx=IMSIZE[0],ny=IMSIZE[1],          # Python lists are zero-based
                cellx=CELLSIZE[0],celly=CELLSIZE[1],
                facets=NFACETS,stokes=STOKES,
                spwid=SPWID,fieldid=FIELDID)

    im.setoptions(ftmachine='wproject')             # use the w-projection gridder

    im.clean(algorithm=ALGO,
             niter=NITER,threshold=THRESHOLD,
             interactive=DOINTERACTIVE,
             model=[MODEL_IMAGE_NAME],
             image=[MODEL_IMAGE_NAME+'.clean'],
             residual=[MODEL_IMAGE_NAME+'.res'])

References

EVLA Memo 67: http://www.aoc.nrao.edu/evla/geninfo/memoseries/evlamemo67.pdf

Mosaicing

Under Construction - See Advanced manual chapter

Single Dish Imaging

Under Construction - See Advanced manual chapter

Imaging combined single dish/synthesis data

Under Construction - See Advanced manual chapter

Wide Field Imaging (non-coplanar effects)

Under Construction - See Advanced manual chapter

Self-Calibration

Under Construction

Displaying Images

This chapter describes how to display data with the casaviewer either as a stand-alone or through the viewer task. You can display both images and MeasurementSets.

Starting the casaviewer

casaviewer is the name of the stand-alone application that is available with a CASA installation. You can call this command from the command line in the following ways:

  • Start the casaviewer with no default image/MS loaded; it will pop up the Load Data frame and a blank, standard "Viewer Display Panel". Selecting a file on disk in the Load Data panel will provide options for how to display the data. Images can be displayed as: 1) Raster Image, 2) Contour Map, 3) Vector Map, or 4) Marker Map. MeasurementSets can only be displayed as a raster.

> casaviewer &

  • Start the casaviewer with the selected image; the image will be displayed in the Viewer Display Panel. If the image is a cube (more than one plane for frequency or polarization), it will open on the first plane of the cube.

> casaviewer image_filename &

  • Start the casaviewer with the selected MeasurementSet; note the additional parameter indicating that it is an ms; the default is 'image'.

> casaviewer ms_filename ms &

In addition, within the casapy environment, there is a viewer task which can be used to call up an image (_Note: currently the parameter list of the viewer task needs to expand to handle the extra parameter to indicate an MS; until then, please use the stand-alone invocations for viewing MSs_):

viewer(imagename=None)
    View an image or visibility data set:
    
    Keyword arguments:
    imagename -- Name of file to visualize
            default: ; example: imagename='ngc5921.image'

The viewer can be started as:

CASA <4>: viewer
--------> viewer()

or

CASA <5>: viewer 'ngc5921_task.image'
--------> viewer('ngc5921_task.image')

The viewer GUI

%BEGINFIGURE{label="fig:viewer0" caption="Viewer Display Panel with no data loaded. Each section of the GUI is explained below."}% viewer0.jpg%ENDFIGURE%

The main parts of the GUI are the menus:

  • Data
    • Open - open an image from disk
    • Register - register selected image (menu expands to the right containing all loaded images)
    • Close - close selected image (menu expands to the right)
    • Adjust - open the adjust panel
    • Print - print the displayed image
    • Close Panel - close the Viewer Display Panel
    • Quit Viewer - currently disabled
  • Display Panel
    • New Panel - create a new Viewer Display Panel
    • Panel Options - open the panel options frame
    • Print - print displayed image
    • Close Panel - close the Viewer Display Panel
  • Tools
    • Currently blank - will hold annotations and image analysis tools

Below this are icons for fast access to some of these menu items:

  • folder - Data:Open shortcut -- pulls up Load Data panel
  • wrench - Data:Adjust shortcut -- pulls up Data Display Options panel
  • panels - Data:Register shortcut -- pull up menu of loaded data
  • delete - Data:Close shortcut -- closes/unloads selected data
  • panel - Display Panel:New Panel
  • panel wrench - Display Panel:Panel Options -- pulls up Viewer Canvas Manager
  • print - Display Panel:Print -- print data

Important Bug Note: Please use the icon buttons whenever possible instead of the menus. The Register and Close menus especially are known to lead to viewer crashes in some cases. You'll usually find that the first four icon buttons are all you need. Click on the display panel titlebar then hover over the buttons for brief reminders of their purpose.

Below this are the eight mouse control buttons. These allow/show the assignment of the mouse buttons for different operations. Clicking in one of these buttons will re-assign a mouse button to that operation.

  • Zooming (magnifying glass icon)
    • Zooming is accomplished by pressing down the selected mouse button at the start point, dragging the mouse away from that point, and releasing the selected mouse button when the zoom box encloses the desired zoom area. Once the button is released, the zoom rectangle can be moved by clicking inside it with the selected mouse button and dragging it around. To zoom in, simply double click with the selected button inside the rectangle. Double clicking outside the rectangle will result in a zoom out.
  • Panning (hand icon)
    • Panning is accomplished by pressing down on the selected mouse button at the point you wish to move, dragging the mouse to the position where you want the first point moved to, and releasing the selected mouse button. Note: The arrow keys, Page Up, Page Down, Home and End keys, and scroll wheel (if any) can also be used to scroll through your data once you have zoomed in. For these to work, the mouse must be over the display panel drawing area, but no mouse tool need be active. Note: this is currently not enabled.
  • Stretch-shift colormap fiddling
  • Brightness-contrast colormap fiddling
  • Positioning
    • This enables the user to place a crosshair marker on the image to indicate a position. Depending on the context, the positions may be used to flag MeasurementSet data (not yet enabled) or display image spectral profiles (also not currently enabled). Click on the position to place the crosshair; once placed you can drag it to move to another location. Double click is not needed for this control.
  • Rectangle and Polygon region drawing
    • A rectangle region is generated exactly the same way as the zoom rectangle, and is set by double clicking within the rectangle. Polygon regions can be constructed by progressively clicking the selected mouse button at the desired location of each vertex, and clicking in the same location twice to complete the polygon. Once constructed, it can be moved by dragging inside the polygon, and reshaped by dragging the various handles at the vertices.
  • Polyline drawing
    • A polyline can be constructed with this button selected. It is almost identical to the polygon region tool. Create points by clicking at the positions wanted and then double-click to finish the line.

Below this area is the actual display surface.

Below the display is the 'tape deck' which provides basic movement between image planes along a selected third dimension of an image cube. This set of buttons is only enabled when the first-registered image reports that it has more than one plane along the 'Z axis'. In the most common case, the animator controls the frequency channel being viewed. From left to right, the tape deck controls allow the user to:

  • rewind to the start of the sequence (i.e., the first plane)
  • step backwards by one plane
  • play backwards, or repetitively step backwards
  • stop any current play
  • play forward, or repetitively step forward
  • step forward by one plane
  • fast forward to the end of the sequence

To the right of the tape deck is an editable text box indicating the current frame number and a sunken label showing the total number of frames. One can type a channel number into the current frame to jump to that channel. Below this is a slider for controlling the animation speed. To the right of this is the 'Full/Compact' toggle. In full mode, additional controls for blinking and for controlling the frame value and step are available; the default setting is for compact. In 'Blink' mode, when more than one raster image is registered in the Viewer Display Panel, the tapedeck will control which is being displayed at the moment. The images registered should cover the same portion of the sky, using the same coordinate projection.

Viewing a raster map

A raster map of an image shows pixel intensities in a two-dimensional cross-section of gridded data with colors selected from a finite set of (normally) smooth and continuous colors, i.e., a colormap.

Starting the casaviewer with an image as a raster map will look something like:

%BEGINFIGURE{label="fig:viewer1" caption="casaviewer: Illustration of a raster image in the Viewer Display Panel(left) and the Load Data panel (right)."}% viewer1.jpgviewer loaddata.jpg%ENDFIGURE%

You will see the GUI, which consists of two main windows, entitled "Viewer Display Panel" and "Load Data". In the "Load Data" panel, you will see all of the files in the current working directory along with their type (Image, MeasurementSet, etc). After selecting a file, you are presented with the available display types for these data. Clicking on the Raster Map button will create a display as above. The main parts of the "Viewer Display Panel" GUI are discussed in the preceding section (#ViewerGui).

Viewing a contour map

Viewing a contour image is similar to the process above. A contour map shows lines of equal pixel intensity (e.g., flux density) in a two-dimensional cross-section of gridded data. Contour maps are particularly useful for overlaying on raster images so that two different measurements of the same part of the sky can be shown simultaneously.

%BEGINFIGURE{label="fig:viewer1" caption="casaviewer: Illustration of a raster image in the Viewer Display Panel(left) and the Load Data panel (right)."}% viewer5.jpgviewer displaydata5.jpg%ENDFIGURE%

Viewing a MeasurementSet with visibility data

Visibility data can also be displayed and flagged directly from the viewer (Note: flagging is not currently enabled). For MeasurementSet files the only option for display is 'Raster' (similar to AIPS task TVFLG).

%BEGINFIGURE{label="fig:viewer_ms1" caption="casaviewer: Display of visibility data. The default axes are time vs. baseline."}% viewer ms1.jpgviewer ms2.jpg%ENDFIGURE%

_Note: There is also a bug in the current MS viewing which disables display of the data and flags; use the 'Adjust' panel 'Flagging Options' Menu to change the 'Show Flagged Regions' option to 'Masked to Background'. This will be the default for Patch 2._

Adjusting Display Parameters

The data display can be adjusted by the user as needed. The following illustrates the available options in the categories of:
  • Display axes
  • Hidden axes
  • Basic Settings
  • Position tracking
  • Axis labels
  • Axis label properties

%BEGINFIGURE{label="fig:datadisplay" caption="casaviewer: Data display options. In the first image, the Display axes, Hidden axes, and Basic Settings options are shown; in the second image, the Position tracking and Axis labels options are shown; in the third image, the Axis label properties are shown."}% viewer datadisplay2.jpgviewer datadisplay3.jpgviewer datadisplay4.jpg%ENDFIGURE%

This older web page gives details of individual display options. Although it has not yet been integrated into the reference manual for the newer CASA, it is accurate in most cases:

http://aips2.nrao.edu/daily/docs/user/Display/node267.html

Adjusting Canvas Parameters/Multi-panel displays

The display area or Canvas can also be manipulated through two sets of values:
  • Margins - specify the spacing for the left, right, top, and bottom margins
  • Number of panels - specify the number of panels in x and y and the spacing between those panels.

The following illustrates a multi-panel display along with the Viewer Canvas Manager settings which created it.

%BEGINFIGURE{label="fig:viewer_canvas" caption="casaviewer: A multi-panel display set up through the Viewer Canvas Manager."}% viewer canvas.jpgviewer4.jpg%ENDFIGURE%

Overlay contours on a raster map

Contours of either a second data set or the same data set can be used for comparison or to enhance visualization of the data. The Adjust Panel will have multiple tabs which allow adjusting each data set individually (Note tabs along the top). To enable this simply open up the Load Data panel (Use the Data menu or click on the Folder icon), select the data set and select Contour.

%BEGINFIGURE{label="fig:viewer_overlay" caption="casaviewer: Display contour overlay on top of a raster image."}% viewer datadisplay1.jpgviewer3.jpg%ENDFIGURE%

Image Analysis

Under Construction

Image Analysis methods

Get an image summary

ia.open('imagename.im')
summary=ia.summary() # Note: formatted output goes to the logger

# Note: summary is now a Python dictionary with the structure:
# header
#    axisnames - vector containing string names of the image axes
#    axisunits - vector containing string units for each axis
#    defaultmask 
#    hasmask - boolean
#    imagetype - internal image type
#    incr - vector containing the increment values along each axis 
#    masks - vector of strings for any mask images
#    ndim - number of image dimensions (e.g., 4 [ra, dec, stokes, frequency])
#    refpix - vector of floats for the reference pixel along each axis
#    refval - vector of values at each reference pixel
#    restoringbeam 
#       imagetype
#       objectname
#       restoringbeam
#          major - the value and units for the major axis of the restoring beam
#          minor - the value and units for the minor axis of the restoring beam
#          positionangle - the value and units for the position angle
#    shape - image shape
#    tileshape - internal format shape
#    unit - image units (e.g., Jy/beam)
# return

Printing the summary value at the command line yields:

CASA <27>: summary
  Out[27]: 
{'header': {'axisnames': ['Right Ascension',
                          'Declination',
                          'Stokes',
                          'Frequency'],
            'axisunits': ['rad', 'rad', '', 'Hz'],
            'defaultmask': '',
            'hasmask': False,
            'imagetype': 'PagedImage',
            'incr': [-7.2722052166430395e-05,
                     7.2722052166430395e-05,
                     1.0,
                     24414.0625],
            'masks': [],
            'ndim': 4,
            'refpix': [129.0, 129.0, 1.0, 1.0],
            'refval': [4.022983925846928,
                       0.08843001543437938,
                       1.0,
                       1412808153.2593751],
            'restoringbeam': {'imagetype': 'Intensity',
                              'objectname': '',
                              'restoringbeam': {'major': {'unit': 'arcsec',
                                                          'value': 51.692642211914062},
                                                'minor': {'unit': 'arcsec',
                                                          'value': 45.703586578369141},
                                                'positionangle': {'unit': 'deg',
                                                                  'value': 14.277947425842285}}},
            'shape': [256, 256, 1, 46],
            'tileshape': [64, 64, 1, 8],
            'unit': 'Jy/beam'},
 'return': []}

All Python dictionaries have a range of functions to work with (type summary.<TAB> to see all the options). For example:

summary.keys()        # returns all the keys to the dictionary ['header','return']
summary.get('header') # returns all the elements of the header
summary['header']     # equivalent command

x=summary.values()[0] # set x = the header dictionary
x['ndim']             # returns 4 for a standard CASA image 
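As a further example of navigating this dictionary, the restoring beam shown in the output above can be extracted directly (note the nested 'restoringbeam' key):

hdr  = summary['header']
beam = hdr['restoringbeam']['restoringbeam']          # nested key, per the output above
print beam['major']['value'], beam['major']['unit']   # e.g. 51.69 arcsec
print beam['minor']['value'], beam['minor']['unit']
print beam['positionangle']['value'], beam['positionangle']['unit']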

Get image statistics

CASA <50>: ia.open('ngc5921_task.image')
CASA <51>: ia.statistics() # Note: formatted output goes to the logger
{'return': True,
 'statsout': {'blc': [0, 0, 0, 0],
              'blcf': '15:24:08.404, +04.31.59.181, I, 1.41281e+09Hz',
              'flux': [12.136708280654085],
              'max': [0.12773475050926208],
              'maxpos': [134, 134, 0, 38],
              'maxposf': '15:21:53.976, +05.05.29.998, I, 1.41374e+09Hz',
              'mean': [4.7899158775357123e-05],
              'min': [-0.023951441049575806],
              'minpos': [230, 0, 0, 15],
              'minposf': '15:20:17.679, +04.31.59.470, I, 1.41317e+09Hz',
              'npts': [3014656.0],
              'rms': [0.00421889778226614],
              'sigma': [0.0042186264443439979],
              'sum': [144.399486397083],
              'sumsq': [53.658156081703709],
              'trc': [255, 255, 0, 45],
              'trcf': '15:19:52.390, +05.35.44.246, I, 1.41391e+09Hz'}}
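Since the statistics are returned as a Python dictionary, derived quantities are easy to compute; for example, a simple peak-to-rms dynamic range (a sketch using the output above):

stats = ia.statistics()['statsout']   # formatted output goes to the logger
peak  = stats['max'][0]
rms   = stats['rms'][0]
print 'peak = %g, rms = %g, peak/rms = %g' % (peak, rms, peak/rms)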

Restrict statistics to a region:

CASA <60>: blc=[0,0,0,23]

CASA <61>: trc=[255,255,0,23]

CASA <62>: bbox=rg.box(blc=blc,trc=trc)

CASA <63>: ia.statistics(region=bbox)

0%....10....20....30....40....50....60....70....80....90....100%
Warning no plotter attached.  Attach a plotter to get plots
  Out[63]: 
{'return': True,
 'statsout': {'blc': [0, 0, 0, 23],
              'blcf': '15:24:08.404, +04.31.59.181, I, 1.41337e+09Hz',
              'flux': [0.21697188625158217],
              'max': [0.061052091419696808],
              'maxpos': [124, 132, 0, 23],
              'maxposf': '15:22:04.016, +05.04.59.999, I, 1.41337e+09Hz',
              'mean': [3.9390207550122095e-05],
              'min': [-0.018510516732931137],
              'minpos': [254, 20, 0, 23],
              'minposf': '15:19:53.588, +04.36.59.216, I, 1.41337e+09Hz',
              'npts': [65536.0],
              'rms': [0.0040461532771587372],
              'sigma': [0.0040459925613279676],
              'sum': [2.5814766420048016],
              'sumsq': [1.0729132921679772],
              'trc': [255, 255, 0, 23],
              'trcf': '15:19:52.390, +05.35.44.246, I, 1.41337e+09Hz'}}
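The same region mechanism can be placed in a loop to obtain, for example, a per-channel rms; a sketch for this image (the 46-channel shape is taken from the summary above):

for chan in range(46):                     # loop over the frequency axis
    bbox  = rg.box(blc=[0,0,0,chan], trc=[255,255,0,chan])
    stats = ia.statistics(region=bbox)['statsout']
    print chan, stats['rms'][0]            # rms of each channel plane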

You can also use the viewer to interactively obtain image statistics on a region:

viewer('imagename.im')
# Now use the right mouse button (default for region setting)
# create a region and then double click inside to obtain statistics on that region
# Currently this supports a single plane only and the output goes to your casapy 
# terminal window as:

ngc5921_task.image

n           Std Dev     RMS         Mean        Variance    Sum
660         0.01262     0.0138      0.005622    0.0001592   3.711     

Flux        Med |Dev|   Quartile    Median      Min         Max
0.3119      0.003758    0.004586    0.001434    -0.009671   0.06105   

Single Dish Analysis

Under Construction

CASA uses the ATNF Spectral Analysis Package (ASAP) for traditional single dish analysis functionality:

http://www.atnf.csiro.au/computing/software/asap/

The User Guide is available at: http://www.atnf.csiro.au/computing/software/asap/userguide/

The Reference Manual is available at: http://www.atnf.csiro.au/computing/software/asap/refman/

All of the ASAP functionality is available with a CASA installation. In the following, we outline how to access ASAP functionality within CASA and the data flow for standard use cases.

Using ASAP

ASAP is included with the CASA installation/build. It is not loaded upon start-up, however, and must be imported as a standard Python package:

CASA <1>: import asap as sd

Note: Currently there is a warning message produced when this is typed; it is informational on changes in the underlying Python libraries and can safely be ignored.
DeprecationWarning: The sre module is deprecated, please import re.
 import sre

Once this is done, all of the ASAP functionality is now under the Python 'sd' tool. The ASAP interface is essentially the same as that of the CASA toolkit, that is, there are groups of functionality (aka tools) which have the ability to operate on your data. Type:

CASA <4>: sd.
sd.__class__               sd._validate_bool          sd.list_scans
sd.__date__                sd._validate_int           sd.mask_and
sd.__delattr__             sd.asapfitter              sd.mask_not
sd.__dict__                sd.asaplinefind            sd.mask_or
sd.__doc__                 sd.asaplog                 sd.merge
sd.__file__                sd.asaplotbase             sd.os
sd.__getattribute__        sd.asaplotgui              sd.plf
sd.__hash__                sd.asapmath                sd.plotter
sd.__init__                sd.asapplotter             sd.print_log
sd.__name__                sd.asapreader              sd.quotient
sd.__new__                 sd.average_time            sd.rc
sd.__path__                sd.calfs                   sd.rcParams
sd.__reduce__              sd.calnod                  sd.rcParamsDefault
sd.__reduce_ex__           sd.calps                   sd.rc_params
sd.__repr__                sd.commands                sd.rcdefaults
sd.__setattr__             sd.defaultParams           sd.reader
sd.__str__                 sd.dosigref                sd.scantable
sd.__version__             sd.dototalpower            sd.selector
sd._asap                   sd.fitter                  sd.simple_math
sd._asap_fname             sd.is_ipython              sd.sys
sd._asaplog                sd.linecatalog             sd.unique
sd._is_sequence_or_number  sd.linefinder              sd.version
sd._n_bools                sd.list_files              sd.welcome
sd._to_list                sd.list_rcparameters       sd.xyplotter

...to see the list of tools.

In particular, the following are essential for most reduction sessions:
  • sd.scantable - the data structure for ASAP and the core methods for manipulating the data; allows importing data, making data selections, basic operations (averaging, baselines, etc) and setting data characteristics (e.g., frequencies, etc).
  • sd.selector - selects a subset of data for subsequent operations
  • sd.fitter - fit data
  • sd.plotter - plotting facilities (uses matplotlib)

The scantable functions (type 'sd.scantable' to see the full list) are used most often and can be applied to both the initial scantable and to any spectrum from that scan table.
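For example, a method like stats, shown later for a single averaged spectrum, can equally be applied to a full scantable; a sketch, assuming a scantable called scans has been created as in the next section:

rms = scans.stats(stat='rms')   # rms of every row (spectrum) in the scantable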

Data Import/Export

Import

Data can be loaded into ASAP by using the scantable function which will read a variety of recognized formats (RPFITS, varieties of SDFITS, and the CASA MeasurementSet). For example:

CASA <1>: scans = sd.scantable('OrionS_rawACSmod', average=False)
Importing OrionS_rawACSmod...

NOTE: It is important to use the average=False parameter setting as the calibration routines supporting GBT data require all of the individual times and phases.

NOTE: GBT data may need some pre-processing prior to using ASAP. In particular, the program which converts GBT raw data into CASA MeasurementSets tends to proliferate the number of spectral windows due to shifts in the tracking frequency; this is being worked on by GBT staff. In addition, GBT SDFITS is currently not readable by ASAP (in progress).

NOTE: The MeasurementSet to scantable conversion is able to deduce the reference and source data and assigns an '_r' to the reference data to comply with the ASAP conventions.

NOTE: GBT observing modes are identifiable in scantable in the name assignment: position switched ('_ps'), Nod ('_nod'), and frequency switched ('_fs'). These are combined with the reference data assignment. (For example, the reference data taken in position switched mode observation are assigned as '_psr'.)

Use the summary function to examine the data and get basic information:

CASA <8>: scans.summary()
--------------------------------------------------------------------------------
 Scan Table Summary
--------------------------------------------------------------------------------
Beams:         1   
IFs:           26  
Polarisations: 2   (linear)
Channels:      8192

Observer:      Joseph McMullin
Obs Date:      2006/01/19/01:45:58
Project:       AGBT06A_018_01
Obs. Type:     OffOn:PSWITCHOFF:TPWCAL
Antenna Name:  GBT
Flux Unit:     Jy
Rest Freqs:    [4.5490258e+10] [Hz]
Abcissa:       Channel
Selection:     none

Scan Source         Time      Integration       
     Beam    Position (J2000)
          IF       Frame   RefVal          RefPix    Increment   
--------------------------------------------------------------------------------
  20 OrionS_psr     01:45:58    4 x       30.0s
        0    05:15:13.5 -05.24.08.2
            0      LSRK   4.5489354e+10   4096    6104.233
            1      LSRK   4.5300785e+10   4096    6104.233
            2      LSRK   4.4074929e+10   4096    6104.233
            3      LSRK   4.4166215e+10   4096    6104.233
  21 OrionS_ps      01:48:38    4 x       30.0s
        0    05:35:13.5 -05.24.08.2
            0      LSRK   4.5489354e+10   4096    6104.233
            1      LSRK   4.5300785e+10   4096    6104.233
            2      LSRK   4.4074929e+10   4096    6104.233
            3      LSRK   4.4166215e+10   4096    6104.233
  22 OrionS_psr     01:51:21    4 x       30.0s
        0    05:15:13.5 -05.24.08.2
            0      LSRK   4.5489354e+10   4096    6104.233
            1      LSRK   4.5300785e+10   4096    6104.233
            2      LSRK   4.4074929e+10   4096    6104.233
            3      LSRK   4.4166215e+10   4096    6104.233
  23 OrionS_ps      01:54:01    4 x       30.0s
        0    05:35:13.5 -05.24.08.2
            0      LSRK   4.5489354e+10   4096    6104.233
            1      LSRK   4.5300785e+10   4096    6104.233
            2      LSRK   4.4074929e+10   4096    6104.233
            3      LSRK   4.4166215e+10   4096    6104.233
  24 OrionS_psr     02:01:47    4 x       30.0s
        0    05:15:13.5 -05.24.08.2
           12      LSRK   4.3962126e+10   4096   6104.2336
           13      LSRK    4.264542e+10   4096   6104.2336
           14      LSRK    4.159498e+10   4096   6104.2336
           15      LSRK   4.3422823e+10   4096   6104.2336
  25 OrionS_ps      02:04:27    4 x       30.0s
        0    05:35:13.5 -05.24.08.2
           12      LSRK   4.3962126e+10   4096   6104.2336
           13      LSRK    4.264542e+10   4096   6104.2336
           14      LSRK    4.159498e+10   4096   6104.2336
           15      LSRK   4.3422823e+10   4096   6104.2336
  26 OrionS_psr     02:07:10    4 x       30.0s
        0    05:15:13.5 -05.24.08.2
           12      LSRK   4.3962126e+10   4096   6104.2336
           13      LSRK    4.264542e+10   4096   6104.2336
           14      LSRK    4.159498e+10   4096   6104.2336
           15      LSRK   4.3422823e+10   4096   6104.2336
  27 OrionS_ps      02:09:51    4 x       30.0s
        0    05:35:13.5 -05.24.08.2
           12      LSRK   4.3962126e+10   4096   6104.2336
           13      LSRK    4.264542e+10   4096   6104.2336
           14      LSRK    4.159498e+10   4096   6104.2336
           15      LSRK   4.3422823e+10   4096   6104.2336

  • scantable manipulation

Once you have a scantable in ASAP, you can select a subset of the data based on scan numbers, sources, or types of scan:

CASA <5>: scan27=scans.get_scan(27)                 # Get the 27th scan
CASA <6>: scans20to24=scans.get_scan(range(20,25))  # Get scans 20 - 24
CASA <7>: scans_on=scans.get_scan('*_ps')           # Get ps scans on source
CASA <8>: scansOrion=scans.get_scan('Ori*')         # Get all Orion scans

To copy a scantable, do:

CASA <15>: ss=scans.copy()

Data Export

ASAP can save scantables in a variety of formats, suitable for reading into other packages. The formats are:

  • ASAP -- This is the internal format used for ASAP. It is the only format that allows the user to restore the data, fits, etc, without losing any information. As mentioned before, the ASAP scantable is a CASA Table (memory-based table); this function just converts it to a disk-based table. You can access the result with the CASA browsetable task or any other CASA table tasks.

  • SDFITS -- The Single Dish FITS format. This format was designed for interchange between packages but few packages can actually read it.

  • ASCII -- A simple text based format suitable for the user to process using Python or other means.

scans.save('output_filename','format'), e.g.,

CASA <19>: scans.save('FLS3a_calfs','MS2')

ASAP Data

Within ASAP, data is stored in a scantable, which holds all of the observational information and provides functionality to manipulate the data and information. The building block of a scantable is an integration which is a single row of a scantable. Each row contains just one spectrum for each beam, IF and polarization.

  • Data selection

In addition to the basic data selection above, data can be selected based on IF, beam, polarization, scan number as well as values such as Tsys. To make a selection you create a selector object which you then define with various selection functions, e.g.,

sel = sd.selector()      # initialize a selector object
                         # sel. will list all options
sel.set_ifs(0)           # select only the first IF of the data
scans.set_selection(sel) # apply the selection to the data
print scans              # shows just the first IF
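Selections can be combined on one selector before it is applied; a sketch, assuming set_scans accepts a list of scan numbers:

sel = sd.selector()
sel.set_ifs(0)                    # first IF only
sel.set_scans([20, 21, 22, 23])   # restrict to these scans (assumed signature)
scans.set_selection(sel)          # apply both selections at once
print scans                       # summary of the selected subset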

  • State information

Some properties of a scantable apply to all of the data, for example the spectral unit, frequency frame, or Doppler type. This information can be set using the scantable set_xxxx methods.

scans.set_fluxunit('K')  # Set the flux unit for data to Kelvin
scans.set_unit('GHz')    # Use GHz as the spectral axis for plots

  • Masks

Several functions (fitting, baseline subtraction, statistics, etc) may be run on a range of channels (or velocity/frequency ranges). You can create masks of this type using the create_mask function:

# spave = an averaged spectrum
spave.set_unit('channel')
rmsmask=spave.create_mask([5000,7000])   # create a region over channels 5000-7000
rms=spave.stats(stat='rms',mask=rmsmask) # get rms of line free region

rmsmask=spave.create_mask([3000,4000],invert=True) # choose the region *excluding* the specified channels

The mask is stored in a simple Python variable (a list) and so may be manipulated using any Python facilities.
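Masks can also be combined with the mask_and, mask_or and mask_not helpers listed in the sd namespace earlier; for example (the channel ranges are illustrative):

linemask = spave.create_mask([3900,4200])   # channels containing the line
basemask = sd.mask_not(linemask)            # everything except the line
edgemask = spave.create_mask([100,8000])    # keep clear of the band edges (illustrative)
fitmask  = sd.mask_and(basemask, edgemask)  # line-free channels away from the edges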

  • Management

scantables can be listed via:

CASA <33>: sd.list_scans()
The user created scantables are:
['scans20to24', 's', 'scan27']

Every scantable consumes memory, so if you will not use one any longer you can explicitly remove it via:

del scans

Basic Processing

Calibration

For some observatories, the calibration happens transparently as the input data contains the Tsys measurements taken during the observations. The nominal 'Tsys' values may be in Kelvin or Jansky. The user may wish to apply a Tsys correction or apply gain-elevation and opacity corrections.

  • Tsys scaling

If the nominal Tsys measurement at the telescope is wrong due to incorrect calibration, the scale function allows it to be corrected.

scans.scale(1.05,tsys=True) # by default only the spectra are scaled (and not the corresponding Tsys) unless tsys=True

  • Unit Conversion

To convert measurements in Kelvin to Jansky (and vice versa), the convert_flux function may be used. This converts and scales the data to the selected units. The user may need to supply the aperture efficiency, the telescope diameter, or the Jy/K factor.

scans.convert_flux(eta=0.48, d=35.) # Unknown telescope
scans.convert_flux(jypk=15) # Unknown telescope (alternative)
scans.convert_flux() # known telescope (mostly AT telescopes)
scans.convert_flux(eta=0.48) # if telescope diameter known

  • Gain-Elevation and Opacity Corrections

At higher frequencies, it is important to make corrections for atmospheric opacity and gain-elevation effects. NOTE: Currently, the MS to scantable conversion does not adequately populate the azimuth and elevation in the scantable. As a result, one must calculate these via:

scans.recalc_azel()
Computed azimuth/elevation using 
Position: [882590, -4.92487e+06, 3.94373e+06]
 Time: 01:48:38 Direction: 05:35:13.5 -05.24.08.2
     => azel: 154.696 43.1847 (deg)
 Time: 01:48:38 Direction: 05:35:13.5 -05.24.08.2
     => azel: 154.696 43.1847 (deg)
 Time: 01:48:38 Direction: 05:35:13.5 -05.24.08.2
     => azel: 154.696 43.1847 (deg)
 Time: 01:48:38 Direction: 05:35:13.5 -05.24.08.2
     => azel: 154.696 43.1847 (deg)
 Time: 01:48:38 Direction: 05:35:13.5 -05.24.08.2
     => azel: 154.696 43.1847 (deg)
...

Once you have the correct Az/El, you can correct for a known opacity by:

scans.opacity(tau=0.09)  # correct for an atmospheric opacity of 0.09; the correction factor scales as exp(tau * airmass)

  • Uncalibrated data: (GBT)

Data from the GBT is uncalibrated and comes as sets of integrations representing the different phases within a calibration cycle (e.g., on source, calibration on, on source, calibration off, on reference, calibration on; on reference, calibration off). Currently, there are a number of routines emulating the standard GBT calibration (in GBTIDL):
  • calps - calibrate position switched data
  • calfs - calibrate frequency switched data
  • calnod - calibrate nod (beam switch) data

All these routines calibrate the spectral data to antenna temperature adopting the GBT calibration formalism as described in the GBTIDL calibration document. There are two basic steps:

1. determine system temperature using a noise tube calibrator (sd.dototalpower())

For each integration, the system temperature is calculated from CAL noise on/off data as:

%$ T_{sys} = T_{cal} \times \frac{\langle ref_{caloff} \rangle}{\langle ref_{calon} - ref_{caloff} \rangle} + \frac{T_{cal}}{2} $%

ref refers to reference data and the spectral data are averaged across the bandpass. Note that the central 80% of the spectra are used for the calculation.

2. determine antenna temperature (sd.dosigref())

The antenna temperature for each channel is calculated as:

%$ T_a(\nu) = T_{sys} \times \frac{sig(\nu) - ref(\nu)}{ref(\nu)} $%

where %$ sig = \frac{1}{2}(sig_{calon} + sig_{caloff}) $% and %$ ref = \frac{1}{2}(ref_{calon} + ref_{caloff}) $%.

Each calibration routine may be used as:

scans=sd.scantable('inputdata',False)         # create a scantable called 'scans'
calibrated_scans = sd.calps(scans,[scanlist]) # calibrate scantable with position-switched scheme

Note: For calps and calnod, the scanlist must be scan pairs in the correct order, as these routines only do minimal checking.

Averaging

There are several means of averaging data within ASAP.

  • average in time

sd.average_time(scantable,mask,scanav,weight,align)

where:

    Parameters:
        scantable: one scantable, or a comma-separated list of scantables
        mask:     an optional mask (only used for 'var' and 'tsys' weighting)
        scanav:   True averages each scan separately;
                  False (default) averages all scans together.
        weight:   Weighting scheme.
                    'none'     (mean no weight)
                    'var'      (1/var(spec) weighted)
                    'tsys'     (1/Tsys**2 weighted)
                    'tint'     (integration time weighted)
                    'tintsys'  (Tint/Tsys**2)
                    'median'   (median averaging)
        align:    align the spectra in velocity before averaging. It takes
                  the time of the first spectrum in the first scantable
                  as reference time.
    Example:
  
stave = sd.average_time(scans,weight='tintsys')

  • average polarizations

averaged_scan = scans.average_pol(mask,weight)

where:
    Parameters:
        mask:        An optional mask defining the region, where the
                     averaging will be applied. The output will have all
                     specified points masked.
        weight:      Weighting scheme. 'none' (default), 'var' (1/var(spec)
                     weighted), or 'tsys' (1/Tsys**2 weighted)

    Example:

spave = stave.average_pol(weight='tsys')

Smoothing

Smoothing on data can be done as follows:

scantable.smooth(kernel,      # type of smoothing: 'hanning' (default), 'gaussian', 'boxcar'
          width,              # width in pixels (ignored for hanning); FWHM for gaussian
          insitu)             # if False, return a new smoothed scantable; otherwise
                              # smooth in-situ (the default is taken from .asaprc)

Example:
# spave is an averaged spectrum
spave.smooth('boxcar',5)      # do a 5 pixel boxcar smooth on the spectrum
sd.plotter.plot(spave)        # should see smoothed spectrum

Baseline Fitting

To make a baseline fit, first create a mask of channels:

msk=scans.create_mask([100,400],[600,900])
scans.poly_baseline(msk,order=1)

This will fit a first order polynomial to the selected channels and subtract this polynomial from the full spectrum.

The auto_poly_baseline function can be used to baseline your data automatically, without having to specify channel ranges for the line-free data. It identifies the line-free channels and fits a polynomial baseline to them. The user can use masks to fix the range of channels or the velocity range for the fit, as well as mark the band edges as invalid:

scans.auto_poly_baseline(mask,edge,order,threshold,chan_avg_limit,plot,insitu)

    Parameters:
        mask:       an optional mask retrieved from the scantable
        edge:       an optional number of channels to drop at
                    the edge of the spectrum. If only one value is
                    specified, the same number will be dropped from
                    both sides of the spectrum. Default is to keep
                    all channels. Nested tuples represent individual
                    edge selections for different IFs (whose numbers of
                    spectral channels can differ)
        order:      the order of the polynomial (default is 0)
        threshold:  the threshold used by line finder. It is better to
                    keep it large as only strong lines affect the
                    baseline solution.
        chan_avg_limit:
                    a maximum number of consecutive spectral channels to
                    average during the search for weak and broad lines.
                    The default is no averaging (and no search for weak
                    lines). If such lines can affect the fitted baseline
                    (e.g. a high order polynomial is fitted), increase this
                    parameter (usually values up to 8 are reasonable). Most
                    users of this method should find the default value
                    sufficient.
        plot:       plot the fit and the residual. In this mode each
                    individual fit has to be approved by typing 'y'
                    or 'n'
        insitu:     if False a new scantable is returned.
                    Otherwise, the scaling is done in-situ
                    The default is taken from .asaprc (False)

    Example:
scans.auto_poly_baseline(order=2,threshold=5)

Fitting

Multi-component Gaussian fitting is available. This is done by creating a fitting object, specifying fit parameters and finally fitting the data. Fitting can be done on a scantable selection or an entire scantable using the auto_fit function.


#spave is an averaged spectrum

f=sd.fitter()                           # create fitter object
msk=spave.create_mask([3928,4255])      # create region around line                     
f.set_function(gauss=1)                 # set a single gaussian component               
f.set_scan(spave,msk)                   # set the data and region for the fitter        
#f.set_gauss_parameters(0.4,4100,200)   # set initial guesses for Gaussian
f.fit()                                 # fit - automatically guess the fit parameters
f.plot(residual=True)                   # plot residual
f.get_parameters()                      # retrieve fit parameters
#   0: peak = 0.786 K , centre = 4091.236 channel, FWHM = 70.586 channel
#      area = 59.473 K channel
f.store_fit('orions_hc3n_fit.txt')      # store fit to disk

Plotting

The ASAP plotter uses the same Python matplotlib library as CASA (for x-y plots). It is accessed via:

sd.plotter        # see all functions (omitted here)
sd.plotter.plot(scans) # the workhorse function
sd.plotter.set
sd.plotter.set_abcissa     sd.plotter.set_legend      sd.plotter.set_range
sd.plotter.set_colors      sd.plotter.set_linestyles  sd.plotter.set_selection
sd.plotter.set_colours     sd.plotter.set_mask        sd.plotter.set_stacking
sd.plotter.set_font        sd.plotter.set_mode        sd.plotter.set_title
sd.plotter.set_histogram   sd.plotter.set_ordinate    
sd.plotter.set_layout      sd.plotter.set_panelling   

Spectra can be plotted at any time; the plotter will attempt to choose the correct layout depending on whether it is a set of scans or a single scan.

The details of the plotter display (matplotlib) are described in the earlier section.

Scan Mathematics

It is possible to do simple mathematics directly on scantables from the CASA command line using the +, -, *, / operators, as well as their in-place cousins +=, -=, *=, /=.

CASA <10>: scan2=scan1+2.0 # add 2.0 to data
CASA <11>: scan *= 1.05    # scale spectrum by 1.05

NOTE: mathematics between two scantables is not currently available in ASAP.
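One exception is the quotient function listed in the sd namespace earlier, which forms the classic (on - off)/off spectrum from a signal scantable and a reference scantable; a sketch (the exact signature may differ between ASAP versions):

ons  = scans.get_scan('*_ps')    # on-source scans
offs = scans.get_scan('*_psr')   # reference scans
q    = sd.quotient(ons, offs)    # (on - off)/off quotient spectra
sd.plotter.plot(q)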

Single Dish Imaging

Single dish imaging is supported within CASA using standard tasks/tools. The data must be in the MeasurementSet format. Once there, you can use the sdgrid task or the im (imager) tool to create images:

Tool example:
scans.save('outputms','MS2')                           # Save your data from ASAP into an MS

im.open('outputms')                                    # open the data set
im.selectvis(nchan=901,start=30,step=1,                # choose a subset of the data
spwid=0,field=0)                                       # (just the key emission channels)
dir='J2000 17:18:29 +59.31.23'                         #set map center                
im.defineimage(nx=150,cellx='1.5arcmin',               #define image parameters
phasecenter=dir,mode='channel',start=30,               # (note it assumes symmetry if ny,celly 
nchan=901,step=1)                                      #  aren't specified)
                                                                       
im.setoptions(ftmachine='sd',cache=1000000000)         # choose SD gridding                
im.setsdoptions(convsupport=4)                         # use this many pixels to support the gridding
                                                       # function used (default=prolate spheroidal wave function)        
im.makeimage(type='singledish',image='FLS3a_HI.image') # make the image
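Once the image is made, it can be examined with the image tool and the viewer exactly as described in the image analysis chapter; for example:

ia.open('FLS3a_HI.image')                     # open the new image
ia.summary()                                  # axes/units go to the logger
stats=ia.statistics()['statsout']             # grab the statistics dictionary
print 'peak:', stats['max'][0], ' rms:', stats['rms'][0]
ia.close()
viewer('FLS3a_HI.image')                      # display interactively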

Single Dish Spectral Analysis Use Case

#           MeasurementSet Name:  /home/rohir3/jmcmulli/SD/OrionS_rawACSmod      MS Version 2
#
# Project: AGBT06A_018_01
# Observation: GBT(1 antennas)
#
#Data records: 256       Total integration time = 1523.13 seconds
#   Observed from   01:45:58   to   02:11:21
#
#Fields: 4
#  ID   Name          Right Ascension  Declination   Epoch
#  0    OrionS        05:15:13.45      -05.24.08.20  J2000
#  1    OrionS        05:35:13.45      -05.24.08.20  J2000
#  2    OrionS        05:15:13.45      -05.24.08.20  J2000
#  3    OrionS        05:35:13.45      -05.24.08.20  J2000
#
#Spectral Windows:  (8 unique spectral windows and 1 unique polarization setups)
#  SpwID  #Chans Frame Ch1(MHz)    Resoln(kHz) TotBW(kHz)  Ref(MHz)    Corrs
#  0        8192 LSRK  45464.3506  6.10423298  50005.8766  45489.3536  RR  LL HC3N
#  1        8192 LSRK  45275.7825  6.10423298  50005.8766  45300.7854  RR  LL HN15CO
#  2        8192 LSRK  44049.9264  6.10423298  50005.8766  44074.9293  RR  LL CH3OH
#  3        8192 LSRK  44141.2121  6.10423298  50005.8766  44166.2151  RR  LL HCCC15N
#  12       8192 LSRK  43937.1232  6.10423356  50005.8813  43962.1261  RR  LL HNCO
#  13       8192 LSRK  42620.4173  6.10423356  50005.8813  42645.4203  RR  LL H15NCO
#  14       8192 LSRK  41569.9768  6.10423356  50005.8813  41594.9797  RR  LL HNC18O
#  15       8192 LSRK  43397.8198  6.10423356  50005.8813  43422.8227  RR  LL SiO

# Scans: 21-24  Setup 1 HC3N et al
# Scans: 25-28  Setup 2 SiO et al

casapath=os.environ['AIPSPATH']

#ASAP script                            # COMMENTS                                      #GBIDL equivalent
#-------------------------------------- ----------------------------------------------- --------------------------
import asap as sd                       #import ASAP package into CASA                  
                                        #Orion-S (SiO line reduction only)
                                        #Notes:
                                        #scan numbers (zero-based) as compared to GBTIDL

                                        #changes made to get to OrionS_rawACSmod
                                        #modifications to label sig/ref positions
os.environ['AIPSPATH']=casapath         #set this environment variable back - ASAP changes it


s=sd.scantable('OrionS_rawACSmod',False)#load the data without averaging                # filein,'Orion-S.raw.fits'

%BEGINFIGURE{label='fig:scantable', caption='Multi-panel display of the scantable. There are two plots per scan indicating the _psr (reference position data) and the _ps (source data).'}% scantable.png%ENDFIGURE%

#s.summary()                            #summary info                                   # summary
                                                                                        # fileout,'Orion-S-reduced.fits'
s.set_fluxunit('K')                     # make 'K' default unit

scal=sd.calps(s,[20,21,22,23])          # Calibrate HC3N scans                          # for i=21,24,2 do begin getps,i,ifnum=0,plnum=0,units='Ta*',

%BEGINFIGURE{label='fig:scal', caption='Two panel plot of the calibrated spectra. The GBT data has a separate scan for the SOURCE and REFERENCE positions, so scans 20, 21, 22 and 23 result in these two spectra.'}% scal.png%ENDFIGURE%


scal.recalc_azel()                      # recalculate az/el to                          # tau=0.09 & accum & getps, i, ifnum=0,plnum=1,units='Ta*',
scal.opacity(0.09)                      # do opacity correction                         # tau=0.09 & accum & end & ave
sel=sd.selector()                       # Prepare a selection
sel.set_ifs(0)                          # select HC3N IF
scal.set_selection(sel)                 # get this IF
stave=sd.average_time(scal,weight='tintsys')    # average in time
spave=stave.average_pol(weight='tsys')  # average polarizations;Tsys-weighted (1/Tsys**2) average
sd.plotter.plot(spave)                  # plot

spave.smooth('boxcar',5)                # boxcar 5                                      # boxcar,5
spave.auto_poly_baseline(order=2)       # baseline fit order=2                          # chan & nregion,[500,3000,5000,7500] & nfit,2
sd.plotter.plot(spave)                  # plot                                          # baseline

spave.set_unit('GHz')                                                                   # freq
sd.plotter.plot(spave)
sd.plotter.set_histogram(hist=True)     # draw spectrum using histogram                 # histogram
sd.plotter.axhline(color='r',linewidth=2) # zline                                       # zline
sd.plotter.save('orions_hc3n_reduced.eps')# save postscript spectrum                    # write_ps,'orions_hc3n.ps'
%BEGINFIGURE{label='fig:spave', caption='Calibrated spectrum with a line at zero (using histograms).'}% spave.png%ENDFIGURE%
spave.set_unit('channel')                                                               # chan
rmsmask=spave.create_mask([5000,7000])  # get rms of line free regions                  # stats,5000,7000
rms=spave.stats(stat='rms',mask=rmsmask)                                                                
                                        #---------------------------------------------- # Chans   bchan   echan      Xmin      Xmax      Ymin Ymax
                                                                                        #  2001    5000    7000    5000.0    7000.0  -0.33294 0.27287
                                        #  rms
                                        #---------------------------------------------- #                 Mean      Median      RMS  Variance  Area
                                        #Scan[0] (OrionS_ps) Time[2006/01/19/01:52:05]: #            -0.018826   -0.016566 0.098964 0.0097939 -37.672
                                        # IF[0] = 0.048
                                        #----------------------------------------------
                                        # LINE
linemask=spave.create_mask([3900,4200])
max=spave.stats('max',linemask)         #  IF[0] = 0.918
sum=spave.stats('sum',linemask)         #  IF[0] = 64.994
median=spave.stats('median',linemask)   #  IF[0] = 0.091
mean=spave.stats('mean',linemask)       #  IF[0] = 0.210
                                                                                        # Chans  bchan    echan      Xmin      Xmax        Ymin Ymax
                                                                                        #   301   3900     4200    3900.0    4200.0    -0.21815 1.0648
                                                                                        #                  Mean    Median       RMS    Variance Area
                                                                                        #               0.22148   0.15098   0.30922    0.095619 66.664
                                        # Fitting
spave.set_unit('channel')               # set units to channel                          # chan
sd.plotter.plot(spave)                  # plot spectrum
f=sd.fitter()
msk=spave.create_mask([3928,4255])      # create region around line                     # gregion,[4000,4200]
f.set_function(gauss=1)                 # set a single gaussian component               # ngauss,1
f.set_scan(spave,msk)                   # set the data and region for the fitter        # gparamvalues,0,[1.,4100.,100.]
f.fit()                                 # fit                                           # gauss
f.plot(residual=True)                   # plot residual

%BEGINFIGURE{label='fig:gaussfit', caption='Plot of fit and residual.'}% gaussfit.png%ENDFIGURE%

f.get_parameters()                      # retrieve fit parameters
#   0: peak = 0.786 K , centre = 4091.236 channel, FWHM = 70.586 channel
#      area = 59.473 K channel
                                                                                        #***** Initial Guesses
                                                                                        #          G#      Height  Center (chan)  FWHM (chan) 
                                                                                        #Init:      1       1.000      4100.0000       100.0
                                                                                        #***** Fitted Gaussians
                                                                                        #        Height            Center (chan)       FWHM (chan)
                                                                                        # 1      0.8281 ( 0.01670) 4091.8367 ( 0.7487) 75.73 (  1.763)

f.store_fit('orions_hc3n_fit.txt')      # store fit                                     # *copy and paste from log*

# Save the spectrum
spave.save('orions_hc3n_reduced','ASCII',True)  # save the spectrum                     # write_ascii,'orions_hc3n.spc'

Single Dish Imaging Use Case

# Project: AGBT02A_007_01
# Observation: GBT(1 antennas)
# 
#   Telescope  Observation Date              Observer  Project
#   GBT        [4.57539e+09, 4.5754e+09]     Lockman   AGBT02A_007_01
#   GBT        [4.57574e+09, 4.57575e+09]    Lockman   AGBT02A_007_02
#   GBT        [4.5831e+09, 4.58313e+09]     Lockman   AGBT02A_031_12
# 
# Thu Feb 1 23:15:15 2007    NORMAL ms::summary:
# Data records: 76860       Total integration time = 7.74277e+06 seconds
#    Observed from   22:05:41   to   12:51:56
# 
# Thu Feb 1 23:15:15 2007    NORMAL ms::summary:
# Fields: 2
#   ID   Name          Right Ascension  Declination   Epoch
#   0    FLS3a         17:18:00.00      +59.30.00.00  J2000
#   1    FLS3b         17:18:00.00      +59.30.00.00  J2000
# 
# Thu Feb 1 23:15:15 2007    NORMAL ms::summary:
# Spectral Windows:  (2 unique spectral windows and 1 unique polarization setups)
#   SpwID  #Chans Frame Ch1(MHz)    Resoln(kHz) TotBW(kHz)  Ref(MHz)    Corrs
#   0        1024 LSRK  1421.89269  2.44140625  2500        1420.64269  XX  YY
#   1        1024 LSRK  1419.39269  2.44140625  2500        1418.14269  XX  YY


# FLS3 data calibration
# this is calibration part of FLS3 data
#
casapath=os.environ['AIPSPATH']                    # save AIPSPATH before importing asap
import asap as sd                                  # import the single-dish (ASAP) module
os.environ['AIPSPATH']=casapath                    # restore AIPSPATH (the import resets it)

print '--Import--'

s=sd.scantable('FLS3_all_newcal_SP',False)         # read in MeasurementSet

print '--Split--'

# splitting the data for each field
s0=s.get_scan('FLS3a*')                            # split the data for the field of interest
s0.save('FLS3a_HI.asap')                           # save this scantable to disk (asap format)
del s0                                             # free up memory from scantable

print '--Calibrate--'
s=sd.scantable('FLS3a_HI.asap')                    # read in scantable from disk (FLS3a)
s.set_fluxunit('K')                                # set the brightness units to Kelvin
scanns = s.getscannos()                            # get a list of scan numbers
sn=list(scanns)                                    # convert it to a list
print "No. scans to be processed:", len(scanns)

res=sd.calfs(s,sn)                                 # calibrate all scans listed using frequency switched calibration method

print '--Save calibrated data--'
res.save('FLS3a_calfs', 'MS2')                         # Save the dataset as a MeasurementSet

print '--Image data--'
                                                                
im.open('FLS3a_calfs')                                 # open the data set
im.selectvis(nchan=901,start=30,step=1,                # choose a subset of the data
spwid=0,field=0)                                       # (just the key emission channels)
dir='J2000 17:18:29 +59.31.23'                         #set map center                
im.defineimage(nx=150,cellx='1.5arcmin',               #define image parameters
phasecenter=dir,mode='channel',start=30,               # (note it assumes symmetry if ny,celly 
nchan=901,step=1)                                      #  aren't specified)
                                                                       
im.setoptions(ftmachine='sd',cache=1000000000)         # choose SD gridding                
im.setsdoptions(convsupport=4)                         # use this many pixels to support the gridding
                                                       # function used (default=prolate spheroidal wave function)        
im.makeimage(type='singledish',image='FLS3a_HI.image') # make the image

%BEGINFIGURE{label='fig:HI_cube', caption='FLS3a HI emission. The display illustrates the visualization of the data cube (left) and the profile display of the cube at the cursor location (right); the Tools menu of the Viewer Display Panel has a Spectral Profile button which brings up this display. By default, it grabs the left-mouse button. Pressing down the button and moving in the display will show the profile variations.'}% HI cube.png%ENDFIGURE%

Simulation

Under Construction

Task Summaries

accum

    Accumulate incremental calibration solutions into a cumulative calibration table:
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    tablein -- Input (cumulative) calibration table 
            default: '' (create); example: tablein='ngc5921.gcal'
    incrtable -- Input incremental calibration table name
            default: ; example: incrtable='ngc5921_phase.gcal'
            
    caltable -- Output calibration table (cumulative)
            default: ; example: caltable='ngc5921.Cgcal'
    field -- List of field names to update in input cumulative table 
            default: -1 (all); example: field='0957+561'
            
    calfield  -- List of field names in incremental table to use.
            default: -1 (all); example: calfield='0917+624'
            
    interp -- Interpolation mode to use on incremental solutions
            default: 'linear'; example: interp='nearest'
            
    accumtime -- Cumulative table timescale when creating from scratch
            default: -1.0; example: accumtime=5.0
    spwmap -- Spectral windows to apply
            default: [-1]; example: spwmap=[0,0,1,1]
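
    Example, a sketch of accumulating an incremental phase solution into a new
    cumulative table (file names are illustrative, taken from the examples above):

    accum(vis='ngc5921.ms',
          tablein='',                           # create the cumulative table from scratch
          incrtable='ngc5921_phase.gcal',       # incremental solutions to accumulate
          caltable='ngc5921.Cgcal',             # output cumulative table
          interp='linear')                      # interpolate incremental solutions linearly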

bandpass

    Calculate a bandpass calibration solution:
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    caltable -- Name of output Gain calibration table
            default: ; example: caltable='ngc5921.gcal'
    mode -- Type of data selection
            default: 'none' (all data); example: mode='channel'
            
    nchan -- Number of channels to select (when mode='channel')
            default: 0; example: nchan=45 (note: nchan=0 selects all)
    start -- Start channel, 0-relative;active when mode='channel','velocity'
            default=0; example: start=5,start='20km/s'
    step -- Increment between channels;active when mode='channel','velocity'
            default=1; example: step=1, step='1km/s'
    msselect -- Optional subset of the data to select (field,spw,time,etc)
            default:'';example:msselect='FIELD_ID==0', 
            msselect='FIELD_ID IN [0,1]', msselect='FIELD_ID <= 1'
            
    --- Solve properties
    solint --  Solution interval (sec)
            default: 86400 (=long time); example: solint=60.
    refant -- Reference antenna *name*
            default: -1 (none); example: refant='13'
    bandtype -- Type of bandpass solution (B or BPOLY)
            default: 'B'; example: bandtype='BPOLY'
    append -- Append solutions to the (existing) table
            default: False; example: append=True
    degamp -- Polynomial degree for amplitude solution
            default: 3; example: degamp=2
    degphase -- Polynomial degree for phase solution
            default: 3; example: degphase=2
    visnorm -- Normalize data prior to solution
            default: False; example: visnorm=True
    bpnorm -- Normalize result?
            default: True; example: bpnorm=False
    maskcenter -- Number of channels to avoid in center of each band
            default: 1; example: maskcenter=5
    maskedge -- Fraction of channels to avoid at each band edge (in %)
            default: 5; example: maskedge=3
    --- Other calibration to pre-apply (weights are calibrated by default)
    gaincurve -- Apply VLA antenna gain curve correction
            default: True; example: gaincurve=False
            
    opacity -- Apply opacity correction
            default: True; example: opacity=False
            
    tau -- Opacity value to apply (if opacity=True)
            default:0.0001; example: tau=0.0005
    gaintable -- Gain calibration solutions
            default: ''; example: gaintable='ngc5921.gcal'
    gainselect -- Select subset of calibration solutions from gaintable
            default:''; example: gainselect='FIELD_ID==1'
    bptable -- Bandpass calibration solutions
            default: ''; example: bptable='ngc5921.bcal'
    pointtable -- Pointing calibration solutions
            default: ''; example: pointtable='ngc5921.ptcal'
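
    Example (adapted from the annotated NGC 5921 script later in this document):

    bandpass(vis='ngc5921.ms',
             caltable='ngc5921.bcal',           # write bandpass solutions to this table
             msselect='FIELD_ID==0',            # select the bandpass calibrator
             gaintable='ngc5921.gcal',          # pre-apply these gain solutions
             refant=14,                         # reference antenna
             solint=86400.0)                    # one solution for the whole observation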

blcal

    Calculate a baseline-based calibration solution (gain or bandpass):
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    caltable -- Name of output Gain calibration table
            default: ; example: caltable='ngc5921.gcal'
    mode -- Type of data selection
            default: 'none' (all data); example: mode='channel'
            
    nchan -- Number of channels to select (when mode='channel')
            default: 1; example: nchan=45 (note: nchan=0 selects all)
    start -- Start channel, 0-relative;active when mode='channel','velocity'
            default=0; example: start=5,start='20km/s'
    step -- Increment between channels;active when mode='channel','velocity'
            default=1; example: step=1, step='1km/s'
    msselect -- Optional subset of the data to select (field,spw,time,etc)
            default:'';example:msselect='FIELD_ID==0', 
            msselect='FIELD_ID IN [0,1]', msselect='FIELD_ID <= 1'
            
    gaincurve -- Apply VLA antenna gain curve correction
            default: True; example: gaincurve=False
            
    opacity -- Apply opacity correction
            default: True; example: opacity=False
            
    tau -- Opacity value to apply (if opacity=True)
            default:0.0001; example: tau=0.0005
    gaintable -- Gain calibration solutions
            default: ''; example: gaintable='ngc5921.gcal'
    gainselect -- Select subset of calibration solutions from gaintable
            default:''; example: gainselect='FIELD_ID==1'
    solint --  Solution interval (sec)
            default: 86400 (=long time); example: solint=60.
    refant -- Reference antenna *name*
            default: -1 (any integer=none); example: refant='13'
    freqdep -- Solve for frequency dependent solutions
            default: False (gain; True=bandpass); example: freqdep=True
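
    Example, a sketch of a baseline-based gain solution (values are illustrative):

    blcal(vis='ngc5921.ms',
          caltable='ngc5921.blcal',             # output baseline-based solutions
          msselect='FIELD_ID==0',               # solve using the calibrator only
          solint=300.,                          # five-minute solution interval
          freqdep=False)                        # gain-like (frequency-independent) solution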

browsetable

    Browse a table (visibility data set or calibration table):
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'

clean

    Calculate a deconvolved image with selected clean algorithm:
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    imagename -- Name of output images: restored=imagename.restored;
            residual=imagename.residual,model=imagename
            default: ; example: imagename='ngc5921'
    mode -- Type of selection 
            default: ; example: mode='channel'; 
            
    alg -- Algorithm to use
            default: 'hogbom'; example: alg='clark'; 
            
    niter -- Number iterations, set to zero for no CLEANing
            default: 500; example: niter=500
    gain -- Loop gain for CLEANing
            default: 0.1; example: gain=0.1
    threshold -- Flux level at which to stop CLEANing (units=mJy)
            default: 0.0; example: threshold=0.0
    mask -- Name(s) of mask image(s) used for CLEANing
            default: ; example: mask='orion.mask'
    cleanbox -- List of [blc-x,blc-y,trc-x,trc-y] values
            default: []; example: cleanbox=[110,110,150,145]
            Note: This can also be a filename with clean values:
            fieldindex blc-x blc-y trc-x trc-y
    nchan -- Number of channels to select
            default: 1; example: nchan=45
    start -- Start channel, 0-relative
            default=0; example: start=5
    width -- Channel width (value > 1 indicates channel averaging)
            default=1; example: width=5
    step -- Step in channel number
            default=1; example: step=2      
    imsize -- Image size in spatial pixels (x,y)
            default = [256,256]; example: imsize=[512,512]
    cell -- Cell size in arcseconds (x,y)
            default=[1,1]; example: cell=[0.5,0.5]
    stokes -- Stokes parameters to image
            default='I'; example: stokes='IQUV'; 
            
    fieldid -- Field index identifier
            default=0; example: fieldid=1
    field -- Field name(s); this will use a minimum match on the strings
            that you provide; to use multiple strings, enter a single string
            with spaces separating the field names
            default: ''(will use fieldid); example: field='133',field='133 N5921'
    spwid -- Spectral window index identifier
            default=-1 (all); example: spwid=1
    weighting -- Weighting to apply to visibilities
            default='natural'; example: weighting='uniform'; 
            
    rmode -- Robustness mode (for use with Briggs weighting)
            default='none'; example: rmode='abs'
            
    robust -- Briggs robustness parameter 
            default=0.0; example: robust=0.5; 
            
    uvfilter -- Apply additional filtering/uv tapering of the visibilities.
            default=False; example: uvfilter=True
    uvfilterbmaj -- Major axis of filter (arcseconds)
            default=1.; example: uvfilterbmaj=12.
    uvfilterbmin -- Minor axis of filter (arcseconds)
            default=1.; example: uvfilterbmin=12.
    uvfilterbpa -- Position angle of filter (degrees)
            default=0.; example: uvfilterbpa=57.
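
    Example (adapted from the annotated NGC 5921 script later in this document):

    clean(vis='ngc5921_src.split.ms',
          imagename='ngc5921_task',             # root name for model/residual/image
          mode='channel', nchan=46, start=0, step=1,
          alg='hogbom', niter=6000, gain=0.1,
          threshold=8.,                         # clean down to 8 mJy
          imsize=[256,256], cell=[15.,15.],
          weighting='briggs', rmode='norm', robust=0.5)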

clearcal

    Re-initializes the calibration for a visibility data set; MODEL_DATA is set to unity (in total intensity
    and unpolarized) and CORRECTED_DATA is set to the original (observed) DATA:
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'

contsub

    Continuum fitting and subtraction in the uv plane:
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    fieldid -- Field index identifier
            default=unset; example: fieldid=1
    field -- Field name(s); this will use a minimum match on the strings
            that you provide; to use multiple strings, enter a single string
            with spaces separating the field names
            default: ''(will use fieldid); example: field='133',field='133 N5921'
    spwid -- Spectral window index identifier
            default=0; example: spwid=1
    channels -- Range of channels to fit
            default:; example: channels=range(4,7)+range(50,60)
    solint -- Averaging time (seconds)
            default: 0.0 (scan-based); example: solint=10
    fitorder -- Polynomial order for the fit
            default: 0; example: fitorder=1
    fitmode -- Use of the continuum fit model
            default: 'subtract'; example: fitmode='replace'
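
    Example (adapted from the annotated NGC 5921 script later in this document):

    contsub(vis='ngc5921.ms',
            fieldid=2,                          # target source
            spwid=0,
            channels=range(4,7)+range(50,60),   # line-free channels 4-6 and 50-59
            solint=0.0,                         # scan-based averaging
            fitorder=0,
            fitmode='subtract')                 # line-only data goes to CORRECTED_DATA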
            

correct

    Apply calibration solution(s) to data:
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    msselect -- Optional subset of the data to select (field,spw,time,etc)
            default:'';example:msselect='FIELD_ID==0', 
            msselect='FIELD_ID IN [0,1]', msselect='FIELD_ID <= 1'
            
    gaincurve -- Apply VLA antenna gain curve correction
            default: True; example: gaincurve=False
            
    opacity -- Apply opacity correction
            default: True; example: opacity=False
            
    tau -- Opacity value to apply (if opacity=True)
            default:0.0001; example: tau=0.0005
    gaintable -- Gain calibration solutions
            default: ''; example: gaintable='ngc5921.gcal'
    gainselect -- Select subset of calibration solutions from gaintable
            default:''; example: gainselect='FIELD_ID==1'
    bptable -- Bandpass calibration solutions
            default: ''; example: bptable='ngc5921.bcal'
    blbased -- Apply baseline-based solutions (from blcal)
            default: False; example: blbased=True
    pointtable -- Pointing calibration solutions
            default: ''; example: pointtable='ngc5921.ptcal'
    calwt -- Calibrate weights along with data
            default: False; example: calwt=True
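
    Example (adapted from the annotated NGC 5921 script later in this document):

    correct(vis='ngc5921.ms',
            msselect='FIELD_ID IN [1,2]',       # apply calibration to these fields
            tau=0.0001,                         # opacity correction
            gaintable='ngc5921.fluxscale',      # flux-scaled gain solutions
            gainselect='FIELD_ID==1',           # use solutions from the gain calibrator
            bptable='ngc5921.bcal')             # bandpass solutions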

exportuvfits

    Convert a CASA visibility data set (MS) to a UVFITS file:
    The FITS file is always written in floating point format and the
    data is always stored in the primary array of the FITS file.
    By default, a single-source UVFITS file is written, but if the
    MS contains more than one field, or if you set multisource=True,
    a multi-source UVFITS file will be written (you must ensure that
    the data shape is the same for all data, due to the limitations
    of the UVFITS format).
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='3C273XC1.ms'
    fitsfile -- Name of output UV FITS file
            default: ; example='3C273XC1.fits'
    datacolumn -- Data column to write
            default: 'corrected'; example: datacolumn='data'
            
    fieldid -- Field index identifier
            default=0; example: fieldid=1
    field -- Field name(s); this will use a minimum match on the strings
            that you provide; to use multiple strings, enter a single string
            with spaces separating the field names
            default: ''(will use fieldid); example: field='133',field='133 N5921'
    spwid -- Spectral window index identifier
            default=0; example: spwid=1
    nchan -- Number of channels to select
            default: 1; example: nchan=45
    start -- Start channel, 0-relative
            default=0; example: start=5
    width -- Channel width (value > 1 indicates channel averaging)
            default=1; example: width=5
    writesyscal -- Write GC and TY tables
            default=False; example: writesyscal=True
    multisource -- Write in multi-source format
            default=False; example: multisource=True
    combinespw -- Handle spectral window as IF
            default=False; example: combinespw=True
    writestation -- Write station name instead of antenna name
            default=False; example: writestation=True
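
    Example, a sketch of writing the corrected data to a single-source UVFITS
    file (file names are illustrative):

    exportuvfits(vis='3C273XC1.ms',
                 fitsfile='3C273XC1.fits',      # output UVFITS file
                 datacolumn='corrected',        # write the corrected data
                 multisource=False)             # single-source format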

feather

    Feather together an interferometer and a single dish image in the Fourier plane: 
    
    Keyword arguments:
    imagename -- Name of output feathered image
            default: False; example: imagename='orion_combined'
    highres -- Name of high resolution (interferometer) image
            default: False; example: highres='orion_vla.im'
    lowres -- Name of low resolution (single dish) image
            default: False; example: lowres='orion_gbt.im'
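
    Example, a sketch of combining interferometer and single dish images of
    the same field (file names are illustrative):

    feather(imagename='orion_combined',         # output feathered image
            highres='orion_vla.im',             # interferometer image
            lowres='orion_gbt.im')              # single dish image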

flagautocorr

    Flag autocorrelations (typically in a filled VLA data set):
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'

flagdata

    Flag data based on selections:
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    antennaid -- Antenna index identifier
            default: [-1] (all); example: antennaid=3
    baseline -- Baseline (composed of a list: [ant1,ant2])
            default: [-1] (all); example: baseline=[2,3]
    chans -- Channel range to clip
            default: ; example: chans=range(0,4)+range(55,62)
            Note: Remember in Python ranges go up to but not including
            the last value (so the first range covers channels 0,1,2,3).
    clipfunction -- Defines the function used in evaluating data for clipping
            default: 'ABS'; example: clipfunction='RE'
            
    clipcorr -- Defines the correlation(s) to clip
            default: 'I'; example: clipcorr='RR'
            
    clipminmax -- Sets the range of data values that will not be flagged
            default: ; example: clipminmax=[0.0,1.5]
    fieldid -- Field index identifier
            default: -1 (all); example: fieldid=1
    field -- Field name(s); this will use a minimum match on the strings
            that you provide; to use multiple strings, enter a single string
            with spaces separating the field names
            default: ''(will use fieldid); example: field='133',field='133 N5921'
    spwid -- Spectral window index identifier
            default: -1 (all); example: spwid=0
    timerange -- Time range selection
            default: ; example: timerange=['26-APR-2003/02:45:00.0','26-APR-2003/02:49:00.0']
    unflag -- Unflag the data (True/False)
            default: False; example: unflag=True
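
    Example, a sketch of clipping one field outside an amplitude range
    (values are illustrative):

    flagdata(vis='ngc5921.ms',
             fieldid=1,                         # flag this field only
             clipfunction='ABS',                # evaluate the amplitude of the data
             clipcorr='I',
             clipminmax=[0.0,1.5])              # flag points outside this range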

flagxy

    Plot points for flagging selected X and Y axes; if region is not specified, it enables interactive setting of 
    flag boxes using the mouse. Use the task flagdata to write these flags to disk.
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    xaxis -- Visibility file (MS) data to plot along the x-axis
            
    yaxis -- Visibility data to plot along the y-axis
            
    datacolumn -- Visibility file (MS) data column.
            default: 'corrected'; example: datacolumn='model'
            
    antennaid -- Antenna index identifier
            default: -1 (all); example: antennaid=[13]
    spwid -- Spectral window index identifier
            default: -1 (all); example: spwid=0
    fieldid -- Field index identifier
            default: -1 (all); example: fieldid=0
    field -- Field name(s); this will use a minimum match on the strings
            that you provide; to use multiple strings, enter a single string
            with spaces separating the field names
            default: ''(will use fieldid); example: field='133',field='133 N5921'
    timerange -- Time range selection
            default: '' (all)
    correlations -- Correlations to plot
            default: '' (all); example: correlations='RR'
            
    --- Spectral Information ---
    nchan -- Number of channels to select
            default: -1 (all); example: nchan=55
    start -- Start channel (0-relative)
            default=0; example: start=5
    width -- Channel width (value>0 indicates averaging)
            default=1; example: width=5
    --- Plot Options ---
    subplot -- Panel number on the display screen
            default: 111 (full screen display); example:
               subplot=221 (upper left panel of 4 on frame)
               subplot=223 (lower left panel of 4 on frame)
               subplot=212 (lower panel of 2 on frame)
            
    overplot -- Overplot these values on current plot (if possible)
            default: False; example: overplot=True
    plotsymbol -- pylab plot symbol
            default: ','; example: plotsymbol='bo' (blue circles)
            
    fontsize -- Font size for labels
            default: 1; example: fontsize=2
    windowsize -- Window size
            default: 1.0; example: windowsize=0.5
    --- Flag Options ---
    region -- Flagging region specified as [xmin,xmax,ymin,ymax]
            default: [0.0] = mark region interactively
            example: region=[53,60,0.0,2.0]
    diskwrite -- Write flags to disk or just display them
            Note: This parameter is only active when region is set.
            default: False; example: diskwrite=True
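
    Example, a sketch of flagging a fixed region without interaction
    (the axis names and values are illustrative):

    flagxy(vis='ngc5921.ms',
           xaxis='channel', yaxis='amp',        # plot amplitude against channel
           fieldid=0,
           region=[53,60,0.0,2.0],              # [xmin,xmax,ymin,ymax] to flag
           diskwrite=True)                      # write the flags to disk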

fluxscale

    Bootstrap the flux density scale from standard calibrators:
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    caltable -- Name of input calibration table
            default: ; example: caltable='ngc5921.gcal'
    fluxtable -- Name of output, flux-scaled calibration table
            default: ; example: fluxtable='ngc5921.gcal2'
    reference -- Reference field name (transfer flux scale from)
            default: ; example: reference='1328+307'
    transfer -- Transfer field name(s)
            default: -1 (all); example: transfer=['1445+09900002']
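
    Example (adapted from the annotated NGC 5921 script later in this document):

    fluxscale(vis='ngc5921.ms',
              caltable='ngc5921.gcal',          # input gain solutions
              fluxtable='ngc5921.fluxscale',    # output flux-scaled solutions
              reference=['1331+30500002'],      # flux density standard (3C 286)
              transfer=['1445+09900002'])       # transfer the scale to the gain calibrator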

fringecal

    Calculate a baseline-based fringe-fitting solution (phase, delay, delay-rate); Note: currently this is limited 
    to delays and delay-rates in the central ambiguity:
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    caltable -- Name of output Gain calibration table
            default: ; example: caltable='ngc5921.gcal'
    mode -- Type of data selection
            default: 'none' (all data); example: mode='channel'
            
    nchan -- Number of channels to select (when mode='channel')
            default: 1; example: nchan=45 (note: nchan=0 selects all)
    start -- Start channel, 0-relative;active when mode='channel','velocity'
            default=0; example: start=5,start='20km/s'
    step -- Increment between channels;active when mode='channel','velocity'
            default=1; example: step=1, step='1km/s'
    msselect -- Optional subset of the data to select (field,spw,time,etc)
            default:'';example:msselect='FIELD_ID==0', 
            msselect='FIELD_ID IN [0,1]', msselect='FIELD_ID <= 1'
            
    gaincurve -- Apply VLA antenna gain curve correction
            default: False; example: gaincurve=False
            
    opacity -- Apply opacity correction
            default: False; example: opacity=False
            
    tau -- Opacity value to apply (if opacity=True)
            default:0.0001; example: tau=0.0005
    solint --  Solution interval (sec)
            default: 0; example: solint=60.
    refant -- Reference antenna *name*
            default: -1 (any integer=none); example: refant='13'

ft

    Fourier transform the specified model (or component list) and insert into the MODEL_DATA
    column of the specified visibility data set: 
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    model -- Name of input model image
            default: False; example: model='3C273XC1.model'
    complist -- Name of component list
            default: ''; example: complist='test.cl'
    incremental -- Add to the existing MODEL_DATA column?
            default: False; example: incremental=True
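
    Example, a sketch of inserting a model image into MODEL_DATA (file names
    are illustrative):

    ft(vis='3C273XC1.ms',
       model='3C273XC1.model',                  # Fourier transform this model image
       incremental=False)                       # replace, rather than add to, MODEL_DATA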

gaincal

    Solve for gain calibration:
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    caltable -- Name of output Gain calibration table
            default: ; example: caltable='ngc5921.gcal'
    mode -- Type of data selection
            default: 'none' (all data); example: mode='channel'
            
    uvrange -- Optional subset of the uv range (kilo lambda)
            default=[0] (all); example: uvrange=[0,15]
    --- Solve properties
    solint --  Solution interval (sec)
            default: -1 (scan based); example: solint=60.
    refant -- Reference antenna *name*
            default: -1 (any integer=none); example: refant='13'
    gaintype -- Type of gain solution (G, T, or GSPLINE)
            default: 'G'; example: gaintype='GSPLINE'
    calmode -- Type of solution
            default: 'ap' (amp and phase); example: calmode='p'
            
    append -- Append solutions to the (existing) table
            default: False; example: append=True
    splinetime -- Spline timescale (sec); used for gaintype='GSPLINE'
            default: 10800; example: splinetime=5000
    npointaver -- Tune phase-unwrapping algorithm for gaintype='GSPLINE'
            default: 10; example: npointaver=5
    phasewrap -- Wrap the phase when differences greater than this are seen (degrees)
            Used for gaintype='GSPLINE'
            default: 250; example: phasewrap=200
    --- Other calibration to pre-apply (weights are calibrated by default)
    gaincurve -- Apply VLA antenna gain curve correction
            default: True; example: gaincurve=False
            
    opacity -- Apply opacity correction
            default: True; example: opacity=False
            
    tau -- Opacity value to apply (if opacity=True)
            default:0.0001; example: tau=0.0005
    gaintable -- Gain calibration solutions
            default: ''; example: gaintable='ngc5921_phase.gcal'
    bptable -- Bandpass calibration solutions
            default: ''; example: bptable='ngc5921.bcal'
    pointtable -- Pointing calibration solutions
            default: ''; example: pointtable='ngc5921.ptcal'
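
    Example (adapted from the annotated NGC 5921 script later in this document):

    gaincal(vis='ngc5921.ms',
            caltable='ngc5921.gcal',            # write gain solutions to this table
            mode='channel', nchan=55, start=2,  # drop edge channels from the solve
            msselect='FIELD_ID <= 1',           # solve on the calibrators
            solint=0.,                          # one solution per scan
            refant=14)                          # reference antenna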

importarchive

    Convert VLA archive file(s) to a CASA visibility data set (MS):
    
    Keyword arguments:
    archivefiles -- Name of input VLA archive file(s)
            default: 
            example: archivefiles=['AP314_A950519.xp1','AP314_A950519.xp2']
    vis -- Name of output visibility file (MS)
            default: ; example: vis='NGC7538.ms'
    bandname -- VLA Frequency band
            default:  - all bands; example: bandname='K'
            
    #projectname -- Observing project name
    #       default: ; example='AP314'
    freqtol -- Tolerance in frequency shift in naming spectral windows
            default: channel width of current spectral window in Hz
            example: 150000.0
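
    Example (a sketch; file names are illustrative, taken from the examples above):

    importarchive(archivefiles=['AP314_A950519.xp1','AP314_A950519.xp2'],
                  vis='NGC7538.ms',             # output visibility file (MS)
                  bandname='K')                 # keep only K-band data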

importasdm

    Convert an ALMA Science Data Model observation into a CASA visibility file (MS)
    
    Keyword arguments:
    asdm -- Name of input ASDM file (directory)
            default: ; example: asdm='ExecBlock3'

importuvfits

    Convert a UVFITS file to a CASA visibility data set (MS):
    
    Keyword arguments:
    fitsfile -- Name of input UV FITS file
            default: ; example='3C273XC1.fits'
    vis -- Name of output visibility file (MS)
            default: ; example: vis='3C273XC1.ms'

invert

    Calculate the dirty image and dirty beam:
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    imagename -- Name of output images: restored=imagename.restored;
            residual=imagename.residual,model=imagename
            default: ; example: imagename='ngc5921'
    mode -- Type of selection 
            default: ; example: mode='channel'; 
            
    nchan -- Number of channels to select
            default: 1; example: nchan=45
    start -- Start channel, 0-relative
            default=0; example: start=5
    width -- Channel width (value > 1 indicates channel averaging)
            default=1; example: width=5
    step -- Step in channel number
            default=1; example: step=2      
    imsize -- Image size in spatial pixels (x,y)
            default = [256,256]; example: imsize=[512,512]
    cell -- Cell size in arcseconds (x,y)
            default=[1,1]; example: cell=[0.5,0.5]
    stokes -- Stokes parameters to image
            default='I'; example: stokes='IQUV'; 
            
    fieldid -- Field index identifier
            default=0; example: fieldid=1
    field -- Field name(s); this will use a minimum match on the strings
            that you provide; to use multiple strings, enter a single string
            with spaces separating the field names
            default: ''(will use fieldid); example: field='133',field='133 N5921'
    spwid -- Spectral window index identifier
            default=0; example: spwid=1
    weighting -- Weighting to apply to visibilities
            default='natural'; example: weighting='uniform'; 
            
    rmode -- Robustness mode (for use with Briggs weighting)
            default='none'; example: rmode='abs'
            
    robust -- Briggs robustness parameter 
            default=0.0; example: robust=0.5; 
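
    Example, a sketch of making a dirty image cube (values are illustrative):

    invert(vis='ngc5921_src.split.ms',
           imagename='ngc5921_dirty',           # root name for the output images
           mode='channel', nchan=46, start=0, step=1,
           imsize=[256,256], cell=[15.,15.],
           weighting='natural')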
            

listhistory

    List the processing history of a dataset:
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'

listobs

    List the observations in a dataset:
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    verbose -- List each observation in addition to the summary
            default=False; example: verbose=True
            

makemask

    Derive/Calculate a mask image from another image or a visibility data set and 
       a set of imaging parameters:
    
    Keyword arguments:
    image -- Name of input image
            default: ; example: image='ngc5921_task.image'
            
    interactive -- Indicate whether you want to view the image and
            interactively select regions.
            default: False; example: interactive=True
            
    cleanbox -- List of [blc-x,blc-y,trc-x,trc-y] values
            default: []; example: cleanbox=[110,110,150,145]
            Note: This can also be a filename with clean values:
            fieldindex blc-x blc-y trc-x trc-y
    expr -- An expression specifying the mask
            default: ''; example: expr='somestring'
            
    ----------------------------------------------------------------
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    imagename -- Name of output mask image
            default: ; example: imagename='ngc5921.mask'
    mode -- Type of selection 
            default: ; example: mode='channel'; 
            
    nchan -- Number of channels to select
            default: 1; example: nchan=45
    start -- Start channel, 0-relative
            default=0; example: start=5
    width -- Channel width (value > 1 indicates channel averaging)
            default=1; example: width=5
    step -- Step in channel number
            default=1; example: step=2      
    imsize -- Image size in spatial pixels (x,y)
            default = [256,256]; example: imsize=[512,512]
    cell -- Cell size in arcseconds (x,y)
            default=[1,1]; example: cell=[0.5,0.5]
    stokes -- Stokes parameters to image
            default='I'; example: stokes='IQUV'; 
            
    fieldid -- Field index identifier
            default=0; example: fieldid=1
    field -- Field name(s); this will use a minimum match on the strings
            that you provide; to use multiple strings, enter a single string
            with spaces separating the field names
            default: ''(will use fieldid); example: field='133',field='133 N5921'
    spwid -- Spectral window index identifier
            default=0; example: spwid=1
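
    Example, a sketch of making a mask image from a clean box, using an
    existing image to define the shape (file names are illustrative):

    makemask(image='ngc5921_task.image',
             imagename='ngc5921.mask',          # output mask image
             cleanbox=[110,110,150,145])        # box to set in the mask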

mosaic

    Calculate a multi-field deconvolved image with selected clean algorithm:
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    imagename -- Name of output images: restored=imagename.restored;
            residual=imagename.residual,model=imagename
            default: ; example: imagename='ngc5921'
    mode -- Type of selection 
            default: False; example: mode='channel'; 
            
    mfalg -- Algorithm to use
            default: 'mfclark'; example: mfalg='mfhogbom'; 
            
    niter -- Number iterations, set to zero for no CLEANing
            default: 500; example: niter=500
    gain -- Loop gain for CLEANing
            default: 0.1; example: gain=0.1
    threshold -- Flux level at which to stop CLEANing (units=mJy)
            default: 0.0; example: threshold=0.0
    mask -- Name(s) of mask image(s) used for CLEANing
            default: ; example: mask='orion.mask'
    nchan -- Number of channels to select
            default: 1; example: nchan=45
    start -- Start channel, 0-relative
            default=0; example: start=5
    width -- Channel width (value > 1 indicates channel averaging)
            default=1; example: width=5
    step -- Step in channel number
            default=1; example: step=2      
    imsize -- Image size in spatial pixels (x,y)
            default = [256,256]; example: imsize=[512,512]
    cell -- Cell size in arcseconds (x,y)
            default=[1,1]; example: cell=[0.5,0.5]
    stokes -- Stokes parameters to image
            default='I'; example: stokes='IQUV'; 
            
    fieldid -- Field index identifier
            default=0; example: fieldid=1
    field -- Field name(s); this will use a minimum match on the strings
            that you provide; to use multiple strings, enter a single string
            with spaces separating the field names
            default: ''(will use fieldid); example: field='133',field='133 N5921'
    phasecenter -- Phase center (field identifier or direction string)
            default=0; example: phasecenter=62
            
    spwid -- Spectral window index identifier
            default=0; example: spwid=1
    weighting -- Weighting to apply to visibilities
            default='natural'; example: weighting='uniform'; 
            
    rmode -- Robustness mode (for use with Briggs weighting)
            default='none'; example: rmode='abs'
            
    robust -- Briggs robustness parameter 
            default=0.0; example: robust=0.5; 
            
    scaletype -- Image plane flux scale type
            default='NONE'; example: scaletype='SAULT'
            
    constpb -- In Sault weighting, the flux scale is constant above this PB level
            default=0.4; example: constpb=0.3
    minpb -- Minimum PB level to use
            default=0.1; example: minpb=0.15
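
    Example, a sketch of a multi-field clean with Sault weighting (the file
    names and values are illustrative):

    mosaic(vis='orion_mosaic.ms',
           imagename='orion_mosaic',            # root name for the output images
           mfalg='mfclark', niter=500, gain=0.1,
           imsize=[512,512], cell=[2.,2.],
           phasecenter=0,                       # use field 0 as the phase center
           scaletype='SAULT', constpb=0.4, minpb=0.1)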

plotants

    Plot the antenna distribution in local reference frame; X-toward local east; Y-toward local north:
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'

plotcal

        Plot calibration solutions: 
        
        Keyword arguments:
        tablein -- Name of input calibration table 
                default: ; example: tablein='ngc5921.gcal'
        yaxis -- Visibility data to plot along the y-axis
                
        antennaid -- Antenna index identifier(s)
                default: -1 (all); example: antennaid=[13]
        caldescid -- Calibrater data description ID (combination of SPW and polarization)
                default: -1 (all); example: caldescid=0
        --- Plot Options ---
        nxpanel -- Panel number in the x-direction
                default: 1; example: nxpanel=2
        nypanel -- Panel number in the y-direction 
                default: 1; example: nypanel=2
        multiplot -- Plot data (e.g., from different antennas) on separate plots
                default: False; example: multiplot=True
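
        Example, a sketch of plotting solutions per antenna on a 3x3 grid
        (the yaxis value is illustrative):

        plotcal(tablein='ngc5921.gcal',
                yaxis='amp',                    # plot solution amplitudes
                nxpanel=3, nypanel=3,           # 3x3 grid of panels
                multiplot=True)                 # separate plot per antenna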

plotxy

    Plot points for selected X and Y axes:
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    xaxis -- Visibility file (MS) data to plot along the x-axis
            
    yaxis -- Visibility data to plot along the y-axis
            
    datacolumn -- Visibility file (MS) data column.
            default: 'corrected'; example: datacolumn='model'
            
    antennaid -- Antenna index identifier
            default: -1 (all); example: antennaid=[13]
    spwid -- Spectral window index identifier
            default: -1 (all); example: spwid=0
    fieldid -- Field index identifier
            default: -1 (all); example: fieldid=0
    field -- Field name(s); this will use a minimum match on the strings
            that you provide; to use multiple strings, enter a single string
            with spaces separating the field names
            default: ''(will use fieldid); example: field='133',field='133 N5921'
    timerange -- Time range selection
            default: '' (all)
    correlations -- Correlations to plot
            default: '' (all); example: correlations='RR'
            
    --- Spectral Information ---
    nchan -- Number of channels to select
            default: -1 (all); example: nchan=55
    start -- Start channel (0-relative)
            default=0; example: start=5
    width -- Channel width (value>0 indicates averaging)
            default=1; example: width=5
    --- Plot Options ---
    subplot -- Panel number on the display screen
            default: 111 (full screen display); example:
               subplot=221 (upper left panel of 4 on frame)
               subplot=223 (lower left panel of 4 on frame)
               subplot=212 (lower panel of 2 on frame)
            
    overplot -- Overplot these values on current plot (if possible)
            default: False; example: overplot=True
    plotsymbol -- pylab plot symbol
            default: ','; example: plotsymbol='bo' (blue circles)
            
    fontsize -- Font size for labels
            default: 1; example: fontsize=2
    windowsize -- Window size
            default: 1.0; example: windowsize=0.5
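
    Example, a sketch of plotting corrected amplitudes for one field (the
    axis names are illustrative):

    plotxy(vis='ngc5921.ms',
           xaxis='uvdist', yaxis='amp',         # amplitude against uv distance
           datacolumn='corrected',
           fieldid=0,
           subplot=111,                         # single full-screen panel
           plotsymbol=',')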

pointcal

    Solve for pointing error calibration:
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    model -- Name of input model (component list or image)
            default: ; example: model='ngc5921.im'
    caltable -- Name of output Pointing calibration table
            default: ; example: caltable='ngc5921.gcal'
    mode -- Type of data selection
            default: 'none' (all data); example: mode='channel'
            
    gaincurve -- Apply VLA antenna gain curve correction
            default: True; example: gaincurve=False
            
    opacity -- Apply opacity correction
            default: True; example: opacity=False
            
    tau -- Opacity value to apply (if opacity=True)
            default:0.0001; example: tau=0.0005
    solint --  Solution interval (sec)
            default: -1 (scan based); example: solint=60.
    refant -- Reference antenna *name*
            default: -1 (any integer=none); example: refant='13'

smooth

        Smooth calibration solution(s) derived from one or more sources:
        
        Keyword arguments:
        vis -- Name of input visibility file (MS)
                default: ; example: vis='ngc5921.ms'
        tablein -- Input calibration table (any type)
                default: ; example: tablein='ngc5921.gcal'
        caltable -- Output calibration table (smoothed)
                default: ; example: caltable='ngc5921_smooth.gcal'
        append -- Append solutions to an existing calibration table?
                default: False; example: append=True
        calselect -- Optional subset of calibration data to select (e.g., field)
                default: ''; example: calselect='FIELD_NAME=FIELD1'
        smoothtype -- The smoothing filter to be used
                default: 'mean'; example: smoothtype='median'
                
        smoothtime -- Smoothing filter time (sec)
                default: 0.0; example: smoothtime=10.
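
        Example, a sketch of median-smoothing gain solutions on a
        five-minute timescale (file names are illustrative):

        smooth(vis='ngc5921.ms',
               tablein='ngc5921.gcal',          # input gain solutions
               caltable='ngc5921_smooth.gcal',  # output smoothed solutions
               smoothtype='median',
               smoothtime=300.)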

setjy

    Compute the model visibility for a specified source flux density: 
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    fieldid -- Field index identifier (0-based)
            default=-1 (all); example: fieldid=1
    field -- Field name(s); this will use a minimum match on the strings
            that you provide; to use multiple strings, enter a single string
            with spaces separating the field names
            default: ''(will use fieldid); example: field='133',field='133 N5921'
    spwid -- Spectral window index identifier (0-based)
            default=-1 (all); example: spwid=1
    fluxdensity -- Specified flux density (I,Q,U,V) in Jy
            default=-1 (lookup the value; use 1.0 if not found)
            example: fluxdensity=[2.6,0.2,0.3,0.5]
    standard -- Flux density standard
            default: 'Perley-Taylor 99'; example: standard='Baars'
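
    Example (adapted from the annotated NGC 5921 script later in this document):

    setjy(vis='ngc5921.ms',
          fieldid=0)                            # look up the flux density of 1331+305 (3C 286)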
            

split

    Create a new data set (MS) from a subset of an existing data set (MS):
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    outputvis -- Name of output visibility file (MS)
            default: ; example: outputvis='ngc5921_src.ms'
    fieldid -- Field index identifier
            default=-1 (all); example: fieldid=1
    field -- Field name(s); this will use a minimum match on the strings
            that you provide; to use multiple strings, enter a single string
            with spaces separating the field names
            default: ''(will use fieldid); example: field='133',field='133 N5921'
    spwid -- Spectral window index identifier
            default=-1 (all); example: spwid=1
    nchan -- Number of channels to select
            default:-1 (all); example: nchan=45
    start -- Start channel, 0-relative
            default=0; example: start=5
    step -- Step in channel number
            default=1; example: step=2      
    timebin -- Value for time averaging
            default='-1s' (no averaging); example: timebin='30s'
    timerange -- Select time range subset of data
            default=''; 
            example: timerange='YYYY/MM/DD/HH:MM:SS.sss'
            timerange='< YYYY/MM/DD/HH:MM:SS.sss'
            timerange='> YYYY/MM/DD/HH:MM:SS.sss'
            timerange='ddd/HH:MM:SS.sss'
            timerange='< ddd/HH:MM:SS.sss'
            timerange='> ddd/HH:MM:SS.sss'
    datacolumn -- Which data set column to split out
            default='corrected'; example: datacolumn='data'
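
    Example (adapted from the annotated NGC 5921 script later in this document):

    split(vis='ngc5921.ms',
          outputvis='ngc5921_src.split.ms',     # write the selected data here
          fieldid=2, spwid=0,                   # target source, first spectral window
          nchan=46, start=5, step=1,            # channel selection
          datacolumn='CORRECTED_DATA')          # split out the calibrated data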
            

uvmodelfit

    Fit a single component source model to the uv data:
    
    Keyword arguments:
    vis -- Name of input visibility file (MS)
            default: ; example: vis='ngc5921.ms'
    niter -- Number of fitting iterations to execute
            default: 0; example: niter=5
    comptype -- component model type
            default: 'P'; example: comptype='G'
            
    sourcepar -- Starting guess for component parameters (flux,xoffset,yoffset)
            default: [1,0,0]; example: sourcepar=[2.5,0.3,0.1]
            Note: Flux is in Jy, xoffset is in arcsec, yoffset is in arcsec.
    fixpar -- Control which parameters to let vary in the fit
            default: [] (all vary); example: fixpar=[False,True,True]
            (this would fix the flux to that set in sourcepar but allow the
            x and y offset positions to be fit).
    file -- Optional output component list table
            default: ''; example: file='componentlist.cl'
    --- Data Selection ---
    mode -- Type of data selection
            default: 'none' (all data); example: mode='channel'
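
    Example, a sketch of fitting a point source and saving the result as a
    component list (values are illustrative):

    uvmodelfit(vis='ngc5921.ms',
               niter=5,                         # fitting iterations
               comptype='P',                    # point-source model
               sourcepar=[1.0,0.0,0.0],         # starting guess: 1 Jy at the phase center
               file='componentlist.cl')         # save the fitted component list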
            

viewer

    View an image or visibility data set:
    
    Keyword arguments:
    imagename -- Name of file to visualize
            default: ; example: imagename='ngc5921.image'

Step-by-Step Guides

Coming soon -- see the Advanced cookbook for tool-based examples

Annotated Example Scripts

Note: The NGC 5921 data set is available with the full CASA rpm distribution. Other data sets can be made available upon request. These scripts are intended to illustrate the types of commands needed for different types of reduction/astronomical observations.

NGC 5921 -- VLA -- red-shifted HI emission

Note: This script does not include self-calibration steps.

importuvfits(fitsfile='/aips++/data/nrao/VLA/ngc5921.fits', # input UVFITS file
             vis='ngc5921.ms')                              # output data set (MS)
flagautocorr(vis='ngc5921.ms')                              # Flag autocorrelations
setjy(vis='ngc5921.ms',                              # Set flux density for 1331+305 (3C 286)
       fieldid=0)
gaincal(vis='ngc5921.ms',                          # Derive gain calibration solutions
       caltable='ngc5921.gcal',                    # Write solutions to this file
       mode='channel',                             # Select data; drop out channels that
       nchan=55,                                   # may bias the solution
       start=2,                                    # Select data to be used for solving
       msselect='FIELD_ID <= 1',                   # Fields 0,1 are the gain, bandpass cals
       tau=0.0001,                                 # Correct opacity for each time
       solint=0.,                                  # Solve for each scan
       refant=14)                                  # Choose well behaved antenna
bandpass(vis='ngc5921.ms',                         # Derive bandpass calibration solution
       caltable='ngc5921.bcal',                    # Write solutions to this file
       msselect='FIELD_ID==0',                     # Select bandpass calibrator (3C286)
       tau=0.0001,                                 # Correct opacity with specified value
       gaintable='ngc5921.gcal',                   # Apply gain solutions from this table
       refant=14,                                  # Use this reference antenna
       nchan=0,                                    # Set nchan back to default
       start=0,                                    # Set start back to default
       step=1,                                     # Set step back to default
       solint=86400.0)                             # Set interval to be entire observation
fluxscale(vis='ngc5921.ms',                        # Transfer the flux density scale
       caltable='ngc5921.gcal',                    # Use the gain solutions from this table
       fluxtable='ngc5921.fluxscale',              # Write the scaled solutions to this table
       reference=['1331+30500002'],                # Transfer flux density scale from 3C286
       transfer=['1445+09900002'])                 # to the gain calibrator
correct(vis='ngc5921.ms',                          # Correct the target source data
       msselect='FIELD_ID IN [1,2]',               # Select sources to which calibration is applied
       tau=0.0001,                                 # Correct opacity with specified value
       gaintable='ngc5921.fluxscale',              # Apply flux-scaled gain solutions
       gainselect='FIELD_ID==1',                   # from the gain calibrater
       bptable='ngc5921.bcal')                     # Apply bandpass calibration solutions
split(vis='ngc5921.ms',                            # Split out calibrated data - cal
       outputvis='ngc5921_cal.split.ms',           # Write data to this file
       fieldid=1,                                  # Choose calibrater
       spwid=0,                                    # Choose spectral window
       nchan=63,                                   #   and channels to write out
       start=0,
       step=1,
       datacolumn='CORRECTED_DATA')                # Choose calibrated data
contsub(vis='ngc5921.ms',                          # Subtract the continuum in the uv plane
       fieldid=2,                                  # Choose NGC 5921 (fieldid=2)
       spwid=0,                                    # Choose spectral window 1
       channels=range(4,7)+range(50,60),           # line free channels 4-6 and 50-59
       solint=0.0,                                 # use scan-averaging
       fitmode='subtract')                         # subtract continuum from line data
                                                   # line-only data is written to CORRECTED_DATA
split(vis='ngc5921.ms',                            # Split out calibrated data - src
       outputvis='ngc5921_src.split.ms',
       fieldid=2,
       spwid=0,
       nchan=46,
       start=5,
       step=1,
       datacolumn='CORRECTED_DATA')
clean(vis='ngc5921_src.split.ms',                  # Use split out calibrated data
       imagename='ngc5921_task',                   # Specify name of output: 
                                                   # model    = 'ngc5921_task.model'
                                                   # residual = 'ngc5921_task.residual'
                                                   # image    = 'ngc5921_task.image'
       mode='channel',                             # Imaging parameters
       alg='hogbom',                               # Use Hogbom clean
       niter=6000,                                 # number of iterations
       gain=0.1,                      
       threshold=8.,                               # clean down to a threshold of 8 mJy
       mask='',                                    # clean the inner quarter = no mask
       nchan=46,
       start=0,
       step=1,
       fieldid=0,
       imsize=[256,256],             
       cell=[15.,15.],                             # cell size in arcseconds
       weighting='briggs',                         # Use robust weighting
       rmode='norm',
       robust=0.5)

NGC 5921 data summary

Summary created with ms.summary(verbose=T):

This is written to the casalog (which is just a 'tail -f' on the casapy.log file).
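
A minimal sketch of generating such a summary with the ms tool (data set name
illustrative):

ms.open('ngc5921.ms')        # attach the ms tool to the data set
ms.summary(verbose=True)     # the summary is written to the casalog
ms.close()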

           MeasurementSet Name:  ngc5921.ms      MS Version 2

   Observer: TEST     Project:
Observation: VLA
Data records: 22653       Total integration time = 5280 seconds
   Observed from   13-Apr-1995/09:19:00   to   13-Apr-1995/10:47:00

   ObservationID = 1         ArrayID = 1
  Date        Timerange                Scan  Field          DataDescIds
  13-Apr-1995/09:19:00.0 - 09:24:30.0     1  1331+30500002  [1]
              09:27:30.0 - 09:29:30.0     2  1445+09900002  [1]
              09:33:00.0 - 09:48:00.0     3  N5921          [1]
              09:50:30.0 - 09:51:00.0     4  1445+09900002  [1]
              10:22:00.0 - 10:23:00.0     5  1445+09900002  [1]
              10:26:00.0 - 10:43:00.0     6  N5921          [1]
              10:45:30.0 - 10:47:00.0     7  1445+09900002  [1]
Fields: 3
  ID   Name          Right Ascension  Declination   Epoch
  1    1331+30500002 13:31:08.29      +30.30.32.96  J2000   ** Flux/bandpass cal, 3c 286
                                                               (J2000 name & coords.)
  2    1445+09900002 14:45:16.47      +09.58.36.07  J2000   ** Gain calibrator
  3    N5921         15:22:00.00      +05.04.00.00  J2000   ** Source

Data descriptions: 1 (1 spectral windows and 1 polarization setups)
  ID    Ref.Freq    #Chans  Resolution  TotalBW     Correlations
  1     1413.45MHz  63      24.4141kHz  1550.2 kHz  RR  LL
Feeds: 28: printing first row only
  Antenna   Spectral Window     # Receptors    Polarizations
  1         -1                  2              [         R, L]
Antennas: 27:
  ID   Name  Station   Diam.    Long.         Lat.
  1    1     VLA:N7    25.0 m   -107.37.07.2  +33.54.12.9
  2    2     VLA:W1    25.0 m   -107.37.05.9  +33.54.00.5
  3    3     VLA:W2    25.0 m   -107.37.07.4  +33.54.00.9
  4    4     VLA:E1    25.0 m   -107.37.05.7  +33.53.59.2
  5    5     VLA:E3    25.0 m   -107.37.02.8  +33.54.00.5
  6    6     VLA:E9    25.0 m   -107.36.45.1  +33.53.53.6
  7    7     VLA:E6    25.0 m   -107.36.55.6  +33.53.57.7
  8    8     VLA:W8    25.0 m   -107.37.21.6  +33.53.53.0
  9    9     VLA:N5    25.0 m   -107.37.06.7  +33.54.08.0
  10   10    VLA:W3    25.0 m   -107.37.08.9  +33.54.00.1
  11   11    VLA:N4    25.0 m   -107.37.06.5  +33.54.06.1
  12   12    VLA:W5    25.0 m   -107.37.13.0  +33.53.57.8
  13   13    VLA:N3    25.0 m   -107.37.06.3  +33.54.04.8
  14   14    VLA:N1    25.0 m   -107.37.06.0  +33.54.01.8
  15   15    VLA:N2    25.0 m   -107.37.06.2  +33.54.03.5
  16   16    VLA:E7    25.0 m   -107.36.52.4  +33.53.56.5
  17   17    VLA:E8    25.0 m   -107.36.48.9  +33.53.55.1
  18   18    VLA:W4    25.0 m   -107.37.10.8  +33.53.59.1
  19   19    VLA:E5    25.0 m   -107.36.58.4  +33.53.58.8
  20   20    VLA:W9    25.0 m   -107.37.25.1  +33.53.51.0
  21   21    VLA:W6    25.0 m   -107.37.15.6  +33.53.56.4
  22   22    VLA:E4    25.0 m   -107.37.00.8  +33.53.59.7
  24   24    VLA:E2    25.0 m   -107.37.04.4  +33.54.01.1
  25   25    VLA:N6    25.0 m   -107.37.06.9  +33.54.10.3
  26   26    VLA:N9    25.0 m   -107.37.07.8  +33.54.19.0
  27   27    VLA:N8    25.0 m   -107.37.07.5  +33.54.15.8
  28   28    VLA:W7    25.0 m   -107.37.18.4  +33.53.54.8


Tables:
   MAIN                   22653 rows
   ANTENNA                   28 rows
   DATA_DESCRIPTION           1 row
   DOPPLER             <absent>
   FEED                      28 rows
   FIELD                      3 rows
   FLAG_CMD             <empty>
   FREQ_OFFSET         <absent>
   HISTORY                  270 rows
   OBSERVATION                1 row
   POINTING                 168 rows
   POLARIZATION               1 row
   PROCESSOR            <empty>
   SOURCE                     3 rows
   SPECTRAL_WINDOW            1 row
   STATE                <empty>
   SYSCAL              <absent>
   WEATHER             <absent>
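Each of these subtables can be examined directly with the table (tb) tool if you need more detail than the summary provides. A minimal sketch:

    tb.open('ngc5921.ms/ANTENNA')   # open a subtable (read-only by default)
    print tb.nrows()                # 28 rows, matching the summary above
    print tb.colnames()             # list the columns in this subtable
    tb.close()                      # close the table when done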

Appendices

AIPS -- CASA dictionary

https://safe.nrao.edu/wiki/bin/view/Software/LegacyCASA-AIPSDictionary

MIRIAD -- CASA dictionary

This appendix provides a list of common MIRIAD tasks and their equivalent CASA tool or tool-function names. The two packages differ both in architecture and in their calibration and imaging models, so there is often no direct correspondence. Nevertheless, this index gives a scientific user of CASA who is already familiar with MIRIAD a simple translation table for mapping existing data-reduction knowledge to the new package; a short worked example follows the table.

%BEGINTABLE{label="tbl:MIRIADCASA" caption="MIRIAD - CASA dictionary"}%
MIRIAD Task   Description                                       CASA tool/function
atlod         Load ATCA data                                    atcafiller
blflag        Interactive baseline-based editor/flagger         mp raster displays
cgcurs        Interactive image analysis                        qtviewer
cgdisp        Image display, overlays                           qtviewer
clean         Clean an image                                    im
fits          FITS image filler                                 ia.imagefromfits
gpboot        Set flux density scale                            cb.fluxscale
gpcal         Polarization leakage and gain calibration         cb with 'G' and 'D'
gpcopy        Copy calibration tables                           not needed
gpplt         Plot calibration solutions                        cp.plot
imcomb        Image combination                                 im
imfit         Image-plane component fitter                      ia.imagefitter
impol         Create polarization images                        ia.imagepol
imstat        Image statistics                                  ia.statistics
imsub         Extract sub-image                                 ia.subimage
invert        Synthesis imaging                                 im
linmos        Linear mosaic combination of images               im
maths         Calculations involving images                     ia.imagecalc, ia.calc
mfcal         Bandpass and gain calibration                     cb with 'G' and 'B'
prthd         Print header of image or uv data                  ia.summary, ms.summary
restor        Restore a clean component model                   im
selfcal       Self-calibration of visibility data               im, cb
tvclip        Automated flagging based on clip levels           af
tvdisp        Load image to TV display                          qtviewer
tvflag        Interactive TV data editing                       mp
uvaver        Average/select data, apply calibration            ms.split
uvfit         uv-plane component fitter                         cb
uvflag        Command-based flagging                            af
uvgen         Simulator                                         sm
uvlist        List uv data                                      tb
uvmodel       Source model computation                          im.ft
uvplt         uv-data plotting                                  ms
uvsplit       Split uv file into sources and spectral windows   ms.split
%ENDTABLE%
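As an illustration of using these mappings, the MIRIAD imstat entry corresponds to ia.statistics. A minimal sketch (the image name ngc5921.image is hypothetical; the statistics come back as a Python dictionary):

    ia.open('ngc5921.image')    # attach the image tool to an existing CASA image
    stats = ia.statistics()     # compute statistics over the whole image
    print stats['rms']          # individual values, e.g. rms, max, min
    ia.close()                  # detach the tool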

CLIC -- CASA dictionary

This appendix provides a list of common CLIC functions and their equivalent CASA tool or tool-function names. The two packages are very similar, since the CASA software for reducing IRAM data is based on the CLIC reduction procedures; a short worked example follows the table.

%BEGINTABLE{label="tbl:CLICCASA" caption="CLIC - CASA dictionary"}%
CLIC Function   Description                            CASA tool/function
load            Load data                              almatifiller
print           Print text summary of data             ms.summary
flag            Flag data                              mp, af, qtviewer
phcor           Atmospheric phase correction           almatifiller
rf              Radio frequency bandpass               cb.setsolvebandpoly, cb.solve
phase           Phase calibration                      cb.setsolvegainspline, cb.solve
flux            Absolute flux calibration              cb.fluxscale
ampl            Amplitude calibration                  cb.setsolvegainspline, cb.solve
table           Split out calibrated data (uv table)   ms.split
%ENDTABLE%
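For example, the CLIC flux step maps onto cb.fluxscale. A minimal sketch using the calibrator names from the NGC 5921 summary above (the gain-table names are hypothetical, and parameter names may vary between CASA releases):

    cb.open('ngc5921.ms')                        # attach the calibrater tool to the MS
    cb.fluxscale(tablein='ngc5921.gcal',         # gain solutions for both calibrators
                 tableout='ngc5921.flux.gcal',   # output table with the scaled gains
                 reference='1331+30500002',      # flux calibrator (3C 286)
                 transfer='1445+09900002')       # gain calibrator to receive the scale
    cb.close()                                   # detach the tool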

-- JosephMcMullin - 25 Sep 2006