_Version: February 28, 2007 (mjc)

## Introduction

This document describes how to calibrate and image interferometric data using tasks from the CASA (Common Astronomy Software Application) package (single-dish analysis will be incorporated in this document at a later date). CASA is a suite of astronomical data reduction tools and tasks that can be run via the IPython interface to Python. Tools in CASA provide the full capability of the package. Tasks are essentially IPython interface scripts to the tools, but with specific, limited access to them. Tasks are intended to be simpler to use, to provide a more familiar interface, and to be easier to learn for most astronomers who are familiar with radio interferometric data reduction (and hopefully for novice users as well). Currently, CASA is in an alpha release, and this should be taken into account as users begin to learn the package.

For the moment, the audience is assumed to have some basic grasp of the fundamentals of synthesis imaging, so details of how a radio interferometer works and why the data needs to undergo calibration in order to make synthesis images are left to other documentation --- a good place to start might be Synthesis Imaging in Radio Astronomy II (1999, ASP Conference Series Vol. 180, eds. Taylor, Carilli & Perley).

Finally, this document provides some examples of executing tasks in CASA and what might be expected during and at the end of such execution. These examples are found in Appendix II.

Current CASA documentation consists of:

• CASA Task Analysis Cookbook - This document

If you have experience with CLIC, AIPS, or MIRIAD software packages to reduce interferometric data, Appendices 2--4 provide CLIC-CASA, AIPS-CASA, and MIRIAD-CASA dictionaries which map common CLIC, AIPS and MIRIAD tasks/adverbs to their counterparts in CASA.

## CASA Basics --- Useful Information for First-Time Users

### Starting CASA

After having run the appropriate casainit script, CASA is started by typing casapy on the command line. After startup information, you should see the casapy command prompt: CASA <1>:

### Ending CASA

You can exit CASA by typing CTRL-D, %exit, or quit.

### Python

Within CASA, you use python to interact with the system. This does not mean an extensive python course is necessary - basic interaction with the system (assigning parameters, running tasks) is straightforward. At the same time, the full potential of python is at the more experienced user's disposal. Some further details about python, ipython, and the interaction between python and CASA are covered later in this document.

#### Tools

Originally, CASA consisted of a collection of tools, combined in the so-called toolkit. Since the majority of prospective users are far more familiar with the concept of tasks, an effort is underway to replace most - if not all - toolkit functionality with tasks.

While running CASA, you will have access to and be interacting with tasks, either indirectly by providing parameters to a task, or directly by running a task. Each task has a well defined purpose, and a number of associated parameters, the values of which are to be supplied by the user. Technically speaking, tasks are built on top of tools - when you are running a task, you are running tools in the toolkit, though this should be transparent.

#### What does this mean for the normal user?

As more tasks are being written, and the functionality of each task is enhanced, there will be less and less reason to run tools in the toolkit. We are working toward a system in which direct access to the underlying toolkit is unnecessary for all standard data processing.

### Data

#### Measurement set, Images

Interferometric data are stored in a so-called measurement set, which is a generalized description of data from any interferometric telescope. Traditionally, this kind of data has been described as visibility data, or uv data. A Fourier transform to the image plane creates images.

#### Where are my data ?

Your data are in directories located in the same area you started CASA from. The name of the top-level directory is identical to the name of your measurement set or image. Deleting a measurement set or an image means deleting that top-level directory, and all underlying directories and files, using the file deletion method of the operating system you started CASA from. For example, when running CASA on a Linux system, to delete the measurement set named AM675 type:

CASA <5>: !rm -r AM675


from within CASA. The ! tells CASA that a system command follows, and the -r makes sure that all subdirectories are being deleted recursively. Any system command may be executed in this manner from within CASA.
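
The same cleanup can also be scripted from Python itself. A minimal sketch using only the standard library; the AM675 directory here is created on the fly as a stand-in for a real measurement set:

```python
import os
import shutil
import tempfile

# Create a stand-in "measurement set": a top-level directory with
# subdirectories and files, like a real MS on disk.
ms_dir = os.path.join(tempfile.mkdtemp(), 'AM675')
os.makedirs(os.path.join(ms_dir, 'ANTENNA'))
open(os.path.join(ms_dir, 'table.dat'), 'w').close()

# Equivalent of `!rm -r AM675`: delete the top-level directory and
# everything beneath it, recursively.
shutil.rmtree(ms_dir)
print(os.path.exists(ms_dir))  # False
```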

As mentioned in the introduction, tasks in CASA are python interfaces to the more basic toolkit. Tasks are executed to perform a single job, such as loading, plotting, flagging, calibrating, or imaging the data.

Basic information on tasks, including the parameters used and their defaults, can be obtained by typing help taskname at the CASA prompt, where taskname is the name of a given task. help taskname provides a description of the task and then lists all parameters, a brief description of the parameter, the parameter default, an example setting the parameter and any options if there are limited allowed values for the parameter.

*NOTE: Your PAGER environment variable determines how help is displayed. export PAGER=less will give you a scrollable view of the task help (when doing 'help task') that vanishes and returns you to the command line when you are done viewing it. export PAGER=more will scroll the help onto your command window and then return you to your prompt (but leave it on display). Setting PAGER=cat gives the same behavior as more without some extra formatting baggage and is the recommended choice.

### Setting Parameters and Invoking Tasks

Tasks require input parameters (sometimes called keywords). One can either 1) assign parameters, one at a time, from the CASA prompt, and then execute the task; 2) execute the task in one line, explicitly assigning each parameter within the line; or 3) use the position within the task call to specify the value of each parameter. For example, the following are equivalent:

# use parameter order for invoking tasks
plotxy('ngc5921.ms','channel','amp','data')
# specify parameter names for each
plotxy(vis='ngc5921.ms',xaxis='channel',yaxis='amp',datacolumn='data')
# when specifying the parameter name, order doesn't matter
plotxy(xaxis='channel',vis='ngc5921.ms',datacolumn='data',yaxis='amp')
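
In plain Python terms, positional and keyword invocations of a task are just ordinary function calls. A minimal sketch with a hypothetical stand-in for plotxy (not the real task, which only runs inside CASA) showing why all three styles above select the same values:

```python
# Hypothetical stand-in for the plotxy task: an ordinary Python
# function with four parameters.
def plotxy(vis, xaxis, yaxis, datacolumn):
    return (vis, xaxis, yaxis, datacolumn)

# use parameter order
a = plotxy('ngc5921.ms', 'channel', 'amp', 'data')
# specify parameter names for each
b = plotxy(vis='ngc5921.ms', xaxis='channel', yaxis='amp',
           datacolumn='data')
# when specifying the parameter name, order doesn't matter
c = plotxy(xaxis='channel', vis='ngc5921.ms', datacolumn='data',
           yaxis='amp')

print(a == b == c)  # True
```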


You can also set parameters by performing the assignment within the CASA shell and then inspecting them using the inp command.

CASA <4>: vis
Out[4]: False

CASA <5>: inp 'listobs'
--------> inp('listobs')

listobs -- List the observations in a data set:

vis         = "False"
verbose     =  False

CASA <6>: vis='ngc5921.ms'

CASA <7>: inp 'listobs'
--------> inp('listobs')

listobs -- List the observations in a data set:

vis         = "ngc5921.ms"
verbose     =  False


All task parameters have global scope within CASA, i.e., the parameter values are common to all tasks and to the CASA command line. This provides the convenience of not having to reset parameters that are shared between tasks, but it does require care when chaining together sequences of task invocations (to ensure proper values are provided).
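
As an illustration of this shared-parameter convenience (and its hazard), here is a minimal sketch in plain Python; the setpar, listobs, and gaincal names below are hypothetical stand-ins, not the real CASA implementations:

```python
# Hypothetical sketch: "tasks" read their inputs from one shared
# parameter store instead of taking arguments, mimicking CASA's
# globally scoped task parameters.
params = {'vis': '', 'verbose': False}

def setpar(**kwargs):
    """Assign one or more global parameters."""
    params.update(kwargs)

def listobs():
    # Uses whatever 'vis' is currently set to.
    return "listing " + params['vis']

def gaincal():
    # Shares the same 'vis' -- convenient, but a stale value left over
    # from an earlier task would silently be reused here.
    return "solving gains for " + params['vis']

setpar(vis='ngc5921.ms')
print(listobs())   # listing ngc5921.ms
print(gaincal())   # solving gains for ngc5921.ms
```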

To inspect a parameter value just type it at the command line:
CASA <16>: alg
Out[16]: 'clark'


Help on a parameter can be obtained by typing 'help par.parameter_name', e.g.,

CASA <17>: help par.alg
---------> help(par.alg)
Help on function alg in module parameter_dictionary:

alg()
Imaging algorithm to use. Options: 'clark','hogbom','csclean','multiscale','mfclark',
'mfhogbom','mfmultiscale':

Default: 'hogbom'


### Saving and Recalling Task Parameters

Parameters for a given task can be saved by using the saveinputs command and restored using the execfile 'filename' command.

CASA <1>: vis='ngc5921.ms'

CASA <2>: inp 'listobs'
--------> inp('listobs')

listobs -- List the observations in a data set:
vis         = "ngc5921.ms"
verbose     =  False

CASA <3>: saveinputs 'listobs' # defaults to 'taskname.saved'
--------> saveinputs('listobs')

CASA <4>: !more 'listobs.saved'
vis         = "ngc5921.ms"
verbose     =  False

CASA <5>: vis='someotherfile.ms'

CASA <6>: inp 'listobs'
--------> inp('listobs')

listobs -- List the observations in a data set:

vis         = "someotherfile.ms"
verbose     =  False

CASA <7>: execfile 'listobs.saved'
--------> execfile('listobs.saved')

CASA <8>: inp 'listobs'
--------> inp('listobs')

listobs -- List the observations in a data set:

vis         = "ngc5921.ms"
verbose     =  False

CASA <9>: saveinputs 'listobs','ngc5921_listobs.par'  #specify a file
--------> saveinputs('listobs','ngc5921_listobs.par')

CASA <10>: !more ngc5921_listobs.par
vis         = "ngc5921.ms"
verbose     =  False

CASA <11>:
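
Under the hood this round trip is ordinary Python: saveinputs writes plain assignment statements, and execfile simply executes them back into your session. A minimal sketch of the same mechanism, using the exec built-in (equivalent to what execfile does) and a temporary file:

```python
import os
import tempfile

# Current task parameters.
vis = 'ngc5921.ms'
verbose = False

# What saveinputs does, in spirit: write the parameters out as
# plain Python assignment statements.
saved = os.path.join(tempfile.mkdtemp(), 'listobs.saved')
with open(saved, 'w') as f:
    f.write('vis = %r\n' % vis)
    f.write('verbose = %r\n' % verbose)

# The parameter gets clobbered by later work...
vis = 'someotherfile.ms'

# ...and is restored by executing the saved assignments, which is
# all that execfile does.
exec(open(saved).read())
print(vis)  # ngc5921.ms
```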


#### VLA: Filling data from VLA archive format

VLA data in the on-line archive format are read into CASA from disk using the importvla task.

Note: autocorrelations are filled automatically into the data set. Autocorrelation data are not needed for standard interferometric imaging; furthermore, the imaging routine will try to image the autocorrelation data (it assumes it is single-dish data), which will swamp any real signal. Thus, it is necessary to flag the autocorrelation data (e.g., with the flagautocorr task) at some point before imaging.

#### Filling data from UVFITS format

For UVFITS format, use importuvfits.

### Data Examination, Editing, and Flagging

#### Plot the Data

The principal tool for X-Y plots of visibility data is plotxy. Currently amplitudes and phases can be plotted vs several x-axis options. Data selection is by antenna, spectral window, correlation, and field. plotxy is underlaid by the pl tool, and uses the matplotlib plotting library.

#### Flag the Data

flagdata will flag the visibility data set based on the specified data selections; most of the information needed to make these selections can be obtained from a prior run of the listobs task (with or without verbose=True).

##### flagxy

Interactive flagging (i.e., "see it - flag it") is possible on the X-Y displays of the data. flagxy is similar to plotxy but with built-in interactive editing capabilities. Since flags are inserted into the measurement set, it is useful to back up (or make a copy of) the measurement set before extensive flagging is done. Some useful hints on using flagxy:

1. Select the data to be flagged. Set xaxis = 'time' or 'uvdist'; yaxis = 'amp' is most useful.

1. Several plots on a page can be useful. For example, fieldid = -1; interaction = 'field_id';
subplot = 331; flagxy() will plot 9 fields.

1. When plots come up, use the cursor to specify flagged data by defining a simple box. Continue
as desired.

1. Do not fiddle with other plotting functions while flagging.

1. Type s in the CASA window to exit without applying any flags. The data that would have been
flagged will be displayed.

1. Type w in the CASA window to exit and apply all flags. This step is
not easily reversible.

### Calibration

The calibration tasks are as follows:

• setjy - Compute the model visibility for a specified source flux density
• gaincal - Time-based complex gain calibration solving; supports pre-apply of other calibrations
• fluxscale - Bootstrap the flux density scale from standard calibrators
• bandpass - bandpass calibration solving; supports pre-apply of other calibrations
• accum - Accumulate incremental calibration solutions into a cumulative cal table
• correct - Apply calculated calibration solutions

• clearcal - Re-initialize visibility data set calibration data
• plotcal - Plot calibration solutions
• smooth - Smooth calibration solutions derived from one or more sources

##### setjy

setjy places the specified flux density of a (point) source in the model column of the measurement set. setjy knows the flux density as a function of frequency for standard flux calibrators, and the value of the flux density can be inserted for any other source. If the source is not well-modeled as a point source, then the model column can be filled with a model generated from an image of the source (see task ft).

##### gaincal

gaincal determines solutions for the time-based complex antenna gains, for each spectral window, from the specified calibration sources. N.B.: the solutions are stored in a table (subdirectory) whose name is specified by the user, not automatically by the task; therefore care must be taken in naming the table for future use. A solution interval may be specified. For the VLA, antenna gain curves may be applied. A spline fit for the solution may be determined.

##### fluxscale

fluxscale bootstraps the flux density scale from standard calibrators to the so-called secondary or phase calibration sources. setjy must be run first to set the scale on one source, and of course a solutions table must be available.

##### bandpass

bandpass calculates a bandpass calibration solution; it can solve for gain variations in frequency as well as in time. However, the timescale of the time-based variations across the bandpass is usually much longer than that solved for by gaincal, so generally one uses a long solution interval when solving for the bandpass.

##### accum

accum applies an incremental solution table to a previous table, and writes out a cumulative solution table. The same caution as mentioned in gaincal about the naming of these tables applies to accum as well. Different interpolation schemes may be selected.

##### correct

The final step in the calibration process, correct may apply several calibration tables (e.g. gaincal, bandpass, pointing). The corrections are applied to the data column of the visibility, writing the corrected data column. Any existing corrected data is overwritten.

### Imaging Data

• invert - create a dirty image and psf
• clean - calculate a deconvolved image with a selected clean algorithm
• mosaic - calculate a multi-field deconvolved image with selected clean algorithm
• feather - combine a single dish and synthesis image in the Fourier plane

### Displaying Images

Here we describe how to display data with the casaviewer either as a stand-alone or through the viewer task. One can display both images and MeasurementSets.

The viewer will display images in raster, contour, or vector form. Blinking and movies are available for spectral-line image cubes.

Executing the viewer task will bring up two windows: a green viewer screen that can be enlarged, and a file catalog list. Click on an image or ms from the file catalog list, choose the proper display, and the image should pop up on the green screen. Clicking on the wrench tool (second from left on upper left) will bring up the data display options. Most functions are self-documenting.

The viewer can be run outside of casapy by typing casaviewer.

### accum

    Accumulate incremental calibration solutions into a cumulative calibration table:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
tablein -- Input (cumulative) calibration table
default: '' (create); example: tablein='ngc5921.gcal'
incrtable -- Input incremental calibration table name
default: ; example: incrtable='ngc5921_phase.gcal'

caltable -- Output calibration table (cumulative)
default: ; example: caltable='ngc5921.Cgcal'
field -- List of field names to update in input cumulative table
default: -1 (all); example: field='0957+561'

calfield  -- List of field names in incremental table to use.
default: -1 (all); example: calfield='0917+624'

interp -- Interpolation mode to use on incremental solutions
default: 'linear'; example: interp='nearest'

accumtime -- Cumulative table timescale when creating from scratch
default: -1.0; example: accumtime=5.0
spwmap -- Spectral windows to apply
default: [-1]; example: spwmap=[0,0,1,1]


### bandpass

    Calculate a bandpass calibration solution:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
caltable -- Name of output Gain calibration table
default: ; example: caltable='ngc5921.gcal'
mode -- Type of data selection
default: 'none' (all data); example: mode='channel'

nchan -- Number of channels to select (when mode='channel')
default: 0; example: nchan=45 (note: nchan=0 selects all)
start -- Start channel, 0-relative;active when mode='channel','velocity'
default=0; example: start=5,start='20km/s'
step -- Increment between channels;active when mode='channel','velocity'
default=1; example: step=1, step='1km/s'
msselect -- Optional subset of the data to select (field,spw,time,etc)
default:'';example:msselect='FIELD_ID==0',
msselect='FIELD_ID IN [0,1]', msselect='FIELD_ID <= 1'

--- Solve properties
solint --  Solution interval (sec)
default: 86400 (=long time); example: solint=60.
refant -- Reference antenna *name*
default: -1 (none); example: refant='13'
bandtype -- Type of bandpass solution (B or BPOLY)
default: 'B'; example: bandtype='BPOLY'
append -- Append solutions to the (existing) table
default: False; example: append=True
degamp -- Polynomial degree for amplitude solution
default: 3; example: degamp=2
degphase -- Polynomial degree for phase solution
default: 3; example: degphase=2
visnorm -- Normalize data prior to solution
default: False; example: visnorm=True
bpnorm -- Normalize result?
default: True; example: bpnorm=False
maskcenter -- Number of channels to avoid in center of each band
maskedge -- Fraction of channels to avoid at each band edge (in %)
--- Other calibration to pre-apply (weights are calibrated by default)
gaincurve -- Apply VLA antenna gain curve correction
default: True; example: gaincurve=False

opacity -- Apply opacity correction
default: True; example: opacity=False

tau -- Opacity value to apply (if opacity=True)
default:0.0001; example: tau=0.0005
gaintable -- Gain calibration solutions
default: ''; example: gaintable='ngc5921.gcal'
gainselect -- Select subset of calibration solutions from gaintable
default:''; example: gainselect='FIELD_ID==1'
bptable -- Bandpass calibration solutions
default: ''; example: bptable='ngc5921.bcal'
pointtable -- Pointing calibration solutions
default: ''; example: pointtable='ngc5921.ptcal'


### blcal

    Calculate a baseline-based calibration solution (gain or bandpass):

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
caltable -- Name of output Gain calibration table
default: ; example: caltable='ngc5921.gcal'
mode -- Type of data selection
default: 'none' (all data); example: mode='channel'

nchan -- Number of channels to select (when mode='channel')
default: 1; example: nchan=45 (note: nchan=0 selects all)
start -- Start channel, 0-relative;active when mode='channel','velocity'
default=0; example: start=5,start='20km/s'
step -- Increment between channels;active when mode='channel','velocity'
default=1; example: step=1, step='1km/s'
msselect -- Optional subset of the data to select (field,spw,time,etc)
default:'';example:msselect='FIELD_ID==0',
msselect='FIELD_ID IN [0,1]', msselect='FIELD_ID <= 1'

gaincurve -- Apply VLA antenna gain curve correction
default: True; example: gaincurve=False

opacity -- Apply opacity correction
default: True; example: opacity=False

tau -- Opacity value to apply (if opacity=True)
default:0.0001; example: tau=0.0005
gaintable -- Gain calibration solutions
default: ''; example: gaintable='ngc5921.gcal'
gainselect -- Select subset of calibration solutions from gaintable
default:''; example: gainselect='FIELD_ID==1'
solint --  Solution interval (sec)
default: 86400 (=long time); example: solint=60.
refant -- Reference antenna *name*
default: -1 (any integer=none); example: refant='13'
freqdep -- Solve for frequency dependent solutions
default: False (gain; True=bandpass); example: freqdep=True


### browsetable

    Browse a table (visibility data set or calibration table):

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'



### clean

    Calculate a deconvolved image with selected clean algorithm:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
imagename -- Name of output images: restored=imagename.restored,
residual=imagename.residual, model=imagename
default: ; example: imagename='ngc5921'
mode -- Type of selection
default: ; example: mode='channel';

alg -- Algorithm to use
default: 'hogbom'; example: alg='clark';

niter -- Number iterations, set to zero for no CLEANing
default: 500; example: niter=500
gain -- Loop gain for CLEANing
default: 0.1; example: gain=0.1
threshold -- Flux level at which to stop CLEANing (units=mJy)
default: 0.0; example: threshold=0.0
cleanbox -- List of [blc-x,blc-y,trc-x,trc-y] values
default: []; example: cleanbox=[110,110,150,145]
Note: This can also be a filename with clean values:
fieldindex blc-x blc-y trc-x trc-y
nchan -- Number of channels to select
default: 1; example: nchan=45
start -- Start channel, 0-relative
default=0; example: start=5
width -- Channel width (value > 1 indicates channel averaging)
default=1; example: width=5
step -- Step in channel number
default=1; example: step=2
imsize -- Image size in spatial pixels (x,y)
default = [256,256]; example: imsize=[512,512]
cell -- Cell size in arcseconds (x,y)
default=[1,1]; example: cell=[0.5,0.5]
stokes -- Stokes parameters to image
default='I'; example: stokes='IQUV';

fieldid -- Field index identifier
default=0; example: fieldid=1
field -- Field name(s); this will use a minimum match on the strings
that you provide; to use multiple strings, enter a single string
with spaces separating the field names
default: ''(will use fieldid); example: field='133',field='133 N5921'
spwid -- Spectral window index identifier
default=-1 (all); example: spwid=1
weighting -- Weighting to apply to visibilities
default='natural'; example: weighting='uniform';

rmode -- Robustness mode (for use with Briggs weighting)
default='none'; example='abs';

robust -- Briggs robustness parameter
default=0.0; example: robust=0.5;

uvfilter -- Apply additional filtering/uv tapering of the visibilities.
default=False; example: uvfilter=True
uvfilterbmaj -- Major axis of filter (arcseconds)
default=1.; example: uvfilterbmaj=12.
uvfilterbmin -- Minor axis of filter (arcseconds)
default=1.; example: uvfilterbmin=12.
uvfilterbpa -- Position angle of filter (degrees)
default=0.; example: uvfilterbpa=57.


### clearcal

    Re-initializes the calibration for a visibility data set; MODEL_DATA is set to unity (in total intensity
and unpolarized) and CORRECTED_DATA is set to the original (observed) DATA:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'



### contsub

    Continuum fitting and subtraction in the uv plane:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
fieldid -- Field index identifier
default=unset; example: fieldid=1
field -- Field name(s); this will use a minimum match on the strings
that you provide; to use multiple strings, enter a single string
with spaces separating the field names
default: ''(will use fieldid); example: field='133',field='133 N5921'
spwid -- Spectral window index identifier
default=0; example: spwid=1
channels -- Range of channels to fit
default:; example: channels=range(4,7)+range(50,60)
solint -- Averaging time (seconds)
default: 0.0 (scan-based); example: solint=10
fitorder -- Polynomial order for the fit
default: 0; example: fitorder=1
fitmode -- Use of the continuum fit model
default: 'subtract'; example: fitmode='replace'



### correct

    Apply calibration solution(s) to data:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
msselect -- Optional subset of the data to select (field,spw,time,etc)
default:'';example:msselect='FIELD_ID==0',
msselect='FIELD_ID IN [0,1]', msselect='FIELD_ID <= 1'

gaincurve -- Apply VLA antenna gain curve correction
default: True; example: gaincurve=False

opacity -- Apply opacity correction
default: True; example: opacity=False

tau -- Opacity value to apply (if opacity=True)
default:0.0001; example: tau=0.0005
gaintable -- Gain calibration solutions
default: ''; example: gaintable='ngc5921.gcal'
gainselect -- Select subset of calibration solutions from gaintable
default:''; example: gainselect='FIELD_ID==1'
bptable -- Bandpass calibration solutions
default: ''; example: bptable='ngc5921.bcal'
blbased -- Apply baseline-based solutions (from blcal)
default: False; example: blbased=True
pointtable -- Pointing calibration solutions
default: ''; example: pointtable='ngc5921.ptcal'
calwt -- Calibrate weights along with data
default: False; example: calwt=True


### exportuvfits

    Convert a CASA visibility data set (MS) to a UVFITS file:
The FITS file is always written in floating point format and the
data is always stored in the primary array of the FITS file.
By default, a single-source UVFITS file is written, but if the
MS contains more than one field or if you select the multi-source
argument=True, a multi-source UVFITS file will be written (you
must ensure that the data shape is fixed for all data due to the
limitations of the UVFITS format).

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='3C273XC1.ms'
fitsfile -- Name of output UV FITS file
default: ; example='3C273XC1.fits'
datacolumn -- Data column to write
default: 'corrected'; example: datacolumn='data'

fieldid -- Field index identifier
default=0; example: fieldid=1
field -- Field name(s); this will use a minimum match on the strings
that you provide; to use multiple strings, enter a single string
with spaces separating the field names
default: ''(will use fieldid); example: field='133',field='133 N5921'
spwid -- Spectral window index identifier
default=0; example: spwid=1
nchan -- Number of channels to select
default: 1; example: nchan=45
start -- Start channel, 0-relative
default=0; example: start=5
width -- Channel width (value > 1 indicates channel averaging)
default=1; example: width=5
writesyscal -- Write GC and TY tables
default=False; example: writesyscal=True
multisource -- Write in multi-source format
default=False; example: multisource=True
combinespw -- Handle spectral window as IF
default=False; example: combinespw=True
writestation -- Write station name instead of antenna name
default=False; example: writestation=True


### feather

    Feather together an interferometer and a single dish image in the Fourier plane:

Keyword arguments:
imagename -- Name of output feathered image
default: False; example: imagename='orion_combined'
highres -- Name of high resolution (interferometer) image
default: False; example: highres='orion_vla.im'
lowres -- Name of low resolution (single dish) image
default: False; example: lowres='orion_gbt.im'



### flagautocorr

    Flag autocorrelations (typically in a filled VLA data set):

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'



### flagdata

    Flag data based on selections:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
antennaid -- Antenna index identifier
default: [-1] (all); example: antennaid=3
baseline -- Baseline (composed of a list: [ant1,ant2])
default: [-1] (all); example: baseline=[2,3]
chans -- Channel range to clip
default: ; example: chans=range(0,4)+range(55,62)
Note: Remember in Python ranges go up to but not including
the last value (so the first range covers channels 0,1,2,3).
clipfunction -- Defines the function used in evaluating data for clipping
default: 'ABS'; example: clipfunction='RE'

clipcorr -- Defines the correlation(s) to clip
default: 'I'; example: clipcorr='RR'

clipminmax -- Sets the range of data values that will not be flagged
default: ; example: clipminmax=[0.0,1.5]
fieldid -- Field index identifier
default: -1 (all); example: fieldid=1
field -- Field name(s); this will use a minimum match on the strings
that you provide; to use multiple strings, enter a single string
with spaces separating the field names
default: ''(will use fieldid); example: field='133',field='133 N5921'
spwid -- Spectral window index identifier
default: -1 (all); example: spwid=0
timerange -- Time range selection
default: ; example: timerange=['26-APR-2003/02:45:00.0','26-APR-2003/02:49:00.0']
unflag -- Unflag the data (True/False)
default: False; example: unflag=True
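
The chans examples above rely on range() returning a list that can be concatenated with + (as it does in the Python 2 shipped with CASA-era installations). Written out explicitly, in a form that works in any Python version:

```python
# Build the channel selection 0-3 plus 55-61; remember that Python
# ranges stop just before their end value.
chans = list(range(0, 4)) + list(range(55, 62))
print(chans)  # [0, 1, 2, 3, 55, 56, 57, 58, 59, 60, 61]
```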


### flagxy

    Plot points for flagging selected X and Y axes; if region is not specified, it enables interactive setting of
flag boxes using the mouse. Use the task flagdata to write these flags to disk.

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
xaxis -- Visibility file (MS) data to plot along the x-axis

yaxis -- Visibility data to plot along the y-axis

datacolumn -- Visibility file (MS) data column.
default: 'corrected'; example: datacolumn='model'

antennaid -- Antenna index identifier
default: -1 (all); example: antennaid=[13]
spwid -- Spectral window index identifier
default: -1 (all); example: spwid=0
fieldid -- Field index identifier
default: -1 (all); example: fieldid=0
field -- Field name(s); this will use a minimum match on the strings
that you provide; to use multiple strings, enter a single string
with spaces separating the field names
default: ''(will use fieldid); example: field='133',field='133 N5921'
timerange
correlations -- Correlations to plot
default: '' (all); example: correlations='RR'

--- Spectral Information ---
nchan -- Number of channels to select
default: -1 (all); example: nchan=55
start -- Start channel (0-relative)
default=0; example: start=5
width -- Channel width (value>0 indicates averaging)
default=1; example: width=5
--- Plot Options ---
subplot -- Panel number on the display screen
default: 111 (full screen display); example:
subplot=221 (upper left panel of 4 on frame)
subplot=223 (lower left panel of 4 on frame)
subplot=212 (lower panel of 2 on frame)

overplot -- Overplot these values on current plot (if possible)
default: False; example: overplot=True
plotsymbol -- pylab plot symbol
default: ','; example: plotsymbol='bo' (blue circles)

fontsize -- Font size for labels
default: 1; example: fontsize=2
windowsize -- Window size
default: 1.0; example: windowsize=0.5
--- Flag Options ---
region -- Flagging region specified as [xmin,xmax,ymin,ymax]
default: [0.0] = mark region interactively
example: region=[53,60,0.0,2.0]
diskwrite -- Write flags to disk or just display them
Note: This parameter is only active when region is set.
default: False; example: diskwrite=True


### fluxscale

    Bootstrap the flux density scale from standard calibrators:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
caltable -- Name of input calibration table
default: ; example: caltable='ngc5921.gcal'
fluxtable -- Name of output, flux-scaled calibration table
default: ; example: fluxtable='ngc5921.gcal2'
reference -- Reference field name (transfer flux scale from)
default: ; example: reference='1328+307'
transfer -- Transfer field name(s)
default: -1 (all); example: transfer=['1445+09900002']



### fringecal

    Calculate a baseline-based fringe-fitting solution (phase, delay, delay-rate); Note: currently this is limited
to delays and delay-rates in the central ambiguity:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
caltable -- Name of output Gain calibration table
default: ; example: caltable='ngc5921.gcal'
mode -- Type of data selection
default: 'none' (all data); example: mode='channel'

nchan -- Number of channels to select (when mode='channel')
default: 1; example: nchan=45 (note: nchan=0 selects all)
start -- Start channel, 0-relative; active when mode='channel','velocity'
default=0; example: start=5, start='20km/s'
step -- Increment between channels; active when mode='channel','velocity'
default=1; example: step=1, step='1km/s'
msselect -- Optional subset of the data to select (field,spw,time,etc)
default: ''; example: msselect='FIELD_ID==0',
msselect='FIELD_ID IN [0,1]', msselect='FIELD_ID <= 1'

gaincurve -- Apply VLA antenna gain curve correction
default: False; example: gaincurve=False

opacity -- Apply opacity correction
default: False; example: opacity=False

tau -- Opacity value to apply (if opacity=True)
default:0.0001; example: tau=0.0005
solint --  Solution interval (sec)
default: 0; example: solint=60.
refant -- Reference antenna *name*
default: -1 (any integer=none); example: refant='13'


### ft

    Fourier transform the specified model (or component list) and insert it into the MODEL_DATA column of the specified visibility data set:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
model -- Name of input model image
default: ''; example: model='3C273XC1.model'
complist -- Name of component list
default: ''; example: complist='test.cl'
incremental -- Add to the existing MODEL_DATA column?
default: False; example: incremental=True
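What ft computes for a single point-source component can be sketched directly from the interferometric relation V(u,v) = S·exp(−2πi(ul+vm)); this is a conceptual illustration, not the CASA implementation:

```python
import numpy as np

# Conceptual illustration (not the CASA implementation): the model
# visibility of a point source of flux S at direction-cosine offset (l, m)
# on a baseline (u, v), in wavelengths, is V = S * exp(-2*pi*i*(u*l + v*m)).
def point_source_vis(S, l, m, u, v):
    return S * np.exp(-2j * np.pi * (u * l + v * m))

# A source at the phase center (l = m = 0) gives a constant, real visibility:
V = point_source_vis(1.5, 0.0, 0.0, u=1000.0, v=-500.0)
print(abs(V))   # 1.5
```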



### gaincal

    Solve for gain calibration:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
caltable -- Name of output Gain calibration table
default: ; example: caltable='ngc5921.gcal'
mode -- Type of data selection
default: 'none' (all data); example: mode='channel'

uvrange -- Optional subset of the uv range (kilo lambda)
default=[0] (all); example: uvrange=[0,15]
--- Solve properties
solint --  Solution interval (sec)
default: -1 (scan based); example: solint=60.
refant -- Reference antenna *name*
default: -1 (any integer=none); example: refant='13'
gaintype -- Type of gain solution (G, T, or GSPLINE)
default: 'G'; example: gaintype='GSPLINE'
calmode -- Type of solution
default: 'ap' (amp and phase); example: calmode='p'

append -- Append solutions to the (existing) table
default: False; example: append=True
splinetime -- Spline timescale (sec); used for gaintype='GSPLINE'
default: 10800; example: splinetime=5000
npointaver -- Tune phase-unwrapping algorithm for gaintype='GSPLINE'
default: 10; example: npointaver=5
phasewrap -- Wrap the phase when differences greater than this are seen (degrees)
Used for gaintype='GSPLINE'
default: 250; example: phasewrap=200
--- Other calibration to pre-apply (weights are calibrated by default)
gaincurve -- Apply VLA antenna gain curve correction
default: True; example: gaincurve=False

opacity -- Apply opacity correction
default: True; example: opacity=False

tau -- Opacity value to apply (if opacity=True)
default:0.0001; example: tau=0.0005
gaintable -- Gain calibration solutions
default: ''; example: gaintable='ngc5921_phase.gcal'
bptable -- Bandpass calibration solutions
default: ''; example: bptable='ngc5921.bcal'
pointtable -- Pointing calibration solutions
default: ''; example: pointtable='ngc5921.ptcal'


### importvla

    Convert VLA archive file(s) to a CASA visibility data set (MS):

Keyword arguments:
archivefiles -- Name of input VLA archive file(s)
default:
example: archivefiles=['AP314_A950519.xp1','AP314_A950519.xp2']
vis -- Name of output visibility file (MS)
default: ; example: vis='NGC7538.ms'
bandname -- VLA Frequency band
default: '' (all bands); example: bandname='K'

freqtol -- Tolerance in frequency shift in naming spectral windows
default: channel width of current spectral window in Hz
example: 150000.0



### importasdm

    Convert an ALMA Science Data Model observation into a CASA visibility file (MS)

Keyword arguments:
asdm -- Name of input ASDM file (directory)
default: ; example: asdm='ExecBlock3'


### importuvfits

    Convert a UVFITS file to a CASA visibility data set (MS):

Keyword arguments:
fitsfile -- Name of input UV FITS file
default: ; example='3C273XC1.fits'
vis -- Name of output visibility file (MS)
default: ; example: vis='3C273XC1.ms'



### invert

    Calculate the dirty image and dirty beam:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
imagename -- Name of output images: restored=imagename.restored;
residual=imagename.residual,model=imagename
default: ; example: imagename='ngc5921'
mode -- Type of selection
default: ; example: mode='channel';

nchan -- Number of channels to select
default: 1; example: nchan=45
start -- Start channel, 0-relative
default=0; example: start=5
width -- Channel width (value > 1 indicates channel averaging)
default=1; example: width=5
step -- Step in channel number
default=1; example: step=2
imsize -- Image size in spatial pixels (x,y)
default = [256,256]; example: imsize=[512,512]
cell -- Cell size in arcseconds (x,y)
default=[1,1]; example: cell=[0.5,0.5]
stokes -- Stokes parameters to image
default='I'; example: stokes='IQUV';

fieldid -- Field index identifier
default=0; example: fieldid=1
field -- Field name(s); this will use a minimum match on the strings
that you provide; to use multiple strings, enter a single string
with spaces separating the field names
default: ''(will use fieldid); example: field='133',field='133 N5921'
spwid -- Spectral window index identifier
default=0; example: spwid=1
weighting -- Weighting to apply to visibilities
default='natural'; example: weighting='uniform';

rmode -- Robustness mode (for use with Briggs weighting)
default='none'; example='abs';

robust -- Briggs robustness parameter
default=0.0; example: robust=0.5;



### listhistory

    List the processing history of a dataset:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'



### listobs

    List the observations in a dataset:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
verbose -- List each observation in addition to the summary
default=False; example: verbose=True



### makemask

    Derive a mask image from another image, or from a visibility data set and a set of imaging parameters:

Keyword arguments:
image -- Name of input image

interactive -- Indicate whether you want to view the image and
interactively select regions.
default: False; example: interactive=True

cleanbox -- List of [blc-x,blc-y,trc-x,trc-y] values
default: []; example: cleanbox=[110,110,150,145]
Note: This can also be a filename with clean values:
fieldindex blc-x blc-y trc-x trc-y
expr -- An expression specifying the mask
default: ''; example: expr='somestring'

----------------------------------------------------------------
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
imagename -- Name of output mask image
mode -- Type of selection
default: ; example: mode='channel';

nchan -- Number of channels to select
default: 1; example: nchan=45
start -- Start channel, 0-relative
default=0; example: start=5
width -- Channel width (value > 1 indicates channel averaging)
default=1; example: width=5
step -- Step in channel number
default=1; example: step=2
imsize -- Image size in spatial pixels (x,y)
default = [256,256]; example: imsize=[512,512]
cell -- Cell size in arcseconds (x,y)
default=[1,1]; example: cell=[0.5,0.5]
stokes -- Stokes parameters to image
default='I'; example: stokes='IQUV';

fieldid -- Field index identifier
default=0; example: fieldid=1
field -- Field name(s); this will use a minimum match on the strings
that you provide; to use multiple strings, enter a single string
with spaces separating the field names
default: ''(will use fieldid); example: field='133',field='133 N5921'
spwid -- Spectral window index identifier
default=0; example: spwid=1
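The cleanbox specification maps onto a mask image in a straightforward way; a minimal NumPy sketch (the exact pixel-edge convention used by CASA should be verified):

```python
import numpy as np

# Minimal sketch of the mask implied by cleanbox=[110, 110, 150, 145];
# the trc pixel is treated as inclusive here (CASA's exact edge convention
# should be verified).
imsize = (256, 256)
blc_x, blc_y, trc_x, trc_y = 110, 110, 150, 145

mask = np.zeros(imsize)
mask[blc_y:trc_y + 1, blc_x:trc_x + 1] = 1.0   # rows are y, columns are x
print(int(mask.sum()))   # 41 * 36 = 1476 pixels inside the box
```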


### mosaic

    Calculate a multi-field deconvolved image with selected clean algorithm:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
imagename -- Name of output images: restored=imagename.restored;
residual=imagename.residual,model=imagename
default: ; example: imagename='ngc5921'
mode -- Type of selection
default: 'none' (all data); example: mode='channel'

mfalg -- Algorithm to use
default: 'mfclark'; example: mfalg='mfhogbom';

niter -- Number iterations, set to zero for no CLEANing
default: 500; example: niter=500
gain -- Loop gain for CLEANing
default: 0.1; example: gain=0.1
threshold -- Flux level at which to stop CLEANing (units=mJy)
default: 0.0; example: threshold=0.0
nchan -- Number of channels to select
default: 1; example: nchan=45
start -- Start channel, 0-relative
default=0; example: start=5
width -- Channel width (value > 1 indicates channel averaging)
default=1; example: width=5
step -- Step in channel number
default=1; example: step=2
imsize -- Image size in spatial pixels (x,y)
default = [256,256]; example: imsize=[512,512]
cell -- Cell size in arcseconds (x,y)
default=[1,1]; example: cell=[0.5,0.5]
stokes -- Stokes parameters to image
default='I'; example: stokes='IQUV';

fieldid -- Field index identifier
default=0; example: fieldid=1
field -- Field name(s); this will use a minimum match on the strings
that you provide; to use multiple strings, enter a single string
with spaces separating the field names
default: ''(will use fieldid); example: field='133',field='133 N5921'
phasecenter -- Phase center (field identifier or direction string)
default=0; example: phasecenter=62

spwid -- Spectral window index identifier
default=0; example: spwid=1
weighting -- Weighting to apply to visibilities
default='natural'; example: weighting='uniform';

rmode -- Robustness mode (for use with Briggs weighting)
default='none'; example='abs';

robust -- Briggs robustness parameter
default=0.0; example: robust=0.5;

scaletype -- Image plane flux scale type
default='NONE'; example: scaletype='SAULT'

constpb -- In Sault weighting, the flux scale is constant above this PB level
default=0.4; example: constpb=0.3
minpb -- Minimum PB level to use
default=0.1; example: minpb=0.15


### plotants

    Plot the antenna distribution in local reference frame; X-toward local east; Y-toward local north:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'



### plotcal

        Plot calibration solutions:

Keyword arguments:
tablein -- Name of input calibration table
default: ; example: tablein='ngc5921.gcal'
yaxis -- Visibility data to plot along the y-axis

antennaid -- Antenna index identifier(s)
default: -1 (all); example: antennaid=[13]
caldescid -- Calibration data description ID (combination of SPW and polarization)
default: -1 (all); example: caldescid=0
--- Plot Options ---
nxpanel -- Panel number in the x-direction
default: 1; example: nxpanel=2
nypanel -- Panel number in the y-direction
default: 1; example: nypanel=2
multiplot -- Plot data (e.g., from different antennas) on separate plots
default: False; example: multiplot=True


### plotxy

    Plot points for selected X and Y axes:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
xaxis -- Visibility file (MS) data to plot along the x-axis

yaxis -- Visibility data to plot along the y-axis

datacolumn -- Visibility file (MS) data column.
default: 'corrected'; example: datacolumn='model'

antennaid -- Antenna index identifier
default: -1 (all); example: antennaid=[13]
spwid -- Spectral window index identifier
default: -1 (all); example: spwid=0
fieldid -- Field index identifier
default: -1 (all); example: fieldid=0
field -- Field name(s); this will use a minimum match on the strings
that you provide; to use multiple strings, enter a single string
with spaces separating the field names
default: ''(will use fieldid); example: field='133',field='133 N5921'
timerange -- Select time range subset of data
default: '' (all)
correlations -- Correlations to plot
default: '' (all); example: correlations='RR'

--- Spectral Information ---
nchan -- Number of channels to select
default: -1 (all); example: nchan=55
start -- Start channel (0-relative)
default=0; example: start=5
width -- Channel width (value>0 indicates averaging)
default=1; example: width=5
--- Plot Options ---
subplot -- Panel number on the display screen
default: 111 (full screen display); example:
subplot=221 (upper left panel of 4 on frame)
subplot=223 (lower left panel of 4 on frame)
subplot=212 (lower panel of 2 on frame)

overplot -- Overplot these values on current plot (if possible)
default: False; example: overplot=True
plotsymbol -- pylab plot symbol
default: ','; example: plotsymbol='bo' (blue circles)

fontsize -- Font size for labels
default: 1; example: fontsize=2
windowsize -- Window size
default: 1.0; example: windowsize=0.5


### pointcal

    Solve for pointing error calibration:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
model -- Name of input model (component list or image)
default: ; example: model='ngc5921.im'
caltable -- Name of output Pointing calibration table
default: ; example: caltable='ngc5921.gcal'
mode -- Type of data selection
default: 'none' (all data); example: mode='channel'

gaincurve -- Apply VLA antenna gain curve correction
default: True; example: gaincurve=False

opacity -- Apply opacity correction
default: True; example: opacity=False

tau -- Opacity value to apply (if opacity=True)
default:0.0001; example: tau=0.0005
solint --  Solution interval (sec)
default: -1 (scan based); example: solint=60.
refant -- Reference antenna *name*
default: -1 (any integer=none); example: refant='13'


### smooth

        Smooth calibration solution(s) derived from one or more sources:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
tablein -- Input calibration table (any type)
default: ; example: tablein='ngc5921.gcal'
caltable -- Output calibration table (smoothed)
default: ; example: caltable='ngc5921_smooth.gcal'
append -- Append solutions to an existing calibration table?
default: False; example: append=True
calselect -- Optional subset of calibration data to select (e.g., field)
default: ''; example: calselect='FIELD_NAME=FIELD1'
smoothtype -- The smoothing filter to be used
default: 'mean'; example: smoothtype='median'

smoothtime -- Smoothing filter time (sec)
default: 0.0; example: smoothtime=10.
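The effect of smoothtype='median' can be pictured as a sliding median over the gain-solution time series; a toy sketch with invented values and a 3-sample window (smooth's actual window is set by smoothtime in seconds):

```python
import numpy as np

# Toy sketch (invented values, not the CASA implementation) of a median
# smoothing filter applied to a gain-amplitude time series; the window
# here is 3 samples, whereas smooth's window is smoothtime in seconds.
gains = np.array([1.0, 1.1, 5.0, 1.2, 1.0])   # 5.0 is an outlier solution
half = 1                                       # 3-sample window
smoothed = [float(np.median(gains[max(0, i - half):i + half + 1]))
            for i in range(len(gains))]
print(smoothed)   # [1.05, 1.1, 1.2, 1.2, 1.1] -- the outlier is suppressed
```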


### setjy

    Compute the model visibility for a specified source flux density:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
fieldid -- Field index identifier (0-based)
default=-1 (all); example: fieldid=1
field -- Field name(s); this will use a minimum match on the strings
that you provide; to use multiple strings, enter a single string
with spaces separating the field names
default: ''(will use fieldid); example: field='133',field='133 N5921'
spwid -- Spectral window index identifier (0-based)
default=-1 (all); example: spwid=1
fluxdensity -- Specified flux density (I,Q,U,V) in Jy
example: fluxdensity=[2.6,0.2,0.3,0.5]
standard -- Flux density standard
default: 'Perley-Taylor 99'; example: standard='Baars'



### split

    Create a new data set (MS) from a subset of an existing data set (MS):

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
outputvis -- Name of output visibility file (MS)
default: ; example: outputvis='ngc5921_src.ms'
fieldid -- Field index identifier
default=-1 (all); example: fieldid=1
field -- Field name(s); this will use a minimum match on the strings
that you provide; to use multiple strings, enter a single string
with spaces separating the field names
default: ''(will use fieldid); example: field='133',field='133 N5921'
spwid -- Spectral window index identifier
default=-1 (all); example: spwid=1
nchan -- Number of channels to select
default:-1 (all); example: nchan=45
start -- Start channel, 0-relative
default=0; example: start=5
step -- Step in channel number
default=1; example: step=2
timebin -- Value for time averaging
default='-1s' (no averaging); example: timebin='30s'
timerange -- Select time range subset of data
default='';
example: timerange='YYYY/MM/DD/HH:MM:SS.sss'
timerange='< YYYY/MM/DD/HH:MM:SS.sss'
timerange='> YYYY/MM/DD/HH:MM:SS.sss'
timerange='ddd/HH:MM:SS.sss'
timerange='< ddd/HH:MM:SS.sss'
timerange='> ddd/HH:MM:SS.sss'
datacolumn -- Which data set column to split out
default='corrected'; example: datacolumn='data'



### uvmodelfit

    Fit a single component source model to the uv data:

Keyword arguments:
vis -- Name of input visibility file (MS)
default: ; example: vis='ngc5921.ms'
niter -- Number of fitting iterations to execute
default: 0; example: niter=5
comptype -- component model type
default: 'P'; example: comptype='G'

sourcepar -- Starting guess for component parameters (flux,xoffset,yoffset)
default: [1,0,0]; example: sourcepar=[2.5,0.3,0.1]
Note: Flux is in Jy, xoffset is in arcsec, yoffset is in arcsec.
fixpar -- Control which parameters to let vary in the fit
default: [] (all vary); example: fixpar=[False,True,True]
(this would fix the flux to that set in sourcepar but allow the
x and y offset positions to be fit).
file -- Optional output component list table
default: ''; example: file='componentlist.cl'
--- Data Selection ---
mode -- Type of data selection
default: 'none' (all data); example: mode='channel'
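For comptype='P' with the component at the phase center, the model visibility is simply the flux S, so the fit reduces to a least-squares estimate of S from the data; a toy sketch with simulated noisy visibilities (not the CASA solver):

```python
import numpy as np

# Toy sketch (not the CASA solver): for comptype='P' with the component at
# the phase center, the model visibility is just the flux S, so a
# least-squares fit of S to noisy visibilities is the mean of the real parts.
rng = np.random.default_rng(0)
S_true = 2.5
noise = 0.01 * (rng.standard_normal(100) + 1j * rng.standard_normal(100))
vis = S_true + noise

S_fit = vis.real.mean()
print(bool(abs(S_fit - S_true) < 0.01))   # True -- the noise averages down
```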



### viewer

    View an image or visibility data set:

Keyword arguments:
imagename -- Name of file to visualize
default: ; example: imagename='ngc5921.image'



## Annotated Example Scripts

Note: The NGC 5921 data set is available with the full CASA rpm distribution. Other data sets can be made available upon request. These scripts are intended to illustrate the types of commands needed for different types of reduction/astronomical observations.

### NGC 5921 -- VLA -- red-shifted HI emission

Note: This script does not include self-calibration steps.

importuvfits(fitsfile='/aips++/data/nrao/VLA/ngc5921.fits', # input UVFITS file
vis='ngc5921.ms')                              # output data set (MS)
flagautocorr(vis='ngc5921.ms')                              # Flag autocorrelations
setjy(vis='ngc5921.ms',                              # Set flux density  to 1331+305 (3C 286)
fieldid=0)
gaincal(vis='ngc5921.ms',                          # Derive gain calibration solutions
caltable='ngc5921.gcal',                    # Write solutions to this file
mode='channel',                             # Select data; drop out channels that
nchan=55,                                   # may bias the solution
start=2,                                    # Select data to be used for solving
msselect='FIELD_ID <= 1',                   # Fields 0,1 are the gain, bandpass cals
tau=0.0001,                                 # Correct opacity for each time
solint=0.,                                  # Solve for each scan
refant=14)                                  # Choose well behaved antenna
bandpass(vis='ngc5921.ms',                         # Derive bandpass calibration solution
caltable='ngc5921.bcal',                    # Write solutions to this file
msselect='FIELD_ID==0',                     # Select bandpass calibrator (3C286)
tau=0.0001,                                 # Correct opacity with specified value
gaintable='ngc5921.gcal',                   # Apply gain solutions from this table
refant=14,                                  # Use this reference antenna
nchan=0,                                    # Set nchan back to default
start=0,                                    # Set start back to default
step=1,                                     # Set step back to default
solint=86400.0)                             # Set interval to be entire observation
fluxscale(vis='ngc5921.ms',                        # Transfer the flux density scale
caltable='ngc5921.gcal',                    # Use the gain solutions from this table
fluxtable='ngc5921.fluxscale',              # Write the scaled solutions to this table
reference=['1331+30500002'],                # Transfer flux density scale from 3C286
transfer=['1445+09900002'])                 # to the gain calibrator
correct(vis='ngc5921.ms',                          # Correct the target source data
msselect='FIELD_ID IN [1,2]',               # Select sources to which calibration is applied
tau=0.0001,                                 # Correct opacity with specified value
gaintable='ngc5921.fluxscale',              # Apply flux-scaled gain solutions
gainselect='FIELD_ID==1',                   # from the gain calibrator
bptable='ngc5921.bcal')                     # Apply bandpass calibration solutions
split(vis='ngc5921.ms',                            # Split out calibrated data - cal
outputvis='ngc5921_cal.split.ms',           # Write data to this file
fieldid=1,                                  # Choose calibrater
spwid=0,                                    # Choose spectral window
nchan=63,                                   #   and channels to write out
start=0,
step=1,
datacolumn='CORRECTED_DATA')                # Choose calibrated data
contsub(vis='ngc5921.ms',                          # Subtract the continuum in the uv plane
fieldid=2,                                  # Choose NGC 5921 (fieldid=2)
spwid=0,                                    # Choose the (only) spectral window
channels=range(4,7)+range(50,60),           # line free channels 4-6 and 50-59
solint=0.0,                                 # use scan-averaging
fitmode='subtract')                         # subtract continuum from line data
# line-only data is written to CORRECTED_DATA
split(vis='ngc5921.ms',                            # Split out calibrated data - src
outputvis='ngc5921_src.split.ms',
fieldid=2,
spwid=0,
nchan=46,
start=5,
step=1,
datacolumn='CORRECTED_DATA')
clean(vis='ngc5921_src.split.ms',                  # Use split out calibrated data
imagename='ngc5921_task',                   # Specify name of output:
mode='channel',                             # Imaging parameters
alg='hogbom',                               # Use Hogbom clean
niter=6000,                                 # number of iterations
gain=0.1,
threshold=8.,                               # clean down to a threshold of 8 mJy
nchan=46,
start=0,
step=1,
fieldid=0,
imsize=[256,256],
cell=[15.,15.],                             # cell size in arcseconds
weighting='briggs',                         # Use robust weighting
rmode='norm',
robust=0.5)
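
The contsub call above selects its line-free channels with the ordinary Python expression range(4,7)+range(50,60). In the Python 2 underlying CASA, range() returns a list, so the two ranges concatenate with +; the Python 3 equivalent needs explicit lists:

```python
# Equivalent, in modern Python, of the Python 2 expression
# channels=range(4,7)+range(50,60) used in the contsub call:
channels = list(range(4, 7)) + list(range(50, 60))
print(channels)   # [4, 5, 6, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]
```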


#### NGC 5921 data summary

Summary created with ms.summary(verbose=T):

This is written to the casalog (which is just a 'tail -f' on the casapy.log file).

           MeasurementSet Name:  ngc5921.ms      MS Version 2

Observer: TEST     Project:
Observation: VLA
Data records: 22653       Total integration time = 5280 seconds
Observed from   13-Apr-1995/09:19:00   to   13-Apr-1995/10:47:00

ObservationID = 1         ArrayID = 1
13-Apr-1995/09:19:00.0 - 09:24:30.0     1  1331+30500002  [1]
09:27:30.0 - 09:29:30.0     2  1445+09900002  [1]
09:33:00.0 - 09:48:00.0     3  N5921          [1]
09:50:30.0 - 09:51:00.0     4  1445+09900002  [1]
10:22:00.0 - 10:23:00.0     5  1445+09900002  [1]
10:26:00.0 - 10:43:00.0     6  N5921          [1]
10:45:30.0 - 10:47:00.0     7  1445+09900002  [1]
Fields: 3
ID   Name          Right Ascension  Declination   Epoch
1    1331+30500002 13:31:08.29      +30.30.32.96  J2000   ** Flux/bandpass cal, 3c 286
(J2000 name & coords.)
2    1445+09900002 14:45:16.47      +09.58.36.07  J2000   ** Gain calibrator
3    N5921         15:22:00.00      +05.04.00.00  J2000   ** Source

Data descriptions: 1 (1 spectral windows and 1 polarization setups)
ID    Ref.Freq    #Chans  Resolution  TotalBW     Correlations
1     1413.45MHz  63      24.4141kHz  1550.2 kHz  RR  LL
Feeds: 28: printing first row only
Antenna   Spectral Window     # Receptors    Polarizations
1         -1                  2              [         R, L]
Antennas: 27:
ID   Name  Station   Diam.    Long.         Lat.
1    1     VLA:N7    25.0 m   -107.37.07.2  +33.54.12.9
2    2     VLA:W1    25.0 m   -107.37.05.9  +33.54.00.5
3    3     VLA:W2    25.0 m   -107.37.07.4  +33.54.00.9
4    4     VLA:E1    25.0 m   -107.37.05.7  +33.53.59.2
5    5     VLA:E3    25.0 m   -107.37.02.8  +33.54.00.5
6    6     VLA:E9    25.0 m   -107.36.45.1  +33.53.53.6
7    7     VLA:E6    25.0 m   -107.36.55.6  +33.53.57.7
8    8     VLA:W8    25.0 m   -107.37.21.6  +33.53.53.0
9    9     VLA:N5    25.0 m   -107.37.06.7  +33.54.08.0
10   10    VLA:W3    25.0 m   -107.37.08.9  +33.54.00.1
11   11    VLA:N4    25.0 m   -107.37.06.5  +33.54.06.1
12   12    VLA:W5    25.0 m   -107.37.13.0  +33.53.57.8
13   13    VLA:N3    25.0 m   -107.37.06.3  +33.54.04.8
14   14    VLA:N1    25.0 m   -107.37.06.0  +33.54.01.8
15   15    VLA:N2    25.0 m   -107.37.06.2  +33.54.03.5
16   16    VLA:E7    25.0 m   -107.36.52.4  +33.53.56.5
17   17    VLA:E8    25.0 m   -107.36.48.9  +33.53.55.1
18   18    VLA:W4    25.0 m   -107.37.10.8  +33.53.59.1
19   19    VLA:E5    25.0 m   -107.36.58.4  +33.53.58.8
20   20    VLA:W9    25.0 m   -107.37.25.1  +33.53.51.0
21   21    VLA:W6    25.0 m   -107.37.15.6  +33.53.56.4
22   22    VLA:E4    25.0 m   -107.37.00.8  +33.53.59.7
24   24    VLA:E2    25.0 m   -107.37.04.4  +33.54.01.1
25   25    VLA:N6    25.0 m   -107.37.06.9  +33.54.10.3
26   26    VLA:N9    25.0 m   -107.37.07.8  +33.54.19.0
27   27    VLA:N8    25.0 m   -107.37.07.5  +33.54.15.8
28   28    VLA:W7    25.0 m   -107.37.18.4  +33.53.54.8

Tables:
MAIN                   22653 rows
ANTENNA                   28 rows
DATA_DESCRIPTION           1 row
DOPPLER             <absent>
FEED                      28 rows
FIELD                      3 rows
FLAG_CMD             <empty>
FREQ_OFFSET         <absent>
HISTORY                  270 rows
OBSERVATION                1 row
POINTING                 168 rows
POLARIZATION               1 row
PROCESSOR            <empty>
SOURCE                     3 rows
SPECTRAL_WINDOW            1 row
STATE                <empty>
SYSCAL              <absent>
WEATHER             <absent>


## Appendix I. --- Obtaining and Installing CASA

### Installation Script

The easiest way to install CASA on a RedHat Enterprise Linux (or compatible) system is to use our installation script, load-casapy. This script will ftp the CASA RPMs and install them. To use it, first download it to your hard disk, then make sure execute permission is set on the file.

Install CASA into /usr by logging in as root and running:

This option installs CASA into /usr, but the installation can only be performed by the root user.

See the following for more details:

https://safe.nrao.edu/wiki/bin/view/Software/ObtainingCASA

### Startup

This section assumes that CASA has been installed on your LINUX or OSX system. _For NRAO-AOC testers, you should do the following on an AOC RHE4 machine:_

> . /home/casa/casainit.sh
or
> source /home/casa/casainit.csh


## Appendix II. ---- Python and CASA

CASA uses Python, IPython and matplotlib within the package.

"Python is an interpreted, interactive, object-oriented programming language" (from www.python.org); it is used as the underlying command line interface/scripting language to CASA . Note: since Python is inherently 0-based in its indexing of arrays, vectors, etc, CASA is also 0-based; any Index inputs (e.g., start (for start channel), fieldIndex, antennaID, etc) will start with 0.

IPython is an enhanced, interactive shell to Python which provides many features for efficient command line interaction.

matplotlib is a Python 2-D plotting library for publication quality figures in different hardcopy formats.

Each of these components behaves in the standard way within CASA. In the following sections, we outline the key elements for analysis interactions; see the IPython page for the full suite of functionality.

### In-line help

• TAB key

At any time, hitting the TAB key will complete any available commands or variable names and show you a list of the possible completions if there's no unambiguous result. It will also complete filenames in the current directory if no CASA or Python names match.

For example, it can be used to list the available functionality using minimum match; once you have typed enough characters to make the command unique, the TAB key will complete it.

CASA <15>: cle<TAB>
clean     clear     clearcal


• help (with no arguments) - native Python help facility

Typing 'help' will bring up the default Python help and give you the help> prompt for further information; hitting Return at the help prompt returns you to the CASA prompt. You can also get the short help for a CASA method by typing 'help tool.method' or 'help task'.

CASA [2]: help
---------> help()

If this is your first time using Python, you should definitely check out
the tutorial on the Internet at http://www.python.org/doc/tut/.

Enter the name of any module, keyword, or topic to get help on writing
Python programs and using Python modules.  To quit this help utility and
return to the interpreter, just type "quit".
To get a list of available modules, keywords, or topics, type "modules",
"keywords", or "topics".  Each module also comes with a one-line summary
of what it does; to list the modules whose summaries contain a given word
such as "spam", type "modules spam".
help> keywords

Here is a list of the Python keywords.  Enter any keyword to get more help.

and                 else                import              raise
assert              except              in                  return
break               exec                is                  try
class               finally             lambda              while
continue            for                 not                 yield
def                 from                or
del                 global              pass
elif                if                  print

help>

You are now leaving help and returning to the Python interpreter.
If you want to ask for help on a particular object directly from the
interpreter, you can type "help(object)".  Executing "help('string')"
has the same effect as typing a particular string at the help> prompt.


### Automatic parentheses

Automatic parentheses are enabled for calling functions with an argument list; this feature is intended to reduce typing in common situations. IPython displays its interpretation of the line beneath the one typed, marked by the '-------->'. The default behavior in CASA is to have automatic parentheses enabled.

### System shell access

Any input line beginning with a '!' character is passed verbatim (minus the '!', of course) to the underlying operating system (*the sole exception to this is the 'cd' command which must be executed without the '!'*).

Several common commands (ls, pwd, cd, less) may be executed with or without the '!'.

CASA [1]: pwd
/export/home/corsair-vml/jmcmulli/data
CASA [2]: ls n*
ngc5921.ms ngc5921.py
CASA [3]: !cp -r ../test.py .


CASA [4]: cd /home/ballista/jmcmulli/other_data
CASA [4]: pwd
/home/ballista/jmcmulli/other_data
CASA [5]: bookmark other_data
CASA [6]: cd /export/home/corsair-vml/jmcmulli/data
CASA [7]: pwd
/export/home/corsair-vml/jmcmulli/data
CASA [8]: cd -b other_data
(bookmark:data) -> /home/ballista/jmcmulli/other_data


Output from shell commands can be captured in two ways:

1. sx shell_command, !!shell_command - this captures the output to a list

CASA [1]: sx pwd # stores output of 'pwd' in a list
Out[1]: ['/home/basho3/jmcmulli/pretest']

CASA [2]: !!pwd  # !! is a shortcut for 'sx'
Out[2]: ['/home/basho3/jmcmulli/pretest']

CASA [3]: sx ls v* # stores output of 'ls v*' in a list
Out[3]:
['vla_calplot.jpg',
'vla_calplot.png',
'vla_msplot_cals.jpg',
'vla_msplot_cals.png',
'vla_plotcal_bpass.jpg',
'vla_plotcal_bpass.png',
'vla_plotcal_fcal.jpg',
'vla_plotcal_fcal.png',
'vla_plotvis.jpg',
'vla_plotvis.png']

CASA [4]: x=_ # remember '_' is a shortcut for the output from the last command

CASA [5]: x
Out[5]:
['vla_calplot.jpg',
'vla_calplot.png',
'vla_msplot_cals.jpg',
'vla_msplot_cals.png',
'vla_plotcal_bpass.jpg',
'vla_plotcal_bpass.png', 'vla_plotcal_fcal.jpg',
'vla_plotcal_fcal.png',
'vla_plotvis.jpg',
'vla_plotvis.png']

CASA [6]: y=Out[2] # or just refer to the enumerated output

CASA [7]: y
Out[7]: ['/home/basho3/jmcmulli/pretest']


2. sc - captures the output to a variable; options are '-l' and '-v'

CASA [1]: sc x=pwd # capture output from 'pwd' to the variable 'x'

CASA [2]: x
Out[2]: '/home/basho3/jmcmulli/pretest'

CASA [3]: sc -l x=pwd # capture the output from 'pwd' to the variable 'x' but
# split newlines into a list (similar to sx command)

CASA [4]: x
Out[4]: ['/home/basho3/jmcmulli/pretest']

CASA [5]: sc -v x=pwd # capture output from 'pwd' to a variable 'x' and
# show what you get (verbose mode)
x ==
'/home/basho3/jmcmulli/pretest'

CASA [6]: x
Out[6]: '/home/basho3/jmcmulli/pretest'
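Outside of the IPython/CASA environment, the same list-style capture of shell output can be reproduced with the Python standard library. A minimal sketch (the helper name capture_lines is ours, not a CASA or IPython function):

```python
import subprocess

def capture_lines(command):
    """Run a shell command and return its stdout split into a list of
    lines, mimicking the list-style capture of 'sx' / '!!' above."""
    out = subprocess.check_output(command, shell=True)
    return out.decode().splitlines()

print(capture_lines("echo hello"))   # ['hello']
```

This is only an illustration of what the capture magics do; within CASA, sx and sc remain the convenient forms.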


### Logging

There are two components to logging within CASA. Logging of all command line inputs is done via IPython.

Upon startup, CASA will log all commands to a file called ipython.log. This file can be changed via the use of the ipythonrc file.

The following line sets up the logging for CASA. The specification of the logging file is followed by one of four modes: 1) append (append to an existing file), 2) rotate (each session of CASA creates a new log file with an incrementing counter: ipython.log.1~, ipython.log.2~, etc.), 3) over (overwrite the existing file), and 4) backup (rename the existing log file to log_name~).

logfile ./ipython.log append


The command logstate will provide details on the current logging setup:

CASA [12]: logstate

File:   ipython.log
Mode:   append
State:  active


Logging can be turned on and off using the logon, logoff commands.

The second component is the output from applications which is directed to the file ./casapy.log.

Your command line history is automatically maintained and stored in the local directory as ipython.log; this file can be edited and re-executed as appropriate using execfile('filename'). In addition, output from the applications is logged to the file casapy.log, also in your local directory; the logger GUI displaying this file is brought up automatically.

The logger has a range of features including the ability to filter messages, sort by Priority, Time, etc, and the ability to insert additional comments.

* CASA logger:
%BEGINFIGURE{label='fig:logger', caption='CASA Logger GUI window'}% casalogger1.jpg %ENDFIGURE%

* CASA logger - Search facility: Specify a string in the entry box to have all instances of the found string highlighted.
%BEGINFIGURE{label='fig:logger_search',caption='CASA Logger - Search example'}% casalogger_select.jpg %ENDFIGURE%

* CASA logger - Filter facility: The log output can be sorted by Priority, Time, Origin. One can also filter for a string found in the Message.
%BEGINFIGURE{label='fig:logger_filter',caption='CASA Logger - Filter example'}% casalogger_filter.jpg %ENDFIGURE%

* CASA logger - Insert facility: The log output can be augmented by adding notes or comments during the reduction. The file should then be saved to disk to retain these changes.
%BEGINFIGURE{label='fig:logger_insert',caption='CASA Logger - Insert example'}% casalogger_insert.jpg %ENDFIGURE%

### History and Searching

Numbered input/output history is provided natively within IPython. Command history is also maintained on-line.

CASA [11]: x=1

CASA [12]: y=3*x

CASA [13]: z=x**2+y**2

CASA [14]: x
Out[14]: 1

CASA [15]: y
Out[15]: 3

CASA [16]: z
Out[16]: 10

CASA [17]: Out[14]   # Note: The 'Out' vector contains command output
Out[17]: 1

CASA [18]: _15       # Note: The return value can be accessed by _number
Out[18]: 3

CASA [19]: ___       # Note: The last three return values can be accessed as:
Out[19]: 10        #       _, __, ___


Command history can be accessed via the 'hist' command. The history is reset at the beginning of every CASA session; that is, typing 'hist' when you first start CASA will not show any commands from the previous session. However, all of the commands are still available at the command line and can be accessed through the up and down arrow keys, and through searching.

CASA [22]: hist
1 : __IP.system("vi temp.py")  # Note:shell commands are designated in this way
2 : ipmagic("%run -i temp.py") # Note:magic commands are designated in this way
3 : ipmagic("hist ")
4 : more temp.py
5 : __IP.system("more temp.py")
6 : quickhelp()                # Note: autoparenthesis are added in the history
7 : im.open('ngc5921.ms')
8 : im.summary()
9 : ipmagic("pdoc im.setdata")
10: im.close()
11: quickhelp()
12: ipmagic("logstate ")
13: x=1
14: y=3*x
15: z=x**2+y**2
16: x
17: y
18: z
19: Out[16]
20: _17
21: ___


The history can be saved as a script or used as a macro for further use:

CASA [24]: save script.py 13:16
File script.py exists. Overwrite (y/[N])? y
The following commands were written to file script.py:
x=1
y=3*x
z=x**2+y**2
CASA [25]: !more script.py
x=1
y=3*x
z=x**2+y**2

CASA [26]:


Note that a history range saves commands up to, but not including, the last value (i.e., the range 13:16 saves commands 13, 14, and 15).
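The M:N range notation follows Python's half-open slice convention, which is why the end value is excluded. A one-line check in plain Python:

```python
# IPython history ranges like 13:16 follow Python's half-open slice
# convention: the end index is excluded, so three commands are covered.
saved = list(range(13, 16))
print(saved)   # [13, 14, 15]
```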

There are two mechanisms for searching command history:

1. Begin typing and use the Ctrl-p (previous,up) and Ctrl-n (next,down) to search through only
the history items that match what you've typed so far. If you use Ctrl-p,Ctrl-n at a blank prompt, they behave just like the normal arrow keys.

2. Hit Ctrl-r; this opens a search prompt. Begin typing and the system searches your history for
lines that contain what you've typed so far, completing what it can.

For example:

CASA [37]:

(reverse-i-search)':


Typing anything after the colon will provide you with the last command matching the characters, for example, typing 'op' finds:

(reverse-i-search)op': im.open('ngc5921.ms')


Subsequent hitting of Ctrl-r will search for the next command matching the characters.

### Macros

Macros can be made for easy re-execution of previous commands. For example to store the commands 13-15 to the macro 'example':

CASA [31]: macro example 13:16
Macro example created. To execute, type its name (without quotes).
Macro contents:
x=1
y=3*x
z=x**2+y**2

CASA [32]: z
Out[32]: 10

CASA [33]: z=10

CASA [34]: example
Out[34]: Executing Macro...

CASA [35]: z
Out[35]: 10

CASA [36]:


### On-line editing

You can edit files on-line in two ways:

1. Using the shell access via '!vi'
2. Using the ed function; this will edit the file and, upon closing, automatically
execute it. Using the 'script.py' example above:

CASA [13]: ed script.py # this will bring up the file in your chosen editor;
# when you are finished editing the file, it will automatically
# be executed (as though you had run execfile('script.py'))
Editing... done. Executing edited code...

CASA [14]: x
Out[14]: 1

CASA [15]: y
Out[15]: 3

CASA [16]: z
Out[16]: 6


### Executing Python scripts

Python scripts are simple text files containing lists of commands, as if typed at the keyboard. Note: the autoparentheses feature of IPython cannot be used in scripts; that is, you should make sure all function calls have their opening and closing parentheses.

# file is script.py
# My script to plot the observed visibilities
plotxy('ngc5921.ms','uvdist') #yaxis defaults to amplitude


This can be done by:

1. execfile: this will execute the script as though you had typed the lines at the CASA prompt.

CASA [5]: execfile 'script.py'
--------> execfile('script.py')


### How do I exit from CASA ?

Hit CTRL-D at the CASA command line prompt, or type:

CASA [1]: %Exit


and press return.

## Appendix III. --- Dictionaries

### AIPS -- CASA dictionary

https://safe.nrao.edu/wiki/bin/view/Software/LegacyCASA-AIPSDictionary

### MIRIAD -- CASA dictionary

This appendix provides a list of common MIRIAD tasks and their equivalent CASA tool or tool function names. The two packages differ in both their architecture and their calibration and imaging models, so there is often no direct correspondence. However, this index does provide the scientific user of CASA who is familiar with MIRIAD with a simple translation table mapping existing data reduction knowledge to the new package.

%BEGINTABLE{label="tbl:MIRIADCASA" caption="MIRIAD - CASA dictionary"}%
MIRIAD Task Description CASA tool/function
atlod load ATCA data atcafiller
blflag Interactive baseline based editor/flagger mp raster displays
cgcurs Interactive image analysis qtviewer
cgdisp Image display, overlays qtviewer
clean Clean an image im
fits FITS image filler ia.imagefromfits
gpboot Set flux density scale cb.fluxscale
gpcal Polarization leakage and gain calibration cb with 'G' and 'D'
gpcopy copy calibration tables not needed
gpplt Plot calibration solutions cp.plot
imcomb Image combination im
imfit Image-plane component fitter ia.imagefitter
impol Create polarization images ia.imagepol
imstat Image statistics ia.statistics
imsub Extract sub-image ia.subimage
invert Synthesis imaging im
linmos linear mosaic combination of images im
maths Calculations involving images ia.imagecalc, ia.calc
mfcal Bandpass and gain calibration cb with 'G' and 'B'
prthd Print header of image or uvdata ia.summary, ms.summary
restor Restore a clean component model im
selfcal selfcalibration of visibility data im, cb
tvclip automated flagging based on clip levels af
tvdisp Load image to TV display qtviewer
tvflag Interactive TV data editing mp
uvaver Average/select data, apply calibration ms.split
uvfit uv-plane component fitter cb
uvflag Command-based flagging af
uvgen Simulator sm
uvlist List uv-data tb
uvmodel Source model computation im.ft
uvplt uv-data plotting ms
uvsplit split uv file in sources and spectral windows ms.split
%ENDTABLE%

### CLIC -- CASA dictionary

This appendix provides a list of common CLIC tasks, and their equivalent CASA tool or tool function names. The two packages are very similar since the CASA software to reduce IRAM data is based on the CLIC reduction procedures.

%BEGINTABLE{label="tbl:CLICCASA" caption="CLIC - CASA dictionary"}%
CLIC Function Description CASA tool/function
ampl Amplitude calibration cb.setsolvegainspline,cb.solve
flag Flag data mp, af, qtviewer
flux Absolute flux calibration cb.fluxscale
phase Phase calibration cb.setsolvegainspline,cb.solve
phcor Atmospheric phase correction almatifiller
print Print text summary of data ms.summary
rf Radio frequency bandpass cb.setsolvebandpoly, cb.solve
table Split out calibrated data (uv table) ms.split
%ENDTABLE%

## Appendix IV --- Synthesis Calibration

### Basic Synthesis Calibration Fundamentals

The visibilities measured by an instrument must be calibrated before an image can be formed. This is because the wavefronts received and processed by the observational hardware have been corrupted by a variety of effects. These include the effects of transmission through the atmosphere, the imperfections of the amplified electronic (digital) signal and its transmission through the signal processing system, and the effects of formation of the cross-power spectra by a correlator. Calibration is the process of reversing these effects to arrive at corrected visibilities which resemble as closely as possible the visibilities that would have been measured in vacuum by a perfect system. The subject of this chapter of the cookbook is the determination of these effects by using the visibility data itself.

The relationship between the observed and ideal (desired) visibilities on the baseline between antennas i and j may be expressed by the Measurement Equation:

V_ij = J_ij V_ij^IDEAL

where V_ij represents the observed visibility, V_ij^IDEAL represents the corresponding ideal visibilities, and J_ij represents the accumulation of all corruptions affecting baseline ij. The visibilities are indicated as vectors spanning the four correlation combinations which can be formed from dual-polarization signals. These four correlations are related directly to the Stokes parameters which fully describe the radiation. The J_ij term is therefore a 4x4 matrix.

Most of the effects contained in J_ij (indeed, the most important of them) are antenna-based, i.e., they arise from measurable physical properties of (or above) individual antenna elements in a synthesis array. Thus, adequate calibration of an array of N antennas forming N(N-1)/2 baseline visibilities is usually achieved through the determination of only N factors J_i, such that J_ij = J_i ⊗ J_j^*. For the rest of this chapter, we will usually assume that J_ij is factorable in this way, unless otherwise noted.

As implied above, J_ij may also be factored into the sequence of specific corrupting effects, each having its own particular (relative) importance and physical origin, which determines its unique algebra. Including the most commonly considered effects, the Measurement Equation can be written:

V_ij = M_ij B_ij G_ij D_ij E_ij P_ij T_ij V_ij^IDEAL

where:

• T_ij = Polarization-independent multiplicative effects introduced by the troposphere, such as opacity and path-length variation.
• P_ij = Parallactic angle, which describes the orientation of the polarization coordinates on the plane of the sky. This term varies according to the type of the antenna mount.
• E_ij = Effects introduced by properties of the optical components of the telescopes, such as the collecting area's dependence on elevation.
• D_ij = Instrumental polarization response. "D-terms" describe the polarization leakage between feeds (e.g., how much the R-polarized feed picked up L-polarized emission, and vice versa).
• G_ij = Electronic gain response due to components in the signal path between the feed and the correlator. This complex gain term includes the scale factor for absolute flux density calibration, and may include phase and amplitude corrections due to changes in the atmosphere (in lieu of T_ij). These gains are polarization-dependent.
• B_ij = Bandpass (frequency-dependent) response, such as that introduced by spectral filters in the electronic transmission system.
• M_ij = Baseline-based correlator (non-closing) errors. By definition, these are not factorable into antenna-based parts.

Note that the terms are listed in the order in which they affect the incoming wavefront (G_ij and B_ij represent an arbitrary sequence of such terms depending upon the details of the particular electronic system). Note that M_ij differs from all of the rest in that it is not antenna-based, and thus not factorable into terms for each antenna.
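The antenna-based factorization can be illustrated numerically. The following toy sketch (ours, not CASA code) builds a baseline corruption as the outer (Kronecker) product of two hypothetical 2x2 antenna-based Jones matrices, applies it to an ideal 4-correlation visibility vector, and then reverses it — which is all that calibration does in principle:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical antenna-based Jones matrices for antennas i and j.
Ji = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
Jj = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

# Baseline corruption J_ij = J_i (outer product) J_j*, a 4x4 matrix
# acting on the 4-vector of correlations (e.g., RR, RL, LR, LL).
Jij = np.kron(Ji, Jj.conj())

# Ideal visibility vector, and the observed (corrupted) one.
V_ideal = np.array([1.0, 0.1, 0.1, 1.0], dtype=complex)
V_obs = Jij @ V_ideal

# Calibration reverses the corruption (here by direct inversion).
V_corrected = np.linalg.solve(Jij, V_obs)
assert np.allclose(V_corrected, V_ideal)
```

Real calibration, of course, must first estimate J_ij from calibrator observations rather than being handed it, which is the subject of the rest of this chapter.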

As written above, the measurement equation is very general; not all observations will require treatment of all effects, depending upon the desired dynamic range. E.g., bandpass need only be considered for continuum observations if observed in a channelized mode and very high dynamic range is desired. Similarly, instrumental polarization calibration can usually be omitted when observing (only) total intensity using circular feeds. Ultimately, however, each of these effects occurs at some level, and a complete treatment will yield the most accurate calibration. Modern high-sensitivity instruments such as ALMA and EVLA will likely require a more general calibration treatment than similar observations with older arrays in order to reach the advertised dynamic ranges on strong sources.

In practice, it is usually far too difficult to adequately measure most calibration effects absolutely (as if in the laboratory) for use in calibration. The effects are usually far too changeable. Instead, the calibration is achieved by making observations of calibrator sources on the appropriate timescales for the relevant effects, and solving the measurement equation for them using the fact that we have N(N-1)/2 measurements and only N factors to determine (except for M_ij, which is only sparingly used). (_Note: By partitioning the calibration factors into a series of consecutive effects, it might appear that the number of free parameters is some multiple of N, but the relative algebra and timescales of the different effects, as well as the multiplicity of observed polarizations and channels, compensate, and it can be shown that the problem remains well-determined until, perhaps, the effects are direction-dependent within the field of view. Limited solvers for such effects are under study; the calibrater tool currently only handles effects which may be assumed constant within the field of view. Corrections for the primary beam are handled in the imager tool._) Once determined, these terms are used to correct the visibilities measured for the scientific target. This procedure is known as cross-calibration (when only phase is considered, it is called phase-referencing).

The best calibrators are point sources at the phase center (constant visibility amplitude, zero phase), with sufficient flux density to determine the calibration factors with adequate SNR on the relevant timescale. The primary gain calibrator must be sufficiently close to the target on the sky so that its observations sample the same atmospheric effects. A bandpass calibrator usually must be sufficiently strong (or observed with sufficient duration) to provide adequate per-channel sensitivity for a useful calibration. In practice, several calibrators are usually observed, each with properties suitable for one or more of the required calibrations.

Synthesis calibration is inherently a bootstrapping process. First, the dominant calibration term is determined, and then, using this result, more subtle effects are solved for, until the full set of required calibration terms is available for application to the target field. The solutions for each successive term are relative to the previous terms. Occasionally, when the several calibration terms are not sufficiently orthogonal, it is useful to re-solve for earlier types using the results for later types, in effect, reducing the effect of the later terms on the solution for earlier ones, and thus better isolating them. This idea is a generalization of the traditional concept of self-calibration, where initial imaging of the target source supplies the visibility model for a re-solve of the gain calibration (G or T). Iteration tends toward convergence to a statistically optimal image. In general, the quality of each calibration and of the source model are mutually dependent. In principle, as long as the solution for any calibration component (or the source model itself) is likely to improve substantially through the use of new information (provided by other improved solutions), it is worthwhile to continue this process.

In practice, these concepts motivate certain patterns of calibration for different types of observation, and the calibrater tool in CASA is designed to accommodate these patterns in a general and flexible manner. For a spectral line total intensity observation, the pattern is usually:

1. Solve for G on the bandpass calibrator
2. Solve for B on the bandpass calibrator, using G
3. Solve for G on the primary gain (near-target) and flux density calibrators, using the B solutions just obtained
4. Scale the G solutions for the primary gain calibrator according to the flux density calibrator solutions
5. Apply the B and G solutions to the target data
6. Image the calibrated target data

If opacity and gain curve information are relevant and available, these terms are incorporated in each of the steps (in future, an actual solve for opacity from appropriate data may be folded into this process):

1. Solve for G on the bandpass calibrator, using T (opacity) and E (gain curve) terms already derived.
2. Solve for B on the bandpass calibrator, using G, T (opacity), and E (gain curve) solutions.
3. Solve for G on the primary gain (near-target) and flux density calibrators, using B, T (opacity), and E (gain curve) solutions.
4. Scale the G solutions for the primary gain calibrator according to the flux density calibrator solutions
5. Apply T (opacity), E (gain curve), B, and G solutions to the target data
6. Image the calibrated target data

For continuum polarimetry, the typical pattern is:

1. Solve for G on the polarization calibrator, using P (parallactic angle; analytical) solutions.
2. Solve for D on the polarization calibrator, using P and G solutions.
3. Solve for G on the primary gain and flux density calibrators, using P and D solutions.
4. Scale the G solutions for the primary gain calibrator according to the flux density calibrator solutions.
5. Apply P, D, and G solutions to the target data.
6. Image the calibrated target data.

For a spectro-polarimetry observation, these two examples would be folded together.

In all cases the calibrator model must be adequate at each solve step. At high dynamic range and/or high resolution, many calibrators which are nominally assumed to be point sources become slightly resolved. If this has biased the calibration solutions, the offending calibrator may be imaged at any point in the process and the resulting model used to improve the calibration. Finally, if sufficiently strong, the target may be self-calibrated as well.

### General Calibrater Mechanics

The calibrater tool, and the tasks based on it, are designed to solve for and apply solutions of all of the solution types listed above (with more in the works). This leads to a single basic sequence of execution for all solves, regardless of type:

1. Set the calibrator model visibilities
2. Select the visibility data which will be used to solve for a calibration type
3. Arrange to apply any already-known calibration types (the first time through, none may yet be available)
4. Arrange to solve for a specific calibration type, including specification of the solution timescale and other specifics
5. Execute the solve process
6. Repeat 1-5 for all required types, using each result, as it becomes available, in step 3, and perhaps repeating for some types to improve the solutions

By itself, this sequence doesn't guarantee success; the data provided for the solve must have sufficient SNR on the appropriate timescale, and must provide sufficient leverage for the solution (e.g., D solutions require data taken over a sufficient range of parallactic angle in order to separate the source polarization contribution from the instrumental polarization).
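The well-determined nature of the antenna-based solve can be seen in miniature. For a point-source calibrator at the phase center (unit ideal visibility) and purely antenna-based gains, the observed visibilities are V_ij = g_i g_j*, so N(N-1)/2 measurements constrain only N unknowns. A toy solver (ours; CASA's solvers are far more general), using a damped least-squares fixed-point iteration:

```python
import numpy as np

rng = np.random.default_rng(1)
nant = 6

# Hypothetical true antenna gains, and the visibilities they produce for
# a 1 Jy point source at the phase centre: V_ij = g_i * conj(g_j).
g_true = (1 + 0.2 * rng.normal(size=nant)) * np.exp(1j * rng.uniform(-1, 1, nant))
V = np.outer(g_true, g_true.conj())

# Damped fixed-point iteration of the least-squares update
#   g_i <- sum_j V_ij g_j / sum_j |g_j|^2
g = np.ones(nant, dtype=complex)
for _ in range(100):
    g = 0.5 * (g + (V @ g) / (np.abs(g) ** 2).sum())

# The solved gains reproduce the observed baseline visibilities.  They
# are determined only up to an overall phase, which is why CASA's
# solvers take a reference antenna (the refant parameter).
assert np.allclose(np.outer(g, g.conj()), V)
```

The overall-phase degeneracy is the simplest example of why a solve needs "sufficient leverage": some combinations of parameters are simply not constrained by the data.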

The calibration tasks are as follows:

• accum - Accumulate incremental calibration solutions into a cumulative cal table
• bandpass - B calibration solving; supports pre-apply of other calibrations
• blcal - baseline-based G (or B) calibration; supports pre-apply of other calibrations
• clearcal - Re-initialize visibility data set calibration data
• correct - Apply calculated calibration solutions
• fluxscale - Bootstrap the flux density scale from standard calibrators
• fringecal - baseline-based fringe-fitting calibration solving; supports pre-apply of other calibrations
• gaincal - G calibration solving; supports pre-apply of other calibrations
• pointcal - Solve for pointing error calibration
• plotcal - Plot calibration solutions
• setjy - Compute the model visibility for a specified source flux density
• smooth - Smooth calibration solutions derived from one or more sources
• uvmodelfit - Fit a single component source model to the uv data
The following sections outline the use of these tasks in standard calibration processes.

### Calibration models for absolute flux density (setjy)

When solving for visibility-plane calibration, CASA calibration applications compare the observed DATA column with the MODEL_DATA column. The first time that an imaging or calibration task is executed for a given MS, the MODEL_DATA column is created and initialized with unit point source flux density visibilities (unpolarized) for all sources (e.g., amplitude=1, phase=0). The setjy task is then used to set the proper flux density for flux calibrators. For sources that are recognized flux calibrators (listed in the table below), setjy will calculate the flux densities, Fourier transform the model, and write the results to the MODEL_DATA column. For the VLA, the default source models are customarily point sources defined by the Baars or Perley-Taylor flux density scales, or point sources of unit flux density if the flux density is unknown. The MODEL_DATA column can also be filled with a model generated from an image of the source (e.g., the Fourier transform of an image generated after initial calibration of the data).

%BEGINTABLE{label="tbl:fluxcal.table" caption="Recognized Flux Density Calibrators."}%
3C Name B1950 Name J2000 Name
3C286 1328+307 1331+305
3C48 0134+329 0137+331
3C147 0538+498 0542+498
3C138 0518+165 0521+166
-- 1934-638 --
3C295 1409+524 1411+522
%ENDTABLE%

By default the setjy task will cycle through all fields and spectral windows, setting the flux density either to 1 Jy (unpolarized), or if the source is recognized as one of the calibrators in the above table, to the flux density (assumed unpolarized) appropriate to the observing frequency. To run setjy on a measurement set called data.ms:

setjy(vis='data.ms')                # This will set all fields and spectral windows


To limit this operation to certain fields and spectral windows, use the fieldid and/or spwid parameters, which take single values:

setjy(vis='data.ms',fieldid=0)         # only set the flux density of
# the first field (all spectral windows)

setjy(vis='data.ms',fieldid=1,spwid=17)# only set the flux density of
# the second field in spectral window 17


The full-polarization flux density (I,Q,U,V) may also be explicitly provided:

setjy(vis='data.ms',
fieldid=1,spwid=16,               # Run setjy on the second field, spwid 16
fluxdensity=[3.5,0.2,0.13,0.0])   #  and set I,Q,U,V explicitly


If the flux density calibrator is resolved at the observing frequency, the point source model generated by setjy will not be appropriate. If available, a model image of the resolved source at the observing frequency may be used to generate the appropriate visibilities with the ft task. Model images for some flux density calibrators are provided with CASA:
• RPM RHE4: located in /usr/lib/casapy/data/nrao/VLA/CalModels
• MAC OSX .dmg: located in /opt/casa/data/nrao/VLA/CalModels
• NRAO-AOC stable: /home/casa/data/nrao/VLA/CalModels
• NRAO-AOC daily: /home/ballista/casa/daily/data/nrao/VLA/CalModels

The models available are:
3C138_C.im/  3C138_Q.im/  3C147_K.im/  3C286_C.im/  3C286_Q.im/  3C48_C.im/  3C48_Q.im/  README
3C138_K.im/  3C138_U.im/  3C147_Q.im/  3C286_K.im/  3C286_U.im/  3C48_K.im/  3C48_U.im/
3C138_L.im/  3C138_X.im/  3C147_U.im/  3C286_L.im/  3C286_X.im/  3C48_L.im/  3C48_X.im/

These are all images constructed from AIPS clean-component (CC) lists,
not convolved with a restoring beam, and properly scaled to the
Perley-Taylor 1999 flux density for the frequencies at which they were
observed.


It is important that the model image *not* be one convolved with a finite beam; it must have units of Jy/pixel (not Jy/beam). Also, the frequency range of the image must cover the frequencies in the dataset. Finally, the amplitude scale in the image must be correct (beware of variation due to non-zero spectral index).
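The Jy/pixel requirement matters because a Jy/beam image overcounts flux per pixel by the beam area expressed in pixels. A minimal sketch of the conversion, using the standard Gaussian beam solid angle pi*bmaj*bmin/(4 ln 2); the numerical values below are hypothetical, not properties of any of the models listed above:

```python
import math

def beam_area_pixels(bmaj_arcsec, bmin_arcsec, pixsize_arcsec):
    """Gaussian beam area, pi*bmaj*bmin/(4 ln 2), expressed in pixels."""
    beam_area = math.pi * bmaj_arcsec * bmin_arcsec / (4.0 * math.log(2.0))
    return beam_area / pixsize_arcsec**2

def jy_per_beam_to_jy_per_pixel(value, bmaj, bmin, pixsize):
    """Convert a Jy/beam pixel value to Jy/pixel."""
    return value / beam_area_pixels(bmaj, bmin, pixsize)

# e.g., a hypothetical 2" x 2" beam on 0.5" pixels covers ~18 pixels,
# so a Jy/beam model would overpredict the visibilities by that factor.
print(beam_area_pixels(2.0, 2.0, 0.5))
```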

Copy the model image to the working directory; the following illustrates its use.

# Import the data
importvla(archivefiles='AS776_A031015.xp2',vis='ngc7538_XBAND.ms',freqtol=10000000.0,bandname='X')

# Flag the ACs
flagautocorr('ngc7538_XBAND.ms')

# METHOD 1:  Use point source model for 3C48, plus uvrange in solve

# Use point source model for 3C48
setjy(vis='ngc7538_XBAND.ms',fieldid=0)

# Limit 3C48 (fieldid=0) solutions to uvrange = 0-40 klambda
gaincal(vis='ngc7538_XBAND.ms', caltable='cal.G',msselect='FIELD_ID IN [0]',solint=60.0,refant=10, nchan=0,uvrange=[0,40],append=False,gaincurve=False,opacity=False)

# Append phase-calibrator's solutions (no uvrange) to the same table
gaincal(vis='ngc7538_XBAND.ms', caltable='cal.G',msselect='FIELD_ID IN [2]',solint=60.0,refant=10, nchan=0,uvrange=[0],append=True,gaincurve=False,opacity=False)

# Fluxscale
fluxscale(vis='ngc7538_XBAND.ms',caltable='cal.G',reference=['0137+331'],
transfer=['2230+697'], fluxtable='cal.Gflx',append=False)

# METHOD 2: use a resolved model copied from the data respository
#   for 3C48, and no uvrange
#   (NB: detailed freq-dep flux scaling TBD)

ft(vis='ngc7538_XBAND.ms',fieldid=0,model='3C48_X.im')

# Solutions on both calibrators with no uvrange
gaincal(vis='ngc7538_XBAND.ms', caltable='cal.G2',msselect='FIELD_ID IN [0,2]',solint=60.0,refant=10, nchan=0,uvrange=[0],append=False,gaincurve=False,opacity=False)

# Fluxscale
fluxscale(vis='ngc7538_XBAND.ms',caltable='cal.G2',reference=['0137+331'],
transfer=['2230+697'], fluxtable='cal.G2flx',append=False)

# Both methods give 2230 flux densities ~0.7 Jy, in good agreement with
#   AIPS


### Types of calibrations to solve for and apply

The following sections describe the basics of solving for the various types of calibration solutions. The examples given here are very general. The assembly of these building blocks into calibration patterns for several use cases is demonstrated in the real examples provided at the end of this cookbook.

#### A priori calibrations

This section describes how to apply _a priori_ calibration information that may be available.

##### Gain curves

Gain curve calibration involves compensating for the effects of elevation on the amplitude of the received signals at each antenna. Antennas are not absolutely rigid, and so their effective collecting area and net surface accuracy vary with elevation as gravity deforms the surface. This calibration is especially important at higher frequencies where the deformations represent a greater fraction of the observing wavelength. By design, this effect is usually minimized (i.e., gain maximized) for elevations between 45 and 60 degrees, with the gain decreasing at higher and lower elevations. Gain curves are most often described as 2nd- or 3rd-order polynomials in zenith angle.

At this writing, gain curve calibration has been implemented in CASA for the VLA, with gain curve polynomial coefficients available directly from the CASA data repository. To make gain curve corrections for VLA data, set the gaincurve parameter to True for any of the calibration tasks, e.g.,:

gaincal('data.ms','cal.G0',gaincurve=True,solint=0.,refant=11)
#Note gaincurve=True is the default so this parameter can be omitted for VLA data
gaincal('data.ms','cal.G0',solint=0.,refant=11) # is equivalent


NOTE: Set gaincurve=False if you are not using VLA data.

The gain curve will be calculated per timestamp. Upon execution of a calibration task (e.g., gaincal, bandpass, correct, etc), the gain curve data appropriate to the observing frequencies will be automatically retrieved from the data repository and applied.
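Since gain curves are low-order polynomials in zenith angle, the correction itself is simple to evaluate. A sketch with hypothetical coefficients (the real VLA per-band coefficients live in the data repository and are applied automatically, as described above):

```python
# Illustrative elevation-dependent gain curve.  The coefficients below
# are hypothetical, not real VLA values; they are chosen so the gain
# peaks (at 1.0) near 50 degrees elevation and falls off on either side.
C0, C1, C2 = 0.904, 4.8e-3, -6.0e-5    # polynomial in zenith angle z

def gain(elevation_deg):
    z = 90.0 - elevation_deg           # zenith angle in degrees
    return C0 + C1 * z + C2 * z * z

for el in (20.0, 50.0, 80.0):
    print(el, gain(el))                # gain is highest at mid elevations
```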

##### Opacity corrections

The troposphere is not completely transparent. At high radio frequencies (above ~15 GHz), water vapor and molecular oxygen begin to have a substantial effect on radio observations. According to the physics of radiative transmission, the effect is threefold. First, radio waves from astronomical sources are absorbed (and therefore attenuated) before reaching the antenna. Second, since a good absorber is also a good emitter, significant noise-like power will be added to the overall system noise. Finally, the optical path length through the troposphere introduces a time-dependent phase error. In all cases, the effects become worse at lower elevations due to the increased air mass through which the antenna is looking. In CASA, the opacity correction described here compensates only for the first of these effects, tropospheric attenuation, using a plane-parallel approximation for the troposphere to estimate the elevation dependence.

Opacity corrections are a component of calibration type 'T'. To make opacity corrections in CASA, an estimate of the zenith opacity is required (see observatory-specific chapters for how to measure zenith opacity). With a value for zenith opacity in hand (0.1 nepers, say), use the following parameters:

gaincal('data.ms','cal.G0',solint=0.,refant=11,tau=0.1)
#Note: the opacity parameter is True by default and so only the
#tau parameter needs to be specified if you want to apply this.


If opacity=True (default), the calibration tasks will apply an elevation-dependent opacity correction (scaled to 0.1 nepers at the zenith for all antennas for this example) calculated at each timestamp (t=-1).

Generalizations to antenna- and time-dependent opacities, including derivation (from weather information) and solving (directly from the visibility data) capabilities, will be made available in the future.

##### Other a priori Calibrations

Other a priori calibrations will be added to the calibrater tool in the near future. These will include antenna-position (phase) corrections, system temperature normalization (amplitude) corrections, tropospheric phase corrections derived from Water Vapor Radiometry (WVR) measurements, instrumental line-length corrections, etc. Where appropriate, solving capabilities for these effects will also be added.

#### Time-dependent complex gain calibration (gaincal: G, T, GSPLINE)

##### G solutions

Systematic time-dependent complex gain errors are almost always the dominant calibration effect, and a solution for them is almost always necessary before proceeding with any other calibration. Traditionally, this calibration type has been a catch-all for a variety of similar effects, including: the relative amplitude and phase gain for each antenna, phase and amplitude drifts in the electronics of each antenna, amplitude response as a function of elevation (gain curve), and tropospheric amplitude and phase effects. In CASA, it is possible to handle many of these effects separately, as available information and circumstances warrant, but it is still possible to solve for the net effect using calibration type G.

Generally speaking, type G can represent any per-spectral window multiplicative polarization- and time-dependent complex gain effect downstream of the polarizers. (Polarization independent effects upstream of the polarizers may also be treated with G.) Multi-channel data (per spectral window) will be averaged in frequency before solving (use calibration type B to solve for frequency-dependent effects within each spectral window).

To solve for G on, say, fields 1 & 2, on a 90s timescale, and apply, e.g., gain curve corrections:

gaincal('data.ms',
caltable='cal.G',                 # Write solutions to disk file 'cal.G'
msselect='FIELD_ID IN [0,1]',     # Restrict data selection
solint=90,                        # Solve for phase and amp on a 90s timescale
refant=3)                         #
# Note: gaincurve=True by default

plotcal('cal.G','amp')                    # Inspect solutions


These G solutions will be referenced to antenna 4 (refant=3). Choose a well-behaved antenna located near the center of the array as the reference antenna. For non-polarization datasets, a reference antenna need not be specified, although you may choose one if you wish. If no reference antenna is specified, an effective phase reference that is an average over the data will be calculated and used. For data that require polarization calibration, you must choose a reference antenna that has a constant phase difference between the right and left polarizations (e.g., no phase jumps or drifts). If no reference antenna (or a poor one) is specified, the phase reference may have jumps in the R--L phase, and the resulting polarization angle response will vary during the observation, thus corrupting the polarization imaging.
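The referencing operation itself is simple arithmetic: every antenna's gain is rotated by the negative of the reference antenna's phase, so the reference phase becomes zero while all antenna-relative phases are preserved. A minimal sketch in plain Python (illustrative names, not CASA code):

```python
import cmath, math

def rereference(gains, refant):
    """Rotate per-antenna complex gains so the reference antenna has
    zero phase; antenna-relative phases are unchanged."""
    rot = cmath.exp(-1j * cmath.phase(gains[refant]))
    return [g * rot for g in gains]

gains = [cmath.rect(1.0, math.radians(p)) for p in (20.0, 50.0, -10.0)]
refgains = rereference(gains, refant=0)
# antenna 0 now has zero phase; antenna 1 keeps its +30 degree offset
```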

To apply this solution to the calibrators and the target source (field 3, say):

correct('data.ms',
msselect='FIELD_ID IN [0,1,2]',     # Restrict data selection (cals + src)
opacity=False,                      # Don't apply opacity correction
gaintable='cal.G')                  # Apply G solutions and correct data
# (written to the CORRECTED_DATA column)
# Note: calwt=True by default
plotxy('data.ms',xaxis='channel',datacolumn='data',subplot=211)
plotxy('data.ms',xaxis='channel',datacolumn='corrected',subplot=212)


##### T solutions

At high frequencies, it is often the case that the most rapid time-dependent gain errors are introduced by the troposphere, and are polarization-independent. It is therefore unnecessary to solve for separate time-dependent solutions for both polarizations, as is the case for G. Calibration type T is available to calibrate such tropospheric effects, differing from G only in that a single common solution for both polarizations is determined. In cases where only one polarization is observed, type T is adequate to describe the time-dependent complex multiplicative gain calibration.

In the following example, we assume we have a 'G' solution obtained on a longish timescale (a few minutes, say), and we want a residual T solution to track the polarization-independent variations on a very short timescale:

gaincal('data.ms',                       # Visibility dataset
caltable='cal.T',                # Specify output table name
gaintype='T',                    # Solve for T
msselect='FIELD_ID IN [0,1]',    # Restrict data selection to calibrators
solint=3.,                       # Obtain solutions on a 3s timescale
gaintable='cal120.G')            # Pre-apply prior G solution


For dual-polarization observations, it will always be necessary to obtain a G solution to account for differences and drifts between the polarizations (which traverse different electronics), but solutions for rapidly varying polarization-independent effects such as those introduced by the troposphere will be optimized by using T. Note that T can be used in this way for self-calibration purposes, too.

##### GSPLINE solutions

At high radio frequencies, where tropospheric phase fluctuates rapidly, it is often the case that there is insufficient SNR to obtain a robust G or T solution on timescales short enough to track the variation. In this case it is desirable to solve for a best-fit functional form for each antenna using the GSPLINE solver. The GSPLINE solver fits time-series of cubic B-splines to the phase and/or amplitude of the calibrator visibilities. Unlike ordinary G, a single common GSPLINE solution will be determined from data for all selected spectral windows and fields specified in msselect, and the resulting solution will be applicable to any field or spectral window in the same Measurement Set. An important consequence of this is that all fields used to obtain a GSPLINE amplitude solution must have models with accurate relative flux densities. (Use of incorrect relative flux densities will introduce spurious variations in the GSPLINE amplitude solution.)

The GSPLINE solver requires a number of unique additional parameters, compared to ordinary G and T solving. The duration of each spline segment is controlled by splinetime. The actual splinetime will be adjusted such that an integral number of equal-length spline segments will fit within the overall range of data.
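One plausible reading of this adjustment, sketched in plain Python (the function is illustrative, not the CASA implementation): round the number of segments to an integer and recompute the segment duration.

```python
def adjusted_splinetime(total_range, splinetime):
    """Fit an integral number of equal-length spline segments into the
    overall time range by rounding the segment count and recomputing
    the segment duration (one plausible reading of the adjustment)."""
    nseg = max(1, round(total_range / splinetime))
    return total_range / nseg

# 3500 s of data with a requested 600 s splinetime -> 6 segments of ~583 s
print(adjusted_splinetime(3500.0, 600.0))
```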

Phase splines require that cycle ambiguities be resolved prior to the fit; this operation is controlled by npointaver and phasewrap. The npointaver parameter controls how many contiguous points in the time-series are used to predict the cycle ambiguity of the next point in the time-series, and phasewrap sets the threshold phase jump (in degrees) that would indicate a cycle slip. Large values of npointaver improve the SNR of the cycle estimate, but tend to frustrate ambiguity detection if the phase rates are large. The phasewrap parameter may be adjusted to influence when cycles are detected: generally speaking, larger values are useful when SNR is high and phase rates are low, while smaller values can force cycle slip detection when low SNR conspires to obscure the jump, though the algorithm becomes significantly less robust. More robust algorithms for phase-tracking are under development (including fringe-fitting).
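The cycle-resolution logic can be sketched as follows in plain Python (an illustrative simplification, not the CASA algorithm): predict each point from the mean of the previous npointaver unwrapped points, and add or subtract 360 degrees whenever the jump exceeds phasewrap.

```python
import statistics

def unwrap(phases_deg, npointaver=3, phasewrap=180.0):
    """Resolve 360-degree cycle ambiguities in a phase time-series:
    each point is compared against the mean of the previous npointaver
    unwrapped points, and jumps larger than phasewrap are treated as
    cycle slips and corrected by multiples of 360 degrees."""
    out = [phases_deg[0]]
    for p in phases_deg[1:]:
        predicted = statistics.fmean(out[-npointaver:])
        while p - predicted > phasewrap:
            p -= 360.0
        while p - predicted < -phasewrap:
            p += 360.0
        out.append(p)
    return out

# A steadily increasing phase that wraps at +180 comes back as a ramp.
print(unwrap([150.0, 170.0, -170.0, -150.0], npointaver=1))
```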

To solve for GSPLINE phase and amplitudes, with splines of duration 600s:

gaincal('data.ms',
caltable='cal.spline.ap',
gaintype='GSPLINE',              # Solve for GSPLINE
calmode='ap',                    # Solve for amp & phase
msselect='FIELD_ID IN [0,1]',    # Restrict data selection to calibrators
splinetime=600.)                 # Set spline timescale to 10min


The GSPLINE solutions cannot yet be plotted using plotcal.

Currently, several caveats should be kept in mind when using the GSPLINE solver:

TBD

#### Flux density scale calibration (fluxscale)

The 'G' or 'T' solutions obtained from calibrators for which the flux density was unknown and assumed to be 1 Jy are correct in a time- and antenna- relative sense, but are mis-scaled by a factor equal to the inverse of the square root of the true flux density. This scaling can be corrected by enforcing the constraint that mean gain amplitudes determined from calibrators of unknown flux density should be the same as determined from those with known flux densities. The fluxscale task exists for this purpose. Given a 'G' table, 'cal.G', containing solutions for a flux density calibrator (3C286, say) and for one or more random calibrator sources with unknown flux densities (0234+285 and 0323+022, say) use the fluxscale task as follows:

fluxscale(vis='data.ms',
caltable='cal.G',                  # Select input table
fluxtable= 'cal.Gflx',             # Write scaled solutions to cal.Gflx
reference='3C286',                 # 3C286 = flux calibrator
transfer=['0234+285', '0323+022']) # Select calibrators to scale


The output table, 'cal.Gflx', contains solutions that are properly scaled for all calibrators.
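The arithmetic behind fluxscale can be sketched numerically (the function below is illustrative, not the CASA implementation). Adopting the convention that a gain solved against a 1 Jy model is too large by sqrt(S_true), enforcing equal mean gain amplitudes yields the transfer calibrator's flux density as the squared ratio of mean amplitudes (with the opposite sign convention the ratio simply inverts):

```python
import statistics

def fluxscale_sketch(ref_amps, transfer_amps):
    """Derive the transfer calibrator's flux density and the factor by
    which its gains must be rescaled, given mean gain amplitudes from a
    reference calibrator (known flux) and a transfer calibrator
    (assumed 1 Jy)."""
    ratio = statistics.fmean(transfer_amps) / statistics.fmean(ref_amps)
    derived_flux = ratio ** 2   # Jy
    scale = 1.0 / ratio         # multiply the transfer gains by this
    return derived_flux, scale

# Transfer-gain amplitudes 1.5x the reference imply a 2.25 Jy source.
flux, scale = fluxscale_sketch([1.0, 1.0, 1.0], [1.5, 1.5, 1.5])
```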

Note that the assertion that the gain solutions are independent of the calibrator includes the assumption that the gain amplitudes are not systematically time-dependent. While synthesis antennas are designed as much as possible to achieve this goal, in practice a number of effects conspire to frustrate it. When relevant, it is advisable to pre-apply gain curve and opacity corrections when solving for the 'G' solutions that will be fluxscaled. When the G solutions are essentially constant for each calibrator separately, the fluxscale operation is likely to be robust.

The fluxscale task can be executed on either 'G' or 'T' solutions, but it should only be used on one of these types if solutions exist for both. GSPLINE solutions do not yet support fluxscale.

If the reference and transfer fields were observed in different spectral windows, the refspwmap parameter may be used to achieve the scaling calculation across spectral window boundaries. The refspwmap parameter takes a vector of indices indicating the spectral window mapping for the reference fields, such that refspwmap(i)=j means that reference field amplitudes from spectral window j will be used for spectral window i.

fluxscale(vis='data.ms',
caltable='cal.G',                  # Select input table
fluxtable= 'cal.Gflx',             # Write scaled solutions to cal.Gflx
reference=['3C286'],               # 3C286 = flux calibrator
transfer=['0234+285', '0323+022'], # Select calibrators to scale
refspwmap=[0,0,0])                 # Use spwid 1 scaling for spwids 2 & 3


A similar example, mapping two reference spectral windows onto four:

fluxscale(vis='data.ms',
caltable='cal.G',                  # Select input table
fluxtable='cal.Gflx',              # Write scaled solutions to cal.Gflx
reference=['3C286'],               #  3C286 = flux calibrator,
refspwmap=[0,0,1,1],               #  select spwids for scaling,
transfer=['0234+285','0323+022'])  #  select calibrators to scale,


In this example, reference amplitudes from spectral window 1 will be used for spectral windows 1 and 2 and reference amplitudes from spectral window 2 will be used for spectral windows 3 and 4.
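The mapping itself is just an index lookup; a short sketch (illustrative, not CASA code) makes the convention concrete, here using the 0-based indices that appear in the parameter:

```python
def reference_amps_per_spw(refspwmap, ref_amp_by_spw):
    """refspwmap[i] = j means spectral window i is scaled using the
    reference-field amplitude measured in spectral window j."""
    return [ref_amp_by_spw[j] for j in refspwmap]

# With refspwmap=[0,0,1,1], spws 0 and 1 use the spw-0 reference
# amplitude, and spws 2 and 3 use the spw-1 reference amplitude.
print(reference_amps_per_spw([0, 0, 1, 1], [1.2, 1.4, 0.0, 0.0]))
```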

##### Resolved flux density calibrators

If the flux density calibrator is resolved, the assumption that it is a point source will cause solutions on outlying antennas to be biased in amplitude. In turn, the flux-density scaling step will be biased on these antennas as well. In general, it is best to use a resolved model for the calibrator as described in Section \ref{cal.models}, but if such a model is not available, it is important to limit the solution on the flux density calibrator to the subset of antennas with baselines short enough that the point-source assumption is valid. This can be done with antenna and uvrange selection when solving for the flux density calibrator. For example, suppose antennas 1 through 8 have mutual baselines short enough that the point-source assumption is valid, and we want to limit the solutions to baselines shorter than 15000 wavelengths. We can then assemble properly scaled solutions for the other calibrator as follows (note: specifying both an antenna and a uvrange constraint prevents antennas with only a small number of baselines within the specified uvrange from being included in the solution; such antennas would have poorly constrained solutions):

# first solve for gain solutions for the flux density calibrator
# (3C286 observed in field 1) using a subset of antennas
gaincal(vis='data.ms',
caltable='cal.G',              # write solutions to cal.G
msselect='FIELD_ID IN [0]'     # Select the flux density calibrator
' && ANTENNA1 IN [0:7]'    #  antennas 1-8 (adjacent strings
' && ANTENNA2 IN [0:7]',   #  concatenate),
uvrange=[0,15],                #  and limit uvrange to 0-15klambda
solint=90)                     #  on 90s timescales, write solutions
#  to table called cal.G

# now solve for other calibrator (0234+285 in field 2) using all antennas
#  (implicitly) and append these solutions to the same table
gaincal(vis='data.ms',
caltable='cal.G',              # write solutions to cal.G
msselect='FIELD_ID IN [1]',
solint=90,
append=True)                   # Set up to write to the same table

# now run fluxscale to adjust scaling
fluxscale(vis='data.ms',
caltable='cal.G',        # Input table with unscaled cal solutions
fluxtable='cal.Gflx',    # Write scaled solutions to cal.Gflx
reference='3C286',       # Use 3c286 as ref with limited uvrange
transfer='0234+285')     # Transfer scaling to 0234+285


The fluxscale calculation will be performed using only the antennas common to both fields, but the result will be applied to all antennas on the transfer field.

#### Bandpass calibration (bandpass: B, BPOLY)

##### B solutions

For channelized data, it is often desirable to solve for the gain variations in frequency as well as in time. Variation in frequency arises as a result of non-uniform filter passbands or other dispersive effects in signal transmission. It is usually the case that these frequency-dependent effects vary on timescales much longer than the time-dependent effects handled by types 'G' and 'T'. Thus, it makes sense to solve for them as a separate term: 'B'.

Calibration type 'B' differs from 'G' only in that it is determined for each channel in each spectral window. It is possible to solve for it as a function of time, but it is most efficient to keep the 'B' solving timescale as long as possible, and use 'G' or 'T' for rapid timescale variations.

'B' solutions are limited by the SNR available per channel, which may be quite small. It is therefore important that the data be coherent over the time-range of the 'B' solutions. As a result, 'B' solutions are almost always preceded by a 'G' or 'T' solve. In turn, if the 'B' solution improves the frequency domain coherence significantly, a 'G' or 'T' solution following it will be better than the original.

To solve for 'B' (on a very long timescale, i.e., constant for an observation shorter than 1 day), using a prior G solution:

bandpass(vis='data.ms',               # input data set
caltable='cal.B',            #
mode='channel',              # Use channels 3-57 (avoid end channels)
nchan=55,start=2,            #
msselect='FIELD_ID==0',      # Select bandpass calibrator (field 0)
gaintable='cal.G',           # Pre-apply gain solutions derived previously
solint=86400.,               # Set up a long timescale (assumes bandpass
refant=14)                   #               is constant over this length).


Note that the solution has only been obtained for the subset of 55 channels starting with channel 3. Explicit channel selection like this is only necessary if it is desired that some channels be avoided (e.g., end channels that may not be well-behaved). The default is to obtain a solution for every channel.

##### BPOLY solutions

For some observations, it may be the case that the SNR per channel is insufficient to obtain a usable per-channel 'B' solution. In this case it is desirable to solve instead for a best-fit functional form for each antenna using the BPOLY solver. The 'BPOLY' solver fits polynomials to the amplitude and phase of the calibrator visibilities as a function of frequency. Unlike ordinary 'B', a single common BPOLY solution will be determined for all spectral windows specified (or implicit) in the setdata execution. As such, it is usually most meaningful to select individual spectral windows for BPOLY solves, unless groups of adjacent spectral windows are known a priori to share a single continuous bandpass response over their combined frequency range (e.g., PdBI data). Currently, BPOLY solutions cannot be solved for in a time-dependent manner.

The 'BPOLY' solver requires a number of unique parameters. The degamp and degphase parameters indicate the polynomial degree desired for the amplitude and phase solutions. The maskcenter parameter is used to indicate the number of channels in the center of the band to avoid passing to the solution (e.g., to avoid Gibbs ringing in central channels for PdBI data).

For example, to solve for a BPOLY (5th order in amplitude, 7th order in phase), using data from field 2, with G corrections pre-applied:

bandpass(vis='data.ms',               # input data set
caltable='cal.BPOLY',        #
mode='channel',              # Use channels 3-57 (avoid end channels)
nchan=55,start=2,            #
msselect='FIELD_ID==0',      # Select bandpass calibrator (field 0)
bandtype='BPOLY',            # Select bandpass polynomials
degamp=5,                    # 5th order amp
degphase=7,                   # 7th order phase
gaintable='cal.G',           # Pre-apply gain solutions derived previously
refant=14)                   #


Note that all available spectral windows will be used to obtain a single solution spanning them all. If separate solutions for each spectral window are desired, solve for each separately, e.g., if there are 3 spectral windows (0,1,2):

bandpass(vis='data.ms',
caltable='cal.BPOLY.0',
mode='channel',
nchan=55,start=2,
msselect='FIELD_ID==0 && DATA_DESC_ID==0',
bandtype='BPOLY',
degamp=5,
degphase=7,
gaintable='cal.G',
refant=14)

bandpass(vis='data.ms',
caltable='cal.BPOLY.1',
mode='channel',
nchan=55,start=2,
msselect='FIELD_ID==0 && DATA_DESC_ID==1',
bandtype='BPOLY',
degamp=5,
degphase=7,
gaintable='cal.G',
refant=14)

bandpass(vis='data.ms',
caltable='cal.BPOLY.2',
mode='channel',
nchan=55,start=2,
msselect='FIELD_ID==0 && DATA_DESC_ID==2',
bandtype='BPOLY',
degamp=5,
degphase=7,
gaintable='cal.G',
refant=14)


Each solution is stored in a separate table. As a result, subsequent calibration operations must be undertaken for each spectral window separately.

The BPOLY solutions cannot yet be plotted using plotcal.


### Calibration Interpolation (accum)

Calibration solutions (most notably G or B) must be interpolated onto the timestamps of the science target observations. Currently, three time-dependent interpolation options are available for specification in the interp parameter of the accum task:

• [nearest] -- applies the calibration factor nearest in time to each datum.
• [linear] -- applies to each datum an amplitude and phase which are (separately) interpolated (or extrapolated) from the two nearest (in time) calibration solutions. For phase, it is assumed that there is no cycle ambiguity to consider, i.e., the direction of the smaller phase difference (necessarily always less than 180 degrees) between the two solutions is taken as the correct direction for interpolation.
• [aipslin] -- emulates the on-the-fly calibration interpolation of classic AIPS, where the amplitude is interpolated linearly (like linear), but the interpolated phase is determined from linear interpolation of the real and imaginary parts. This mode avoids the complication of determining the minimum phase ambiguity, but the result is decidedly non-linear in phase for interpolations over more than a few tens of degrees (as the phase difference between the interpolating solutions approaches 180 degrees, aipslin tends toward nearest for the phase interpolation).
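The difference between the linear and aipslin behaviors is easy to demonstrate numerically (illustrative functions, not the CASA implementation):

```python
import cmath, math

def interp_linear(p0_deg, p1_deg, w):
    """Linear phase interpolation along the smaller phase difference
    (w=0 gives the first solution, w=1 the second)."""
    d = (p1_deg - p0_deg + 180.0) % 360.0 - 180.0
    return p0_deg + w * d

def interp_aipslin(a0, p0_deg, a1, p1_deg, w):
    """AIPS-style: amplitude interpolated linearly, phase taken from
    linearly interpolated real and imaginary parts."""
    v = ((1 - w) * cmath.rect(a0, math.radians(p0_deg))
         + w * cmath.rect(a1, math.radians(p1_deg)))
    amp = (1 - w) * a0 + w * a1
    return amp, math.degrees(cmath.phase(v))

# A quarter of the way between phases of 0 and 160 degrees: linear
# gives 40 degrees, while aipslin gives under 10 degrees, illustrating
# how it tends toward 'nearest' for large phase differences.
print(interp_linear(0.0, 160.0, 0.25))
print(interp_aipslin(1.0, 0.0, 1.0, 160.0, 0.25))
```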

An example using accum to interpolate an existing table onto a new time grid:

accum(vis='ngc5921.ms',            #
incrtable='ngc5921.gcal',    # this is the table that will be interpolated
caltable='ngc5921_20s.gcal', # this will be the interpolated output table
accumtime=20.)               # timescale for sampling the output table
# Note: interp was not specified, so the default ('linear') is used


%BEGINFIGURE{label="fig:plotcal_interp" caption="plotcal: Display of the amplitude solutions for NGC 5921; original (top), interpolated solutions with 20s sampling (bottom)."}% %ENDFIGURE%

### Calibration Smoothing (smooth)

Calibration solutions (most notably G or B) may be smoothed over a longer time interval to reduce noise and outliers. The smoothing uses the smoothtime and smoothtype parameters to determine the new data points, which replace the previous points on the same time sampling grid as the tablein solutions. Two smoothtype options are currently supported: median and mean.
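The operation amounts to a sliding-window statistic on the existing solution grid. A minimal sketch in plain Python (illustrative; treating smoothtime as a window centered on each solution is an interpretation, not a statement of the CASA implementation):

```python
import statistics

def smooth_amplitudes(times, amps, smoothtime=1000.0, smoothtype='median'):
    """Replace each solution by the median (or mean) of all solutions
    within +/- smoothtime/2 of it, keeping the original time grid."""
    combine = statistics.median if smoothtype == 'median' else statistics.fmean
    half = smoothtime / 2.0
    return [combine([a for t2, a in zip(times, amps) if abs(t2 - t) <= half])
            for t in times]

times = [0.0, 10.0, 20.0, 30.0, 40.0]
amps = [1.0, 1.1, 5.0, 0.9, 1.0]     # one outlier at t=20
smoothed = smooth_amplitudes(times, amps, smoothtime=30.0)
# the outlier at t=20 is replaced by its window median, 1.1
```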

An example using the smooth task to smooth an existing table:

gaincal(vis='ngc5921.ms',
caltable='ngc5921_05s.gcal',
mode='channel',
nchan=55,start=2,step=1,
msselect='FIELD_ID <= 1',
solint=0.5)                #
# Note: we use the defaults for other parameters
plotcal('ngc5921_05s.gcal','amp')  # Plot the amplitudes
smooth(vis='ngc5921.ms',
tablein='ngc5921_05s.gcal', # input calibration table
caltable='ngc5921_sm.gcal', # output calibration table (smoothed)
smoothtime=1000.)           # use 1000 seconds for the smoothing time and
# the default smoothtype='median'
plotcal('ngc5921_sm.gcal','amp')   # Plot the amplitudes



%BEGINFIGURE{label="fig:plotcal_smooth" caption="plotcal: Display of the amplitude solutions for the short solution interval table (0.5 seconds; top) and the smoothed table using a smoothtime of 1000 seconds (bottom)."}% %ENDFIGURE%

### Incremental Calibration (accum)

It is occasionally desirable to solve for and apply calibration incrementally. This is the case when a calibration table of a certain type already exists (from a previous solve), an incremental solution of the same type and relative to the first is required, and it is not possible to recover the cumulative solution by a single solve.

Much of the time it is, in fact, possible to recover the cumulative solution. This is because the equation describing the solution for the incremental calibration (using the original solution) and the equation describing the solution for their product are fundamentally the same---the cumulative solution, if unique, must always be the same no matter what the initial solution is. One circumstance where an incremental solution is necessary is the case of phase-only self-calibration relative to a full amplitude and phase calibration already obtained (from a different field).

For example, a phase-only 'G' self-calibration on a target source may be desired to tweak the full amplitude and phase 'G' calibration already obtained from a calibrator. The initial calibration (from the calibrator) contains amplitude information, and so must be carried forward, yet the phase-only solution itself cannot (by definition) recover this information, as a full amplitude and phase self-calibration would. In this case, the initial solution must be applied while solving for the phase-only solution, then the two solutions combined to form a cumulative calibration embodying the net effect of both. In terms of the Measurement Equation, the net calibration is the product of the initial and incremental solutions.
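In terms of simple complex arithmetic, accumulation multiplies the existing and incremental gains for each antenna, so amplitudes multiply and phases add. A minimal sketch in plain Python (illustrative, not the CASA implementation):

```python
import cmath, math

def accumulate_gains(cumulative, incremental):
    """Form the net calibration as the per-antenna product of the
    existing (cumulative) and incremental complex gains."""
    return [c * i for c, i in zip(cumulative, incremental)]

g0 = [cmath.rect(2.0, math.radians(30.0))]   # initial amp+phase solution
dg = [cmath.rect(1.0, math.radians(-10.0))]  # phase-only increment
net = accumulate_gains(g0, dg)
# net amplitude stays 2.0; net phase is 30 - 10 = 20 degrees
```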

The analog of accum in classic AIPS is the use of CLCAL to combine a series of (incremental) SN calibration tables to form successive (cumulative) CL calibration tables.

Cumulative calibration tables also provide a means of generating carefully interpolated calibration, on variable user-defined timescales, that can be examined prior to application to the data with correct. The solutions for different fields and/or spectral windows can be interpolated in different ways, with all solutions stored in the same table.

The only difference between incremental and cumulative calibration tables is that incremental tables are generated directly from the calibration solving tasks (gaincal, bandpass, etc.), and cumulative tables are generated from other cumulative and incremental tables via accum. In all other respects (internal format, application to data with correct, plotting with plotcal, etc.), they are the same, and therefore interchangeable. Thus, accum and cumulative calibration tables need only be used when circumstances require it.

The accum task represents a generalization of the classic AIPS CLCAL model of cumulative calibration in that its application is not limited to accumulation of 'G' solutions (SN/CL tables in classic AIPS are the analog of 'G' (and, implicitly, 'T') in aips++). In principle, any basic calibration type can be accumulated (onto itself), as long as the result of the accumulation (matrix product) is of the same type. This is true of all the basic types except 'D'. Accumulation is currently supported for 'B', 'G', and 'T', and, in the future, 'F' (ionospheric Faraday rotation), 'J' (generic full-polarization calibration), fringe-fitting, and perhaps others. Accumulation of certain specialized types (e.g., 'GSPLINE', 'TOPAC') onto the basic types will be supported in the near future. The treatment of various calibrations derived from ancillary data (e.g., system temperatures, weather data, WVR), as they become available, will also make use of accum to achieve the net calibration.

Note that accumulation only makes sense if treatment of a uniquely incremental solution is required (as described above), or if a careful interpolation or sampling of a solution is desired. In all other cases, re-solving for the type in question will suffice to form the net calibration of that type. For example, the product of an existing 'G' solution and an amplitude and phase 'G' self-cal (solved with the existing solution applied) is equivalent to a full amplitude and phase 'G' self-cal (with no prior solution applied), as long as the timescale of this solution is at least as short as that of the existing solution.

The tablein parameter is used to specify the existing cumulative calibration table to which an incremental table is to be applied. Initially, no such table exists, and accum will generate one from scratch (on-the-fly), using the timescale (in seconds) specified by the parameter accumtime. These nominal solutions will be unit-amplitude, zero-phase (i.e., unit matrix) calibration, ready to be adjusted by accumulation according to the settings of other parameters. When accumtime is negative (the default), the table name specified in tablein must exist and will be used.

The incrtable parameter is used to specify the incremental table that should be applied to tablein. The calibration type of incrtable sets the type assumed in the operation, so tablein (if specified) must be of the same type. If it is not, accum will exit with an error message. (Certain combinations of types and subtypes will be supported by accum in the future.)

The caltable parameter is used to specify the name of the output table to write. If un-specified (or ''), then tablein will be overwritten. Use this feature with care, since an error here will require building up the cumulative table from the most recent distinct version (if any).

The field parameter specifies those field names in tablein to which the incremental solution should be applied. The solutions for other fields will be passed to caltable unaltered. If the cumulative table was created from scratch in this run of accum, then the solutions for these other fields will be unit-amplitude, zero-phase, as described above.

The calfield parameter is used to specify the fields to select from incrtable to use when applying to tablein. Together, use of field and calfield permit completely flexible combinations of calibration accumulation with respect to fields. Multiple runs of accum can be used to generate a single table with many combinations. In the future, a 'self' mode will be enabled that will simplify the accumulation of field-specific solutions.

The interp parameter is used to specify the interpolation type to use on the incremental solutions. The currently available interpolation types are 'nearest', 'linear', and 'aipslin'.

Here is an example:

# obtain G solutions from calibrator
gaincal(vis='ap366.sim',
caltable='cal.G0',
msselect='FIELD_ID IN [9,11]',
solint=300)

# obtain proper flux density scale
fluxscale(vis='ap366.sim',
caltable='cal.G0',
fluxtable='cal.G1',
reference='1328+307')

# generate cumulative table for target source on 20s timescale
accum(vis='ap366.sim',
tablein='',
incrtable='cal.G1',
caltable='cal.cG0',
field='0957+561',        #  calibrate target...
calfield='0917+624',     #  ...with calibrator
interp='linear',
accumtime=20)

# apply this calibration to target
correct(vis='ap366.sim',
gaintable='cal.cG0')

#    (image target with imager tool)

# phase-selfcal target on (faster) 60s timescale
gaincal(vis='ap366.sim',
caltable='cal.G2',
msselect='FIELD_ID IN [10]',
calmode='p',                   # phase-only
solint=60,
gaintable='cal.cG0')

# accumulate new solution onto existing one
accum(vis='ap366.sim',
tablein='cal.cG0',             # existing cumulative (input)
incrtable='cal.G2',            # new incremental (input)
caltable='cal.cG1',            # new cumulative (output)
field='0957+561',              # calibrate target...
calfield='0957+561',           #   ...with its own solutions
interp='linear')

# apply new cumulative solution to data
correct(vis='ap366.sim',
gaintable='cal.cG1')

#   (another round of imaging, etc.)



### Solution Examination (plotcal)

The plotcal task is available for examining solutions of all of the basic solvable types (G,T,B,D,M,MF,K). To plot amplitude or phase as a function of time for G (or T,M) solutions:

plotcal('cal.G','amp')
plotcal('cal.G','phase')


%BEGINFIGURE{label="fig:plotcal_G" caption="plotcal: Display of the amplitude and phase gain solutions (for all data)."}% %ENDFIGURE%

Similarly, to plot amplitude or phase as a function of channel for B solutions:

plotcal('cal.B','amp')
plotcal('cal.B','phase')


%BEGINFIGURE{label="fig:plotcal_B" caption="plotcal: Display of the amplitude and phase bandpass solutions (for all data)."}% %ENDFIGURE%

Note that if there is more than one timestamp in the B table, the user will be queried to interactively advance the plot to each timestamp. Other plot types are also available:

• [AMP] -- Amplitude vs. time or channel
• [PHASE] -- Phase vs. time or channel
• [RLPHASE] -- R-L phase vs. time
• [XYPHASE] -- X-Y phase vs. time
• [DELAY] -- Delay vs. time
• [DELAYRATE] -- Delay rate vs. time

Several selection parameters are available to obtain plots of subsets of solutions:

• [antennas] -- A list of antenna indices to plot (default: all antennas)
• [caldescids] -- A list of description ids (default: all)

The previous examples show the solutions for all antennas and spectral windows on a single plot. If per-antenna solution plots are desired, use multiplot=True, and specify the number of plots per page using nxpanel and nypanel. For example, to show five plots per page of B amplitude solutions, arranged vertically:

plotcal('cal.B','amp',antennaid=[5,6])    # show antenna 5,6 bandpass
plotcal('cal.B','amp',nypanel=5,          # default nxpanel=1
        multiplot=True)                   # draw separate plots
#Note: On the command line you are prompted as:
* Please use: cp.next() to see the next plot
*             cp.stopiter() will stop the plots


To show 6 plots per page of B amplitudes on a 3x2 grid:

plotcal('cal.B','amp',nxpanel=3,nypanel=2,# set up 3x2 grid
        multiplot=True)


%BEGINFIGURE{label="fig:plotcal_Bmulti" caption="plotcal: Display of a 3x2 grid of bandpass solutions, iterating over antenna identifier index."}% %ENDFIGURE%

For B, if there is more than one timestamp and multiplot=True, the antenna plots will be cycled through for each timestamp in turn.

### Application of the final calibration (correct)

After all relevant calibration types have been determined, they must be applied to the target source. For example, to apply G and B solutions to source fields 2,3,4:

correct(vis='ngc5921.ms',               # Visibility set to correct
                                        #  (data will be written to the
                                        #  CORRECTED_DATA column)
        msselect='FIELD_ID IN [2,3,4]', # restrict correction to specified fields
        gaintable='cal.Gflx',           # gain solutions to apply (time dependent)
        bptable='cal.B')                # bandpass solutions to apply (freq dependent)


Different detailed combinations of calibration application can be achieved by running this sequence more than once, and including specific field and/or spectral window selections in msselect, as appropriate. For example, to limit the G solutions applied to field 3 to those obtained from field 1 (otherwise, same as above):

correct(vis='ngc5921.ms',
        msselect='FIELD_ID IN [2,4]',
        gaintable='cal.Gflx',
        bptable='cal.B')
correct(vis='ngc5921.ms',
        msselect='FIELD_ID IN [3]',
        gaintable='cal.Gflx',
        gainselect='FIELD_ID==1',
        bptable='cal.B')


It is important to remember the relative nature of each calibration term. A term solved for in the presence of others is, in effect, residual to the others, and so must be used in combination with them (or new versions of them) in subsequent processing. At the same time, it is important to avoid isolating the same calibration effects in more than one term, e.g., by solving for both G and T separately (without applying the other), and then using them together. It is always a good idea to examine the corrected data after calibration (using plotxy to compare the raw ('data') and corrected ('corrected') visibilities).

### Visibility Model Fitting (uvmodelfit)

It is often desirable to fit simple analytic source component models directly to visibility data. Such fitting has its origins in early interferometry, especially VLBI, where arrays consisted of only a few antennas and the calibration and deconvolution problems were poorly constrained. These methods overcame the calibration uncertainties by fitting the models to calibration-independent closure quantities and the deconvolution problem by drastically limiting the number of free parameters required to describe the visibilities. Today, even with larger and better calibrated arrays, it is still desirable to use visibility model fitting in order to extract geometric properties such as the positions and sizes of discrete components in radio sources. Fits for physically meaningful component shapes such as disks, rings, and optically thin spheres, though idealized, enable connecting source geometry directly to the physics of the emission regions.

Visibility model fitting is controlled entirely by the uvmodelfit task, which allows fits for a single component point or Gaussian. The user specifies the number of non-linear solution iterations (niter), the component type (comptype), an initial guess for the component parameters (sourcepar), and optionally, a vector of Booleans selecting which component parameters should be allowed to vary (fixpar), and a filename in which to store a CASA componentlist for use in other applications (file). The function returns a vector containing the resulting parameter list. This vector can be edited at the command line, and specified as input (sourcepar) for another round of fitting.

The sourcepar parameter is currently the only way to specify the starting parameters for the fit. For points, there are three parameters: I (total flux density), and relative direction (RA, Dec) offsets (in arcsec) from the observation's phase center. For Gaussians, there are three additional parameters: the Gaussian's semi-major axis width (arcsec), the aspect ratio, and position angle (degrees). It should be understood that the quality of the result is very sensitive to the starting parameters provided by the user. If this first guess is not sufficiently close to the global chi-squared minimum, the algorithm will happily converge to an incorrect local minimum. In fact, the chi-squared surface, as a function of the component's relative direction parameters, has a shape very much like the inverse of the absolute value of the dirty image of the field. Any peak in this image (positive or negative) corresponds to a local minimum that could conceivably capture the fit. It is the user's responsibility to ensure that the correct minimum does the capturing.

Currently, uvmodelfit relies on the likelihood that the source is very near the phase center (within a beamwidth) and/or the user's savvy in specifying the starting parameters. This fairly serious constraint will soon be relieved somewhat by enabling a rudimentary form of uv-plane weighting to increase the likelihood that the starting guess is on a slope in the correct valley.
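To see why the starting guess matters, the fit can be mimicked outside CASA in ordinary Python. The sketch below is hypothetical (model_vis, the uv ranges, and the grid search are illustrative, not the uvmodelfit internals): it fits a point-source model V(u,v) = I*exp(2*pi*i*(u*dx + v*dy)) to synthetic noisy visibilities by scanning candidate offsets and solving for the flux linearly at each trial position:

```python
import cmath, math, random

def model_vis(u, v, flux, dx, dy):
    # Point-source visibility: flux times a phase ramp set by the
    # (dx, dy) offset from the phase center (offsets in radians).
    return flux * cmath.exp(2j * math.pi * (u * dx + v * dy))

random.seed(1)
ARCSEC = math.pi / 180.0 / 3600.0           # arcsec -> radians
true_flux, true_dx, true_dy = 2.47, 0.5 * ARCSEC, -0.3 * ARCSEC

# Synthetic uv samples (in wavelengths) with a little Gaussian noise
uvdata = []
for _ in range(300):
    u = random.uniform(-5e4, 5e4)
    v = random.uniform(-5e4, 5e4)
    vis = model_vis(u, v, true_flux, true_dx, true_dy)
    vis += complex(random.gauss(0, 0.02), random.gauss(0, 0.02))
    uvdata.append((u, v, vis))

def chi2(flux, dx, dy):
    # Sum of squared residuals between data and model
    return sum(abs(vis - model_vis(u, v, flux, dx, dy)) ** 2
               for u, v, vis in uvdata)

# Coarse grid search over (dx, dy); for each trial position the
# best-fit flux is linear: flux = <Re(V * conj(unit phase ramp))>.
best = None
for i in range(21):
    for j in range(21):
        dx = (-1.0 + 0.1 * i) * ARCSEC
        dy = (-1.0 + 0.1 * j) * ARCSEC
        flux = sum((vis * model_vis(u, v, 1.0, dx, dy).conjugate()).real
                   for u, v, vis in uvdata) / len(uvdata)
        c = chi2(flux, dx, dy)
        if best is None or c < best[0]:
            best = (c, flux, dx, dy)

_, fit_flux, fit_dx, fit_dy = best
print(round(fit_flux, 2), round(fit_dx / ARCSEC, 2), round(fit_dy / ARCSEC, 2))
```

A finer grid or a nonlinear refinement step would sharpen the solution; the point is that chi-squared is evaluated on a surface whose local minima mirror peaks in the dirty image, so the search must start (or land) in the correct valley.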

Improvements in the works for visibility model fitting include:

• User-specifiable uv-plane weighting
• Additional component shapes, including elliptical disks, rings, and optically thin spheroids
• Optional calibration pre-application
• Multiple components. The handling of more than one component depends mostly on efficient means of managing the list itself (not easy in command-line options), which are currently under development.
• Combined component and calibration fitting.

Example (see the modelfit figure below):

#
# Note: It's best to channel average the data before running a modelfit
#

split('data.ms',                  # split data from your ms
      outputvis='1445_avg.ms',    # split to a new file containing just the gain cal
      fieldid=1,                  # 2nd source = 1445
      nchan=1,                    # get a single channel that is the
      start=5,                    # average of 55 channels starting
      step=55,                    # at channel 6
      datacolumn='CORRECTED_DATA')# split the corrected data

uvmodelfit('1445_avg.ms',          # use averaged data
           niter=5,                # Do 5 iterations
           comptype='P',           # P=Point source, G=Gaussian, D=Disk
           sourcepar=[2.0,.1,.1],  # Source parameters for a point source
                                   # [flux, long offset, lat offset]
           nchan=-1,start=0,step=1,# reset to defaults
           file='gcal.cl')         # Output component list file
# Initial guess is that it's close to the phase center
# and has a flux of 2.0 (a priori we know it's 2.47)
#Output looks like:
CASA <25>: uvmodelfit('1445_avg.ms/',niter=5,comptype='P',sourcepar=[2.0,.1,.1],file='gcal.cl',nchan=-1,step=1,start=0)
Tue Dec 12 23:02:05 2006      WARN Calibrater::setdata:
Selection is empty: reverting to sorted MeasurementSet
There are 19656 - 3 = 19653 degrees of freedom.
iter=0:   reduced chi2=0.0413952:  I=2,  dir=[0.1, 0.1] arcsec
iter=1:   reduced chi2=0.0011285:  I=2.48495,  dir=[-0.0265485, -0.0189735] arcsec
iter=2:   reduced chi2=0.00112653:  I=2.48547,  dir=[-0.00196871, 0.00409329] arcsec
iter=3:   reduced chi2=0.00112653:  I=2.48547,  dir=[-0.00195744, 0.00411176] arcsec
iter=4:   reduced chi2=0.00112653:  I=2.48547,  dir=[-0.00195744, 0.00411178] arcsec
iter=5:   reduced chi2=0.00112653:  I=2.48547,  dir=[-0.00195744, 0.00411178] arcsec
If data weights are arbitrarily scaled, the following formal errors
will be underestimated by at least a factor sqrt(reduced chi2). If
the fit is systematically poor, the errors are much worse.
I = 2.48547 +/- 0.0172627
x = -0.00195744 +/- 0.159619 arcsec
y = 0.00411178 +/- 0.170973 arcsec
Writing componentlist to file: /Users/jmcmulli/ALMA/TST5/Regression/Scripts/gcal.cl

#Looks reasonable - got the right flux around the phase center
#chi2 went down: expect chi2 = 2*number of visibilities/number of degrees of freedom
# degrees of freedom = 3 for point source (flux and long,lat offsets)
# Now use the component list to generate model data

ft('1445_avg.ms',
   complist='gcal.cl')            # Fourier transform the component list -
                                  # this writes it into the MODEL_DATA column
                                  # of the MS
#
plotxy('data.ms',
       xaxis='uvdist',            # Plot data versus uv distance
       fieldid=1,                 # Select 1445+0990
       datacolumn='corrected')    # Plot corrected data
plotxy('1445_avg.ms',             # Overplot the model written by ft above
       xaxis='uvdist',            #
       datacolumn='model',        # Select the MODEL_DATA column
       overplot=True,             # Specify overplot
       plotsymbol='bo')           # Specify blue circles for model data


%BEGINFIGURE{label="fig:modelfit" caption="Use of plotxy to display corrected data (red points) and uv model fit data (blue circles)."}% %ENDFIGURE%

### Prepare uv data for imaging

#### Examine calibrated source data

Once the source data is calibrated you should examine the $uv$ data and flag anything that looks bad. If you find source data that is not bracketed by calibration scans, delete it (it will not be calibrated). For example:

flagxy('data.ms','uvdist','amp')              # Display data for flagging


See Chapter on Data Editing for descriptions of how to display/edit the data in xy-plot format.

### uv Continuum subtraction (contsub)

At this point, consider whether you are likely to need continuum subtraction. If there is significant continuum emission present in what is intended as a spectral line observation, continuum subtraction may be desirable. You can estimate and subtract continuum emission in the $uv$-plane prior to imaging or wait and subtract an estimate of it in the image-plane. Note that neither method is ideal, and the choice depends primarily upon the distribution and strength of the continuum emission. Subtraction in the uv-plane is desirable if continuum emission dominates the source, since deconvolution of the line emission will be more robust if not subject to errors in deconvolution of the brighter continuum. There is also a performance benefit since the continuum is probably the same in each channel of the observation, and it is desirable to avoid duplication of effort. However, the main drawback of subtraction in the uv-plane is that it is only strictly correct for the phase center, since without the Fourier transform, the visibilities only describe the phase center. Thus, uv-plane continuum subtraction will be increasingly poor for emission distributed further from the phase center. If the continuum emission is relatively weak, it is usually adequate to subtract it in the image plane; this is described in the Image Analysis section of this cookbook. Here, we describe how to do continuum subtraction in the uv-plane.

uv-plane continuum subtraction is performed by the contsub task. First, determine which channels in your data cube do not have line emission, perhaps by forming a preliminary image as described in the next chapter. This image will also help you decide whether or not you need to come back and do uv-plane continuum subtraction at all.

For each baseline, and over the timescale specified in solint, contsub will provide a simple linear fit to the real and imaginary parts of the (continuum-only) channels specified in channels, and subtract this model from all channels. Choose the timescale to be shorter than the timescale for changes in the visibility function of the continuum, but be careful not to make it so short that the SNR of the estimated continuum suffers substantially. For example:

contsub(vis='data.ms',                   # Select visibility data set
        fieldid=2,                       # Choose the source data (field 3),
        spwid=0,                         #  spectral window 1,
        chans=range(4,7)+range(50,60),   #  line-free channels 4-6 & 50-59,
        solint=0.0,                      #  estimate/subtract per scan
        mode='subtract');                #  & subtract cont. from line data.
                                         # Line-only data will be written into
                                         # the CORRECTED_DATA column.


Running contsub with mode='subtract' will replace the CORRECTED_DATA column in the MS with continuum-subtracted line data and the MODEL_DATA column with the continuum model. You can use mode='replace' to replace the CORRECTED_DATA column with the continuum model; however, it is probably better to use mode='subtract' and then use split to select the MODEL_DATA and form a dataset appropriate for forming an image of the estimated continuum. Note that a continuum image formed from this model will only be strictly correct near the phase center, for the reasons described above.

Note that it is currently the case that contsub will overwrite the CORRECTED_DATA column. Therefore, it is desirable to first split the relevant corrected data into a new Measurement Set. Also, contsub requires that the MODEL_DATA and CORRECTED_DATA columns exist, and split does not generate them. These columns are generally created when filling to an MS but can also be generated by the imager tool when forming the exploratory image described above. If you run contsub on the original dataset, you will have to re-apply the calibration as described in the previous chapter.
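The per-baseline arithmetic described above can be sketched in ordinary Python (a hypothetical standalone illustration, not the CASA implementation): fit a straight line in channel number to the real and imaginary parts of the line-free channels, then subtract that line from every channel.

```python
def fit_line(x, y):
    # Ordinary least-squares fit of y = a + b*x via the normal
    # equations (stdlib only).
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def contsub_baseline(spectrum, line_free):
    # 'spectrum': complex visibility per channel for one baseline and
    # one solution interval; 'line_free': channels used to estimate
    # the continuum (cf. the 'chans' parameter of contsub).
    x = [float(c) for c in line_free]
    re_a, re_b = fit_line(x, [spectrum[c].real for c in line_free])
    im_a, im_b = fit_line(x, [spectrum[c].imag for c in line_free])
    cont = [complex(re_a + re_b * c, im_a + im_b * c)
            for c in range(len(spectrum))]
    line = [s - m for s, m in zip(spectrum, cont)]
    return line, cont

# Toy spectrum: a linear continuum plus a "line" in channels 20-30
spec = [complex(1.0 + 0.01 * c, 0.5 - 0.005 * c) for c in range(63)]
for c in range(20, 31):
    spec[c] += 2.0
line_free = list(range(4, 7)) + list(range(50, 60))
line, cont = contsub_baseline(spec, line_free)
print(abs(line[5]), abs(line[25]))   # ~0 in a continuum channel; ~2 in the line
```

In contsub itself this fit is done per baseline and per solution interval (solint); the fitted model is the continuum estimate that ends up in the MODEL_DATA column.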

So, the recommended procedure is as follows:

• Finish calibration as described in the previous chapter
• Use split to form a separate dataset
• Use the imager tool on the split result to form an exploratory image that is useful for determining the line-free channels
• Use contsub with mode='subtract' to subtract the continuum from the CORRECTED_DATA in the split dataset (and write the continuum model in the MODEL_DATA column)
• Image the line-only emission with the imager tool
• If an image of the estimated continuum is desired, run split again (on the *contsub*'d dataset) and select the MODEL_DATA; then run imager to image it

### Optional: Split out calibrated uv data (split)

split(vis='ngc5921.ms',                 # input MS
      outputvis='ngc5921_src.split.ms', # Output just the source data
      fieldid=2,                        # Select the third source (0-based)
      spwid=0,                          # Select the first spectral window
      nchan=46,                         # Select a subset of the data channels
      step=1,                           #
      datacolumn='CORRECTED_DATA')      # Take the calibrated data column


### Baseline-based Calibration (M, MF, K) -- Tool-based

Some calibration cannot be factored into antenna-based factors. Such effects occur physically after the signals from each antenna have been combined in the correlator. These may occur as a consequence of finite time- and frequency-averaging over other calibration factors before they are corrected. Any loss of coherence in such averaging gives rise to residual baseline-based errors which depend on the details of the combination of antenna-based errors on the baseline. Also, if only a single baseline is available, a single calibration solution for that baseline is obtainable, and short of additional information, it simply cannot be factored into unique terms for each antenna. Therefore, such situations necessarily require baseline-based calibration solutions.

It is important to note that source structure also introduces baseline-based information in the visibilities, and so it can be difficult to distinguish baseline-based errors from the information in which we are interested, namely, the astronomical visibilities (which are imaged). Therefore, baseline-based calibration factors should be used with great care, to avoid changing the desired astrophysical information in ways that cannot be determined. As indicated below, there are some calibration circumstances where such extreme measures are warranted (e.g., resolved bandpass calibrators), but the careful observer should always consider how different calibration factors affect the data, and this is especially important for baseline-based factors.

#### M, MF solutions (Generic Baseline-based Gain)

The calibration types M and MF provide the baseline-based analogs of G and B, respectively. M provides for (per-spectral window) time-dependent gain calibration for each baseline independently. MF provides for a frequency-dependent (within each spectral window) version of M. One or the other of these types can be used to compensate for any residual closure errors, usually after all antenna-based calibrations have converged. Since these types can absorb legitimate visibility information from the data, they should be used with great care, and usually only on sources for which there is no doubt about the source structure. It is therefore largely meaningless to use them in self-calibration---they will only reinforce the current source model, and can even artificially absorb thermal noise in the data.

To solve for M on an hourly timescale, using previously determined G and B solutions:

cb.reset()                               # Reset the calibrater tool
cb.setdata(msselect='FIELD_ID IN [0,1]') # Restrict data selection
cb.setapply(type='G',table='cal.G')      # Apply existing G solution
cb.setapply(type='B',table='cal.B')      # Apply existing B solution

cb.setsolve(type='M',table='cal.M',      # Setup to solve for M on
            t=3600)                      #  an hourly timescale;
                                         #  write solutions to a table on
                                         #  disk called 'cal.M'
cb.solve()                               # Solve


Note that refant is, of course, meaningless for baseline-based calibration types.

To apply these solutions, along with G and B:

cb.reset()                                  # Reset calibrater tool
cb.setdata(msselect='FIELD_ID IN [0,1,2]')  # Restrict data selection
cb.setapply(type='G',table='cal.G')         # Apply G solutions
cb.setapply(type='B',table='cal.B')         # Apply B solutions
cb.setapply(type='M',table='cal.M')         # Apply M solutions
cb.correct()                                # Correct data and write to
#  CORRECTED_DATA column in MS


Use of MF is essentially identical, except that it will probably be used on even longer timescales than M (as for B, compared to G).

An M solution can be especially useful for obtaining normalized bandpass solutions from significantly resolved calibrators where a source model adequate for pre-B G-solving is not available. Ordinarily, if the bandpass calibrator is nearly point-like, we first solve for G (using a point-like model), for example:

cb.reset()                                # Reset the calibrater tool
cb.setdata(msselect='FIELD_ID==0')        # Select the bandpass calibrator
cb.setsolve(type='G',table='cal.G',t=60)  # Solve for G on a short timescale
cb.solve()                                # Solve

Then, use this G to solve for B (using the same model):

cb.reset()
cb.setdata(msselect='FIELD_ID==0')
cb.setapply(type='G',table='cal.G')       # Apply the G solution just obtained
cb.setsolve(type='B',table='cal.B',t=0)   # Solve for B per scan
cb.solve()

However, we will get (at best) awkwardly scaled and/or poorly converged B solutions if our source model is not sufficient for either of these solutions. In this circumstance, we can use M to absorb both time-dependent gain variations during the bandpass calibration scan and the source structure. First solve for M, using a point-like model:

cb.reset()
cb.setdata(msselect='FIELD_ID==0')
cb.setsolve(type='M',table='cal.M',t=60)  # Solve for M on a short timescale
cb.solve()

Then, use this solution to solve for B:

cb.reset()
cb.setdata(msselect='FIELD_ID==0')
cb.setapply(type='M',table='cal.M')       # Apply the M solution just obtained
cb.setsolve(type='B',table='cal.B',t=0)   # Solve for B per scan
cb.solve()

The resulting B solution is nearly as good as one obtained with a good, resolved source model and G. It is somewhat less sensitive because more free parameters have been introduced, but this technique can often be useful. Note that the data for the bandpass calibrator, after correction by B and M, will be that of a point source, since M has absorbed the source structure information. This information has been sacrificed for a decent bandpass calibration applicable to other sources.

#### K solutions (Baseline-based fringe-fitting)

Fringe-fitting is essentially a generalization of ordinary phase calibration (the phase part of G, B, M, and/or MF) in which coefficients of the Taylor expansion of the phase in time and frequency are solved for. Usually, the expansion is only taken to first order in time and frequency. In this case, the phase can be written:

phi(t,nu) = phi_0 + 2*pi*[ (nu - nu_0)*tau + nu_0*(t - t_0)*tau_dot ]

In this equation, t_0 and nu_0 are a fiducial time and frequency within the solution interval (where the phase is phi_0), and tau and tau_dot are the delay (phase slope in frequency) and delay-rate (phase slope in time), respectively.

Note: As written, this equation defines the group delay (by virtue of referring the frequency to the fiducial value nu_0), where the delay-rate is not necessarily the time derivative of the delay. For most current observations, this is the case simply because the noise on the delay solution far exceeds the variation implied by the delay-rates. The equation can also be written in terms of the phase delay, where the nu_0 is dropped from the second term (so that the factor (nu - nu_0) becomes nu), and the delay-rate is exactly the derivative of the delay. This mode will be useful for the wide-bandwidth instruments of the near future, and will be an option available for users of K.

Evidently, fringe-fitting permits more accurate phase tracking in both time and frequency when delay and delay-rate errors are large. Such errors originate in independent clock and position errors of antennas and in errors in the direction to radio sources. Fringe-fitting is normally associated with VLBI because these clock and geometry errors are more significant there than for smaller connected-element arrays, but the wide bandwidths and high frequencies of future instruments will likely find fringe-fitting useful.

The current implementation of fringe-fitting in CASA is baseline-based and quite simple. It solves the above equation for the phase, delay, and delay-rate on each baseline independently, assuming less than one cycle of phase over the bandwidth and duration of the solution interval.

Note: Solutions outside the first ambiguity require implementation of a first-guess mechanism (an FFT) to reduce the phase to within the first ambiguity. This is still under development.

A separate phase and delay is determined for each polarization, but only one rate is determined for both.

Note: This is almost always justified since the phase and delay correspond to signal path lengths, and the delay rate corresponds to the speed of the signal along the path. The phase and delay are likely to be different for different polarizations since these signals traverse different electronics with different path lengths, but the speed at which the signal travels is the same. In any case, this constraint will be relaxable in a future version.
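Within the first ambiguity, the first-order expansion above is linear in the phase offset, delay, and delay-rate, so a toy solver needs nothing more than linear least squares. The following stdlib-Python sketch is hypothetical (the names, band, and solver are illustrative, not the actual K implementation):

```python
import math

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_fringe(samples, t0, nu0):
    # samples: list of (t, nu, phase) with phases already within the
    # first ambiguity, as the text assumes. Model:
    #   phase = phi0 + 2*pi*(nu - nu0)*tau + 2*pi*nu0*(t - t0)*rate
    rows = [(1.0, 2 * math.pi * (nu - nu0), 2 * math.pi * nu0 * (t - t0))
            for t, nu, _ in samples]
    # Normal equations A x = b for the three linear parameters
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * ph for r, (_, _, ph) in zip(rows, samples)) for i in range(3)]
    return solve3(A, b)   # phi0 [rad], delay tau [s], delay-rate [s/s]

# Synthetic single-baseline data: 8 MHz band around 1.4 GHz, 60 s scan
t0, nu0 = 0.0, 1.4e9
true = (0.3, 2.0e-9, 1.0e-13)            # phi0, tau, rate
samples = []
for ti in range(7):                      # 7 times x 9 channels
    for ci in range(9):
        t = t0 - 30.0 + 10.0 * ti
        nu = nu0 - 4e6 + 1e6 * ci
        ph = (true[0] + 2 * math.pi * (nu - nu0) * true[1]
              + 2 * math.pi * nu0 * (t - t0) * true[2])
        samples.append((t, nu, ph))
phi0, tau, rate = fit_fringe(samples, t0, nu0)
print(round(phi0, 3), tau, rate)
```

A real fringe fitter must first locate the correct ambiguity (e.g., via the FFT first-guess mechanism mentioned above) before a linear solve like this is valid.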

To obtain a scan-based K solution, with an existing MF solution applied:

cb.setdata(msselect='FIELD_ID==0')        # select data
cb.setapply(type='MF',table='cal.MF')     # arrange apply of MF
cb.setsolve(type='K',table='cal.K',t=0)   # arrange solve for K
cb.solve()                                # solve


Application is like any other type:

cb.setdata(msselect='FIELD_ID IN [0,1]')  # select data
cb.setapply(type='MF',table='cal.MF')     # arrange apply of MF
cb.setapply(type='K',table='cal.K')       # arrange apply of K
cb.correct()                                # correct


Note that only nearest interpolation is available for application of K. This will be remedied in the near future.

-- MarkClaussen - 23 Feb 2007
Topic revision: r15 - 2013-02-13, PatrickMurphy
