CASA Analysis Cookbook - Tool based

Version: September 2006, draft


Introduction -- Getting Started

This document describes how to calibrate and image interferometric data using the CASA (Common Astronomy Software Application) package. CASA is a suite of astronomical data reduction tools that can be run via the IPython interface to Python; it is derived from the former AIPS++ (Astronomical Information Processing System in C++).

This cookbook is arranged to provide entry points for both novice and experienced users of CASA.

We recommend that novices start with Chapter 1 (Getting Started) and the Step-by-Step Guides in Chapter 7. These provide the best means of mapping your scientific knowledge of interferometric reduction to the CASA applications which provide the relevant functionality. In addition, the annotated example scripts in Chapter 10 provide further specific examples of using CASA to reduce data.

More experienced users may refer to Chapters 3 through 6, which provide additional details on the general use of the core synthesis reduction tools (data editing, calibration, imaging, data display and image analysis). Chapter 9 provides a one-line summary of the available functions within these tools.

Besides this cookbook, you may also want to access CASA documentation on the web at:

http://casa.nrao.edu

This will tell you what input parameters are available for each function, as well as provide guidelines for procedures not covered in this cookbook.

If you have experience with CLIC, AIPS, or MIRIAD software packages to reduce interferometric data, Appendices 1--3 provide CLIC-CASA, AIPS-CASA, and MIRIAD-CASA dictionaries which map common CLIC, AIPS and MIRIAD tasks/adverbs to their counterparts in CASA.

Testing Notes - ALMATST2006.01-4

  • Information on this test, downloading CASA, source information, baseline results for comparison, etc. are available at:

http://projectoffice.aips2.nrao.edu/ALMA2006.01/ALMA2006.01.html

  • For the single baseline test, the data are provided in MS (MeasurementSet) format and no filling is required.

  • The tools used in this test are: cb (calibrater), ms (measurementset, abbreviated MS) and mp (MS plot).

CASA Notes for ALMA2006.01-4

  • This is the first test using the CASA framework with Python scripting and IPython interface. Plotting performance through matplotlib is currently poor; we are still optimizing the code interactions with the Python plotting.

  • The calibration types M, MF, and K (baseline-based gain, baseline-based bandpass, and fringe fitting) have some temporary restrictions:
    • The fluxscale method is not supported for the baseline-based methods.
    • The accumulate method is not supported for the baseline-based methods.
    • Only the 'nearest' interpolation type can be used with baseline-based methods.

  • Flagging is disabled on multi-panel displays.
  • Flagging of calibration solutions is currently disabled.
  • me (measures), af (autoflag), and tb (tables) are only partially migrated to the new framework.
  • If you encounter any difficulties with overplotting, subplotting, etc., use the pl.clf() command to clear the figure and start over.

Installation

Obtaining CASA

CASA is distributed as a series of RPMs (http://www.rpm.org) for Linux Red Hat Enterprise 4. An installation script, load-casapy, can be downloaded from:

ftp://ftp.cv.nrao.edu/casa/development/load-casapy

Once this has been downloaded, the script must be executed by a user with root privilege. Note: root privilege is required due to a known problem with the RHE4 operating system; it is currently being worked on by Red Hat.

Execute the file as:

> ./load-casapy --root
starting download of casa-data-20050103-2ds.noarch.rpm...   complete
starting download of casa-data-base-20050103-2ds.noarch.rpm...   complete
starting download of casapy-19.1259-11ds.i386.rpm...   complete
starting download of casapy-shared-19.1259-11ds.i386.rpm...   complete
starting download of ccmtools-python-0.5.5-40ds.i386.rpm...   complete
starting download of ccmtools-shared-0.5.5-40ds.i386.rpm...   complete
starting download of cfitsio-2.510-21ds.i386.rpm...   complete
starting download of pgplot-5.2.2-6ds.i386.rpm...   complete
starting download of tix-8.1.4-98.i386.rpm...   complete
starting download of casapy-python24-2.4.2-3.i386.rpm...   complete
installed casapy-shared-19.1259-11ds.i386.rpm
installed ccmtools-shared-0.5.5-40ds.i386.rpm
installed cfitsio-2.510-21ds.i386.rpm
installed casapy-python24-2.4.2-3.i386.rpm
installed ccmtools-python-0.5.5-40ds.i386.rpm
installed casapy-19.1259-11ds.i386.rpm

CASA is installed in the /usr/lib/casapy and /usr/lib/casa directories (approximately 160 MB required). It contains versions of Python (2.4.2), IPython (0.7.1), matplotlib, Numeric, pytz and dateutil that are known to be compatible with CASA, and will not interact or interfere with the user's own versions of these.

Download and install times seem to run around 3 minutes.

Startup

This section assumes that CASA has been installed on your Linux or OS X system. _For NRAO-AOC testers, you should do the following on an AOC RHE4 machine:_

> . /home/basho/casa/framework/casa.sh
or
> source /home/basho/casa/framework/casa.csh

You should also have a copy of the following; these can be put into the appropriate directory in your home area or in the current working directory:

  • ipythonrc (in current directory or in $HOME/.ipython); Section \ref{append.ipythonrc}
  • matplotlibrc (in current directory or in $HOME/.matplotlib); Section \ref{append.matplotlibrc}

Then just type:

> casapy                # if you installed the RPMs then this is all that is needed

When you start casapy, you will see the following start-up information:

-bash-3.00$ casapy
___________________________________________________________
Available tools:

   ms (MS)               mp (MS plot)
   cb (calibrater)       cp (cal plot)   tb (table)            tp (table plot)
   im (imager)           af (autoflag)
   me (measures)

   pl: (pylab functions)
___________________________________________________________
Available tasks:
  data import : vlafiller, fitstoms
  imaging     : clean, feather, invert, mosaic, (wproj)

___________________________________________________________

type 'pdoc tool' or 'pdoc tool.function' for help
type 'quickhelp' for the available tool list

___________________________________________________________
Activating auto-logging.
Current session state plus future input saved to: ipython.log
Logging mode:  rotate

CASA [1]:

You should get an IPython command prompt in the xterm window where you started CASA. CASA will take approximately 10 seconds to initialize at startup in a new working directory; subsequent startups are faster. CASA is active when you get the 'CASA [1]' prompt in the command line interface.

%BEGINFIGURE{label='fig:logger', caption='Logger GUI window'}% (figure: casa_logger.jpg) %ENDFIGURE%

Your command line history is automatically maintained and stored in the local directory as ipython.log. In addition, logging of output from commands is sent to the file casapy.log, also in your local directory; this is brought up automatically. The CASA command casalog will bring up an xterm that enables viewing of the log.

Interface

CASA uses Python, IPython and matplotlib within the package.

"Python is an interpreted, interactive, object-oriented programming language" (from www.python.org); it is used as the underlying command line interface/scripting language to CASA . Note: since Python is inherently 0-based in its indexing of arrays, vectors, etc, CASA is also 0-based; any Index inputs (e.g., start (for start channel), fieldIndex, antennaID, etc) will start with 0.

IPython is an enhanced, interactive shell to Python which provides many features for efficient command line interaction.

matplotlib is a Python 2-D plotting library for publication quality figures in different hardcopy formats.

Some key links are:

  • Python: http://www.python.org
  • IPython: http://ipython.scipy.org
  • matplotlib: http://matplotlib.sourceforge.net

Each of the features of these components behaves in the standard way within CASA. In the following sections, we outline the key elements for analysis interactions; see the IPython page for the full suite of functionality.

On-line help

  • TAB key

At any time, hitting the <TAB> key will complete any available commands or variable names and show you a list of the possible completions if there is no unambiguous result. It will also complete filenames in the current directory if no CASA or Python names match.

For example, the <TAB> key can be used to list the available functionality for a given tool, using minimum match as appropriate (NOTE: functions beginning with '__' are for internal use).

CASA [8]: im.<TAB>
im.__class__         im.done              im.regionmask
im.__delattr__       im.exprmask          im.residual
im.__doc__           im.feather           im.restore
im.__getattribute__  im.filter            im.sensitivity
im.__hash__          im.fitpsf            im.setbeam
im.__init__          im.ft                im.setdata
im.__new__           im.linearmosaic      im.setimage
im.__reduce__        im.make              im.setjy
im.__reduce_ex__     im.makeimage         im.setmfcontrol
im.__repr__          im.makemodelfromsd   im.setoptions
im.__setattr__       im.mask              im.setscales
im.__str__           im.mem               im.setsdoptions
im.advise            im.nnls              im.setvp
im.approximatepsf    im.open              im.smooth
im.boxmask           im.pb                im.stop
im.clean             im.pixon             im.summary
im.clipimage         im.plotsummary       im.uvrange
im.clipvis           im.plotuv            im.weight
im.close             im.plotvis
im.correct           im.plotweights
CASA [8]: im.s<TAB>
im.sensitivity   im.setjy         im.setsdoptions  im.summary
im.setbeam       im.setmfcontrol  im.setvp
im.setdata       im.setoptions    im.smooth
im.setimage      im.setscales     im.stop
CASA [8]: im.sets<TAB>
im.setscales     im.setsdoptions
CASA [8]: im.open('ng<TAB>
ngc5921.G           ngc5921.ms          ngc5921.py          ngc5921_regTest.py

  • pdoc and ?

Basic information on an application, including the parameters used and their defaults, can be obtained by typing 'pdoc method' (pdoc = print documentation).

CASA [11]: pdoc im.setdata
      Set the data parameters selection for subsequent processing:
        mode      = velocity
        nchan     = [ 1 ]
        start     = [ 0 ]
        step      = [ 1 ]
        mstart    = { value=0.0, units=km/s }
        mstep     = { value=0.0, units=km/s }
        spwid     = [ 0 ]
        fieldid   = [ 0 ]
        msselect  =
        async     = false
      ----------------------------------------

Note that parameters that fall below the dashed line are output parameters (none in this example).

The '?' can be used as a shortcut to this (though it also prints out additional programming information on the application):

CASA [7]: im.setdata?
Type:           builtin_function_or_method
Base Class:     <type 'builtin_function_or_method'>
String Form:    <built-in method setdata of casac.imager object at 0xb7b4e100>
Namespace:      Interactive
Docstring:
    Set the data parameters selection for subsequent processing :
      mode      = velocity
      nchan     = [ 1 ]
      start     = [ 0 ]
      step      = [ 1 ]
      mstart    = { value=0.0, units=km/s }
      mstep     = { value=0.0, units=km/s }
      spwid     = [ 0 ]
      fieldid   = [ 0 ]
      msselect
      async     = false
    ----------------------------------------

  • help

Typing 'help' will give you the default Python help and the help> prompt for further information; hitting <RETURN> at the help prompt returns you to the CASA prompt. You can also get the short help for a CASA method by typing 'help tool.method' or 'help task'.

CASA [2]: help
---------> help()

Welcome to Python 2.4!  This is the online help utility.

If this is your first time using Python, you should definitely check out
the tutorial on the Internet at http://www.python.org/doc/tut/.

Enter the name of any module, keyword, or topic to get help on writing
Python programs and using Python modules.  To quit this help utility and
return to the interpreter, just type "quit".

To get a list of available modules, keywords, or topics, type "modules",
"keywords", or "topics".  Each module also comes with a one-line summary
of what it does; to list the modules whose summaries contain a given word
such as "spam", type "modules spam".
help> keywords

Here is a list of the Python keywords.  Enter any keyword to get more help.

and                 else                import              raise
assert              except              in                  return
break               exec                is                  try
class               finally             lambda              while
continue            for                 not                 yield
def                 from                or
del                 global              pass
elif                if                  print

help>

# hit <RETURN> to return to CASA prompt

You are now leaving help and returning to the Python interpreter.
If you want to ask for help on a particular object directly from the
interpreter, you can type "help(object)".  Executing "help('string')"
has the same effect as typing a particular string at the help> prompt.

CASA [2]: help mp.array
Help on built-in function array:

array(...)
    ***
    Plot antenna distribution in local reference frame.
    X - toward local east; Y - toward local north.
    *** :
       ----------------------------------------

  • quickhelp


To obtain the available tools and tasks, type:

CASA [19]: quickhelp
---------> quickhelp()
Available tools:

 ms : MeasurementSet (MS) utilities
 mp : MS plotting (data (amp/phase) versus other quantities)
 cb : Calibration utilities
 cp : Cal solution plotting utilities
 im : Imaging utilities
 af : Statistical flagging utilities
 tb : Table utilities (selection, extraction, etc)
 tp : Table plotting utilities
 me : Measures utilities
 ---
 pl : pylab functions (matplotlib, e.g., pl.title, etc)
 ---
Available tasks:


Data Import
-----------
vlafiller : fill VLA data into a MeasurementSet
fitstoms  : fill UVFITS format into a MeasurementSet

Imaging
-------
clean     : imaging using various clean algorithms
invert    : dirty beam/map construction
mosaic    : mosaic imaging
feather   : Combine images using feathering technique
 ---

  • User Reference Manual

The User Reference Manual has not yet been brought up to date for the Python interface. It can be useful for obtaining additional information on a given method (though the examples remain in Glish).

It can be reached at:

http://casa.nrao.edu/docs

Automatic parentheses

Automatic parentheses are enabled for calling functions with argument lists; this feature is intended to reduce typing in common situations. IPython will display its interpretation of the line beneath the one typed, as indicated by the '-------->'.

CASA [8]: mp.open 'ngc5921.ms'
--------> mp.open('ngc5921.ms')
  Out[8]: True

CASA [9]: mp.plotoptions plotsymbol='o'
Loading builtins
--------> mp.plotoptions(plotsymbol='o')
  Out[9]: True

CASA [10]: mp.array
---------> mp.array()
  Out[10]: True

*NOTE: Parentheses should not be omitted in scripts which are executed through 'execfile' or 'run -i'; IPython's autoparentheses feature only works at the command line and not in batch/scripting mode.*

Specifying arguments to functions

Arguments to functions can be specified either explicitly by name or by their order. For example, the following are equivalent:

mp.vistime(column='data',what='amp') # specify them by name
mp.vistime(what='amp',column='data') # you can put them in any order!
mp.vistime('data','amp')             # note if you don't specify the names of the arguments
                                     # they must be in order!
mp.vistime                           # this uses the autoparentheses and the argument defaults!

System shell access

Any input line beginning with a '!' character is passed verbatim (minus the '!', of course) to the underlying operating system (*the sole exception to this is the 'cd' command which must be executed without the '!'*).

Several common commands (ls, pwd, cd, less) may be executed with or without the '!'.

CASA [1]: pwd
/export/home/corsair-vml/jmcmulli/data
CASA [2]: ls n*
ngc5921.ms ngc5921.py
CASA [3]: !cp -r ../test.py .

In addition, filesystem navigation is aided through the use of bookmarks to simplify access to frequently-used directories:

CASA [4]: cd /home/ballista/jmcmulli/other_data
CASA [4]: pwd
/home/ballista/jmcmulli/other_data
CASA [5]: bookmark other_data
CASA [6]: cd /export/home/corsair-vml/jmcmulli/data
CASA [7]: pwd
/export/home/corsair-vml/jmcmulli/data
CASA [8]: cd -b other_data
(bookmark:data) -> /home/ballista/jmcmulli/other_data

Output from shell commands can be captured in two ways:

  1. sx shell_command, !!shell_command - this captures the output to a list

CASA [1]: sx pwd # stores output of 'pwd' in a list
  Out[1]: ['/home/basho3/jmcmulli/pretest']

CASA [2]: !!pwd  # !! is a shortcut for 'sx'
  Out[2]: ['/home/basho3/jmcmulli/pretest']

CASA [3]: sx ls v* # stores output of 'ls v*' in a list
  Out[3]:
['vla_calplot.jpg',
 'vla_calplot.png',
 'vla_msplot_cals.jpg',
 'vla_msplot_cals.png',
 'vla_plotcal_bpass.jpg',
 'vla_plotcal_bpass.png',
 'vla_plotcal_fcal.jpg',
 'vla_plotcal_fcal.png',
 'vla_plotvis.jpg',
 'vla_plotvis.png']

CASA [4]: x=_ # remember '_' is a shortcut for the output from the last command

CASA [5]: x
  Out[5]:
['vla_calplot.jpg',
 'vla_calplot.png',
 'vla_msplot_cals.jpg',
 'vla_msplot_cals.png',
 'vla_plotcal_bpass.jpg',
 'vla_plotcal_bpass.png',
 'vla_plotcal_fcal.jpg',
 'vla_plotcal_fcal.png',
 'vla_plotvis.jpg',
 'vla_plotvis.png']

CASA [6]: y=Out[2] # or just refer to the enumerated output

CASA [7]: y
  Out[7]: ['/home/basho3/jmcmulli/pretest']

  2. sc - captures the output to a variable; options are '-l' and '-v'

CASA [1]: sc x=pwd # capture output from 'pwd' to the variable 'x'

CASA [2]: x
  Out[2]: '/home/basho3/jmcmulli/pretest'

CASA [3]: sc -l x=pwd # capture the output from 'pwd' to the variable 'x' but
                      # split newlines into a list (similar to sx command)

CASA [4]: x
  Out[4]: ['/home/basho3/jmcmulli/pretest']

CASA [5]: sc -v x=pwd # capture output from 'pwd' to a variable 'x' and
                      # show what you get (verbose mode)
x ==
'/home/basho3/jmcmulli/pretest'

CASA [6]: x
  Out[6]: '/home/basho3/jmcmulli/pretest'

Logging

There are two components to logging within CASA. Logging of all command line inputs is done via IPython.

Upon startup, CASA will log all commands to a file called ipython.log. This file can be changed via the use of the ipythonrc file.

The following line sets up the logging for CASA. There are four options following the specification of the log file: 1) append, 2) rotate (each CASA session creates a new log file, with a counter incrementing as ipython.log.1~, ipython.log.2~, etc.), 3) over (overwrite the existing file), and 4) backup (rename the existing log file to log_name~).

logfile ./ipython.log append
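For example, to start a fresh, numbered log file for each session, use the rotate option instead:

logfile ./ipython.log rotate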

The command logstate will provide details on the current logging setup:

CASA [12]: logstate

File:   ipython.log
Mode:   append
State:  active

Logging can be turned on and off using the logon, logoff commands.

The second component is the output from applications which is directed to the file ./casapy.log.

Upon startup, the casalog command executes the following command:

/usr/X11R6/bin/xterm -sb -sl 1000 -fn 9x15 -geometry 80x12+0+88 -bg white -fg black \\
                     -title "CASA Log" -e "tail -f casapy.log" &

This provides an ASCII file record of the output of your session, which may be edited. The xterm tail display must be closed by the user when the CASA session is ended.

_Note: the logging is in a primitive state and is under development._

History and Searching

Numbered input/output history is provided natively within IPython. Command history is also maintained on-line.

CASA [11]: x=1

CASA [12]: y=3*x

CASA [13]: z=x**2+y**2

CASA [14]: x
  Out[14]: 1

CASA [15]: y
  Out[15]: 3

CASA [16]: z
  Out[16]: 10

CASA [17]: Out[14]   # Note: The 'Out' vector contains command output
  Out[17]: 1

CASA [18]: _15       # Note: The return value can be accessed by _number
  Out[18]: 3

CASA [19]: ___       # Note: The last three return values can be accessed as:
  Out[19]: 10        #       _, __, ___

Command history can be accessed via the 'hist' command. The history is reset at the beginning of every CASA session; that is, typing 'hist' when you first start CASA will not show any commands from the previous session. However, all of the commands are still available at the command line and can be accessed through the up and down arrow keys, and through searching.

CASA [22]: hist
1 : __IP.system("vi temp.py")  # Note:shell commands are designated in this way
2 : ipmagic("%run -i temp.py") # Note:magic commands are designated in this way
3 : ipmagic("hist ")
4 : more temp.py
5 : __IP.system("more temp.py")
6 : quickhelp()                # Note: autoparenthesis are added in the history
7 : im.open('ngc5921.ms')
8 : im.summary()
9 : ipmagic("pdoc im.setdata")
10: im.close()
11: quickhelp()
12: ipmagic("logstate ")
13: x=1
14: y=3*x
15: z=x**2+y**2
16: x
17: y
18: z
19: Out[16]
20: _17
21: ___

The history can be saved as a script or used as a macro for further use:

CASA [24]: save script.py 13:16
File `script.py` exists. Overwrite (y/[N])? y
The following commands were written to file `script.py`:
x=1
y=3*x
z=x**2+y**2
CASA [25]: !more script.py
x=1
y=3*x
z=x**2+y**2

CASA [26]:

Note that the history commands will be saved up to, but not including, the last value (i.e., the range 13:16 saves commands 13, 14, and 15).

There are two mechanisms for searching command history:

  1. Begin typing and use Ctrl-p (previous, up) and Ctrl-n (next, down) to search through only the history items that match what you have typed so far. If you use Ctrl-p or Ctrl-n at a blank prompt, they behave just like the normal arrow keys.

  2. Hit Ctrl-r; this opens a search prompt. Begin typing and the system searches your history for lines that contain what you have typed so far, completing what it can.

For example:

CASA [37]:

(reverse-i-search)`':

Typing anything after the colon will provide you with the last command matching the characters, for example, typing 'op' finds:

(reverse-i-search)`op': im.open('ngc5921.ms')

Subsequent hitting of Ctrl-r will search for the next command matching the characters.

Macros

Macros can be made for easy re-execution of previous commands. For example, to store commands 13-15 (given as the range 13:16) in the macro 'example':

CASA [31]: macro example 13:16
Macro `example` created. To execute, type its name (without quotes).
Macro contents:
x=1
y=3*x
z=x**2+y**2

CASA [32]: z
Out[32]: 10

CASA [33]: z=10

CASA [34]: example
Out[34]: Executing Macro...

CASA [35]: z
Out[35]: 10

CASA [36]:

On-line editing

You can edit files on-line in two ways:

  1. Using the shell access via '!vi'
  2. Using the ed function; this will edit the file, but upon closing it will try to execute the file. Using the 'script.py' example above:

CASA [13]: ed script.py # this will bring up the file in your chosen editor;
                        # when you are finished editing the file, it will
                        # automatically execute it (as though you had done
                        # execfile('script.py'))
Editing... done. Executing edited code...

CASA [14]: x
  Out[14]: 1

CASA [15]: y
  Out[15]: 3

CASA [16]: z
  Out[16]: 6

Executing Python scripts

Python scripts are simple text files containing lists of commands as if typed at the keyboard. Note: the autoparentheses feature of IPython cannot be used in scripts; that is, you should make sure all function calls have their opening and closing parentheses.

# file is script.py
# My script to load a data set and plot the observed visibilities
# versus uv distance
mp.open('ngc5921.ms')   # load data into msplot tool
mp.uvdist()             # plot observed data versus uv distance

This can be done in two ways:

  1. execfile: this will execute the script as though you had typed the lines at the CASA prompt.

CASA [5]: execfile 'script.py'
--------> execfile('script.py')

  2. run -i: this will do the same as execfile; the '-i' flag indicates that the script should be run within the existing process (which knows about all of the default CASA tool names, etc.). You should always use the '-i' flag when executing a script in this manner in CASA.
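For example (equivalent to the execfile call above):

CASA [6]: run -i script.py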

How do I exit from CASA?

Hit <CTRL-D> or type the following at the CASA command line prompt:

CASA>%Exit

and press return.

NOTE: This does not kill the open xterm showing the CASA log. Currently, this must be done manually.

Tools in CASA

The data reduction categories covered in this document are those for continuum and spectral-line data. The reduction sequences given in this document use the following CASA tools:

  • cb - calibrater tool - used to solve for the uv-plane calibration effects and to apply
calibration corrections to the observed data.
  • cp - calibration plotting tool - used to plot (and potentially edit) calibration solutions.
  • mp - MeasurementSet plotting tool - used for plotting of visibility data (AMP and PHASE) and for interactive flagging of such data. Each plotting method has a 'column' parameter
which specifies whether the tool should plot the data, model or corrected visibilities. The 'what' parameter specifies either 'amp' or 'phase'. It can be used to investigate the data between calibration cycles.
  • tb - table tool - used to access, summarize, manipulate data in CASA tables.
  • tp - table plotting tool - used for plotting of general table quantities. Advanced tool, not generally used in reduction of data.
  • im - imager tool - used to image data in a MeasurementSet(s) (e.g., Fourier transform,
deconvolve, restore).

CASA offers both tools (applications with fine-grained functionality for total control of the reduction process) and tasks (larger functions for performing commonly used applications).

An example of a tool is im, the imaging tool, which bundles all of the functionality needed for imaging operations; distinct functions are provided for data selection, setting of weighting, setting of imaging parameters, etc. An example of a task is clean, which will produce an image cube based on the inputs to the task.

The available tools and tasks can be listed at any time by typing quickhelp (the full listing is reproduced in the On-line help section above).

To use a tool, you must first load your data using the tool's open command:

CASA [2]: im.open('ngc5921.ms')
Mon Jan 2 18:35:28 2006    NORMAL Imager::open() (file /aips++/framework/code/synthesis/implement/MeasurementEquations/Imager.cc, line 378):
Opening MeasurementSet ngc5921.ms

Out[2]: True

All subsequent operations with the imager tool, im, will then correspond to the specified data set.

CASA [3]: im.summary()
Mon Jan 2 18:36:21 2006    NORMAL imager::Imager::summary() (file /aips++/framework/code/synthesis/implement/MeasurementEquations/Imager.cc, line 1000) "Logging summary"

Mon Jan 2 18:36:21 2006    NORMAL imager::Imager::summary() (file /aips++/framework/code/synthesis/implement/MeasurementEquations/Imager.cc, line 1000):

           MeasurementSet Name:  ngc5921.ms      MS Version 2

   Observer: TEST     Project:
Observation: VLA

Mon Jan 2 18:36:21 2006    NORMAL imager::Imager::summary() (file /aips++/framework/code/synthesis/implement/MeasurementEquations/Imager.cc, line 1000):
Data records: 22653       Total integration time = 5280 seconds
   Observed from   09:19:00   to   10:47:00

Mon Jan 2 18:36:21 2006    NORMAL imager::Imager::summary() (file /aips++/framework/code/synthesis/implement/MeasurementEquations/Imager.cc, line 1000):

   ObservationID = 1         ArrayID = 1
  Date        Timerange                Scan  FldId FieldName      DataDescIds
  13-Apr-1995/09:19:00.0 - 09:24:30.0     1      1 1331+30500002_0  [1]
              09:27:30.0 - 09:29:30.0     2      2 1445+09900002_0  [1]
              09:33:00.0 - 09:48:00.0     3      3 N5921_2        [1]
              09:50:30.0 - 09:51:00.0     4      2 1445+09900002_0  [1]
              10:22:00.0 - 10:23:00.0     5      2 1445+09900002_0  [1]
              10:26:00.0 - 10:43:00.0     6      3 N5921_2        [1]
              10:45:30.0 - 10:47:00.0     7      2 1445+09900002_0  [1]

Mon Jan 2 18:36:21 2006    NORMAL imager::Imager::summary() (file /aips++/framework/code/synthesis/implement/MeasurementEquations/Imager.cc, line 1000):
Fields: 3
  ID   Name          Right Ascension  Declination   Epoch
  1    1331+30500002_013:31:08.29      +30.30.32.96  J2000
  2    1445+09900002_014:45:16.47      +09.58.36.07  J2000
  3    N5921_2       15:22:00.00      +05.04.00.00  J2000

Mon Jan 2 18:36:21 2006    NORMAL imager::Imager::summary() (file /aips++/framework/code/synthesis/implement/MeasurementEquations/Imager.cc, line 1000):
Data descriptions: 1 (1 spectral windows and 1 polarization setups)
  ID  #Chans Frame Ch1(MHz)    Resoln(kHz) TotBW(kHz)  Ref(MHz)    Corrs
  1       63 LSRK  1412.68608  24.4140625  1550.19688  1413.44902  RR  LL

Mon Jan 2 18:36:21 2006    NORMAL imager::Imager::summary() (file /aips++/framework/code/synthesis/implement/MeasurementEquations/Imager.cc, line 1000):
Feeds: 28: printing first row only
  Antenna   Spectral Window     # Receptors    Polarizations
  1         -1                  2              [         R, L]

Mon Jan 2 18:36:21 2006    NORMAL imager::Imager::summary() (file /aips++/framework/code/synthesis/implement/MeasurementEquations/Imager.cc, line 1000):
Antennas: 27:
  ID   Name  Station   Diam.    Long.         Lat.
  1    1     VLA:N7    25.0 m   -107.37.07.2  +33.54.12.9
  2    2     VLA:W1    25.0 m   -107.37.05.9  +33.54.00.5
  3    3     VLA:W2    25.0 m   -107.37.07.4  +33.54.00.9
  4    4     VLA:E1    25.0 m   -107.37.05.7  +33.53.59.2
  5    5     VLA:E3    25.0 m   -107.37.02.8  +33.54.00.5
  6    6     VLA:E9    25.0 m   -107.36.45.1  +33.53.53.6
  7    7     VLA:E6    25.0 m   -107.36.55.6  +33.53.57.7
  8    8     VLA:W8    25.0 m   -107.37.21.6  +33.53.53.0
  9    9     VLA:N5    25.0 m   -107.37.06.7  +33.54.08.0
  10   10    VLA:W3    25.0 m   -107.37.08.9  +33.54.00.1
  11   11    VLA:N4    25.0 m   -107.37.06.5  +33.54.06.1
  12   12    VLA:W5    25.0 m   -107.37.13.0  +33.53.57.8
  13   13    VLA:N3    25.0 m   -107.37.06.3  +33.54.04.8
  14   14    VLA:N1    25.0 m   -107.37.06.0  +33.54.01.8
  15   15    VLA:N2    25.0 m   -107.37.06.2  +33.54.03.5
  16   16    VLA:E7    25.0 m   -107.36.52.4  +33.53.56.5
  17   17    VLA:E8    25.0 m   -107.36.48.9  +33.53.55.1
  18   18    VLA:W4    25.0 m   -107.37.10.8  +33.53.59.1
  19   19    VLA:E5    25.0 m   -107.36.58.4  +33.53.58.8
  20   20    VLA:W9    25.0 m   -107.37.25.1  +33.53.51.0
  21   21    VLA:W6    25.0 m   -107.37.15.6  +33.53.56.4
  22   22    VLA:E4    25.0 m   -107.37.00.8  +33.53.59.7
  24   24    VLA:E2    25.0 m   -107.37.04.4  +33.54.01.1
  25   25    VLA:N6    25.0 m   -107.37.06.9  +33.54.10.3
  26   26    VLA:N9    25.0 m   -107.37.07.8  +33.54.19.0
  27   27    VLA:N8    25.0 m   -107.37.07.5  +33.54.15.8
  28   28    VLA:W7    25.0 m   -107.37.18.4  +33.53.54.8

Mon Jan 2 18:36:21 2006    NORMAL imager::Imager::summary() (file /aips++/framework/code/synthesis/implement/MeasurementEquations/Imager.cc, line 1000):


Tables:
   MAIN                   22653 rows
   ANTENNA                   28 rows
   DATA_DESCRIPTION           1 row
   DOPPLER             <absent>
   FEED                      28 rows
   FIELD                      3 rows
   FLAG_CMD             <empty>
   FREQ_OFFSET         <absent>
   HISTORY                  336 rows
   OBSERVATION                1 row
   POINTING                 168 rows
   POLARIZATION               1 row
   PROCESSOR            <empty>
   SOURCE                     3 rows
   SPECTRAL_WINDOW            1 row
   STATE                <empty>
   SYSCAL              <absent>
   WEATHER             <absent>

Mon Jan 2 18:36:21 2006    NORMAL imager::Imager::summary() (file /aips++/framework/code/synthesis/implement/MeasurementEquations/Imager.cc, line 1000) ""
Mon Jan 2 18:36:21 2006    NORMAL imager::Imager::summary() (file /aips++/framework/code/synthesis/implement/MeasurementEquations/Imager.cc, line 1000):

General:
  MeasurementSet is ngc5921.ms
  Beam fit is not valid
Image: parameters not yet set (use setimage in Function Group <setup> )
Data selection settings: (use setdata in Function Group <setup> to change)
  All data selected
Options settings: (use setoptions in Function Group <setup> to change)
  Gridding cache has 398700544 complex pixels, in tiles of 16 pixels on a side
  Gridding convolution function is Spheroidal wave function
  No primary beam correction will be made
  Image plane padding : 1.2

Out[3]: True

As is standard for IPython, help can be obtained by typing 'pdoc' (print documentation) and the name of the tool or tool function:

CASA [5]: pdoc im.setdata
Set the data parameters selection for subsequent processing :
  mode      = velocity
  nchan     = [ 1 ]
  start     = [ 0 ]
  step      = [ 1 ]
  mstart    = { value=0.0, units=km/s }
  mstep     = { value=0.0, units=km/s }
  spwid     = [ 0 ]
  fieldid   = [ 0 ]
  msselect
  async     = false
----------------------------------------

CASA [6]:

All the tools have a close function, which sets the tool back to its original state (clearing its use of resources). Subsequent calls to open will enable processing on alternate data sets.
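For example (a sketch; the second MeasurementSet name is illustrative):

im.close()             # release ngc5921.ms and reset the imager tool
im.open('other.ms')    # hypothetical: begin work on another data set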

What happens if something goes wrong?

First, always check that your inputs are correct; use pdoc or ? for the method to review the inputs/outputs. If still unclear, you can check the User Reference Manual (AIPS++/Glish based) for a list of parameters associated with each function:

http://aips2.aoc.nrao.edu/stable/docs/user/documentation.html

You can submit a question/bug/enhancement via the site:

http://bugs.aoc.nrao.edu

Login (or register yourself if you don't have a login/password); click the 'Create New Issue' along the top tabs to file a question/bug/enhancement.

%BEGINFIGURE{label='fig:jira', caption='Issue/Defect Tracking system. Left: http://bugs.aoc.nrao.edu page showing login entry fields. Right: Screen after selecting the "Create New Issue" tab along the top.'}% (figures: jira.jpg, jira2.jpg) %ENDFIGURE%

If something has gone wrong and you want to stop what is executing, then typing 'Control-C' will usually cleanly abort the application.

Measurement Sets

Visibility data are stored in a CASA table known as a Measurement Set (MS). An MS consists of a main table containing the visibility data and associated sub-tables containing auxiliary or secondary information. Table \ref{tbl:tabselect} lists data selection parameters which may be used during a typical data reduction session, while Table \ref{tbl:maintable} identifies the commonly accessed components of the MAIN table.

Each row in a data column in the MS (e.g. DATA, ALMA_PHAS_CORR, CORRECTED_DATA) contains a matrix of observed complex visibilities at a single time stamp, for a single baseline in a single spectral window. The shape of the data matrix is given by the number of channels and the number of correlations (voltage-products) formed by the correlator for an array. Figures \ref{fig:tablekeywords} and \ref{fig:tablebrowser} illustrate the structure of the MAIN table and sub-tables in an MS. Details about the contents of each column, and if or how they change during the calibration procedure, are described in Section \ref{chapter:calibration}.

All CASA data files, including Measurement Sets, are written into the current working directory by default, with each CASA table represented as a separate sub-directory. MS names therefore need only comply with UNIX file or directory naming conventions, and can be referred to from within CASA directly, or via full path names.
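For example, the table (tb) tool can be used to inspect this structure directly. The following is a minimal sketch assuming the usual open/colnames/nrows pattern of the table tool (tb is only partially migrated to the new framework, so verify with pdoc tb.<function> first):

tb.open('ngc5921.ms')   # attach the table tool to the MAIN table of the MS
tb.colnames()           # list the MAIN table columns (DATA, FLAG, UVW, ...)
tb.nrows()              # one row per time stamp/baseline/data description
tb.close()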

%BEGINTABLE{label="tbl:tabselect" caption="Common Data Selection Parameters"}%
Parameter        Contents
---------        --------
ANTENNA1         First antenna in baseline
ANTENNA2         Second antenna in baseline
FIELD_ID         Field (source no.) identification
DATA_DESC_ID     Spectral window number, polarization identifier pair (IF no.)
ARRAY_ID         Subarray number
OBSERVATION_ID   Observation identification
POLARIZATION_ID  Polarization identification
SCAN_NUMBER      Scan number
TIME             Integration midpoint time
UVW              UVW coordinates
%ENDTABLE%

%BEGINTABLE{label="tbl:maintable" caption="Commonly accessed MAIN Table components for ALMA data"}%
Column                    Format             Comments
------                    ------             --------
DATA                      Complex(N_c, N_f)  Complex visibility matrix
                                             (~=ALMA_PHASE_CORR by default)
WEIGHT_SPECTRUM           Float(N_c)         Weight for whole data matrix
ALMA_PHASE_CORR           Complex(N_c, N_f)  On-line phase-corrected complex
                                             visibility matrix (not in VLA data)
ALMA_NO_PHAS_CORR         Complex(N_c, N_f)  Complex visibility matrix that has not
                                             been phase corrected (not in VLA data)
ALMA_PHAS_CORR_FLAG_ROW   Bool(N_c, N_f)     Flag to use phase-corrected data or not,
                                             Default=F (not in VLA data)
CORRECTED_DATA            Complex(N_c, N_f)  Corrected data created by calibrater or
                                             imager tools
MODEL_DATA                Complex(N_c, N_f)  Model data created by calibrater or
                                             imager tools
IMAGING_WEIGHT            Float(N_c)         Imaging weights created by calibrater or
                                             imager tools
FLAG                      Bool(N_c, N_f)     Cumulative data flags
%ENDTABLE%

*Note: when you examine table entries like FIELD_ID or DATA_DESC_ID with the table browser, you will see 0-based numbers.*

%BEGINFIGURE{label="fig:tablekeywords" caption="The structure of a Measurement Set. The tables which compose a Measurement Set named ngc5921.ms on disk (Left); a few of the MAIN table columns in a Measurement Set (Right)."}% (figures: jbrowser_ms.jpg, jbrowser1.jpg) %ENDFIGURE%


Basic Calibration Fundamentals

Calibration and imaging responsibilities are divided between the calibrater (cb) and imager (im) tools. The calibrater tool handles visibility-plane calibration while the imager tool deals with image formation, deconvolution, and image-plane calibration.

In a typical synthesis observation, a number of calibrators of known structure (usually point sources) and at least one of known flux density are observed in addition to the scientific target(s) in order to obtain a measure of the systematic properties of the instrument and, if relevant, the corrupting media above it. The effects of these systematic properties are then removed from the data for the scientific target. This technique is often referred to as _cross-calibration_ and its success depends implicitly on the relationship between: 1) the true brightness distribution of the observed radio sources (what we seek); 2) the formal response of the instrument and its variety of imperfections of varying importance (what we are calibrating); and 3) the data that are actually recorded (what we have measured).

The Measurement Equation, originally defined by Hamaker, Bregman, & Sault (1996, A&AS, 117, 137; 1996, A&AS, 117, 149), is an explicit description of this relationship. The Measurement Equation (in the visibility plane) for a spectral line polarization calibration is given by:

\vec{V}_{ij} = B_{ij}\,G_{ij}\,D_{ij}\,P_{ij}\,T_{ij}\,\vec{V}_{ij}^{\mathrm{IDEAL}}

where:

  • \vec{V}_{ij} = the cross-correlations between two polarizations for each of two feeds (i,j) which characterize an individual baseline (i.e. the measured visibilities).
  • \vec{V}_{ij}^{\mathrm{IDEAL}} = the visibilities measured by an ideal interferometer (i.e. one with no instrumental errors or calibration effects). \vec{V}_{ij}^{\mathrm{IDEAL}} is directly related to the Fourier Transform of the true polarized sky brightness, \vec{I} = (I,Q,U,V).
  • T_{ij} = complex gain effects due to the troposphere which are polarization-independent, e.g. high-frequency opacity corrections.
  • P_{ij} = the parallactic angle rotation correction.
  • D_{ij} = the instrumental polarization response. "D-terms" describe the polarization leakage between feeds (e.g. how much the R-polarized feed picked up L-polarized emission, and vice versa).
  • G_{ij} = polarization-dependent complex gain effects, which include the scale factor for absolute flux density calibration and may include phase and amplitude corrections due to changes in the atmosphere (in lieu of T_{ij}).
  • B_{ij} = the bandpass response.

Note that the Measurement Equation is a matrix equation: \vec{V}_{ij} and \vec{V}_{ij}^{\mathrm{IDEAL}} are 4-vectors with elements for each correlator product, and terms like B_{ij} are 4\times4 matrices, representing products of antenna pairs, that 'corrupt' the correlator products. CASA hardwires the Measurement Equation order and all of the specific algebra associated with the individual terms. The equation written above contains only the most commonly considered calibration terms; many others are possible, and several additional types are described in the Synthesis Calibration chapter of this cookbook.

In practical calibration, it is often best to begin the calibration process by determining solutions for those terms which affect the data most. Thus, the user would normally determine gain solutions (G) first (applying pre-computed parallactic angle corrections P if doing polarization calibration), then bandpass solutions (B) and/or polarization solutions (D). Errors which are polarization-independent (T) can be determined at any point in the calibration process. T has nearly identical form to G but is located at a different point in the Measurement Equation. A practical advantage to this factorization is the ability to store polarization-independent effects which are characterized by a certain timescale or a particular parametrization (e.g., tropospheric opacity) as a separate calibration factor, thus isolating them from electronic gain effects which may vary on a different timescale and/or with different systematics.

Although most data reduction sequences will probably proceed in this order, the user has complete freedom and flexibility to determine the order in which calibration effects are solved for. Self-calibration, the process by which an improved model of the target source is used to obtain improved solutions for the calibration, is also not limited to just the electronic gain (G) corrections, as has been traditionally the case. Improved solutions for, say, the instrumental polarization (D) may be obtained by self-calibration as well. In general, decision-making along the calibration path is a process of determining which effects dominate the errors in the data, and deciding how best to converge upon the optimal set of calibration solutions (for all relevant components) and source models which best describe the observed data.

The observed visibilities, \vec{V}_{ij}, are stored in the DATA column in the MAIN table of the MS; these are the data as loaded by a filler tool (e.g. fitstoms). Associated with the DATA column are scratch columns which hold the most recent version of the calibrated data (CORRECTED_DATA) and the latest visibility-plane representation of the source or field model, \vec{V}_{ij}^{IDEAL} (MODEL_DATA). The latter two columns are filled in by the calibrater (cb) and imager (im) tools, respectively. The actual calibration information is stored in separate calibration tables which are written to the directory in which you started CASA. The observed DATA column does not change during reduction, but the scratch columns do. When plotting the data using a tool such as msplot, the data sub-types (_observed_, _corrected_, and _model_) can be selected individually, as in the sketch below.
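For example, the same plot can be made from each data sub-type by changing the column argument (a sketch using the msplot parameters described in this document):

mp.open('ngc5921.ms')
mp.uvdist(column='data', what='amp')        # observed visibilities (DATA)
mp.uvdist(column='corrected', what='amp')   # calibrated visibilities (CORRECTED_DATA)
mp.uvdist(column='model', what='amp')       # current model (MODEL_DATA)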

Summary of Data Reduction Steps

Listed below are the basic steps for typical continuum and spectral line data reduction sessions which solve for and apply the standard complex gain (G) and the relative bandpass (B). The basic reduction sequence is:

  1. Translate the data into a Measurement Set (the CASA internal data format).
    • VLA data: Data may be translated from either a VLA archive file using the vlafiller task, or from a UVFITS file (e.g., as written by AIPS) using the fitstoms task.
  2. Examine and edit calibrator data using the autoflag (af) tool for automatic data editing and msplot (mp) tool for interactive editing.
  3. Calibrate the data using the calibrater (cb) tool. As an example, VLA, PdBI and BIMA calibration differ as described below:
    • High SNR data calibration: VLA data generally have a high signal-to-noise ratio (SNR); thus, one can determine the gain calibration with relative ease. The calibration solutions for each observation have high fidelity, and transferring the solutions to the target data is a straightforward interpolation between points. Calibration procedures applicable to VLA data will be appropriate for ALMA 3 and 1 mm data and some 3 mm BIMA data. The specific steps required to calibrate high SNR data for each frequency group are listed below (a skeletal calibrater-tool sketch follows this list):
      • Set the flux density of the absolute flux calibrators (e.g. the initial calibration model).
      • Determine the complex gain (G) calibration (phase & amplitude); apply parallactic angle (P) correction if doing polarization.
      • Determine a bandpass calibration (B).
      • Obtain the instrumental polarization calibration (D).
      • Transfer the flux density scale derived from the absolute flux calibrator(s).
      • Apply all calibrations to the target source data.
      • Split out calibrated source data.

    • Low SNR data calibration: PdBI and BIMA data can have a relatively low SNR, and individual gain solutions in phase and amplitude can have significant error. Thus, solutions are fitted with a polynomial before application to the target data. Further, to minimize errors in the determination of the solutions, one first obtains a rough phase correction so that the data can be averaged in time to increase the SNR before determining the bandpass solution. Once the bandpass is determined, the data can be averaged in frequency to get a good phase solution. Then the absolute flux scale is determined and an amplitude calibration is done. Using this series of procedures, error propagation along the calibration path is minimized. Calibration procedures applicable to PdBI/BIMA data will be appropriate for ALMA submillimeter data. Thus:
      • Get a rough time-dependent phase calibration to be used in next step.
      • Determine a bandpass calibration (B) (apply a rough time-dependent phase calibration to average the data in time before determining B). Note: BIMA data typically does not require bandpass calibration.
      • Determine a phase-only gain calibration (apply the B calibration).
      • Determine an absolute flux density calibration (apply calibrations obtained so far).
      • Determine a residual amplitude calibration (apply calibrations obtained so far).
      • Apply all calibrations to the target source data.
      • Split out calibrated source data.

  4. Examine and edit the calibrated target data.
  5. Subtract the continuum emission from the line emission in the uv-plane, if necessary.
  6. Image the target source.
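As an illustration, the first steps of the high-SNR sequence might look like the following with the calibrater tool. This is a sketch only: it assumes the cb tool follows the usual open/setdata/setsolve/solve/setapply/correct pattern, and all arguments here (the selection string, solution interval, and table names) are illustrative; verify with pdoc cb.<function> before use.

cb.open('as500.ms')                              # attach the calibrater to the MS
cb.setdata(msselect='FIELD_ID==0')               # hypothetical: select the calibrator
cb.setsolve(type='G', t=60.0, table='as500.G')   # solve for complex gains (G)
cb.solve()
cb.setapply(type='G', table='as500.G')           # mark the G solutions for application
cb.correct()                                     # write CORRECTED_DATA
cb.close()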

Loading Data into CASA

VLA: Filling data from VLA archive format

VLA data in on-line archive format are read into CASA from disk or tape using the vlafiller task.

Note: autocorrelations are filled automatically into the data set. Autocorrelation data are not needed for standard interferometric data; furthermore, the imaging routine will try to image the autocorrelation data (it assumes they are single dish data), which will swamp any real signal. Thus, it is necessary to flag the autocorrelation data at any point before imaging, as sketched below.
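One way to do this by hand is through the table tool; the following is a rough sketch only, assuming tb.query returns a reference table with the usual putcol/nrows methods (the tb tool is only partially migrated, and the af autoflag tool may offer a more direct route):

tb.open('as500.ms', nomodify=False)           # open the MS for writing
t1 = tb.query('ANTENNA1 == ANTENNA2')         # select the autocorrelation rows
t1.putcol('FLAG_ROW', [True] * t1.nrows())    # flag every selected row
t1.close()
tb.close()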

Fill from Disk:

The advantage of filling data from disk is that reading from disk is much faster and a lot less troublesome than reading from tape. It is also advantageous to do this if you expect to fill the data more than once. The disadvantage is that you need about 160 MB of spare disk space (or more if there are multiple archive files), as this is typically the size of a VLA archive file.

Downloaded archive format data are available with names like: AS500_A030501.xp1, AS500_A030501.xp2, ..., AS500_A030501.xpN.

The vlafiller task allows selection on frequency band, calibrator name, frequency range, project, time, subarray, and source, and supports concatenation to an existing uv-data set. Suppose that you have 1.3 cm line observations in 'K' band and you have copied the archive data to a subdirectory called 'data'. To run vlafiller using archive data located on disk:

CASA [5]: pdoc vlafiller
    Perform fill operations :
      msname =
      inputfile
      project
      start     = 1970/1/1/00:00:00
      stop      = 2199/1/1/23:59:59
      centerfreq        = 1.0e18Hz
      bandwidth = 2.0e18Hz
      bandname  = *
      source
      subarray  = 0
      qualifier = -65536
      calcode   = #
      overwrite = false

vlafiller msname='as500.ms', inputfile='data/AS500_A030501.xp1', overwrite=True, bandname='K'
vlafiller msname='as500.ms', inputfile='data/AS500_A030501.xp2', overwrite=False,bandname='K'

Filling data from UVFITS format

For UVFITS format, the ms tool has the fromfits method, which can be used as:

CASA [3]: pdoc ms.fromfits
Create an ms from a uvfits file :
  msfile
  fitsfile
  nomodify  = true
  lock      = false
  obstype   = 0
  host
  forcenewserver    = false
----------------------------------------

CASA [4]: ms.fromfits('n5921.ms','/aips++/data/nrao/VLA/ngc5921.fits')
  Out[4]: True

Determine opacity corrections for VLA data

For VLA data, zenith opacity can be measured at the frequency and during the time observations are made using a VLA tipping scan in the observe file. Historical tipping data are available at:

http://www.vla.nrao.edu/astro/calib/tipper

Choose a year, and click Go to get a list of all tipping scans that have been made for that year.

If a tipping scan was made for your observation, then select the appropriate file. Go to the bottom of the page and click on the button that says 'Press here to continue'. The results of the tipping scan will be displayed. Go to the section called 'Overall Fit Summary' to find the fit quality and the fitted zenith opacity in percent. If the zenith opacity is reported as 6%, then the actual zenith opacity is 0.060 (this is the input needed for CASA corrections).

Data Examination and Editing

Plot the Data

pylab - matplotlib library interface

The primary graphical visualization tool in CASA for visibility data is msplot. msplot provides X-Y plots of the observed, corrected, and/or model data against a range of MS indices and parameters, as well as associated plots such as uv-coverage (Figure \ref{fig:matplotlib}). Plots are displayed using the matplotlib plotting library, which is also made available to the user at the command line through the pl tool.

pylab functions allow one to annotate, alter, or add to any plot displayed in the matplotlib plotter. matplotlib commands can be found at:

http://matplotlib.sourceforge.net/pylab_commands.html

pl.clf() is perhaps the most useful native command as it will clear the contents of the plotter frame.

A quick synopsis of relevant commands for altering plots is given below.

pl.axhline - draw a horizontal line across axes
pl.axvline - draw a vertical line across axes
pl.axhspan - draw a horizontal bar across axes
pl.axvspan - draw a vertical bar across axes
pl.clf     - clear the current figure
pl.close   - close the plotter (subsequent plots will reopen a plotter window)
pl.ion     - turn interaction mode on (default - this indicates that any plotting command
             will be seen immediately after the command)
pl.ioff    - turn off interaction mode (a 'show' command is required to see commands after
             this mode has been enabled)
pl.savefig - save the current figure
pl.subplot - make a subplot (numrows, numcols, axesnum)
pl.text    - add some text at location (x,y)
pl.title   - add/change the title
pl.xlim    - set/get the xlimits
pl.ylim    - set/get the ylimits
pl.xlabel  - add/change the xlabel
pl.ylabel  - add/change the ylabel

In addition, there are a range of mathematical functions provided (trigonometric, matrix algebra, logs, etc). Again, see the pylab documentation for help.
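For example, after producing a plot with msplot, it can be annotated and saved directly with pylab commands (a minimal sketch; the title, reference level, and file name are illustrative):

pl.title('NGC 5921: amplitude vs. uv distance')  # add a title to the current plot
pl.ylabel('Amplitude (Jy)')                      # label the y axis
pl.axhline(0.25)                                 # draw a horizontal reference line
pl.savefig('ngc5921_uvdist.png')                 # save the figure to disk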

%BEGINFIGURE{label="fig:matplotlib" caption="matplotlib plotter. The buttons on the lower left are: 1,2,3) Home, Back and Forward: click to navigate between previously defined views (akin to web navigation); 4) Pan: click and drag to pan to a new position; 5) Zoom: click to define a rectangular region for zooming; 6) Subplot Configuration: click to configure the parameters of the subplot and spaces for the figures; 7) Save: click to launch a file save dialog box. The cursor readout is on the bottom right."}% (figure: B0319_uvcoverage.jpg) %ENDFIGURE%

mp - msplot tool

msplot provides functions to plot a range of X vs. Y displays. Each plotting function typically has three parameters: 1) column - indicating whether you want to view the 'data' (observed data), 'corrected' (corrected data, i.e., post-calibration), or 'model' data; 2) what - indicating whether you want to see the 'amp' (amplitude) or 'phase'; and 3) iteration - indicating an iteration axis, with options of ANTENNA1, ANTENNA2, BASELINE, CHANNEL, FEED1, FIELD_ID, POLARIZATION_ID, and TIME. To get a quick summary of the available functionality, do:

CASA [x]: mp.<TAB>

mp.__class__         mp.__repr__          mp.elevation         mp.plotoptions
mp.__delattr__       mp.__setattr__       mp.flagdata          mp.plotxy
mp.__doc__           mp.__str__           mp.hourangle         mp.setdata
mp.__getattribute__  mp.array             mp.iterplotnext      mp.setspectral
mp.__hash__          mp.azimuth           mp.iterplotstart     mp.uvcoverage
mp.__init__          mp.baseline          mp.iterplotstop      mp.uvdist
mp.__new__           mp.clearflags        mp.markflags         mp.vischannel
mp.__reduce__        mp.close             mp.open              mp.vistime
mp.__reduce_ex__     mp.done              mp.parallacticangle

After hitting the <TAB> key, the available functions are displayed (NOTE: ignore the functions beginning with '__'; these are internal functions). The available functions are:

Tool Methods
------------
open  - open a dataset for use in subsequent plotting
close - close the dataset
done  - same as close

Data Selection
--------------
setdata     - select a subset of the dataset for plotting
setspectral - average spectral data or choose a subset of spectral data
plotoptions - sets number of display panels, turn on overplotting, etc.

Iteration
---------
iterplotstart - start iteration over iteration axis
iterplotnext  - do next plot over iteration axis
iterplotstop  - stop plot iteration

Plotting Methods
----------------
array            - display array configuration for dataset (uses ANTENNA table in
                   MeasurementSet)
uvcoverage       - display uvcoverage for selected sources
azimuth          - plot amplitude/phase versus azimuth
baseline         - plot amplitude/phase versus baseline
elevation        - plot amplitude/phase versus elevation
vischannel       - plot amplitude/phase versus spectral channel
vistime          - plot amplitude/phase versus time
hourangle        - plot amplitude/phase versus hourangle
parallacticangle - plot amplitude/phase versus parallactic angle
uvdist           - plot amplitude/phase versus uv distance

pl.clf           - native matplotlib command to clear the current figure
                   (Note: This resets any subplot options that may have been set)

Flagging Functions
------------------
markflags  - use the cursor or select a region for flagging (region=[xmin,xmax,ymin,ymax])
flagdata   - flag the data; redraws the display to show flagged region; must set
             diskwrite=True to write the flags to disk
clearflags - clear all flags
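
Given the column/what/iteration calling pattern described above, typical plotting calls look like the following sketch (argument values follow the descriptions above; iteration plots are stepped with the iterplot functions):

mp.vistime('data','amp')              # observed amplitude versus time
mp.vischannel('corrected','phase')    # corrected phase versus channel
mp.vistime('data','amp','BASELINE')   # set BASELINE as the iteration axis,
                                      #  then step with mp.iterplotnext()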

As an example, we use the GG Tau dataset, 07feb97-g067.ms, to illustrate the msplot functionality. First, we open the dataset to allow use of the data.

mp.open('07feb97-g067.ms')               # Allow editing as well as inspection

Note that you could also type:
mp.open '07feb97-g067.ms'                # The interface will fill in the parentheses

Also note that if you aren't sure of the file name you can:
mp.open '07<TAB>                         # File completion will list any matching files

The following are illustrations of the uvdist, vischannel, uvcoverage, and array plots for VLA and/or Plateau de Bure data.

%BEGINFIGURE{label="fig:xyplots" caption="Visualizing your data. Left: mp.uvdist: A plot of uv distance vs. observed amplitude. The observed amplitudes generally range from 0.2 to 0.3 Jy with some outliers. Right: mp.vischannel: A plot of channel number vs. observed amplitude shows that the two end channels on both sides of the bandpass have widely varying amplitudes while the central Gibbs channels have low amplitudes. The remaining channels are well behaved."}% %ENDFIGURE%

%BEGINFIGURE{label="fig:arrayplots" caption="Antenna locations in the array for NGC 5921 (VLA, left) and GG Tau (PdBI, right)."}% %ENDFIGURE%

plotoptions

The plotoptions method works in concert with the native matplotlib functionality to enable flexible representations of data displays. In particular, the arguments overplot, subplot, and plotsymbol enable visual comparisons of data results for analysis during the reduction process.

The mp.plotoptions argument plotsymbol

The plotsymbol defines both the line style or symbol for the data being drawn and the color; from the matplotlib online documentation (e.g., type pl.plot?):

    The following line styles are supported:
        -     : solid line
        --    : dashed line
        -.    : dash-dot line
        :     : dotted line
        .     : points
        ,     : pixels
        o     : circle symbols
        ^     : triangle up symbols
        v     : triangle down symbols
        <     : triangle left symbols
        >     : triangle right symbols
        s     : square symbols
        +     : plus symbols
        x     : cross symbols
        D     : diamond symbols
        d     : thin diamond symbols
        1     : tripod down symbols
        2     : tripod up symbols
        3     : tripod left symbols
        4     : tripod right symbols
        h     : hexagon symbols
        H     : rotated hexagon symbols
        p     : pentagon symbols
        |     : vertical line symbols
        _     : horizontal line symbols
        steps : use gnuplot style 'steps' # kwarg only
    The following color strings are supported
        b  : blue
        g  : green
        r  : red
        c  : cyan
        m  : magenta
        y  : yellow
        k  : black
        w  : white
    Line styles and colors are combined in a single format string, as in
    'bo' for blue circles.

Examples are shown in the following sections.

The mp.plotoptions argument subplot

The subplot argument takes three digits. The first is the number of rows of panels (stacked vertically), the second is the number of columns of panels (stacked horizontally), and the third is the number of the panel you want to draw into. For example, mp.plotoptions(subplot=212) will draw into the lower of two vertically stacked panels.

An example use of subplot capability (Figure \ref{multiplot}):

mp.open('ngc5921.ms')                           # load data into mp tool
mp.plotoptions(subplot=211,plotsymbol='go')     # plot to top of two panels, use green circles
mp.setdata(fieldIndex=0)                        # select a subset of the data
mp.vischannel()                                 # plot observed data versus channel
mp.plotoptions(subplot=223,plotsymbol='bo')     # plot to the third panel (lower left)
                                                #  of a 2x2 grid
                                                # NOTE: You can change the gridding and
                                                #  panel size by manipulating the ny x nx grid
mp.array()                                      # plot the array configuration
pl.subplot(224)                                 # plot to the fourth panel in a 2x2 grid
                                                # NOTE: You can mix pl.subplot and
                                                #       mp.plotoptions(subplot=nxnypanel) commands
mp.uvcoverage()                                 # plot uv coverage

%BEGINFIGURE{label="fig:multiplot" caption="Multi-panel display of visibility versus channel (top), antenna array configuration (bottom left) and the resulting uv coverage (bottom right)."}% %ENDFIGURE%

An example overplot script would be:

# examine data before and after calibration
mp.open('test0.ms')
# plot original data
mp.plotoptions(plotsymbol='ro') # use red circles
mp.setdata(fieldIndex=0,correlations='RR') # select data subset
mp.setspectral(1,8,1) # select a middle channel (of 16)
mp.vistime('data','phase')

# plot corrected data
mp.setdata()          # re-initialize data selection
mp.setdata(fieldIndex=0,correlations='RR') # select data subset
mp.plotoptions(overplot=True,plotsymbol='bo')
mp.setspectral(1,8,1)
mp.vistime('corrected','phase') # plot phases of corrected data

# now do alternate calibration on same data set
cb.initcalset()
cb.setdata(msselect='FIELD_ID==0')
cb.setsolve(type='K',table='cal.K',t=0)
cb.solve()
cb.setdata()
cb.setapply(type='K',table='cal.K')
cb.correct()

mp.close()
mp.open('test0.ms')
mp.setdata(fieldIndex=0,correlations='RR')
mp.setspectral(1,8,1)
mp.plotoptions(overplot=True,plotsymbol='go')
mp.vistime('corrected','phase')

pl.legend(('observed','corrected (M,MF)','corrected (K)'),numpoints=1,shadow=True)

%BEGINFIGURE{label="fig:overplot" caption="Comparison of observed (red) and corrected (blue) data for a simulated data set with a phase delay variation (emulating a clock, geometry, or atmospheric delay error). The corrected data (blue) use a standard gain and bandpass calibration while the corrected-K data (green) illustrate a fringe-fitting calibration on the same data set."}% %ENDFIGURE%

Additional functionality can be used from native matplotlib, for example (Figure \ref{uvdist}):

# Compare observed 'data' and 'corrected' data
mp.open('B0319_0317.ms')        # Load data into mp
mp.plotoptions(subplot=121,plotsymbol='ro') # First plot (left of two panels)
mp.uvdist()                     # Plot observed 'data' versus uv distance (default)
mp.plotoptions(subplot=121,overplot=True,plotsymbol='bo') # Overplot
                                # Overplot using blue circles
mp.uvdist('corrected')          #    corrected data
mp.plotoptions(subplot=122,plotsymbol='ro') # Second plot (right of two panels)
mp.setspectral(1,5,50)          # Take spectral average of channels 6-56
mp.uvdist('data')               # Plot observed 'data'
mp.plotoptions(subplot=122,overplot=True,plotsymbol='bo')
                                # Overplot using blue circles
mp.setdata()                    # Clear data selection
                                # NOTE: This is needed when switching between
                                #    data columns; this also clears the spectral
                                #    selection so it should be reset
mp.setspectral(1,5,50)          # Take spectral average of channels 6-56
mp.uvdist('corrected')          # Plot 'corrected' data
                                # Use native matplotlib function ylim
pl.ylim(0.,1.5)                 # Extend y-axis plot limits to make room for legend
                                # Use native matplotlib function legend
pl.legend(('obs-RR','obs-LL','corr-RR','corr-LL'),numpoints=1,shadow=True)
                                # Plot legend (using only one point per type)
                                # Put a shadow around the box
                                # Use native matplotlib function text
pl.text(44.60,1.4,'Spectral average: 1 channel=channels 6-56',fontsize=8)

%BEGINFIGURE{label="fig:uvdist" caption="Comparison of observed versus corrected data for all channels (left) and for a single channel average (channels 6-56; right)."}% %ENDFIGURE%

Data Flagging

Interactive flagging

msplot

Interactive flagging (e.g. ``see it - flag it'') is possible on almost all graphical views of the data.

The key methods are markflags and flagdata. markflags enables the use of the matplotlib cursor to mark a rectangular region; alternatively, a region can be specified directly at the command line:

mp.uvdist()
mp.markflags() # no region specified, so mark with the cursor:
               #  left click the top left corner (an 'x' will appear),
               #  drag down and to the right, and left click to mark the bottom right corner
mp.flagdata()  # redisplay plot with flagged region removed
mp.flagdata(diskwrite=True) # write the flags to disk

mp.markflags(region=[200,250,0.1,0.15]) # specify region at the command line
                                        # region=[xmin,xmax,ymin,ymax]
mp.flagdata()  # redisplay plot with flagged regions removed
#mp.flagdata(diskwrite=True) # uncomment to write the flags to disk

%BEGINFIGURE{label="fig:markflags" caption="Plot of amplitude versus uv distance, with both the cursor-determined and command-line-specified flagging regions removed (right)."}% %ENDFIGURE%

Using matplotlib to alter/augment a plot

As mentioned previously, one can use the pl tool to access the full range of matplotlib functionality. The following illustrates a simple use (Figure \ref{fig:gibbs}):

mp.open('/home/rohir2/jmcmulli/ALMATST1/Regression/GGTAU/07feb97-g067.ms')
mp.setdata(fieldIndex=[0],spwIndex=[2])
mp.vischannel()
pl.axhline(y=0.27,linewidth=3,color='r') # draw a horizontal line at y=0.27
pl.axvspan(31,35)                        # highlight the Gibbs channels (dip)
pl.text(36,0.7,'Gibbs channels')         # Provide text

Essentially, all of the matplotlib functionality can be used to alter an existing plot or create a new one.

%BEGINFIGURE{label="fig:gibbs" caption="Use matplotlib (pl tool) to modify an msplot figure."}% %ENDFIGURE%

Non-interactive flagging

Command-based editing capabilities are supported in the autoflag tool. The af tool is useful because it can select data of a particular type, or with particular attributes, and flag these data automatically. In general, this involves selecting the data to be flagged and then running a flagging operation. The flagging commands all have the option of running on a trial basis.

As an example, it is necessary to flag the autocorrelations in the NGC 5921 dataset, ngc5921.ms, as in any VLA data filled from archive format files. If autocorrelation data are left in, the data and calibration will be fine, but you won't be able to make a useful image because the power in the autocorrelations will swamp everything else. To do this, type:

  af.open('ngc5921.ms')              # Load data into autoflag tool
  af.setdata()                       # Use whole dataset (no selection)
  af.setautocorrelation(True)        # Select autocorrelations
  af.run()                           # Flag the autocorrelations
  af.close()

_Currently, the autoflag tool only has the capability to flag autocorrelations; more functionality will be available shortly as it is ported to the CASA framework_.

Synthesis Calibration

Basic Synthesis Calibration Fundamentals

The visibilities measured by an instrument must be calibrated before formation of an image. This is because the wavefronts received and processed by the observational hardware have been corrupted by a variety of effects. These include the effects of transmission through the atmosphere, the imperfect details of reception by antennas, the effects of conversion to an amplified electronic (digital) signal and transmission through the signal processing system, and the effects of formation of the cross-power spectra by a correlator. Calibration is the process of reversing these effects to arrive at corrected visibilities which resemble as closely as possible the visibilities that would have been measured in vacuum by a perfect system. The subject of this chapter of the cookbook is the determination of these effects by using the visibility data itself.

The relationship between the observed and ideal (desired) visibilities on the baseline between antennas i and j may be expressed by the Measurement Equation:

\vec{V}_{ij}~=~J_{ij}~\vec{V}_{ij}^{\mathrm{~IDEAL}}

where \vec{V}_{ij} represents the observed visibility, \vec{V}_{ij}^{\mathrm{~IDEAL}} represents the corresponding ideal visibilities, and J_{ij} represents the accumulation of all corruptions affecting baseline ij. The visibilities are indicated as vectors spanning the four correlation combinations which can be formed from dual-polarization signals. These four correlations are related directly to the Stokes parameters which fully describe the radiation. The J_{ij} term is therefore a 4\times4 matrix.

Most of the effects contained in J_{ij} (indeed, the most important of them) are antenna-based, i.e., they arise from measurable physical properties of (or above) individual antenna elements in a synthesis array. Thus, adequate calibration of an array of N_{ant} antennas forming N_{ant}(N_{ant}-1)/2 baseline visibilities is usually achieved through the determination of only N_{ant} factors, such that J_{ij} = J_i \otimes J_j^{*}. For the rest of this chapter, we will usually assume that J_{ij} is factorable in this way, unless otherwise noted.
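
As a concrete check of this counting, consider the VLA's N_{ant}~=~27 antennas:

N_{ant}(N_{ant}-1)/2~=~27\times26/2~=~351

baseline visibilities per integration, constrained by only 27 antenna-based factors, so the antenna-based solution is highly over-determined.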

As implied above, J_{ij} may also be factored into the sequence of specific corrupting effects, each having their own particular (relative) importance and physical origin, which determines their unique algebra. Including the most commonly considered effects, the Measurement Equation can be written:

\vec{V}_{ij}~=~M_{ij}~B_{ij}~G_{ij}~D_{ij}~E_{ij}~P_{ij}~T_{ij}~\vec{V}_{ij}^{\mathrm{~IDEAL}}

where:

  • T_{ij}~=~ Polarization-independent multiplicative effects introduced by the troposphere, such as opacity and path-length variation.
  • P_{ij}~=~ Parallactic angle, which describes the orientation of the polarization coordinates on the plane of the sky. This term varies according to the type of the antenna mount.
  • E_{ij}~=~ Effects introduced by properties of the optical components of the telescopes, such as the collecting area's dependence on elevation.
  • D_{ij}~=~ Instrumental polarization response. "D-terms" describe the polarization leakage between feeds (e.g. how much the R-polarized feed picked up L-polarized emission, and vice versa).
  • G_{ij}~=~ Electronic gain response due to components in the signal path between the feed and the correlator. This complex gain term G_{ij} includes the scale factor for absolute flux density calibration, and may include phase and amplitude corrections due to changes in the atmosphere (in lieu of T_{ij}). These gains are polarization-dependent.
  • B_{ij}~=~ Bandpass (frequency-dependent) response, such as that introduced by spectral filters in the electronic transmission system.
  • M_{ij}~=~ Baseline-based correlator (non-closing) errors. By definition, these are not factorable into antenna-based parts.

Note that the terms are listed in the order in which they affect the incoming wavefront (G and B represent an arbitrary sequence of such terms depending upon the details of the particular electronic system). Note that $M$ differs from all of the rest in that it is not antenna-based, and thus not factorable into terms for each antenna.

As written above, the measurement equation is very general; not all observations will require treatment of all effects, depending upon the desired dynamic range. E.g., bandpass need only be considered for continuum observations if observed in a channelized mode and very high dynamic range is desired. Similarly, instrumental polarization calibration can usually be omitted when observing (only) total intensity using circular feeds. Ultimately, however, each of these effects occurs at some level, and a complete treatment will yield the most accurate calibration. Modern high-sensitivity instruments such as ALMA and EVLA will likely require a more general calibration treatment than similar observations with older arrays in order to reach the advertised dynamic ranges on strong sources.

In practice, it is usually far too difficult to adequately measure most calibration effects absolutely (as if in the laboratory) for use in calibration; the effects are usually far too changeable. Instead, the calibration is achieved by making observations of calibrator sources on the appropriate timescales for the relevant effects, and solving the measurement equation for them using the fact that we have N_{ant}(N_{ant}-1)/2 measurements and only N_{ant} factors to determine (except for M, which is only sparingly used). (_Note: By partitioning the calibration factors into a series of consecutive effects, it might appear that the number of free parameters is some multiple of N_{ant}, but the relative algebra and timescales of the different effects, as well as the multiplicity of observed polarizations and channels, compensate, and it can be shown that the problem remains well-determined until, perhaps, the effects are direction-dependent within the field of view. Limited solvers for such effects are under study; the calibrater tool currently only handles effects which may be assumed constant within the field of view. Corrections for the primary beam are handled in the imager tool._) Once determined, these terms are used to correct the visibilities measured for the scientific target. This procedure is known as cross-calibration (when only phase is considered, it is called phase-referencing).

The best calibrators are point sources at the phase center (constant visibility amplitude, zero phase), with sufficient flux density to determine the calibration factors with adequate SNR on the relevant timescale. The primary gain calibrator must be sufficiently close to the target on the sky so that its observations sample the same atmospheric effects. A bandpass calibrator usually must be sufficiently strong (or observed with sufficient duration) to provide adequate per-channel sensitivity for a useful calibration. In practice, several calibrators are usually observed, each with properties suitable for one or more of the required calibrations.

Synthesis calibration is inherently a bootstrapping process. First, the dominant calibration term is determined, and then, using this result, more subtle effects are solved for, until the full set of required calibration terms is available for application to the target field. The solutions for each successive term are relative to the previous terms. Occasionally, when the several calibration terms are not sufficiently orthogonal, it is useful to re-solve for earlier types using the results for later types, in effect, reducing the effect of the later terms on the solution for earlier ones, and thus better isolating them. This idea is a generalization of the traditional concept of self-calibration, where initial imaging of the target source supplies the visibility model for a re-solve of the gain calibration (G or T). Iteration tends toward convergence to a statistically optimal image. In general, the quality of each calibration and of the source model are mutually dependent. In principle, as long as the solution for any calibration component (or the source model itself) is likely to improve substantially through the use of new information (provided by other improved solutions), it is worthwhile to continue this process.

In practice, these concepts motivate certain patterns of calibration for different types of observation, and the calibrater tool in CASA is designed to accommodate these patterns in a general and flexible manner. For a spectral line total intensity observation, the pattern is usually as follows (a schematic calibrater script follows the list):

  1. Solve for $G$ on the bandpass calibrator
  2. Solve for $B$ on the bandpass calibrator, using $G$
  3. Solve for $G$ on the primary gain (near-target) and flux density calibrators, using B solutions just obtained
  4. Scale G solutions for the primary gain calibrator according to the flux density calibrator solutions
  5. Apply G and B solutions to the target data
  6. Image the calibrated target data
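
In terms of the calibrater tool functions described later in this chapter, this pattern can be sketched as follows. The sketch is schematic only: the field ids, table names, and solution timescales are illustrative assumptions, not a prescription.

cb.open('data.ms')                             # attach calibrater to the MS
cb.setdata(msselect='FIELD_ID==0')             # bandpass calibrator (field 0, say)
cb.setsolve(type='G',table='cal.Gbp',t=60)     # (1) G on the bandpass calibrator
cb.solve()
cb.setapply(type='G',table='cal.Gbp')          # (2) B, using the G just obtained
cb.setsolve(type='B',table='cal.B',t=86400)    #     one B solution for the run
cb.solve()
cb.reset()                                     # clear solve/apply arrangements
cb.setdata(msselect='FIELD_ID IN [1,2]')       # gain & flux calibrators (say)
cb.setapply(type='B',table='cal.B')            # (3) G on calibrators, using B
cb.setsolve(type='G',table='cal.G',t=90)
cb.solve()
cb.fluxscale(tablein='cal.G',                  # (4) scale the gain calibrator
          tableout='cal.Gflx',                 #     amplitudes by the flux
          reference='fluxcal',                 #     calibrator (source names
          transfer=['gaincal'])                #     here are placeholders)
cb.reset()
cb.setdata(msselect='FIELD_ID==3')             # the target (field 3, say)
cb.setapply(type='B',table='cal.B')            # (5) apply G and B to the target
cb.setapply(type='G',table='cal.Gflx')
cb.correct()                                   # write CORRECTED_DATA
                                               # (6) image with the imager tool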

If opacity and gain curve information are relevant and available, these types are incorporated in each of the steps (in future, an actual solve for opacity from appropriate data may be folded into this process):

  1. Solve for G on the bandpass calibrator, using T (opacity) and E (gain curve) solutions already derived.
  2. Solve for B on the bandpass calibrator, using G, T (opacity), and E (gain curve) solutions.
  3. Solve for G on primary gain (near-target) and flux density calibrators, using B, T (opacity), and E (gain curve) solutions.
  4. Scale G solutions for the primary gain calibrator according to the flux density calibrator solutions
  5. Apply T (opacity), E (gain curve), G, and B solutions to the target data
  6. Image the calibrated target data

For continuum polarimetry, the typical pattern is as follows (sketched in calibrater calls after the list):

  1. Solve for G on the polarization calibrator, using (analytical) P solutions.
  2. Solve for D on the polarization calibrator, using P and G solutions.
  3. Solve for G on primary gain and flux density calibrators, using P and D solutions.
  4. Scale G solutions for the primary gain calibrator according to the flux density calibrator solutions.
  5. Apply P, D, and G solutions to target data.
  6. Image the calibrated target data.
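
A corresponding sketch for the polarimetry pattern (again schematic; the field selection and table names are illustrative):

cb.open('data.ms')
cb.setdata(msselect='FIELD_ID==0')           # polarization calibrator (say)
cb.setapply(type='P')                        # analytic parallactic angle
cb.setsolve(type='G',table='cal.G',t=90)     # (1) G, with P applied
cb.solve()
cb.setapply(type='G',table='cal.G')
cb.setsolve(type='D',table='cal.D',t=86400)  # (2) D, with P and G applied
cb.solve()
# Steps 3-6 parallel the spectral line sketch above, with P and D
#  included in each subsequent setapply stage.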

For a spectro-polarimetry observation, these two examples would be folded together.

In all cases the calibrator model must be adequate at each solve step. At high dynamic range and/or high resolution, many calibrators which are nominally assumed to be point sources become slightly resolved. If this has biased the calibration solutions, the offending calibrator may be imaged at any point in the process and the resulting model used to improve the calibration. Finally, if sufficiently strong, the target may be self-calibrated as well.

General Calibrater Tool Mechanics

The calibrater tool is designed to solve and apply solutions for all of the solution types listed above (and more are in the works). This leads to a single basic sequence of execution for all solves, regardless of type:

  1. Set the calibrator model visibilities (using imager tool, currently)
  2. Select the visibility data which will be used to solve for a calibration type
  3. Arrange to apply any already-known calibration types (the first time through, none may yet be available)
  4. Arrange to solve for a specific calibration type, including specification of the solution timescale and other specifics
  5. Execute the solve process
  6. Repeat 1-5 for all required types, using each result, as it becomes available, in step 3, and perhaps repeating for some types to improve the solutions

By itself, this sequence doesn't guarantee success; the data provided for the solve must have sufficient SNR on the appropriate timescale, and must provide sufficient leverage for the solution (e.g., D solutions require data taken over a sufficient range of parallactic angle in order to separate the source polarization contribution from the instrumental polarization).

When all required calibration types have been determined, the calibrater tool is used to correct data as follows:

  1. Select data for calibration application
  2. Arrange to apply each available calibration
  3. Execute the application process

The calibrater tool functions are as follows:

  • setdata - This function provides for selecting the subset of data in the measurement set that subsequent solve and correct executions will operate on. Its parameters are:

    • [mode] -- 'none' (default), 'channel', or 'velocity'. If mode='channel', activates detailed channel-selection options (nchan & start); mode='none' indicates select all data; mode='velocity' indicates select channels using velocity range options (mstart & mstep).
    • [nchan] -- indicates the number of channels to select
    • [start] -- indicates the first channel to select
    • [mstart] -- starting velocity (e.g. '-20 km/s')
    • [mstep] -- step in velocity (e.g. '1 km/s')
    • [uvrange] -- {\it uv} range selection over which solutions should be limited in kilolambda (useful if your flux calibrator is resolved)
    • [msselect] -- A Table Query Language (TaQL) expression indicating the global selection desired (fields, spectral windows, timeranges, etc.)

Examples of cb.setdata commands:

cb.setdata(msselect='FIELD_ID==3'); # select all data for field id 3

cb.setdata(msselect='FIELD_ID==3 && DATA_DESC_ID==17');
                                     # select all data for field id 3 in
                                     # spectral window 17:

cb.setdata(mode='channel',          # select data for field id 3 in spwid 17,
            nchan=33,                # limited to channels 12-44
            start=12,                # (there must be at least 44!)
            msselect='FIELD_ID==3 && DATA_DESC_ID==17')

cb.setdata(msselect='FIELD_ID==3',  # select data for field id 3 on
    uvrange=[0,15])                  # baselines with lengths less than
                                     # 15000 wavelengths

  • setapply - This function enables specification of existing calibration types to apply during subsequent solve or correct operations. Each setapply execution for a particular type is persistent until overridden by another setapply execution for that type, or by execution of the reset function. The parameters for setapply are:

    • [type] -- the calibration type: B, BPOLY, G, GSPLINE, D, P, T, TOPAC, GAINCURVE
    • [table] -- The name of the calibration table that holds solutions to be applied (if applicable)
    • [select] -- A TaQL expression selecting which solutions to apply from the selected table.
    • [interp] -- 'nearest' (default), 'linear', or 'aipslin', indicating the form of time-dependent interpolation to use
    • [spwmap] -- A list of spectral window indices, such that spwmap(i)=j means that solutions for spectral window j will be applied to spectral window i in all subsequent operations. The default is to insist upon explicit solutions for each spectral window.
    • [unset] -- If unset=T, unset calibration application of the type chosen.
    • [opacity] -- zenith opacity (determined from, e.g., a tipping scan) for use when type=TOPAC
    • [rawspw] -- for type=GSPLINE. Use rawspw to specify initial phase corrections that can be applied to another frequency (e.g., scale 3mm phases to the 1mm band and solve for incremental phase solutions at 1mm).

Examples of cb.setapply commands:

cb.setapply(type='P')              # arrange to apply 'P' solutions
                                    # (calculated analytically by the software)
cb.setapply(type='G',
             table='cal.G')         # arrange to apply 'G' solutions
                                    # from table cal.G

cb.setapply(type='G',              # arrange to apply 'G' solutions from
             table='cal.G',         # table 'cal.G', which originated from
             select='FIELD_ID==3')  # field id 3


# Example of how to use the spwmap function:
cb.setdata(msselect='FIELD_ID IN [1:3] && DATA_DESC_ID in [2,3]')
                                    # select fields the calibration is to be
                                    # applied to and their spectral windows
                                    # But the calibration was derived in
                                    # spectral window 1, map solutions from
                                    # spwid 1 to spwids 2 & 3
cb.setapply(type='G', t=0.0,       # arrange to apply 'G' solutions made
             table='cal.G',         # from spwid 1 data in table cal.G
             spwmap=[1,1,1])        # to data in spwids 2 & 3.
cb.correct()                       # Make the actual correction to the data
 # The logger will report:
 # Applying G Jones calibration from spw=1 to spw=2
 # Applying G Jones calibration from spw=1 to spw=3

  • setsolve - This function specifies what calibration solutions will be solved for in subsequent executions. This function should
(currently) be run only once, for the particular type to be solved for. The parameters are:

    • [type] -- The calibration type to solve for: B, BPOLY, G, GSPLINE, D, P, T, TOPAC, GAINCURVE
    • [table] -- The name of the table in which to store the result
    • [t] -- The solution interval for the solve in seconds
    • [preavg] -- The extent to which data is averaged in time (in seconds) after pre-applications and prior to solving; the default is the solution interval, but some types (e.g., D) require averaging on shorter timescales.
    • [refant] -- The antenna to which the phases should be referenced.
    • [append] -- If T, append solutions obtained to an existing table, otherwise overwrite the specified table (if it exists)
    • [phaseonly] -- If phaseonly=T, solve for only the phase (not amplitude and phase)
    • [unset] -- If unset=T, unset solve of the type chosen

Example:

cb.setsolve(type='G',       # solve for 'G' on 30s timescale, reference
             table='cal.G',  #  solutions to antenna 3, and store result
             t=30,           #  in a table called 'cal.G'
             refant=3)

  • solve - Executes the solving process using the data selected
according to setdata, as corrected according to executions of setapply, and as prescribed by the execution of setsolve. To run the solve step type:

cb.solve()

  • correct - Executes the calibration application process on data
selected according to setdata, and as prescribed by execution(s) of setapply (e.g., apply calibration to the target data). To run the correct step type:

cb.correct()

  • accumulate - This function enables accumulation of calibration tables of the same type into a single calibration table. This permits incremental solving and application, as well as flexible interpolation timescale selection within a single calibration table. This functionality is described in more detail in the section on Incremental Calibration.

    • [tablein] -- The existing cumulative calibration table to which the incremental solution is to be applied. If none yet exists, leave this parameter blank, and accumulate will create one from scratch, using the timescale specified in t.
    • [incrtable] -- The name of the incremental calibration table to accumulate.
    • [tableout] -- The name of the output cumulative calibration table which is the product of tablein and incrtable. If unspecified, tablein will be overwritten.
    • [field] -- A list of field ids in the tablein table to update with the incrtable.
    • [calfield] -- A list of field ids in incrtable to apply to tablein.
    • [interp] -- The interpolation type to use: 'nearest', 'linear', or 'aipslin'.
    • [t] -- When tablein='', this is the timescale (seconds) to sample the output table. The incrtable is then interpolated onto this timescale.

See the Incremental Calibration section for an example of the use of accumulate.

  • state - Reports the state of the calibrater tool, specifically,
the current arrangements as prescribed by previous executions of setapply and setsolve. To determine the state of the calibrater tool type:

cb.state()

  • reset - Resets the setapply/setsolve state of the calibrater
tool. Does not reset the data selection. Parameters:

    • [apply] -- if T, reset apply arrangements
    • [solve] -- if T, reset solve arrangements

Examples:

cb.reset()                   # reset all inputs relating to setapply and setsolve
cb.reset(apply=T, solve=F)   # reset inputs relating to setapply only

  • fluxscale - Given a table with properly scaled gain solutions for an absolute flux density calibrator and relative solutions for one or more other calibrators (e.g., gain solutions derived assuming amplitudes of 1.0 Jy), scale the amplitudes of the other calibrators according to the absolute flux density scale. fluxscale will report in the logger the derived flux densities of the calibrators. fluxscale parameters are:

    • [tablein] -- Name of the input table with unscaled solutions
    • [tableout] -- Name of the output table which will hold scaled solutions
    • [reference] -- Name of the absolute flux density calibrator
    • [transfer] -- Name(s) of the calibrator source(s) to be scaled
    • [refspwmap] -- Indicate how data for different spectral windows should be matched in calculating the flux density scale factor for transfer fields. The default is to match spectral windows for reference and transfer fields. When specified, the refspwmap parameter takes a vector of integers indicating which spectral window solutions to use as the reference for others, such that refspwmap[j]=i causes solutions (from reference fields) observed in the i-th spectral window to be used to reference solutions (from transfer fields) observed in the j-th spectral window.

For example, to scale the amplitudes of calibrators 0234+258 and 0323+022 based on the absolute scaling of 3C286, when all observations were made in the same spectral window and solutions for all sources are in the table called cal.G:

cb.fluxscale(tablein='cal.G',                # Select input table
          tableout='cal.Gflx',               # Write scaled solutions to cal.Gflx
          reference='3C286',                 # 3C286 = flux calibrator
          transfer=['0234+258','0323+022'])  # Select calibrators to scale

If 3C286 was observed in spectral window 1 while 0234+258 and 0323+022 were observed in spectral windows 2 and 3, respectively, then use the refspwmap parameter, e.g.:

cb.fluxscale(tablein='cal.G',                # Select input table
          tableout='cal.Gflx',               # Write scaled solutions to cal.Gflx
          reference='3C286',                 # 3C286 = flux calibrator
          transfer=['0234+258','0323+022'],  # Select calibrators to scale
          refspwmap=[1,1,1])                 # Use spwid 1 scaling for spwids 2 & 3

  • modelfit - This function solves the corrected visibility data
for simple source models. The parameters are:

    • [niter] -- The number of non-linear solution iterations to perform
(if zero, only the chi2 of the specified model is calculated)
    • [compshape] -- The type of model component to solve for; currently,
'P' for a point, 'G' for a Gaussian (default='P')
    • [par] -- The starting guess for component parameters as a vector,
in this order: I (total flux density, Jy), longitudinal direction offset (arcsec), latitudinal direction offset (arcsec); and additionally for elliptical Gaussians, the major axis (arcsec), aspect ratio, and major axis position angle (degrees) (default=[1.0,0.0,0.0])
    • [vary] -- A vector of Booleans indicating which of the parameters
to allow to vary in the fit. If unspecified, all parameters will vary. This vector need only be as long as the position of the last non-varying (False) parameter.
    • [file] -- If specified, the name of an output componentlist
table containing the fit result. This componentlist table can be used, e.g., in {\tt imager.ft} to specify a model to predict onto the MODEL_DATA for subsequent processing (plotting or calibration).

Example:

cb.open('simplesource.ms');             # start calibrater tool
cb.setdata(msselect='FIELD_ID==3');     # select a single field

# Get chi2 of a specified point model:
cb.modelfit(niter=0,
             compshape='P',
             par=[3.4,0.1,-0.1]);

# Use same model as starting guess for 10-iteration fit
#  assign result to variable 'x'
x=cb.modelfit(niter=10,
                compshape='P',
                par=[3.4,0.1,-0.1]);

# Fit 10 more iterations, starting with previous
#  result, and store resulting componentlist on disk
cb.modelfit(niter=10,
             compshape='P',
             par=x,
             file='model1.cl');

# Use same model as starting guess for 10-iteration fit,
#   keeping flux density constant
cb.modelfit(niter=10,
             compshape='P',
             par=[3.4,0.1,-0.1],
             vary=[F,T,T]);


# Fit a Gaussian (a=1.2", aspect=0.8, pa=63deg)
cb.modelfit(niter=10,
             compshape='G',
             par=[3.4,0.1,-0.1,1.2,0.8,63]);

There are a number of additional calibrater functions which are used to adjust certain aspects of the solutions. These will be described in more detail below.

Calibration Interpolation

Calibration solutions (most notably G or T) must be interpolated onto the timestamps of the science target observations. Currently, three time-dependent interpolation options are available for specification in the interp parameter in setapply: nearest, linear, and aipslin. The nearest option applies the calibration factor nearest in time to each datum. The linear option applies to each datum an amplitude and phase which are (separately) interpolated (or extrapolated) from the two nearest (in time) calibration solutions. For phase, it is assumed that there is no cycle ambiguity to consider, i.e., the direction of the smaller phase difference (necessarily always < 180 degrees) between the two solutions is considered the correct direction for interpolation. The aipslin option emulates the on-the-fly calibration interpolation in classic AIPS, where the amplitude is interpolated linearly (like linear), but the interpolated phase is determined from linear interpolation of the real and imaginary parts. This mode avoids the complication of determining the minimum phase ambiguity, but the result is decidedly non-linear in phase for interpolations over more than a few tens of degrees (as the phase difference between interpolating solutions approaches 180 degrees, aipslin tends toward nearest for the phase interpolation).
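
The practical difference between linear and aipslin is easy to see numerically. The following standalone sketch (plain Python with numpy, not calibrater code) assumes two unit-amplitude solutions 120 degrees apart in phase and interpolates a quarter of the way between them:

import numpy as np

p1, p2 = 0.0, 120.0                      # bracketing solution phases (degrees)
f = 0.25                                 # fractional time between the solutions

# 'linear': the phase itself is interpolated
phase_linear = p1 + f*(p2 - p1)          # 30.0 degrees

# 'aipslin': the phase follows the linearly interpolated real/imag parts
z1, z2 = np.exp(1j*np.radians(p1)), np.exp(1j*np.radians(p2))
phase_aipslin = np.degrees(np.angle(z1 + f*(z2 - z1)))   # ~19.1 degrees

print(phase_linear, phase_aipslin)

The two modes agree closely for small phase differences between bracketing solutions, and diverge as that difference grows, as described above.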

Incremental Calibration

It is occasionally desirable to solve for and apply calibration incrementally. This is the case when a calibration table of a certain type already exists (from a previous solve), an incremental solution of the same type and relative to the first is required, and it is not possible to recover the cumulative solution by a single solve.

Much of the time, it is, in fact, possible to recover the cumulative solution. This is because the equation describing the solution for the incremental calibration (using the original solution), and that describing the solution for their product, are fundamentally the same equation---the cumulative solution, if unique, must always be the same no matter what the initial solution is. One circumstance where an incremental solution is necessary is the case of phase-only self-calibration relative to a full amplitude and phase calibration already obtained (from a different field).

For example, a phase-only ``G'' self-calibration on a target source may be desired to tweak the full amplitude and phase ``G'' calibration already obtained from a calibrator. The initial calibration (from the calibrator) contains amplitude information, and so must be carried forward, yet the phase-only solution itself cannot (by definition) recover this information, as a full amplitude and phase self-calibration would. In this case, the initial solution must be applied while solving for the phase-only solution, then the two solutions combined to form a cumulative calibration embodying the net effect of both. In terms of the Measurement Equation, the net calibration is the product of the initial and incremental solutions.

The analog of accumulate in classic AIPS is the use of CLCAL to combine a series of (incremental) SN calibration tables to form successive (cumulative) CL calibration tables.

Cumulative calibration tables also provide a means of generating carefully interpolated calibration, on variable user-defined timescales, that can be examined prior to application to the data with setapply and correct. The solutions for different fields and/or spectral windows can be interpolated in different ways, with all solutions stored in the same table.

The only difference between incremental and cumulative calibration tables is that incremental tables are generated directly from the data via solve or (in the near future) from other ancillary data (e.g. weather information), and cumulative tables are generated from other cumulative and incremental tables via accumulate. In all other respects (internal format, application to data via setapply and correct, plotting with calplot, etc.), they are the same, and therefore interchangeable. Thus, accumulate and cumulative calibration tables need only be used when circumstances require it.

The accumulate function represents a generalization on the classic AIPS CLCAL model of cumulative calibration in that its application is not limited to accumulation of ``G'' solutions (the SN/CL tables of classic AIPS are the analog of ``G'' (and, implicitly, ``T'') in CASA). In principle, any basic calibration type can be accumulated (onto itself), as long as the result of the accumulation (matrix product) is of the same type. This is true of all the basic types, except ``D''. Accumulation is currently supported for ``B'', ``G'', and ``T'', and, in future, ``F'' (ionospheric Faraday rotation), ``J'' (generic full-polarization calibration), fringe-fitting, and perhaps others. Accumulation of certain specialized types (e.g., ``GSPLINE'', ``TOPAC'', etc.) onto the basic types will be supported in the near future. The treatment of various calibrations derived from ancillary data (e.g., system temperatures, weather data, WVR, etc.), as they become available, will also make use of accumulate to achieve the net calibration.

Note that accumulation only makes sense if treatment of a uniquely incremental solution is required (as described above), or if a careful interpolation or sampling of a solution is desired. In all other cases, re-solving for the type in question will suffice to form the net calibration of that type. For example, the product of an existing ``G'' solution and an amplitude and phase ``G'' self-cal (solved with the existing solution applied), is equivalent to full amplitude and phase ``G'' selfcal (with no prior solution applied), as long as the timescale of this solution is at least as short as that of the existing solution.

Use of accumulate is straightforward:

The tablein parameter is used to specify the existing cumulative calibration table to which an incremental table is to be applied. Initially, no such table exists, and accumulate will generate one from scratch (on-the-fly), using the timescale (in seconds) specified by the parameter t. These nominal solutions will be unit-amplitude, zero-phase (i.e., unit matrix) calibration, ready to be adjusted by accumulation according to the settings of other parameters. When t is negative (the default), the table name specified in tablein must exist and will be used.

The incrtable parameter is used to specify the incremental table that should be applied to tablein. The calibration type of incrtable sets the type assumed in the operation, so tablein must be of the same type. If it is not, accumulate will exit with an error message. (Certain combinations of types and subtypes may be supported by accumulate in the future.)

The tableout parameter is used to specify the name of the output table to write. If un-specified (or ``''), then tablein will be overwritten. Use this feature with care, since an error here will require building up the cumulative table from the most recent distinct version (if any).

The field parameter specifies those field names in tablein to which the incremental solution should be applied. The solutions for other fields will be passed to tableout unaltered. If the cumulative table was created from scratch in this run of accumulate, then the solutions for these other fields will be unit-amplitude, zero-phase, as described above.

The calfield parameter is used to specify the fields to select from incrtable to use when applying to tablein. Together, use of field and calfield permit completely flexible combinations of calibration accumulation with respect to fields. Multiple runs of accumulate can be used to generate a single table with many combinations. In future, a ``self'' mode will be enabled that will simplify the accumulation of field-specific solutions.

The interp parameter is used to specify the interpolation type to use on the incremental solutions, as in setapply. The currently available interpolation types are ``nearest'', ``linear'', and ``aipslin''. See the setapply URM documentation for more details.

Here is an example:

cb.open('ap366.sim')

# obtain G solutions from calibrator
cb.setdata(msselect='FIELD_ID IN [9,11]')
cb.setsolve(type='G',table='cal.G0',t=300)
cb.solve()

# obtain proper flux density scale
cb.fluxscale(tablein='cal.G0', tableout='cal.G1',
             reference='1328+307', transfer="0917+624")

# generate cumulative table for target source on 20s timescale
cb.accumulate(tablein='',incrtable='cal.G1',tableout='cal.cG0',
             field='0957+561',calfield='0917+624',
             interp='linear',t=20)

# apply this calibration to target
cb.setdata(msselect='FIELD_ID==10')
cb.setapply(type='G',table='cal.cG0',interp='linear')
cb.correct()

#    (image target with imager tool)

# phase-selfcal target on 60s timescale
cb.setdata(msselect='FIELD_ID==10')
cb.setapply(type='G',table='cal.cG0',interp='linear')
cb.setsolve(type='G',table='cal.G2',t=60,phaseonly=T)
cb.solve()

# accumulate new solution onto existing one
cb.accumulate(tablein='cal.cG0',incrtable='cal.G2',tableout='cal.cG1',
             field='0957+561',calfield='0957+561',
             interp='linear')

# apply new cumulative solution to data
cb.setapply(type='G',table='cal.cG1',interp='linear')
cb.correct()

#   (another round of imaging, etc.)

cb.done()

Caveats

Use of the calibrater tool is not without some caveats:

  • Currently only one instance of each calibration type is allowed
(e.g., only one table with G, P, B, or T solutions is allowed).
  • Only one type can be arranged for solving per execution of the solve
method.

These limitations will be relaxed in the near future.

Calibration models for absolute flux density

When solving for visibility-plane calibration, the calibrater tool compares the observed DATA column with the MODEL_DATA column. The first time that an imager or calibrater tool is constructed for a given MS, the MODEL_DATA column is created and initialized with unit point source flux density visibilities (unpolarized) for all sources (e.g. AMP=1, phase=0^{\circ}). The imager.setjy function is then used to set the proper flux density for flux calibrators. For sources that are recognized flux calibrators (listed in Table \ref{fluxcal.table}), imager.setjy will calculate the flux densities, Fourier transform the data and write the results to the MODEL_DATA column. For the VLA, the default source models are customarily point sources defined by the Baars or Perley-Taylor flux density scales, or point sources of unit flux density if the flux density is unknown. The MODEL_DATA column can also be filled with a model generated from an image of the source (e.g. the Fourier transform of an image generated after initial calibration of the data).

%BEGINTABLE{label="tbl:fluxcal.table" caption="Recognized Flux Density Calibrators."}%
3C Name   B1950 Name   J2000 Name
3C286     1328+307     1331+305
3C48      0134+329     0137+331
3C147     0538+498     0542+498
3C138     0518+165     0521+166
--        1934-638     --
3C295     1409+524     1411+522
%ENDTABLE%

By default the imager.setjy function will cycle through all fields and spectral windows, setting the flux density either to 1 Jy (unpolarized), or if the source is recognized as one of the calibrators in the above table, to the flux density (assumed unpolarized) appropriate to the observing frequency. To run imager.setjy on a measurement set called data.ms:

im.open(filename='data.ms')         # Start the imager tool, attach to data.ms
im.setjy()                          # set all fields and spectral windows

To limit this operation to certain fields and spectral windows, use the fieldid and/or spwid parameters, which take single values:

im.setjy(fieldid=0)               # only set the flux density of
                                  # the first field (all spectral windows)

im.setjy(fieldid=1,spwid=17)      # only set the flux density of
                                  # the second field in spectral window 17

The full-polarization flux density (I,Q,U,V) may also be explicitly provided:

im.setjy(fieldid=1,spwid=16,       # Run setjy on the second field, spwid 16
     fluxdensity=[3.5,0.2,0.13,0.0]) #  and set I,Q,U,V explicitly

In the near future, it will be possible to set point source models directly (and independent of the MODEL_DATA column) using a function in the calibrater tool.

If the flux density calibrator is resolved at the observing frequency, the point source model generated by imager.setjy will not be appropriate. If available, a model image of the resolved source at the observing frequency may be used to generate the appropriate visibilities with the imager.ft function, e.g.:

im.setdata(fieldid=0)     # Select the fieldid associated with 3C286
im.ft(model='3c286.mod')  # Read the model image on disk, Fourier transform
                            #  the image and put resulting visibilities
                            #  in the MODEL_DATA column

In this example, a model image of 3C286 will be used to generate model visibilities. It is important that the model image {\em not} be one convolved with a finite beam; it must have units of Jy/pixel (not Jy/beam). Also, the frequency range of the image must cover the frequencies in the dataset. Finally, the amplitude scale in the image must be correct (beware of variation due to non-zero spectral index).

In the near future, resolved models for the standard calibrators at standard frequencies will be made available in the \casa\ data repository. Use of these models, including spectral index and frequency adjustments, will be supported.

Types of calibrations to solve for and apply

The following sections describe the basics of solving for the various types of calibration solutions. The examples given here are very general. The assembly of these building blocks into calibration patterns for several use cases is demonstrated in the real examples provided at the end of this cookbook. Different instruments may also have special considerations, and these are described in the instrument-specific sections. To start a calibrater tool using a measurement set called 'data.ms', issue the following commands:

cb.open(filename='data.ms')          #  Load data into calibrater

A priori calibrations

This section describes how to apply _a priori_ calibration information that may be available. The setapply executions described here may be used in the course of any of the other calibration solve or correct steps.

Gain curves

Gain curve calibration involves compensating for the effects of elevation on the amplitude of the received signals at each antenna. Antennas are not absolutely rigid, and so their effective collecting area and net surface accuracy vary with elevation as gravity deforms the surface. This calibration is especially important at higher frequencies where the deformations represent a greater fraction of the observing wavelength. By design, this effect is usually minimized (i.e., gain maximized) for elevations between 45 and 60 degrees, with the gain decreasing at higher and lower elevations. Gain curves are most often described as 2nd- or 3rd-order polynomials in zenith angle.
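
To make this concrete, a gain curve is just a low-order polynomial evaluated at the zenith angle of each integration. The coefficients below are purely hypothetical; the actual VLA coefficients reside in the CASA data repository:

# Hypothetical 2nd-order gain curve g(za) = c0 + c1*za + c2*za**2
c = [1.0, 1.0e-3, -4.0e-5]            # illustrative coefficients only
el = 55.0                             # elevation in degrees
za = 90.0 - el                        # zenith angle in degrees
gain = c[0] + c[1]*za + c[2]*za**2    # relative gain at this elevation;
                                      #  the amplitude correction divides
                                      #  out this per-antenna factor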

At this writing, gain curve calibration has been implemented in CASA for the VLA, with gain curve polynomial coefficients available directly from the CASA data repository. To make gain curve corrections for VLA data, execute the following setapply command:

cb.setapply(type='GAINCURVE',t=-1)

Setting t=-1 ensures that the gain curve will be calculated per timestamp. On subsequent executions of solve and correct, the gain curve data appropriate to the observing frequencies will be automatically retrieved from the data repository and applied.

Opacity corrections

The troposphere is not completely transparent. At high radio frequencies (>15 GHz), water vapor and molecular oxygen begin to have a substantial effect on radio observations. According to the physics of radiative transmission, the effect is threefold. First, radio waves from astronomical sources are absorbed (and therefore attenuated) before reaching the antenna. Second, since a good absorber is also a good emitter, significant noise-like power will be added to the overall system noise. Finally, the optical path length through the troposphere introduces a time-dependent phase error. In all cases, the effects become worse at lower elevations due to the increased air mass through which the antenna is looking. In \casa, the opacity correction described here compensates only for the first of these effects, tropospheric attenuation, using a plane-parallel approximation for the troposphere to estimate the elevation dependence.

Opacity corrections are a component of calibration type 'T'. To make opacity corrections in CASA, an estimate of the zenith opacity is required (see observatory-specific chapters for how to measure zenith opacity). With a value for zenith opacity in hand (0.1 nepers, say), use the following setapply command:

cb.setapply(type='TOPAC',opacity=0.1,t=-1)

Subsequently executed solve and correct commands will apply an elevation-dependent opacity correction (scaled to 0.1 nepers at the zenith for all antennas) calculated at each timestamp (t=-1).
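
The size of this correction is easy to estimate by hand. The following is a standalone sketch of the plane-parallel scaling (the tool performs the equivalent computation per timestamp):

import math

tau_z = 0.1                                # zenith opacity in nepers
el = 30.0                                  # source elevation in degrees
airmass = 1.0/math.sin(math.radians(el))   # plane-parallel airmass: 2.0
transmission = math.exp(-tau_z*airmass)    # fraction transmitted: ~0.82
print(airmass, transmission)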

Generalizations to antenna- and time-dependent opacities, including derivation (from weather information) and solving (directly from the visibility data) capabilities, will be made available in the future.

Other a priori Calibrations

Other a priori calibrations will be added to the calibrater tool in the near future. These will include antenna-position (phase) corrections, system temperature normalization (amplitude) corrections, tropospheric phase corrections derived from Water Vapor Radiometry (WVR) measurements, instrumental line-length corrections, etc. Where appropriate, solving capabilities for these effects will also be added.

Time-dependent complex gain calibration (G, T, GSPLINE)

G solutions

Systematic time-dependent complex gain errors are almost always the dominant calibration effect, and a solution for them is almost always necessary before proceeding with any other calibration. Traditionally, this calibration type has been a catch-all for a variety of similar effects, including: the relative amplitude and phase gain for each antenna, phase and amplitude drifts in the electronics of each antenna, amplitude response as a function of elevation (gain curve), and tropospheric amplitude and phase effects. In CASA, it is possible to handle many of these effects separately, as available information and circumstances warrant, but it is still possible to solve for the net effect using calibration type G.

Generally speaking, type G can represent any per-spectral window multiplicative polarization- and time-dependent complex gain effect downstream of the polarizers. (Polarization independent effects upstream of the polarizers may also be treated with G.) Multi-channel data (per spectral window) will be averaged in frequency before solving (use calibration type B to solve for frequency-dependent effects within each spectral window).

To solve for G on, say, fields 1 & 2, on a 90s timescale, and apply, e.g., gain curve corrections:

cb.reset()                                # Reset the calibrater tool
cb.setdata(msselect='FIELD_ID IN [0,1]')  # Restrict data selection
cb.setapply(type='GAINCURVE', t=-1)       # Apply a priori gain curve
cb.setsolve(type='G',table='cal.G',       # Setup to solve for phase and
    t=90,refant=3)                        #  amplitude on a 90s timescale,
                                          #  write solutions to a table on
                                          #  disk called 'cal.G'
cb.solve()                                # Solve
cp.open('cal.G')                          # Open the solution table
cp.plot()                                 # Inspect solutions

These G solutions will be referenced to antenna 4 (refant=3, 0-based). Choose a well-behaved antenna located near the center of the array as the reference antenna. For non-polarization datasets, a reference antenna need not be specified, although you may specify one if you wish. If no reference antenna is specified, an effective phase reference that is an average over the data will be calculated and used. For data that require polarization calibration, you must choose a reference antenna that has a constant phase difference between the right and left polarizations (e.g., no phase jumps or drifts). If no reference antenna (or a poor one) is specified, the phase reference may have jumps in the R--L phase, and the resulting polarization angle response will vary during the observation, thus corrupting the polarization imaging.

To apply this solution to the calibrators and the target source (field 3, say):

cb.reset()                                  # Reset calibrater tool
cb.setdata(msselect='FIELD_ID IN [0,1,2]')  # Restrict data selection
cb.setapply(type='GAINCURVE',t=-1)          # Apply gain curve corrections
cb.setapply(type='G',table='cal.G')         # Apply G solutions
cb.correct()                                # Correct data and write to
                                             #  CORRECTED_DATA column in MS

The interpolation mode for G will be 'nearest', i.e., the nearest solution (in time) will be used for each datum. Options for more flexible interpolation are currently under development.

T solutions

At high frequencies, it is often the case that the most rapid time-dependent gain errors are introduced by the troposphere, and are polarization-independent. It is therefore unnecessary to solve for separate time-dependent solutions for both polarizations, as is the case for G. Calibration type T is available to calibrate such tropospheric effects, differing from G only in that a single common solution for both polarizations is determined. In cases where only one polarization is observed, type T is adequate to describe the time-dependent complex multiplicative gain calibration.

In the following example, we assume we have a 'G' solution obtained on a longish timescale (> a few minutes, say), and we want a residual T solution to track the polarization-independent variations on a very short timescale:

cb.reset()                               # Reset the calibrater tool
cb.setdata(msselect='FIELD_ID IN [0,1]') # Restrict data selection to calibrators
cb.setapply(type='G',table='cal120.G')   # Apply G solution on 2min timescale
cb.setsolve(type='T',table='cal.T',t=3)  # Setup for T solution on 3s timescale
cb.solve()                               # Solve for T

For dual-polarization observations, it will always be necessary to obtain a G solution to account for differences and drifts between the polarizations (which traverse different electronics), but solutions for rapidly varying polarization-independent effects such as those introduced by the troposphere will be optimized by using T. Note that T can be used in this way for self-calibration purposes, too.
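
To then apply the G and T solutions together when correcting the calibrators and target (field 3, say), mirroring the apply sequences above:

cb.reset()                                # Reset the calibrater tool
cb.setdata(msselect='FIELD_ID IN [0,1,2]')# Select calibrators and target
cb.setapply(type='G',table='cal120.G')    # Apply G solution (2min timescale)
cb.setapply(type='T',table='cal.T')       # Apply residual T solution
cb.correct()                              # Correct data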

Caveat: It is not yet possible to apply opacity corrections (TOPAC) while also solving for or applying ordinary T solutions, since both are treated internally as calibration type T. This constraint will be relaxed when incremental calibration is enabled.

GSPLINE solutions

At high radio frequencies, where tropospheric phase fluctuates rapidly, it is often the case that there is insufficient SNR to obtain a robust G or T solution on timescales short enough to track the variation. In this case it is desirable to solve for a best-fit functional form for each antenna using the GSPLINE solver. The GSPLINE solver fits time-series of cubic B-splines to the phase and/or amplitude of the calibrator visibilities. Unlike ordinary G, a single common GSPLINE solution will be determined from data for all spectral windows and fields specified in setdata, and the resulting solution will be applicable to any field or spectral window in the same Measurement Set. An important consequence of this is that all fields used to obtain a GSPLINE amplitude solution must have models with accurate relative flux densities. (Use of incorrect relative flux densities will introduce spurious variations in the GSPLINE amplitude solution.)

The GSPLINE solver requires a number of unique parameters, and has its own specialized setsolve function, setsolvegainspline. The mode parameter controls whether phase or amplitude solutions (or both) are determined. The duration of each spline segment is controlled by splinetime. The actual splinetime will be adjusted so that an integral number of spline segments fits within the overall time range of the data.

Phase splines require that cycle ambiguities be resolved prior to the fit; this operation is controlled by npointaver and phasewrap. The npointaver parameter controls how many contiguous points in the time-series are used to predict the cycle ambiguity of the next point in the time-series, and phasewrap sets the threshold phase jump (in degrees) that would indicate a cycle slip. Large values of npointaver improve the SNR of the cycle estimate, but tend to frustrate ambiguity detection if the phase rates are large. The phasewrap parameter may be adjusted to influence when cycles are detected. Generally speaking, large values (>180^\circ) are useful when SNR is high and phase rates are low. Smaller values for phasewrap can force cycle slip detection when low SNR conspires to obscure the jump, but the algorithm becomes significantly less robust. More robust algorithms for phase-tracking are under development (including fringe-fitting).

To solve for GSPLINE phases, with splines of duration 600s:

cb.reset()                                # Reset the calibrater tool
cb.setdata(msselect='FIELD_ID IN [0,1]')  # Select gain calibrators
cb.setsolvegainspline(mode='PHAS',        # Set spline parameters; write to table
    splinetime=600,table='cal_ph.gspl')   #  cal_ph.gspl
cb.solve()                                # Solve for spline

To solve for GSPLINE amplitudes, and combine them with existing GSPLINE phase solutions:

cb.reset()                                # Reset the calibrater tool
cb.setdata(msselect='FIELD_ID IN [0,1]')  # Select gain calibrators
cb.setapply(type='GSPLINE',               # Apply previous solutions
    table='cal_ph.gspl')
cb.setsolvegainspline(mode='AMP',         # Set amplitude spline parameters;
    splinetime=1200,                       #  write both phase and amplitude
    table='cal_ampph.gspl')                #  solutions to table cal_ampph.gspl
cb.solve()                                # Solve for amplitude splines

The table 'cal_ampph.gspl' now contains both phase and amplitude solutions. This is an example of the incremental calibration mode supported by the GSPLINE solver. This can also be used to update the phase solutions themselves:

cb.reset()                                # Reset the calibrater tool
cb.setdata(msselect='FIELD_ID IN [0,1]')  # Select the gain calibrators
cb.setapply(type='GSPLINE',               # Apply previous solutions
    table='cal_ph.gspl')
cb.setsolvegainspline(mode='PHAS',        # Set spline parameters; write
    splinetime=600,table='cal_ph2.gspl')  #  solutions to table cal_ph2.gspl
cb.solve()                                # Solve for solutions

In this solve, the solutions in table 'cal_ph.gspl' are applied to the data, and a new (residual) phase solution is determined. The resulting table, 'cal_ph2.gspl', contains the cumulative effect of both solutions. Additionally, the first solution need not have been determined at the same frequency as the second; the first phase solution will be scaled by the appropriate frequency ratio before application. This technique is particularly effective for mm-interferometry data where the tropospheric phase rates are proportional to the observing frequency, and become difficult to track at the highest frequencies. E.g., for PdBI observations, solutions obtained at 3mm (where SNR is relatively good) are pre-applied for 1mm GSPLINE solutions (where SNR is worse). This improves the prospects for a decent 1mm solution since the residual phase will be much better behaved if the 3mm solutions work well. An example of how to apply scaled 3mm gain solutions to the 1mm band and then derive residual 1mm solutions is given below:

cb.reset()                                  # Reset the calibrater tool
cb.setdata(msselect='FIELD_ID IN [0,1,2,4] && DATA_DESC_ID IN [24]')
                                            # Select 1mm continuum & all cals
cb.setapply(type='BPOLY',                   # Arrange to apply pre-determined
    table='1mm.bpoly')                      #  1mm bandpass polynomial solutions
cb.setapply(type='GSPLINE',                 # Arrange to apply scaled 3mm gain
    table='3mm.gcal', rawspw=[7])           #  solutions in table 3mm.gcal that
                                            #  were derived from spwid=8
cb.setsolvegainspline(table='1mm.gcal',     # Set up to solve for 1mm phases
    mode='PHAS', preavg=0,                  #  only, no preaveraging, splinetime
    splinetime=10000,                       #  of 10,000s, using refant=1,
    refant=1, npointaver=10,                #  select npointaver & phasewrap to
    phasewrap=260, append=F)                #  avoid phasewraps in the solution
                                            #  and create a new table: 1mm.gcal
cb.solve()                                  # Solve for the 1mm gains
!ghostview 1mm.gcal.PHASE.ps &              # View the solutions in ps file

The GSPLINE solutions cannot yet be plotted using calplot. Instead, the solve generates a plot of each baseline's visibility data (as used in the solver), with the solution (the product of the antenna-based solutions) overlaid. This plot is stored as a postscript file named after the table, with a ``.ps'' suffix. In the future, this plot, as well as antenna-based plots of the spline solutions, will be made available in calplot (cp).

Currently, several caveats should be kept in mind when using the GSPLINE solver:

  • GSPLINE solutions are only possible for single-polarization observations with a contiguous antenna list;
  • GSPLINE cannot be combined with use of ordinary G.

These limitations will be relaxed soon.

Flux density scale calibration

The 'G' or 'T' solutions obtained from calibrators for which the flux density was unknown and assumed to be 1 Jy are correct in a time- and antenna- relative sense, but are mis-scaled by a factor equal to the inverse of the square root of the true flux density. This scaling can be corrected by enforcing the constraint that mean gain amplitudes determined from calibrators of unknown flux density should be the same as determined from those with known flux densities. The calibrater tool's fluxscale method exists for this purpose. Given a 'G' table, 'cal.G', containing solutions for a flux density calibrator (3C286, say) and for one or more random calibrator sources with unknown flux densities (0234+285 and 0323+022, say) use the fluxscale function as follows:

cb.fluxscale(tablein='cal.G',                # Select input table
          tableout='cal.Gflx',               # Write scaled solutions to cal.Gflx
          reference='3C286',                 # 3C286 = flux calibrator
          transfer=['0234+285','0323+022'])  # Select calibrators to scale

The output table, 'cal.Gflx', contains solutions that are properly scaled for all calibrators.

Note that the assertion that the gain solutions are independent of the calibrator includes the assumption that the gain amplitudes are not systematically time-dependent. While synthesis antennas are designed as much as possible to achieve this goal, in practice a number of effects conspire to frustrate it. When relevant, it is advisable to pre-apply gain curve and opacity corrections when solving for the 'G' solutions that will be fluxscaled. When the G solutions are essentially constant for each calibrator separately, the fluxscale operation is likely to be robust.
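
For example, a minimal sketch that pre-applies both a priori corrections before the G solve (using the 0.1 neper zenith opacity from above; selections and table names are illustrative):

cb.reset()                                # Reset the calibrater tool
cb.setdata(msselect='FIELD_ID IN [0,1]')  # Select the calibrators
cb.setapply(type='GAINCURVE',t=-1)        # Pre-apply gain curve
cb.setapply(type='TOPAC',opacity=0.1,t=-1)#  and opacity corrections
cb.setsolve(type='G',table='cal.G',       # Solve for G on a 90s timescale;
    t=90,refant=3)                        #  these solutions can be fluxscaled
cb.solve()                                # Solve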

The fluxscale function can be executed on either 'G' or 'T' solutions, but it should only be used on one of these types if solutions exist for both.

If the reference and transfer fields were observed in different spectral windows, the refspwmap parameter may be used to achieve the scaling calculation across spectral window boundaries. The refspwmap parameter takes a vector of indices indicating the spectral window mapping for the reference fields, such that refspwmap[i]=j means that reference field amplitudes from spectral window j will be used for spectral window i.

cb.fluxscale(tablein='cal.G',                # Select input table
          tableout='cal.Gflx',               # Write scaled solutions to cal.Gflx
          reference='3C286',                 # 3C286 = flux calibrator
          transfer=['0234+285','0323+022'],  # Select calibrators to scale
          refspwmap=[0,0,0])                 # Use spwid 1 scaling for spwids 2 & 3

cb.fluxscale(tablein='cal.G',                # Select input table
          tableout='cal.Gflx',               # Write scaled solutions to cal.Gflx
          reference='3C286',                 #  3C286 = flux calibrator,
          refspwmap=[0,0,1,1],               #  select spwids for scaling,
          transfer=['0234+285','0323+022'])  #  select calibrators to scale,

In this example, reference amplitudes from spectral window 1 will be used for spectral windows 1 and 2 and reference amplitudes from spectral window 2 will be used for spectral windows 3 and 4.

Resolved flux density calibrators

If the flux density calibrator is resolved, the assumption that it is a point source will cause solutions on outlying antennas to be biased in amplitude. In turn, the flux-density scaling step will be biased on these antennas as well. In general, it is best to use a resolved model for the calibrator as described in Section \ref{cal.models}, but if such a model is not available, it is important to limit the solution on the flux density calibrator to only the subset of antennas that have baselines short enough that the point-source assumption is valid. This can be done by using antenna and uvrange selection when solving for the flux density calibrator. For example, if antennas 1 through 8 are the antennas among which the baselines are short enough that the point-source assumption is valid, and we want to be sure to limit the solutions to baselines shorter than 15000 wavelengths, then we can assemble properly scaled solutions for the other calibrator as follows (note: specifying both an antenna and a uvrange constraint prevents antennas with only a small number of baselines within the specified uvrange from being included in the solution; such antennas would have poorly constrained solutions):

# first solve for gain solutions for the flux density calibrator
# (3C286 observed in field 1) using a subset of antennas
cb.reset()
cb.setdata(msselect='FIELD_ID IN [0] ' # Select the flux density calibrator
           '&& ANTENNA1 IN [0:7] '     #  antennas 1-8 (adjacent string
           '&& ANTENNA2 IN [0:7]',     #  literals are concatenated),
            uvrange=[0,15])            #  and limit uvrange to 0-15klambda
cb.setsolve(type='G',                 # Set up to solve for gain solutions
            table='cal.G',t=90)        #  on 90s timescales, write solutions
                                       #  to table called cal.G
cb.solve()                            # Solve

# now solve for other calibrator (0234+285 in field 2) using all antennas
#  (implicitly) and append these solutions to the same table
cb.reset()
cb.setdata(msselect='FIELD_ID IN [1]')
cb.setsolve(type='G',table='cal.G',
            t=90,append=T)             # Set up to write to the same table
cb.solve()                            # Solve

# now run fluxscale to adjust scaling
cb.fluxscale(tablein='cal.G',         # Input table with unscaled cal solutions
              tableout='cal.Gflx',     # Write scaled solutions to cal.Gflx
              reference='3C286',       # Use 3c286 as ref with limited uvrange
              transfer='0234+285')     # Transfer scaling to 0234+285

The fluxscale calculation will be performed using only the antennas common to both fields, but the result will be applied to all antennas on the transfer field.

Bandpass calibration (B, BPOLY)

B solutions

For channelized data, it is often desirable to solve for the gain variations in frequency as well as in time. Variation in frequency arises as a result of non-uniform filter passbands or other dispersive effects in signal transmission. It is usually the case that these frequency-dependent effects vary on timescales much longer than the time-dependent effects handled by types 'G' and 'T'. Thus, it makes sense to solve for them as a separate term: 'B'.

Calibration type 'B' differs from 'G' only in that it is determined for each channel in each spectral window. It is possible to solve for it as a function of time, but it is most efficient to keep the 'B' solving timescale as long as possible, and use 'G' or 'T' for rapid-timescale variations.

'B' solutions are limited by the SNR available per channel, which may be quite small. It is therefore important that the data be coherent over the time-range of the 'B' solutions. As a result, 'B' solutions are almost always preceded by a 'G' or 'T' solve. In turn, if the 'B' solution improves the frequency domain coherence significantly, a 'G' or 'T' solution following it will be better than the original.

To solve for 'B' (on a very long timescale, i.e., constant for an observation shorter than 1 day), using a prior G solution:

cb.reset()
cb.setdata(mode='channel',            # Select bandpass calibrator (field 1),
            nchan=58,start=2,         #  using channels 3-60 (avoid end
            msselect='FIELD_ID==0')   #  channels)
cb.setapply(type='G',table='cal.G')   # Set up to apply gain solutions derived
                                      #  for field 1 previously
cb.setsolve(type='B',table='cal.B',   # Set up to solve for bandpass response
            t=86400)                  #  as a function of frequency using a
                                      #  long timescale (assumes bandpass
                                      #  is constant for length of observation)
cb.solve()                            # Solve for the bandpass solutions

Note that the solution has only been obtained for the subset of 58 channels starting with channel 3. Explicit channel selection like this is only necessary if it is desired that some channels be avoided (e.g., end channels that may not be well-behaved). The default is to obtain a solution for every channel.

BPOLY solutions

For some observations, it may be the case that the SNR per channel is insufficient to obtain a usable per-channel 'B' solution. In this case it is desirable to solve instead for a best-fit functional form for each antenna using the BPOLY solver. The 'BPOLY' solver fits polynomials to the amplitude and/or phase of the calibrator visibilities as a function of frequency. Unlike ordinary 'B', a single common BPOLY solution will be determined for all spectral windows specified (or implicit) in the setdata execution. As such, it is usually most meaningful to select individual spectral windows for BPOLY solves, unless groups of adjacent spectral windows are known a priori to share a single continuous bandpass response over their combined frequency range (e.g., PdBI data).

The 'BPOLY' solver requires a number of unique parameters, and has its own specialized setsolve function, setsolvebandpoly. The degamp and degphase parameters indicate the polynomial degree desired for the amplitude and phase solutions. The maskcenter parameter is used to indicate the number of channels in the center of the band to avoid passing to the solution (e.g., to avoid Gibbs ringing in central channels for PdBI data).

For example, to solve for a BPOLY (5th order in amplitude, 7th order in phase), using data from field 2, with G corrections pre-applied:

cb.reset()
cb.setdata(msselect='FIELD_ID IN [1]')
cb.setapply(type='G',table='cal.G')
cb.setsolvebandpoly(table='cal.bpoly',degamp=5,degphase=7)
cb.solve()

Note that all available spectral windows will be used to obtain a single solution spanning them all. If separate solutions for each spectral window are desired, solve for each separately, e.g., if there are 3 spectral windows (0,1,2):

cb.reset()
cb.setdata(msselect='FIELD_ID IN [1] && DATA_DESC_ID==0')
cb.setapply(type='G',table='cal.G')
cb.setsolvebandpoly(table='cal.bpoly1',degamp=5,degphase=7)
cb.solve()

cb.reset()
cb.setdata(msselect='FIELD_ID IN [1] && DATA_DESC_ID==1')
cb.setapply(type='G',table='cal.G')
cb.setsolvebandpoly(table='cal.bpoly2',degamp=5,degphase=7)
cb.solve()

cb.reset()
cb.setdata(msselect='FIELD_ID IN [1] && DATA_DESC_ID==2')
cb.setapply(type='G',table='cal.G')
cb.setsolvebandpoly(table='cal.bpoly3',degamp=5,degphase=7)
cb.solve()

!ghostview cal.bpoly1.ps &  # view resulting solution plots
!ghostview cal.bpoly2.ps &
!ghostview cal.bpoly3.ps &


Each solution is stored in a separate table. As a result, subsequent calibration operations must be undertaken for each spectral window separately.
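
For example, a minimal sketch of a per-spectral-window correction using these tables (the target field selection is illustrative):

cb.reset()                                    # Reset the calibrater tool
cb.setdata(msselect='FIELD_ID IN [2] && DATA_DESC_ID==0')
cb.setapply(type='G',table='cal.G')           # Apply G solutions
cb.setapply(type='BPOLY',table='cal.bpoly1')  # Apply first spw's BPOLY solutions
cb.correct()                                  # Correct; repeat with
                                              #  DATA_DESC_ID==1 (cal.bpoly2)
                                              #  and DATA_DESC_ID==2 (cal.bpoly3)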

The BPOLY solutions cannot yet be plotted using calplot. Instead, the solve generates a plot of each baseline's visibility data (as used in the solver), with the solution (the product of the antenna-based solutions) overlaid. This plot is stored as a postscript file named after the table, with a ``.ps'' suffix. In the future, this plot, as well as antenna-based plots of the polynomial solutions, will be made available in calplot (cp).

Note that BPOLY does not yet support time-dependent solutions. Also, BPOLY solutions cannot yet be combined with use of ordinary B. These constraints will be relaxed in the near future.

Instrumental Polarization Calibration (D)

TBD

Baseline-based Calibration (M, MF, K)

Some calibration cannot be factored into antenna-based factors. Such effects occur physically after the signals from each antenna have been combined in the correlator. These may occur as a consequence of finite time- and frequency-averaging over other calibration factors before they are corrected. Any loss of coherence in such averaging gives rise to residual baseline-based errors which depend on the details of the combination of antenna-based errors on the baseline. Also, if only a single baseline is available, a single calibration solution for that baseline is obtainable, and short of additional information, it simply cannot be factored into unique terms for each antenna. Therefore, such situations necessarily require baseline-based calibration solutions.

It is important to note that source structure also introduces baseline-based information in the visibilities, and so it can be difficult to distinguish baseline-based errors from the information in which we are interested, namely, the astronomical visibilities (which are imaged). Therefore, baseline-based calibration factors should be used with great care, to avoid changing the desired astrophysical information in ways that cannot be determined. As indicated below, there are some calibration circumstances where such extreme measures are warranted (e.g., resolved bandpass calibrators), but the careful observer should always consider how different calibration factors affect the data, and this is especially important for baseline-based factors.

M, MF solutions (Generic Baseline-based Gain)

The calibration types M and MF provide the baseline-based analogs of G and B, respectively. M provides for (per-spectral window) time-dependent gain calibration for each baseline independently. MF provides for a frequency-dependent (within each spectral window) version of M. One or the other of these types can be used to compensate for any residual closure errors, usually after all antenna-based calibrations have converged. Since these types can absorb legitimate visibility information from the data, they should be used with great care, and usually only on sources for which there is no doubt about the source structure. It is therefore largely meaningless to use them in self-calibration---they will only reinforce the current source model, and can even artificially absorb thermal noise in the data.

To solve for M on an hourly timescale, using previously determined G and B solutions:

cb.reset()                                # Reset the calibrater tool
cb.setdata(msselect='FIELD_ID IN [0,1]')  # Restrict data selection
cb.setapply(type='G',table='cal.G')       # Apply existing G solution
cb.setapply(type='B',table='cal.B')       # Apply existing B solution

cb.setsolve(type='M',table='cal.M',      # Setup to solve for M on
    t=3600)                               #  an hourly timescale,
                                          #  write solutions to a table on
                                          #  disk called 'cal.M'
cb.solve()                               # Solve

Note that refant is, of course, meaningless for baseline-based calibration types.

To apply these solutions, along with G and B:

cb.reset()                                  # Reset calibrater tool
cb.setdata(msselect='FIELD_ID IN [0,1,2]')  # Restrict data selection
cb.setapply(type='G',table='cal.G')         # Apply G solutions
cb.setapply(type='B',table='cal.B')         # Apply B solutions
cb.setapply(type='M',table='cal.M')         # Apply M solutions
cb.correct()                                # Correct data and write to
                                            #  CORRECTED_DATA column in MS

Use of MF is essentially identical, except that it will probably be used on even longer timescales than M (as for B, compared to G).

An M solution can be especially useful for obtaining normalized bandpass solutions from significantly resolved calibrators where a source model adequate for pre-B G-solving is not available. Ordinarily, if the bandpass calibrator is nearly point-like, we first solve for G (using a point-like model):

\vec{V}_{ij}~=~G_{ij}~\vec{V}_{ij}^{\mathrm{~point}}

Then, use this G to solve for B (using the same model):

\vec{V}_{ij}~=~B_{ij}~\left(G_{ij}~\vec{V}_{ij}^{\mathrm{~point}}\right)

However, we will get (at best) awkwardly scaled and/or poorly converged B solutions if our source model is not sufficient for either of these solutions. In this circumstance, we can use M to absorb both the time-dependent gain variations during the bandpass calibration scan and the source structure. First solve for M, using a point-like model:

\vec{V}_{ij}~=~M_{ij}~\vec{V}_{ij}^{\mathrm{~point}}

Then, use this solution to solve for B:

\left(M_{ij}^{-1}~\vec{V}_{ij}\right)~=~B_{ij}~\vec{V}_{ij}^{\mathrm{~point}}

The resulting B solution is nearly as good as one obtained with a good, resolved source model and G. It is somewhat less sensitive because more free parameters have been introduced, but this technique can often be useful. Note that the data for the bandpass calibrator, after correction by B and M, will be that of a point source, since M has absorbed the source structure information. This information has been sacrificed for a decent bandpass calibration applicable to other sources.
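
A minimal sketch of this two-step sequence, using the solve/apply patterns above (field selection and table names are illustrative):

cb.reset()                                # Reset the calibrater tool
cb.setdata(msselect='FIELD_ID IN [0]')    # Select the bandpass calibrator
cb.setsolve(type='M',table='cal.M',       # Solve for M against the default
    t=3600)                               #  point-like model
cb.solve()                                # Solve

cb.reset()                                # Reset the calibrater tool
cb.setdata(msselect='FIELD_ID IN [0]')    # Select the bandpass calibrator
cb.setapply(type='M',table='cal.M')       # Pre-apply the M solution
cb.setsolve(type='B',table='cal.B',       # Solve for B against the
    t=86400)                              #  point-like model
cb.solve()                                # Solve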

K solutions (Baseline-based fringe-fitting)

Fringe-fitting is essentially a generalization of ordinary phase calibration (the phase part of G, B, M, and/or MF) in which coefficients of the Taylor expansion of the phase in time and frequency are solved for. Usually, the expansion is only taken to first order in time and frequency. In this case, the phase can be written:

\phi = \phi(t_o,\nu_o) + 2\pi(\nu - \nu_o)\tau + 2\pi\nu(t - t_o)\dot{\tau}

In this equation, t_o and \nu_o are a fiducial time and frequency within the solution interval (where the phase is \phi(t_o,\nu_o)), and \tau and \dot{\tau} are the delay (phase slope in frequency) and delay-rate (phase slope in time), respectively.

Note: As written, this equation defines the group delay (by virtue of referring the frequency to the fiducial value) where the delay-rate is not necessarily the time derivative of the delay. For most current observations, this is the case simply because the noise on the delay solution far exceeds the variation implied by the delay-rates. The equation can also be written in terms of the phase delay, where the \nu_o is dropped from the second term, and the delay-rate is exactly the derivative of the delay. This mode will be useful for the wide-bandwidth instruments of the near future, and will be an option available for users of K.
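
As a purely numerical illustration of the model (not a CASA function; the helper below is hypothetical):

import math

# Hypothetical helper: evaluate the first-order (group-delay) fringe model
#   phi(t,nu) = phi0 + 2*pi*(nu - nu0)*tau + 2*pi*nu*(t - t0)*tau_dot
def fringe_phase(t, nu, phi0, tau, tau_dot, t0, nu0):
    return phi0 + 2.0*math.pi*(nu - nu0)*tau + 2.0*math.pi*nu*(t - t0)*tau_dot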

Evidently, fringe-fitting permits more accurate phase tracking in both time and frequency when delay and delay-rate errors are large. Such errors originate in independent clock and position errors of antennas and in errors in the direction to radio sources. Fringe-fitting is normally associated with VLBI because these clock and geometry errors are more significant there than for smaller connected-element arrays, but the wide bandwidths and high frequencies of future instruments will likely find fringe-fitting useful.

The current implementation of fringe-fitting in CASA is baseline-based and quite simple. It solves the above equation for \phi(t_o,\nu_o), \tau, and \dot{\tau} for each baseline independently, assuming less than one cycle of phase over the bandwidth and duration of the solution interval.

Note: Solutions outside the first ambiguity require implementation of a first-guess mechanism (an FFT) to reduce the phase to within the first ambiguity. This is still under development.

A separate phase and delay is determined for each polarization, but only one rate is determined for both.

Note: This is almost always justified since the phase and delay correspond to signal path lengths, and the delay rate corresponds to the speed of the signal along the path. The phase and delay are likely to be different for different polarizations since these signals traverse different electronics with different path lengths, but the speed at which the signal travels is the same. In any case, this constraint will be relaxable in a future version.

To obtain a scan-based K solution, with an existing MF solution applied:

cb.setdata(msselect='FIELD_ID==0')        # select data
cb.setapply(type='MF',table='cal.MF')     # arrange apply of MF
cb.setsolve(type='K',table='cal.K',t=0)   # arrange solve for K
cb.solve()                                # solve

Application is like any other type:

cb.setdata(msselect='FIELD_ID IN [0,1]')  # select data
cb.setapply(type='MF',table='cal.MF')     # arrange apply of MF
cb.setapply(type='K',table='cal.K')       # arrange apply of K
cb.correct()                                # correct

Note that only nearest interpolation is available for application of K. This will be remedied in the near future.

Solution Examination

The calplot tool is available for examining solutions of all of the basic solvable types (G,T,B,D,M,MF,K). To plot amplitude or phase as a function of time for G (or T,M) solutions:

cp.open('cal.G')
cp.setdata(plottype='AMP')
cp.plot()
cp.setdata(plottype='PHASE')
cp.plot()

Similarly, to plot amplitude or phase as a function of channel for B solutions:

cp.open('cal.B')
cp.setdata(plottype='AMP')
cp.plot()
cp.setdata(plottype='PHASE')
cp.plot()

Note that if there is more than one timestamp in the B table, the user will be queried to interactively advance the plot to each timestamp. Other plot types are also available:

  • [AMP] -- Amplitude vs. time or chan
  • [PHASE] -- Phase vs. time or chan
  • [RLPHASE] -- R-L phase vs. time
  • [XYPHASE] -- X-Y phase vs. time
  • [DELAY] -- Delay vs. time
  • [DELAYRATE] -- Delay rate vs. time

Several selection parameters are available to obtain plots of subsets of solutions:

  • [antennas] -- A list of antenna indices to plot (default: all antennas)
  • [caldescids] -- A list of calibration description ids to plot (default: all); see the sketch below
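
A hypothetical selection sketch, assuming these parameters are accepted by setdata alongside plottype (the exact calling sequence may differ; consult the calplot documentation):

cp.open('cal.G')                          # load cal table
cp.setdata(plottype='PHASE',              # plot phase solutions for
    antennas=[0,1,2],caldescids=[0])      #  antennas 1-3, first cal description
cp.plot()                                 # plot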

The above examples will show the solutions for all antennas and spectral windows on a single plot. If per-antenna solution plots are desired, use multiplot=True and specify the number of plots to appear on each page using nxpanels and nypanels. For example, to show 5 plots per page of G phase solutions, arranged vertically:

cp.open('cal.G')                          # load cal table
cp.plotoptions(nypanels=5,multiplot=true) # default nxpanels=1
cp.plot()                                 # default is PHASE

To show 6 plots per page of B amplitudes on a 3x2 grid:

cp.open('ngc5921.bcal')                   # load cal table
cp.plotoptions(3,2,true)                  # setup multiplot grid
                                          # nx=3, ny=2, multiplot=true
cp.plot('AMP')                            # plot amplitude

For B, if there is more than one timestamp and multiplot=T, the antenna plots will be cycled through for each timestamp in turn.

Currently, calplot cannot plot parameterized solutions, e.g., GSPLINE, BPOLY, opacity, gaincurve, or parallactic angle. A design for a much more capable solution plotting (and editing) function is being developed.

Application of the final calibration

After all relevant calibration types have been determined, they must be applied to the target source. For example, to apply G and B solutions to source fields 2,3,4:

cb.reset()
cb.setdata(msselect='FIELD_ID IN [2,3,4]')
cb.setapply(type='G',table='cal.Gflx')
cb.setapply(type='B',table='cal.B')
cb.correct()

Different detailed combinations of calibration application can be achieved by running this sequence more than once, and including specific field and/or spectral window selections in setdata and each setapply, as appropriate. For example, to limit the G solutions applied to field 3 to those obtained from field 1 (otherwise, same as above):

cb.reset()
cb.setdata(msselect='FIELD_ID IN [2,4]')
cb.setapply(type='G',table='cal.Gflx')
cb.setapply(type='B',table='cal.B')
cb.correct()

cb.reset()
cb.setdata(msselect='FIELD_ID IN [3]')
cb.setapply(type='G',table='cal.Gflx',select='FIELD_ID==1')
cb.setapply(type='B',table='cal.B')
cb.correct()

Note that the pairs G and GSPLINE, B and BPOLY, and T and TOPAC, cannot currently be combined in a single calibration sequence since, e.g., G and GSPLINE are considered internally by the calibrater tool as the same calibration type, and only one instance of each type can be used. Similarly, only one version of each individual type may be used in one calibration sequence. These constraints will soon be relaxed. (This will greatly expand the flexibility of self-calibration and incremental calibration.)

It is important to remember the relative nature of each calibration term. A term solved for in the presence of others is, in effect, residual to the others, and so must be used in combination with them (or new versions of them) in subsequent processing. At the same time, it is important to avoid isolating the same calibration effects in more than one term, e.g., by solving for both G and T separately (without applying the other), and then using them together. It is always a good idea to examine the corrected data after calibration.
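
For a quick look at the corrected data, the msplot tool can be used (following the plotting patterns shown elsewhere in this chapter):

mp.open('data.ms')                       # load data into msplot tool
mp.uvdist(column='corrected')            # plot corrected data vs. uv distance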

Interpolation

As described above, 'nearest', 'linear', and 'aipslin' interpolation algorithms are currently available.
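
For example, assuming setapply accepts an interp parameter for selecting the algorithm (a sketch; check the setapply documentation for the exact argument name):

cb.setapply(type='G',table='cal.G',       # Apply G solutions using
    interp='linear')                      #  linear interpolation in time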

Visibility Model Fitting

It is often desirable to fit simple analytic source component models directly to visibility data. Such fitting has its origins in early interferometry, especially VLBI, where arrays consisted of only a few antennas and the calibration and deconvolution problems were poorly constrained. These methods overcame the calibration uncertainties by fitting the models to calibration-independent closure quantities and the deconvolution problem by drastically limiting the number of free parameters required to describe the visibilities. Today, even with larger and better calibrated arrays, it is still desirable to use visibility model fitting in order to extract geometric properties such as the positions and sizes of discrete components in radio sources. Fits for physically meaningful component shapes such as disks, rings, and optically thin spheres, though idealized, enable connecting source geometry directly to the physics of the emission regions.

The first implementation of visibility model fitting in CASA is in the calibrater; this is because it will be useful to combine solutions for some calibration components (e.g., instrumental polarization) with a solution for a simple (polarized) source component. Adopting the calibrater's existing solving mechanism (as well as other infrastructure, including data selection) ensures the consistency required to eventually implement such combined solutions, and enables a near-term demonstration of basic model fitting. Eventually, calibrater will retain a limited form of visibility model fitting, and a much more capable stand-alone visibility model fitting application will be developed.

Visibility model fitting in calibrater is controlled entirely by the modelfit function, which allows fits for a single point or Gaussian component. The user specifies the number of non-linear solution iterations (niter), the component type (compshape), an initial guess for the component parameters (par), and, optionally, a vector of Booleans selecting which component parameters should be allowed to vary (*var), and a filename in which to store a CASA componentlist for use in other applications (file). The function returns a vector containing the resulting parameter list. This vector can be edited at the command line and specified as input (par) for another round of fitting. See the calibrater tool mechanics section for details.

The par parameter is currently the only way to specify the starting parameters for the fit. For points, there are three parameters: I (total flux density), and relative direction (RA, Dec) offsets (in arcsec) from the observation's phase center. For Gaussians, there are three additional parameters: the Gaussian's semi-major axis width (arcsec), the aspect ratio, and position angle (degrees). It should be understood that the quality of the result is very sensitive to the starting parameters provided by the user. If this first guess is not sufficiently close to the global \chi^2 minimum, the algorithm will happily converge to an incorrect local minimum. In fact, the \chi^2 surface, as a function of the component's relative direction parameters, has a shape very much like the inverse of the absolute value of the dirty image of the field. Any peak in this image (positive or negative) corresponds to a local \chi^2 minimum that could conceivably capture the fit. It is the user's responsibility to ensure that the correct minimum does the capturing. Eventually, in the stand-alone model fitting application, there will be interactive means of specifying the starting parameters, e.g., by selecting locations on a view of the dirty image. An automatic image peak-selection algorithm will also be provided. For now, the calibrater implementation relies on the likelihood that the source is very near the phase center (within a beamwidth) and/or the user's savvy in specifying the starting parameters. This fairly serious constraint will soon be relieved somewhat by enabling a rudimentary form of uv-plane weighting to increase the likelihood that the starting guess is on a slope in the correct \chi^2 valley.

Improvements in the works for visibility model fitting include:

  • User-specifiable uv-plane weighting
  • Additional component shapes, including elliptical disks, rings,
and optically thin spheroids.
  • Optional calibration pre-application
  • Multiple components. The handling of more than one component
depends mostly on efficient means of managing the list itself (not easy in command line options), which are currently under development.
  • Combined component and calibration fitting.

Example (See Fig \ref{fig:modelfit}):

ms.open('ngc5921.ms',           # load data into ms tool
         false)                 # enable write access - needed for split
#
# Note: It's best to channel average the data before running a modelfit
#

ms.split('1445_avg.ms',         # split to new file containing just the gain cal
        fieldids=1,             # 2nd source = 1445
        nchan=1,                # get a single channel that is the
        start=5,                # average of 55 channels starting
        step=55,                # at channel 6
        whichcol='CORRECTED_DATA') # split the corrected data

cb.open('1445_avg.ms')          # load the averaged data into cb (calibrater tool)
cb.modelfit(niter=5,            # Do 5 iterations
            compshape='P',      # P=Point source, G=Gaussian, D=Disk
            par=[2.0,.1,.1],    # Source parameters; for a point source:
            file='gcal.cl')     # [flux, long offset, lat offset]
                                # Initial guess is that it's close to the phase center
                                # and has a flux of 2.0 (a priori we know it's 2.47)
#Output looks like:
#CASA [18]: cb.modelfit(niter=5,compshape='P',par=[2.,0.1,0.1])
# iter=0   chi2=873028:  I=2,  dir=[0.1, 0.1] arcsec
# iter=1   chi2=24408.8:  I=2.47721,  dir=[-0.0313003, -0.0479691] arcsec
# iter=2   chi2=24330.6:  I=2.47772,  dir=[-0.00611969, -0.0196879] arcsec
# iter=3   chi2=24330.6:  I=2.47772,  dir=[-0.00612216, -0.0196719] arcsec
# iter=4   chi2=24330.6:  I=2.47772,  dir=[-0.00612217, -0.0196719] arcsec
#  Out[18]: [2.477723671108333, -0.0061221697335041726, -0.019671900712410345]

#Looks reasonable - got the right flux around the phase center
#chi2 went down: expect chi2 = 2*number of visibilities/number of degrees of freedom
# degrees of freedom = 3 for point source (flux and long,lat offsets)

cb.close()                       # unload data from calibrater
# Now use component list to generate model data
im.open('1445_avg.ms')           # load data into imager tool
im.ft(complist='gcal.cl')        # Fourier transform the component list -
                                 # this writes it into the MODEL_DATA column
                                 # of the MS
im.close()                       # unload data from imager
#
mp.open('1445_avg.ms')           # load data into msplot tool
mp.setdata(fieldIndex=0)         # Select 1445+099 (the only field in the split MS)
mp.uvdist(column='corrected')    # plot corrected data
mp.plotoptions(overplot=true,    # specify overplot
 plotsymbol='bo')                # specify blue circles for model data
mp.uvdist(column='model')        # plot model data

[Figure fig:modelfit: Use of msplot tool (mp) to display corrected data (red points) and uv model fit data (blue circles).]

Prepare uv data for imaging

Examine calibrated source data

Once the source data is calibrated, you should examine the $uv$ data and flag anything that looks bad. If you find source data that has not been flanked by calibration scans, delete it (it will not be calibrated). For example:

mp.open('data.ms')              # Load data into msplot

See Chapter \ref{chapter:edit} for descriptions of how to use msplot to view the data in xy-plot format.

uv Continuum Subtraction

At this point, consider whether you are likely to need continuum subtraction. If there is significant continuum emission present in what is intended as a spectral line observation, continuum subtraction may be desirable. You can estimate and subtract continuum emission in the $uv$-plane prior to imaging or wait and subtract an estimate of it in the image-plane. Note that neither method is ideal, and the choice depends primarily upon the distribution and strength of the continuum emission. Subtraction in the uv-plane is desirable if continuum emission dominates the source, since deconvolution of the line emission will be more robust if not subject to errors in deconvolution of the brighter continuum. There is also a performance benefit since the continuum is probably the same in each channel of the observation, and it is desirable to avoid duplication of effort. However, the main drawback of subtraction in the uv-plane is that it is only strictly correct for the phase center, since without the Fourier transform, the visibilities only describe the phase center. Thus, uv-plane continuum subtraction will be increasingly poor for emission distributed further from the phase center. If the continuum emission is relatively weak, it is usually adequate to subtract it in the image plane; this is described in the Image Analysis section of this cookbook. Here, we describe how to do continuum subtraction in the uv-plane.

uv plane continuum subtraction is performed by the continuumsub function in the ms tool. First, determine which channels in your data cube do not have line emission, perhaps by forming a preliminary image as described in the next chapter. This image will also help you decide whether or not you need to come back and do uv-plane continuum subtraction at all.

For each baseline, and over the timescale specified in solint, continuumsub will provide a simple linear fit to the real and imaginary parts of the (continuum-only) channels specified in chans, and subtract this model from all channels. Choose the timescale to be shorter than the timescale for changes in the visibility function of the continuum, but be careful not to make it so short that the SNR of the estimated continuum suffers substantially. For example:

ms.open('data.ms',                       # Load data into ms tool
        nomodify=False);                 # Make sure it is writable!
ms.continuumsub(fldid=2,                 # Choose the source data (field 3),
           spwid=0,                      #  spectral window 1,
           chans=range(4,7)+range(50,60),#  line free channels 4-6 & 50-59,
           solint=0.0,                   #  estimate/subtract per scan
           mode='subtract');             #  & subtract cont. from line data.
                                         # Line-only data will be written into
                                         # the  CORRECTED_DATA column.
ms.close();                              # Finish the tool

Running continuumsub with mode='subtract' will replace the CORRECTED_DATA column in the MS with continuum-subtracted line data and the MODEL_DATA column with the continuum model. You can use mode='replace' to replace the CORRECTED_DATA column with the continuum model; however, it is probably better to use mode='subtract' and then use split to select the MODEL_DATA and form a dataset appropriate for forming an image of the estimated continuum. Note that a continuum image formed from this model will only be strictly correct near the phase center, for the reasons described above.

Note that it is currently the case that continuumsub will overwrite the CORRECTED_DATA column. Therefore, it is desirable to first split the relevant corrected data into a new Measurement Set. Also, continuumsub requires that the MODEL_DATA and CORRECTED_DATA columns exist, and split does not generate them. These columns will be generated by the imager tool when forming the exploratory image described above. If you run continuumsub on the original dataset, you will have to re-apply the calibration as described in the previous chapter. Finally, since continuumsub is currently implemented as a glish method, its performance is quite slow. All of these issues will be resolved in the near future.

So, the recommended procedure is as follows:

  • Finish calibration as described in the previous chapter
  • Use split to form a separate dataset
  • Use the imager tool on the split result to form an
exploratory image that is useful for determining the line-free channels
  • Use continuumsub with mode='subtract' to subtract
the continuum from the CORRECTED_DATA in the split dataset (and write the continuum model into the MODEL_DATA column)
  • Image the line-only emission with the imager tool
  • If an image of the estimated continuum is desired, run split
again (on the *continuumsub*'d dataset) and select the MODEL_DATA; then run imager to image it (see the sketch below)
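
A minimal sketch of this last step, assuming the continuum-subtracted dataset is called source.split.ms and that whichcol accepts 'MODEL_DATA' (the output name is illustrative):

ms.open('source.split.ms',               # Load continuum-subtracted data
        nomodify=False)                  #  into the ms tool
ms.split('source.cont.ms',               # Write the continuum estimate
           whichcol='MODEL_DATA')        #  (MODEL_DATA) to a new MS
ms.close()                               # Finish the tool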

Optional: Split out calibrated uv data

To split out calibrated uv data into a separate MS, use the split function in the ms tool.

ms.open('data.ms',                       # Load data into ms tool
        nomodify=False)                  # Make sure it is writable!
ms.split('source.split.ms',              # Select output file name,
           fieldids=2,                   #  write out only field 3 data,
           spwids=0,                     #  spectral window 1,
           nchan=63,                     #  write out 63 channels,
           start=0,                      #  starting with channel 1,
           step=1,                       #  no channel averaging,
           whichcol='CORRECTED_DATA')    #  write out CORRECTED_DATA column.
ms.close()                               # Finish the tool

Imaging Data

Overview

This chapter describes how to make and deconvolve images starting from a calibrated MeasurementSet. There is also some discussion of how to do self-calibration.

The tool for imaging and deconvolution is imager (im). It has been designed to do gridding/degridding of visibility data, Fourier transforms, deconvolution using various methods, etc. A complete overview of the capabilities of Imager can be found in the \ahlink{Reference Manual}{imager}; note: at this time the Reference Manual still refers to the Glish syntax.

There are two sorts of tool functions in im: passive and active. Passive functions like setimage and setdata set the state of the im tool. Active functions such as makeimage and clean then act on the data according to the previously set state.

To use an im tool, one must first construct it from a MeasurementSet and then configure it (set its ``state''). When you are finished, you destroy the tool (which of course does not affect the actual MeasurementSet on disk). To construct an imager tool for a measurement set located at /home/data/data.ms:

im.open('/home/data/data.ms')            # Load data into imager tool
im.summary()                             # Summarize state of tool
im.close()                               # End the tool

The summary function is always useful to summarize the current state of the imager tool.

Setting Imaging Parameters

Data Selection

To make an image from the visibilities one needs to select the part of the data from the MeasurementSet that is going to be used (e.g which pointing or channels or spectral window that is going to be imaged). This is done with the \ahlink{imager.setdata}{imager:imager.setdata.function} function. If you don't run this function, you get all of the data.

Here are some examples (refer to the \href{http://aips2.nrao.edu/stable/docs/user/SynthesisRef/node151.html}{Reference Manual} for more details).

im.setdata(mode='channel', nchan=15,      # Select 15 channels, third
             spwid=2, fieldid=3)          #  spectral window  and fourth field
                                          # REMEMBER: everything is 0-based!

im.setdata(mode='channel', nchan=256,     # Select every second channel
             step=2, spwid=2, fieldid=3)  #  to make 256 total channels

im.setdata(mode='velocity', nchan=63,     # Select data based on
             mstart=quantity(20.,'km/s'), #  radio velocity
             mstep=quantity(-100.,'m/s'),
             spwid=2,
             fieldid=3)
im.setdata(mode='opticalvelocity',        # Select data based on
             nchan=63,
             mstart=quantity(20.,'km/s'), #  optical velocity
             mstep=quantity(-100.,'m/s')) #

# Consider a more complex data selection with different numbers of channels and
# different spectral windows:

im.setdata(mode='channel', spwid=[1,2],   # Select spectral windows 2 & 3,
             start=[3,4],                 #  spwid 1: start at 4, select 10 chans
             nchan=[10,12], step=[1,1])   #  spwid 2: start at 5, select 12 chans
                                          #  step by 1 channel for each spwid

Other functions can also further reduce the data selected. For example, \ahlink{imager.filter}{imager:imager.filter.function} and \ahlink{imager.uvrange}{imager:imager.uvrange.function}.

Setting Image Parameters

To set the actual imaging parameters use the \ahlink{imager.setimage}{imager:imager.setimage.function} function. This is a {\bf required} function in {\tt Imager} (in fact, the only one presently). This function is passive; it just sets state so that when you tell {\tt Imager} to do something (like make an image), it knows what to do.

This function controls things like image size, cell size, spectral and polarization selection, sampling, phasecenter, and number of facets (for wide-field imaging). Images constructed afterwards (using {\em e.g.} the \ahlink{imager.clean}{imager:imager.clean.function} function) will have these parameters.

One important point to note is that the image size does NOT have to be a power of two. Just choose some reasonably composite even number close to the number that you want. {\bf Do not} use a prime number since the Fourier Transform will then be very slow.
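
For instance, a hypothetical helper (not part of imager) that nudges a requested size up to a nearby even number with only small prime factors:

# Hypothetical helper: pick a nearby even image size whose largest
# prime factor is small, so the FFT stays fast.
def good_imsize(n):
    def largest_prime_factor(m):
        p, f = 2, 1
        while m > 1:
            while m % p == 0:
                m //= p
                f = p
            p += 1
        return f
    while n % 2 or largest_prime_factor(n) > 7:
        n += 1
    return n

good_imsize(601)                         # returns 630 = 2*3*3*5*7 (601 is prime)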

Another important point is that the data selection between {\tt setdata} and {\tt setimage} should be aligned, that is, there must be some overlap (e.g., don't select fieldid=1 in {\tt setdata} and fieldid=2 in {\tt setimage}). If you aren't doing any averaging, then the data selection fields ({\tt fieldid, mode, nchan, start, spwid, etc}) should be the same in both function calls; see Section \ref{chansel} for more details.

As an example, to set the image parameters for a simple image cube:

im.setimage(nx=600, ny=600,                # Define image to be 600x600 pixels
     cellx=quantity(0.5,'arcsec'),
     celly=quantity(0.5,'arcsec'),         #  with 0.5'' pixels for
     fieldid=2, mode='channel',            #  field 3 and image 20 channels
     nchan=20, start=10)                   #  starting at channel 11.

The phase center is that of the specified field center. In addition, each selected channel will be imaged as one channel of the output image.

Channel Selection and Combination

Functions \ahlink{imager.setdata}{imager:imager.setdata.function} and \ahlink{imager.setimage}{imager:imager.setimage.function} both have arguments {\tt nchan, start} and {\tt step}. {\tt setdata} is used to select channels. {\tt setimage} is used to specify how many {\bf output} channels the image will have, and how the input channels should be combined (averaging, gridding).

You should call {\tt setdata} first. After you have called this function, subsequent active {\tt Imager} function calls only have available to them the data that you selected.

Note however, that when you call {\tt setimage}, the specification of the channels is still in an absolute sense. Thus, if you set {\tt nchan=10, start=20} in {\tt setdata}, and you wish to image all of these channels (let us say one image plane per channel), you must {\bf also} set {\tt nchan=10, start=20} in {\tt setimage}.

You might think that you have already selected the 20 channels you want with {\tt setdata} and therefore in function {\tt setimage} setting {\tt start=1} would select the first of those 20 channels. This is {\bf not} the way it works. If you asked for, say, channel 1 with {\tt setimage} when you did {\bf not} select channel 1 with function {\tt setdata}, that channel would be imaged but empty in the output image.

You use the channel selection parameters in {\tt setimage} to specify how the selected channels will be combined and how many output channels the image will have. Basically, there are two sorts of ways that you might like to use the channels that you have selected.

Firstly, in a multi-frequency synthesis image, all of the selected channels are gridded onto the $uv$ plane to reduce bandwidth smearing (which would be incurred if you averaged the channels and then gridded). In this case, the {\tt step} argument is not generally relevant; leave it at 1 when {\tt mode='mfs'} is used. For example:

im.setdata(mode='channel',      # Select 32 channels and start at channel 11
     nchan=32, start=10, step=1)
im.setimage(mode='mfs',         # In mfs mode, create the average of
     nchan=1, start=0, step=1)    #  the 32 channels selected above

Or, if you want a straight average:

im.setdata(mode='channel',      # select 32 channels starting at channel 11
     nchan=32, start=10, step=1)
im.setimage(mode='channel',     # Select 1 channel - this will be the
     nchan=1, start=10, step=32)  #  average of the 32 channels selected above

Secondly, when you make a spectral-line cube, you may wish to select/combine channels in a variety of ways according to the science you want to do. Here are some examples.

im.setdata(mode='channel',       # Select 200 consecutive channels starting
     nchan=200, start=10, step=1)  #  with channel 11
im.setimage(mode='channel',      # Form an image plane for each
     nchan=200, start=10, step=1)  #  selected channel

im.setdata(mode='channel',       # Select 200 channels starting with channel
     nchan=200, start=10, step=5)  #  11, pick every fifth channel
im.setimage(mode='channel',      # Form an image plane for each
     nchan=200, start=10, step=5)  #  selected channel

im.setdata(mode='channel',       # Select 200 channels starting with channel
     nchan=200, start=10, step=5)  #  11, pick every fifth channel
im.setimage(mode='channel',      # Each channel of the image is formed by
     nchan=100, start=10, step=10) #  averaging 2 successively selected channels

In the above example, channels 11, 16, 21, 26, 31, etc. are selected. During imaging, channels 11 and 16 will be averaged to form output image channel 0. Channels 21 and 26 are averaged to form output channel 1, and so on.

Of course you could also use {\tt mode='mfs'} when combining groups of channels if you want an output image of more than one channel. In this case the combination is done by gridding rather than averaging.

Now consider an example in which one wants to make a cube image from multiple spectral windows:

im.setdata(mode='channel',               # Select 600 data channels from
     spwid=[0,1,2], nchan=[200,200,200],   #  3 spectral windows(200 from each)
     start=[0,0,0], step=[1,1,1],          #  start at channel 1, step by 1,
     fieldid=1)                            #  for field 2
im.setimage(mode='channel',              # Create 150 channels, average
     spwid=[0,1,2], nchan=150,             #  4 channels for each image
     start=0, step=4,                      #  plane.
     fieldid=1)                            #

In the above example we select 600 data channels from 3 spectral windows (200 from each spectral window). Then in the {\tt setimage} step we have {\tt imager} combine 4 channels to make each image channel. Note that the spectral windows should have some overlapping channels for this procedure. {\tt imager} will figure out what the overlap is and create a contiguous image cube from all 3 spectral windows.

Now consider an example in which all data in spectral windows 1 and 2 are selected. Then define an image in terms of velocity values:

im.setdata(fieldid=1, spwid=[0,1])       # Select field 2 and sp windows 1 & 2
im.setimage(nx=800, ny=800,              # Image will have 800x800 pixels
     cellx=quantity(0.5,'arcsec'),
     celly=quantity(0.5,'arcsec'),       # 0.5'' on a side, create 30 channels,
     mode='velocity', nchan=30,          #  start at -10km/s and step by
     mstart=quantity(-10.,'km/s'),
     mstep=quantity(1.8,'km/s'),         #  1.8km/s.  Do this for field 2
     spwid=[0,1], fieldid=1)             #  and use both spectral windows

Examples for mosaics are given in Section \ref{mosaic}.

Weighting

The above steps show how to set up the {\tt Imager} tool as desired. In addition, before imaging one may wish to set some values in the MeasurementSet itself. This is necessary for visibility weighting. The visibility imaging weights are computed prior to the actual imaging and stored in a column of the MeasurementSet called ``IMAGING\_WEIGHT''.

The first time an {\tt Imager} tool is attached to a MeasurementSet it initializes the imaging weights to the natural weighting scheme. Thereafter, whatever weighting scheme you last used is the one you will get if you don't explicitly run one of the weighting functions.

The values in the IMAGING\_WEIGHT column are set, changed, and examined by the following functions.

  • im.weight -- sets the column using one of natural, uniform, or Briggs (robust) weighting. For the latter two methods, one can specify the field of view over which the minimization of the sidelobes is done (thus achieving what is often called super-uniform weighting). Below are some examples of how to set imaging weights:
im.weight(type='natural')                # Natural weighting
im.weight(type='uniform')                # Uniform over entire field of view
im.weight(type='uniform', npixels=300)   # Uniform over specified size
im.weight(type='briggs', rmode='norm',   # An example of Briggs
     robust=0.5)                           #  weighting

  • im.filter -- applies an optimal filter for a given shape (often called 'tapering'). In the following example we apply a Gaussian filter:

im.filter(type='gaussian',                 # Apply a Gaussian taper
     bmaj=quantity(4.0,'arcsec'),
     bmin=quantity(2.5,'arcsec'),          #  that is 4x2.5 arcsec in size
     bpa=quantity(60.,'deg'))              #  at a position angle of 60deg

  • im.plotweights -- plots the imaging weights either as a function of uv distance or on a gridded plane.

  • im.sensitivity -- calculates and returns the sensitivity of the resulting weighting both absolutely and relative to natural weighting.

  • im.fitpsf -- calculates the dirty point spread function and returns the best fitting Gaussian.

WARNING: All the weighting schemes modify the MeasurementSet, so they persist even after the Imager tool is destroyed and may no longer be suitable for subsequent imaging. You can of course reset the weighting scheme with the im.weight function.
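For example, to return to the default scheme before a new imaging run:

im.weight(type='natural')   # reset the IMAGING_WEIGHT column to natural weighting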

Creating and Deconvolving Images

Creating Images

  1. It may be helpful to use the im.advise function to help determine the cell size, as well as the number of pixels for a given field of view.

Imagine we want to make an image over a field of view of 10~arcmin. The im.advise function will return the maximum pixel size required and number of pixels. If the number of facets required is larger than 1 then one needs a wide-field imaging algorithm as described in the \ahlink{wide-field}{GRwidefield} chapter of Getting Results. The recommendations should always be considered in the context of your imaging goals before being used. For example, to use the advise function on a split MeasurementSet that contains only calibrated source data:

im.open('source.ms')               # Create the imager tool
im.advise(fieldofview=quantity(10.,'arcmin'))  # Provide advice if FOV=10'

 # Logger will report something like:
   Maximum uv distance = 31068.1 wavelengths
   Recommended cell size < 3.31956 arcsec
   Recommended number of pixels = 192
   Dispersion in uv, w distance = 16011.2, 6277.93 wavelengths
   Best fitting plane is w = -0.0543279 * u + -0.418078 * v
   Dispersion in fitted w = 4882.22 wavelengths
   Wide field cleaning is not necessary

  2. It is often useful to make a dirty image of the field before deconvolution. Use the
im.makeimage function to make a dirty image and the point spread function:

im.makeimage(type='corrected', # Make a dirty image and store
     image='dirty.image')      #  it in the file called 'dirty.image'

im.makeimage(type='psf',       # Make a PSF image and store it
     image='dirty.beam')        #  in the file on disk called 'dirty.beam'

Deconvolution

At this point, you are ready to deconvolve the point spread function (usually called the dirty beam) from the dirty image. The goal of this step is to correct the dirty image for the sidelobes in the point spread function which arise from the limited Fourier plane coverage in your observation.

The available deconvolution algorithms are:

  • CLEAN; H\"ogbom, Clark and multi-scale variants of the CLEAN algorithm are available using \ahlink{im.clean}{imager:imager.clean.function}.

If you use a multi-scale algorithm, you should also set up the scale sizes via the im.setscales function.

CLEAN hints

For data using a PSF with high sidelobes, data that hasn't been properly calibrated, or in mosaics in which the PSF used is very different between different fields, the clean residuals may begin to diverge (rather than converge to zero). In these cases, there are a number of controls to help rein in the cleaning process:

  • Go back to the visibilities often and regenerate the residuals from the residual visibilities (i.e., force frequent major cycles).
    • If using one of the multi-field algorithms (mf*), the setmfcontrol parameters provide additional controls on the process.
      • cyclefactor: The threshold in a major cycle by default is 1.5 times the product of the peak residual and the maximum outer sidelobe. This can be increased to raise the threshold.
      • cyclespeedup: The number of iterations after which the threshold in a major cycle is doubled.
    • Manually set niter to a small value and restart clean multiple times. im subtracts the model visibilities every time it is started with a model image. This way, one can force a major cycle every niter iterations.
  • Use a mask around the region where one believes there is real emission. In confused
regions or with strong sources, use interactive masking (\ref{regionmask}) to adjust the mask by looking at the residual after some iterations of clean.
  • Use a very low gain.
  • Re-calibrate, edit the data.

Example: Force major cycles:

im.open("orion.ms" )                       # load data into imager (im) tool
im.setdata(mode="none" ,                   # set all channels
        spwid=[0, 1] ,                     # set spectral windows 0 and 1
        fieldid=6)                         # set field 7
im.setimage(nx=500, ny=500,                # set 500 x 500
        cellx=quantity(2.0,'arcsec'),      # set cells to be 2"x2"
        celly=quantity(2.0,'arcsec'),
        stokes="I" ,                       # set Stokes I
        spwid=[0,1],                       # image spectral windows 1 and 2
        fieldid=6)                         # image field 7
im.weight(type="briggs",                   # Briggs weighting parameters
        robust=0,
        fieldofview=quantity(10.,'arcsec'))
im.make('field_7c')                        # make a blank image to use as starting model
im.setscales(scalemethod='uservector',     # set multiscale clean scales
        uservector=[0,3,10,50])
im.setmfcontrol(stoplargenegatives=-1)     # continue component search even if
                                           # we've found negative components

for k in range(10):                        # loop to force major cycles based
    im.clean(algorithm='mfmultiscale',     #  on the number of iterations;
        niter=2000,                        #  in an initial trial, 10000 iterations
                                           #  diverged, so we choose 2000 iterations
                                           #  (a fraction of 10000) to force a
        gain=0.1, threshold='0.005Jy',     #  major cycle every 2000 iterations
        model=['field_7c'],
        residual=['field_7c.residual'],
        image=['field_7c.restored'],
        mask=['orion.mask2'])              # use a tight mask around emission;
                                           # you can generate this with interactivemask

  • Maximum Entropy; maximum entropy and maximum emptiness methods are available. Functions that implement MEM are im.mem and dc.mem.

im.setdata(mode='channel',       # Select 200 consecutive channels starting
     nchan=200, start=10, step=1)  #  with channel 11
im.setimage(mode='channel',      # Form an image plane for each
     nchan=200, start=10, step=1)  #  selected channel
im.mem(algorithm='entropy',      # Select maximum entropy algorithm
     niter=10,                     # niter is smaller than that used in CLEAN as
                                   # it is not the components but the number of
                                   # iterations to maximize the entropy
     sigma=quantity(0.1,'Jy'),     # target noise to achieve in residual image
     targetflux=quantity(10.,'Jy'),# an estimate of the total flux in the image -
                                   # if uncertain, start with 1/10th to 1/2 of the
                                   # expected source flux
     constrainflux=F,              # set this to 'T' if you want the total flux fixed
                                   # to the target flux specified above
     prior='priorimage',           # if available, an image that gives some knowledge of
                                   # how the flux is distributed. This is not necessary.
     image=['maxen.image'],        # output images
     model=['maxen.model'],
     residual=['maxen.resid'],
     mask=['mask.im'])             # If needed you can specify a region to constrain the
                                   # flux

MEM hints

Below are hints on how to set up MEM so that it converges, and on how to decide when to stop it:

  • Run MEM with only a few iterations (niter=5-10) and no mask. The image will be
poor but you can use it to define a mask. This is critical as MEM can easily diverge if you have complicated emission and no mask. Go through this procedure, increasing niter a bit until you have a good mask.

  • Start with about half of the flux density expected in the final image. This allows
MEM adequate cycles to achieve convergence. If you start with the full known flux density MEM may not converge to the correct flux. If you start with too little flux density, MEM may obtain a divergent flux. If in doubt, start with a small value, and watch the convergence. If you get no convergence on the final map flux with 20 to 50 iterations, increase the input Target Flux until you see MEM behaving reasonably. Note: when using a single-dish image as the starting model, the target flux should be set to the value of the single-dish flux since this is a known, rigid constraint.

  • Use a sigma input that is about equal to or lower than
what you expect for the final image. This is not too critical. If it is clearly too high, MEM will stop before it has obtained a good image. You want to make sure sigma is low enough that MEM continues to deconvolve the image until it has converged to a good solution. You will probably stop MEM manually anyway based on the displayprogress plot.

  • When you have arrived at good initial inputs to MEM and you have a good mask,
the displayprogress plot will look something like Figure \ref{fig:displayprogress}. You will get discontinuities between each major cycle. Here you see 6 major cycles (1st one has only 1 iteration). Up through major cycle 5 (iterations 10-32), the peak flux found and residual sigma decrease steadily while flux increases. During cycle 5, all parameters stabilize. In cycle 6, the peak flux and sigma show signs of instability (they get a small bump). This is the first sign that MEM is digging too deep. In this example, stop MEM before the peak and sigma become unstable.

  • If you compare images: stopping the iterations after cycle 5 will get you
a nice flat residual map and a good deconvolved image with low background. If you go beyond cycle 5, you will get increased background, stripes and a worse residual image.

  • When in doubt, make lots of images and compare. Also, compare MEM with CLEAN
or multi-scale CLEAN.

The deconvolution methods do not keep a list of CLEAN components in sequence. Instead, the CLEAN components are immediately added to the model image. This allows interoperability of the CLEAN and MEM algorithms as well as the deconvolution functions in the \ahlink{Deconvolver}{deconvolver:deconvolver} tool. It also allows editing of the resulting images using the Image tool.

The deconvolution functions can return the residual and restored images, as well as the updated model image. In addition, there are \ahlink{imager.residual}{imager:imager.residual.function} and \ahlink{imager.restore}{imager:imager.restore.function} functions that also compute residual and restored images. The distinction is that in the former, the residual is that calculated by the deconvolution method at the end of iteration, whereas the latter returns the residual calculated by going back to the original visibility data (except for the cases of multi-field and wide-field algorithms).

An example of using Clark CLEAN deconvolution with the {\tt imager} tool is given below:

im.clean(algorithm='clark',     # Clean a single field with Clark clean
     niter=2000, gain=0.2,      #  using 2000 iterations, set loop gain=0.2,
     model=['model_2000'],      #  The model image is called model_2000,
     residual=['residual_2000'],#  the residual image is residual_2000,
     image=['restored_2000'])   #  the final restored image is restored_2000.
                                #  All images are written to disk.

If the model image does not pre-exist, it will be created and filled with the CLEAN delta function components found. If it does pre-exist and is non-empty (perhaps the result of a prior deconvolution), its Fourier transform is first subtracted. This is also how you would continue to CLEAN deeper from a prior run of {\tt clean} or {\tt mem}.

Note that the model image is updated when this function finishes. So if it was non-empty on input, you should save a copy first, if you wish to preserve that model.
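Since CASA images are stored as directories on disk, one simple way to preserve the input model is to copy it with standard Python before cleaning (a minimal sketch; the image name follows the example above):

import shutil
shutil.copytree('model_2000', 'model_2000.backup')  # keep a copy of the input model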

Multi-scale CLEAN

General

Delta function CLEAN algorithms, such as the Clark or Hogbom algorithms, must use a modest gain factor (traditionally 0.1) in order to image extended emission in a reasonable manner. Otherwise, emission and sidelobes get confused and error striping can result.

Multi-scale CLEAN algorithms attempt to recognize that the emission in the sky usually comes in a variety of scale sizes; it decomposes the image into Gaussians of the specified scale sizes. This means it does not spend huge amounts of time CLEANing large extended sources pretending that they are really collections of delta functions. However, note: because you are CLEANing multiple scale sizes, multi-scale CLEAN generally takes longer than CLEAN to run.

The \ahlink{im.setscales}{imager:imager.setscales.function} function allows you to choose these scales, either with an automatic method (you say how many scales you want) or by direct specification of the scale sizes in pixels.

Multi-scale CLEAN does not suffer from the same problems as delta function CLEAN and can use a larger value of the gain factor. Sometimes an oscillatory pattern will occur in the total cleaned flux with the peak residuals bouncing back and forth between positive and negative. If this occurs, try reducing the gain factor or try reducing the size of the largest scale.

As an example:

im.setdata(mode='channel', nchan=15,   # Select 15 channels,
     start=10, step=1,                 #  starting with channel 11, use
     spwid=2, fieldid=3)               #  spectral window 3 and field 4
im.setimage(nx=600, ny=600,            # Set up the imaging parameters
     cellx=quantity(0.5,'arcsec'),     #  Combine all channels into a single
     celly=quantity(0.5,'arcsec'),
     mode='mfs',                       #  plane using multi-frequency synthesis.
     spwid=2, fieldid=3)
im.setscales(scalemethod='nscales',    # Set up multi-scale components
     nscales=3)                        # Specify number of scales, allow imager
                                       #  to determine the scale sizes.
im.clean(algorithm='msclean',          # Deconvolve using multi-scale CLEAN
           model=['model'],            #  Call the model image 'model' and
           image=['restored'],         #  restored image 'restored'
           niter=100, gain=0.3)        #  clean for 100 iterations using a loop
                                       #  gain of 0.3.

Multi-scale CLEAN hints

  • Use im.setscales to set scalemethod='uservector'. Look at the general
distribution of emission and choose scales that are appropriate for the emission (e.g., start with 0 and 3 pixels; add scale sizes until the largest scale is about the size of the largest diffuse clump in the map). For example, assume you have emission that covers a 4 arcminute area, and there is significant diffuse emission that looks like it has a size scale of about 30 - 40 arcseconds. With 1 arcsecond pixels, choose deconvolution scales of [0,3,10,30] pixels. You can go smaller (e.g., [0,2,5,10]), but don't go much larger.

  • The more scale sizes you choose, the more memory it will take to deconvolve
the image. Be conservative.

  • The largest scale you choose should fit in any deconvolution masks you set
(or multi-scale CLEAN will not be able to use the largest scale).

  • If you have a mosaic in which the fields are not sampled very quickly (e.g.,
the uv coverages of the fields differ somewhat), then the PSF will also differ between fields, sometimes significantly. \casa\ finds an average PSF that is common to all fields and cleans down to a certain level in each major cycle. In almost any mosaic taken with current instrumentation, fields are observed at different times and can be observed at slightly different frequencies. Thus, the PSFs between fields will differ enough that convergence in multi-scale CLEAN can be challenging. Try: in imager.setmfcontrol, start with cyclefactor=3 and cyclespeedup=500.

  • While multi-scale CLEAN is running, watch the progress. If there is
significant large-scale emission, initial major cycles will only get flux on the largest scale. As the number of cycles increase, more scales will begin to accumulate flux. Eventually, smaller scales will go negative; don't panic. The smaller scales are compensating for errors in the large scale flux distribution. Stop the iterations when you see minimal or negative flux on all scales. We recommend using the imager.setmfcontrol method with the stoplargenegatives=-1 setting. See the example (\ref{fig:displayprogress2}). Notice as the number of iterations increases, the final threshold obtained in the residual image decreases.

Mosaicing

The Fourier transform relationship between the Fourier plane and the image plane must include the primary beam:

V(u) = \int A(x) I(x) e^{-i2 \pi u x} dx

where V(u) is the measured visibility, A(x) is the primary beam response, and I(x) is the true brightness distribution on the sky.

Hence, given the image, one can simulate the corresponding $uv$ data. However, given the data and desiring the image, we have an inverse problem to solve.

Early attempts at mosaicing treated each field independently, deconvolving and self-calibrating each, and then sewing the overlapping fields' images together via the basic mosaicing equation:

I(x) = \frac{ \sum_f I_{f}(x)  A_{f}(x) } { \sum_f A^{2}_{f}(x) }.

where the subscript f refers to each field in the mosaic. However, Cornwell (1988) demonstrated that better results can be obtained via a simultaneous deconvolution of the data from all the fields. This simultaneous deconvolution was achieved by using maximum entropy (MEM) or maximum emptiness as a solution engine to solve the inverse problem. Total power could be added as additional fields with their own primary beam. However, MEM's positivity bias, which is detrimental to low SNR imaging, led to a search for other algorithms to image multi-field data.
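As an illustration, the basic mosaicing equation can be sketched in a few lines of numpy (the array names are hypothetical; CASA performs this combination internally):

import numpy as np

def linear_mosaic(field_images, field_pbs):
    # field_images: list of 2-D arrays I_f(x); field_pbs: matching beams A_f(x)
    num = np.zeros_like(field_images[0])
    den = np.zeros_like(field_images[0])
    for img, pb in zip(field_images, field_pbs):
        num += img * pb                    # sum_f I_f(x) A_f(x)
        den += pb**2                       # sum_f A_f(x)^2
    return num / np.maximum(den, 1e-8)     # avoid division by ~zero outside the beams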

Sault et al. (1996) have implemented mosaicing algorithms which can use either CLEAN or MEM for simultaneous deconvolution.

Mosaicing Algorithm

Cornwell, Holdaway, and Uson (1994) proposed the following mosaicing algorithm: generate the mosaic of the dirty images and a single approximate point-spread function (PSF), and then proceed with any conventional single field deconvolution algorithm. For high-quality Fourier-plane coverage and similar PSFs for all fields in the mosaic, this approach is not limited by the differences between the approximate PSF and each field's actual PSF until the possible image dynamic range exceeds a few hundred to one.

\casa\ takes this approach to mosaicing a step further: perform an incremental deconvolution of the residuals with the approximate PSF, with an exact subtraction of the cumulative model brightness distribution at the end of each incremental ``major cycle'' (similar in concept to the major cycles of the Clark CLEAN).

If all of the fields are observed with many short snapshots over and over again (this is the preferred way to make a mosaic observation) then each field will have similar Fourier coverage and hence similar synthesized beams. An approximate PSF can be created which is a fairly good match to the actual PSF of each of the fields. Also, if the sky-coverage of the observed fields is Nyquist or better, then the approximate, shift-invariant PSF will be a reasonable match to the actual PSF of sources at various locations across the mosaic. The residual visibilities from each field can be transformed and mosaiced to make a single residual mosaic image. This mosaic image can be deconvolved with the deconvolution method of your choice; for example, with Clark CLEAN, Multiscale CLEAN, maximum entropy, or maximum emptiness.

The deconvolution algorithm cannot deconvolve arbitrarily deeply, because at some level the discrepancies between our approximate shift-invariant PSF and the true PSF at any location in the image will become apparent, and we will start ``cleaning'' error flux. Hence, we need to stop deconvolving when we have gotten down to the level of these PSF discrepancies. At this point, we take the part of the model brightness distribution we have just deconvolved and calculate model visibilities (using the measurement equation) and subtract them from the (corrected) data visibilities. To the extent that the primary beam and sky pointing are exact, the visibility subtraction is also exact. The residual visibilities can then be re-mosaiced, but the peak residual is at a much lower level. The process of deconvolving with the approximate, shift-invariant PSF then continues, and another increment to the model brightness distribution is formed, removed from the remaining residual visibilities, and added to the cumulative model brightness distribution. Borrowing from the Clark CLEAN's terminology, we call each cycle of incremental deconvolution and exact visibility subtraction a ``major cycle''.
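The major-cycle logic described above can be summarized in a schematic driver loop (a sketch only, not a CASA API; the deconvolve and remosaic_residual callables stand in for the minor-cycle algorithm and the exact visibility subtraction):

import numpy as np

def mosaic_major_cycles(residual, deconvolve, remosaic_residual,
                        max_sidelobe, cyclefactor=2.0,
                        final_threshold=0.0, max_cycles=20):
    # deconvolve(residual, threshold) -> incremental model image
    # remosaic_residual(model) -> residual mosaic recomputed from the
    #   residual visibilities after exact subtraction of the model
    model = np.zeros_like(residual)
    for cycle in range(max_cycles):
        peak = np.abs(residual).max()
        if peak <= final_threshold:
            break
        # deconvolve only down to the level of the PSF discrepancies
        threshold = cyclefactor * max_sidelobe * peak
        model += deconvolve(residual, threshold)
        residual = remosaic_residual(model)
    return model, residual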

How to Create a Mosaic Image

Set the Data Fields

Mosaicing can be a time consuming process, so it may be worthwhile to make a restricted version of the mosaic first. For example, you may want to image a few fields at lower resolution to reduce the number of pixels you are imaging. Eventually, you will want to image most or all of the observed fields. Use the {\tt setdata} function to restrict the data over which you will generate your image, e.g.,

im.setdata(fieldid=range(0,4))       # Select first 4 fields

Set the Image

One of the fields must be specified in im.setimage to provide the direction of the resultant image's reference pixel. For example, with a 25 pointing (5 x 5 raster) observation, field 13 could be the central field:

im.setdata(fieldid=range(0,25))      # Select all 25 pointings
im.setimage(nx=256, ny=256,          # Create a 256x256 resultant image
     cellx=quantity(3.,"arcsec"),    # Choose a cell size of 3x3 arcseconds
     celly=quantity(3.,"arcsec"),
     fieldid=12)                      # Select field 13 as the center

Setting the Voltage pattern (primary beam)

We must account for the primary beam pattern when imaging the same region but from different pointings. CASA currently has primary beam patterns stored for:

  1. ATCA
  2. GBT
  3. GMRT
  4. HATCREEK
  5. NMA
  6. NRAO12M
  7. NRAO140FT
  8. OVRO
  9. VLA
  10. WSRT
  11. OTHER

In addition, there are specific beams available for each of the following:

  1. ATCA_L1, ATCA_L2, ATCA_L3, ATCA_S, ATCA_C, ATCA_X, GBT, GMRT, HATCREEK, NRAO12M, NRAO140FT, OVRO, VLA, VLA_INVERSE, VLA_NVSS, VLA_2NULL, VLA_4, VLA_P, VLA_L, VLA_C, VLA_X, VLA_U, VLA_K, VLA_Q, WSRT, WSRT_LOW

im.setvp(dovp=T)  # dovp=do the voltage pattern correction
                    # this will default to use the voltage pattern for the
                    # telescope used and frequency observed. If no such
                    # default exists, a warning is issued.

If a default voltage pattern is not provided for your telescope, you can specify your own voltage pattern and bind it to the telescopes in a MeasurementSet by using the \href{http://aips2.nrao.edu/stable/docs/user/SynthesisRef/node228.html}{vpmanager} (voltage pattern manager). The vpmanager will produce a table describing the different telescopes' voltage patterns, and this table can be used by Imager. See the \href{http://aips2.nrao.edu/stable/docs/user/SynthesisRef/node228.html}{vpmanager} section in the User Reference Manual for details. This is generally not necessary.

Weighting

If you use ``uniform'' or ``briggs'' weighting, the weighting details will depend upon the way the data are gridded. However, if all fields are specified in function setdata, then the weights from all fields will be gridded onto a single grid for the purposes of calculating the weights. This is valid for natural weighting but will not be valid for other types of weighting. You probably want to weight the data on a field-by-field basis:

im.weight(type="uniform",mosaic=T)  # This will weight each field separately

Options

For mosaicing, there is a key option which can be used. In particular, the gridding can use the primary beam as the convolution function. This is achieved using the option ftmachine="mosaic".

im.setoptions(ftmachine='mosaic')   # do a mosaic gridding

The normal gridder uses a spheroidal function to grid the data. For mosaics, using the Fourier transform of the primary beam as the gridding function has some advantages.

After a major cycle, the primary beam correction using the mosaic gridder leaves the (uncleaned) noise floor flat while scaling the deconvolved components properly in flux. The use of the spheroidal gridding function, after correcting for the primary beam, tends to lift the noise at the edge of the beams and thus is non-optimal for images with emission extending to the edge of the mosaic.

In addition, controls are available using the setmfcontrol function to help adjust the cycle parameters (the conditions for ending the deconvolution cycles) for multi-field (and wide-field) imaging.

# Set the image plane flux scale type to "SAULT"
# Set a cutoff for the minimum primary beam level to use
# constpb is used for SAULT weighting to set the flux scale constant above
# this level.
im.setmfcontrol(scaletype='SAULT', # This selects Sault weighting which uses a
                                     # primary beam function that is flat across most
                                     # of the mosaiced image but is attenuated at
                                     # the edges of the mosaic so that the noise level
                                     # does not become excessive there
                  minpb=0.07,        # minimum primary beam level to use in each field
                  constpb=0.4)       # The noise is constant above this primary
                                     # beam level

Deconvolving

To image and deconvolve, use either imager's CLEAN or MEM functions. Only algorithms with the ``mf'' prefix will perform multi-field imaging correctly (i.e., algorithm ``clark'' will grid the data from all specified fields onto the same grid, resulting in a very confused image). CLEAN's mosaicing methods include mfclark, mfhogbom, and mfmultiscale, while MEM's mosaicing methods include mfentropy and mfemptiness. Some hints for CLEAN and MEM are provided at \ref{cleanhints} and \ref{memhint} respectively. A full example of how to create a mosaic image is given below. A detailed script is given at the end of this chapter.

im.open('data.ms')  # load data into imager
                                                # choose all fields (1-10) and all channels
im.setdata(mode='channel', nchan=62, start=0,   # restrict the data to the
     step=1,fieldid=range(0,10), spwid=[0,1])   # first two spectral windows
im.setimage(nx=640, ny=256,                     # select the output image size
     cellx=quantity(0.5,'arcsec'),              # and cell size properties
     celly=quantity(0.5,'arcsec'),stokes='I',
     mode='channel',
     nchan=18,start=2,step=5,fieldid=0,
     spwid=[0,1])                               # do 18 channels starting
                                                # at channel 3 and averaging 5
                                                # channels; use field 1 as the mosaic
                                                # field center
im.setvp(dovp=T)                                # do primary beam correction
im.weight(type="uniform",mosaic=T)              # use uniform weighting for each field
im.setoptions(padding=1.0, ftmachine='mosaic')  # do mosaic gridding
im.clean(algorithm="mfclark" , niter=1000,      # use mfclark algorithm
     gain=0.1, threshold=quantity(0.005,'Jy'),
     model=["src.all.cln.model"],               # stop cleaning at a threshold
                                                # of 0.005 Jy
     image=["srcmos"], residual=["src.all.cln.resid"])

Mosaic Details

Controlling the Major Cycles

The key to making the incremental deconvolution in \casa\ multi-field imaging successful lies in controlling just how deeply to deconvolve in the major cycles. The control parameters discussed here can be set with \href{http://aips2.nrao.edu/stable/docs/user/SynthesisRef/node191.html\#imager:imager.setmfcontrol.function}{im.setmfcontrol}. Deconvolving too deeply with the approximate PSF will waste computation time as slightly different PSF sidelobes add and subtract components. Not deconvolving deeply enough also causes problems as it necessitates more major cycles, again slowing down the computation time. The depth of the major cycles can be adjusted in two ways:

  • By increasing the cyclefactor. The major cycle cleaning
threshold is a fraction of the peak image residual. That residual fraction is determined by the cyclefactor multiplied by the peak negative sidelobe of the approximate PSF. Stopping the major cycle cleaning sooner can be accomplished by increasing the cyclefactor. Values ranging between 2 and 4 are common.

  • By decreasing the cyclespeedup. What if the
cyclefactor is set too low? The cycle threshold as calculated above can be made to drift upwards by setting the cyclespeedup. The threshold will double every cyclespeedup iterations in the major cycle until the major cycle stops (see the sketch below). If the cyclespeedup is less than or equal to 0.0, no adjustments to the calculated cycle threshold will be made.
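A minimal sketch of the resulting threshold growth (illustrative only; the actual bookkeeping is internal to imager):

def cycle_threshold(n_iter, threshold0, cyclespeedup):
    # The calculated cycle threshold doubles every cyclespeedup iterations;
    # cyclespeedup <= 0 means no adjustment is made.
    if cyclespeedup <= 0:
        return threshold0
    return threshold0 * 2.0**(n_iter / float(cyclespeedup))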

In addition to these cycle control parameters, which are applicable to mosaicing with CLEAN, Multi-Scale CLEAN, MEM, or Maximum Emptiness, there are two more control arguments set by im.setmfcontrol which are applicable only to Multi-Scale CLEAN. These are stoplargenegatives and stoppointmode, which are discussed below.

Mosaic Details with Multi-Scale CLEAN

See the Section \ref{msclean} for multi-scale basics.

Sometimes in the first few iterations of Multi-scale CLEAN in the mosaicing context, the largest scale will be dominated by large negative residuals (i.e., this is just the negative bowl, integrated over the area of the largest clean-component scale). One way to fix this is to make the largest scale smaller. Another way is to use a tighter mask which excludes finding large scale components in the bowl region. And a third, ad hoc way is to stop the major cycle when a negative component is found on the largest scale. This allows the exact subtraction to proceed, often resulting in a reduced bowl level. Stopping the major cycle upon encountering a negative component on the largest scale should only be performed on the first few cycles, as during subsequent cycles small amplitude large scale components may be required to make adjustments in the image. The number of major cycles for which this stopping rule is applied is controlled by the parameter stoplargenegatives in imager.setmfcontrol (see the sketch below). As smaller scales may require negative components to correct for errors made by ``over-cleaning'' in the larger cycles, no restriction should be placed on negative components from smaller size scales.
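For example (a minimal sketch; the value 2 is illustrative and based on the description above):

im.setmfcontrol(stoplargenegatives=2)  # stop the major cycle on a negative
                                       #  largest-scale component during the
                                       #  first 2 major cycles only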

Details with MEM

If there are bright unresolved or barely resolved sources in the field, it may be advantageous to perform a Clark Clean down to the level of the peak extended emission, or include a component list in the model, because MEM does not work well on bright point-like sources.

The maximum entropy/emptiness algorithm has been modified to fit into the incremental deconvolution/major cycle framework adopted by mosaicing in \casa. These algorithms deal with both the incremental brightness distribution, which it seeks to solve given the approximate PSF and the residual mosaic image, and the cumulative brightness distribution for the calculation of the entropy function and its gradients. When maximum entropy starts up, it typically takes the algorithm four or five iterations to ``find itself'' and get the control parameters set to achieve the proper balance between the gradients of the entropy and the $\chi^2$. Once a good balance is struck, the algorithm makes marked progress towards convergence. At the end of a major cycle, the relevant control parameters are saved and used for the next major cycle.

An example similar to the single field case but with the algorithm changed to 'mfentropy' for multiple fields:

im.setdata(mode='channel',         # Select 200 consecutive channels starting at 11
     fieldid=range(0,10),          # Select 10 mosaic fields
     nchan=200, start=10, step=1)  #
im.setimage(mode='channel',        # Form an image plane for each selected channel
     nx=300,ny=300,                # Create a 300x300 pixel image with
     cellx=quantity(2.,'arcsec'),
     celly=quantity(2.,'arcsec'),  # 2"x2" pixels
     nchan=200,start=10,step=1,    # use all channels selected in setdata
     fieldid=5)                    # Use field 6 as the mosaic center
im.mem(algorithm='mfentropy',      # Select maximum entropy algorithm for mosaic
     niter=10,                     # niter is smaller than that used in CLEAN as
                                   # it is not the components but the number of
                                   # iterations to maximize the entropy
     sigma=quantity(0.1,'Jy'),     # target noise to achieve in residual image
     targetflux=quantity(10.,'Jy'),# an estimate of the total flux in the image -
                                   # if uncertain, use 1/10 of the expected flux
     constrainflux=F,              # set this to 'T' if you want the total flux fixed
                                   # to the target flux specified above
     prior='priorimage',           # if available, an image that gives some knowledge of
                                   # how the flux is distributed. This is not necessary.
     image=['maxen.image'],        # output images
     model=['maxen.model'],
     residual=['maxen.resid'],
     mask=['mask.im'])             # If needed you can specify a region to constrain the
                                   # flux

Flux Scale Images

When correcting for the effects of the primary beam to achieve an accurate and uniform flux scale across the image (i.e. by dividing by the primary beam in the case of a single field observation), the noise and errors at the edge of the mosaic sky coverage are amplified. The noise amplification distracts from the visual beauty of the image and may complicate the image's display and interpretation.

Sault et al (1996) endorse a different image plane weighting in the mosaicing which results in a constant noise level across the image, but has a variable flux scale, e.g., for Nyquist sampled mosaics, flux scale decreases outside of the overlap region of the mosaic field - just as the single field fluxscale decreases toward the edge of the primary beam.

In CASA an image plane weighting similar to Sault's scheme has been implemented, but the noise goes to zero outside the region covered by the primary beams. The flux scale is position dependent, but it is flat over most of the mosaic sky coverage (depending on how constpb is set within the setmfcontrol function and how much overlap there is between pointing centers). In order to get a constant flux image across the mosaic, one must divide by the fluxscale image (see Section \ref{chapter:analysis} for details on manipulating images). The flux scale images can be created by setting the fluxscale argument in Imager's setmfcontrol function (in case an inverse-Sault weighting is needed to correct the image). Regions outside the multi-field primary beam pattern will have a zero value.
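For example, a minimal sketch (the flux scale image name is hypothetical):

im.setmfcontrol(scaletype='SAULT',       # Sault image-plane weighting
     constpb=0.4,                        # flux scale is constant above this level
     fluxscale=['mosaic.fluxscale'])     # write the flux scale image to disk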

Masks

Routinely in single field deconvolution, only the inner quarter of an image is deconvolved so that the sidelobes from this region can be correctly subtracted from the entire image. However, in the multi-field case, such a restriction usually does not exist. The major cycles only deconvolve down to a certain level, fixed by the sidelobe characteristics of the PSF. After that, the exact subtraction of the deconvolved flux is carried out. Typically, the exact subtraction is performed by multiplying the brightness distribution by a field's primary beam, convolving by that field's exact PSF, multiplying by the primary beam again, and subtracting from the previous cycle's residual mosaic. The two primary beam multiplications ensure that the far out effects of the PSF, which will not be correct due to the full-image deconvolution, will not affect the model brightness distribution.

If no mask is used, the major cycle deconvolution algorithms create a mask from the generalized primary beam pattern of all observed fields, with zero outside the outermost fields' primary beam main lobes. If you don't want this mask for some reason, you should supply your own mask image (as in Section \ref{masks}).

An Example Mosaic Script

The following script makes an interferometer-only mosaic image from the multi-field test MeasurementSet which is distributed with CASA:

im.open('XCAS.ms')
# Use all the data for the mosaic
im.setdata(mode="none",
             spwid=[1:2], fieldid=[1:7])

# Use the first field as the image center
im.setimage(nx=256, ny=256, cellx="3arcsec",
              celly="3arcsec", stokes="I", doshift=F,
              mode="mfs", spwid=[0,1], fieldid=0, facets=1)

# Weight each field individually
im.weight(type="uniform",mosaic=T)

# Use all the data for the mosaic
im.setdata(mode="none", nchan=1, start=1, step=1,
             spwid=[1:2], fieldid=[1:7])

# Using the mosaic gridder that uses the FT of the Primary Beam as the gridding function
im.setoptions(ftmachine='mosaic')

# Use the voltage pattern (primary beam) - it will lookup the telescope and use the
# appropriate PB for the observing frequency
im.setvp(dovp=T, usedefaultvp=T, dosquint=F)

# Make a MEM image  (using  mask image) - set names for results
maskname = ['mem.mask']               # maskname will be mem.mask
modname = ['mem.model']              # modname will be mem.model
imgname = ['mem.image']              # imgname will be mem.image
resname = ['mem.resid']              # resname will be mem.resid
scalename = ['mem.scale']            # scalename will be mem.scale

# set some multi-field control parameters.
im.setmfcontrol(cyclefactor=3.0,  # set to 3.0* max sidelobe * max residual
     cyclespeedup=20.0,             # threshold doubles in this many iterations
     fluxscale=scalename)           # name of fluxscale image
im.mem(algorithm='mfentropy',     # use multi-field Maximum Entropy algorithm
     niter=80, sigma='0.001Jy',     # number of iterations and target sigma
     targetflux='10.0Jy',           # target flux for final image
     constrainflux=F,               # don't constrain image to match target flux
     displayprogress=T,  fixed=F,   # display progress, don't keep model fixed
     complist='', prior='',         # set these to blank
     mask=maskname,                 # setup output file names
     model=modname,
     image=imgname,
     residual=resname)

# Make a multi-scale CLEAN image  for comparison
modname = ['msclean.model']          # modname will be msclean.model
imgname = ['msclean.image']          # imgname will be msclean.image
resname = ['msclean.resid']          # resname will be msclean.resid
scalename = ['msclean.scale']        # scalename will be msclean.scale

im.setscales(scalemethod='uservector',  # method by which scales are set
     uservector=[0.0, 3.0, 10.0, 20.0])   # vector of scale sizes in pixels

### Note the use of the parameter stoplargenegatives,
### which is set to -1 to let multi-scale CLEAN continue despite hitting
### a negative component on the largest scale

im.setmfcontrol(cyclefactor=3.0,  # set to 3.0 * max sidelobe * max residual
     stoplargenegatives=-1,         # continue despite negatives
     fluxscale=scalename)           # name of fluxscale image
im.clean(algorithm='mfmultiscale',# use multi-field multi-scale algorithm
     niter=1000, gain=0.6,          # number of iterations and loop gain
     threshold='0Jy',               # stop cleaning at this threshold
     displayprogress=T, fixed=F,    # display progress, don't keep model fixed
     complist='',                   # name of component list
     mask=maskname,                 # set names for produced files
     model=modname,
     image=imgname,
     residual=resname)

# unload data from tool
im.close()

Single Dish Imaging

Under Construction

Imaging combined single dish/synthesis data

Methods

Below is a list of current methods used to combine single dish and synthesis data (Stanimirovic 2002).

  • Feathering in the image domain (IMMERGE in MIRIAD, IMERG in AIPS,
im.feather in CASA). Images are feathered together in the Fourier plane. Intermediate size scales are down-weighted to give interferometer resolution while preserving single-dish total flux density.

_Fastest and least computationally intensive way to add short spacings. Also the most robust relative to the other three methods, which all require a non-linear deconvolution at the end._

  • Linear combination - the single-dish image and the interferometer dirty image
are linearly combined in the image plane, a composite beam is constructed, and the resulting composite image is deconvolved with the composite beam. Existing packages don't directly support this method but it can be done via image manipulation and subsequent deconvolution.

_Straight linear combination is not optimal but this method has the advantage that it does not require either Fourier transformation of the single-dish data which can suffer severely from edge effects, nor deconvolution of the single-dish data which is especially uncertain and leads to amplification errors._

  • Model Image - use the single-dish image as the starting model and subtract
this model from the uv data. At the shortest uv spacings, the structure sampled by the single dish will be preserved in the model. In the uv overlap region, the source structure can be modified from the single-dish flux density distribution during deconvolution using the interferometer uv data. In the interferometer-only region of the uv-plane, the deconvolution will proceed as usual with no single-dish constraints.

_Works effectively with good uv overlap (i.e., a large single dish)._

  • Full Joint Deconvolution - single-dish and interferometer data are gridded
together. All regions of the uv-plane are jointly deconvolved.

_Theoretically the best way to do short spacing correction. This method depends heavily on a good estimate of the interferometer and single-dish noise variances._

Note: These techniques rely on images (data and masks) having the same number of axes. If errors are encountered with complaints about image shapes, use the image.summary function to look at the axes of the input images.

Examples are provided for adding an axis to an image (\ref{reorder}) and also for regridding an existing image to another coordinate system (\ref{exregrid}).

Feathering

In the image feathering technique, first the calibrated single-dish image is corrected for the beam response and the calibrated interferometric data are Fourier transformed and deconvolved to create an interferometer-only image.

The feathering technique does the following:

  1. The single-dish and interferometer images are Fourier transformed.
  2. The beam from the single-dish image is Fourier transformed (FTSDB(u,v)).
  3. The Fourier transform of the interferometer image is multiplied by (1-FTSDB(u,v)).
This basically down-weights the shorter spacing data from the interferometer image.
  4. The Fourier transform of the single-dish image is scaled by the volume ratio of the
interferometer restoring beam to the single-dish beam.
  5. The results from steps 3 and 4 are added and Fourier transformed back to the image plane (see the sketch below).
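The five steps above can be sketched schematically in numpy (illustration only; im.feather performs these operations, together with the proper coordinate and beam handling, internally; ft_sd_beam is the pre-computed FTSDB(u,v) from step 2):

import numpy as np

def feather(int_image, sd_image, ft_sd_beam, beam_ratio):
    ft_int = np.fft.fft2(int_image)         # step 1: FT both images
    ft_sd = np.fft.fft2(sd_image)
    combined = ft_int * (1.0 - ft_sd_beam)  # step 3: down-weight short spacings
    combined += ft_sd * beam_ratio          # step 4: scale SD data by beam ratio
    return np.real(np.fft.ifft2(combined))  # step 5: back to the image plane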

The term feathering derives from the tapering or down-weighting of the data in this technique; the overlapping, shorter spacing data from the deconvolved interferometer image is weighted down compared to the single dish image while the overlapping, longer spacing data from the single-dish are weighted down compared to the interferometer image.

The tapering uses the transform of the low resolution point spread function. This can be specified as an input image or the appropriate telescope beam for the single-dish. The point spread function for a single dish image may also be calculated using makeimage.

Advice: If you are feathering large images, make sure that the number of pixels along the X and Y axes is a composite number, and definitely not a prime number. In general, FFTs work much faster on even and composite numbers. You may use the subimage function of the image tool to trim the number of pixels to something desirable.

Note: This method is analogous to the AIPS IMERG task and the MIRIAD immerge task with option 'feather'.

Feathering Example

im.setvp(dovp=T)                    # Do primary beam correction; it will use the default
                                    # primary beam for the single-dish telescope
                                    # and frequency
im.feather(image='feathered.image', # Resulting, combined image
    highres='synthesis.image',      # Synthesis image
    lowres='singledish.image')      # Single-dish image
                                    # If the beam response of the single-dish telescope
                                    # is not stored in CASA, one can optionally
                                    # specify the 'lowpsf' image if available.

The feather function works on both single plane images and on multi-plane images (data cubes). The synthesis and single-dish images must have the same number of axes, however; for example, default images in \casa\ have axes of: 1) Direction (e.g., RA), 2) Direction (e.g., Dec), 3) Stokes (e.g., I), and 4) Spectral (e.g., frequency or velocity). These axes can be manipulated (added or deleted) as necessary using the {\bf image} tool; this tool has not yet been ported to CASA.

uv-plane Combination

There are two principal methods for the uv-plane combination of single-dish and synthesis data.

  • Use the single-dish image as a starting model for the deconvolution (CLEAN or MEM).
  • Full joint deconvolution of the datasets.

Use single-dish image as starting model

In this first technique, a non-linear combination of the synthesis and single-dish data is performed in the uv plane. In the \casa\ implementation, the single-dish image is used to make a model (clean component) image which is then used as the starting point for the deconvolving algorithm to interpolate the missing short baselines.

During the deconvolution step in the CLEAN process, the interferometric data is extrapolated to shorter spacings. When starting CLEAN with a model from a single-dish image, the single-dish image is Fourier transformed and data in the area of uv-overlap is compared in a major cycle. The uv spacings provided by the single-dish image thus provides information (constraints) which help to prevent CLEAN from over-extrapolating in this region of the uv-plane.

The imager function {\tt setsdoptions} can be used to set a factor by which to scale the single-dish image, if necessary (typically to convert from K to Jy/beam).

The sdpsf parameter (optional) should be used if an external PSF image of the single dish is needed to calculate the beam parameters of the primary beam of the dish. This is usually needed if the single-dish image is from a non-standard telescope or the beam is not in the CASA system; see the example in feather (\ref{feather}) for generating a beam in this fashion.

A mask image can be provided to the clean algorithm. This mask image helps guide the clean (and mem) algorithms and should be chosen carefully based on the 'known' emission. A mask image can be created from an existing image via {\tt interactivemask} (\ref{regionmask}).

Information on basic imaging is found in \ref{chapter:image}. Information on mosaic imaging is found in \ref{mosaic}.

There are two ways to get a model from a single-dish image:

  • Directly convert the single-dish image from Jy/beam to Jy/pixel.
  • Deconvolve the single-dish image with MEM or multiscale-CLEAN (using large scales only) to obtain a model image (Jy/pixel).

Example

Example: Directly convert the single-dish image to a model and use this as a starting model for the deconvolution.

im.open(filename='n4826_both.ms')    # Create imager tool using synthesis data
im.setdata(fieldid=range(0,7),       # Select mosaic fields 1-7
           spwid=[0,1,2])            # Select spectral windows 1-3
im.setimage(nx=256, ny=256,          # Resultant image will be 256x256
   cellx=quantity(1.,'arcsec'),
   celly=quantity(1.,'arcsec'),      #   with 1" pixels
   stokes='I',                       # Resultant image will be Stokes I
   mode='channel',                   # Define image planes by channel
   nchan=30,                         #   30 planes
   start=46,                         #   Starting with channel 47
   step=4,                           #   Averaging 4 channels
   fieldid=0,                        #   Use fieldid=1 as the phase center reference
   spwid=[0,1,2])                    #   Use spectral windows 1-3
im.setmfcontrol(scaletype="NONE",    # Set some multi-field processing parameters
                                     #   NONE indicates no scaling
   minpb=0.1)                        #   Level at which the primary beam will be applied
im.setvp(dovp=T)                     # Do primary beam correction

                                     # Make starting model image from single-dish image
im.makemodelfromsd(sdimage='n4826_12mchan.im', # specify single-dish image
   modelimage='n4826_joint1',        # specify name of output model image
   maskimage='n4826.mask')           # specify mask image

                                     # joint deconvolution and clean
im.clean(algorithm='mfclark',        # Use multi-field clark algorithm
   model=['n4826_joint1'],           # Use model image generated above as the initial model
   gain=0.2, niter=500)              # set gain and iterations
                                     # you can specify a clean mask if desired - see
                                     # Section 5.3.4.1 on interactivemask
                                     # Note: if the output image name isn't specified, the
                                     # restored image name will be the model image name,
                                     # 'n4826_joint1' appended with '.restored'

im.close()                           # Close tool

When combining single-dish images (in feather or uv-plane combination), the single-dish image can be scaled by a factor (typically to convert from K to Jy/beam) using the setsdoptions function.

im.setsdoptions(scale=0.5)

Full Joint Deconvolution

This is currently under construction.

References (Single-Dish and Interferometric Data Combination)

For a review of techniques see: Stanimirovic, S. 2002, astro-ph/0205329, "Short-Spacings Correction from the Single Dish perspective".

  • Emerson, D. T. 1974, {\it MNRAS}, {\bf 169}, 607

  • Helfer et al., 2003, {\it ApJS}, {\bf 145}, 259, Appendix B

  • Holdaway, M. A. 1999, in ASP Conf. Ser. 180, Synthesis Imaging in Radio Astronomy II, ed. G. B. Taylor, C. L. Carilli, \& R. A. Perley (San Francisco: ASP), 401

  • Stanimirovic, S., Staveley-Smith, L., Dickey, J. M., Sault, R. J., \& Snowden, S. 1999, {\it MNRAS}, {\bf 302}, 417

  • Vogel, S. N., Wright, M. C. H., Plambeck, R. L., \& Welch, W. J. 1984, {\it ApJ}, {\bf 283}, 655

Wide Field Imaging (non-coplanar effects)

Under Construction

The problem of imaging interferometric data for wide fields involves correcting for the non-co-planarity of the array when inverting the visibilities in the Fourier basis. The distortions in the image domain increase as a function of distance from the phase center. This distortion, if not corrected, limits the imaging dynamic range for fields with significant emission away from the phase center. The measurement equation for such an observation is

%$ V(u,v,w)=\int\int I(l,m) e^{2\pi\iota(u l + v m + w \sqrt{1-l^2-m^2})} \frac{dl dm}{\sqrt{1-l^2-m^2}} $%

with the usual meaning for the various symbols. The two algorithms in CASA for correcting for the non-co-planarity are called the ``w-projection'' and the ``faceted imaging'' algorithms.

w-projection

The w-projection algorithm is a matched filter approach. A set of filters corresponding to various values of w for the %$e^{2\pi\iota w \sqrt{1-l^2-m^2}}$% term are constructed and applied to the 2D co-planar visibility function to evaluate the 3D non-co-planar visibilities as:

%$ V(u,v,w)=G(u,v,w)\ast V(u,v) $%

where G(u,v,w) are the filters. For fast evaluation of the left-hand side, G is pre-computed for a discrete set of w with uniform sampling in \sqrt{w}, and the filter corresponding to the nearest value of w is used.

This effectively increases the field-of-view limit beyond which errors due to the non-coplanar array dominate. The errors in the PSF computed using the w-projection approach are smaller than those incurred in using the approximate PSF in the minor cycle of Clark-Clean. Effectively, therefore, Clark-Clean (and its variants) uses one form of approximate PSF in the minor cycle, while w-projection uses another. The accuracy of the major cycle in w-projection depends on the resolution at which the w-axis is sampled by the filters. For VLA L-band imaging, 256 samples along the w-axis are probably sufficient (set via the {\tt facets} argument of the setimage() method of the imager).
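To illustrate the w-sampling scheme described above, here is a sketch in plain Python (not a CASA interface; the value of wmax is hypothetical):

from math import sqrt

nwplanes = 256          # number of w-planes (cf. the facets argument)
wmax = 10000.0          # hypothetical maximum |w| in the data (wavelengths)

                        # Filters are evaluated at w values spaced uniformly
                        #   in sqrt(w), concentrating planes at small w
                        #   where most of the samples lie
wplanes = [(i*sqrt(wmax)/(nwplanes-1))**2 for i in range(nwplanes)]

def nearest_wplane(w):  # pick the pre-computed plane nearest in sqrt(w)
    i = int(round(sqrt(abs(w))*(nwplanes-1)/sqrt(wmax)))
    return wplanes[min(i, nwplanes-1)]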

Faceted imaging

The traditional approach for wide-field imaging involves approximating the (curved) sky by a set of tangent planes. The size of the facets is set such that the 2D approximation of the measurement equation is valid on each facet. The part of the sky covered by each facet is then imaged and deconvolved in the usual manner by phase rotating the visibilities to the center of each facet. The Clean-ed images for each of the facets are then appropriately rotated and projected onto a single 2D tangent image. The accuracy of the major cycle for this algorithm also depends on the number (and hence the size) of the facets, but the dependence is different from that of the w-projection algorithm.

It can be shown that imaging on each facet in the image domain is equivalent to a shear operation in the visibility domain. The CASA implementation of the faceted imaging algorithm uses this technique, which effectively does the faceting in the visibility domain rather than in the image domain. The practical advantage of this approach is that the user sees (and manages) a single 2D image, as against multiple facet images when the faceting is done in the image domain.

Comparison of the two methods

The w-projection algorithm can be shown to be about 10x faster than the faceted algorithm. The major cost of major-minor cycle based deconvolution algorithms is the cost of the major cycle. The dominant cost of the major cycle is the prediction of the visibilities, given a model image. Typically, in the faceted imaging approach, each major cycle involves predicting the visibilities for each facet. Though facets without any emission need not be predicted, saving some compute time, this saving disappears for typical P- and L-band fields, which invariably have emission in most facets.

The disadvantage of the w-projection algorithm is that it needs the entire image for the minor cycle. Since a single image covers the entire field of view, this approach requires more run-time memory. This requirement can, however, be relaxed by putting smaller fields around the dominant flanking sources in the field of view.

Example

Single Field Imaging

A typical script for imaging a Measurement Set named MS using the w-projection algorithm is as follows. The number of facets in this case corresponds to the number of discrete values of $w$ for which the gridding convolution functions are evaluated. This ultimately determines the imaging dynamic range. However, since the runtime is only weakly dependent on the number of facets, setting it to a high number like 256 is sufficient for VLA L-band imaging.

    #
    # Set the various parameters of imaging
    #
    MS       = ''                    # Name of the MS

    ALGO     = 'mfclark'             # The algorithm to use for
                                       # deconvolution

    IMSIZE   = [4096,4096]           # The image size

    CELLSIZE = ['2arcsec','2arcsec'] # The cellsize

    SPWID    = [1,2]                 # Spectral windows to image

    FIELDID  = 1                     # Field IDs to image

    NFACETS  = 256                   # No. of w-planes.

    STOKES   = 'I'                   # Stokes value

    MODEL_IMAGE_NAME ='1046.im'      # Name of the image files.
                                     # The restored Clean-ed image
                                     # and the residual images are
                                     # stored in images with
                                     # ``.clean'' and ``.res''
                                     # appended to this name.

    NITER    = 10000                 # No. of Clean components

    THRESHOLD= '100e-3mJy'           # The Cleaning threshold.  The
                                     # iterations are stopped when
                                     # the magnitude of the
                                     # strongest components is lower
                                     # than this value.

    DOINTERACTIVE = F                # Set it to T to do interactive
                                     # box setting in-between iterations.

    im.open(MS)

    im.setdata(spwid=SPWID,fieldid=FIELDID,mode='channel')

    im.setimage(nx=IMSIZE[0],ny=IMSIZE[1],          # use 0-based list indices
                cellx=CELLSIZE[0],celly=CELLSIZE[1],
                facets=NFACETS,stokes=STOKES,
                spwid=SPWID,fieldid=FIELDID)
    im.setoptions(ftmachine='wproject')             # use the w-projection gridder

    im.clean(algorithm=ALGO,
             niter=NITER,threshold=THRESHOLD,
             interactive=DOINTERACTIVE,
             model=[MODEL_IMAGE_NAME],
             image=[MODEL_IMAGE_NAME+'.clean'],
             residual=[MODEL_IMAGE_NAME+'.res'])

References

EVLA Memo 67: http://www.aoc.nrao.edu/evla/geninfo/memoseries/evlamemo67.pdf

Self-Calibration

Under Construction

Displaying Images

Under Construction

Image Analysis

Under Construction

Simulation

Under Construction

Tool Summaries

calibrater - cb

Description

The {\tt cb} tool provides synthesis calibration capabilities within \casa. The primary purpose of this tool is to solve for calibration components, and to optionally apply these corrections to the observed data. The calibration tool is designed to be used in conjunction with the \ahlink{im}{imager} tool, which provides support for synthesis imaging.

The calibration model adopted by {\tt calibrater} is that of the Hamaker-Bregman-Sault measurement equation for synthesis radio telescopes (see \htmladdnormallink{\aipspp Note 189}{../../notes/189/189.html}). This represents calibration corrections as matrices acting on 4-vectors representing the four possible correlations measured by an interferometer in full polarization. The calibration matrices cover a diversity of instrumental effects, including: parallactic angle and feed configuration (P,C), atmospheric phase (T), electronic gain (G), bandpass (B), instrumental polarization (D), baseline-based (correlator) corrections (M, MF), and baseline-based fringe-fitting (K).

The calibration data are stored in \casa\ tables and can be directly examined, manipulated or edited from the Python command line interface via the \ahlink{tb}{table} tool. The calibration tables may be interpolated when applied to the observed uv-data.

The solver allows the calibration components to be determined over different time intervals, thus allowing, as an example, the solution for atmospheric phase effects (T) over a much shorter interval than electronic gain terms (G). This also allows polarization self-calibration for time variable instrumental polarization corrections.

The measurement equation is designed to model calibration effects for a generic radio telescope and the calibration and synthesis modules are, in general, not instrument specific.

The {\tt cb} tool acts on a specified Measurement Set (MS), containing the observed uv-data. The Measurement Set format is described in \htmladdnormallink{\aipspp Note 191}{../../notes/191/191.html}. The interaction of the {\tt calibrater} tool with specific MS data columns is important. The observed data, as recorded by the instrument, are stored in the DATA column of the MS. If calibration corrections are applied by calibrater, the resulting calibrated data are stored in a separate CORRECTED_DATA column in the MS. These columns can be selected when imaging the data using the \ahlink{im}{imager} tool. A further MS column is used by calibrater, namely the MODEL_DATA column. The difference between the model data and corrected data columns is used to form \chi^2 when solving for individual calibration components. It is important to set the {\bf MODEL\_DATA} column before using calibrater to solve for calibration. This can be done using the {\tt imager} functions \ahlink{setjy}{imager:imager.setjy} or \ahlink{ft}{imager:imager.ft}.
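To check which of these columns exist in a given MS, the tb (table) tool can be used to list the column names (a minimal sketch; recall that tb is only partially migrated in this release):

tb.open('ngc5921.ms')          # Attach the table tool to the MS
print tb.colnames()            # Look for DATA, MODEL_DATA, CORRECTED_DATA
tb.close()                     # Detach from the MS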

A calibrater tool is attached to a specified measurement set as indicated in the following example:
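For example, using the NGC 5921 dataset from the step-by-step guide below:

cb.open(filename='ngc5921.ms')       # Attach a calibrater tool to the MS
cb.close()                           # Close the tool when finished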

Step-by-Step Guides

Spectral Line Analysis -- NGC 5921 (VLA)

The VLA spectral line dataset used as an example in this cookbook is available from the \casa\ Data Repository. The Data Repository is usually created in the directory /usr/lib/casapy/data/ when \casa\ is installed on a UNIX or LINUX machine (check with your \casa\ system administrator if the data are not there). You can create \casa\ Measurement Sets from these data directly as described below using the \ms\ tool.

The spectral-line dataset is a short VLA D-configuration observation of HI (redshifted to 1413 MHz) in NGC~5921. This dataset was observed in total intensity only (RR \& LL), with 63 channels in a single spectral window. A nearby phase calibrator (1445$+$099) was observed, as well as 3C\,286 for flux density calibration.

VLA: Filling data from VLA archive format

VLA data in on-line archive format are read into \casa\ from disk or tape using the {\tt vlafiller} task.

*Note: autocorrelations are filled automatically into the data set. Autocorrelation data are not needed for standard interferometric data; further, the imaging routine will try to image the autocorrelation data (it assumes they are single-dish data), which will swamp any real signal. Thus, it is necessary to flag the autocorrelation data at some point before imaging.*

Fill from Disk

The advantage to filling data from disk is that reading from disk is much faster and a lot less troublesome than reading from tape. It is also advantageous to do this if you expect to fill the data more than once. The disadvantage is that you need about 160 MBytes of spare disk space (or more if there are multiple archive files), as this is typically the size of a VLA archive file.

Downloaded archive format data are available with names like: AS500_A030501.xp1, AS500_A030501.xp2, ..., AS500_A030501.xpN.

The vlafiller task allows selection on frequency band, calibrator name, frequency range, project, time, subarray, and source, and supports concatenation to an existing uv-data set. Suppose that you have 1.3 cm line observations in 'K' band and you have copied the archive data to a subdirectory called 'data'. To run the vlafiller using archive data located on disk:

CASA [5]: pdoc vlafiller               # use pdoc to see a list
    Perform fill operations :          # of input parameters and their
      msname =                         # defaults
      inputfile
      project
      start     = 1970/1/1/00:00:00
      stop      = 2199/1/1/23:59:59
      centerfreq        = 1.0e18Hz
      bandwidth = 2.0e18Hz
      bandname  = *
      source
      subarray  = 0
      qualifier = -65536
      calcode   = #
      overwrite = false

vlafiller msname='as500.ms', inputfile='data/AS500_A030501.xp1', overwrite=True,  bandname='K'
vlafiller msname='as500.ms', inputfile='data/AS500_A030501.xp2', overwrite=False, bandname='K'

Filling data from UVFITS format

For UVFITS format, the ms tool has the fromfits method, which can be used as:

CASA [3]: pdoc ms.fromfits
Create an ms from a uvfits file :
  msfile
  fitsfile
  nomodify  = true
  lock      = false
  obstype   = 0
  host
  forcenewserver    = false
----------------------------------------

CASA [4]: ms.fromfits('n5921.ms','/aips++/data/nrao/VLA/ngc5921.fits')
  Out[4]: True

Obtain a summary

To obtain a summary of the dataset's properties after filling, load the data into the ms tool, and run the ms.summary function:

ms.open('as500.ms')                     # Load data into ms tool
ms.summary()                            # Write a summary to the casapy.log file

Plot the data

The primary graphical visualization tool in CASA for visibility data is msplot (mp). mp provides X-Y plots of the observed, corrected, and model data against a range of MS indices and parameters, as well as associated plots such as $uv$-coverage. Plots are displayed using the matplotlib plotting library which is also provided to the user at the command line.

pylab functions allow one to annotate, alter, or add to any plot displayed in the matplotlib plotter. matplotlib commands can be found at:

http://matplotlib.sourceforge.net/pylab_commands.html
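For example, a displayed msplot panel can be annotated and saved with standard pylab calls (a sketch; the title, axis label, and file name are arbitrary):

mp.open('ngc5921.ms')
mp.uvdist()                                      # draw a plot with msplot
pl.title('NGC 5921: amplitude vs. uv distance')  # annotate it via pylab
pl.xlabel('uv distance (wavelengths)')
pl.savefig('ngc5921_uvdist.png')                 # save the figure to disk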

To start the mp tool from the Python command line interface and allow for interactive flagging of the NGC~5921 data:


mp.open(msfile='ngc5921.ms')

The following figure illustrates some of the capabilities; the script is below:

mp.open('ngc5921.ms')         # open data set
mp.setdata(antennaNames='3',  # choose a subset of the data; antenna 3 only
           fieldIndex=1,      #  field 2 (gain calibrator)
           correlations='RR') #  RR correlation only
pl.subplot(222)               # plot it in panel 2 (top right) of a 2x2 grid
mp.vischannel()               # plot amplitude versus channel
mp.setspectral(1,0,1)         # select only the 1st channel in the data - for
                              #  the VLA, this is the 0 channel (average) data
pl.subplot(221)               # plot on panel 1 (top left) of a 2x2 grid
mp.uvdist()                   # plot amplitude versus uv distance
pl.subplot(223)               # plot on panel 3 (lower left) of a 2x2 grid
mp.uvcoverage()               # plot the uv coverage
pl.subplot(224)               # plot on panel 4 (lower right) of a 2x2 grid
mp.array()                    # plot antenna configuration

%BEGINFIGURE{label="fig:msplot.control" caption="_msplot_: multi-panel display}% Error: (3) can't find msplot_sampler.jpg in Software %ENDFIGURE%

Interactive Flagging

Interactive flagging (e.g., 'see it -- flag it') is possible on almost all graphical views of the data.

The key methods are markflags and flagdata. markflags enables the use of the matplotlib cursor to mark a rectangular region; markflags can also specify a region at the command line:

mp.uvdist()
mp.markflags() #no region specified so mark with cursor
               #left click top left corner (an 'x' will appear)
               #drag down and to the right and left click to mark right bottom corner
mp.flagdata()  # to redisplay plot with flagged region removed
mp.flagdata(diskwrite=true) # to write the flags to disk

mp.markflags(region=[200,250,0.1,0.15]) # specify region at command line
                                        # region=[xmin,xmax,ymin,ymax]
mp.flagdata()  # to redisplay plot with flagged regions removed
#mp.flagdata(diskwrite=true) # to write the flags to disk

%BEGINFIGURE{label="fig:markflags" caption="Plot of the amplitude versus uv distance. The cursor-determined flag region is marked by the black outline. Plot of amplitude versus uv distance with both the cursor-determined and command line specified flagging regions removed (right)."}% Error: (3) can't find markflags.jpg in Software Error: (3) can't find markflags2.jpg in Software %ENDFIGURE%

Flagging bad data in the NGC 5921 dataset

Examine the data in ngc5921.ms using msplot. First, plot calibrator data in field_id=0 (1328+307 = 3C 286, the flux calibrator).

mp.setdata(fieldIndex=0) # select flux cal
mp.setspectral(1,30,1)   # select a central channel
mp.uvdist()              # plot amplitude (default)
                         #   vs. uv distance
mp.markflags()           # to interactively flag

The observed data plotted are the RR \& LL correlations (Fig. \ref{msplotS.examples}). Cross correlations (RL \& LR) are not available in this dataset. Plot amplitude vs. time next. No obvious flagging is needed based on the msplot plots. Note, it is best not to edit/flag source data until after it is calibrated.

%BEGINFIGURE{label="fig:msplot_cals" caption="_msplot_ of the calibrators in the VLA spectral-line dataset example. Left: 3C286 (1331+305), the flux calibrator. Right: 1445+099 - the gain calibrator."}% Error: (3) can't find vla_msplot_cals.jpg in Software %ENDFIGURE%

Non-interactive flagging

Command-based editing capabilities are supported in the autoflag tool (af), which can select data of a particular type, or with particular attributes, and flag these data directly. In general, this involves selecting the data to be flagged and then running a flagging operation. All of the flagging commands can be run on a trial basis.

As mentioned in \ref{nrao.vla}, it is useful to flag the autocorrelations in the NGC 5921 dataset, ngc5921.ms, as well as any VLA data filled from archive format files. If autocorrelation data are left in, the data and calibration will be fine but you'll have to explicitly avoid the autocorrelation data in imaging by specifying a uvrange. To flag the autocorrelations, type:

    af.open('ngc5921.ms')             # Create the autoflag tool, attach to MS
    af.setdata()                      # Set all data initially
    af.setautocorrelation(True)       # Select autocorrelation data
    af.run()                          # Flag it
    af.close()                        # Close the tool

Calibration

Set the absolute flux density scale

To set the true flux density for flux density calibrators, use the imager.setjy function, which will recognize common flux density calibrators, and set their model flux densities according to the observing frequency. See Synthesis Calibration chapter for more details.

For the NGC 5921 dataset, set the flux density of 3C286 (field 1):

im.open(filename='ngc5921.ms')      # Load the data into the imager tool
im.setjy(fieldid=0)                 # Calculate and set the flux density
                                    # 1328+307 (3C 286) to 14.76 Jy
                                    # Look at the casapy.log file for the
                                    # results
im.close()

When setjy is run, you will get a report in the logger giving the status of the task along with the end results:

  Starting imager::setjy
    1331+30500002  spwid=  1  [I=14.76, Q=0, U=0, V=0] Jy, (Perley-Taylor 99)
  Finished imager::setjy

Check that the flux is actually set by using mp.uvdist:

mp.open('ngc5921.ms')
mp.setdata(fieldIndex=0)            # set data to 1331+305 (3C 286)
mp.uvdist(column='model')           # plot model amplitudes vs uv distance
mp.close()                          # close out the tool.

Other calibrators typically have unknown flux density. Phase and relative amplitude calibration can proceed on these calibrators using unit flux density models; the flux density scale will be transferred later.
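For example, running setjy on the gain calibrator would set such a default model (a minimal sketch; setjy assigns a flux density of 1 Jy to sources it does not recognize):

im.open(filename='ngc5921.ms')
im.setjy(fieldid=1)                 # 1445+099 is not a standard calibrator;
                                    #  its model defaults to 1 Jy
im.close()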

%BEGINFIGURE{label="mp.uvdist" caption="Check the im.setjy result using _mp.uvdist_('model'). The amplitude of 1328+307 (3C 286) for the NGC 5921 dataset is now 14.8 Jy at 1413 MHz."}% Error: (3) can't find vla_plotvis.jpg in Software %ENDFIGURE%

Preparing for calibration: Determine opacity corrections for VLA data

For VLA data, the zenith opacity can be measured at the observing frequency, and at the time the observations were made, if a VLA tipping scan was included in the observe file. Historical tipping data are available at:

http://www.vla.nrao.edu/astro/calib/tipper/

Choose a year, and click Go to get a list of all tipping scans that have been made for that year. If a tipping scan was made for your observation, then select the appropriate file. Go to the bottom of the page and click on the button that says Press here to continue. The results of the tipping scan will be displayed. Go to the section called 'Overall Fit Summary' to find the fit quality and the fitted zenith opacity in percent. If the zenith opacity is reported as 6%, then the actual zenith opacity is 0.060 (this is the input needed for CASA corrections).
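In other words, divide the reported percentage by 100 before passing it to the calibrater tool (using the TOPAC correction shown in the examples below):

zenith_opacity = 6.0/100.0          # 6% from the tipping-scan fit -> 0.060
cb.setapply(type='TOPAC',           # elevation-dependent opacity correction
             t=-1,
             opacity=zenith_opacity)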

Solving for complex gain, G

At this point, the source model and observed data exist for all calibrators at all uv-points, and it is possible to begin solving for the gains. It is almost invariably the case that the instrumental and atmospheric amplitude and phase gains (hereafter ``complex gains'') are the dominant calibration error in VLA data. Therefore we solve for this factor first.

For the standard cross-calibration scheme described here, only one term (G) will be used to describe the net complex gain (and so it will include atmospheric effects). Thus, one complex gain factor, as a function of time, will be determined for each field, spectral window, polarization, and antenna in the dataset.

Note: Well-behaved antennas that are located near the center of the array are chosen as reference antennas for each dataset. For non-polarization datasets, a reference antenna need not be specified, although you can specify one if you wish. If no reference antenna is specified, the effective phase reference will be some implicit average over the array, which is usually adequate for parallel-hand data. For data that require polarization calibration, you must choose a reference antenna that has a constant phase difference between the right and left polarizations (i.e., no phase jumps or drifts). If no reference antenna (or a poor one) is specified, the phase reference may have jumps in the R--L phase, and the resulting polarization position angle response will vary during the observation, thus corrupting the polarization imaging.

To solve for $G$, first choose a solution interval which samples the timescale over which you think the instrument or atmosphere is changing. Time scales depend on the array configuration and the observed frequency. Here, scan-based solutions (one solution per visit to each field) are chosen for the 21~cm spectral line data.

At frequencies above \sim 14 GHz, it is a good idea to correct the data for the gain curve response of the antennas and for the elevation-dependent opacity. CASA has the gain curves of the VLA antennas available in the data repository and so can correct for the change in efficiency as a function of elevation. An opacity correction can be made if there is an independent measure of the zenith opacity in the observed frequency band and at the time of the observations (e.g. a 'tipping scan' was included in the VLA observe file). For the NGC 5921 data, an opacity correction is not needed; however, the script below includes this step for illustration purposes (using a very low opacity). See Section \ref{vla.opacity} for a discussion of how to determine the zenith opacity for a project.

To obtain gain calibration solutions for the spectral-line data set: select the data you want to find solutions for (the inner 55 channels, starting at channel 3, for the gain and flux calibrators in FIELD_IDs 1 & 2), set up to derive scan-based solutions for phase and amplitude, and then solve for the calibration solutions. Write the gain solutions to a file called ngc5921.gcal and plot the solutions:

cb.open(filename='ngc5921.ms')           # Create calibrater tool for
                                         #  spectral line data set
cb.reset()                               # Reset apply/solve state
                                         #  of the calibrater tool
cb.setdata(msselect='FIELD_ID <= 1',     # Select data for calibrators
            mode='channel',              #  (Fields 1 & 2)
            start=3,                     #  and drop the outer channels
            nchan=55)                    #  that may bias the gain solution.
cb.setapply(type='TOPAC',                # Correct opacity 'on-the-fly'
             t=-1,                       # Calc opacity correction for each time
             opacity=0.0001)             # Specify measured zenith opacity
                                         #  make opacity small - not really needed here
cb.setapply(type='GAINCURVE',            # Correct gain curve for VLA 'on-the-fly'
             t=-1)                       #  apply correction for each time
cb.setsolve(type='G',                    # Arrange to solve for G for
             t=0,                        #  each integration.
             refant=13,                  #  Choose reference antenna 14:
             table='ngc5921.gcal')       #  a well-behaved antenna
                                         #  near the center of the array.
cb.state()                               # Review setsolve settings
cb.solve()                               # Solve for the net complex gains
                                         #  and write solutions to the table
                                         #  ngc5921.gcal located on disk.
cp.open('ngc5921.gcal')                  # Inspect solutions
cp.plot('amp')

Note: the amplitude solutions in ngc5921.gcal do not contain any variations due to opacity or gain variations in the antennas because these corrections were automatically determined and taken out before the gain solutions were derived.

While the cb.solve function is running, the logger will report the gain and opacity corrections as well as a summary of how many solutions were found and whether the solutions are considered good or bad:

  Found 5 good G Jones solutions.
        0 solution intervals failed.
        0 solution intervals had insufficient data.
Storing G matrix in table ngc5921.gcal

You can plot the solutions graphically using the cp tool, e.g. (Fig. \ref{cp.plot}). If you want solutions plotted for each antenna separately, choose cp.plotoptions(multiplot=True).
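For example, a sketch of the per-antenna option:

cp.open('ngc5921.gcal')             # Load the gain solutions
cp.plotoptions(multiplot=True)      # One panel per antenna
cp.plot('amp')                      # Plot the amplitude solutions
cp.close()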

%BEGINFIGURE{label="fig:cp.plot" caption="Gain calibration solutions for spectral-line data. A plot of calibration solutions for the spectral-line data set as a function of time for all antennas (generated with the cp.plot function). The first set of solutions for 3C 286, the last four are for the gain calibrator, 1445+099"}% Error: (3) can't find vla_calplot.jpg in Software %ENDFIGURE%

Bandpass calibrator

In the case of spectral-line data, it is necessary to remove gain fluctuations that are a function of frequency. This is achieved by using the cb tool to solve for $B$ calibration, in the present example using 1331+305 as the bandpass calibrator. The $G$ calibration obtained above is applied to ensure a coherent time average prior to the solve for B. A by-product of this action is to yield a normalized B solution as well. The solution interval is set very large so only one B solution (in time) is obtained.

To obtain bandpass solutions reset the calibrator tool, select the data you want to find solutions for (the bandpass calibrator in FIELD_ID 1), set up to derive a single solution for phase and amplitude, and then solve. Write the bandpass solutions to a file called ngc5921.bcal and plot the solutions:

cb.reset()                               # Reset apply/solve state
                                         #  of the calibrater tool
cb.setdata(msselect='FIELD_ID==0')       # Select bandpass calibrator
                                         #  (1331+305 = 3C 286)
cb.setapply(type='TOPAC',                # Correct opacity 'on-the-fly'
             t=-1,                       # Calc opacity correction for each time
             opacity=0.0001)             # Specify measured zenith opacity
                                         #  make opacity small - not really needed here
cb.setapply(type='GAINCURVE',            # Correct gain curve for VLA 'on-the-fly'
             t=-1)                       #  apply correction for each time
cb.setapply(type='G',                    # Arrange to apply G solutions
             t=0.0,                      #  from the table ngc5921.gcal
             table='ngc5921.gcal')
cb.setsolve(type='B',                    # Arrange to solve for a single
             t=86400.0,                  #  bandpass solution for the
             refant=13,                  #  entire observation.
             table='ngc5921.bcal')
cb.state()                               # Review setapply/setsolve settings
cb.solve()                               # Solve for the bandpass solutions
                                         #  and write them to the table
                                         #  ngc5921.bcal located on disk.
cp.open('ngc5921.bcal')                  # Inspect the solutions
cp.plot('amp')

The state function reports the current state of the calibrater tool in a logger message:

  The following calibration components will be applied:
    GAINCURVE table= t=-1 select=[ ]
    TOPAC table= t=-1 select=[ ]
    G table=ngc5921.gcal t=0 select=[]
  The following calibration components will be solved for:
    B table=ngc5921.bcal t=86400 preavg=0 phaseonly=F refant=14 append=F

While the calibrater.solve function is running, the logger will report:

Initializing Opacity corrections (TOpac-matrix)
For interval of -1 seconds, found 26 slots
  Elevation-dependent opacity corrections will be applied
  using a constant zenith opacity = 1e-04
Initializing Gain Curve corrections (EVJones-matrix)
For interval of -1 seconds, found 26 slots
  Searching for gain curve data in /aips++/daily/data/nrao/VLA/GainCurves
  Found the following gain curve coefficient data
  for spectral window = 0 (ref freq=1.41345e+09):
   Antenna=1: [0.99545, -2.6976e-05, 4.1327e-06, 0] [0.99545, -2.6976e-05, 4.1327e-06, 0]
   ...
Applying G table from ngc5921.gcal
Solving for B
  Found 1 good B Jones solutions.
        0 solution intervals failed.
        0 solution intervals had insufficient data.
Storing B matrix in table ngc5921.bcal

Plot the solutions graphically using the cp tool. If you want solutions plotted for each antenna separately, choose cp.plotoptions(multiplot=True).

%BEGINFIGURE{label="fig:bcal.plotcal" caption="Bandpass calibration solutions for the spectral-line data. A plot of the bandpass solutions for NGC 5921 for all antennas."}% Error: (3) can't find vla_plotcal_bpass.jpg in Software %ENDFIGURE%

Transfer the flux density scale

Point source calibrators of unknown flux density have been assumed to be of unit flux density (1 Jy) thus far. Their absolute flux densities can be established by scaling the mean gain amplitudes against those of the known amplitude calibrators. This scaling is achieved using the cb.fluxscale function.

In the example below, the amplitude scale is set with 3C~286 as the reference flux density calibrator. The output calibration table contains the scaled G calibration factors for all of the transfer sources, and will be used in subsequent reduction steps. Proper scaling is indicated in the calplot plots by smoothly varying (ideally constant) amplitude gains for each antenna, despite source changes.

In the calibrater tool, transfer the flux scale calculated for 3C 286 to the gain calibrator, 1445+099, and write the scaled solutions to a file on disk called ngc5921.fluxcal. Plot the solutions with the calplot function (Fig. \ref{fcal.plotcal}):

cb.fluxscale(tablein='ngc5921.gcal',          # Transfer the flux density scale
              tableout='ngc5921.fluxcal',     #  from 1328+307 (J2000: 3C286)
              reference=['1331+30500002'],    #  to the gain calibrator
              transfer=['1445+09900002'])     #  1445+09900002.  Write
                                              #  solutions to the table
                                              #  ngc5921.fluxcal located on disk.
cp.open('ngc5921.fluxcal')                    # Inspect the solutions
cp.plot(plottype='amp')

The logger window will report the flux density of the gain calibrator:

  Setting the flux density scale using reference calibrators
  Flux density for 1445+09900002 (spw=1) is:     2.486 +/-  0.000 Jy

%BEGINFIGURE{label="fig:fcal.plotcal" caption="Scaled gain solutions for the NGC 5921 data. A plot of scaled gain solutions for the NGC 5921 data set as a function of time for all antennas. Amplitudes for all sources are now roughly the same, as expected for a good set of scaled gain solutions."}% Error: (3) can't find vla_plotcal_fcal.jpg in Software %ENDFIGURE%

Correct the Data

Before imaging, the source data need to be corrected by applying the calibrations determined above. The cb.setdata function selects which data are to be corrected, and the cb.setapply function selects which solutions are to be applied. The cb.setapply function may be run several times (in any order) to specify simultaneous application of different calibration components. The components will be applied by the cb.correct function in the appropriate order according to the Measurement Equation. In CASA, calibration tables are always cumulative, and so are always applied to the observed data in the DATA column. The corrected data are written to the CORRECTED_DATA column in the MS, which is used directly in imaging. Note: the observed DATA column is NOT changed. If you want to change the solutions you have applied, you can create new calibration tables and correct the data again; the CORRECTED_DATA column will be over-written.

In the calibrater tool, reset the tool state, select source & gain calibrators (Fields 2 & 3), set up to apply flux-scaled gain solutions derived from the gain calibrator (Field 2) along with the bandpass solutions, and then correct the observed data, writing the result to the CORRECTED_DATA column in the measurement set:

cb.reset()                                 # Reset setapply/setsolve
cb.setdata(msselect='FIELD_ID IN [1,2]')   # Select sources in MS fields
                                           #  2, & 3 to which calibration
                                           #  is to be applied
cb.setapply(type='TOPAC',                  # Correct opacity 'on-the-fly'
             t=-1,                         # Calc opacity correction for each time
             opacity=0.0001)               # Specify measured zenith opacity
cb.setapply(type='GAINCURVE',              # Correct gain curve for VLA 'on-the-fly'
             t=-1)                         #  apply correction for each time
cb.setapply(type='G',                      # Arrange to apply flux-scaled G
             t=0.0,                        #  solutions (from field 2) from
             table='ngc5921.fluxcal',      #  the ngc5921.fluxcal table.
             select='FIELD_ID==1')
cb.setapply(type='B',                      # Arrange to apply B solutions
             t=0.0,                        #  from the ngc5921.bcal table.
             table='ngc5921.bcal',
             select='')
cb.state()                                 # Review setapply settings
cb.correct()                               # Apply solutions and write the
                                           #  CORRECTED_DATA column in the MS.
cb.done()                                  # Close out the calibrater tool

The calibrater.state and calibrater.correct functions will produce logger messages like:

  The following calibration components will be applied:
    GAINCURVE table= t=-1 select=[ ]
    TOPAC table= t=-1 select=[ ]
    B table=ngc5921.bcal t=0 select=[]
    G table=ngc5921.fluxcal t=0 select=[(FIELD_ID+1)==2]
  The following calibration components will be solved for:
    None.

  ... Opacity and gain curve results will be written out ...

  Applying B table from ngc5921.bcal
  Applying G table from ngc5921.fluxcal

The target data are now calibrated and ready for imaging.

Note: The flux calibrator (3C 286, field 1) could have been calibrated at the same time by selecting it in the initial cb.setdata and applying the G and B solutions to all the data (as in the previous example with the continuum data set). If you want to calibrate the 3C 286 data separately, execute {\tt cb.setdata} and {\tt cb.setapply(type='G')} again with 3C 286 selected in both (msselect='FIELD_ID==0'), followed by cb.correct.
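A minimal sketch of that alternative (the select string for the G table follows the pattern of the previous example and is an assumption):

cb.open(filename='ngc5921.ms')
cb.reset()
cb.setdata(msselect='FIELD_ID==0')         # Select 3C 286 itself
cb.setapply(type='G',                      # Apply 3C 286's own flux-scaled
             t=0.0,                        #  G solutions
             table='ngc5921.fluxcal',
             select='FIELD_ID==0')
cb.setapply(type='B',                      # Apply the bandpass solutions
             t=0.0,
             table='ngc5921.bcal',
             select='')
cb.correct()                               # Write CORRECTED_DATA for 3C 286
cb.close()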

Examine calibrated source data

At this point, you should examine the calibrated source data and flag anything that looks bad. If you find source data that has not been flanked by calibration scans, delete it (it will not be calibrated):

mp.open(msfile='ngc5921.ms')              # Examine the data, flag
                                          #  anything that looks bad

uv Continuum Subtraction

At this point, consider whether you are likely to need continuum subtraction. If there is significant continuum emission present in what is intended as a spectral line observation, continuum subtraction may be desirable. You can estimate and subtract continuum emission in the $uv$-plane prior to imaging, or wait and subtract an estimate of it in the image-plane. Note that neither method is ideal, and the choice depends primarily upon the distribution and strength of the continuum emission. Subtraction in the $uv$-plane is desirable if continuum emission dominates the source, since deconvolution of the line emission will be more robust if not subject to errors in deconvolution of the brighter continuum. There is also a performance benefit, since the continuum is essentially the same in each channel of the observation, and it is desirable to avoid duplicating its deconvolution in every channel. However, the main drawback of continuum estimation in the $uv$-plane is that it is only strictly correct for emission at the phase center: emission away from the phase center introduces a frequency-dependent phase into the visibilities, so a fit that is constant across the band represents it increasingly poorly with distance from the phase center. If the continuum emission is relatively weak, it is usually adequate to subtract it in the image plane; this is described in the Image Analysis section of this cookbook. Here, we describe how to do continuum estimation and subtraction in the $uv$-plane.

$uv$ plane continuum subtraction is performed by the {\tt continuumsub} function in the {\tt ms} tool. First, determine which channels in your data cube do not have line emission, perhaps by forming a preliminary image as described in the next chapter. This image will also help you decide whether or not you need to come back and do $uv$-plane continuum subtraction at all.

For each baseline, and over the timescale specified in {\tt solint}, {\tt continuumsub} will provide a simple linear fit to the real and imaginary parts of the (continuum-only) channels specified in {\tt chans}, and subtract this model from all channels. Choose the timescale to be shorter than the timescale for changes in the visibility function of the continuum, but be careful not to make it so short that the SNR of the estimated continuum suffers substantially. For example (the NGC 5921 case):

ms.open(filename='ngc5921.ms',           # Load the data into the ms tool
           readonly=F)                   # Make sure it is writable!
ms.continuumsub(fldid=2,                 # Choose NGC 5921 (field 3),
           spwid=0,                      #  spectral window 1,
           chans=range(4,7)+range(50,60),#  line free channels 4-6 & 50-59,
           solint=0.0,                   #  estimate/subtract per scan
           mode='subtract')              #  & subtract cont. from line data.
                                         # Line-only data will be written into
                                         #  CORRECTED_DATA column.
ms.close()                               # Unload the data from the ms tool

Running continuumsub with mode='subtract' will replace the CORRECTED_DATA column in the MS with continuum-subtracted line data and the MODEL_DATA column with the continuum model. You can use mode='replace' to replace the CORRECTED_DATA column with the continuum model; however, it is probably better to use mode='subtract' and then use {\tt split} to select the MODEL_DATA and form a dataset appropriate for imaging the estimated continuum. Note that a continuum image formed from this model will only be strictly correct near the phase center, for the reasons described above.
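A sketch of that last step, using the split parameters shown in the next section (the output name here is hypothetical):

ms.open(filename='ngc5921.ms', readonly=F)
ms.split(outputms='ngc5921.cont.ms',   # hypothetical output name
           fieldids=2,                 #  NGC 5921 only,
           spwids=0,                   #  spectral window 1,
           nchan=62, start=0, step=1,  #  all 62 channels,
           whichcol='MODEL_DATA')      #  select the continuum model column
ms.close()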

Note that continuumsub currently overwrites the CORRECTED_DATA column. Therefore, it is desirable to first split the relevant corrected data into a new Measurement Set. Also, continuumsub requires that the MODEL_DATA and CORRECTED_DATA columns exist, and split does not generate them; these columns will be generated by the imager tool when forming the exploratory image described above. If you run continuumsub on the original dataset, you will have to re-apply the calibration as described in the previous chapter. Finally, since {\tt continuumsub} is currently implemented as a Python method, its performance is quite slow. All of these issues will be resolved in due time.

So, the recommended procedure is as follows:

  • Finish calibration as described in the previous chapter
  • Use split to form a separate dataset
  • Use the imager tool on the split result to form an
exploratory image that is useful for determining the line-free channels
  • Use continuumsub with mode='subtract' to subtract
the continuum from the CORRECTED_DATA in the split dataset (and write the continuum model to the MODEL_DATA column)
  • Image the line-only emission with the imager tool
  • If an image of the estimated continuum is desired, run split
again (on the _continuumsub_'d dataset), and select the MODEL_DATA; then run imager to image it

Split out calibrated uv data

To split out calibrated uv data into a separate MS, use the split function in the ms tool. Thus, for the NGC 5921 data:

ms.open(filename='ngc5921.ms',           # Load the data into the ms tool
           readonly=F)                   # Make sure it is writable!
ms.split(outputms='ngc5921.split.ms',    # Select output file name,
           fieldids=2,                   #  write out only NGC 5921 data,
           spwids=0,                     #  spectral window 1,
           nchan=62,                     #  write out 62 channels,
           start=0,                      #  starting with channel 1,
           step=1,                       #  no channel averaging,
           whichcol='CORRECTED_DATA')    #  write out CORRECTED_DATA column.
ms.close()
ms.open('ngc5921.split.ms')
ms.summary()                             # Get a new summary
ms.close()                               # Unload the data from the ms tool

Spectral Line Imaging

For the spectral-line data set, total intensity images of the HI in the galaxy NGC~5921 are generated for each channel to form a spectral-line cube. In this example, the CLEANing process is done on the line+continuum data and then the continuum is subtracted in the image plane.

Cleaning a spectral-line cube with the continuum in each channel can be inefficient if most of the clean cycles are spent cleaning the same structure (e.g. continuum emission) in each plane. Thus, it is usually better to subtract the continuum emission before the CLEAN process. This example creates an image using the line-only cube (continuum subtraction was done in the $uv$ plane).

CLEANing a cube is similar to CLEANing a single continuum channel except that a mask can be defined for each channel. The mask can be a single shape that is defined for all channels (e.g. as in the continuum polarimetry example) or it can have a variable shape depending upon the location of the emission in each channel. In the simple example below, the image data, parameters, and weighting function are defined first. The inner quarter of the cube is then CLEANed down to a threshold of 2~mJy~beam^{-1} (the expected RMS in a channel):

im.open(filename='ngc5921.split.ms') # Load the data
                                     #  Use split out calibrated data.
im.setdata(mode='channel',           # Select channel data for field 1 (source)
             nchan=62,
             start=0,
             step=1,
             fieldid=0)
im.advise(takeadvice=F,              # Determine image and cell size using
            fieldofview=quantity(60.,'arcmin'))  #  advise function.  Do not take advice,
                                     #  note parameters in logger
                                     #  window and use for setimage.
                                     #  This step is NOT required for
                                     #  imaging, it is just a useful tool.

The advise function is not necessary for imaging except that the logger messages produced can help guide your choice of image properties if you don't know what they should be:

  Advising image properties
  Maximum uv distance = 4797.41 wavelengths
  Recommended cell size < 21.4975 arcsec
  Recommended number of pixels = 180
  Dispersion in uv, w distance = 1928.44, 788.529 wavelengths
  Best fitting plane is w = -0.253496 * u + -0.574818 * v
  Dispersion in fitted w = 178.687 wavelengths
  Wide field cleaning is not necessary

The accuracy of the deconvolution algorithm generally improves when the shape of the dirty beam is well sampled, e.g. 3-6 cells per resolution element. To interpret the results of advise, the ``Recommended cell size'' is the {\it maximum} cell size you can use and still obtain a reasonable image; this value gives only 1-2 cells across the synthesized beam. If you use a cell size of 21.5'', then you will need an image size of 180 pixels to clean the inner quarter of the primary beam. Thus, to get a reasonable image, one should choose a cell size of, say, $15''$ (or less) and a correspondingly larger image size of 256 \times 256 pixels (or greater).
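The recommended cell size follows directly from the maximum uv distance reported by advise; a sketch of the arithmetic in plain Python:

uvmax = 4797.41                        # wavelengths, from the logger above
rad2arcsec = 206264.8                  # arcseconds per radian

max_cell = rad2arcsec/(2.0*uvmax)      # Nyquist cell size: ~21.5 arcsec

cell = 15.0                            # arcsec, as chosen below
npix = 180.0*max_cell/cell             # ~258 pixels to cover the same field
                                       #  of view; 256 is a convenient size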

Set the image parameters and weights:

im.setimage(nx=256,                # Set imaging parameters
              ny=256,
              cellx=quantity(15.,'arcsec'),
              celly=quantity(15.,'arcsec'),
              stokes='I',
              mode='channel',
              nchan=58,              # Image 58 channels
              start=2,               #  starting with channel 3
              step=1,                #  stepping by 1
              fieldid=0)             #  for field 1 (NGC 5921)
im.weight(type='briggs',           # Set up Robust weighting
            rmode='norm',
            robust=0.5)

To Hogbom CLEAN the inner quarter of the image down to a threshold level of about the expected RMS level in a single channel (2 mJy):

im.clean(algorithm='hogbom',       # Image and deconvolve inner quarter
           niter=6000,               #  with Hogbom CLEAN down to a threshold
           gain=0.1,                 #  of 2 mJy
           threshold=quantity(2.,'mJy'),
           model=['ngc5921.mod'],    # Write model image to ngc5921.mod
           image=['ngc5921.im'],     # Write the cleaned image to the file
           residual=['ngc5921.resid'], #  ngc5921.im on disk.
           mask=[''],                # Clean the inner quarter, thus: do not
           interactive=F)            #  set a mask or create a deconvolution
                                     #  region.

The logger will report the status of the deconvolution and the final synthesized beam shape:

Fitted beam used in restoration: 53.2798 by 45.8793 (arcsec) at pa -164.714 (deg)

Note: In this example, the inner quarter of the 256x256 image is searched for CLEAN components.

The CLEAN deconvolution can be better constrained if deconvolution regions are specified. This is particularly important if the $uv$ coverage is sparse, e.g., for small arrays or snapshots. In these cases, the probability of finding spurious CLEAN components is greater, since the synthesized beam will scatter more power around the image. When deconvolution regions are made very small, using them can also decrease the time required in the CLEAN component search.

Use the qtviewer to display the image ngc5921.im and obtain statistics (Fig. \ref{ngc5921.im}). If you did not subtract the continuum emission in the uv plane then the final image will have a peak of \sim 89.7 mJy~beam^{-1} and an RMS \sim 1.8 mJy~beam^{-1} in each channel. You should see several continuum sources and the HI emission in the galaxy as you move through the cube. Fig. \ref{ngc5921.im} also shows an image created after uv continuum subtraction. No deconvolution region needed to be set and the final RMS in the image is slightly better than that achieved from the line+continuum image because the bright continuum sources did not interfere with the deconvolution.

%BEGINFIGURE{label="fig:ngc5921.im" caption="Line+continuum and Line-only emission at 6 cm in channel 40 of the NGC 5921 cube. Left: Line+continuum emission at 6 cm in channel 40 of the NGC 5921 cube. Grey scale is plotted from 0 to 89.7 mJy beam^{-1}, the RMS is 1.8 mJy beam^{-1}, and contours are plotted at -3, 3, 5, 10, 20, 30, 50 and 70 \sigma. Right: Line-only emission. Grey scale is plotted from 0 to 47.2 mJy beam ^{-1}, the RMS is 1.3 mJy beam ^{-1}, and contours are plotted at -3, 3, 5, 10, and 20 \sigma. The synthesized beam is shown in the bottom left corner of the images."}% Error: (3) can't find vla_ngc5921_image.jpg in Software Error: (3) can't find vla_ngc5921_nocont_image.jpg in Software %ENDFIGURE%

Another function that is useful, though not necessary, for creating images is im.fitpsf, which allows you to examine the synthesized beam (Point Spread Function) to determine the resolution resulting from the chosen weighting scheme and to see what artifacts to expect from beam sidelobes during CLEANing (Fig. \ref{vla.psf.im}):

im.makeimage(type='psf',           # Form the PSF image if desired
               image='ngc5921.psf')
im.fitpsf(psf='ngc5921.psf')       # Measure the beam size
im.close()

The im.fitpsf function will report the FWHM of the synthesized beam in the logger:

    Beam fit: 53.2801 by 45.8792 (arcsec) at pa -164.714 (deg)

%BEGINFIGURE{label="fig:vla.psf.im" caption="Point Spread Function or Dirty beam for the NGC 5921 spectral-line data. Contours are plotted at -5, 5, 10, 30, 50, 70 and 90% of the peak"}% Error: (3) can't find vla_psf.jpg in Software %ENDFIGURE%

Annotated Example Scripts

Note: The NGC 5921 data set is available with the full CASA rpm distribution. Other data sets can be made available upon request. These scripts are intended to illustrate the types of commands needed for different types of reduction/astronomical observations.

NGC 5921 -- VLA -- red-shifted HI emission

Note: This script does not include self-calibration steps.

###########################################################
## Fill Data:
##
casalog                                    # view logging output
fitstoms('ngc5921.ms',                     # fill UVFITS to output MS
     '/aips++/data/nrao/VLA/ngc5921.fits') # input fits file
ms.open('ngc5921.ms')                      # Open created MS
ms.summary(verbose=True)                   # Obtain summary
                                           # prints to casapy.log
ms.close()                                 # close ms

###########################################################
## Manually plot the data for visual inspection:
##
mp.open('ngc5921.ms')                      # load the MS
mp.uvdist()                                # view data using
                                           # msplot functions
mp.close()                                 # unload the data

###########################################################
## Flag autocorrelations:
##
af.open('ngc5921.ms')                      # Load data into autoflag tool
af.setdata()                               # Use whole dataset (no selection)
af.setselect(autocorr=T)                   # Select autocorrelations
af.run()                                   # Flag the autocorrelations
af.close()                                 # unload the data

###########################################################
## Set the flux density of 3C~286 (field 1):
##
im.open(thems='ngc5921.ms')      # Start the imager tool
                                 # Calculate & set flux density for
im.setjy(fieldid=0)              # Set flux density of 1331+305 (3C 286)
                                 # (explicitly set the spwid if separate
                                 # settings per window are needed).
im.close()                       # unload data
mp.open('ngc5921.ms')            # load data into msplot
mp.setdata(fieldIndex=0)         # set source to 3C 286
mp.uvdist('model')               # plot model amplitudes vs uv distance
mp.close()                       # Close out the tool.

###########################################################
## Derive gain calibration solutions:
##
cb.open(filename='ngc5921.ms')           # Create calibrater tool for
                                         #  spectral line data set
cb.reset()                               # Reset apply/solve state
                                         #  of the calibrater tool
cb.setdata(msselect='FIELD_ID <= 1',     # Select data for g & bp calibrators
            mode='channel',              #  (Fields 1 & 2)
            start=2,                     #  and drop the outer channels
            nchan=55)                    #  that may bias the gain solution.
cb.setapply(type='TOPAC',                # Correct opacity 'on-the-fly'
             t=-1,                       # Calc opacity correction for each time
             opacity=0.0001)             # Specify measured zenith opacity
                                         #  make opacity small - not really needed here
cb.setapply(type='GAINCURVE',            # Correct gain curve for VLA 'on-the-fly'
             t=-1)                       #  apply correction for each time
cb.setsolve(type='G',                    # Arrange to solve for G for
             t=0,                        #  each scan.
             refant=13,                  #  Choose reference antenna 14:
             table='ngc5921.gcal')       #  a well-behaved antenna
                                         #  near the center of the array.
cb.state()                               # Review setsolve settings
cb.solve()                               # Solve for the net complex gains
                                         #  and write solutions to the table
                                         #  ngc5921.gcal located on disk.
cp.open('ngc5921.gcal')                  # Inspect the solutions
cp.plot()
cp.close()

###########################################################
## Derive bandpass calibration solutions:
##
cb.reset()                               # Reset apply/solve state
                                         #  of the calibrater tool
cb.setdata(msselect='FIELD_ID==0')       # Select bandpass calibrator
                                         #  (1331+305 = 3C 286)
cb.setapply(type='TOPAC',                # Correct opacity 'on-the-fly'
             t=-1,                       # Calc opacity correction for each time
             opacity=0.0001)             # Specify measured zenith opacity
                                         #  make opacity small - not really needed here
cb.setapply(type='GAINCURVE',            # Correct gain curve for VLA 'on-the-fly'
             t=-1)                       #  apply correction for each time
cb.setapply(type='G',                    # Arrange to apply G solutions
             t=0.0,                      #  from the table ngc5921.gcal
             table='ngc5921.gcal')
cb.setsolve(type='B',                    # Arrange to solve for a single
             t=86400.0,                  #  bandpass solution for the
             refant=13,                  #  entire observation.
             table='ngc5921.bcal')
cb.state()                               # Review setapply/setsolve settings
cb.solve()                               # Solve for the bandpass solutions
                                         #  and write them to the table
                                         #  ngc5921.bcal located on disk.
cp.open('ngc5921.bcal')                  # Inspect the solutions
cp.plot(plottype='amp')
cp.close()

###########################################################
## Transfer the flux density scale:
##
cb.fluxscale(tablein='ngc5921.gcal',          # Transfer the flux density scale
              tableout='ngc5921.fluxcal',     #  from 1331+30500002 (3C 286)
              reference=['1331+30500002'],    #  to the gain calibrator
              transfer=['1445+09900002'])     #  1445+09900002.  Write
                                              #  solutions to the table
                                              #  ngc5921.fluxcal located on disk.
cp.open('ngc5921.fluxcal')                    # Inspect the solutions
cp.plot(plottype='amp')
cp.close()

###########################################################
## Correct the target source data:
##
cb.reset()                                 # Reset setapply/setsolve
cb.setdata(msselect='FIELD_ID IN [1,2]')   # Select sources in MS fields
                                           #  1  & 2 to which calibration
                                           #  is to be applied
cb.setapply(type='TOPAC',                  # Correct opacity 'on-the-fly'
             t=-1,                         # Calc opacity correction for each time
             opacity=0.0001)               # Specify measured zenith opacity
                                           #  make opacity small - not really needed here
cb.setapply(type='GAINCURVE',              # Correct gain curve for VLA 'on-the-fly'
             t=-1)                         #  apply correction for each time
cb.setapply(type='G',                      # Arrange to apply flux-scaled G
             t=0.0,                        #  solutions (from field 2) from
             table='ngc5921.fluxcal',      #  the ngc5921.fluxcal table.
             select='FIELD_ID==1')
cb.setapply(type='B',                      # Arrange to apply B solutions
             t=0.0,                        #  from the ngc5921.bcal table.
             table='ngc5921.bcal',
             select='')
cb.state()                                 # Review setapply settings
cb.correct()                               # Apply solutions and write the
                                           #  CORRECTED_DATA column in the MS.
cb.close()

###########################################################
## Subtract the continuum in the uv-plane
##  (assumes that no self-calibration will be done):
##
ms.open(filename="ngc5921.ms",           # Create the ms tool
           readonly=F)                   # Make sure it is writable!
ms.continuumsub(fldid=2,                 # Choose NGC 5921 (field 3),
           spwid=0,                      #  spectral window 1,
           chans=range(4,7)+range(50,60),#  line free channels 4-6 & 50-59,
           solint=0.0,                   #  use scan-averaging,
           mode="subtract")              #  & subtract cont. from line data.
                                         # Line-only data will be written into
                                         #  CORRECTED_DATA column.
ms.close()                               # Unload the data
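
###########################################################
## Optional check of the subtraction (an editor's sketch, not part
##  of the original test script, using only msplot calls demonstrated
##  elsewhere in this document):
##
mp.open('ngc5921.ms')                    # Load data into msplot
mp.setdata(fieldIndex=2)                 # Select NGC 5921 (field 3)
mp.vischannel(column='corrected')        # Plot the line-only spectrum
mp.close()                               # Unload the data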

###########################################################
## Split out calibrated target source data:
##
ms.open(filename="ngc5921.ms",           # Create the ms tool
           readonly=F)                   # Make sure it is writable!
ms.split(outputms="ngc5921.split.ms",    # Select output file name,
           fieldids=2,                   #  write out only NGC 5921 data,
           spwids=0,                     #  spectral window 1,
           nchan=46,                     #  write out 46 channels,
           start=5,                      #  starting with channel 6,
           step=1,                       #  no channel averaging,
           whichcol="CORRECTED_DATA")    #  write out CORRECTED_DATA column.

###########################################################
## Get quick summary with default catalog tool:
##
ms.open('ngc5921.split.ms')
ms.summary()                             # Get a new summary so you know
                                         #  the new field_id & spwid
ms.close()                               # Unload the data from the ms tool

###########################################################
## Image the target source data:
##
im.open(filename='ngc5921.split.ms')
                                     # Create imager tool if not already done.
                                     #  Use split out calibrated data.
im.setdata(mode='channel',         # Select channel data for field 1 (source)
             nchan=46,
             start=0,
             step=1,
             fieldid=0)
im.advise(takeadvice=F,            # Determine image and cell size using
            fieldofview=quantity(60.,'arcmin')) #  advise function.  Do not take advice,
                                     #  note parameters in logger
                                     #  window and use for setimage.
                                     #  This step is NOT required for
                                     #  imaging, it is just a useful tool.
im.setimage(nx=256,                # Set imaging parameters
              ny=256,
              cellx=quantity(15.,'arcsec'),
              celly=quantity(15.,'arcsec'),
              stokes='I',
              mode='channel',
              nchan=46,
              start=0,
              step=1,
              fieldid=0)
im.weight(type='briggs',           # Set up Robust weighting
            rmode='norm',
            robust=0.5)
im.clean(algorithm='hogbom',       # Image and deconvolve inner quarter
           niter=6000,               #  with Hogbom CLEAN down to a threshold
           gain=0.1,                 #  of 2 mJy
           threshold='2mJy',
           model=['ngc5921.mod'],
           image=['ngc5921.im'],     # Write the cleaned image to the file
           residual=['ngc5921.resid'], #  ngc5921.im on disk.
           mask=[''],                # Clean the inner quarter, thus: do not
           interactive=F)            #  set a mask or create a deconvolution
                                     #  region.
# look at sidelobe response, if desired:
im.makeimage(type='psf',           # Form the PSF image if desired
               image='ngc5921.psf')
im.fitpsf(psf='ngc5921.psf')      # Measure the beam size
im.close()
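
###########################################################
## Optionally examine and export the CLEAN image with the ia
##  (image analysis) tool. Editor's sketch: assumes ia is available
##  in the session and that ia.tofits takes the output file name.
##
ia.open('ngc5921.im')                # Load the CLEAN image
ia.statistics()                      # Report image statistics (peak, rms, ...)
ia.tofits('ngc5921.fits')            # Export a FITS copy (assumed signature)
ia.close()                           # Unload the image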

NGC 5921 data summary

Summary created with ms.summary(verbose=T):

This is written to the casalog (which is just a 'tail -f' on the casapy.log file).

           MeasurementSet Name:  ngc5921.ms      MS Version 2

   Observer: TEST     Project:
Observation: VLA
Data records: 22653       Total integration time = 5280 seconds
   Observed from   13-Apr-1995/09:19:00   to   13-Apr-1995/10:47:00

   ObservationID = 1         ArrayID = 1
  Date        Timerange                Scan  Field          DataDescIds
  13-Apr-1995/09:19:00.0 - 09:24:30.0     1  1331+30500002  [1]
              09:27:30.0 - 09:29:30.0     2  1445+09900002  [1]
              09:33:00.0 - 09:48:00.0     3  N5921          [1]
              09:50:30.0 - 09:51:00.0     4  1445+09900002  [1]
              10:22:00.0 - 10:23:00.0     5  1445+09900002  [1]
              10:26:00.0 - 10:43:00.0     6  N5921          [1]
              10:45:30.0 - 10:47:00.0     7  1445+09900002  [1]
Fields: 3
  ID   Name          Right Ascension  Declination   Epoch
  1    1331+30500002 13:31:08.29      +30.30.32.96  J2000   ** Flux/bandpass cal, 3C 286
                                                               (J2000 name & coords.)
  2    1445+09900002 14:45:16.47      +09.58.36.07  J2000   ** Gain calibrator
  3    N5921         15:22:00.00      +05.04.00.00  J2000   ** Source

Data descriptions: 1 (1 spectral windows and 1 polarization setups)
  ID    Ref.Freq    #Chans  Resolution  TotalBW     Correlations
  1     1413.45MHz  63      24.4141kHz  1550.2 kHz  RR  LL
Feeds: 28: printing first row only
  Antenna   Spectral Window     # Receptors    Polarizations
  1         -1                  2              [         R, L]
Antennas: 27:
  ID   Name  Station   Diam.    Long.         Lat.
  1    1     VLA:N7    25.0 m   -107.37.07.2  +33.54.12.9
  2    2     VLA:W1    25.0 m   -107.37.05.9  +33.54.00.5
  3    3     VLA:W2    25.0 m   -107.37.07.4  +33.54.00.9
  4    4     VLA:E1    25.0 m   -107.37.05.7  +33.53.59.2
  5    5     VLA:E3    25.0 m   -107.37.02.8  +33.54.00.5
  6    6     VLA:E9    25.0 m   -107.36.45.1  +33.53.53.6
  7    7     VLA:E6    25.0 m   -107.36.55.6  +33.53.57.7
  8    8     VLA:W8    25.0 m   -107.37.21.6  +33.53.53.0
  9    9     VLA:N5    25.0 m   -107.37.06.7  +33.54.08.0
  10   10    VLA:W3    25.0 m   -107.37.08.9  +33.54.00.1
  11   11    VLA:N4    25.0 m   -107.37.06.5  +33.54.06.1
  12   12    VLA:W5    25.0 m   -107.37.13.0  +33.53.57.8
  13   13    VLA:N3    25.0 m   -107.37.06.3  +33.54.04.8
  14   14    VLA:N1    25.0 m   -107.37.06.0  +33.54.01.8
  15   15    VLA:N2    25.0 m   -107.37.06.2  +33.54.03.5
  16   16    VLA:E7    25.0 m   -107.36.52.4  +33.53.56.5
  17   17    VLA:E8    25.0 m   -107.36.48.9  +33.53.55.1
  18   18    VLA:W4    25.0 m   -107.37.10.8  +33.53.59.1
  19   19    VLA:E5    25.0 m   -107.36.58.4  +33.53.58.8
  20   20    VLA:W9    25.0 m   -107.37.25.1  +33.53.51.0
  21   21    VLA:W6    25.0 m   -107.37.15.6  +33.53.56.4
  22   22    VLA:E4    25.0 m   -107.37.00.8  +33.53.59.7
  24   24    VLA:E2    25.0 m   -107.37.04.4  +33.54.01.1
  25   25    VLA:N6    25.0 m   -107.37.06.9  +33.54.10.3
  26   26    VLA:N9    25.0 m   -107.37.07.8  +33.54.19.0
  27   27    VLA:N8    25.0 m   -107.37.07.5  +33.54.15.8
  28   28    VLA:W7    25.0 m   -107.37.18.4  +33.53.54.8


Tables:
   MAIN                   22653 rows
   ANTENNA                   28 rows
   DATA_DESCRIPTION           1 row
   DOPPLER             <absent>
   FEED                      28 rows
   FIELD                      3 rows
   FLAG_CMD             <empty>
   FREQ_OFFSET         <absent>
   HISTORY                  270 rows
   OBSERVATION                1 row
   POINTING                 168 rows
   POLARIZATION               1 row
   PROCESSOR            <empty>
   SOURCE                     3 rows
   SPECTRAL_WINDOW            1 row
   STATE                <empty>
   SYSCAL              <absent>
   WEATHER             <absent>

0319+415 (3C84) -- VLA -- single baseline (3,17) from N1333 bandpass observation

# Baseline-based calibration of N1333 calibrater
# VLA baseline 3-17 02-May-2003
#           MeasurementSet Name:  B0319_0317.ms      MS Version 2
#
#   Observer: AW602     Project:
#Observation: VLA
#Data records: 62       Total integration time = 609.999 seconds
#   Observed from   02-May-2003/20:09:40   to   02-May-2003/20:19:50
#
#   ObservationID = 1         ArrayID = 1
#  Date        Timerange                Scan  FldId FieldName      DataDescIds
#  02-May-2003/20:09:40.0 - 20:19:50.0    49      1 0319+415_1     [1]
#Fields: 1
#  ID   Name          Right Ascension  Declination   Epoch
#  1    0319+415_1    03:19:48.16      +41.30.42.10  J2000
#Data descriptions: 1 (1 spectral windows and 1 polarization setups)
#  ID  #Chans Frame Ch1(MHz)    Resoln(kHz) TotBW(kHz)  Ref(MHz)    Corrs
#  1       63 LSRK  43416.2392  97.65625    6250        43419.2666  RR  LL
#Feeds: 29: printing first row only
#  Antenna   Spectral Window     # Receptors    Polarizations
#  1         -1                  2              [         R, L]
#Antennas: 2:
#  ID   Name  Station   Diam.    Long.         Lat.
#  3    3     VLA:E2    25.0 m   -107.37.04.4  +33.54.01.1
#  17   17    VLA:E3    25.0 m   -107.37.02.8  +33.54.00.5

# Examine observed data

os.system('rm -r B0319.K B0319.M B0319.MF casapy.log B0319*.png')

mp.open('B0319_0317.ms')        # Load data into mp (msplot tool)
mp.plotoptions(plotsymbol='bo') # Make antennas blue circles
mp.array()                      # look at antenna locations - currently missing labels!
pl.savefig('B0319_array.png')
mp.plotoptions(plotsymbol='r,') # Make visibilities points
mp.uvcoverage()                 # look at uv coverage on source
pl.savefig('B0319_uvcoverage.png')
mp.uvdist()                     # look at amplitude versus uv distance (default)
pl.savefig('B0319_uvdista.png')
mp.uvdist(what='phase')         # look at phase versus uv distance
pl.savefig('B0319_uvdistp.png')
mp.vistime()                    # look at amplitude versus time (default)
pl.savefig('B0319_vistimea.png')
mp.vistime(what='phase')        # look at phase versus time
pl.savefig('B0319_vistimep.png')
mp.vischannel()                 # look at amplitude versus channel (default)
# Example data flagging
#mp.markflags()                 # do this interactively
                                # draw as many boxes as necessary
#mp.flagdata()                   # view flags; not saved until diskwrite=1
mp.markflags(region=[58,65,0.0,2.]) # flag some data with specific region
mp.markflags(region=[0,3,0.0,2.])
mp.flagdata(true)               # view data with region flagged
#
pl.savefig('B0319_vischannela.png')
mp.vischannel(what='phase')     # look at phase versus channel
pl.savefig('B0319_vischannelp.png')

# Example spectral averaging
# mp.setspectral?               for information
mp.setspectral(nchan=3,         # Set up 3 channels with
        start=0,width=20)       # a width of 20 channels and starting at channel 1
mp.vistime()                    # view averaged data versus time
mp.uvdist()                     # view averaged data versus uv distance
mp.close()

# Calibrate data
cb.open('B0319_0317.ms')        # Load data into cb (calibrater tool)
cb.initcalset()                 # initialize the calibration for the dataset
cb.reset()                      # and reset (safety)
cb.setsolve(type='M',           # Solve for baseline-based gain solutions
        t=3,                    # use small time to get solution per integration
                                # this can be varied to explore best interval
        table='B0319.M')        # write solutions to table=B0319.M
cb.state()                      # view state of calibrater
cb.solve()                      # solve for M

cb.reset()                      # reset state of calibrater for next solving
cb.setapply(type='M',           # apply baseline-based gain
        table='B0319.M')
cb.setsolve(type='MF',          # solve for baseline-based bandpass
        t=30000,                # use large time to get a single solution for each channel
        table='B0319.MF')       # write solutions to table=B0319.MF
cb.state()                      # view state to make sure all is set properly
cb.solve()                      # solve for MF

cb.reset()                      # reset state of calibrater for application of calibration
cb.setapply(type='M',           # apply M
        table='B0319.M')
cb.setapply(type='MF',          # apply MF
        table='B0319.MF')
cb.state()                      # review what we will be doing
cb.correct()                    # apply M, MF to data
cb.close()                      # unload data from calibrater tool

# Compare observed 'data' and 'corrected' data
mp.open('B0319_0317.ms')        # Load data into mp
mp.plotoptions(subplot=121)     # First plot (left of two panels)
mp.uvdist()                     # Plot observed 'data' versus uv distance (default)
mp.plotoptions(subplot=121,overplot=true,plotsymbol='b+')
                                # Overplot using blue '+' symbol
mp.uvdist('corrected')          #    corrected data
mp.plotoptions(subplot=122)     # Second plot (right of two panels)
mp.setspectral(1,5,50)          # Take spectral average of channels 6-56
mp.uvdist('data')               # Plot observed 'data'
mp.plotoptions(subplot=122,overplot=true,plotsymbol='b+')
                                # Overplot using blue '+' symbol
mp.setdata()                    # Clear data selection
                                # NOTE: This is needed when switching between
                                #    data columns; this also clears the spectral
                                #    selection so it should be reset
mp.setspectral(1,5,50)          # Take spectral average of channels 6-56
mp.uvdist('corrected')          # Plot 'corrected' data
pl.ylim(0.,1.5)                 # Extend y-axis plot limits to make room for legend
pl.legend(('obs-RR','obs-LL','corr-RR','corr-LL'),numpoints=1,shadow=True)
                                # Plot legend (using only one point per type)
                                # Put a shadow around the box
pl.text(44.60,1.4,'Spectral average: 1 channel=channels 6-56',fontsize=8)
                                # Put a text note at the specified data x,y coordinates

pl.savefig('B0319_obsvscorr.png')
mp.close()                      # unload the data
pl.clf()                        # clear the plotter

# Compare M,MF with K
mp.open('B0319_0317.ms')        # Load data into mp (msplot tool)
mp.setspectral(1,5,50)          # Average the data into a single channel
mp.plotoptions(plotsymbol='ro') # use red circles for observed data
mp.vistime('data','phase')      # plot the phase
mp.plotoptions(overplot=true,plotsymbol='bo') # overplot the corrected data (blue circles)
mp.setdata()                    # clear the data selection (to enable plotting the corrected column)
mp.setspectral(1,5,50)          # Average the data into a single channel (as before)
mp.vistime('corrected','phase') # plot the corrected phase
mp.close()                      # unload data from mp tool
# derive K solution
cb.open('B0319_0317.ms')        # load data into the cb (calibrater tool)
cb.setdata()
cb.setsolve(type='K',table='B0319.K',t=0) # solve for a baseline-based fringe fit calibration
cb.solve()                      # obtain K calibration solutions
cb.reset()                      # reset state of cb tool
cb.setapply(type='K',table='B0319.K') # apply the K solutions
cb.correct()                    # correct the data with the K solutions
cb.close()                      # unload data from the cb tool
# now overplot K results
mp.open('B0319_0317.ms')        # load data into mp (msplot tool)
mp.setspectral(1,5,50)          # average the data into a single channel
mp.plotoptions(overplot=true,plotsymbol='go') # overplot with green circles
mp.vistime('corrected','phase') # plot the corrected phase from the K solution
pl.savefig('B0319_calcompare.png')
mp.close()                      # unload data from the tool

#M,MF is better in this case - re-correct the data
cb.open('B0319_0317.ms')        # Load data into cb (calibrater tool)
cb.initcalset()                 # initialize the calibration for the dataset
cb.reset()                      # and reset (safety)
cb.setsolve(type='M',           # Solve for baseline-based gain solutions
        t=3,                    # use small time to get solution per integration
                                # this can be varied to explore best interval
        table='B0319.M')        # write solutions to table=B0319.M
cb.state()                      # view state of calibrater
cb.solve()                      # solve for M

cb.reset()                      # reset state of calibrater for next solving
cb.setapply(type='M',           # apply baseline-based gain
        table='B0319.M')
cb.setsolve(type='MF',          # solve for baseline-based bandpass
        t=30000,                # use large time to get a single solution for each channel
        table='B0319.MF')       # write solutions to table=B0319.MF
cb.state()                      # view state to make sure all is set properly
cb.solve()                      # solve for MF
cb.reset()                      # reset state of calibrater for application of calibration

cb.setapply(type='M',           # apply M
        table='B0319.M')
cb.setapply(type='MF',          # apply MF
        table='B0319.MF')
cb.state()                      # review what we will be doing
cb.correct()                    # apply M, MF to data
cb.close()                      # unload data from calibrater tool


# Examine calibration solutions
cp.open('B0319.M')              # Load B0319.M into cp (calplot tool)
cp.plot()                       # Examine baseline-based gain solutions (phase)
pl.savefig('B0319.Mp.png')
cp.setdata()                    # Reset the plot data selection to defaults
cp.plot('amp')                  # Examine amplitude versus time
pl.savefig('B0319.Ma.png')
cp.close()                      # Unload data

cp.open('B0319.MF')             # Load B0319.MF into cp (calplot tool)
cp.plot()                       # Examine baseline-based bandpass (phase)
pl.savefig('B0319.MFp.png')
cp.setdata()                    # Reset the plot data selection to defaults
cp.plot('amp')                  # Examine amplitude versus time
pl.savefig('B0319.MFa.png')
cp.close()                      # Unload data

cp.open('B0319.K')              # Load B0319.K into cp
cp.plot('delay')                # plot delay
pl.savefig('B0319.K.png')
cp.plot('delayrate')            # plot delay rate
pl.savefig('B0319.Krate.png')
cp.close()

# Examine corrected data
mp.open('B0319_0317.ms')        # Load data into mp (msplot tool)
mp.uvdist(column='corrected')   # Examine corrected data amplitude
mp.uvdist(column='corrected',   # Examine corrected data phase
        what='phase')
mp.vistime(column='corrected')  # Examine corrected data amplitude
mp.vistime(column='corrected',  # Examine corrected data phase
        what='phase')
mp.vischannel(column='corrected') # Examine corrected data amplitude
mp.vischannel(column='corrected', # Examine corrected data phase
        what='phase')

# Also view the average channel plots for vistime, uvdist if desired

mp.close()                      # unload data from mp (msplot tool)

# uv model fit the data

cb.open('B0319_0317.ms')        # load the data into cb (calibrater tool)
cb.modelfit(niter=5,            # Fit 5 iterations (niter=0 sets initial values only)
            compshape='P',      # P=Point source, G=Gaussian, D=Disk
            par=[0.5,.1,.1],    # Source parameters; for a point source:
            file='test.cl')     # [flux, long offset, lat offset]
                                # specify the file to write the component list
cb.close()                      # unload data from calibrater

# Now use component list to generate model data
im.open('B0319_0317.ms')        # load data into imager tool
im.ft(complist='test.cl')       # Fourier transform the component list -
                                # this writes it into the MODEL_DATA column
                                # of the MS
im.close()                      # unload data from imager

mp.open('B0319_0317.ms')        # load data into msplot tool
mp.setspectral(1,5,50)
mp.uvdist(column='corrected')   # plot corrected data
mp.plotoptions(overplot=true,   # specify overplot
 plotsymbol='ro')               # specify red circles for model data
mp.uvdist(column='model')       # plot model data
pl.savefig('B0319_modelfit.png')
mp.close()

Appendices

AIPS -- CASA dictionary

This appendix provides a list of common AIPS tasks and their equivalent CASA tools or tool functions. The two packages differ in architecture as well as in their calibration and imaging models, so there is often no direct correspondence. Nevertheless, this index gives a scientific user of CASA who is familiar with AIPS a simple translation table for mapping existing data reduction knowledge to the new package.

The index includes common AIPS tasks and verbs; adverbs are not included. Capabilities which exceed those available in AIPS are not listed here.
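
As a concrete example of reading the table, the common AIPS sequence CALIB, GETJY, SPLIT corresponds roughly to the calibrater and ms calls used in the NGC 5921 script above. The sketch below is illustrative only, not a prescribed recipe; the MS, table, and field names are placeholders:

cb.open('mydata.ms')                       # CALIB: solve for complex gains
cb.setsolve(type='G', t=0, refant=1,
            table='mydata.gcal')
cb.solve()
cb.fluxscale(tablein='mydata.gcal',        # GETJY: bootstrap the flux scale
             tableout='mydata.fluxcal',
             reference=['fluxcal'],        #  (placeholder field names)
             transfer=['gaincal'])
cb.setapply(type='G', table='mydata.fluxcal')
cb.correct()                               # SPLIT, part 1: apply calibration
cb.close()
ms.open('mydata.ms', readonly=F)
ms.split(outputms='mydata.split.ms',       # SPLIT, part 2: write out the
         whichcol='CORRECTED_DATA')        #  calibrated data
ms.close()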

%BEGINTABLE{label="tbl:AIPSCASA" caption="AIPS - CASA dictionary"}%
| *AIPS Task/Verb* | *Description* | *CASA tool/function* |
| UVFLG | Command-based flagging | af |
| SPLIT | Apply calibration | cb.correct |
| GETJY | Set flux density scale | cb.fluxscale |
| UVFIT | uv-plane component fitter | cb.modelfit |
| FRING | Fringe-fitter | cb.setapply, K |
| BPASS | Bandpass calibration | cb.setsolve, solve |
| CALIB | (A,$\phi$) self-calibration | cb.setsolve, solve |
| PCAL | Polarization calibration | cb.setsolve, solve |
| BLING | Baseline-based fringe-fitter | cb.setsolve, solve (K) |
| BLCAL | Baseline-based (A,$\phi$) calibration | cb.setsolve, solve (M,MF) |
| CPASS | Polynomial bandpass calibration | cb.setsolvebandpoly, solve |
| CCEDT | Edit CC tables | componentmodels |
| SNPLT | Plot calibration solutions | cp.plot |
| COMB | Image combination | ia.calc, ia.imagepol |
| IMLIN | Image-plane continuum subtraction | ia.continuumsub |
| IMFIT | Image-plane component fitter | ia.imagefitter |
| JMFIT | Image-plane component fitter | ia.imagefitter |
| IMLOD | FITS image filler | ia.imagefromfits |
| IMSTAT | Image statistics | ia.statistics, qtviewer |
| SUBIM | Extract sub-image | ia.subimage |
| MCAT | List image catalog | ia.summary |
| IMAGR | Synthesis imaging | im |
| MX | Synthesis imaging | im |
| APCLN | CLEAN deconvolution | im.clean |
| UVSUB | Source model computation | im.ft |
| VMEM | MEM deconvolution | im.mem |
| SETJY | Set source properties | im.setjy |
| PBCOR | Primary beam correction | im.setvp, vpmanager |
| SPFLG | Interactive line data editing | mp |
| TVFLG | Interactive TB data editing | mp |
| VBPLT | Baseline-based uv-data plotting | mp |
| VPLOT | Baseline-based uv-data plotting | mp |
| DBCON | uv-data concatenation | ms.concatenate |
| VBGLU | Concatenate VLBI data | ms.concatenate |
| UVLSF | uv continuum subtraction | ms.continuumsub |
| UVLOD | UV-FITS filler | ms.fitstoms |
| SPLIT | Split out calibrated data | ms.split |
| DTSUM | uv-data summary | ms.summary |
| IMH | File header summary | ms.summary |
| FITTP | UVFITS writer | ms.tofits |
| TVBOX | Set regions in an image | NA |
| UCAT | List uv-data catalog | NA |
| FITLD | VLBA filler | not available |
| IBLED | Interactive VLBI editor | not available |
| MK3IN | MK3 VLBI filler | not available |
| TRANS | Transpose an image | not required in CASA |
| ISPEC | Plot image slice | qtviewer |
| TVFID | Adjust TV display | qtviewer |
| TVLOD | Load image to TV display | qtviewer |
| DTSIM | Simulator | simulator |
| UVCON | Simulator | sm |
| UVMOD | Simulator | sm |
| LISTR | List uv and calibration data | tablebrowser |
| TBIN | Read table from ASCII format | tablefromascii |
| MOVE | Move uv-data files | tb |
| PRTAB | List table data | tb |
| PRTAN | List AN table data | tb |
| PRTUV | List uv-data | tb |
| UVPRT | List uv-data | tb |
| UVSRT | Sort uv-data | tb |
| ZAP | Delete a file | tb |
| TAPLT | General table plotting | tb, tp |
| MSORT | Sort uv-data | tb.command |
| RENAME | Rename file name | tb.tablerename |
| TBOUT | Write table to ASCII format | tb.toascii |
| FILLM | VLA filler | vlafiller |

%ENDTABLE%

MIRIAD -- CASA dictionary

This appendix provides a list of common MIRIAD tasks and their equivalent CASA tools or tool functions. The two packages differ in architecture as well as in their calibration and imaging models, so there is often no direct correspondence. Nevertheless, this index gives a scientific user of CASA who is familiar with MIRIAD a simple translation table for mapping existing data reduction knowledge to the new package.
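
For example, MIRIAD's mfcal (bandpass and gain calibration in one task) maps onto two successive calibrater solve passes, as in the scripts above. This is a minimal illustrative sketch; the MS and table names are placeholders:

cb.open('mydata.ms')
cb.setsolve(type='G', t=0, refant=1,       # gain part of mfcal
            table='mydata.gcal')
cb.solve()
cb.reset()
cb.setapply(type='G', table='mydata.gcal') # bandpass part of mfcal,
cb.setsolve(type='B', t=86400.0, refant=1, #  applying the gains on the fly
            table='mydata.bcal')
cb.solve()
cb.close()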

%BEGINTABLE{label="tbl:MIRIADCASA" caption="MIRIAD - CASA dictionary"}%
| *MIRIAD Task* | *Description* | *CASA tool/function* |
| atlod | Load ATCA data | atcafiller |
| blflag | Interactive baseline-based editor/flagger | mp raster displays |
| cgcurs | Interactive image analysis | qtviewer |
| cgdisp | Image display, overlays | qtviewer |
| clean | CLEAN an image | im |
| fits | FITS image filler | ia.imagefromfits |
| gpboot | Set flux density scale | cb.fluxscale |
| gpcal | Polarization leakage and gain calibration | cb with 'G' and 'D' |
| gpcopy | Copy calibration tables | not needed |
| gpplt | Plot calibration solutions | cp.plot |
| imcomb | Image combination | im |
| imfit | Image-plane component fitter | ia.imagefitter |
| impol | Create polarization images | ia.imagepol |
| imstat | Image statistics | ia.statistics |
| imsub | Extract sub-image | ia.subimage |
| invert | Synthesis imaging | im |
| linmos | Linear mosaic combination of images | im |
| maths | Calculations involving images | ia.imagecalc, ia.calc |
| mfcal | Bandpass and gain calibration | cb with 'G' and 'B' |
| prthd | Print header of image or uv-data | ia.summary, ms.summary |
| restor | Restore a CLEAN component model | im |
| selfcal | Self-calibration of visibility data | im, cb |
| tvclip | Automated flagging based on clip levels | af |
| tvdisp | Load image to TV display | qtviewer |
| tvflag | Interactive TB data editing | mp |
| uvaver | Average/select data, apply calibration | ms.split |
| uvfit | uv-plane component fitter | cb |
| uvflag | Command-based flagging | af |
| uvgen | Simulator | sm |
| uvlist | List uv-data | tb |
| uvmodel | Source model computation | im.ft |
| uvplt | uv-data plotting | ms |
| uvsplit | Split uv file by source and spectral window | ms.split |
%ENDTABLE%

CLIC -- CASA dictionary

This appendix provides a list of common CLIC tasks and their equivalent CASA tools or tool functions. The two packages are very similar, since the CASA software for reducing IRAM data is based on the CLIC reduction procedures.
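
For example, CLIC's 'print' step corresponds directly to ms.summary; a minimal sketch with a placeholder MS name:

ms.open('mydata.ms')
ms.summary(verbose=T)                      # CLIC 'print': text summary of the data
ms.close()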

%BEGINTABLE{label="tbl:CLICCASA" caption="CLIC - CASA dictionary"}%
| *CLIC Function* | *Description* | *CASA tool/function* |
| load | Load data | almatifiller |
| print | Print text summary of data | ms.summary |
| flag | Flag data | mp, af, qtviewer |
| phcor | Atmospheric phase correction | almatifiller |
| rf | Radio frequency bandpass | cb.setsolvebandpoly, cb.solve |
| phase | Phase calibration | cb.setsolvegainspline, cb.solve |
| flux | Absolute flux calibration | cb.fluxscale |
| ampl | Amplitude calibration | cb.setsolvegainspline, cb.solve |
| table | Split out calibrated data (uv table) | ms.split |
%ENDTABLE%

CASA ipythonrc file

# -*- Mode: Shell-Script -*-  Not really, but shows comments correctly

#***************************************************************************
#
# Configuration file for IPython -- ipythonrc format
#
# The format of this file is simply one of 'key value' lines.
# Lines containing only whitespace at the beginning and then a # are ignored
# as comments. But comments can NOT be put on lines with data.

# The meaning and use of each key are explained below.

#---------------------------------------------------------------------------
# Section: included files

# Put one or more *config* files (with the syntax of this file) you want to
# include. For keys with a unique value the outermost file has precedence. For
# keys with multiple values, they all get assembled into a list which then
# gets loaded by IPython.

# In this file, all lists of things should simply be space-separated.

# This allows you to build hierarchies of files which recursively load
# lower-level services. If this is your main ~/.ipython/ipythonrc file, you
# should only keep here basic things you always want available. Then you can
# include it in every other special-purpose config file you create.
include

#---------------------------------------------------------------------------
# Section: startup setup

# These are mostly things which parallel a command line option of the same
# name.

# Keys in this section should only appear once. If any key from this section
# is encountered more than once, the last value remains, all earlier ones get
# discarded.

# Automatic calling of callable objects.  If set to true, callable objects are
# automatically called when invoked at the command line, even if you don't
# type parentheses.  IPython adds the parentheses for you.  For example:

#In [1]: str 45
#------> str(45)
#Out[1]: '45'

# IPython reprints your line with '---->' indicating that it added
# parentheses.  While this option is very convenient for interactive use, it
# may occasionally cause problems with objects which have side-effects if
# called unexpectedly.  Set it to 0 if you want to disable it.

# Note that even with autocall off, you can still use '/' at the start of a
# line to treat the first argument on the command line as a function and add
# parentheses to it:

#In [8]: /str 43
#------> str(43)
#Out[8]: '43'

autocall 1

# Auto-indent. IPython can recognize lines ending in ':' and indent the next
# line, while also un-indenting automatically after 'raise' or 'return'.

# This feature uses the readline library, so it will honor your ~/.inputrc
# configuration (or whatever file your INPUTRC variable points to).  Adding
# the following lines to your .inputrc file can make indent/unindenting more
# convenient (M-i indents, M-u unindents):

#  $if Python
#  "\M-i": "    "
#  "\M-u": "\d\d\d\d"
#  $endif

# The feature is potentially a bit dangerous, because it can cause problems
# with pasting of indented code (the pasted code gets re-indented on each
# line).  But it's a huge time-saver when working interactively.  The magic
# function @autoindent allows you to toggle it on/off at runtime.

autoindent 1

# Auto-magic. This gives you access to all the magic functions without having
# to prepend them with an @ sign. If you define a variable with the same name
# as a magic function (say who=1), you will need to access the magic function
# with @ (@who in this example). However, if later you delete your variable
# (del who), you'll recover the automagic calling form.

# Considering that many magic functions provide a lot of shell-like
# functionality, automagic gives you something close to a full Python+system
# shell environment (and you can extend it further if you want).

automagic 1

# Size of the output cache. After this many entries are stored, the cache will
# get flushed. Depending on the size of your intermediate calculations, you
# may have memory problems if you make it too big, since keeping things in the
# cache prevents Python from reclaiming the memory for old results. Experiment
# with a value that works well for you.

# If you choose cache_size 0 IPython will revert to python's regular >>>
# unnumbered prompt. You will still have _, __ and ___ for your last three
# results, but that will be it.  No dynamic _1, _2, etc. will be created. If
# you are running on a slow machine or with very limited memory, this may
# help.

cache_size 1000

# Classic mode: with 'classic 1' you lose many of IPython's niceties,
# but that's your choice! Classic 1 -> same as IPython -classic.
# Note that this is _not_ the normal python interpreter, it's simply
# IPython emulating most of the classic interpreter's behavior.
classic 0

# colors - Coloring option for prompts and traceback printouts.

# Currently available schemes: NoColor, Linux, LightBG.

# This option allows coloring the prompts and traceback printouts. This
# requires a terminal which can properly handle color escape sequences. If you
# are having problems with this, use the NoColor scheme (uses no color escapes
# at all).

# The Linux option works well in linux console type environments: dark
# background with light fonts.

# LightBG is similar to Linux but swaps dark/light colors to be more readable
# in light background terminals.

# keep uncommented only the one you want:
colors Linux
#colors LightBG
#colors NoColor

########################
# Note to Windows users
#
# Color and readline support is available to Windows users via Gary Bishop's
# readline library.  You can find Gary's tools at
# http://sourceforge.net/projects/uncpythontools.
# Note that his readline module requires in turn the ctypes library, available
# at http://starship.python.net/crew/theller/ctypes.
########################

# color_info: IPython can display information about objects via a set of
# functions, and optionally can use colors for this, syntax highlighting
# source code and various other elements. This information is passed through a
# pager (it defaults to 'less' if $PAGER is not set).

# If your pager has problems, try setting it to properly handle escapes
# (see the less manpage for detail), or disable this option.  The magic
# function @color_info allows you to toggle this interactively for testing.

color_info 1

# confirm_exit: set to 1 if you want IPython to confirm when you try to exit
# with an EOF (Control-d in Unix, Control-Z/Enter in Windows). Note that using
# the magic functions @Exit or @Quit you can force a direct exit, bypassing
# any confirmation.

confirm_exit 1

# Use deep_reload() as a substitute for reload() by default. deep_reload() is
# still available as dreload() and appears as a builtin.

deep_reload 0

# Which editor to use with the @edit command. If you leave this at 0, IPython
# will honor your EDITOR environment variable. Since this editor is invoked on
# the fly by ipython and is meant for editing small code snippets, you may
# want to use a small, lightweight editor here.

# For Emacs users, setting up your Emacs server properly as described in the
# manual is a good idea. An alternative is to use jed, a very light editor
# with much of the feel of Emacs (though not as powerful for heavy-duty work).

editor 0

# log 1 -> same as ipython -log. This automatically logs to ./ipython.log
log 1

# Same as ipython -Logfile YourLogfileName.
# Don't use with log 1 (use one or the other)
logfile ''

# banner 0 -> same as ipython -nobanner
banner 1

# messages 0 -> same as ipython -nomessages
messages 1

# Automatically call the pdb debugger after every uncaught exception. If you
# are used to debugging using pdb, this puts you automatically inside of it
# after any call (either in IPython or in code called by it) which triggers an
# exception which goes uncaught.
pdb 0

# Enable the pprint module for printing. pprint tends to give a more readable
# display (than print) for complex nested data structures.
pprint 1

# Prompt strings

# Most bash-like escapes can be used to customize IPython's prompts, as well as
# a few additional ones which are IPython-specific.  All valid prompt escapes
# are described in detail in the Customization section of the IPython HTML/PDF
# manual.

# Use \# to represent the current prompt number, and quote them to protect
# spaces.
prompt_in1 'In [\#]: '

# \D is replaced by as many dots as there are digits in the
# current value of \#.
prompt_in2 '   .\D.: '

prompt_out 'Out[\#]: '

# Select whether to left-pad the output prompts to match the length of the
# input ones.  This allows you for example to use a simple '>' as an output
# prompt, and yet have the output line up with the input.  If set to false,
# the output prompts will be unpadded (flush left).
prompts_pad_left 1

# quick 1 -> same as ipython -quick
quick 0

# Use the readline library (1) or not (0). Most users will want this on, but
# if you experience strange problems with line management (mainly when using
# IPython inside Emacs buffers) you may try disabling it. Not having it on
# prevents you from getting command history with the arrow keys, searching and
# name completion using TAB.

readline 1

# Screen Length: number of lines of your screen. This is used to control
# printing of very long strings. Strings longer than this number of lines will
# be paged with the less command instead of directly printed.

# The default value for this is 0, which means IPython will auto-detect your
# screen size every time it needs to print. If for some reason this isn't
# working well (it needs curses support), specify it yourself. Otherwise don't
# change the default.

screen_length 0

# Prompt separators for input and output.
# Use \n for newline explicitly, without quotes.
# Use 0 (like at the cmd line) to turn off a given separator.

# The structure of prompt printing is:
# (SeparateIn)Input....
# (SeparateOut)Output...
# (SeparateOut2),   # that is, no newline is printed after Out2
# By choosing these you can organize your output any way you want.

separate_in \n
separate_out 0
separate_out2 0

# 'nosep 1' is a shorthand for '-SeparateIn 0 -SeparateOut 0 -SeparateOut2 0'.
# Simply removes all input/output separators, overriding the choices above.
nosep 0

# xmode - Exception reporting mode.

# Valid modes: Plain, Context and Verbose.

# Plain: similar to python's normal traceback printing.

# Context: prints 5 lines of context source code around each line in the
# traceback.

# Verbose: similar to Context, but additionally prints the variables currently
# visible where the exception happened (shortening their strings if too
# long). This can potentially be very slow, if you happen to have a huge data
# structure whose string representation is complex to compute. Your computer
# may appear to freeze for a while with cpu usage at 100%. If this occurs, you
# can cancel the traceback with Ctrl-C (maybe hitting it more than once).

#xmode Plain
xmode Context
#xmode Verbose

# multi_line_specials: if true, allow magics, aliases and shell escapes (via
# !cmd) to be used in multi-line input (like for loops).  For example, if you
# have this active, the following is valid in IPython:
#
#In [17]: for i in range(3):
#   ....:     mkdir $i
#   ....:     !touch $i/hello
#   ....:     ls -l $i

multi_line_specials 1

#---------------------------------------------------------------------------
# Section: Readline configuration (readline is not available for MS-Windows)

# This is done via the following options:

# (i) readline_parse_and_bind: this option can appear as many times as you
# want, each time defining a string to be executed via a
# readline.parse_and_bind() command. The syntax for valid commands of this
# kind can be found by reading the documentation for the GNU readline library,
# as these commands are of the kind which readline accepts in its
# configuration file.

# The TAB key can be used to complete names at the command line in one of two
# ways: 'complete' and 'menu-complete'. The difference is that 'complete' only
# completes as much as possible while 'menu-complete' cycles through all
# possible completions. Leave the one you prefer uncommented.

readline_parse_and_bind tab: complete
#readline_parse_and_bind tab: menu-complete

# This binds Control-l to printing the list of all possible completions when
# there is more than one (what 'complete' does when hitting TAB twice, or at
# the first TAB if show-all-if-ambiguous is on)
readline_parse_and_bind "\C-l": possible-completions

# This forces readline to automatically print the above list when tab
# completion is set to 'complete'. You can still get this list manually by
# using the key bound to 'possible-completions' (Control-l by default) or by
# hitting TAB twice. Turning this on makes the printing happen at the first
# TAB.
readline_parse_and_bind set show-all-if-ambiguous on

# If you have TAB set to complete names, you can rebind any key (Control-o by
# default) to insert a true TAB character.
readline_parse_and_bind "\C-o": tab-insert

# These commands allow you to indent/unindent easily, with the 4-space
# convention of the Python coding standards.  Since IPython's internal
# auto-indent system also uses 4 spaces, you should not change the number of
# spaces in the code below.
readline_parse_and_bind "\M-i": "    "
readline_parse_and_bind "\M-o": "\d\d\d\d"
readline_parse_and_bind "\M-I": "\d\d\d\d"

# Bindings for incremental searches in the history. These searches use the
# string typed so far on the command line and search anything in the previous
# input history containing them.
readline_parse_and_bind "\C-r": reverse-search-history
readline_parse_and_bind "\C-s": forward-search-history

# Bindings for completing the current line in the history of previous
# commands. This allows you to recall any previous command by typing its first
# few letters and hitting Control-p, bypassing all intermediate commands which
# may be in the history (much faster than hitting up-arrow 50 times!)
readline_parse_and_bind "\C-p": history-search-backward
readline_parse_and_bind "\C-n": history-search-forward

# I also like to have the same functionality on the plain arrow keys. If you'd
# rather have the arrows use all the history (and not just match what you've
# typed so far), comment out or delete the next two lines.
readline_parse_and_bind "\e[A": history-search-backward
readline_parse_and_bind "\e[B": history-search-forward

# (ii) readline_remove_delims: a string of characters to be removed from the
# default word-delimiters list used by readline, so that completions may be
# performed on strings which contain them.

readline_remove_delims -/~

# (iii) readline_merge_completions: whether to merge the result of all
# possible completions or not.  If true, IPython will complete filenames,
# python names and aliases and return all possible completions.  If you set it
# to false, each completer is used at a time, and only if it doesn't return
# any completions is the next one used.

# The default order is: [python_matches, file_matches, alias_matches]

readline_merge_completions 1

# (iv) readline_omit__names: normally hitting <tab> after a '.' in a name
# will complete all attributes of an object, including all the special methods
# whose names start with single or double underscores (like __getitem__ or
# __class__).

# This variable allows you to control this completion behavior:

# readline_omit__names 1 -> completion will omit showing any names starting
# with two __, but it will still show names starting with one _.

# readline_omit__names 2 -> completion will omit all names beginning with one
# _ (which obviously means filtering out the double __ ones).

# Even when this option is set, you can still see those names by explicitly
# typing a _ after the period and hitting <tab>: 'name._<tab>' will always
# complete attribute names starting with '_'.

# This option is off by default so that new users see all attributes of any
# objects they are dealing with.

readline_omit__names 0

#---------------------------------------------------------------------------
# Section: modules to be loaded with 'import ...'

# List, separated by spaces, the names of the modules you want to import

# Example:
# import_mod sys os
# will produce internally the statements
# import sys
# import os

# Each import is executed in its own try/except block, so if one module
# fails to load the others will still be ok.

import_mod

#---------------------------------------------------------------------------
# Section: modules to import some functions from: 'from ... import ...'

# List, one per line, the modules for which you want only to import some
# functions. Give the module name first and then the name of functions to be
# imported from that module.

# Example:

# import_some IPython.genutils timing timings
# will produce internally the statement
# from IPython.genutils import timing, timings

# timing() and timings() are two IPython utilities for timing the execution of
# your own functions, which you may find useful.  Just uncomment the above
# line if you want to test them.

# If you have more than one import_some line, each gets its own try/except
# block (like import_mod, see above).

import_some

#---------------------------------------------------------------------------
# Section: modules to import all from : 'from ... import *'

# List (same syntax as import_mod above) those modules for which you want to
# import all functions. Remember, this is a potentially dangerous thing to do,
# since it is very easy to overwrite names of things you need. Use with
# caution.

# Example:
# import_all sys os
# will produce internally the statements
# from sys import *
# from os import *

# As before, each will be called in a separate try/except block.

import_all

#---------------------------------------------------------------------------
# Section: Python code to execute.

# Put here code to be explicitly executed (keep it simple!)
# Put one line of python code per line. All whitespace is removed (this is a
# feature, not a bug), so don't get fancy building loops here.
# This is just for quick convenient creation of things you want available.

# Example:
# execute x = 1
# execute print 'hello world'; y = z = 'a'
# will produce internally
# x = 1
# print 'hello world'; y = z = 'a'
# and each *line* (not each statement, we don't do python syntax parsing) is
# executed in its own try/except block.

execute

# Note for the adventurous: you can use this to define your own names for the
# magic functions, by playing some namespace tricks:

# execute __IPYTHON__.magic_pf = __IPYTHON__.magic_profile
#execute __IP.magic_exp = __IP.magic_pdoc
#execute __IPYTHON__.magic_exp = __IPYTHON__.magic_pdoc
#execute __IPYTHON__.magic_explain = __IPYTHON__.magic_pdoc

# defines @pf as a new name for @profile.

#---------------------------------------------------------------------------
# Section: Python files to load and execute.

# Put here the full names of files you want executed with execfile(file).  If
# you want complicated initialization, just write whatever you want in a
# regular python file and load it from here.

# Filenames defined here (which *must* include the extension) are searched for
# through all of sys.path. Since IPython adds your .ipython directory to
# sys.path, they can also be placed in your .ipython dir and will be
# found. Otherwise (if you want to execute things not in .ipython nor in
# sys.path) give a full path (you can use ~, it gets expanded)

# Example:
# execfile file1.py ~/file2.py
# will generate
# execfile('file1.py')
# execfile('_path_to_your_home/file2.py')

# As before, each file gets its own try/except block.

execfile

# If you are feeling adventurous, you can even add functionality to IPython
# through here. IPython works through a global variable called __ip which
# exists at the time when these files are read. If you know what you are doing
# (read the source) you can add functions to __ip in files loaded here.

# The file example-magic.py contains a simple but correct example. Try it:

# execfile example-magic.py

# Look at the examples in IPython/iplib.py for more details on how these magic
# functions need to process their arguments.

#---------------------------------------------------------------------------
# Section: aliases for system shell commands

# Here you can define your own names for system commands. The syntax is
# similar to that of the builtin @alias function:

# alias alias_name command_string

# The resulting aliases are auto-generated magic functions (hence usable as
# @alias_name)

alias casalog /usr/X11R6/bin/xterm -sb -sl 1000 -fn 6x12 -geometry 80x12+0+88 -bg white -fg blue -title "CASA Log" -e "tail -f casapy.log" &

# For example:
# alias myls ls -la
# will define 'myls' as an alias for executing the system command 'ls -la'.
# This allows you to customize IPython's environment to have the same aliases
# you are accustomed to from your own shell.

# You can also define aliases with parameters using %s specifiers (one per
# parameter):

# alias parts echo first %s second %s

# will give you in IPython:
# >>> @parts A B
# first A second B

# Use one 'alias' statement per alias you wish to define.

# alias

#************************* end of file <ipythonrc> ************************

CASA matplotlibrc file

backend : TkAgg

-- JosephMcMullin - 13 Oct 2006