Multi-scale Improvements in Clean

CASA Modification Request 1C108, November 2007

1. Introduction

Multi-scale clean, now part of the clean task, needs significant improvements before it can be released to the general user. See the specific items under Requirements.

2. Background

3. Requirements

3.1 Separate multi-scale options from deconvolution options: JIRA Ticket CAS-118

Clean has three deconvolution algorithms: clark, hogbom, and csclean. All of them decompose the image into a set of point sources. For extended sources, it may be useful to decompose the image not only into point sources but into sources of several angular sizes (this is the definition of multi-scale deconvolution). Multi-scale deconvolution should therefore be usable with any of these deconvolution algorithms.

Right now, clean offers four options: clark, hogbom, csclean, and multiscale. Multiscale should NOT be one of these options because it is not an algorithm but a parameter that can apply to any of the algorithms. The sponsor suggests adding a new boolean parameter called multiscale to clean: multiscale=False means normal cleaning, while multiscale=True opens up the "scales" parameter for additional editing. The user would then be able to select any of the three deconvolution types whether doing point-source or multi-scale deconvolution.
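The proposed interface change can be sketched in Python. This is purely illustrative of the parameter gating described above; the function name build_clean_args and its return value are hypothetical and not part of the CASA API.

```python
def build_clean_args(alg, multiscale=False, scales=None):
    """Assemble clean arguments; 'scales' is only honored when multiscale=True.

    Illustrative sketch of the proposed gating, not a real CASA function.
    """
    if alg not in ("clark", "hogbom", "csclean"):
        raise ValueError("alg must be one of clark, hogbom, csclean")
    args = {"alg": alg, "multiscale": multiscale}
    if multiscale:
        # Default scales as in the current implementation: [0, 3, 10] pixels.
        args["scales"] = scales if scales is not None else [0, 3, 10]
    elif scales is not None:
        raise ValueError("scales is only meaningful when multiscale=True")
    return args

# Any of the three algorithms can now be combined with multi-scale cleaning:
print(build_clean_args("clark"))
print(build_clean_args("hogbom", multiscale=True))
```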

3.2 Multi-scale Clean scale sizes:

The appropriate default parameters for multi-scale clean need more investigation. The present three values, [0, 3, 10], mean: use components with widths of 0 pixels (a point source), 3 pixels, and 10 pixels (it is not clear what this width is; is it the full width at half power?). Both the number of scales and their values need better documentation.

4. Design

5. Deployment Checklist

6. Test Plan

6.1 Internal Testing

6.2 Sponsor Testing

Check that all six combinations of deconvolution algorithm and point-source/multi-scale mode are available and function (that is, provide some reasonable answer). The developer and sponsor should provide documentation on the strategy for using multi-scale clean.
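The sponsor-test matrix above can be enumerated mechanically. The sketch below only builds the six combinations; the actual clean run for each combination is left as a placeholder, since this MR does not specify the test data.

```python
from itertools import product

# Three deconvolution algorithms crossed with multiscale on/off
# gives the six combinations the sponsor test must cover.
algorithms = ("clark", "hogbom", "csclean")
combinations = list(product(algorithms, (False, True)))

for alg, multiscale in combinations:
    mode = "multiscale" if multiscale else "point-source"
    # Placeholder: run clean with this combination and inspect the result.
    print(alg, mode)
```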

6.3 Integration/Regression Tests

Multi-scale clean should be included in at least one regression test.

6.4 Testing for Scientific Validity

Compare with very deep cleans of an extended source using a point-source model, and with multi-scale clean in AIPS.


APPROVED: I acknowledge that my request is fully contained in this MR, and if the CASA development group delivers exactly what I specified, I will be happy.

ACCEPTED: I acknowledge that I have validated the completed code according to the acceptance tests, and I am happy with the results.

Written - - - - -
Checked - - - - -
Approved by Scientific Sponsor - - - - -
Accepted/Delivered by Sponsor - - - - -

  • Use %X% if MR is not complete (will display ALERT!)
  • Use %Y% if MR is complete (will display DONE)

Discussion Area

Input from DebraShepherd (co-sponsor on this ticket):

I like the suggestion to make ms-clean an option within the standard clean algorithms; it makes more sense. I can also see that this is lower priority, given that multi-scale clean works OK the way it is.

Something that may help: this ticket says you need better documentation on how to use the scales. Here is an e-mail I wrote explaining how to use the current implementation of multi-scale clean:

If the standard clean command for a VLA 3.6 cm continuum image looks like:

default clean

clean(vis='', imagename='source.1', mode='mfs', alg='clark', niter=500, gain=0.1, field='AFGL*', spw=[0,1], imsize=[256,256], cell=['0.1arcsec','0.1arcsec'], weighting='briggs', rmode='norm', robust=0.5, cleanbox='interactive', npercycle=100)

Multi-scale would look like:

default clean

clean(vis='', imagename='source.2', mode='mfs', alg='multiscale', scales=[0,3,10], niter=500, gain=0.1, field='AFGL*', spw=[0,1], imsize=[256,256], cell=['0.1arcsec','0.1arcsec'], weighting='briggs', rmode='norm', robust=0.5, cleanbox='interactive', npercycle=100)

So here the scales are 0, 3, and 10 pixels. Each pixel is 0.1", so the scales are
  • 0" (delta function, just like the regular clean)
  • 0.3"
  • 1"
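The pixel-to-angle conversion above is simple enough to script. The helper below is illustrative (not a CASA function): it translates the pixel scales passed to multi-scale clean into angular sizes, given the image cell size in arcseconds.

```python
def scales_to_arcsec(scales_px, cell_arcsec):
    """Convert multi-scale clean scale sizes from pixels to arcseconds.

    Illustrative helper, not part of CASA. Rounding avoids floating-point
    noise in the printed values.
    """
    return [round(s * cell_arcsec, 6) for s in scales_px]

# Default scales [0, 3, 10] with the 0.1 arcsec cell from the example:
print(scales_to_arcsec([0, 3, 10], 0.1))  # [0.0, 0.3, 1.0]
```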

Using the defaults should be adequate for most images. Multi-scale clean will take about 3 times longer than a standard clean because the deconvolution algorithm has to generate intermediate residual images, etc., for each scale. However, when it is done, you will have just one image that combines all the scales.

The scales should be logarithmically spaced for the algorithm to work well; if you put them too close together, the algorithm won't use some of the spatial scales. For example, using scale sizes of 0, 1, 2, and 3 pixels is not a good idea, because nothing will be found on scales 1 and 2 (just 0 and 3), yet it will take 4 times longer to clean. If you need another scale, use scales=[0,3,10,30], but only if you think your map will have structures significantly larger than 30 pixels (3"). If the largest structure in an image is about 3" in diameter, you do not need a 3" scale. Also, if your mask regions are not larger than 3" in diameter, no flux will be found on the 3" scale, because the mask will not allow such large structures to be fit to the image.
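The logarithmic-spacing advice above can be sketched as a small helper. This is illustrative only (not part of CASA): it builds a scale list starting from a point source, with a ratio of about 3 between successive scales, stopping at the largest scale worth fitting.

```python
def suggest_scales(smallest_px, largest_px, ratio=3.0):
    """Build roughly logarithmically spaced clean scales in pixels.

    Starts with 0 (point source), then multiplies by `ratio` until the
    largest useful scale is exceeded. Illustrative sketch, not a CASA API.
    """
    scales, s = [0], float(smallest_px)
    while s <= largest_px:
        scales.append(round(s))
        s *= ratio
    return scales

print(suggest_scales(3, 30))  # [0, 3, 9, 27], close to the [0,3,10,30] example
```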

-- NicoleRadziwill - 30 Oct 2007
Topic revision: r5 - 2007-12-15, DebraShepherd