First, I can do everything in the scripts just fine. It's a bit clunky and the documentation is rather thin, but it does seem to work and I haven't crashed anything yet. This is just on the first OrionS script ...

So, now some probably mostly dumb questions interspersed with comments. Feel free to tell me to RTFM if that's appropriate:

1) How exactly should these regression scripts be run as scripts (as opposed to just cut and paste)?

Once in casapy, type the following at the prompt, giving the script filename as the argument: CASA <1>: execfile ''

2) The regression test data isn't here in CV so the part that copies the data at the top won't work. Note also that if you use svn to get a copy of the OrionS data that the actual MS of interest is one level below that. It took me a short minute to work all that out.

Working on modifications to the scripts to handle the rpm case. Not ready yet but the data is in CV at /usr/lib/casapy/data/regression (rhas4).

4) Why is recalc_azel necessary? Why not just do that when the scantable is made?

This could be done automatically during the MS-to-scantable fill.

5) How much is actually in memory for each scantable? When should I start to be worried that I've got too many scantables? Is there some way to tell?

You can monitor your casapy session to see how much memory it's using. Malte recommends simply deleting (del) scantables once you're done with them.
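The effect of del is easy to see in plain Python. A minimal sketch, using a hypothetical Scantable stand-in (the real asap scantable similarly holds its spectra in process memory, which is what makes unused ones costly):

```python
import gc
import weakref

# Hypothetical stand-in for an asap scantable; the real object keeps the
# channel data in memory for every row.
class Scantable:
    def __init__(self, nchan):
        self.spectrum = [0.0] * nchan

s = Scantable(1024)
probe = weakref.ref(s)   # watch the object without keeping it alive
del s                    # drop the session's reference once you're done
gc.collect()
print(probe() is None)   # True: the memory can now be reclaimed
```

Once the last name bound to a scantable is deleted, Python can free it; keeping many names bound in the session is what accumulates memory.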

6) The plotter is pretty plain. It could use some pop-up help on the buttons to remind me what each one does.

A popular comment. This is the matplotlib plotter. We can look at ways to customize it but this is what you get out of the box.

7) Inconsistent naming conventions: e.g. getifs and get_elevation

8) How do I see what polarizations are in the data (XX, YY, etc). Summary doesn't seem to show it (unless I'm missing something) and I can't find a method yet to get that, but the plotter gets it so it must be there.

9) When you average the two polarizations it seems odd to me that the polarization is now XX instead of I (okay, so that's probably not really an average, blah, blah, blah, but XX seems wrong - or at least not helpful).

10) It's REALLY annoying that so many plotter operations don't hold over when you change something (e.g. set_histogram resets the zoom level, as does replotting the thing).

11) Is there any equivalent to bdrop/edrop to just always exclude the end channels from the display without removing them from the data? That might help on the auto-scaling, although I can't tell because ...

Not exactly, though you can set up a mask for a scantable that will work in a similar way.
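The mask idea is just a per-channel boolean flag, so the edge channels are skipped without touching the data. A minimal sketch in plain Python (the channel counts and edge width here are made up for illustration):

```python
nchan = 1024
edge = 50   # channels to ignore at each end, analogous to bdrop/edrop

# True marks channels to keep; the underlying spectrum is untouched
mask = [edge <= i < nchan - edge for i in range(nchan)]

spectrum = [float(i) for i in range(nchan)]
kept = [v for v, keep in zip(spectrum, mask) if keep]
print(len(kept))   # 924 channels survive the mask
```

Statistics or auto-scaling computed over `kept` rather than `spectrum` would then ignore the noisy band edges.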

12) How do I get at the actual data values? Print them out, for example.

It is a bit hidden, but there is scantable._getspectrum(row).

13) Can I fit a polynomial baseline and not remove it right away? Overplot it before deciding if I like it?
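Keeping the fitted baseline separate from the subtraction is straightforward in principle: evaluate the model, inspect it, and only subtract if you like it. A pure-Python order-1 least-squares sketch of that workflow (the fit machinery here is hand-rolled for illustration, not the asap fitter):

```python
def polyfit1(x, y):
    """Least-squares straight-line fit, returning (intercept, slope)."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return intercept, slope

x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 5.0, 7.0]          # exactly y = 1 + 2x
a, b = polyfit1(x, y)
baseline = [a + b * v for v in x]  # the model, kept for inspection/overplot
# ... only now, if the fit looks good, subtract: [d - m for d, m in zip(y, baseline)]
print(round(a, 6), round(b, 6))   # 1.0 2.0
```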

14) There are a bunch of differences in statistics and fitted values from what is shown in the script. Should I be worried? Shouldn't you edit the script to reflect the current reality? And the differences with GBTIDL seem larger than I would have thought - but perhaps that's because of the differences at the baseline fitting step.

Just drift from the original; we had some differences in the implemented calibration. Things are much closer now, and I'll update the values in the script.

15) You can give odd names for the desired statistic without any error message. For example, I mistyped "rms" as "rls", saw a 0.0, and it took me quite a while to figure out what had gone wrong.
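One way to catch that kind of typo is to validate the statistic name against a known set before computing anything. A sketch, with a hypothetical stats wrapper and name list (not the actual asap API):

```python
VALID_STATS = {"min", "max", "sum", "mean", "median", "rms", "stddev"}

def checked_stats(values, stat):
    """Reject unknown statistic names instead of silently returning 0.0."""
    if stat not in VALID_STATS:
        raise ValueError("unknown statistic %r; expected one of %s"
                         % (stat, ", ".join(sorted(VALID_STATS))))
    if stat == "rms":
        return (sum(v * v for v in values) / len(values)) ** 0.5
    # ... other statistics elided for brevity
    raise NotImplementedError(stat)

print(checked_stats([3.0, 4.0], "rms"))   # about 3.5355
try:
    checked_stats([3.0, 4.0], "rls")      # the typo from the question
except ValueError as e:
    print("caught:", e)
```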

3) I like the tab completion. It helps with the clunkiness. Is there any easy way to know what parameters are available for a certain method?
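Since casapy is built on IPython, typing a method name followed by "?" shows its docstring, and plain Python introspection works too. A small sketch using a hypothetical method signature (the name and parameters below are made up for illustration):

```python
import inspect

def auto_poly_baseline(scan, order=2, insitu=True):
    """Hypothetical method signature, for demonstrating introspection."""

# List the parameter names a callable accepts
sig = inspect.signature(auto_poly_baseline)
print(list(sig.parameters))   # ['scan', 'order', 'insitu']
```

In the interactive shell, help(obj.method) gives the same information plus the docstring.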


16) It's a pity that the max, sum, median, and mean can't be done in one call (or can they, and this is an RTFM issue?).
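Until that exists, a small wrapper computing all four together is easy to keep in the session. A sketch (the helper name is my own, not an asap call):

```python
def summarize(values):
    """Return max, sum, median, and mean of a sequence in one call."""
    s = sorted(values)
    n = len(s)
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    total = sum(s)
    return {"max": s[-1], "sum": total, "median": median, "mean": total / n}

print(summarize([1.0, 2.0, 3.0, 4.0]))
```

The same idea applies to any per-spectrum statistic: pull the values out once and compute everything in one pass.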

We're working on packaging up the asap functionality, much like was done for the AIPS++ toolkit methods. You then have persistent state, you can look at inputs, get decent help, etc.

17) The separate plotter for the gaussian fit is a pain. Can't I overplot the fit on my data?

You would have to do it by hand currently. This is ASAP.

18) Related to (14) ... why does the regression script only care about accuracy to 5%? Surely you can do better than that given the same data source.

Sure. The regressions are still evolving, so stay tuned.

-- JosephMcMullin - 28 Feb 2007