Last Update: JeffMangum - 23 Mar 2016
- Corrected Visibility Phase Fluctuations: < 57 deg at 950 GHz
- Delay Error (Structural): < 13 fs RMS between 10 and 300s
- Delay Error (Structural): < 38 fs RMS for 10s integration
- Delay Error (Electronics): < 65 fs for 10s (noise)
- WVR Divergence from Observing Beam: < 10 arcmin
- WVR Path Length Correction Noise (RMS): < 0.01*w+10 micron (for sampling < 1 Hz)
- Most of the backbone behind this spec derives from Larry D'Addario's "System Design Description", which is linked from JGM's AlmaReference page.
- The "System Design Description" resulted from Lama Memo 803. In that memo, D'Addario and Holdaway found that under the best 5% atmospheric conditions, residual phase errors were about 37 degrees over 20 seconds at 937 GHz. If the instrumental phase errors are the same magnitude, then the total phase errors become 53 degrees at 950 GHz.
- 57 degrees at 950 GHz implies > 28.5 deg at 475 GHz (worse atmospheric conditions).
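The quadrature combination behind the 53-degree figure quoted above can be checked with a one-line calculation (Python, illustrative):

```python
import math

# Residual atmospheric phase error under the best 5% of conditions
# (D'Addario & Holdaway figure quoted above): ~37 deg over 20 s near 950 GHz
atmospheric_deg = 37.0

# If the instrumental phase errors are of the same magnitude, the two
# independent error terms add in quadrature
instrumental_deg = 37.0

total_deg = math.hypot(atmospheric_deg, instrumental_deg)
print(f"total phase error: {total_deg:.1f} deg")  # ~52.3 deg, i.e. the ~53 deg quoted
```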
Band-to-Band Phase Calibration
From an email discussion between EdFomalont and Bill Shillue, with input from Neil Phillips.
The main problem with Band-to-Band calibration is that ALMA cannot measure three things simultaneously:
- The high frequency phase of a strong source
- The low frequency phase of the same strong source
- The low frequency phase of the weaker phase calibrator source
You can switch quickly among the frequencies and sources, but not quickly enough to retain sufficient SNR. The resultant phase scatter from these measurements leads to a systematic phase error when the low-frequency phase calibrator observation is scaled to the high frequency of the target observation. I am not sure whether any improved switching implementation is possible with ALMA, short of observing at two bands simultaneously.
Bill Shillue asked if it would be possible to observe two bands simultaneously. JeffMangum did a secondary focus plate scale calculation which showed that: Band 3 and Band 8 are right next to each other, so let's see how their offsets and beam sizes compare. Band 3 is at X/Y coordinate (54, -306) while Band 8 is at (0, -103.3), so the radial separation is sqrt((54-0)^2 + (-306-(-103.3))^2) = 209.77 mm. This corresponds to an angular distance of 209.77*2.06 = 432 arcsec, which is much larger than the primary beam at Band 3 (about 60 arcsec). Unless you place some optics in front of the bands to be used simultaneously (as is done at IRAM), I don't see how multiple bands can be measured simultaneously. Let me know if you have questions.
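The plate-scale arithmetic above can be reproduced with a short calculation (a sketch in Python; the band positions and the 2.06 arcsec/mm plate scale are the values quoted in the discussion):

```python
import math

# Band 3 and Band 8 receiver positions in the focal plane (mm), as quoted above
band3 = (54.0, -306.0)
band8 = (0.0, -103.3)

# Radial separation in the focal plane
sep_mm = math.dist(band3, band8)

# Secondary-focus plate scale quoted above: ~2.06 arcsec per mm
PLATE_SCALE = 2.06
sep_arcsec = sep_mm * PLATE_SCALE

# The Band 3 primary beam is about 60 arcsec, so the separation is ~7 beamwidths
print(f"{sep_mm:.2f} mm -> {sep_arcsec:.0f} arcsec")  # ~209.77 mm -> ~432 arcsec
```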
Bill Shillue follows up with: the Band 3-Band 8 separation is 7 Band 3 beamwidths or 30 Band 8 beamwidths. Based on your guidance I worked out the other beam sizes and separations; Bands 6-7-9 are closest, but the beam-on-sky separation is still 387 arcsec, something like 20-40 times the beam size. Thus the use of two bands would not be useful to address the Band-to-Band phase transfer. JeffMangum suggests, though, that rather than observing two bands simultaneously it would be almost as good to observe them with a very short switch time: ALMA's original plan for Band-to-Band phase transfer involved switching between bands while observing phase calibration sources with sufficient signal-to-noise. See the attached "Phase Calibration Plan" for details. Section 4.2, entitled "Fast Switching Only", lists a measurement sequence which involves switching between bands, target source, and calibrator on relatively short timescales. I see this as a variant of what you are proposing, except that there are few-second delays between measurements (rather than them being simultaneous), and we need to move the antenna a bit to reposition the appropriate beams.
listed a "current status" of Band-to-Band switching capabilities with ALMA: There are some fast-switching 'scripts' which change frequency every 3-sec or so. The problem is that you need at about 20 sec on the high frequency of the bright source to get sufficient SNR.
- L(5 sec) - H(20 sec) - L(5 sec): this is what we are doing now. L(5 sec) means low frequency for 5 sec; H(20 sec) means high frequency for 20 sec. It normally takes about 10 sec to switch frequencies, represented by the '-'. Then compare the average phase of the two L(5 sec) scans with H(20 sec) to get a pseudo-simultaneous phase difference.
- L(2 sec) - H(5 sec) - L(2 sec) - H(5 sec) - L(2 sec) - H(5 sec) - L(2 sec) - H(5 sec) - L(2 sec): Is this sequence any better for measuring the high-low phase difference at a specific time? Here '-' takes about 3 sec or less. Perhaps take each triad (L-H-L) and get a phase difference, then average the four of them. This average (amplitude and phase) should be coherent and give a more accurate difference. Remember, the phase at the high frequency is noise at the 20 deg rms level or more. I think we need the L-H phase difference for each antenna to 5 deg or better for band-to-band transfer to be accurate.
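The triad averaging described above could be sketched as follows (Python, with entirely made-up per-scan phases; the coherent average is a vector average of the four triad differences):

```python
import cmath
import math

def triad_phase_diff(low_before, high, low_after):
    """Phase difference (deg) between an H scan and the mean of the
    two bracketing L scans, for one L-H-L triad."""
    return high - (low_before + low_after) / 2.0

# Hypothetical per-scan phases (deg) from the L(2s)-H(5s)-... sequence above
low = [10.0, 12.0, 11.0, 13.0, 12.0]   # five L(2 sec) scans
high = [55.0, 57.0, 54.0, 58.0]        # four H(5 sec) scans

diffs = [triad_phase_diff(low[i], high[i], low[i + 1]) for i in range(4)]

# Coherent (vector) average of the four triad differences, as suggested above
mean_vec = sum(cmath.exp(1j * math.radians(d)) for d in diffs) / len(diffs)
mean_diff = math.degrees(cmath.phase(mean_vec))
print(f"coherent mean L-H phase difference: {mean_diff:.1f} deg")
```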
A follow-up question: How stable is the instrumental phase (the part of the total measured phase that Bill cares about and about which we can do something) when switching between the low and high frequencies? If there is room for improvement in this phase stability, then what Bill is proposing could improve the phase continuity between the low and high frequency measurements. And what is the source of the 10-second switch time between the two frequencies? Also, comparing this to what Holdaway designed, it seems that we are spending a lot of time on the low-frequency cal (L(5 sec)). Why so long? I would suspect that signal-to-noise is not the limitation.
The response: We spend too much time at the low frequency because of the time it takes to switch frequencies. The instrumental phase difference (the dispersive phase difference between the two frequencies) is amazingly constant for ALMA over a period of an hour, usually a few degrees at most. So we need only determine this phase difference once an hour and assume all other changes are delay-like, so that the two frequency delays track very well. Most of the changes are caused by the troposphere, with two- to many-second time scales. So the shorter the triads, the less likely that the troposphere will contaminate the phase difference. But if you are switching frequencies very quickly, how long does it take for the LO phase to settle down? I suppose that both LOs are hot, so there should be no transients? Another question: if you are observing a spectral line at the high frequency (say Band 9) and specify the LOs needed, must the low frequency LO be compatible for fast switching? As you can see I don't know much about the LO system at ALMA, but the low frequency choice is somewhat arbitrary, so you can pick a frequency that is switchable quickly.
Bill Shillue responds: Ideally, the LO phase should be settled before the antenna gets to the new slew position. We tested this extensively with just the hardware: two independent local oscillators, 14 km of fiber, and a simulated antenna which was just two fiber optic wraps. So our phase measurements did not account for software delays, gravity, or effects of the large mechanical structure. The plot below shows a switching test with phase at 90 GHz versus time hh:mm:ss (from BEND-50.01.00.00-011-A-TDR, pg 159).
As far as the LO frequencies are concerned: if the cal frequency is in Band 3, then the LO is designed to fast switch in something like 1 second. The target frequency can be any other frequency. Normally the LO tuning takes 10 or 20 seconds (when you are just moving to a new target, for instance). The fast switching (between two bands) is, however, not invoked by the control software directly. Rather, the firmware of the LO keeps a laser parked at the previous frequency, and if it is asked to return to that frequency it skips all the coarse tuning and locks immediately. For this reason, if you want to do 100 or 1000 cycles between two frequencies, the first two frequency sets will take the full 10 or 20 seconds and can be regarded as setup time.
A summary of Bill Shillue's LO phase locking tests: If I read this right, it takes only 1 second to switch the LO from a frequency in Band 3 to any other frequency in any other band. If correct, then it appears to me that retuning the LO is not the "tall pole" in our desire to improve Band-to-Band phase calibration, correct?
Bill Shillue concurred, but thought that some "live" tests should be done with the complete ALMA system: I was thinking about that after I replied, and worried that maybe the combination of software and hardware is resulting in the system not meeting the standard to which it was designed. The LO is supposed to be invisible in the sense that it should set up, switch, and settle faster than the antenna motion (which we took to be 2 deg in 1.5 seconds). It might be a good idea for us to look at how the LO is actually behaving during this kind of switching. EdFomalont will look into doing these tests.
Antenna Design Specifications Relating to Phase Calibration
From the "Specification for Design, Manufacturing, Transport and Integration on Site of the ALMA Antennas", Sections 5.4.1 and 5.4.4:
Section 5.4.1 (Fast Switching Phase Calibration):
"...the antenna shall perform steps of 1.5 degrees on the sky and
settle to within 3 arcsec peak pointing error, all in 1.5 seconds of
time. The antenna shall then track and integrate on a calibration
source during one second, then it shall switch back to the target
source with the same requirements on switching time and settling
accuracy. It shall then track on the target source. The time for a
full cycle of target-calibrator-target observation shall be 10 seconds."
Section 5.4.4 (Step Response):
For a step amplitude of 1.5 degrees, the direction of the boresight
axis shall position to within 3 arcsec within 1.5 seconds of time and 0.6
arcsec within 2.0 seconds of time (total). This applies to the motion on
the sky and in any direction, except that the azimuth component of
the motion need only meet this requirement if the elevation is less
than 60 degrees.
Note from MH: the specification allows a cycle time as short as 10s.
In actuality, much longer cycle times will be used: 20-30s without WVR,
and perhaps 30-300s with WVR (we are studying this).
- 22 Oct 2004
The Importance of Self-Calibration
A thread derived from an email discussion between EdFomalont and MarkHoldaway...
Realization #1 -- Self-Calibration is Useable Over a Much Larger Range of S/N Than Previously Thought:
I just did a very simple simulation using AIPS++ -- adding more and more thermal noise to a point source, doing self-cal on it, and looking at the rms error which is introduced to the phase gain. The theoretical result (Cornwell, 1980) is
sigma_gain = sigma_bl / ( S * sqrt(N-2) )
where sigma_bl is the thermal noise per baseline, S is the source strength (point source assumed here), N is the number of antennas, and sigma_gain is the rms gain error -- which turns out to be essentially the same as the rms error of the phase solutions in radians. Now, for N = 50,
sigma_gain = 0.144 (sigma_bl / S )
Now, when I simulate this in AIPS++, adding thermal noise, I get the result shown in the attached plot, and the best fit line is more like
sigma_gain = 0.109 (sigma_bl / S )
I.e., just crunching through the numbers with AIPS++, the gain solution process is less susceptible to noise than I had thought.
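The theoretical coefficient quoted above follows directly from the Cornwell formula (a quick check in Python; 0.109 is Mark's fitted value from the AIPS++ simulation, not something derived here):

```python
import math

def sigma_gain_theory(sigma_bl, S, N):
    """Theoretical rms gain (phase) error in radians from thermal noise,
    per the formula quoted above: sigma_bl / (S * sqrt(N - 2))."""
    return sigma_bl / (S * math.sqrt(N - 2))

N = 50
coeff_theory = sigma_gain_theory(1.0, 1.0, N)  # 1/sqrt(48) ~ 0.144
coeff_fitted = 0.109                           # Mark's AIPS++ best-fit coefficient
print(f"theory: {coeff_theory:.3f}, fitted: {coeff_fitted:.3f}")
```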
Further interpreting the plot, ALMA can effectively perform self-cal when the SNR per visibility over the solution interval is as low as 1/5 or even 1/7 (1/5 and 1/7 being the inverse of the X axis in the plot). People might be cautious about using such low SNR data for gain solutions, but remember that the noise is more limiting to the image than the noisy gain solutions:
Thermal noise limit to dynamic range, no gain errors:
DR = S * N / (sqrt(2) * sigma_bl )
Limit to dynamic range from noisy gain solutions:
DR = S * N^(3/2) / (sqrt(2) * sigma_bl)
These expressions work for a snapshot; if we have a long observation, the phase solution process will randomize the phase solution errors, and the conclusions still hold: thermal noise's effect on gain errors is sqrt(N) less damaging to the image than the actual thermal noise is: ie, self-cal works great.
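The two snapshot dynamic-range limits above can be written as a small sketch (Python; the sqrt(N) advantage of the gain-noise limit is exactly the point made in the text):

```python
import math

def dr_thermal(S, N, sigma_bl):
    """Snapshot dynamic-range limit from thermal noise alone:
    DR = S * N / (sqrt(2) * sigma_bl)."""
    return S * N / (math.sqrt(2) * sigma_bl)

def dr_gain_noise(S, N, sigma_bl):
    """Snapshot dynamic-range limit from thermal noise propagated through
    the gain solutions: DR = S * N^(3/2) / (sqrt(2) * sigma_bl)."""
    return S * N**1.5 / (math.sqrt(2) * sigma_bl)

# The gain-noise limit is a factor sqrt(N) higher, i.e. less damaging
N = 50
ratio = dr_gain_noise(1.0, N, 1.0) / dr_thermal(1.0, N, 1.0)
print(f"ratio = sqrt(N) = {ratio:.2f}")  # ~7.07 for N = 50
```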
HOWEVER, we don't want to routinely do self-cal with an SNR per baseline of 1/5: then we could suffer irreparable decorrelation from the thermal noise on the self-cal gain solutions.
Response from EdFomalont:
The departure from linearity in the postscript plot is mostly an indication of when the phase points start to scatter above 360 deg. My experience with noisy antenna phase solutions (VLBA and VSOP) is that an rms of 20 deg is about the limit. This then corresponds to about SNR = 2.5:1 for an antenna, which drops to about SNR = 0.35 (1/3) for a baseline phase. Now, some caveats.
In attempting to measure relatively low SNR self-calibration solutions, anything that can be done to improve the SNR of a solution is worthwhile. For example, combining the eight channels of data (4 frequencies and 2 polarizations) to obtain one solution is usually possible, since the troposphere fluctuations should be about the same once all of the phase differences have been removed among the data channels (fortunately, these are generally slowly varying and easy to determine). The coherence time of the observations, and hence the approximate solution interval, is also important to determine. It could vary with baseline length and will certainly vary with time. And, of course, any resolution effects will decrease the SNR.
Another thing to realize is that if you self-calibrate on noise alone, you will get phase solutions which will have a surprisingly large signal-to-noise, perhaps even as high as 1.5, and the image will be a nice point source at the center. However, if you look at the phase solutions, they will be random in time. So, whenever you are self-calibrating, the phase solutions should be continuous in time, even with low-SNR solutions. In fact, a good self-calibration algorithm would fit for a continuous delay function with time over several coherence periods.
More from MarkHoldaway:
Realization #2 -- Self-Calibration is Useable With Much Weaker Target Sources Than Previously Thought:
In fast switching, we are typically spending about 0.3 s on a 50 mJy cal source and then spending 30 s on the target source. In order to get good phase solutions on the cal source, we typically got an SNR of 1 on each baseline -- though that was tweaked -- if the residual atmospheric errors are small, then we would require higher SNR so as not to be limited by the thermal errors in the gain solutions.
So, when we go to the target source, we have 100 times as much time on source and can accept an SNR per baseline over that 30 s of about 1/2 (=> gain rms = 0.2 rad, or 0.96 coherence); in other words, our target source could be about 20 times fainter than the cal source -- i.e., as weak as just a few mJy, though that depends on the observing band -- and self-calibration should still work well.
General comment from EdFomalont:
A general comment: the role of the WVR corrections is CRUCIAL for ALMA. If you can increase the coherence time from 3 sec (or whatever the lower limit is) to 30-60 sec, then self-calibration to obtain the residual slowly changing phase is pushed to weaker sources by about a factor of 5.
- MarkHoldaway: Do we even need fast switching? Yes, we do. If we DIDN'T fast switch, we might very well get into a regime where we have 360 deg errors over time and our source might be so complicated that we cannot contrive a starting model (unlike VLBI, where a point source is often sufficient to start with).
- EdFomalont: If there are changes of 360 deg in a 30 sec period, then you are in big trouble even with fast switching; i.e., the coherence time is shorter than the calibration time. If the phases of two consecutive calibration observations differ by 180 deg, which way do you interpolate to apply to the target? If you go the wrong way, the target phase is in error by 180 deg on average. If you self-cal the target (with or without introducing incorrect phases from the calibrator), you must not use a solution interval in which the phase changes by more than about 100 deg at most. On the second point, I think that it would be safer with ALMA to start with a point source than with the VLBA. With many more ALMA antennas, the self-cal algorithm would be more robust. With no fast switching, you start with a point source model (or model-fit something to the amplitudes). With satisfactory fast switching, you will get a reasonable model. But if the source is very complicated (beam-filling emission), then I would worry about the self-cal image convergence regardless of whether you started with a point source or the first image. Have you made simulations of successive self-calibration iterations? Finally, one reason for fast switching is to tie the position of the target to that of the calibrator. With self-calibration alone, you lose the target position.
- MarkHoldaway: Will this fix everything that fast switching can't quite get? No. Any decorrelation losses during the self-cal solution interval (which should be the switching cycle time or smaller; and which could be the averaging time of the visibilities) are GONE. However, we can solve for the average phase error over that period.
- EdFomalont: I was never comfortable with 0.3 s integration times on the calibrator because this short period tells you nothing about the short-term fluctuations of phase. You could argue that a 5 sec integration time on a calibrator (even with infinite signal-to-noise) will also estimate the decorrelation loss (for the calibrator and the target). This correction, applied to the target, may more than compensate for the small loss of target SNR from less integration time. Of course, if the WVR works well, we don't have to worry about this as much.
- MarkHoldaway: I think using self-cal after fast switching can significantly reduce the residual phase errors and some of the decorrelation from fast switching.
- EdFomalont: Yes, self-calibration will also determine accurate antenna gain solutions and will fix minor decorrelation. Large decorrelation causes closure errors. However, the SNR needed for accurate gain solutions is now, perhaps, twice as high as we discussed above.
- MarkHoldaway: Just as we hope that WVR will help increase the fast switching cycle time and thereby increase ALMA's efficiency, Self-cal will also do that. If our source is not a low SNR detection experiment, and we know that we have 5, 10 mJy, or more to work with, we could relax the fast switching requirements. If we ran into trouble, we could make the first image (to be used as a starting model in the self-cal loop) with only visibilities collected very close to the CAL observations, then using that as the starting model, do phase solutions on all the data.
- EdFomalont: There are lots of strategies, which will depend on the weather, the WVR behavior, and the source strength and complexity. With experience with ALMA, the optimum ones will become clearer. The most important goal of the WVR is to increase the coherence time by a significant amount. Then a calibrator must be observed every coherence period in order to remove the more slowly varying residual phases. Without the WVR, the coherence time may be just 5 sec; with the WVR, it will be considerably longer than 60 sec. This calibration will produce a source image, regardless of its strength. If the source is really strong, then the calibrator observations only give you the relative position of the source, since self-calibration loses the absolute position. As mentioned above, I don't think the starting model for the source using the calibrator observations is crucial (but I may be wrong here). More common will be a source which is nearer the margins of self-calibration. In this case the calibrator observations are needed in order to determine the relative phases of the polarization and frequency channels so that all data streams can be combined coherently before tackling the self-cal. These relative phase terms are probably slowly varying and can be followed with calibrator observations every five minutes or so. But if a source image is needed in order to obtain a robust self-calibration solution, then the calibrator observations must be made more often to really follow the residual phase, and not just the difference in residual phase among the different data streams. On your last point, if the phase stability is so bad that you can only use data close in time to the calibrator, then you should be observing at a lower frequency, or not at all. If things are this bad, then the phase that you measure on the calibrator may have little to do with the target, just because they are separated by a degree or two on the sky.
Big temporal phase changes mean big angular phase changes in the moving phase screen above the array.
- MarkHoldaway: When we are observing a source of 5-10 mJy or brighter, self-cal should be able to significantly improve the fast switching efficiency. It stands between 0.7 and 0.9 now (less efficient at high frequencies). We can spend less time doing fast switching calibration and we can correct for more of the residual phase fluctuations, resulting in less decorrelation, so a first guess might be that the efficiency of fast switching + self-cal will be more like 0.85-0.95.
- EdFomalont: Yes, if self-cal is likely, then you need many fewer observations to tie the source position down and to determine the relative phase terms described above.
- MarkHoldaway: The big shift in my thinking is that self-cal isn't just for strong sources anymore. I have no business guessing, but it seems like we should be able to self-cal more than half of ALMA observations.
- EdFomalont: With this perspective, it would be interesting to look at the 'sample' ALMA proposals to see what the mix is. Remember, there are plenty of interesting strong sources that have lots of structure and may not be easily self-calibratable on the longer baselines. And self-calibration will be able to image sources which can be detected in, say, about 5 minutes. For an integration time of 4 hours on a weak source, the detection level will be 7 times fainter than the self-calibration level. There must be a hell of a lot of sources between the ALMA coherent detection level and the self-calibration detection level.
- MarkHoldaway: What about WVR? If WVR works as well as it is supposed to, we won't have nearly as much reason to use Self-Cal. So, it is a good insurance policy.
- 22 Mar 2006
VLA Tests of ALMA Phase Calibration Issues
A thread derived from an email discussion between EdFomalont and MarkHoldaway...
They have embarked on some tests using the VLA to characterize some aspects of proposed ALMA phase calibration schemes.
- MarkHoldaway and Bryan Butler have observed weak phase calibrators at 22 GHz in order to determine the minimum SNR needed to obtain phase solutions.
- EdFomalont has been promised VLA time in order to cycle between several sources to determine the properties of the angular phase coherence (not done yet as far as I know). This data will be used to investigate the question of how close a calibrator should be to a target source in order to decrease any systematic phase differences associated with the cal-target separation.
On this second issue of the angular phase coherence, MarkHoldaway points out that Frazer Owen's intuition is that the systematic phase offsets with calibrator distance would be due to errors in the dry zenith delay "measurement". At the VLA and VLBA, they have models for the dry zenith delay based on ground measurements of T and P. This delay exceeds 1 m for ALMA, and is probably 2 m for the VLA. The delay cancels out almost exactly when two antennas are very close to each other (i.e., pointing in the same direction and at the same geographical altitude), so any error you make in measuring the dry zenith delay makes no difference. However, in the Y+ configuration, we will need to measure the dry zenith delay more precisely, and we will also need to include the altitude of the antennas, which differ by as much as 500 m.
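For scale, the dry zenith delay can be estimated from surface pressure with the standard Saastamoinen hydrostatic model (a sketch; this is not necessarily the model the VLA or VLBA actually uses, and the site pressures and latitudes below are rough assumed values for illustration):

```python
import math

def dry_zenith_delay(pressure_hpa, lat_deg, height_m):
    """Hydrostatic (dry) zenith delay in metres from surface pressure,
    using the standard Saastamoinen model."""
    phi = math.radians(lat_deg)
    f = 1.0 - 0.00266 * math.cos(2 * phi) - 0.00028 * (height_m / 1000.0)
    return 0.0022768 * pressure_hpa / f

# Rough site values (assumed): ~555 hPa at 5000 m for ALMA, ~790 hPa at 2100 m for the VLA
print(f"ALMA-like site: {dry_zenith_delay(555.0, -23.0, 5000.0):.2f} m")  # ~1.27 m
print(f"VLA-like site:  {dry_zenith_delay(790.0,  34.0, 2100.0):.2f} m")  # ~1.80 m
```

These numbers are consistent with the "exceeds 1 m for ALMA, probably 2 m for the VLA" figures quoted above.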
To this end, MarkHoldaway is in the process of answering a few questions:
- How does the surface determination of the dry zenith delay compare with what you get when you integrate the radiosonde data (plus a model to take you up beyond the top balloon height)?
- If we had frequent radiosonde data, would that be precise enough to measure the dry zenith delay?
- What about the temperature remote sensor with ± 2 K accuracy?
- At what baseline separation does the systematic phase difference become large enough to be of concern?
- Is there also smallish angular scale structure in the dry-air component that we don't expect?
- Are ground-based measurements alone enough to obtain sufficiently accurate delay models along the quasar path?
- If there are dry phase fluctuations, how will you distinguish them from wet fluctuations?
- 13 Apr 2005
- just for clarification: " we will also need to include the altitude of the antennas, which differ by as much as 500 m ": in fact there are two effects of the altitude difference:
- The geometrical delay effect: this is taken into account by CALC.
- The change of dry delay corresponding to the altitude difference. This is very probably also taken into account, as long as the ground pressure at the antenna, which is used to calculate the dry delay, is corrected for altitude change and proper scale height.
- Naturally it would be better to measure the ground pressure at each antenna, or at least to have measurements at various site locations of different altitudes and to interpolate to each antenna location.
- 14 Apr 2005
Issues Derived from Other ALMA Documents
Calibration Specifications and Requirements
- Phase fluctuations are expected to be about 5 degrees RMS at 30 GHz (scaled from 30 degrees at 11 GHz).
- Not clear how we measure this. Site test interferometer?
- See Hills et al. (2003) (ALMA Memo 459)
- 08 Oct 2004
I assert the ionosphere is not a big deal for ALMA.
Between 1996 and 2001, the 11 GHz site testing interferometer had an rms phase exceeding 20 deg only 2.4% of the time -- some of those times are due to tropospheric fluctuations, and some are ionospheric. So, 20 deg of ionospheric fluctuations at 11 GHz scales to 0.3 degree at 90 GHz, where fast switching will occur. Even if we scaled up the fast switching phase to 900 GHz, an ionospheric error of 20 deg at 11 GHz, or 0.3 deg at 90 GHz, would make an error of only 3 deg at 900 GHz.
So, as the bad ionospheric phase errors happen so rarely, and as they scale down to reasonable levels at 90 GHz, I JUST DON'T THINK IT IS A PROBLEM WE NEED TO WORRY MUCH ABOUT. Saddam is a much bigger threat.
I think we will have some way of monitoring it -- we will look at the 11 GHz interferometer phase and compare that with what the ALMA array is telling us, so we will know when the bad ionospheric conditions are happening -- this may be input to the dynamic scheduling software, giving us bells and whistles to say "Don't Observe at 30 GHz right now".
- 13 Oct 2004
I agree with Mark here (about the ionosphere anyway). The anomalous phase fluctuations seen at 11 GHz were very localised in time, and will only have an effect at 30 GHz -- and we have no funding yet for Band 1 anyway.
- 14 Oct 2004
Water Vapour Radiometry (WVR)
WVR ATF TEST Plans (draft)
Frontend IPT WVR Development Planning
- 22 Oct 2004
Correlation Studies of MIR and Millimeter-Wave Atmospheric Opacity Measurements
This discussion concerned the effects of ice crystals on measurements of the submillimeter opacity in the 650 micron window. The Smithsonian FTS campaigns showed that ice crystals were a significant source of opacity not well probed by the 225 GHz tipper, and AlWootten wondered if there is a correlation between their presence and mid-infrared atmospheric probes. BaltasarVilaVilaro points out that from the work by Chapman one could derive the correlation between 20 micron and millimeter/submillimeter-wave opacity measurements. In particular, they include the correlations between 20 micron and SCUBA 450 micron, and between 225 GHz and 20 micron. See:
- Chapman, I. M., Naylor, D. A., and Phillips, R. R., 2004, MNRAS, 354, 621; Correlation of Atmospheric Opacity Measurements by SCUBA and an Infrared Radiometer.
- Ian Myles Chapman: PhD Thesis, University of Alberta, 2000; The Atmosphere Above Mauna Kea at Mid-Infrared Wavelengths.
- Chamberlain, M. A., Ashley, M. C. B., Burton, M. G., Phillips, A., Storey, J. W. V., Harper, D. A., 2000, ApJ, 535, 501; Mid-Infrared Observing Conditions at the South Pole.
- Storey, J. W. V., Ashley, M. C. B., Burton, M. G., Lawrence, J. S., 2005, EAS, 14, 7; Automated Site-Testing from Antarctica.
- 19 Dec 2005
IRMA, Infrared Radiometer for Millimetre Astronomy
IRMA measures a spectral emission line of water vapour just as a 183 GHz radiometer does, but at a wavelength of ~20 microns. At this wavelength the water lines are very strong and the technology is less complicated. This allows good signal-to-noise to be achieved on very short time scales and, critically, allows much less complicated hardware than the 183 GHz designs.
IRMA was deployed at the SMA before summer 2004. Analysis of the resulting data has been complicated by various software problems; however, the data do show some correlation with the SMA-derived phase measurements. All three units were returned to the SMA in mid September and as of late September had started collecting data again. With all our known software issues that relate to SMA operations now fixed, we should be in a position to perform some more meaningful phase measurements.
Preparations are nearly complete for taking IRMA to Gemini South, except for the new vacuum dewar. Meanwhile, observations at the SMA are continuing most nights -- we have now had 2.5 months of continuous operation with the two units currently at the SMA.
- 03 Dec 2004
I'm not sure what has triggered this comment, but I think it is worth making the obvious points here about using 20 microns as the WVR wavelength. This technique may turn out to be useful, but it has some obvious difficulties to overcome and is as yet unproven; for example:
- the 183GHz radiometers have enough sensitivity, so this is not an issue.
- there are real concerns about the linearity of the 20 micron emission and the phase delay: the conversion factor may be hard to calibrate.
- the behaviour in cloud is unclear
- the 183 GHz beam couples very well to the astronomical beam as it uses the 12-m primary mirror; this is impossible to achieve at 20 microns, so the best one can do is pick a particular layer in the atmosphere to match the coupling to. This is likely to be a limitation for fluctuations occurring in 3 dimensions.
- 03 Dec 2004
Calibration Plan Section
NOTE: This text has been included in the current version of the Calibration Plan
- 08 Oct 2004
Note Regarding This New Text
We still need to get some NEW TEXT from the Cambridge group
re: the WVR.
Done, put in the doc file.
- 03 Dec 2004
Also, we had an e-mail discussion (myself and the Cambridge folks) concerning the optimal averaging time. Consider some very good phase conditions -- then the thermal WVR noise on 1 s may be much larger than the atmospheric phase noise. In that case we should actually not take the 1 s WVR samples, but should average up a bit -- to something like 5 s (this is Alison Stirling's calculation). Then the WVR (equivalent phase) noise will be lower by sqrt(5), and the atmospheric phase errors will be larger by something like 5^(0.6) (phase structure function); adding the WVR noise and atmospheric phase in quadrature, you still end up with lower phase errors than the WVR thermal noise on 1 s. During poor conditions you won't want to do that; you'll want 1 s WVR data.
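The averaging trade-off described above can be sketched numerically (Python; the 1 s noise levels below are made-up "very good conditions" numbers, and the 0.6 structure-function exponent is the value quoted in the text):

```python
import math

def residual_phase(T, wvr_noise_1s, atm_phase_1s, alpha=0.6):
    """Residual phase error (deg) after averaging WVR data over T seconds.
    WVR thermal noise averages down as 1/sqrt(T), while the uncorrected
    atmospheric phase grows as T**alpha (phase structure function); the
    two independent terms add in quadrature."""
    wvr = wvr_noise_1s / math.sqrt(T)
    atm = atm_phase_1s * T**alpha
    return math.hypot(wvr, atm)

# Hypothetical very good conditions: WVR thermal noise dominates on 1 s
wvr_1s, atm_1s = 10.0, 1.0  # deg (illustrative only)
for T in (1, 5, 10):
    print(f"T = {T:3d} s: {residual_phase(T, wvr_1s, atm_1s):.2f} deg")
```

With these numbers, averaging to ~5 s roughly halves the residual phase error relative to using raw 1 s samples, which is the effect Alison Stirling's calculation points to.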
A question -- we will be averaging down to something like 10-100 s ANYWAY (after applying the WVR data) -- does this averaging do what we want for free? Not quite -- at the high frequencies, the 1 s thermal noise will result in some decorrelation. Richard Hills' concern: if we do fancy pre-averaging of the WVR data to optimize the residual phase errors, we still need to keep track of what the decorrelation is.
- 28 Sep 2004
I am in the process of adding some sentences about the general problem of going from WVR measurements to optimal phase corrections.
- 14 Oct 2004