Continuum Total Power Issues
Continuum Total Power Sampling:
Discussions which took place on or about 2004-07-26
Darrel Emerson said...
I don't mind either way.
I regard the discussion about anti-alias filters and so on as
details of a specific implementation, rather than being anything
fundamental in principle; the key thing is the basic sample rate.
If the implementation is to be carried out by using the anti-alias
filter, oversampling and subsequent averaging, then yes there are
additional requirements (such as amplitude and phase response of
the anti-alias filter and initial oversample rate.) But an alternative
implementation might (say) involve a V/f converter with a frequency
counter performing the integration, in which case none of the
anti-alias filter requirements are relevant - although the 2 kHz is,
and there might still be linearity & dynamic range questions.
So, isn't most of this level of stuff in the category of engineering
detail, where the engineer should be trusted to make his own best
judgement on the specific implementation, rather than it being
something to bother the CCB with? Maybe I'm wrong; you can judge
better than me.
Stacy is about to put the CRE into EDM. Please feel free to
add as much or as little of the detailed requirements we talked about,
according to whatever you think the most useful.
Bill Brundage then replied...
Are you saying that the basic requirement for astronomical total power
data is only: "500 microsec square box integrations at 100% efficiency,
16 bit values (LSB < 30% of rms of full scale output) of each 2 GHz
bandwidth baseband channel"? Is this adequate for a worst-case
observation / Thot calibration cycle at constant gain with
Tsys(max)/Tsys(min) = 11? Please confirm.
This still leaves us with the question of which antennas: only the "4
dedicated total power antennas with nutators", or all 64 ALMA antennas,
and/or all Compact Array (Japan) antennas?
After your reply, I will put it in the CRE technical comments.
By the way, no known commercial V/F converter is sufficiently fast to
produce a sufficient number of counts at 500 microsec output intervals.
So an oversampling ADC with a constant-delay anti-alias filter appears to
be the only practical implementation (unless we can prove that a
sigma-delta ADC can achieve 100% integrations without smearing across
integration boundaries).
Brian Glendenning commented...
I think the first thing to determine is whether the traffic can fit on a CAN
bus if we use a non-AMB protocol.
Darrel Emerson then responded...
Guys: I think I pretty much agree with Bill's summary, but
maybe I've forgotten something:
(1) I believe LSB being 30% of rms is (just) adequate. Taking the sampling
error as roughly an rms of half the sample step, or +/-15%, the
degradation [sqrt(1+.15^2)] would be 1.1%. Maybe 25% (0.8%) would
be safer than 30%. This is all a bit simple minded, but I don't
think it can be too far wrong?
(2) With 0.5 ms integration, 2 GHz bandwidth, the (B.t) factor
is (2.10^9*.5*10^-3) = 10^6, so sqrt(B.t) is 1000. With 4 levels
per rms, that becomes 4000. With ratio of Tsys(max/min) of 11, this
becomes 44000. A 16-bit number gives us 65536 levels, so a
16-bit number for 0.5 ms integration is JUST ok.
(3) I think we do need to add something about the pre-sampling
filtering, such as: "amplitude to be flat within 1%(?) out to the
post-detector bandwidth of 2 kHz, and audio phase-shift departure
from a linear phase response within the 2 kHz band should be
less than 10(?) degrees." This defines the requirements of any
anti-aliasing filter before the sampler.
Does that all sound about right? What have I forgotten?
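[Darrel's arithmetic in points (1) and (2) can be checked in a few lines of Python; this sketch simply restates the numbers from the text above.]

```python
import math

# Point (1): s/n degradation from quantization, taking the sampling error
# as roughly +/-15% rms (half of a 30%-of-rms LSB step).
degradation = math.sqrt(1 + 0.15**2) - 1
print(degradation)   # ~0.011, i.e. ~1.1% degradation

# Point (2): dynamic range needed for a 0.5 ms integration.
B = 2e9                # pre-detector bandwidth, Hz
t = 0.5e-3             # integration time, s
levels_per_rms = 4     # LSB at ~25-30% of the rms noise
tsys_ratio = 11        # Tsys(max)/Tsys(min)

bt = math.sqrt(B * t)                      # radiometer sqrt(B.t) factor
levels = bt * levels_per_rms * tsys_ratio  # total levels required
bits = math.ceil(math.log2(levels))

print(bt)      # 1000.0
print(levels)  # 44000.0
print(bits)    # 16 -> a 16-bit word (65536 levels) is just adequate
```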
...and Darrel further commented...
... and about the question of how many antennas?
As far as I'm concerned, we should have identical TP detectors
on all 80 antennas. For something as cheap as a TP detector, I think
it would be madness to compromise. The compromise might save us a little
(very little) on data rates, but would probably be more expensive overall
since 2 different designs would be needed. If we really want to cut
down on TP data rates, it presumably could be done by software, keeping
all the hardware identical.
Does everyone agree with this? Do we need an ALMA Board decision
to define this? (Please say we don't!!)
Al Wootten commented...
I think the ALMA top level science requirements say all antennas have
TP detectors but only four have nutators.
- 03 Aug 2004
...and Mark Holdaway (via email from Al Wootten) added the following...
Mark had another point, which I try to paraphrase thusly: Currently, our
paradigm is a 5% error in the relative flux scale. This will not be
good enough for some mosaicked observations, where such a large
difference in the interferometer flux scale and the total power flux
scale will result in imaging errors. How can we improve this? Mark
proposes we contemplate a cross calibration of total power and
interferometric systems by observing a bright quasar with both. At 880
GHz there should be one 0.5 Jy QSO within 15 degrees or so of the mosaic
target source. We can observe that source, but the minute or so we
would ideally like to devote to it will not give good enough S/N;
we would need many minutes, perhaps an hour. Over that period,
though, the sky and the instrument are likely to have changed, so we
cannot get adequate S/N quickly enough. However, if we observe with all
64 antennas, although the TP flux scale on any particular antenna may be
say 8% off, the mean of all of them would be like 1% off, enabling the
accurate cross calibration of total power and interferometric
observations, and allow us to make a high quality mosaicked image.
I believe that the change should apply to all 64 antennas.
Mark notes that you can also find backing material for this argument in
section 10.2 of CalTotalPower
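[Mark's sqrt(N) argument can be stated numerically. A minimal sketch; the 8% and 64-antenna figures are from the text, and the per-antenna flux-scale errors are assumed independent and zero-mean.]

```python
import math

# Assumes per-antenna TP flux-scale errors are independent and zero-mean,
# so the error of the mean falls as 1/sqrt(N).
per_antenna_err = 0.08   # ~8% TP flux-scale error on any one antenna
n_antennas = 64

mean_err = per_antenna_err / math.sqrt(n_antennas)
print(mean_err)   # 0.01 -> ~1% error on the 64-antenna mean
```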
- 14 Sep 2004
Continuum Total Power Sampler Design Considerations:
Discussion extracted from the CRE comments on this design...
Darrel Emerson discussed considerations for implementation of 2 kHz data output rate per total power detector in an email of 14 July 2004, which is reproduced below for ready reference.
Hi Bill, Clint:
This is my first cut at answering Bill's questions. This is just a work in progress, and in particular I want to run it by Larry to check for blunders. At the moment I've only shown it to the scientists in the list below, so please don't take any of it as a firm answer to anything. Some further compromises may be possible if it turns out to be critical.
This is just to let you see the way I'm heading.
-------- Original Message --------
Subject: Parameters for continuum total power sampling
Date: Wed, 14 Jul 2004 16:45:05 -0700
From: Darrel Emerson <email@example.com>
To: Al Wootten <firstname.lastname@example.org>, Robert Laing <email@example.com>, firstname.lastname@example.org
Hi Al, Robert, Tom:
Bill Brundage asked for guidance in the implementation of a continuum detector anti-aliasing filter and sampler, from the point of view of scientific requirements. All this is of course dependent on the results of the eventual Change Control request.
Bill's strawman model consists of the square law detector at each 2-GHz-wide IF, followed by an anti-aliasing filter, followed by a digitizer (not itself incorporating square-box or other integration), followed by a digital averager. The digital averager would then represent a very good approximation to square-box integration, with a data dump frequency of 2 kHz or greater. This is not the only possible implementation.
Does the following sound plausible? I've not run it by Larry yet, but will do so (Larry's out of town at the moment.) It probably needs to be read in conjunction with my notes on continuum sampling frequency. I could very easily have made multiple blunders.
Specifically, Bill asked for guidance on the following parameters:
"A Science requirement giving nominal or limit values for 8 parameters:
, anti-alias filter with response at frequency , rate, LSB = * delta Vrms, boxcar integrated data rate and integration efficiency ought to be sufficient for implementation by BE IPT. Science IPT may prefer a smaller number of parameters, but that might result in a less satisfactory implementation that limits the potential science."
There is a requirement for the phase response of the anti-aliasing filter to be fairly linear over the range of frequencies of interest. (If it isn't, it introduces a distortion into the OTF beam shape, which differs with each rectangular-grid OTF scan direction, and becomes extremely bad for providing good short-spacing UV data.) Based on that, I opt for a Bessel low-pass filter, and somewhat arbitrarily (because it matches the IC chip Bill already has in mind) adopt a 4th-order Bessel filter.
The maximum frequency of interest, after detection, is 663 Hz. This is derived from a scanning speed of 0.5 degs/second, and Nyquist sampling lambda/(2.D) at 950 GHz. If we want this frequency to be attenuated by the anti-aliasing LP filter by not more than 1%, then the -3 dB point of the filter has to be at 3.74 kHz (see below). [This is fairly conservative.]
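[The 663 Hz figure can be reproduced directly from those two numbers. A quick check; the 12 m antenna diameter is taken from later in this note.]

```python
import math

# Reproduces the ~663 Hz post-detection frequency quoted above:
# scanning at 0.5 deg/s while Nyquist-sampling lambda/(2D) at 950 GHz.
c = 299792458.0          # speed of light, m/s
f_obs = 950e9            # observing frequency, Hz
D = 12.0                 # antenna diameter, m (stated later in the note)
scan_rate = 0.5 * math.pi / 180.0       # 0.5 deg/s in rad/s

beam_sample = (c / f_obs) / (2.0 * D)   # Nyquist interval lambda/(2D), rad
f_max = scan_rate / beam_sample         # highest post-detection frequency
print(f_max)   # ~664 Hz, matching the ~663 Hz quoted above
```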
From my notes below, I derive:
- The -3 dB point of the anti-aliasing filter: 3.74 kHz.
- Filter type: 4th-order Bessel low-pass (see equations below).
- Sample rate: 18.3 kHz or faster (see below). If these samples are averaged to a 2 kHz data dump rate, then a 20 kHz raw sample rate may be convenient, averaging 10 samples.
- LSB: 25% to 30% of the rms noise, which implies a 14-bit digitizer at this 18 or 20 kHz rate. The data dumps at 2 kHz should be 16-bit values.
- Equivalent boxcar integrated data rate: 2 kHz or faster (see notes from Change Request text).
- Efficiency: not more than 1% effective observing time should be lost, in addition to the compromises mentioned above.
Detailed notes on low-pass filter and sampling parameters:
Choosing a Bessel design because of its near-linear phase characteristic, which is particularly important when on-the-fly single dish data is to be combined with interferometric data:
The maximum frequency of interest is 663 Hz. If we allow (rather conservatively) the amplitude to be attenuated by only 1% at 663 Hz, then for a 4th-order Bessel filter the -3 dB (0.707) frequency comes out as 3.74 kHz.
With the -3 dB response set to 3.74 kHz, the phase/frequency response at 663 Hz is still linear, within much less than 1 degree. (With this filter, even at the -6 dB point, the departure from linear phase/frequency response is less than 3 degrees, although it deteriorates rapidly beyond that point.)
The scientific requirement is only that the departure from linearity be less than 13 degrees at the highest frequency of interest (663 Hz), which is easily met. [This requirement is derived by equating a fraction of the 0.6" antenna pointing spec to the phase shift of the highest spatial frequency present in a 12-meter antenna at 950 GHz.]
If the data are sampled at frequency 2.Fs, then a frequency (Fs + x) present in the original data is aliased back to a frequency of (Fs - x). With the 3 dB frequency at 3.74 kHz, using a 4th-order Bessel LP filter the -40 dB (1%) point is at 17.65 kHz. We can allow noise at this -40 dB point to alias back into 663 Hz. If we set Fs - x = 663 Hz and Fs + x = 17.65 kHz, then 2*Fs=17.65 + 0.663 kHz = 18.3 kHz.
If this Bessel filter, with F(3dB)=3.74 kHz, is sampled at 18.3 kHz, then noise products aliasing back within the passband of interest (0-663 Hz) are at 1% or less.
If the anti-aliasing filter has a 3-dB bandwidth of 3.74 kHz, its effective integration time t is about 1/3.74 = 0.27 milliseconds. With a pre-detector bandwidth B of 2 GHz, the sqrt(B.t) factor in the radiometer equation is sqrt(2.e9 * 0.00027) = 735. In order for <1% degradation in s/n from an inadequate number of bits, the LSB should be no greater than about 25% (maybe 30%) of the rms noise. So, if the detected DC voltage is at the maximum digitized value, 735*4 = 2940 digitizer levels are required. However, the headroom in the detector should allow for at least a factor of 4(?) above that, to be able to cope with clouds and other variations in atmospheric emission and system noise, giving a requirement for 12000 levels: 16384 corresponds to 14 bits.
So, a 14 bit digitizer is required. Note that, after reducing the 18.3 kHz raw data rate to 2 kHz by averaging (in practice a sample rate of 20 kHz might be used, averaging 10 samples), higher precision is required, by roughly the square root of the ratio of the anti-aliasing filter bandwidth to 2 kHz. A 16-bit averaged result, at 2 kHz, is adequate.
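[The bit-count arithmetic in the two paragraphs above can be sketched as follows; the ~0.27 ms effective integration, factor-4 headroom, and 10-sample averaging are the text's own numbers.]

```python
import math

# Raw digitizer: radiometer noise over the filter's ~0.27 ms effective
# integration, 4 levels per rms, and a factor-4 headroom (all from the text).
B = 2e9                      # pre-detector bandwidth, Hz
t_eff = 0.27e-3              # effective integration time of the filter, s
snr = math.sqrt(B * t_eff)   # ~735
levels = snr * 4 * 4         # 4 levels per rms noise, times factor-4 headroom
bits_raw = math.ceil(math.log2(levels))

# Averaging ~10 raw samples down to the 2 kHz dump rate gains sqrt(10)
# in precision, i.e. roughly 1.7 extra bits.
bits_avg = math.ceil(bits_raw + 0.5 * math.log2(10))

print(bits_raw)   # 14 -> the 14-bit digitizer quoted above
print(bits_avg)   # 16 -> 16-bit averaged results at 2 kHz
```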
RESPONSE OF A 4th-ORDER BESSEL LOW PASS FILTER:
See for example http://www.crbond.com/papers/bsf.pdf
For the amplitude response, with w the angular frequency
(w=2.pi.f), I use:
A=105/sqrt(w^8 + 10.w^6 + 135.w^4 + 1575.w^2 + 11025)
For the phase response, I use
phi(radians) = arctan[(105.w - 10.w^3)/(105 - 45.w^2 + w^4)]
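[These two expressions can be checked numerically against the 3.74 kHz and 17.65 kHz figures quoted earlier. A short Python sketch; the normalized -3 dB frequency w ~ 2.114 of a 4th-order Bessel filter is an assumed scaling factor not stated in the note.]

```python
import math

# Normalized 4th-order Bessel responses from the note above (unit group
# delay at w = 0; the -3 dB point of this normalization is at w ~ 2.114).
def bessel4_amp(w):
    return 105.0 / math.sqrt(w**8 + 10*w**6 + 135*w**4 + 1575*w**2 + 11025)

def bessel4_phase_deg(w):
    return math.degrees(math.atan2(105*w - 10*w**3, 105 - 45*w**2 + w**4))

W3DB = 2.1139    # normalized -3 dB frequency (assumed standard value)
F3DB = 3740.0    # chosen -3 dB frequency, Hz

def amp_at(f_hz):
    return bessel4_amp(W3DB * f_hz / F3DB)

print(amp_at(663.0))     # ~0.99: only ~1% attenuation at the top science freq
print(amp_at(17650.0))   # ~0.01: the -40 dB point quoted for alias protection

# Departure from linear phase at 663 Hz (linear phase = w radians for unit
# group delay): well under 1 degree, as claimed in the note.
w663 = W3DB * 663.0 / F3DB
dep = bessel4_phase_deg(w663) - math.degrees(w663)
print(dep)
```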
-- JeffMangum - 14 Oct 2004
Continuum Total Power Attenuator Setting:
Discussion between Mark Holdaway and Jeff Mangum regarding attenuator level setting timescales
I think 30s is fine.
I am operating under the assumption that, in fast switching,
we WILL NOT be observing a load or anything -- presumably a
load will change the instrumental settings of various components
(like the gain before digitizing), and we DON'T WANT to do
anything like that, as any instrumental tweak could result in
an instrumental phase. (That being said, a fast switching cycle
could be 16s on target, 1.5s to move, 1s on calibrator, 1.5s to
move back -- or 20s altogether.)
The "target sequence" of fast switching will be done at a
similar airmass. HOWEVER, every 5-10 minutes, we need to
measure the drift in instrumental phase between the CAL FREQ
and the TARGET FREQ -- the so-called "instrumental sequence".
In the "instrumental sequence", we will go off to a source that
is bright at both the CAL and TARGET freqs -- which might be
10-20 degrees away -- and observe for about 10s, switching
freqs between CAL and TARGET -- BUT STILL, we want to observe
with the same instrumental settings -- so, we are trying to
keep all of these observations at about the same airmass.
I may be stupid about this, but I haven't factored the cal load
into the fast switching at all.
OK, so can you think of an observing cycle that WOULD require
doing a 30s cal cycle?
One possibility is in polarization observations, we might want to
go to the same distant source many times during the observations
to act as a pol calibrator (you know, spanning a range of
parallactic angles) -- and we will want the flux of that source to
be well calibrated so we can use all of the data together.
We will often have sufficient SNR on such a polcal source (usually it will
be VERY bright) in only a few seconds. We don't lose a lot by
sitting on that source for 30 seconds (I am guessing that polarization
observations will account for 5-15% of ALMA observations) -- but
at least during these observations, the (astronomical) cal could be
limited by a 30s instrumental cal cycle.
ANOTHER case would be a sky tip -- we will certainly be doing sky tips
with ALMA antennas early on -- perhaps later in ALMA commissioning we will
figure out a more efficient way of doing this, from weather station data
plus WVR data, or from a dedicated tipping radiometer or something.
AGAIN, if we have to spend 30s on each tipping elevation during early
commissioning, not much is lost.
Finally, there will be some astronomical projects which will observe
sources for only a few seconds, and will survey hundreds of sources.
In such observations, we could schedule sources at similar elevations
to be observed together -- this could be a detail that the dynamic
scheduling needs to know about.
> Hi Mark,
> I have been getting a lot of questions from the BE, FE, and COMP
> engineers lately asking what the timescales will be for
> observe/calibrate cycles. They need to know this in order to
> determine how often they will need to reset the input power level, via
> step attenuators, into the TP backends. My gut feeling is that the
> shortest interval between observe/calibrate cycles will be something
> like 30 seconds. Note that what I mean by "calibration cycle" is the
> time between an observation on the sky, followed by an observation of
> a load, followed by another observation on the sky that might be at a
> significantly different airmass.
> Let me know what you think. Also, if you like, I can start a wiki
> topic on this subject.
-- JeffMangum - 28 Sep 2004