Concerns Regarding Purchase Specification for VertexRSI Production Hexapod

The following issues were raised regarding the 2006-01-20 version of the "Technical Specification Subreflector Positioner (Hexapod)" (Vertex Document No. TS-1005247-22170, Revision 1.1). This document was sent to Science IPT by Antenna IPT on 2006-06-22:

Issues Raised by DarrelEmerson:

  1. Section 6.1.1, under "Daytime", there is a spec on being able to tolerate full solar heating, with a solar flux up to 1290 W/m2. However, the hexapod has to operate when the antenna is looking directly at the Sun, when the solar flux onto the subreflector may be considerably higher. (How much higher? With 100 m2 of primary and 2% primary specular reflection, it could be 2 to 3 times higher; see the rough sketch after this list. Are those values reasonable?) This will probably raise the temperature of the subreflector considerably, but I don't know by how much. Does the "Max. surface temp: 65 C" refer to the struts of the hexapod, to the subreflector itself, or to something else? Something should be added to cover this situation, e.g. "... full solar heating of the hexapod from any direction, in addition to conducted and radiated heat from the subreflector, which has a maximum temperature of ??? C."

  2. Section 7, EMI requirements. I think these are inadequate. I believe the basic requirements quoted here are the ones we've used for the antenna in general, where radiation levels at frequencies below 12 GHz were used to make verification easy for the contractor. That's fine for most of the antenna. However, the subreflector and its electronics are IN THE MAIN BEAM OF THE RECEIVER FEEDS. In setting general RFI limits, such as in ITU-R Recommendation RA.769, it is usually assumed that interference will be picked up in an isotropic sidelobe, i.e. an antenna with 0 dBi gain. For anything in the neighborhood of the subreflector, however, the gain of the feed has to be taken into account. The gain of an ALMA receiver feed is about 30 dBi, so the EMI requirements for anything near the subreflector should be ~30 dB more stringent than what we've specified elsewhere for the antenna (see the dB arithmetic sketched after this list). Given how critical this is, it may not be adequate to limit the requirement to 12 GHz; can we at least include a number for 30 GHz? One thing that worries me is that the normal RFI precautions (filtered leads, the usual shield in front of a display) are probably fairly ineffective above 30 GHz. At the very least, there should be a statement pointing out that this area of the antenna is a thousand times more susceptible to interference than anywhere else in or around the antenna, so particular care needs to be taken.
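To make the arithmetic in item 1 concrete, here is a rough back-of-envelope sketch (a Python illustration, not from the specification; the spot area over which the reflected sunlight lands near the subreflector is an outright assumption, and the result scales inversely with it):

    # Back-of-envelope solar flux at the subreflector when pointed at the Sun.
    # All inputs are illustrative assumptions, not specification values.
    S0 = 1290.0          # W/m^2, max solar flux from Section 6.1.1
    A_primary = 100.0    # m^2, primary collecting area (the round number above)
    refl_spec = 0.02     # fractional specular reflectivity of the primary
    A_spot = 1.0         # m^2, ASSUMED area the reflected light lands on

    P_refl = S0 * A_primary * refl_spec   # W redirected toward the subreflector
    flux_extra = P_refl / A_spot          # additional W/m^2 at the subreflector
    multiplier = 1.0 + flux_extra / S0    # direct Sun plus reflected light
    print("Extra flux ~ %.0f W/m^2; total ~ %.1fx direct solar"
          % (flux_extra, multiplier))

With a ~1 m2 spot this gives a total of about 3x the direct solar flux, i.e. the upper end of the 2-3x estimate; a tighter concentration of the reflected light would push it higher.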
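On item 2's 30 dB point: tightening a power limit by the ~30 dBi feed gain corresponds to a factor of 10^(30/10) = 1000 in power, which is where the "thousand times more susceptible" figure comes from. A minimal sketch, using a placeholder baseline limit rather than the actual specification value:

    # Tightening an EMI limit by the receiver feed gain.
    # The baseline limit is a PLACEHOLDER, not the actual ALMA spec value.
    feed_gain_dBi = 30.0         # approximate gain of an ALMA receiver feed
    baseline_limit_dBm = -60.0   # placeholder limit assuming a 0 dBi sidelobe

    sr_limit_dBm = baseline_limit_dBm - feed_gain_dBi  # tightened near the SR
    power_ratio = 10 ** (feed_gain_dBi / 10.0)         # 30 dB = 1000x in power
    print("Near-subreflector limit: %.0f dBm (%.0fx more stringent in power)"
          % (sr_limit_dBm, power_ratio))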

Issues Raised by PeterNapier:

Obviously missing is anything to do with control of tilt. I understand that this is because of the lack of a CRE from Project Management. However, if in the end ALMA decides to implement the tilt itself (and even if we don't), it would be wise for us to have the source code for the control software from the vendor. I thought there was a general requirement for VA to deliver all source code to us (maybe I'm wrong on this), but VA does not appear to be passing any such requirement on to its supplier in this document.

Just a Comment by JeffMangum:

Just a comment about the requirements for the "Focus Switching" mode (used for frequency-switched measurements). The total shift needs to be lambda/4, i.e. +-lambda/8 (with suitable Walsh function modulation of the + and - shifts). At 30 GHz (the longest-wavelength, and hence most demanding, case) this corresponds to +-1.25 mm, so the specified +-1.5 mm appears to be sufficient.
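As a quick check of these numbers (a sketch only; 30 GHz taken as the worst case):

    # Focus-switching throw: total shift = lambda/4, i.e. +-lambda/8.
    c = 2.998e8        # m/s, speed of light
    f = 30e9           # Hz, longest-wavelength case considered
    lam = c / f        # wavelength, ~10 mm at 30 GHz
    throw = lam / 8.0  # one-sided focus shift
    print("Required throw: +-%.2f mm (spec: +-1.5 mm)" % (throw * 1e3))

This prints +-1.25 mm, comfortably inside the specified +-1.5 mm.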


Subreflector Positioning During Pointing Measurements

This topic was discussed during the internal review of the pointing and bandpass calibration example documents. The conclusion of this (brief) discussion was the following:

Pointing measurements for a given science band are made with the subreflector positioned for optimum science-band sensitivity, but are measured in a lower-frequency band (i.e. Band 3).

BUT, Peter Napier notes the following:

I don't think your assumption that the subreflector tilt will remain unchanged when we offset to a pointing calibrator is valid. Given the reduced number of calibrators available at the shortest wavelengths, I think it likely that we will want to do the pointing calibration at a lower frequency than the observing frequency. The pointing correction is then made after applying the pre-measured pointing collimation offset between the calibration and observing bands. When we change to the lower-frequency calibration band, we will want to position the subreflector at the correct position for that band. So I think your specification should be set as an increment on 0.6 arcsec rather than 2". Do others agree?

...which Peter later followed with...

My suggestion that we move the subreflector to the correct position for the low-frequency calibration band was not based on the desire to improve the sensitivity of the calibration slightly, but rather on the desire to reduce the amount of calibration and bookkeeping that the project must maintain. If you do not move the subreflector to the correct position for the calibration band, then we will have to calibrate and keep track of the collimation of the calibration band, with the subreflector in the correct position, for every one of the higher-frequency bands. This sounds like a lot of work to me, especially when a receiver is changed, but it sounds as though the Calibration Group have discussed this issue and consider the bookkeeping feasible, so I bow to their judgment.

...and Rick noted the following...

If you don't tilt the subreflector when you go to the pointing calibrator at a lower observing frequency, it will cause a few percent loss of signal. It would be better to accept that small signal loss than to create a more difficult spec and to take on the complexity and potential ambiguity in pointing and phase caused by moving the subreflector. Hence, unless the scientists strongly object, we will plan to calibrate without tilting the subreflector, and we should leave the spec as written: tilt repeatability better than 31 µrad.

During our discussion of this topic while reviewing the pointing calibration example document, we recognized the importance of both the sensitivity to pointing calibrators at the higher frequencies and the bookkeeping issues (Peter's points above), but we also realized that practicality likely dictates that we position the subreflector for optimum science during pointing (and fast-switching) observations (Rick's point above).

There may be a middle ground in all of this, though. Since pointing calibration measurements will interleave with science target observations over timescales of tens of seconds, perhaps we can do the following:

Reposition the subreflector for optimum efficiency when the move to the pointing calibrator takes more than 10 seconds.

I suspect that this will not adversely affect the subreflector positioning specification.
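A minimal sketch of this decision rule (the function name is hypothetical; the 10 second threshold is from the suggestion above):

    # Hypothetical middle-ground rule: retilt the subreflector for the
    # calibrator band only when the antenna move is long enough to hide
    # the subreflector motion.
    MOVE_TIME_THRESHOLD_S = 10.0   # threshold suggested above

    def should_reposition_subreflector(move_time_s):
        """Return True if the slew to the pointing calibrator is long
        enough to reposition the subreflector for optimum efficiency."""
        return move_time_s > MOVE_TIME_THRESHOLD_S

    for slew_s in (15.0, 3.0):   # long slew vs. a fast-switching hop
        print(slew_s, should_reposition_subreflector(slew_s))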


Subreflector Positioning During Fast Switching

The following discussion between Richard Hills and Darrel Emerson took place via email in December 2006 and concerned the benefits of tilting the subreflector for optimum efficiency. The conclusion of this discussion was the following:

During Fast Switching (FS) observations the subreflector will be positioned for optimum efficiency at the science observing band.

======== Richard Hills said to Darrel Emerson (2006-12-09) ========

Dear Darrel,

Yes, I had worried a bit about this issue and had also been assuming that if we do implement the tilts we would not try to adjust them when doing the band-switching version of fast switching - on the same grounds that you give... Detailed responses to Darrel are interpolated below:

>
> With that assumption, the implication is that the SR will not be optimally positioned for the duration of the calibration measurement. However, since the time spent on the calibrator will normally be small compared to the time spent on the source, the slight loss of gain on the calibrator will be acceptable - although it will still have to be taken account of in the overall system calibration software. The worst case is probably for observations in Band 4, where the feed is roughly diametrically opposite the feed for Band 3. That is, for the calibration measurement in Band 3, while ideally the SR would be tilted ~0.9 degrees towards the Band 3 feed (half the value in your lower table on p.8 of your draft Memo 545), in fact it will be tilted about 0.93 degrees (half the angle in the same table, for Band 4) in the opposite direction. So, in this scenario, the SR will be tilted about 1.8 degrees away from the optimum Band 3 position.
>
Agreed, although I would have thought that if you were using Band 4 for your astronomy you would not usually bother to switch to Band 3, because the gain in signal-to-noise on the calibrator would be small and not sufficient to justify the loss of common-mode cancellation, e.g. in the electronics. With the high-frequency Bands 7-10 we probably will not want to implement the tilt at all, so we should just consider the losses for Band 3 in the untilted case.

> How does the loss of gain vary with frequency, and how does it vary with tilt angle, for increasing SR tilts away from the optimum? If the loss-of-gain mechanism is vignetting, then to first order the gain loss should be frequency independent (true?). On the other hand, if the gain loss is from phase errors introduced on the primary by the offset optical alignment, the gain loss would go roughly as the square of frequency.
>
> For tilts, if the gain-loss mechanism is vignetting, then very roughly the gain loss would be about proportional to offset angle. If the gain-loss mechanism were phase errors, then the loss would be roughly proportional to the tilt-angle error squared. (True?)
>
At frequencies as low as Band 3 or 4, the loss is due to vignetting, not phase errors.

> From your own estimate (p.10 of your memo), if the SR at 200 GHz is NOT tilted, the loss of gain is about 1.5%. Assuming all of this is vignetting, which (with my crude assumptions above) is then frequency independent and (perhaps) proportional to offset tilt angle, then if the tilt angle of the SR is away from optimum by twice that at Band 3 - which it will be, as Band 4 is diametrically opposite Band 3 - the gain loss while calibrating in Band 3, with the SR left at the optimum position for Band 4, will be about 3%.
>
> If the gain loss is from phase errors rather than vignetting, the extra loss will be at least twice that, but vignetting is probably the better guess. (True?)
>
I think that the spill-over is not entirely frequency independent because of the broadening of the diffraction features at lower frequencies. I will try to run some cases to find the dependence on angle. I suspect it is between linear and quadratic.

> CONCLUSION: With the above hand-waving, we will be 3% down in sensitivity in Band 3 if the SR is set at the optimum for Band 4. Even with this loss, it is probably still the best observational strategy NOT to reposition the subreflector during fast-switching calibration at a different frequency.
>
> Do you agree with the conclusion? Is the hand-waving estimate of gain loss anywhere near the truth?
>
I agree with the conclusion, but I suspect that in this worst case the loss in G/T will be quite a bit larger than 3%; even at, say, 10%, the effect on calibration time is very small. (My recollection of Mark's memos on this is that the time spent on the calibrator is almost always small compared to the time spent moving.)

Best, Richard

======== End of Richard Hills response to Darrel ========
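A back-of-envelope restatement of the scaling argument above (a sketch under the assumptions stated in the emails only: ~1.5% loss at 200 GHz for the nominal untilted error, scaling linearly with tilt error if vignetting dominates or quadratically if phase errors dominate):

    # Rough gain-loss scaling with subreflector tilt error, per the
    # hand-waving above. Reference: ~1.5% loss at 1x the nominal tilt error.
    LOSS_AT_UNIT_TILT = 0.015

    def gain_loss(tilt_error_units, mechanism="vignetting"):
        """Linear if vignetting dominates; quadratic if phase errors do."""
        if mechanism == "vignetting":
            return LOSS_AT_UNIT_TILT * tilt_error_units
        return LOSS_AT_UNIT_TILT * tilt_error_units ** 2

    # Band 4 <-> Band 3 worst case is ~2x the nominal tilt error:
    print("Vignetting:   ~%.1f%%" % (100 * gain_loss(2.0)))           # ~3%
    print("Phase errors: ~%.1f%%" % (100 * gain_loss(2.0, "phase")))  # ~6%

As Richard notes, the real worst-case loss in G/T may be somewhat larger, but even ~10% would have little effect on calibration time.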

