Josh Crabtree wrote:

'> Hi all, Andrey's nearfield listing for scan 103 is the average beam from
'> two individual beams 1 and 2. I imported his nearfield listing into NSI
'> and created this far field listing. There is no probe correction. The
'> listing is in KxKy, 108x108 points, with the same span as Andrey's
'> farfield listing:
'>
'> http://www.cv.nrao.edu/~jcrabtre/Scan103Investigation/FF_Scan103_KxKy_from_AndreyNF_listing.txt
'>
'> -Josh

Date: Mon, 01 Sep 2008 19:52:46 +0000
From: Richard Hills <rhills@alma.cl>
Subject: Re: [Alma-feic] (Scan 103) FF listing created in NSI from Andrey's near field listing

Dear Josh et al.,

I have made the obvious comparison of this Far Field data set with the one that Andrey sent. Our understanding is that Andrey's data set is just a direct Fourier transform of the Near Field, whereas the set Josh sent has been through the more elaborate NSI package.

The first point was that I found I had to invert the x and y axes to get any sort of match. This just means that the opposite convention for the sign of the phase in the NF has been assumed. It also means that the sign of the phase in the far field is reversed, and that the signs of some of the fitted quantities come out reversed.

I then made a simple spreadsheet to calculate the real and imaginary parts from the NSI data and (taking account of the reversed sign of the phase) fitted for a scaling factor on the amplitude and a constant phase offset such that the differences are minimized.
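
Fitting a single complex factor does both jobs (amplitude scale and constant phase offset) at once and has a closed-form least-squares solution. A minimal sketch, with synthetic arrays standing in for the two far-field listings (the array size, scale and offset values below are hypothetical):

```python
import numpy as np

def fit_scale_and_phase(ref, other):
    """Least-squares complex factor c minimising sum |ref - c*other|^2;
    |c| is the amplitude scale, angle(c) the constant phase offset."""
    return np.vdot(other, ref) / np.vdot(other, other)

# Synthetic stand-ins for the two far-field listings (hypothetical values)
rng = np.random.default_rng(0)
ref = rng.standard_normal((108, 108)) + 1j * rng.standard_normal((108, 108))
other = ref / (1.25 * np.exp(1j * 0.3))   # differs by scale 1.25, phase 0.3 rad

c = fit_scale_and_phase(ref.ravel(), other.ravel())
print(abs(c), np.angle(c))                # recovers ~1.25 and ~0.3 rad
```

Note that `np.vdot` conjugates its first argument, which is exactly what the normal-equation solution requires.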

I then made the crude plots in "comp-plots" attached. Clearly the amplitude patterns are similar. The Amp-diff x 4 plot shows that there are differences, and in fact these mostly correspond to a difference in beam position of ~0.15 degrees in each coordinate.

When one includes the phase one finds substantial differences. This shows up in the length of the vector difference and in the plots of the differences in the real and imaginary parts.

It would be easier to see what is going on with false colour plots or the like, but it seems to me that the largest differences occur where the phase is changing most rapidly. This suggests that the problem is some sort of smoothing or in the interpolation onto the requested grid.

Things that I notice in the listing that may be of relevance are:

1) In the "Near-field display setup:
Measurement type: NF Planar XY
Scan options: CV Off, CP On, Bi-dir Off, H-scan
Beamset smear: 0.00016 m
Scan plane compensation: On"
I think scan plane compensation is adjusting the Z-position for known inaccuracies in the scanner mechanism, is that right?

'> Yes, that is correct. -Josh

What does the Beamset smear do? What are CV Off and CP On doing?


'> Richard, these have to do with the scan setup. CV is Constant Velocity, and CP is Continuous Path. I copied the sections from the manual which address these parameters, below:
'>
'> CP on- "Continuous Path On"
'> 3.1.3.1 Continuous Path
'> Continuous Path, constant velocity, and bi-directional scanning describe timing of data taking during a measurement. Even with the fastest of receivers, there is a finite time over which the data are taken. If the probe is moving during this time, the data are said to be "smeared" over a larger area than if it was taken with the probe stopped. The Continuous path option sets whether data will be taken in Stop-motion or Continuous-motion modes. In Stop-motion mode there is no measurement smear. All measurements in Stop-motion mode are made in a "trigger-read" mode at each point as opposed to Continuous-path mode where measurements are triggered while the probe is moving. In most cases, measurements are made in Continuous-path mode because this mode is fastest. In Continuous-path mode, the smear is related to receiver speed. Smear = Velocity (m/s) * Integration time (sec). Along the cut, the data are pre-triggered in order to keep the center of the smear to be at the desired grid point.
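
For reference, the smear formula in this excerpt reproduces the "Beamset smear: 0.00016 m" shown in the display setup above. The velocity and integration time below are purely hypothetical values chosen to illustrate the arithmetic; the actual scan settings are not stated in the thread:

```python
# Smear = velocity (m/s) * integration time (s), per the manual excerpt.
# Both values below are assumed, chosen only to reproduce the listed smear.
velocity = 0.016         # m/s (hypothetical)
integration_time = 0.01  # s   (hypothetical)
smear = velocity * integration_time
print(smear)             # ~0.00016 m, matching the "Beamset smear" in the listing
```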

'> CV off- "Constant Velocity Off"
'> 3.1.3.2 Constant Velocity
'> The Constant Velocity option forces the system to always take data when the probe is moving at a constant velocity. At the beginning and end of a cut the probe accelerates (velocity ramps up) and decelerates (velocity ramps down). If the "Constant velocity" option is checked, the probe will start at a position far enough away from the first point so that by the time it gets to the first point it is at a constant velocity. At the end of the cut the probe will overshoot the last point so that the last point can be taken before deceleration begins. In most cases, you will want this option checked. In some cases, the scanner acceleration must be decreased, causing the overshoot to be beyond the limits of your scanner. If this occurs you may want to uncheck this box to force the scan anyway. Just remember that the triggering points for the data at the ends of the scan will not be at a constant spacing like the rest of the scan.

2) In the "Near-field setup:
Data: Preprocessed
Truncation: Off
Amplitude tapering: Off
Network correction: Off
Probe/AUT Z-axis: On, K-correction: Off
MTI gain: Off, MTI phase: Off"

What does "Probe/AUT Z-axis On" do?

'> This has to do with applying a phase shift to the second beam so that the two beams may be coherently averaged. If we have two nearfield listings, the phase values in the second listing will be shifted by 90 degrees to make up for the 1/4 wavelength difference between the two scans. In the case of this particular listing, I imported a single nearfield listing (Andrey's), and computed the farfield listing from just that one nf listing. So in this case, the Probe/AUT Z Axis correction should have had no effect, either on or off, since only one nearfield listing was used. Here's some documentation from the manual:

'> 5.2.4 Position Phase Correction
'> Position phase correction is used to adjust the phase of each point in the near field. There are two kinds of position phase correction: Probe/AUT z-position which affects all points in the near-field equally and K-correction which affects each point differently based on the distortion in the K-correction file.
'> 5.2.4.1 Probe/AUT z-position
'> The Probe/AUT z-position option adds phase equal to the probe's z-position and the AUT's z-position from the beam table. Often a dual-z scan is taken where a value of 0.0 and ¼ wavelength is used for the probe position. For high-gain antennas, the difference between these two scans is that the phase of the ¼-wavelength direct-path signal will shift each point by -90 degrees (increased path length = more negative phase). In addition, the phase of the multi-path signal between the probe and the AUT will be shifted by -270 degrees. Position correction compensates for the probe's movement in the direct path by shifting the phase back to the original plane (by adding 90 degrees to each point). Since the 90 degrees is added to all points, the multi-path signal also changes by +90 degrees. The result is that the two scans can be added coherently (using amplitude and phase).
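
As a sketch of why the correction lets the two scans be averaged coherently: consider a single near-field point with a direct-path term and a small multipath term. The quarter-wave move shifts the direct path by -90 degrees and the round-trip multipath by -270 degrees, so adding 90 degrees back aligns the direct terms while the multipath terms cancel in the average. The amplitudes below are hypothetical:

```python
import numpy as np

direct = 1.0 + 0.0j  # direct-path signal at the z = 0 plane (hypothetical)
multi = 0.1 + 0.0j   # probe<->AUT multipath term (hypothetical)

scan1 = direct + multi                         # probe at z = 0
scan2 = (direct * np.exp(-1j * np.pi / 2)      # direct path: -90 deg
         + multi * np.exp(-1j * 3 * np.pi / 2))  # multipath: -270 deg

# Probe/AUT z-position correction: add +90 degrees to every point of scan 2
scan2_corrected = scan2 * np.exp(1j * np.pi / 2)

avg = 0.5 * (scan1 + scan2_corrected)
print(avg)   # ~ (1+0j): direct path preserved, multipath cancelled
```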

3) In the "Far-field transform setup
FFT size: 512, 512
X/Y/Z shift= 0.000 m, 0.000 m, 0.000 m
Filter Mode: Max FF, Zoom: Off
Probe setup: Non-acquired
Probe model: None"

What does "Filter Mode: Max FF" do?

'> Set Max FF
'> This button forces the far-field span to be equal to the max far-field values specified at the time the file was acquired. It also forces the H and V-centers to zero. -Josh

I note that the data has apparently already been zero-padded to 512 by 512. This should mean that the data coming back from the transform will have a fine gridding and so the interpolation should have rather a small effect.
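
The point about fine gridding can be made quantitative: zero-padding a 108-point scan to 512 before the FFT refines the k-space grid by 512/108 ≈ 4.7, so interpolation onto the requested grid has little to bridge. A sketch, using the 0.75 mm scan interval quoted later in the thread:

```python
import numpy as np

N_scan, N_fft = 108, 512
dx = 0.75e-3                        # near-field scan interval in m (from the thread)

dk_raw = 2 * np.pi / (N_scan * dx)  # k-space spacing without padding
dk_pad = 2 * np.pi / (N_fft * dx)   # k-space spacing after zero-padding to 512

print(dk_raw / dk_pad)              # ~4.74: padding makes the FF grid ~4.7x finer
```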

Also attached is the output from my analysis sheet for these two data sets. The overall efficiency number (in red) differs by just over 1%. More significantly, the derived phase centre (X,Y,Z) differs substantially even when the sign inversions are taken into account - if I use the position of one to reduce the other, the phase efficiency is only 94% instead of 99.6%.
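
One common definition of phase efficiency is the amplitude-weighted coherence of the phase front, |Σ A e^{jφ}|² / (Σ A)². That definition is an assumption here; Richard's analysis sheet may weight or window the sum differently. A sketch with hypothetical data:

```python
import numpy as np

def phase_efficiency(amp, phase):
    """|sum(A e^{j phi})|^2 / (sum A)^2 -- an assumed definition,
    not necessarily the one in Richard's analysis sheet."""
    return np.abs(np.sum(amp * np.exp(1j * phase)))**2 / np.sum(amp)**2

amp = np.ones(100)                                               # flat illumination
flat = np.zeros(100)                                             # perfect phase front
rippled = 0.35 * np.random.default_rng(1).standard_normal(100)   # rad, hypothetical

print(phase_efficiency(amp, flat))     # 1.0 for a perfectly flat phase front
print(phase_efficiency(amp, rippled))  # < 1: phase errors reduce the efficiency
```

Evaluating this with the phase centre of one data set used to reduce the other is how a mismatch like 94% vs 99.6% would show up.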

I am really going to have to stop playing around with this stuff now. The obvious things to do are read the NSI manual and do a third independent FT of this data as a check.

Best Richard

Date: Tue, 02 Sep 2008 01:48:46 +0000
From: Richard Hills <rhills@alma.cl>
Subject: [Alma-feic] Further thoughts

Just to follow up on my previous message. I realised that the 0.15 degree shift in the fitted direction of the beam corresponds to almost exactly half the sampling interval, so a lot of the disagreement could be down to the point we discussed on the last telecon: the fact that we have an even number of data points, and therefore no point at (0,0), means that you have to be careful with pi/2's and the like. I note that the position offset in the phase centre is also close to (but not exactly) half of the scan interval in the near field - the two differences are 0.51 and 0.47 mm, while the scan interval is 0.75 mm. The fact that the phase centre is ~300 mm from the plane of the scan makes this more complicated, however.
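
The even-N point can be seen directly from the FFT frequency grid: for even N the shifted grid runs from -N/2 to N/2-1 samples, so its centre sits half a sample below zero, while an odd-N grid is symmetric and puts a sample exactly at zero.

```python
import numpy as np

for N in (4, 5):
    f = np.fft.fftshift(np.fft.fftfreq(N))   # grid in cycles/sample
    print(N, f, np.mean(f))
# Even N = 4: grid [-0.5, -0.25, 0., 0.25], mean -0.125 -> half-sample offset
# Odd  N = 5: grid symmetric about 0, mean 0 -> a point lands exactly at (0,0)
```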

This leaves the difference in the Z coordinate of the phase centre which is 7.4mm and I can't see how that could be caused by the half-row problem.

Anyway, the obvious next step is to try again with data taken so that there are an odd number of rows and columns. I think the evidence we have is that a finer scanning interval is not needed, and indeed it is better to keep the scan time short. We do however need a rather bigger window. So 127 by 127 points with a 100mm by 100mm window might be a reasonable compromise. If it is really true that the step size is exactly 12.7 microns then you might try a window size of 100.8126mm. This comes from taking 63 times 12.7 microns, which is 0.8001mm exactly, and multiplying by 126, which is the number of intervals if you ask for 127 data points.
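
The window arithmetic checks out, assuming the 12.7 micron step size Richard mentions: 63 motor steps give a grid interval of 0.8001 mm, and 126 such intervals (127 points) span 100.8126 mm.

```python
step = 12.7e-6           # assumed scanner motor step size, m
interval = 63 * step     # grid interval: 0.8001 mm exactly (up to float rounding)
window = 126 * interval  # 126 intervals for 127 points: ~100.8126 mm
print(interval * 1e3, window * 1e3)
```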

Best Richard

Date: Tue, 09 Sep 2008 00:50:49 +0200
From: Andrey Baryshev <A.M.Baryshev@sron.nl>
Subject: Re: [Alma-feic] Further thoughts -- proposal

Dear Richard,

I agree that the even number of data points is the reason for the shift. This is the reason I never measure an even number of points. Perhaps I should pad the data by one row and column when the matrix size is even.

I also agree with the point that padding the data to, say, a 512 matrix size will definitely improve the approximation and may actually explain all of the differences. I think we could continue for a long time along this route, but I have a better proposal. I have spotted that Josh can import my near-field data into NSI, so I propose the following: tomorrow I will create a dataset from an ideal Gaussian beam with precisely known beam direction, waist position and coupling to the secondary, which we will feed to NSI and to my calculation procedure. Honestly, I did this with my procedure some time ago, but it never hurts to repeat it. This way, by comparing to the ideal initial parameters, we will be able to evaluate the calculations and see if they fall within 1% of the ideal efficiency value, which can be calculated analytically.
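
Such a synthetic dataset is straightforward to generate. Below is a sketch of an ideal fundamental-Gaussian near-field on the proposed odd 127-point grid; the wavelength, waist radius and waist-to-scan-plane distance are placeholder values, since the actual band parameters are not stated in the thread:

```python
import numpy as np

# Placeholder beam parameters -- not the actual test values
lam = 0.45e-3          # wavelength, m (assumed)
w0 = 3.0e-3            # waist radius, m (assumed)
z = 0.30               # waist-to-scan-plane distance, m (assumed)
N, d = 127, 0.8001e-3  # odd grid; interval from Richard's message

x = (np.arange(N) - N // 2) * d       # odd N: a sample lands exactly at (0,0)
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2

k = 2 * np.pi / lam
zR = np.pi * w0**2 / lam              # Rayleigh range
w = w0 * np.sqrt(1 + (z / zR)**2)     # beam radius at the scan plane
R = z * (1 + (zR / z)**2)             # phase-front radius of curvature

# Complex near-field listing to feed both NSI and the direct FT
E = ((w0 / w) * np.exp(-r2 / w**2)
     * np.exp(-1j * (k * z + k * r2 / (2 * R) - np.arctan(z / zR))))
```

Writing `E` out in amplitude/phase on the same grid format as the measured listings would let both pipelines be checked against the analytic efficiency.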

Best regards, Andrey

-- ToddHunter - 13 Sep 2008
Topic revision: r1 - 2008-09-13, ToddHunter
