ALMA Correlator Quantization Correction


  • 2010 Mar 25
  • 2010 Apr 14

New memos

2010 Apr 14 (Wednesday) 9:00 EDT

  • Attendees:
    1. Fred Schwab
    2. Todd Hunter
    3. Jeff Kern
    4. Steve Scott
    5. Jim Pisano
    6. Rodrigo Amestica
  • Discussion and action items
    • Fred pointed out that some of the lag-zero data in the Rn0.txt file indicate that the threshold settings are far from optimal (zero-lag values of 1-2 instead of the expected 3.55; see the time-series plot and histograms). This appears to affect all data observed in TDM mode.
    • Fred will work out how large the matrix needs to be to precalculate the QC.
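As context for the 3.55 figure: for a 4-level (2-bit) quantizer with output levels ±1 and ±3 and a sensitivity-optimal threshold near 0.996σ, the expected zero-lag power works out to about 3.55. A minimal sketch of that check, as a Python analogue of the attached thresholds.m; the 0.996σ threshold value is an assumption from standard 4-level correlator analysis, not stated in these notes:

```python
import math

def q(x):
    """Upper-tail probability of a standard normal, Q(x) = P(X > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def zero_lag_power(threshold_sigma):
    """Expected <y^2> for a 4-level quantizer with levels +/-1 (inside
    the threshold) and +/-3 (outside), for unit-variance Gaussian input."""
    p_out = 2.0 * q(threshold_sigma)  # probability of landing in a +/-3 level
    return 1.0 * (1.0 - p_out) + 9.0 * p_out

# With the (assumed) ~0.996 sigma threshold, the expected zero-lag
# value comes out near the 3.55 quoted in the notes.
print(round(zero_lag_power(0.996), 2))
```

A mis-set threshold moves this value away from 3.55, which is how the anomalous 1-2 values in Rn0.txt flag the problem.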

2010 Mar 25 (Thursday) 10:30 EDT, CV-331, 434-293-6691

  • Attendees:
    1. Fred Schwab
    2. Ed Fomalont
    3. Todd Hunter
    4. Jim Pisano
    5. Rodrigo Amestica
  • Discussion
    • The current implementation takes ~90 us on average to compute the Van Vleck curves and apply the correction to each item in the dump. Measurements show that 66% of this time goes to instantiating the Van Vleck curves and 33% to the correction itself. Two main approaches could speed up the Van Vleck curve computation:
      1. use a precalculated table and interpolate the curves, using the two input sigma levels as the lookup key.
      2. at the beginning of a sub-scan, compute an 'operational point' from the measured sigma levels (a set of 4 or 5 Van Vleck curves). Subsequent dumps would then be corrected by selecting the closest curve, or by interpolating a curve from the original set.
    • Discussing how to speed up the correction algorithm itself requires more input data at this point. Rodrigo will collect more timing information (instrument the QC algorithm itself to see where the time is going and how the polynomial approach is performing: how many times are the polynomial and the spline applied?)
    • What happens if QC is completely disabled? Ask Chile to run a test on this (details of the test to be refined offline). How important is the correction compared to its costly timing quota? Answers to these questions would help dimension the algorithm that is actually needed.
    • Ed should revive a test environment to compare results against data produced by the correlator software. Present Ed with some real data to clarify what 'type' of lags we expect to observe at the correlator's input.
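The first approach above (a precalculated table keyed on the two input sigma levels) can be sketched as a 2-D grid plus bilinear interpolation. Everything here is illustrative: vv_correction is a hypothetical stand-in for the true Van Vleck correction kernel (whatever QuantizationCorrection.cpp actually computes), and the grid bounds and spacing would come from Fred's analysis of how large the matrix needs to be.

```python
# Hypothetical stand-in for the exact Van Vleck correction kernel;
# the real one lives in QuantizationCorrection.cpp.
def vv_correction(sigma1, sigma2):
    return sigma1 * sigma2  # placeholder only

class VanVleckTable:
    """Precompute vv_correction on a regular (sigma1, sigma2) grid and
    answer queries by bilinear interpolation instead of recomputation."""

    def __init__(self, lo, hi, n):
        self.lo, self.n = lo, n
        self.step = (hi - lo) / (n - 1)
        grid = [lo + i * self.step for i in range(n)]
        self.table = [[vv_correction(s1, s2) for s2 in grid] for s1 in grid]

    def _cell(self, s):
        # Clamp to the grid, return the cell index and fractional offset.
        t = min(max((s - self.lo) / self.step, 0.0), self.n - 1.001)
        i = int(t)
        return i, t - i

    def lookup(self, s1, s2):
        i, fx = self._cell(s1)
        j, fy = self._cell(s2)
        t = self.table
        # Bilinear blend of the four surrounding grid points.
        return ((1 - fx) * (1 - fy) * t[i][j] + fx * (1 - fy) * t[i + 1][j]
                + (1 - fx) * fy * t[i][j + 1] + fx * fy * t[i + 1][j + 1])
```

Bilinear interpolation happens to be exact for the bilinear placeholder above, which makes self-checking easy; for the real kernel, the grid density sets the accuracy-versus-memory trade-off that Fred's matrix-size estimate addresses.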
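For Rodrigo's action item (instrumenting the QC algorithm to see where the time goes and how often the polynomial versus the spline path is taken), the bookkeeping could look like the following. The actual code is C++; this Python sketch, with hypothetical poly_correct/spline_correct stand-ins and an invented lag cutoff, only illustrates the counter-and-timer pattern.

```python
import time
from collections import defaultdict

calls = defaultdict(int)      # how many times each path ran
elapsed = defaultdict(float)  # accumulated wall time per path

def instrumented(name, fn):
    """Wrap fn so each call bumps a counter and accumulates wall time."""
    def wrapper(*args):
        t0 = time.perf_counter()
        try:
            return fn(*args)
        finally:
            calls[name] += 1
            elapsed[name] += time.perf_counter() - t0
    return wrapper

# Hypothetical stand-ins for the two correction paths in the QC code.
poly_correct = instrumented("polynomial", lambda lag: 1.1 * lag)
spline_correct = instrumented("spline", lambda lag: 1.2 * lag)

# Pretend dump: route high normalized lags through the polynomial path
# (the 0.9 cutoff is invented for illustration).
for lag in [0.1, 0.5, 0.95, 0.99, 0.3]:
    (poly_correct if lag > 0.9 else spline_correct)(lag)

print(dict(calls))  # e.g. {'spline': 3, 'polynomial': 2}
```

Per-path counts and times like these would answer both of Rodrigo's questions: where the 90 us is going, and how often each branch is exercised.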

Background material

Reference material

-- ToddHunter - 2010-03-23
Topic attachments
Attachment                      Size       Date              Who         Comment
4level.png                      30 K       2010-04-23 19:33  ToddHunter  calculation of optimal thresholds for 4-level correlator
CORL-                           587 K      2010-03-23 17:13  ToddHunter
QuantizationCorrection.cpp.txt  25 K       2010-03-23 10:03  ToddHunter
QuantizationCorrection.h.txt    14 K       2010-03-23 10:02  ToddHunter
Rn0.txt                         7 MB       2010-03-24 14:27  ToddHunter  variations of lag0 across all correlator dumps of a subscan
look-0.png                      13 K       2010-04-23 12:56  ToddHunter  Fred's histogram of mean values in Rn0.txt
qcorr.pdf                       857 K      2010-05-24 11:10  ToddHunter  Schwab memo (PDF)
qcorr.ps                        7 MB       2010-05-24 11:11  ToddHunter  Schwab memo (PS)
rn.png                          23 K       2010-04-22 18:59  ToddHunter  plot of the first 27560 lines of Rn0.txt
thresholds.m                    212 bytes  2010-04-23 15:12  ToddHunter  Matlab script that demonstrates the expected zero lag power = 3.55 for a 4-level correlator
Topic revision: r17 - 2010-05-24, ToddHunter