Outline of CSV Concept document

1. Introduction

  • Purpose of document
    • Primary: Offer qualitative expression of the CSV approach including the roles, duties and organization of the team and a preliminary list of expected tests and milestones.
    • Secondary: Convince reader of NRAO's ability to effectively commission and deliver such an ambitious instrument.
  • Scope of document:
    • Define what CSV is (and what it is not) -- summarize section 2.1 of ALMA CSV Plan
    • Defer details of the personnel effort and time estimates to the CSV Plan
  • List of reference documents from which we will draw context for CSV
  • Provide acronym list
    • bgc - the document makes a lot of assumptions about the design, which are still a bit unclear at this time. Better if the document were made more design neutral.
    • bgc - actually, I find it easier to think about a commissioning plan, than about the list of goals included here. I would be happier if this document were less specific, and included mostly general concepts, like the idea that there need to be several teams, working in some degree in parallel, to commission different capabilities, and how those capabilities change as array construction progresses. The team that makes sure that each delivered antenna is functioning glitchlessly will have a rather different makeup and viewpoint than the team commissioning array circular polarization capabilities. We probably should list the order of capabilities to be commissioned - some depend on others working well, and, in reality not everything will be available at the end of construction. This will elicit loud squawks from the reference science group and from various people with particular pet interests, but that's probably a good thing.

2. Overview

  • Describe Goals, Challenges and Philosophy of CSV
    • State a fundamental assumption: all antennas will contain all receiver bands and all stations must meet performance specifications in all bands (incl. rms surface)
      • bgc - CSV is a process, not an event. Waiting to start the process until all bands are available may not be the right thing. There is a lot of work that can be done with a few antennas and a single band.
    • Balance the competing desires of:
      • reaching a minimum performance level for Early Science vs. fully understanding oddities in the system
      • taking "simple but reliable" approach vs. exploring the optimal or novel approach
    • Retire items with high technical risk early:
      • Demonstrate LO/IF system stability on long baselines
        • bgc - The existing designs for the data transmission system do not include the long baselines, which will have special problems.
      • Demonstrate effectiveness and reliability of WVR correction in a variety of weather conditions
    • CSV concerns of feasibility should be considered during the ongoing detailed Design process
      • bgc - I don't know what this means
  • Provide Timescales:
    • Prototype antenna: 2024
    • CSV target start date: first AIV antenna delivery (2026)
      • bgc - Need a date for first fringes. CSV target start date is two days later.
    • Early Science first Call for Proposals: April 1, 2028
    • Early Science first observations: Oct 1, 2028
    • Target completion main (214-element) array: Jan 1, 2034
    • Target completion entire array: Jan 1, 2035
    • Give quantitative evidence that 8 years is the minimum CSV timescale for modern interferometers, both large and small
      • bgc - I'm under the impression that CSV includes commissioning observing modes. Then the CSV timescale is not much less than the lifetime of the instrument.
  • Requirements for Start of CSV -- similar to section 2.2 of ALMA CSV Plan
  • Requirements for Start of Early Science -- similar to section 2.3 of ALMA CSV Plan
  • Present key statements from other Concept documents and briefly discuss how they impact CSV
    • Operations Concept
      1. Section 6.0: "At the start of ngVLA Early Science, only a small number of these modes that have been verified to work end-to-end will be available to PIs. The number of modes available to users will increase as early science progresses, with all modes deliverable from the construction projects available in full operations." -- The Early Science modes need to be defined at an early stage.
      2. Section 6.1: "Different capabilities and observing modes will be made available in stages during the transition from construction through ... commencement of full operations. That transition is likely to take around 10 years."
      3. Section 6.3: "Delivery of a fully-commissioned standard observing mode will include an operational SRDP pipeline before it is offered for regular use through PI proposals."
      4. Section 6.5: "The Observatory will release a set of first look science products - defined with input from the user community - ahead of PI access to the array."
      5. Section 7.1: "The subarray design implications...[will] be fundamental to the operation of the ngVLA."
      6. Section 13.1: "All elements of array design and operations ... must support operation of multiple subarrays for different purposes right from initial commissioning." -- Commissioning of subarray operation must have high priority.
    • Design documents
      1. SPIE article: "...the long baseline antennas, would fall into a VLBI station model with a number of local oscillator (LO) and data transmission stations located beyond the central core. These stations will be linked to the central timing system, correlator, and monitor and control system via long haul fiber optics. Several options will be explored for precision timing and references at these stations, including local GPS-disciplined masers, fiber optic connections to the central site, and satellite-based timing." -- Clearly this is an important problem to be solved and will require placing one or more antennas on remote pads as early as possible during CSV to avoid delays in achieving long baseline science.
        1. bgc - Where does long baseline stuff come in the early science plan?
      2. Antenna design poster at AAS223 suggests slicing the assembled and validated dish into 4 pieces and reassembling at the remote pads. This means:
        • We must practice and confirm (at the central array site) how accurately such reassembly can be performed by passive means.
        • If not sufficient, we will need to confirm the final surface setting after transport. Options are photogrammetry (1 part in 10^5 may be achievable) or holography. Since we cannot have holography towers at every remote station, it would need to be either celestial or satellite holography. In either case, obtaining a stable reference signal will be difficult if the nearest antenna is many tens or hundreds of kilometers away. So we may need to temporarily mount a separate reference receiver and feed on the antenna (like GBT), or be able to transport and set up a small fixed antenna for geostationary satellite holography (like Effelsberg).
        • bgc - An interesting pie to divide. photogrammetry clearly belongs to AIV, holography is split between AIV and CSV, efficiency measurements mostly belong to CSV.
    • Transition Concept
      • Not yet available
    • Reference Observing Program
      • Not yet available
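To put the surface-setting accuracy figure quoted above (1 part in 10^5 from photogrammetry) into context, the Ruze equation relates a surface rms error to an aperture efficiency penalty. The sketch below is illustrative only: the 18 m diameter and the band frequencies are assumptions for the example, not settled specifications.

```python
import math

def ruze_efficiency(surface_rms_m, wavelength_m):
    """Aperture efficiency factor exp(-(4*pi*eps/lambda)^2) from the Ruze equation."""
    return math.exp(-(4.0 * math.pi * surface_rms_m / wavelength_m) ** 2)

# 1 part in 1e5 of an assumed 18 m aperture -> ~180 micron measurement floor
rms_m = 18.0 / 1.0e5

for freq_ghz in (8.0, 30.0, 90.0):
    wavelength_m = 0.299792458 / freq_ghz    # c in m*GHz
    print(f"{freq_ghz:5.1f} GHz: Ruze factor = {ruze_efficiency(rms_m, wavelength_m):.3f}")
```

At low frequencies a 180 micron floor is negligible, but it becomes a substantial efficiency loss in the highest bands, which is why the reassembly confirmation step matters.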

3. Composition and Duties of CSV Team

Because the telescope will present a broad range of capabilities and frequency coverage, and many aspects of the design and technology will differ significantly from other NRAO facilities, the CSV Team will need diverse experience drawn from all areas of radio astronomy research. It should include scientists at all career stages, with experience in RF, digital, and software engineering as well as single-dish and interferometric calibration and imaging. The members of the CSV team will be distributed among the following work activities, with group leaders appointed where necessary.
  1. Assess the implications of major design choices on CSV during the future detailed design phase
  2. Assess impact of CREs submitted by other subsystems during the construction phase, including obtaining and analyzing any necessary data
  3. Assist AIV teams with on-sky testing of hardware prior to delivery to CSV, including the prototype antenna, receiver and correlator performance
  4. Devise and execute integrated performance tests of AIV-delivered items with specific criteria for pass/fail
  5. Work with Computing team to write observing scripts to achieve successful on-sky performance tests ("manual mode")
  6. Develop utilities for performance data analysis (and temporary metadata fixes) as needed and manage them as a coherent package (similar to ALMA's analysisUtils)
  7. Report system deficiencies encountered back to AIV and Maintenance for resolution via JIRA tickets
  8. Interact with hardware and software engineers to be aware of the latest status of problem investigations and fixes
  9. Devise and execute performance tests of the array as a whole (items of stability and fidelity that typically require long integrations or observing sequences)
  10. Write ngVLA memos and reports that summarize performance test procedures, results, and directions for future work
  11. Work with colleagues at ALMA and other facilities to better understand cutting-edge problems that each face
  12. Be familiar with the Reference Observing Program, and the capabilities that these projects require
  13. Maintain familiarity with the calibration plan; provide feedback regarding feasibility
  14. Maintain familiarity with pipeline processing and development; provide new requirements when necessary
  15. Deliver tested observing modes and work with the Operations group to achieve successful Science Validation results

4. Organization of team

Validating the extensive number of performance items will be a daunting task for CSV. It will be a challenge to maintain focus on a few major issues at a time while ensuring that no item is neglected entirely, which could raise the risk of significant rework at later stages of the project. For example, it will be prudent to define a team that focuses on long-baseline issues and commissioning, in order to find problems earlier in the CSV process than might otherwise happen in the inevitable push to make the central array available for Early Science. Another example is a team responsible for flushing out all the issues associated with autocorrelation data, rather than leaving them for a later time. It will also be useful for members of the CSV team to serve as members of other closely related groups, such as the Array Calibration Group and the Control Software Group.

bgc - Very important that there should be an early and active short baseline group. Probably also need an RFI group.

In order to promote an efficient organization of CSV effort, responsibility will be divided among the team members so that each person is encouraged to take ownership of one or a small number of specific commissioning items. That person will follow the natural workflow of proposing the tests to be run, executing the tests, analyzing the data, and writing the report, consulting with other members of the team and the wider scientific staff as needed at each stage. Handing off items from one person to another in shift-work fashion should be discouraged, in order to avoid misunderstandings of what the next steps should be. Presence on site should not be a requirement for contributing to the CSV team, especially because skilled and conscientious staff working at remote locations can efficiently examine test data during the mornings immediately following test observations.

5. Communication plan

Communication of progress on commissioning items will be recorded in a Daily Log distributed to the team and available to other subsystems. Further details will be presented by team members both informally at weekly group meetings and to a wider audience through lunch talks at the science center. Once multiple antennas are available for CSV, a daily coordination meeting will be needed to review the Daily Log and plan the upcoming 24 hours of observing tests. To encourage remote participation, this meeting will need to be held at a time convenient to all NRAO sites, likely 2PM MT, and should be kept to 30 minutes whenever possible. As each major commissioning item is completed, reports or ngVLA memos will be written that summarize the performance test procedure, current results, and directions for future work. Team members should also give presentations of recent successes and ongoing vexing problems at outside institutions, particularly those with radio astronomers on the faculty. Such visits may help to prevent repeating past mistakes and to raise awareness of more efficient methods of making progress. Members of the CSV team should likewise be encouraged and enabled to maintain visibility in the science communities of their choice during their years of service.

bgc - I don't like the idea of daily log and coordination meetings. This absorbs too much of people's time. Weekly sounds right to me.

6. Resource Requirements

The Plan document will specify requirements in more detail, such as the personnel effort and time required for various commissioning tasks and milestones. The following is a general list of items that will be needed.
  • Facilities
    • Space in control room (at array site in early days; transitioning to remote site as it becomes feasible)
      • bgc - Telepresence will work well from the beginning (as it did for the WIDAR project). Access to the "control room" (whatever that is) will be to chat with operators.
    • Guaranteed access to high performance computing cluster nodes and disk space
  • Hardware
    • Prototype correlator (2-station or 3-station): this was very helpful with ALMA, even as late as Cycle 7 as a pre-filter for detecting large issues in new software releases
      • bgc - Prototype correlators were important in the past (WIDAR and original VLA also) but new correlator designs come in big chunks, and the prototype concept becomes rather different. For instance, a reasonable prototype correlator might well be all stations but 200 MHz bandwidth.
    • A few IT-supported workstations (so that not everyone needs to have a laptop like at ALMA)
      • bgc - what's this for? Cluster access? Or real-time? (For WIDAR we found it useful to have almost all access through the common computers with lots of screen area in the "war room")
  • Software
    • Ticket reporting and tracking system
      • bgc - Need a monitor data archive, with automatic filling and capability of listing and plotting contents.
      • bgc - Need a hardware revision control system, with history of LRU installations and repairs. It should be possible to find version and serial number for any hardware module installed on an antenna
    • Early releases from Control Software Group that are meant to support CSV rather than the final ngVLA Operations mode (i.e., try to avoid the "ALMA problem" where every subsystem tended to assume the others would be complete)
      • bgc - I doubt the practicality of a CSV version of the real-time stuff. The separation of subsystems should be done as much as possible by the strict enforcement of the principle of maximum ignorance. As much as possible, one subsystem should not know what another is doing.
    • CASA, and python scientific stack on observatory-supported Linux OS (free)
    • Access to commercially licensed analysis tools if needed (MATLAB, etc.)
    • bgc - The importance of having VLA and ALMA providing contemporaneous fluxes, spectra, and maybe polarization at the ngVLA bands should not be overlooked.
  • Personnel:
    • Internal staff: expect contributions from SO, CV, JAO, NRC
      • Postdocs hired specifically for CSV -- to provide the bulk of the daily testing, reporting and software effort
      • Staff scientists assigned (in part) to ngVLA -- to provide skeptical review of results, engage in problem solving efforts on fundamental interferometry issues, and provide experience in recognizing data anomalies
      • Former AIV staff -- to retain knowledge of the system, motivating recruitment of top performers into more permanent roles
      • Operational support from computing, maintenance, and other subsystems as needed
    • External staff:
      • Visiting scientists with interest and experience on specific commissioning campaigns (will we offer financial support?)
      • Graduate students and postdocs: attract and engage their support on testing capabilities that serve their scientific interests. We recognize there will be competition for their attention with SKA etc., and that this will require modest financial support (travel? accommodation? RSRO-like program offering some advantage in Early Science?)

7. Milestones for Commissioning

The list of commissioning milestones expresses the expected flow of capabilities as systems are delivered from AIV. The Plan document will provide more detail, including mapping the activities to specific items in the Science and Technical Specification documents and their respective pass/fail criteria. The order may not be sequential, depending on the actual schedule of deliveries achieved. Also, the point of interface between AIV and CSV through the first three steps is likely to be somewhat fluid as the teams gain experience with the antennas and other systems. For example, while AIV is responsible for delivering pointing and focus models and nominal antenna surface setting, it is not expected that a complete characterization of the elevation dependence of antenna performance will be provided, so this work will fall to CSV. The list below will be augmented from the more extensive list in the ALMA CSV Plan.

bgc - It seems inevitable to me that the pointing model will be the one used world-wide for alt-az telescopes, and that this will get built into the real-time system at an early date. I rather think that supplying values for the parameters of the models belongs to CSV. Similarly, I suspect the focus model is a deliverable of the design team, rather than AIV, and that CSV will need to supply the parameters. In the long term, of course, maintenance of these parameter values will pass to telescope operations.
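The division of labor bgc describes (the model built into the real-time system, with CSV supplying parameter values) can be made concrete with a toy sketch. The three terms below (elevation index error plus azimuth-axis tilt, in one common sign convention) are only a small subset of a full alt-az pointing model, and all numbers are invented:

```python
import numpy as np

# Synthetic pointing run: 30 sources spread around in azimuth (radians)
rng = np.random.default_rng(0)
az = rng.uniform(0.0, 2.0 * np.pi, 30)

# "True" parameter values (arcsec): elevation index error plus azimuth-axis tilt
IE, AN, AW = 35.0, -12.0, 8.0
d_el = IE + AN * np.cos(az) - AW * np.sin(az)   # measured elevation offsets

# CSV's role in this picture: supply the parameter values via least squares
design = np.column_stack([np.ones_like(az), np.cos(az), -np.sin(az)])
fit, *_ = np.linalg.lstsq(design, d_el, rcond=None)
print("fitted IE, AN, AW =", fit)               # recovers 35, -12, 8
```

In operation the same fit would be run routinely against fresh pointing data, with maintenance of the parameter values eventually passing to telescope operations, as bgc notes.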

  1. Initial tests of delivered components (prior to antenna availability)
    • Demonstration of standalone WVR operation: control and data analysis
  2. Begin Single dish operations
    • Continuum (total power detectors or autocorrelation)
      • Perform subreflector and receiver alignment: optimization of AIV settings
      • Measure radio pointing: first level confirmation of blind and offset pointing specification, including diurnal performance
      • Measure focus curves vs. elevation and diurnal variation
      • Measure beam profiles vs. elevation and diurnal variation
      • Measure gain curves: efficiency vs. elevation
      • Validate Trx and Tsys measurement techniques
    • Surface and illumination
      • Perform surface optimization at one elevation: photogrammetry and/or tower or satellite holography
      • Measure surface performance vs. elevation: celestial holography on brightest quasar
      • Measure illumination pattern and alignment error of each receiver feed
    • Autocorrelation
      • Validate spectral Trx and Tsys measurement techniques
      • Attempt continuum mapping
      • Attempt spectral line mapping
        • bgc - Most of the items above should be done through interferometry, just because it is easier. And celestial holography is a lot easier than tower or satellite holography.
        • bgc - also, Tsys is somewhat lower priority than others. Telcal provides the combination efficiency*SourceFlux/Tsys, which is what is needed for calibration and observation. With source flux from VLA or ALMA, measuring Tsys provides reassurance to the front end people that their calculations are correct, important in the long run, but no hurry.
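For the Trx/Tsys validation items above, the classic cross-check is the hot/cold load Y-factor method. A minimal sketch, with illustrative load temperatures and made-up power readings:

```python
def y_factor_trx(p_hot, p_cold, t_hot=300.0, t_cold=77.0):
    """Receiver temperature (K) from hot/cold load powers via the Y-factor method."""
    y = p_hot / p_cold                    # linear power ratio
    return (t_hot - y * t_cold) / (y - 1.0)

# Example: detector powers (arbitrary linear units) on ambient and LN2 loads
trx = y_factor_trx(p_hot=2.0, p_cold=1.2)
print(f"Trx = {trx:.1f} K")               # 257.5 K for these made-up numbers
```

Comparing Y-factor results against the values reported by the online Tsys measurement scheme is one way to "validate the measurement technique" called for above.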
  3. Begin Interferometry operations I: correlation of core antennas (within VLA site) leading to a stable interferometer
    • Obtain first fringes on bright continuum sources
    • Demonstrate loading of data into CASA and verify accuracy of metadata
    • Demonstrate fringe tracking: assess gross stability, search for delay jumps, and phase discontinuities
    • Obtain spectral line fringes and first visibility spectrum
      • bgc - All fringes will be spectra, as they are for WIDAR. Yes, we need to make sure that a few different channel spacings work.
    • Demonstrate interferometric focus and pointing: second level confirmation of blind and offset pointing specifications, including diurnal performance
    • Demonstrate phase closure and initial level of stability
      • bgc - The real problem here is figuring out why closure is not perfect. With the FX design, it is likely to be very good, and what remains is likely to be band-dependent (and the long baselines may have special problems)
    • Demonstrate simultaneous subarray performance
    • Demonstrate LO modulation: Walsh functions and f-shifts
      • bgc - Not clear these will exist
    • Assess spectral frequency labeling accuracy using narrow line observations
      • bgc - Labeling doesn't have an accuracy, just right or wrong. Velocity labeling has interesting issues.
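On the closure point: the reason closure phase is such a powerful early diagnostic is that antenna-based phase errors cancel identically in the triple combination, so any residual non-closure must be baseline-based (correlator artifacts, bandpass mismatch, and so on), which is exactly the "figuring out why closure is not perfect" problem bgc raises. A toy sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(-np.pi, np.pi, 3)   # antenna-based phase errors (radians)

# Invented "true" source phases on baselines 1-2, 2-3, 1-3 (radians)
phi_12, phi_23, phi_13 = 0.30, -0.70, -0.40
true_closure = phi_12 + phi_23 - phi_13

# Observed phases pick up antenna terms: phi_ij(obs) = phi_ij + theta_i - theta_j
obs_12 = phi_12 + theta[0] - theta[1]
obs_23 = phi_23 + theta[1] - theta[2]
obs_13 = phi_13 + theta[0] - theta[2]

closure = obs_12 + obs_23 - obs_13      # antenna terms cancel identically
print(closure, true_closure)            # equal to machine precision
```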
  4. Begin array performance testing, typically requiring long integrations
    • Perform phase stability measurements: short baselines, long baselines
    • Perform bandpass stability measurements
    • Plot the per-channel rms from image cubes to look for anomalies
      • bgc - Imaging, with its inevitable limitations and artifacts is a whole other game. Maybe, like the other items in this section, this should be rephrased to deal with visibility spectra.
    • Assess noise performance across each band and rms achieved vs. integration time (continuum)
    • Write reports for each band (like EVLA memo 137)
    • Measure spectral purity of correlator on a strong line (including ghosts due to splatter) and dynamic range limit
    • Measure stability of complex beam pattern (each band)
    • Assess stability of antenna position measurements: fit for antenna axis offsets, analyze residual delay and assess atmospheric model
    • Assess astrometric performance
    • Assess flux density accuracy and repeatability
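Many of the noise-performance items above reduce to comparisons against the point-source radiometer equation. A hedged sketch follows; the SEFD, antenna count, bandwidth, and correlator efficiency below are placeholders for the example, not ngVLA specifications:

```python
import math

def continuum_rms_jy(sefd_jy, n_ant, bandwidth_hz, t_int_s, n_pol=2, eta_c=0.88):
    """Naturally weighted point-source rms (Jy) from the radiometer equation."""
    return sefd_jy / (eta_c * math.sqrt(n_pol * n_ant * (n_ant - 1)
                                        * bandwidth_hz * t_int_s))

# Placeholder numbers, NOT ngVLA specifications: per-antenna SEFD of 400 Jy,
# 214 antennas, 8 GHz of bandwidth, one hour of integration
rms = continuum_rms_jy(sefd_jy=400.0, n_ant=214, bandwidth_hz=8e9, t_int_s=3600.0)
print(f"expected rms ~ {rms * 1e6:.2f} microJy/beam")
```

The measured rms vs. integration time curve should track the 1/sqrt(t) prediction; departures from it are among the anomalies these tests are designed to catch.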
  5. Begin Interferometry operations II: correlation of remote stations
    • Antennas deployed to remote stations should be tested first at the array center if at all possible
    • Surface confirmation/adjustment after dish reassembly: photogrammetry (might be sufficient) or celestial holography (Ku satellite beacon with reference receiver?)
    • For each remote station, repeat the items from step 3 as necessary
    • Test and decide on pointing and focus methodology for remote stations
  6. Begin Interferometry operations III: correlation of core antennas with remote stations
      1. bgc - perhaps retitle to "Operation as a full array". Problems for a while will be antenna based, so core with remote distinction unimportant compared to making remote antennas work.
  7. Assess performance of full array (item 3)
  8. Assess sensitivity to variable weather conditions across the stations
    1. bgc - I think (or hope) most of this can be done with WVR data.
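To make bgc's hope concrete: WVR data map precipitable water vapor fluctuations to excess path, and hence to phase at the observing frequency. The sketch below uses the commonly quoted ~6.3 mm of excess path per mm of PWV; the exact coefficient is weather-dependent and is an assumption here:

```python
def wet_path_mm(pwv_mm, k=6.3):
    """Excess wet path; k ~ 6.3 mm of path per mm of PWV (assumed, weather-dependent)."""
    return k * pwv_mm

def path_to_phase_deg(path_mm, freq_ghz):
    """Phase corresponding to an excess path at a given observing frequency."""
    wavelength_mm = 299.792458 / freq_ghz   # c in mm*GHz
    return 360.0 * path_mm / wavelength_mm

# A 0.1 mm PWV fluctuation seen by the WVR, applied at 30 GHz
dphi = path_to_phase_deg(wet_path_mm(0.1), 30.0)
print(f"~{dphi:.0f} deg of phase at 30 GHz")
```

Comparing WVR-predicted phase against interferometer phase on a strong calibrator, across stations and weather conditions, is the natural form of this assessment.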
  9. Demonstrate operation from Scheduling Blocks
    • Single subarray
    • Multiple simultaneous subarrays
  10. Test fundamental calibration plan and report performance
    • Delay
    • Temporal gain (phase and amplitude)
      • WVR
      • Phase referencing
    • Flux
    • Bandpass
      • bgc - I keep telling people that delay and bandpass are the same thing, but nobody listens.
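bgc's point that delay and bandpass are the same thing can be shown in a few lines: a residual delay tau is exactly a linear phase slope of 2*pi*nu*tau across the band, so a bandpass phase solution with a linear trend is just an uncorrected delay. A sketch (the band edges and delay value are invented):

```python
import numpy as np

# Channel centers across an assumed 2-3 GHz baseband (values invented)
freq_hz = np.linspace(2.0e9, 3.0e9, 64)

tau_s = 1.0e-9                                # 1 ns residual delay
phase_rad = 2.0 * np.pi * freq_hz * tau_s     # what the correlator sees per channel

# Fitting a line to the "bandpass" phase returns the delay from the slope
slope, intercept = np.polyfit(freq_hz, phase_rad, 1)
recovered_tau_s = slope / (2.0 * np.pi)
print(recovered_tau_s)                        # ~1e-9 s: delay is a linear bandpass slope
```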
  11. Test polarization calibration plan and report performance
    • Compile primary beam models
      • bgc - way before this is just measuring leakage between and relative phasing of the polarization channels, and seeing how stable they are
  12. Test accuracy of full-beam imaging and full-polarization imaging
    • Quantitative comparison with previous VLA and ALMA datasets of same fields
  13. Test that observing modes produce viable raw data (SDM):
    • Single pointing continuum (coarse spectral resolution)
    • Single pointing continuum plus spectral lines (mixed spectral resolution)
    • Ephemeris objects
    • Pointed mosaics
      • bgc - The problems with both pointed and OTF mosaics is getting the imaging right. Not clear to me how much of this falls to CSV.
    • Special capabilities: pulsar, VLBI, solar
  14. Test automated processing of each observing mode (with feedback to developers aimed at subsequent SV)
    • First: through QA0 (data quality and calibration integrity)
    • Second: through automated calibration and calibrator imaging, including RFI flagging of calibrators and science targets
    • Third: through automated imaging of science targets
  15. Declare observing modes ready for Science Validation

8. Milestones for Science Validation

Science Validation is the process of acquiring observations of well-known objects from Scheduling Blocks, and processing them through Calibration, Imaging and SRDP. It will follow the approach used successfully by ALMA. Unlike ALMA, for which no comparable facility existed, we will be able to quantitatively demonstrate agreement with prior observations from the VLA and/or ALMA. For each observing mode approved for Early Science by the project as a whole, the following tasks must take place. Steps 1-4 can be done as preliminary work before the array is ready. Step 5 (onward) will require a functioning, reliable interferometer of at least ~25 antennas. Steps 3-6 require close collaboration with the Operations group. Whether Step 7 is performed for every observing mode, or only a subset, will likely depend on the resources available in the run-up to the Call for Proposals. Suggested names for the modes that include a public data release are Public Science Verification or Demonstration Science (see the Life Cycle document). There needs to be agreement with Operations on what constitutes successful processing of a mode, perhaps in terms of Key Performance Indicators.
  1. Select Science Validation targets and identify previous VLA and/or ALMA datasets (with scientific public input organized by Project Scientist)
  2. Select Science Validation calibration plan (in collaboration with Calibration group)
  3. Determine the list of inputs that will be required of the PI for this observing mode, or if any special QA0 considerations are needed
  4. Generate Science Validation SBs using the Observing Tool (with assistance from Operations group)
  5. Execute Science Validation SBs
  6. Perform QA0, adjusting and repeating observations until successful (with assistance from Operations group)
  7. Demonstrate QA2 Pipeline processing of successful SBs (with assistance from pipeline and SRDP groups)
  8. Demonstrate quantitative agreement with previous VLA and/or ALMA datasets of same fields
  9. Release of raw and processed data products to public (prior to Call for Proposals)
  10. Document any exceptions in the characteristics of the data that would invalidate the suitability of this mode to the pipeline

As observing modes are validated, the CSV group will also need to participate in the following activities, along with the Operations team.
  1. Assist Operations group with writing the Technical Handbook (prior to Call for Proposals)
  2. Declare observing modes ready for Early Science (prior to the Call for Proposals)
  3. Define Shared Risk Observing modes that do not meet the readiness criteria for Early Science (prior to the Call for Proposals)
  4. Participate with the Operations team in popularizing the ngVLA and its observing modes via talks at other institutes (after the Call but before the Deadline)
  5. Write IEEE and PASP journal articles on the ngVLA as a whole (ALMA did not do this)

For the observing modes not offered for Early Science, but required to be delivered before the end of Construction, the steps above will need to be repeated.

9. Post-construction activities including new observing modes

Not all observing modes will be commissioned at the end of construction. CSV-like activities to commission further modes will be merged into the Ongoing Capability Development (OCD) effort, as defined in the Operations concept document (section 6.7). Such new observing modes may include specific science areas such as Pulsar or Solar. Since most of the CSV staff may have moved on to other projects, it will be important to transfer knowledge of the commissioning process to the Operations staff, and to solicit help from experts in the community.

The life cycle of new modes will be clearly defined including the product delivered to the PI. The Life Cycle document proposes the following categories:
  • Standard Mode Data Reduction (SMDR): SBs automatically generated; data are fully pipeline-ready and have well defined SRDPs
  • Non-standard Data Reduction (NSDR): SBs automatically generated; data are not pipeline-ready but can be processed by automatically-generated scripts and have defined SRDPs, though these are likely to be refined
  • Shared Risk Observing (SRO): SBs require manual editing; data are not pipeline-ready but can be calibrated
  • New Mode Test Observation (NMTO): an experimental stage that precedes SRO
In addition, there is an option for Principal Investigator Data Reduction (PIDR) when specific, complicated data reduction is required. The SBs are automatically generated, but we perform only QA0.

-- ToddHunter - 2018-12-04

-- Barry Clark (bgc) 2019-02-04 and 2019-02-08.
Topic revision: r32 - 2019-02-08, BarryClark