Notes from ANASAC telecon Oct. 9, 2015
attendees: Al Wootten, Karin Oberg, Joe Lazio, Alberto Bolatto, Rachel Osten, Douglas Scott, Shep Doelman, Shih-Ping Lai, Dan Marrone
Congrats to Karin Oberg for stepping up to be the next ANASAC chair. She will represent Rachel in her absence at the upcoming f2f meeting.
Dan Marrone will be appointed as a new ASAC member; there was also discussion about having a UC member be part of the ANASAC or appointing a new member.
Alberto, Douglas, Karin to be NA - ASAC members at next week's ASAC f2f meeting in Japan.
job ad for JAO postdoctoral fellows is out. Please spread the word.
Discussion mainly on charges to ASAC which will form the meat of the ASAC meeting. Questions from EASAC and ESAC already distributed to ASAC members.
Charge 1 Commenting on the Principles of Proposal Review
-Is the ACA included in the LP category?
-Not much discussion about multi-cycle programs.
-mmVLBI, Shep has concerns, will send written comments.
-Special proposals are the only category for which feasibility is considered in addition to scientific merit.
-Panels can overrule conflict-of-interest rules, especially institutional ones; for projects with large numbers of Co-Is this may be necessary.
Charge 2 Review Cycle 3 PRP and proposed Cycle 4 PRP
Summary of Karin and Dan's experiences from Cycle 3 PRP:
--------------------------------------------------------------------------------------------------
Overall the new structure of the feedback to the PIs is a good idea. Together with the request from panel members for report drafts ahead of the meeting, and the addition of a full day allocated to writing reports, this improved the uniformity and quality of the consensus reports. The details of the structure need to be revisited, however, and it should be easier to edit each other's reports than is currently the case.
Many of the issues encountered in the panels concerned evaluating duplications. Clearer guidelines and better training of the chairs are needed to make sure that all panels treat duplications uniformly. Furthermore, there needs to be a better pre-meeting duplication check by the JAO, and the resulting identified duplications need to be presented to the panels in a more accessible way than is currently the case; some panels were overwhelmed by very long duplication lists (30k+ line spreadsheets). Duplication cases that were up for debate in panels included:
- observations of identical or highly overlapping pointings that target different objects
- science goals in different proposals that share some but not all targets
- proposals on objects in famous fields that are covered by a large survey in another proposal
- how to rank a proposal that has lost a large fraction of targets to a higher ranked proposal
- overlapping surveys of famous fields
- how to evaluate proposals with (partial) duplications against other proposals that the PI is on or leads, when no comment on or justification of why two overlapping proposals were submitted is given
In addition to duplications, we noted that there was disagreement within and between panels on how to evaluate observations that are very inefficient, i.e. where the proposed observing mode results in a very high overhead-to-total time ratio without any justification of why ALMA or the chosen observing mode is optimal. Finally, some panels were clearly overwhelmed by the number of proposals (100+) they had to review during the ARP.
Our recommendations are:
- Clearer guidelines (and better training of chairs) on duplications addressing all cases listed above.
- The JAO should consider facilitating a community-driven process to define the best surveys of famous fields.
- Mandatory reporting and justifications of self-duplications implemented in the OPT.
- More accessible and vetted presentations of automatically identified duplications by JAO.
- Limit the number of proposals discussed during the meeting to ~80 per panel.
- Increase the percentage of A ranked proposals to reduce resubmissions of B ranked proposals that have not been executed at the time of proposal submission.
- Include a justification box in the OPT for observations that are highly inefficient, e.g. spectral scans and very short integrations.
- Replace spectral scan mode with standard set-ups that mimic spectral scans until the spectral scan mode is more efficient.
- Prioritize implementation of subarrays to increase efficiency.
--------------------------------------------------------------------------------------------------
-duplication policy leaves room for interpretation, and can lead to wasted time trying to figure it out.
-not clear how to deal with proposals being carried over.
-small fraction of A ranked proposals, even at 25%.
-burden on panel reviewers
-How do rankings from panels get merged? What happens to a highly ranked proposal in Cycle N that was a resubmission of a proposal in Cycle N-1, if it gets observed between proposal submission and the end of observations in Cycle N-1?
-how to evaluate inefficient proposals?
Charge 3 Time Allocation of the Array
-EOC vs. science: there are subarrays right now. ALMA development projects need EOC time, which is now down to 10% time: can development projects get on the telescope using subarrays?
-science return for configuration schedules
-not all array elements are the same (12m, ACA, TP)
-impact of number of antennas: ALMA's descoping options from 10 years ago described image fidelity behavior with N_antennas. What are the likely scenarios for antenna array element availability?
-Can the steady state be only 2 or 3 antennas out for maintenance at any one time?
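As a rough illustration of why antenna availability matters for image fidelity (the scenario antenna counts below are hypothetical, not from the notes): an interferometer with N elements has N*(N-1)/2 instantaneous baselines, so losing a few antennas reduces uv coverage quadratically rather than linearly.

```python
# Illustrative sketch: instantaneous baseline count vs. number of
# available antennas. Antenna counts here are hypothetical examples.

def baselines(n_antennas: int) -> int:
    """Number of instantaneous baselines for n_antennas array elements."""
    return n_antennas * (n_antennas - 1) // 2

full = 50  # hypothetical nominal 12m-array size for this example
for n in (50, 47, 45, 40):
    lost = 1 - baselines(n) / baselines(full)
    print(f"{n} antennas: {baselines(n)} baselines ({lost:.0%} fewer than {full})")
```

For example, dropping from 50 to 45 antennas (10% of elements) removes about 19% of the baselines, which is why keeping only 2-3 antennas out for maintenance at a time has an outsized payoff.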
Charge 4 ALMA Development projects
-ADMIT had a communications breakdown at JAO; not the fault of the investigators. What gets in the way of deploying ALMA development projects?
-VLBI has to make the "network" available to the community before the ALMA Phasing Project can work at that wavelength. OK for 3 mm, but for 1.3 mm more problematic; non-standard sites, experiments vs. facilities.
Charge 5 Basic Performance of ALMA
-What is the efficiency of observing targets, versus time on sky?
-Cycle 2 completion statistics?
-Calibration overheads?
-Differing CASA versions needed to reduce different datasets, prevents combination and delays science. Impediment to mining the archive for science.
-Who is the UC rep on the NRAO CASA Users Committee? Is it Rachel Friesen?
I (RAO) checked, and here is the 2015 list of members:
2015 CASA User Committee Members
Thibault Cavalie (chair) 2014/04 NA | Daniel Jacobs 2014/04 NA
Rachel Akeson 2014/04 NA | Alexander Karim 2014/04 EU
Rachel Friesen 2014/04 NA | Kazushi Sakamoto 2014/04 EA
Lizette Guzman 2014/04 CL | David Wilner 2014/04 EU
Tomoya Hirota 2014/04 EA | Ximena Fernandez 2015/09 NA
--
RachelOsten - 2015-10-11