Pulsar Machine Status and Meeting Agenda
- Meeting participants:
- Glen Langston, Amy Shelton, Rich Lacasse, Paul Demorest, Ron DuPlain, Scott Ransom, John Ford
Project Status
- John -- Quarterly summaries and effort requests due Friday. Use Newsletter article as QS?
- NRAO Effort requests:
- John: 0%
- Randy/Jason: 50%
- Ron: 25%
- Scott, Paul, Glen: As required
- Randy and Jason
- Work continues on synchronized XAUI transfers. ADC data has been multiplexed together and transmitted as 32-bit words across two XAUI links (a packing sketch follows this list).
- BEE2 testing using BORPH to start the FPGA processes is not working correctly; inquiries have been sent to CASPER for help.
- New iBOBs and iADCs have shipped
- iBOB faceplates are in the shop for fabrication
- Amy
- Glen
- Glen will tackle the FPGA design that includes the PFB/FFT blocks for GUPPI and adapt it to the 43m spectrometer he is working on.
- CV Group: Paul, Ron, Rich, Scott
- 10 GbE experiments yielded > 1 GB/s throughput with UDP jumbo packets, at a packet loss rate of ~10^-3 (a measurement sketch follows this list).
- Work on design of software is underway, with software being borrowed from other projects wherever possible.
- Project will use the Bazaar version control system. It is a distributed VCS with CVS-like commands, but with easier handling of file renames, repository relocation, etc.
- The NFS root on radar for the bee2 is in /export/home/radar/cicadaroots/bee2std
- Ron is writing an MR for software
- WVU Group: Duncan, Patrick, Mitch, Brandon
- Unable to attend due to the press of end-of-semester work
- Have received notification that they have been accepted into the Xilinx XUP (Xilinx University Program). They will set up their workstation as soon as all of the software arrives. They have an iBOB, a Xilinx cable, and a power supply.
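As a toy illustration of the XAUI multiplexing noted above (Randy and Jason's item), the sketch below packs two 8-bit ADC sample streams, two samples per polarization, into 32-bit words. The byte ordering and sample widths are assumptions for illustration only; the real packing is done in gateware and may differ.

    import numpy as np

    def pack_adc_words(pol0, pol1):
        """Toy model of the multiplexing: interleave two 8-bit ADC sample
        streams (two samples per polarization) into 32-bit words. The byte
        layout below is an assumption, not the actual gateware's."""
        p0 = np.asarray(pol0, dtype=np.uint8).reshape(-1, 2)
        p1 = np.asarray(pol1, dtype=np.uint8).reshape(-1, 2)
        b = np.hstack([p0, p1]).astype(np.uint32)  # [p0_s0, p0_s1, p1_s0, p1_s1]
        return (b[:, 0] << 24) | (b[:, 1] << 16) | (b[:, 2] << 8) | b[:, 3]

    # Two 4-sample streams become two 32-bit words:
    print([hex(w) for w in pack_adc_words([1, 2, 3, 4], [5, 6, 7, 8])])
    # ['0x1020506', '0x3040708']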
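For the 10 GbE throughput and loss figures above, here is a minimal sketch of how such a loss measurement can be made with sequence-numbered UDP datagrams. The port, payload size, and 8-byte header are placeholders, not the actual test tooling.

    import socket
    import struct

    PORT, PAYLOAD = 60000, 8192  # placeholders; jumbo frames allow ~9 KB datagrams

    def send(host, n):
        """Blast n sequence-numbered UDP datagrams at the receiver."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        body = bytes(PAYLOAD - 8)
        for seq in range(n):
            sock.sendto(struct.pack("!Q", seq) + body, (host, PORT))

    def loss_rate(n_expected):
        """Receive until a 1 s lull; loss = 1 - fraction of sequence numbers seen."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", PORT))
        sock.settimeout(1.0)
        seen = set()
        try:
            for _ in range(n_expected):
                data, _ = sock.recvfrom(PAYLOAD)
                seen.add(struct.unpack_from("!Q", data)[0])
        except socket.timeout:
            pass
        return 1.0 - len(seen) / float(n_expected)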
Known problems
Project Plans for discussion
- Quarterly summary was written as a newsletter article. If anyone wants to add to it, please do. It is attached at the bottom of this page and will be submitted to the GB PM on Friday.
Configurable Instrument Collection for Agile Data Acquisition (CICADA)
CICADA, a development program to design and build instruments for data
collection and processing using off-the-shelf hardware, has been very active.
We have several concurrent projects underway, including a new Pulsar
Processor, Graphical Processing Unit (GPU) cluster research, and
Spectrometer design work.
We began the program in collaboration with the University of
Cincinnati; the first hardware project completed under it is the event
capture system developed by Glen Langston and West Virginia
University students for deployment on the NRAO 43-meter telescope. The
system was built with FPGA technology from the CASPER group at UC Berkeley
to capture dual-polarization, 1 GHz bandwidth bursts of data, using a
triggering algorithm to identify short-duration events. The project design
and initial tests are documented at
https://safe.nrao.edu/wiki/bin/CICADA/CicadaNotes.
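The triggering idea can be sketched in a few lines. Below is a hypothetical threshold detector over power samples, not the deployed algorithm (which is documented at the wiki link above); the n_sigma and min_samples parameters are placeholders.

    import numpy as np

    def detect_events(power, n_sigma=6.0, min_samples=4):
        """Flag runs of consecutive samples whose power exceeds the mean by
        n_sigma standard deviations; runs shorter than min_samples are
        ignored. A stand-in for the real 43m trigger, for illustration."""
        mu, sigma = power.mean(), power.std()
        above = power > mu + n_sigma * sigma
        events, start = [], None
        for i, hot in enumerate(above):
            if hot and start is None:
                start = i
            elif not hot and start is not None:
                if i - start >= min_samples:
                    events.append((start, i))
                start = None
        if start is not None and len(above) - start >= min_samples:
            events.append((start, len(above)))
        return events  # list of (start, end) sample-index pairs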
In collaboration with NRAO E2E Operations, our next project to be
deployed will be the Green Bank Ultimate Pulsar Processing Instrument
(GUPPI). GUPPI will also be built with FPGA technology from the CASPER
group at UC Berkeley. Plans are to deploy the first version of GUPPI in
January 2008 as a Spigot replacement, with coherent dedispersion over 800
MHz of bandwidth to follow in a second version in June.
Both of these instruments will begin life as expert user instruments until
software development resources become available to complete the integration
into the GBT system. We expect this to occur in FY 2009. The GUPPI project
held a workshop in late October to get the design team together with
students and scientists from West Virginia University, the University of
California at Berkeley, and Caltech. Work is underway in Green Bank,
Charlottesville, and West Virginia University on the hardware, software, and
gateware.
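Coherent dedispersion removes the frequency-dependent interstellar delay by deconvolving the dispersion chirp from the raw voltage data before detection. A minimal single-block sketch of the standard frequency-domain method follows; a streaming system like GUPPI would use overlap-save on continuous data, and the chirp sign depends on the band setup and convention.

    import numpy as np

    KDM = 4.148808e9  # dispersion constant, for frequencies in MHz

    def dedisperse(voltages, dm, f0_mhz, bw_mhz):
        """Multiply the spectrum of complex baseband voltages by the
        inverse of the interstellar dispersion chirp, then invert the
        transform. Single block only; real systems use overlap-save."""
        n = len(voltages)
        f = np.fft.fftfreq(n, d=1.0 / bw_mhz)  # offset from band center, MHz
        phase = 2 * np.pi * KDM * dm * f**2 / (f0_mhz**2 * (f0_mhz + f))
        chirp = np.exp(-1j * phase)  # sign convention varies with setup
        return np.fft.ifft(np.fft.fft(voltages) * chirp)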
Also in collaboration with NRAO E2E Operations, work is underway in
Green Bank and Charlottesville on research into acceleration of numerical
computation using GPUs. In contrast to general-purpose CPUs, GPUs are
optimized for fast parallel computation, as is needed for computer graphics
applications. Utilizing these GPUs holds the promise of a 10 to 100 times
speedup for certain numerically intensive algorithms often employed in radio
astronomy, along with greatly reduced power consumption per GFLOP. We have
assembled a cluster of eight GPU cards in four computers, all tied together
with multiple Gigabit Ethernet links. A 10 Gigabit Ethernet connection is
also available to each of these cluster machines. Several members of the
team from both Green Bank and Charlottesville attended the AstroGPU
conference in Princeton, NJ, as well as the IEEE/ACM Supercomputing '07
conference, to become more familiar with leading-edge scientific computing.
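To make the anticipated speedup concrete, the sketch below times a batch of FFTs, a core pulsar-processing kernel, on the CPU and on a GPU. It assumes the CuPy library purely as a stand-in GPU toolkit (not necessarily what the team uses); measured speedups depend on problem size and host-to-device transfer costs.

    import time
    import numpy as np
    import cupy as cp  # assumed GPU toolkit, used here only for illustration

    def time_ffts(xp, n_fft=4096, n_batch=4096):
        """Time n_batch FFTs of length n_fft with numpy (CPU) or cupy (GPU)."""
        data = xp.random.standard_normal((n_batch, n_fft)).astype(xp.complex64)
        t0 = time.time()
        xp.fft.fft(data, axis=1)
        if xp is cp:
            cp.cuda.Stream.null.synchronize()  # wait for GPU work to finish
        return time.time() - t0

    cpu_t = time_ffts(np)
    gpu_t = time_ffts(cp)
    print("CPU %.3f s  GPU %.3f s  speedup %.1fx" % (cpu_t, gpu_t, cpu_t / gpu_t))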