GUPPI Design Workshop Results

Attendees:
  • Guests
    • Glenn Jones, Caltech
    • Peter McMahon, Berkeley, KAT
    • Joeri L. via telecon
  • West Virginia University
    • Maura McLaughlin
    • Dunc Lorimer
    • Mitch Mickaliger
  • NRAO CV
    • Walter Brisken
    • Paul Demorest
    • Ron Duplain
    • Rick Fisher
    • Rich Lacasse
    • Scott Ransom
  • NRAO Green Bank
    • Joe Brandt
    • Mark Clark
    • John Ford
    • Glen Langston
    • Randy McCullough
    • Melinda Mello
    • Jason Ray
    • Amy Shelton
    • Karen O'Neil

Purpose

A two-day workshop to get the entire development team together for uninterrupted, concerted effort.

Deliverables

The following list of deliverables was put forth before the meeting. The tag at the end of each item shows its status. In my (John Ford) opinion, all of these were addressed, some better than others; all in all, the workshop was an extremely valuable exercise.
  1. An end to end detailed hardware/gateware/firmware design at the block diagram level for the spigot replacement modes. DONE
  2. A detailed software requirement document to match the above. WIP
  3. A conceptual design for some possible approaches to the Coherent Dedispersion backend. WIP
  4. A list of items to be delivered for each of the three phases of the project, along with an estimate of the people and money resources needed, and the calendar time needed to produce them. Obviously only Phase 1 could be estimated with any degree of certainty. WIP
  5. An estimate by the team members of the amount of time they can spend on the project based on their workload. DONE
  6. A list of possible collaborators to contact. DONE

Plan/Schedule for the Workshop

Here's a draft plan for the meeting. All should feel free to comment and add to it.

October 29th, begin 10:00 A.M.

Project News

Short presentations on work in progress were given by the presenters below. A few highlights from my notes are tossed in.

Glenn Jones - Caltech

  • Glenn's current focus is a high-resolution spectrometer for the DSN 34-meter antenna, for educational use.
  • Working on infrastructure, including some nice test harnesses for block testing.
  • Fixed a bug in the CASPER library PFB block.
  • Showed the "cram/uncram" blocks.
  • Built a vector accumulator that works at 250 MHz.
  • Noted that we need a community repository where we can all read/write.
  • Future plans include a transient capture mode, and a pulsar timing mode.

Peter McMahon - Berkeley/KAT

  • Peter has been working on a pulsar machine for KAT, as well as upgrades to the ASP series of pulsar machines, using an iBOB front end feeding a 10 GbE switch, which then distributes the samples to a cluster of PCs via 1 GbE links.
  • Told us some lessons learned
    • Configure the HP switch for jumbo packets
    • Check the MAC addresses and don't put random addresses in the 10 GbE MACs, as the switch will ignore them. (Help! I forgot the actual illegal numbers.)
    • Use short cables on the BEE2 wherever possible.
    • Make sure to set the MTU and enable jumbo frames on all client machines (see the sketch after this list).
    • Abandon use of the "pink" blocks from the astro library.
  • Showed us the "grace" program for plotting. It was installed and integrated into Glen Langston's event capture stuff by the end of the day.
  • Showed us the reorder block that uses block RAMs.
    • Could be extended to use DRAM or SRAM.
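
Following up on the jumbo-frame lesson above, here is a minimal sketch, assuming Linux client machines, that checks whether the data-path NICs have a jumbo-sized MTU. The interface names are placeholders for the actual data-path interfaces:

```python
import os

JUMBO_MTU = 9000
DATA_IFACES = ["eth1", "eth2"]  # hypothetical data-path interface names

for iface in DATA_IFACES:
    path = "/sys/class/net/%s/mtu" % iface
    if not os.path.exists(path):
        print("%s: no such interface" % iface)
        continue
    with open(path) as f:
        mtu = int(f.read())
    if mtu < JUMBO_MTU:
        print("%s: MTU %d is too small; raise it to %d for jumbo frames" % (iface, mtu, JUMBO_MTU))
    else:
        print("%s: MTU %d OK" % (iface, mtu))
```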

Glen Langston - NRAO Green Bank

  • Presented the 43m Event Capture device built over the summer by Glen, Brandon Rumberg, and Patrick Brandt.
  • Showed the system operating later in the afternoon.
  • See Cicada Notes series for more info on this.

Specification Review and development

  • Rick Fisher pointed out that we have no specs for our input signals. We agreed that our input will initially be limited by the analog signals available in the equipment room from the Spectrometer's Analog Filter Rack. This limits the available analog passbands to:
    • 800-1600 MHz
    • 50-100 MHz
    • 25-37.5 MHz
Our intention is to use the 800-1600 MHz bandpass for initial operations.
  • We agreed that the first machine will handle only one case; the bandwidth, number of channels, and number of bits will not be configurable at first light. There is a spec for that first machine in the specifications.
  • The spec on RFI mitigation is weak and needs to be fleshed out if it is to be implemented. It was agreed that nothing will be implemented in the immediate future.

Hardware Block Diagram Development

We began this section with the iBOB/iADC and continued along until we ended with our output on the 10 Gb Ethernet link going out to our data collection machine. As we are a bit pressed for time in documenting this workshop, we simply photographed the whiteboard diagrams and have not yet taken the time to reduce them to real drawings or Simulink diagrams. Each section below describes a sketch and possible features to be added or subtracted.

Overall System Hardware Design

The overall initial generation of GUPPI will contain two iADCs, two iBOBs, and a BEE2, connected with XAUI links. The output of the BEE2 will be sent over 10 Gb Ethernet to a data collection and processing machine. Here's a view of the proposed design.

iADC/iBOB Design

The initial generation of GUPPI will consist of two iADC modules and two iBOBs. On each iBOB, data will be sampled at 1600 MHz, packaged onto the twin XAUI links, and sent to the BEE2 module. The iBOBs will contain a small spectrometer to allow astronomers and engineers to see the raw input spectra, and an ADC scope to allow system tuning and debugging. A 1 PPS timing signal will be used to trigger and synchronize the operation of the system. A sampling clock will be input to the two iADC modules. Here's a diagram of the system.
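
As a sanity check on the twin XAUI links, here is the link-budget arithmetic. The 1600 MHz sample rate is from the design above; the 8-bit sample width is an assumption (the iADC's native resolution):

```python
# Link-budget arithmetic for one iBOB.
sample_rate = 1600e6    # samples per second (from the design above)
bits_per_sample = 8     # assumed iADC output width

total_gbps = sample_rate * bits_per_sample / 1e9
print("Raw rate per iBOB: %.1f Gb/s" % total_gbps)    # 12.8 Gb/s

# 12.8 Gb/s exceeds the nominal 10 Gb/s of a single XAUI link, which is
# why each iBOB drives twin links at 6.4 Gb/s apiece.
print("Per XAUI link: %.1f Gb/s" % (total_gbps / 2))  # 6.4 Gb/s
```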

BEE2 Overall Design

The BEE2 design as conceived in the workshop uses four of the five FPGAs. The fifth FPGA is held in reserve in case we run out of space, or possibly for a digital downconverter stage for narrowing the bandwidths from the samplers. Communications between FPGAs will be over the board traces instead of over the XAUI links.

Control FPGA

The control FPGA will be used to control the BEE2 and to run the BORPH operating system. No DSP functions are allocated to this FPGA right now. The image will be rebuilt as needed to allow the system to start up on its own. It may be possible to use an NFS root file system so that we do not have to modify the stock compact flash image.

User FPGA 1

The User 1 FPGA will contain the Stokes parameter calculation, the vector accumulator, and the output packing, processing, and 10 Gb Ethernet interface. Here's a diagram showing the concept.
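
Here is a rough floating-point sketch of the Stokes products User FPGA 1 forms from the two polarization spectra. The sign and ordering conventions are assumptions, and the real gateware works in fixed point:

```python
import numpy as np

# x and y are complex spectra (one value per channel), one from each
# polarization, as delivered by User FPGAs 2 and 4.
def stokes(x, y):
    xy = x * np.conj(y)
    I = np.abs(x)**2 + np.abs(y)**2
    Q = np.abs(x)**2 - np.abs(y)**2
    U = 2.0 * xy.real
    V = 2.0 * xy.imag
    return I, Q, U, V

# The vector accumulator then sums these products over all FFT frames in
# one accumulation (e.g. 50 microseconds) before packing and output.
```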

User FPGA 2

User FPGA 2 is to be used to process one polarization of the received signal. The data is run through the PFB/FFT to get the spectrum, a bit or two is chopped off the output, and the rest is sent over the communications channels to User FPGA 1 for combining with the other polarization. Here's the proposed scheme.
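
For reference, here's a floating-point sketch of the PFB/FFT front end and the bit-chopping step. The tap count, window, and output width are illustrative assumptions, not the actual gateware parameters:

```python
import numpy as np

def pfb_fft(samples, nchan=4096, ntaps=4):
    # Polyphase front end: weight ntaps blocks of nchan samples with a
    # windowed sinc, sum them, then take one FFT of length nchan.
    win = (np.sinc(np.linspace(-ntaps / 2.0, ntaps / 2.0, ntaps * nchan)) *
           np.hamming(ntaps * nchan))
    weighted = samples[:ntaps * nchan] * win
    summed = weighted.reshape(ntaps, nchan).sum(axis=0)
    return np.fft.fft(summed)

def requantize(spectrum, nbits=8):
    # "Chop a bit or two off": rescale and round to a narrower word
    # before sending the channels across to User FPGA 1.
    scale = (2**(nbits - 1) - 1) / np.max(np.abs(spectrum))
    return np.round(spectrum.real * scale) + 1j * np.round(spectrum.imag * scale)
```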

User FPGA 3

This FPGA is unused in our initial design. It may well be used before we're done...

User FPGA 4

This one is used for the other polarization; see the description of User FPGA 2 above.

Data Collection Machine

The data collection machine will be used to control the hardware, collect the data off the 10 Gb Ethernet port, format the data, write the data, and create quick-look plots from snapshots of the data flowing past. Here's a rough diagram of the system.

Software Specifications Development

We started off this part by having Scott Ransom draw on the board what he thought the user interface needed to supply. The rest of the scientists put in their ideas, based on the plethora of pulsar machines we've seen here at Green Bank and their experience elsewhere. Here's a picture of his drawing, which does not capture much of the discussion. A list of the software interface parameters is included in this Manager Interface Document. The software requirements for each part of the system are sketched below, but need to be expanded in separate documents. Note that a software simulator must be supplied for each part of the system, so that progress can be made on the software independently from the hardware.
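
As one example of what such a simulator might look like, here is a minimal sketch of a stand-in for a single FPGA communications channel. The regread/regwrite command syntax and the port number are assumptions, chosen to mirror the tinyshell sketch later on this page:

```python
import socket

# Keep register values in a dictionary and answer regread/regwrite
# commands over TCP, so client software can be exercised with no hardware.
def simulator(port=2300):  # hypothetical port
    regs = {}
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        for line in conn.makefile():
            parts = line.split()
            if len(parts) == 3 and parts[0] == "regwrite":
                regs[parts[1]] = parts[2]
                conn.sendall(b"OK\n")
            elif len(parts) == 2 and parts[0] == "regread":
                conn.sendall((regs.get(parts[1], "0") + "\n").encode())
        conn.close()
```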

User Interface

Eventually, the user interface will be provided through the standard GBT ASTRID system. Configuration will be provided by configtool. A Cleo screen will be provided. In the meantime, though, as the system is being developed and tested, a lightweight interface for engineering and first science will be used to manage the machine. It will consist of a lightweight client that can connect via network to the machine to set/get parameter values, and which can display engineering data. It should be noted that this lightweight interface will be independent of the GBT M and C system, and so is portable to any other observatory. The working document for the user interface can be found here.
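
To make the idea concrete, here is a minimal sketch of such a lightweight client. The host name, port, line protocol, and parameter names are all assumptions; the real interface is defined in the working document:

```python
import socket

class GuppiClient:
    """Set/get named parameters over a simple line-based protocol."""

    def __init__(self, host="guppi-ctl", port=6023):  # hypothetical endpoint
        self.sock = socket.create_connection((host, port))
        self.f = self.sock.makefile("rw")

    def set(self, name, value):
        self.f.write("SET %s %s\n" % (name, value))
        self.f.flush()
        return self.f.readline().strip()  # e.g. "OK"

    def get(self, name):
        self.f.write("GET %s\n" % name)
        self.f.flush()
        return self.f.readline().strip()

# Example usage (parameter names are illustrative):
#   c = GuppiClient()
#   c.set("ACC_LEN", "1023")
#   print(c.get("NCHAN"))
```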

Data Capture

The data capture will be performed by a single multiple-processor host machine with a large disk array. The data rate for this initial design is only 100 MB/s sustained, which is rather pedestrian by modern RAID standards (the arithmetic behind this figure is sketched at the end of this subsection). Our machine is claimed to support a 500 MB/s sustained rate (hardware RAID in RAID 6 configuration). As part of the data capture function, this machine is required to:
  • Format the data into PSRFITS
  • Grab samples of the data stream and supply quick-look plots to the user interface via the monitor system
  • Control the starting and stopping of data writing
  • Run the software controlling the entire machine.
    • The M&C Manager
    • The network interface server (the "RPC server" in the drawings and discussions)
    • Data processing and capture processes
The working document for the Data Capture can be found here.
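
As a quick sanity check, here is the arithmetic behind the ~100 MB/s figure, using the base-mode parameters agreed on under Assignments and Priorities below (4096 channels, 2 polarizations, 4-bit samples, 50 microsecond accumulations):

```python
# Sustained output data rate for the base machine.
nchan = 4096
npol = 2
nbits = 4
t_acc = 50e-6  # seconds per accumulation

bytes_per_spectrum = nchan * npol * nbits / 8.0
rate = bytes_per_spectrum / t_acc
print("Sustained rate: %.1f MB/s" % (rate / 1e6))  # ~82 MB/s, order 100 MB/s
```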

Hardware Control

The Hardware Control function will be performed by the network interface server. The server will pass parameters down to the iBOBs and the BEE2. The connections to the data capture host will be via a private Ethernet consisting of the iBOBs, the BEE2 host port, and the Data Capture Server host port. A connection into the system will be through a second gigabit port on the data capture server computer. Functionally, the software will write parameters to the iBOBs using the tinyshell command interface, and will get monitor data using the tinyshell read and write functions as appropriate. Reading and writing all defined software registers and BRAMs will be supported, preferably by generating an interface library automatically when a new FPGA design is compiled. The working document for the iBOB interface can be found here. The document for the BEE2 interface may be found here.
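
As an illustration, here is a minimal sketch of pushing a parameter to an iBOB over the private Ethernet. The hostname and register name are placeholders, and while regread/regwrite follow the usual tinyshell command convention, the exact spellings should be checked against the actual firmware build:

```python
import socket

def tinysh(cmd, host="ibob1", port=23, timeout=2.0):  # hypothetical host
    # Open a fresh connection, send one command line, and return whatever
    # the tinyshell prints back within the timeout.
    s = socket.create_connection((host, port), timeout)
    s.sendall(cmd.encode() + b"\n")
    chunks = []
    try:
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    except socket.timeout:
        pass
    s.close()
    return b"".join(chunks).decode(errors="replace")

# Write a software register, then read it back (register name illustrative):
print(tinysh("regwrite acc_len 1023"))
print(tinysh("regread acc_len"))
```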

GUPPI Phase II approaches

Clusters -- Build, buy, or beg?

A wide-ranging discussion on clustering at NRAO in general, and GB in particular, took place. One problem is that there is currently a lack of system administration help in Green Bank. We think that our computing division would be very receptive to dealing with clusters if we give them sufficient support. In particular, the group as a whole is receptive to the replacement of CGSR2 with a cluster of machines from Matthew Bailes if it can be arranged.

The point was raised that we have had GASP and CGSR2 in GB for many years without taking advantage of them; we should ensure that there is sufficient interest, support, and publicity to use a cluster that might otherwise sit idle, taking up space and power.

GPU's -- What seems feasible, what's not

Several of us are heading off to the AstroGPU conference in Princeton, and several of us are headed to the Supercomputing '07 conference in November. I think we may learn a lot on those two trips that will help guide us in the future. We have also installed a cluster of GPU machines for use by anyone who has an interest in this technology. It seems that the performance gap is closing with the advent of multi-core general-purpose CPUs.

FPGA DSP -- Where to begin?

We didn't discuss this much, but we think that this should be pursued in spite of any clustering technology that might show up on our doorstep. This technology holds out the promise of greater reliability, less power and space consumption, and other benefits, so it is worth pursuing in its own right. It was deemed probable that algorithms could be implemented on BEE2 class machines that could do the job. Much work remains to see if that is the case.

Assignments and Priorities

As far as priorities, it was decided that we would build a 4096-channel, 2-polarization, 4-bit output, 50-microsecond accumulation time machine. Once this base machine is in production (we are aiming for January), we will work on extending it to more bits, fewer channels, and full Stokes parameters. Specifically:
  • Hardware
    • Randy, Jason, and Glen will work on transferring samples synchronously between the FPGAs.
    • John will maintain the development systems as needed.
    • We will document blocks that need to be designed and farm this work out to the WVU students.
    • We will consult with other CASPER users to find blocks that we can use, and will contribute the blocks we develop back to the CASPER community.

  • Software
    • Ron Duplain will work on building the interface to the system. This will include a lightweight client for first light and debugging, and will support full GBT M&C integration when that comes along. A model based on our Caltech Backend collaboration will be used. Ron will deliver a working system by January that will allow first-light commissioning and science observations.
    • Patrick Brandt, John Ford and Glen Langston will write the software for the FPGA communications. This will include a simulator for each of the FPGA communications channels to be used.
    • Scott Ransom, Paul Demorest, Walter Brisken, and Ron Duplain will build the software to run on the data collection machine.
  • Consultations
    • Rick Fisher, Rich Lacasse, Joe Brandt, Amy Shelton, CASPER people, Glenn Jones, GB system administrators, and the WVU pulsar group will serve as consultants to help with problems that we encounter along the way, and to review the program.
Topic attachments
  • 43m.sce (2 K, 2007-11-01 10:42)
  • user_interface_notes.JPG (271 K, 2007-11-01 10:46)
  • serverjobs.JPG (288 K, 2007-11-01 10:46)
  • pfb_fft_fpga.JPG (304 K, 2007-11-01 10:17)
  • ibob_bee2.JPG (322 K, 2007-11-01 10:44)
  • stokes_vacc_output.JPG (329 K, 2007-11-01 10:45)
  • systemwidecommdiagram2.JPG (334 K, 2007-11-01 10:45)