GUPPI Design Documents
These documents are the working versions of the design. Once the design is complete, they will be turned into static documents.
GUPPI System Design Documents
System Block Diagram
GUPPI Hardware Design Documents
Pictures from the workshop
Note that the blue check marks on the whiteboard pictures mean that someone was confident that the blocks exist, either in the CASPER libraries or in their personal libraries, and work as expected.
Formal design documents
- iBOB Hardware Block Diagram
- iBOB FPGA Simulink Diagram
- iBOB Design Documentation
GUPPI Software Design Documents
Pictures from the workshop
Formal design documents
Software Specifications Development
We started this part by having Scott Ransom draw on the board what he thought the user interface needed to supply. The rest of the scientists contributed their ideas, based on the many pulsar machines we have seen here at Green Bank and on their experience elsewhere. Here is a picture of his drawing, which does not capture much of the discussion. A list of the software interface parameters is included in this Manager Interface Document. The software requirements for each part of the system are sketched below, but need to be expanded in separate documents. Note that a software simulator must be supplied for each part of the system, so that progress can be made on the software independently of the hardware.
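As an illustration of such a simulator, the sketch below stands in for one part of the system (an iBOB command interface): it keeps a dictionary of software registers and answers simple command lines over TCP, so client software can be exercised before the hardware exists. The "regread"/"regwrite" command syntax here is an assumption for illustration only, not the actual tinyshell command set.

```python
import socketserver
import threading


class TinyshellSimulator(socketserver.ThreadingTCPServer):
    """In-process stand-in for a board's command-line network interface.

    Holds a dictionary of named software registers and answers
    illustrative "regread <name>" / "regwrite <name> <value>" lines.
    """

    allow_reuse_address = True

    def __init__(self, addr, registers):
        self.registers = dict(registers)
        super().__init__(addr, _Handler)


class _Handler(socketserver.StreamRequestHandler):
    def handle(self):
        # One command per line; reply with one line per command.
        for raw in self.rfile:
            parts = raw.decode().split()
            if not parts:
                continue
            if parts[0] == "regread" and parts[1] in self.server.registers:
                reply = str(self.server.registers[parts[1]])
            elif parts[0] == "regwrite" and len(parts) == 3:
                # Accept decimal or 0x-prefixed hex values.
                self.server.registers[parts[1]] = int(parts[2], 0)
                reply = "OK"
            else:
                reply = "ERR"
            self.wfile.write((reply + "\n").encode())


def start_simulator(registers, port=0):
    # Bind to an ephemeral port by default so several simulated
    # boards can run side by side on one development machine.
    sim = TinyshellSimulator(("127.0.0.1", port), registers)
    threading.Thread(target=sim.serve_forever, daemon=True).start()
    return sim
```

The same pattern (a small TCP server holding fake state) would apply to simulators for the BEE2 and the data capture process.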
Eventually, the user interface will be provided through the standard GBT ASTRID system. Configuration will be provided by configtool, and a CLEO screen will be provided. In the meantime, while the system is being developed and tested, a lightweight interface will be used to manage the machine for engineering and first science. It will consist of a lightweight client that connects to the machine over the network to set and get parameter values, and which can display engineering data. Note that this lightweight interface will be independent of the GBT M&C system, and so is portable to any other observatory. The working document for the user interface can be found here.
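The set/get side of the lightweight client can be sketched as a small Python class. The line-oriented SET/GET protocol assumed here is an illustration only; the actual wire protocol is still to be defined in the working document.

```python
import socket


class ParameterClient:
    """Minimal set/get client for the lightweight engineering interface.

    Assumes a hypothetical line-oriented text protocol:
        "SET <name> <value>\n"  ->  "OK\n"
        "GET <name>\n"          ->  "<value>\n"
    One connection per request keeps the client stateless and portable.
    """

    def __init__(self, host, port):
        self.host = host
        self.port = port

    def _request(self, line):
        # Open a fresh connection, send one command line, read one reply.
        with socket.create_connection((self.host, self.port)) as sock:
            sock.sendall((line + "\n").encode())
            return sock.makefile().readline().strip()

    def set(self, name, value):
        return self._request(f"SET {name} {value}")

    def get(self, name):
        return self._request(f"GET {name}")
```

Because the client speaks only a plain TCP text protocol, it carries no dependency on the GBT M&C system, which is what makes it portable to other observatories.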
The data capture will be performed by a single multiprocessor host machine with a large disk array. The data rate for this initial design is only 100 MB/s sustained, which is rather pedestrian by modern RAID standards; our machine is claimed to support a 500 MB/s sustained rate (hardware RAID in a RAID 6 configuration). As part of the data capture function, this machine is required to:
- Format the data into PSRFITS
- Grab samples of the data stream and supply quick-look plots to the user interface via the monitor system
- Control the starting and stopping of data writing
- Run the software controlling the entire machine, including:
- The M&C Manager
- The network interface server (the "RPC server" in the drawings and discussions)
- Data processing and capture processes
The working document for the Data Capture can be found here.
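As a sanity check on the quoted rate, a quick back-of-the-envelope calculation using the base design planned below (4096 channels, 2 polarizations, 4-bit output, one accumulated spectrum every 50 microseconds) gives roughly 82 MB/s, consistent with the ~100 MB/s figure:

```python
# Back-of-the-envelope sustained data rate for the base GUPPI design.
CHANNELS = 4096
POLARIZATIONS = 2
BITS_PER_SAMPLE = 4
ACCUMULATION_TIME_S = 50e-6  # one output spectrum every 50 microseconds

# Each output spectrum: 4096 channels x 2 pols x 4 bits = 4096 bytes.
bytes_per_spectrum = CHANNELS * POLARIZATIONS * BITS_PER_SAMPLE // 8

# Sustained rate in MB/s (1 MB = 10^6 bytes here).
rate_mb_per_s = bytes_per_spectrum / ACCUMULATION_TIME_S / 1e6

print(bytes_per_spectrum)  # 4096
print(rate_mb_per_s)       # 81.92
```

Extensions to more bits or full Stokes parameters scale this rate up proportionally, which is why headroom in the RAID system matters.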
The Hardware Control function will be performed by the network interface server, which will pass parameters down to the iBOBs and the BEE2. The connections to the data capture host will be via a private Ethernet consisting of the iBOBs, the BEE2 host port, and the data capture server host port. A connection into the system will be through a second gigabit port on the data capture server computer. Functionally, the software will write parameters to the iBOBs using the tinyshell command interface, and will get monitor data using the tinyshell read and write functions as appropriate. Reading and writing all defined software registers and BRAMs will be supported, preferably by generating an interface library automatically when a new FPGA design is compiled. The working document for the iBOB interface can be found here.
The document for the BEE2 interface may be found here.
Assignments and Priorities
As for priorities, we decided to first build a 4096-channel, 2-polarization, 4-bit-output machine with a 50-microsecond accumulation time. Once this base machine is in production (we are aiming for January), we will work on extending it to more bits, fewer channels, and full Stokes parameters. Specifically:
- Randy, Jason, and Glen will work on transferring samples synchronously between the FPGAs.
- John will maintain the development systems as needed.
- We will document blocks that need to be designed and farm this work out to the WVU students.
- We will consult with other CASPER users to find blocks that we can use, and we will contribute the blocks we develop back to the CASPER community.
- Ron Duplain will work on building the interface to the system. This will include a lightweight client for first light and debugging, and will support full GBT M&C integration when that comes along. A model based on our Caltech Backend collaboration will be used. Ron will deliver a working system by January that will allow first-light commissioning and science observations.
- Patrick Brandt, John Ford and Glen Langston will write the software for the FPGA communications. This will include a simulator for each of the FPGA communications channels to be used.
- Scott Ransom, Paul Demorest, Walter Brisken, and Ron Duplain will build the software to run on the data collection machine.
- Rick Fisher, Rich Lacasse, Joe Brandt, Amy Shelton, CASPER people, Glenn Jones, GB System Administrators, and the WVU pulsar group will serve as consultants, helping with problems we encounter along the way and reviewing the program.