Do It Yourself GUPPI
A how-to guide to building an advanced pulsar processing machine
Introduction
The promise of a CASPER-based project is that others can build on what we have accomplished. Our machine differs from many of the other CASPER machines in that it is meant to be tightly integrated into the GBT, yet it is designed so that others can use it at other telescopes without "drinking the GBT koolaid." That is, the instrument is decoupled from telescope-specific software and hardware. So why would anyone want to build their own GUPPI? It offers:
- 100, 200, 400, and 800 MHz bandwidths (not sampling rates!)
- All 4 Stokes parameters in slow-dump modes, where "slow dump" means a dump time >= 40.96 microseconds
- Stokes I and Q available in fast-dump modes; at all bandwidths below 800 MHz, no accumulations are necessary
- Software folding and decimation in time and frequency available to reduce disk requirements
- Standard CASPER signal processing and "yellow blocks" used throughout
- Runs on standard BEE2 and iBOB hardware
- Software for running the machine is provided in C and Python, and runs on a 64-bit Linux host.
- Source code for the Simulink models as well as the support software
- Extensive testing and verification has been done.
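A little arithmetic shows where the 40.96 microsecond slow-dump limit comes from, assuming the 800 MHz mode channelizes the band into 2048 channels and accumulates 16 spectra per dump (the channel and accumulation counts are illustrative assumptions, not stated above):

```python
# Sketch of the slow-dump time arithmetic. The 2048-channel and
# 16-accumulation figures are assumptions for illustration.
def dump_time_us(bw_mhz, nchan, naccum):
    """Time per accumulated spectrum, in microseconds."""
    spectrum_us = nchan / bw_mhz  # one spectrum: nchan samples at bw_mhz
    return spectrum_us * naccum

# At 800 MHz, one 2048-channel spectrum takes 2.56 us;
# accumulating 16 spectra gives the 40.96 us minimum slow-dump time.
print(dump_time_us(800, 2048, 16))  # → 40.96
```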
The hardware
The GUPPI hardware consists of 2 iADC boards, 2 iBOB boards, a BEE2, an AMD Opteron Linux host, a clock synthesizer, and an IF conditioner. Each of these components is covered below.
The iADC
Each iADC is driven by an iBOB in 2x (interleaved) mode, producing two samples per clock cycle, so clocking the ADC at 800 MHz yields 1600 Msamples per second. The iADCs are standard CASPER devices.
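As a quick sanity check of the interleaved-sampling arithmetic:

```python
# In 2x (interleaved) mode the iADC digitizes two samples per clock
# cycle, so the sample rate is twice the clock frequency.
def sample_rate_msps(clock_mhz, samples_per_clock=2):
    return clock_mhz * samples_per_clock

print(sample_rate_msps(800))  # → 1600
```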
The iBOB
The iBOB is programmed with a standard firmware that simply drives the iADC, packages the data into 2 XAUI streams, appends a sample alignment counter, and sends the data over dual XAUI links to the BEE2. No signal processing is done on the iBOB.
The iBOB is mounted in an ELMA enclosure part number 39C02AD118Y3HC1X. A few modifications were made to this enclosure.
The front panel of the enclosure was modified to accommodate the JTAG and RS232 connections, along with the DIP switch to set the IP address. This is shown in this drawing:
Below are the schematics for the IP address setting, JTAG, and RS232:
The lid of the enclosure was modified to accommodate an additional cooling fan for the FPGA, shown in this drawing:
This custom face plate and mounting plate are used to mount the iBOB into the chassis.
This schematic shows the miscellaneous internal wiring necessary on the iBOB chassis:
The BEE2
The BEE2 is programmed with three different personalities in three of the FPGAs. The fourth user FPGA is currently unused. Software running on the BEE2 loads the proper personality into each FPGA for the given operating mode.
The Linux host
The host contains a 10 GbE interface that reads the data from the BEE2 and puts it into memory for the rest of the software to process. Hardware RAID can write data to disk at sustained rates of about 300 MB/sec. A multi-core machine is used for the host, which allows processing to run concurrently with reading from the Ethernet and writing the disk files. A quad-processor, dual-core Opteron system was chosen due to the increased memory and PCI Express bandwidth available for this data-intensive application.
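To see how the ~300 MB/sec RAID figure relates to the data rates, here is a rough sketch, assuming a 2048-channel, 8-bit, full-Stokes slow-dump mode (all three parameters are illustrative assumptions, not taken from the text above):

```python
# Rough disk-rate estimate for a slow-dump mode. The 2048-channel,
# 4-product, 8-bit parameters below are assumptions for illustration.
def disk_rate_mb_s(nchan, nproducts, bytes_per_sample, dump_time_us):
    bytes_per_dump = nchan * nproducts * bytes_per_sample
    return bytes_per_dump / dump_time_us  # bytes per us == MB per s

# One full-Stokes dump every 40.96 us comes to about 200 MB/s,
# comfortably inside the ~300 MB/s the hardware RAID sustains.
rate = disk_rate_mb_s(2048, 4, 1, 40.96)
```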
The Clock Synthesizer
The clock synthesizer has an IEEE-488 interface that allows control from the software on the host. The clock rate can be set to 100, 200, 400, or 800 MHz.
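Programming the synthesizer from the host might look something like the sketch below. The SCPI-style command string and the use of pyvisa are assumptions on my part; the actual command set depends on the synthesizer model:

```python
# Hypothetical sketch of setting the clock over IEEE-488. The command
# syntax ("FREQ ... MHZ") and the GPIB address are assumptions.
VALID_CLOCKS_MHZ = (100, 200, 400, 800)

def clock_command(freq_mhz):
    """Build the (assumed) frequency-set command for a valid GUPPI clock."""
    if freq_mhz not in VALID_CLOCKS_MHZ:
        raise ValueError(f"unsupported clock: {freq_mhz} MHz")
    return f"FREQ {freq_mhz} MHZ"

# Sending it might look like (untested, pyvisa and GPIB address assumed):
#   import pyvisa
#   inst = pyvisa.ResourceManager().open_resource("GPIB0::19::INSTR")
#   inst.write(clock_command(800))
print(clock_command(800))  # → FREQ 800 MHZ
```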
The IF conditioner
The IF conditioner consists of amplifiers and attenuators (manually controlled) that allow the user to set nominal levels into each channel of the machine. This is useful for piggybacking on the IF of another instrument for trials and concurrent observations.
Some other inputs include a reference input for the clock, a 1 PPS input for timestamping, and NTP synchronization for rough timing. The software includes a network server that can be used to integrate the GUPPI into any telescope control system that supports connection to a network port.
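A control-system client talking to that network server could be as simple as the sketch below. The command text ("STATUS") is a placeholder and the loopback echo server merely stands in for GUPPI; the real protocol is defined by the GUPPI controller software:

```python
# Minimal TCP client sketch for the GUPPI network server. The command
# text and the echo server below are placeholders, not the real protocol.
import socket
import threading

def send_command(host, port, command):
    """Send one newline-terminated command and return the reply."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(command.encode() + b"\n")
        return sock.recv(4096).decode()

# Loopback demo with a throwaway echo server standing in for GUPPI:
def _echo_once(srv):
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(4096))

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=_echo_once, args=(srv,), daemon=True).start()
reply = send_command("127.0.0.1", srv.getsockname()[1], "STATUS")
srv.close()
print(repr(reply))  # → 'STATUS\n'
```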
The configuration, software and firmware
System interconnection
The diagram here shows the interconnection of all the parts of the system.
RF and timing connections
- IF connections
- Polarization 0 is connected to ADC input connector on iBOB 0
- Polarization 1 is connected to ADC input connector on iBOB 1
- Clock
- The clock is split, and equal-length cables connect to each iADC module
- 1 PPS
- The 1 PPS signal from the timing center is connected to iBOB 0.
- The IF connections, clock, and 1 PPS are connected to the GUPPI rack via SMA and BNC connectors mounted on a custom 19" rack mount panel.
- BEE 2 clock
- The BEE2 is clocked by a signal derived from iBOB 0 by dividing the ADC clock by four. This keeps the BEE2 synchronized to the iBOB data acquisition.
- A clock adapter circuit is required to adjust the signal outputs of the iBOB to match the signal input requirements of the BEE2. That circuit is shown in this drawing: 45420S004.pdf
Networks and Digital Data Connections
It is important that all the XAUI cables to the iBOBs be short and of equal length. The 10 GbE cable can be longer, but should still be 3 meters or less. Our networking setup for this machine puts everything in a self-contained package. The iBOBs and BEE2 are not visible outside the host computer. This allows us to essentially ignore security considerations that would be present if the machine's innards were exposed to the internet. Our current configuration is this:
- 10 GbE
- BEE2 IP address is 192.168.3.8
- Host IP address is 192.168.3.7
- Port number is 8915
- 100 Mb Ethernet connections
- iBOB 0 IP address is 169.254.128.4
- iBOB 1 IP address is 169.254.128.5
- BEE IP address is 169.254.128.20
- A Gb Ethernet port is the host's connection to the outside world. For security reasons, I'll leave the real address to your imagination.
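A quick way to confirm the wiring above is to collect the addresses in one place and ping each from the host. The ping flags assume Linux, and the short host labels are my own invention:

```python
# The private GUPPI addresses from the list above, with a simple
# reachability check. The labels are placeholders of my own choosing.
import subprocess

GUPPI_HOSTS = {
    "bee2-10gbe": "192.168.3.8",
    "host-10gbe": "192.168.3.7",
    "ibob0":      "169.254.128.4",
    "ibob1":      "169.254.128.5",
    "bee2-100mb": "169.254.128.20",
}

def reachable(addr):
    """Return True if a single ping succeeds (Linux ping flags assumed)."""
    result = subprocess.run(["ping", "-c", "1", "-W", "1", addr],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

# e.g.:
#   for name, addr in GUPPI_HOSTS.items():
#       print(name, addr, reachable(addr))
```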
iBOB firmware and settings
The iBOB firmware is the NRAO file i_GUPPI_SAMP_800_A_XA.mcs. This file should be programmed into each iBOB. The Simulink file for this is blah. No other software is needed for the iBOB. The standard LWIP interface to the iBOB provides a shell for debugging and monitoring purposes. The IP addresses for each board are given above.
BEE software and firmware
Several steps are necessary to make the GUPPI BEE functional, as outlined below.
Set up a file system and runtime environment for BORPH on the control FPGA.
Here's how to set up the BEE2. I'm assuming that it has been powered up and tested, and all the relevant parts work.
First, some preliminaries. You'll need root access to your host to set up the NFS root for the BEE2, or a friendly system administrator to sit with you and help when needed.
Follow the instructions at
http://bee2.eecs.berkeley.edu/wiki/Bee2Setup.html to connect up your hardware and ensure it works. Once that's done, you need to create a bootable compact flash disk.
Install Python and other standard software needed.
TBD
Install the custom GUPPI software.
TBD
Install the .bof files to be used.
TBD
Host configuration and software
The host must be capable of sustained high-speed (well, fairly high-speed...) disk I/O. We are using hardware RAID with 12 disks in each array. Our I/O rates are sufficient for the purpose, but not nearly state of the art. We have a multi-processor, multi-core machine. We configured the 10 GbE handler to run on the CPU closest to that hardware, and the disk-writing process to run on the CPU closest to the disk controllers. These are on separate PCI Express interface chips, and so do not share bandwidth and latency. Following are some configurations that are necessary:
The following is in the rc.local file for our host:
# Reply to ARP requests only on the interface that owns the address
echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter
# Recycle TIME_WAIT sockets quickly
echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
# Shorten the FIN-WAIT-2 timeout
echo 10 > /proc/sys/net/ipv4/tcp_fin_timeout
# Raise the maximum socket send and receive buffer sizes to 16 MB
echo 16777216 > /proc/sys/net/core/wmem_max
echo 16777216 > /proc/sys/net/core/rmem_max
# TCP buffer autotuning: min, default, and max sizes
echo "4096 87380 16777216" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 87380 16777216" > /proc/sys/net/ipv4/tcp_wmem
# Disable selective acknowledgments
echo 0 > /proc/sys/net/ipv4/tcp_sack
# Don't cache TCP metrics from closed connections
echo 1 > /proc/sys/net/ipv4/tcp_no_metrics_save
# Enlarge the queue for packets awaiting processing
echo 3000 > /proc/sys/net/core/netdev_max_backlog
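A small read-back check that the rc.local settings above actually took effect:

```python
# Read back the /proc entries set in rc.local and report any that
# differ from the values listed above.
EXPECTED = {
    "/proc/sys/net/ipv4/conf/all/arp_filter":    "1",
    "/proc/sys/net/ipv4/tcp_tw_recycle":         "1",
    "/proc/sys/net/ipv4/tcp_fin_timeout":        "10",
    "/proc/sys/net/core/wmem_max":               "16777216",
    "/proc/sys/net/core/rmem_max":               "16777216",
    "/proc/sys/net/ipv4/tcp_rmem":               "4096 87380 16777216",
    "/proc/sys/net/ipv4/tcp_wmem":               "4096 87380 16777216",
    "/proc/sys/net/ipv4/tcp_sack":               "0",
    "/proc/sys/net/ipv4/tcp_no_metrics_save":    "1",
    "/proc/sys/net/core/netdev_max_backlog":     "3000",
}

def check_sysctls(expected=EXPECTED):
    """Return a dict of entries whose current value differs from expected."""
    wrong = {}
    for path, want in expected.items():
        with open(path) as f:
            got = " ".join(f.read().split())  # normalize whitespace
        if got != want:
            wrong[path] = got
    return wrong

# On a correctly configured host, check_sysctls() returns an empty dict.
```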
Other than that, you'll need your local guru to configure the network devices as per your local custom.
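The CPU pinning described above can be sketched with Linux's sched_setaffinity. The CPU numbers below are placeholders, since the right ones depend on which processor sockets the 10 GbE NIC and RAID controllers attach to on your motherboard:

```python
# Sketch of pinning a process to specific CPUs on Linux. The CPU
# numbers in the usage comment are placeholders, not a recommendation.
import os

def pin_to_cpus(cpus, pid=0):
    """Restrict a process (default: this one) to the given CPU set."""
    os.sched_setaffinity(pid, set(cpus))
    return os.sched_getaffinity(pid)

# e.g. pin the network-reader process near the NIC and the disk writer
# near the RAID controllers (placeholder CPU numbering):
#   pin_to_cpus({0, 1}, pid=net_reader_pid)
#   pin_to_cpus({6, 7}, pid=disk_writer_pid)
```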
Several steps are necessary to make the GUPPI functional with the rest of the hardware, as outlined below.
Install Python and other standard software needed.
TBD
Install the custom GUPPI data acquisition software dependencies.
Python dependencies
TBD (in the process of filling this out)
Install the custom GUPPI data acquisition software.
TBD (don't forget the boot scripts and other such details)
TBD (mostly establishing environment scripts and setting shared mem keys)
Install the custom GUPPI controller software.
TBD (installation and porting GUPPI controller to a new instrument)
TBD (mostly establishing environment scripts and setting controller defaults)