Lustre Server Hardware
The OSS consists of the following hardware; an example spec can be found here: OSS-specs.pdf
- Superlogics 4U chassis, CSE-846TQ-R1200B
- 4U rackmount chassis with 24 SAS/SATA hot-swap bays
- Supermicro X8DTH-i motherboard with 7 PCI-E 2.0 x8 slots
- 24 front disk bays plus at least 1 internal disk bay
- Redundant 1+1 1200W power supplies
- 2.26 GHz Intel Xeon E5520 quad-core processor
- 12GB ECC DDR3-1333 memory (triple-channel)
- 4 3Ware 9650SE-8LPML 8-port SATA RAID controllers (must be exactly 4 cards)
- 80GB SATA internal hard drive
- 24 2TB WD2003FYYS hard drives with 64MB cache
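The disk-to-controller arithmetic above can be sanity-checked with a few lines of Python (a sketch; the constants are copied from the spec list above, not probed from hardware):

```python
# Sanity-check the OSS disk/controller counts from the spec list.
DATA_DISKS = 24           # front hot-swap bays
CONTROLLERS = 4           # 3Ware 9650SE-8LPML cards
PORTS_PER_CONTROLLER = 8
DISK_TB = 2               # WD2003FYYS capacity

disks_per_controller = DATA_DISKS // CONTROLLERS
# Only 6 of each card's 8 ports are used, so the counts fit.
assert disks_per_controller <= PORTS_PER_CONTROLLER

print(f"{disks_per_controller} disks per controller")  # 6 disks per controller
print(f"{DATA_DISKS * DISK_TB} TB raw per OSS")        # 48 TB raw per OSS
```

Six of each card's eight ports are used, which is why balancing the wiring across controllers matters.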
A wiring schematic for the RAID controllers to the disks can be found here: oss-wiring.pdf
- Note: it is important that the wiring diagram be followed, both to simplify later disk identification and to ensure balance across the RAID controllers. Each controller should map to one column of disks in the front of the chassis.
- The left-to-right PCI bus numbering will likely not match the front left-to-right disk column numbering. The kernel may number the PCI slots as 4,6,5,7 or 5,4,6,7 or some other out-of-order sequence; make a note of the actual ordering.
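The out-of-order kernel numbering can be recorded with a short script. The `lspci` output below is a made-up sample; the real bus numbers and their order will differ per machine:

```python
# Hypothetical `lspci` output for the four 3Ware cards; on a real OSS,
# capture this with `lspci | grep 3ware` instead of using a sample.
LSPCI_SAMPLE = """\
04:00.0 RAID bus controller: 3ware Inc 9650SE SATA-II RAID PCIe
06:00.0 RAID bus controller: 3ware Inc 9650SE SATA-II RAID PCIe
05:00.0 RAID bus controller: 3ware Inc 9650SE SATA-II RAID PCIe
07:00.0 RAID bus controller: 3ware Inc 9650SE SATA-II RAID PCIe
"""

# The kernel may enumerate the cards out of order (here 04,06,05,07);
# sorting by bus number gives a stable order to note against the columns.
buses = [line.split(":")[0] for line in LSPCI_SAMPLE.splitlines()]
print("kernel order :", ",".join(buses))          # kernel order : 04,06,05,07
print("sorted order :", ",".join(sorted(buses)))  # sorted order : 04,05,06,07
```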
A picture of the inside of an OSS can be found here: OSS.jpg
The MDS hardware spec can be found here: mds_node.ps
- Note: there is an error in that spec: there should be 3, not 2, 250GB hard drives. One disk is for the OS; the 2nd and 3rd disks form a RAID1 mirror that holds the actual MDT.
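Once the MDS is built, the health of that RAID1 mirror can be checked from /proc/mdstat. This sketch parses a hypothetical snippet (md0, sdb1, and sdc1 are assumed device names; on a real MDS, read the file itself):

```python
# Hypothetical /proc/mdstat content for the two-disk MDT mirror.
MDSTAT_SAMPLE = """\
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
      244195904 blocks [2/2] [UU]
"""

# "[UU]" means both mirror members are up; "[U_]" would mean a
# degraded array with one failed member.
healthy = "raid1" in MDSTAT_SAMPLE and "[UU]" in MDSTAT_SAMPLE
print("MDT mirror healthy:", healthy)  # MDT mirror healthy: True
```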
The compute hardware spec can be found here: compute_node.ps
- Note: the 2.4GHz processors are slightly more cost-effective than the 2.9GHz parts, even though the 2.9GHz parts are faster.
Lustre Network Hardware
The primary Lustre network is InfiniBand based and consists of:
- 1 Mellanox MIS5030Q-1SFC InfiniScale IV IS5030 QDR 36-port switch
- FabricIT-EFM-0036 subnet manager license, one per IB fabric
- MHQH19B-XTR ConnectX-2 VPI InfiniBand HCA, one per MDS, OSS, and client node
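After installing the HCAs, each port should come up Active at the QDR rate. This sketch parses a hypothetical `ibstat`-style snippet (real output varies by driver and firmware; run the actual tool on each node):

```python
# Hypothetical `ibstat`-style output for one ConnectX-2 port.
IBSTAT_SAMPLE = """\
Port 1:
    State: Active
    Physical state: LinkUp
    Rate: 40
"""

# Parse "Key: Value" lines into a dict; QDR links report a 40 Gb/s rate.
lines = dict(
    l.strip().split(": ") for l in IBSTAT_SAMPLE.splitlines() if ": " in l
)
print("QDR link active at", lines["Rate"], "Gb/s")  # QDR link active at 40 Gb/s
```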
A schematic of the MDS, OSS, compute nodes, and the network can be found here: lustre-schematic.pdf