Notes from TeraGrid 2009

AmyShelton - 2009-07-14

What is TeraGrid?

From their website:

TeraGrid is an open scientific discovery infrastructure combining leadership class resources at eleven partner sites to create an integrated, persistent computational resource.

Using high-performance network connections, the TeraGrid integrates high-performance computers, data resources and tools, and high-end experimental facilities around the country. Currently, TeraGrid resources include more than a petaflop of computing capability and more than 30 petabytes of online and archival data storage, with rapid access and retrieval over high-performance networks. Researchers can also access more than 100 discipline-specific databases. With this combination of resources, the TeraGrid is the world's largest, most comprehensive distributed cyberinfrastructure for open scientific research.

TeraGrid is coordinated through the Grid Infrastructure Group (GIG) at the University of Chicago, working in partnership with the Resource Provider sites: Indiana University, the Louisiana Optical Network Initiative, National Center for Supercomputing Applications, the National Institute for Computational Sciences, Oak Ridge National Laboratory, Pittsburgh Supercomputing Center, Purdue University, San Diego Supercomputer Center, Texas Advanced Computing Center, and University of Chicago/Argonne National Laboratory, and the National Center for Atmospheric Research.

They are funded by the NSF.

Tutorials

Preparing Your Application for TeraGrid Beyond 2010

  • TeraGrid Phase II (current): nominal end date is March 2010
    • Extension proposal for the current program to span the year 4/1/2010 to 3/31/2011. Decision by NSF expected in August 2009
  • TeraGrid Phase III: eXtreme Digital Resources for Science and Engineering (XD)
    • Follow-on program to TeraGrid, providing integrating services for the machines available during 2011-2015. Planning studies are underway; proposals are to be submitted in July 2010.
  • Known, Planned and Proposed Resources for 2010:
    • Available Now, Continuing into XD: (two largest systems)
      • Ranger (TACC): Sun Constellation, 62,976 cores, 579 Tflop/s, 123 TB RAM
      • Kraken (NICS): Cray XT5, 66,048 cores, 608 Tflop/s, > 1 Pflop/s in 2009
    • Planned for 2010, Continuing into XD:
      • Large Shared Memory System at PSC
    • Planned for 2010-2011:
      • Flash & Virtual Shared Memory System at SDSC
    • Proposed for Extension until 6/30/2010:
      • Pople (PSC): Shared memory system
    • Proposed for Extension until 3/31/2011:
      • Abe (NCSA): 90 Tflop/s InfiniBand Cluster
      • Steele (Purdue): 67 Tflop/s InfiniBand Cluster
      • Lonestar (TACC): 61 Tflop/s InfiniBand Cluster
      • Queen Bee (LONI): 51 Tflop/s InfiniBand Cluster
      • Lincoln (NCSA): GPU-Based Cluster
      • Quarry (IU): Virtual service hosting
    • New Resources expected in 2011
      • Contract being competed: a data-intensive HPC system, an experimental HPC system, a pool of loosely coupled high-throughput resources, and an experimental high-performance grid test bed
      • eXtreme Digital (XD) High-Performance Remote Visualization and Data Analysis Services
      • Blue Waters @ NCSA:
        • 1 Pflop/s sustained on qualifying applications in 2011

Terascale Remote and Collaborative Visualization

  • Visualization Overview:
    • Enabling Petascale science, broad science community, open infrastructure, open partnership
    • 11 resource providers
    • TeraGrid Visualization Working Group
    • Supports batch, interactive, and collaborative visualization
    • Remote access leverages VNC and VirtualGL (!!) at TACC
    • NCAR Twister - analysis and post-processing, 3D visualizations, VirtualGL
    • Networking and security are the remote visualization bottlenecks
    • Important issue for collaborative visualization - how to deal with conflict/coordination between collaborators
  • Remote Visualization on Spur and Ranger:
    • Data analysis of large data needs large systems
    • Most visualization software takes significant time and effort to learn effectively
    • TACC EnVision - web-based visualization aimed at neophytes
    • ParaView, VisIt - other visualization packages, both scriptable in Python (see the sketch after this list)
  • TG Visualization gateway - UChicago:
    • Simplify access to advanced visualization resources and services
    • IA32 visualization cluster
  • Introduction to VAPOR:
    • Enable interactive analysis and exploration of large data sets w/o herculean computing resources
    • Give tools to scientists and not visualization experts
    • Targeted at earth and space sciences
    • Close coupling with IDL; immediate visualization of derived quantities
  • NCL:
    • tailored to geoscientific data
    • command line interface, single threaded
    • NCAR command language
    • Data formats: NetCDF, HDF4, HDF5
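
A quick illustration of the Python scripting mentioned for ParaView and VisIt above. This is my own minimal sketch rather than anything shown in the talk, and it shows only the ParaView flavor (VisIt has its own, different Python module): a batch script of roughly this shape can be run under pvpython or pvbatch for remote/offline rendering. The filename and isovalue are placeholders, and property names can vary between ParaView versions.

    # Minimal ParaView batch-rendering sketch (run with pvpython or pvbatch).
    # The input filename and the isovalue are placeholders.
    from paraview.simple import *

    data = OpenDataFile('simulation_output.vtk')   # placeholder dataset
    contour = Contour(Input=data)                  # isosurface filter
    contour.Isosurfaces = [0.5]                    # arbitrary isovalue

    Show(contour)
    ResetCamera()
    Render()
    SaveScreenshot('isosurface.png')               # image to pull back to the desktop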

Cactus

  • Tutorial
  • See slide 20 for visualization client programs
  • Cactus itself is free software, but problem-specific codes built on it often are not -> non-distributable binaries

Regular Program

An Interactive, Integrated, Instructional Pathway to the LEAD Science Gateway

  • visualization tool - IDV

Distributed MD simulations: A Case Study of Co-Scheduling on the TG and LONI

  • TeraGrid resource scheduling a useful model for VLBI dynamic scheduling? (Nicole idea)
  • # apps that use more than one distributed resource is very small (< 1%), even over the life of TeraGrid
  • Tightly-coupled simulations over multiple resources
  • Even though distributing a code across resources adds runtime overhead, the overall time to solution may still go down because queue wait time is reduced (see the sketch after this list)
  • Probability algorithms cited here, e.g. BQP (batch queue prediction), might be useful for dynamic scheduling probability prediction (Nicole idea)
  • Forward and reverse scheduling - the talk promoted reverse scheduling; Nicole thinks this is the next big thing for telescope scheduling
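
To make the time-to-solution argument above concrete, here is a toy Python sketch (mine, not from the talk). All of the numbers are invented; in practice the wait-time estimates would come from a queue-wait predictor such as BQP.

    # Toy comparison: one large job on a busy machine vs. the same work
    # split across two resources with a coupling overhead.
    # All numbers are made up for illustration only.

    def time_to_solution(wait_hours, run_hours):
        """Wall-clock time from submission to completion."""
        return wait_hours + run_hours

    # A single 4-hour job stuck behind a 12-hour queue wait.
    single = time_to_solution(wait_hours=12.0, run_hours=4.0)

    # The same work split in two: each half runs slower (a made-up 25%
    # coupling overhead), but the smaller jobs clear their queues sooner.
    overhead = 1.25
    distributed = max(
        time_to_solution(wait_hours=2.0, run_hours=2.0 * overhead),
        time_to_solution(wait_hours=3.0, run_hours=2.0 * overhead),
    )

    print(f"single site:  {single:.1f} h")       # 16.0 h
    print(f"distributed:  {distributed:.1f} h")  # 5.5 h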

Grid Enablement of Scientific Applications on TeraGrid

The Asteroseismic Modeling Portal

  • Used Django (a hedged sketch of gateway-style Django plumbing follows below)
  • Has a very nice interface
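
The notes don't say anything about the portal's internals, so purely as a hedged illustration of the kind of Django plumbing a gateway like this might use (modern Django syntax; every name below is invented, not taken from the actual portal):

    # Hypothetical gateway-style Django pieces; all names are invented.

    # models.py
    from django.db import models

    class ModelRun(models.Model):
        name = models.CharField(max_length=100)
        submitted = models.DateTimeField(auto_now_add=True)
        status = models.CharField(max_length=20, default='queued')

    # views.py
    from django.shortcuts import render
    from .models import ModelRun

    def run_list(request):
        runs = ModelRun.objects.order_by('-submitted')
        return render(request, 'runs/list.html', {'runs': runs})

    # urls.py
    from django.urls import path

    urlpatterns = [path('runs/', run_list, name='run-list')]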

Birds of a Feather

XD Requirements

  • Currently conducting a survey of cyberinfrastructure requirements and identifying key cyberinfrastructure capabilities; the group will make recommendations on investment, research, development, and adoption strategies for the XD phase.
  • There was some concern that "the usual suspects" would do most of the steering, but this BoF, along with others at other conferences and at town hall meetings, is meant to broaden input.
  • Most attendees of the BoF weren't end users, although the organizers hoped they would be; it is end users' input that is needed most.
  • Interest expressed in negotiating licensing with larger vendors for products like MATLAB (MathWorks) and Mathematica (Wolfram Research).
  • Lots of hand-wringing over archiving, or, as they prefer, data curation: Who is responsible? How much should be kept? How should it be made available?
    • Right now bits aren't being stored in a useful way - more as backups than as a resource for public access

General Notes from Conference

Next Conference - TeraGrid 2010, Pittsburgh, August 2-5, 2010
