5th Petascale Data Storage Workshop, held in conjunction with SC10
KEYNOTE SPEAKER: JOHN SHALF, LBNL/NERSC
Exascale Computing Hardware Challenges
Abstract: The past few years have seen a sea change in computer architecture that will impact every facet of our society, as every electronic device from cell phones to supercomputers will need to confront parallelism of unprecedented scale. Whereas the conventional multicore approach (2, 4, and even 8 cores) adopted by the computing industry will eventually hit a performance plateau, the highest performance per watt and per chip area is achieved using manycore technology (hundreds or even thousands of cores). However, fully unleashing the potential of the manycore approach to ensure future advances in sustained computational performance will require fundamental advances in computer architecture and programming models that are nothing short of reinventing computing.
Recent trends in the microprocessor industry have important ramifications for the design of the next generation of High Performance Computing (HPC) systems as we look beyond the petaflop scale. The need to switch to a geometric growth path in system concurrency is leading to a reconsideration of interconnect design, memory balance, and I/O system design that will have dramatic consequences for the design of future HPC applications and algorithms. The required reengineering of existing application codes will likely be as dramatic as the migration from vector HPC systems to Massively Parallel Processors (MPPs) that occurred in the early 1990s. Such comprehensive code reengineering took nearly a decade, so there are serious concerns about undertaking yet another major transition in our software infrastructure.
This presentation explores the fundamental device constraints that have led to the recent stall in CPU clock frequencies. It examines whether multicore (or manycore) is in fact a reasonable response to the underlying constraints on future IC designs. It then explores the ramifications of these changes for computer architecture, system architecture, and programming models in future HPC systems. Finally, the talk examines the power-efficiency benefits of tailoring computer designs to the problem requirements, in a process called hardware-software co-design.
Bio: John Shalf is Group Leader of the NERSC Advanced Technology Group, which works to understand the NERSC scientific computing workload and how it affects the architecture of future HPC systems. His background is in electrical engineering: in graduate school at Virginia Tech he worked on a C compiler for the SPLASH-2 FPGA-based computing system, and at Spatial Positioning Systems Inc. (now ArcSecond) he worked on embedded computer systems. John got his start in HPC at the National Center for Supercomputing Applications (NCSA) in 1994, where he provided software engineering support for a number of scientific applications groups. While working for the General Relativity Group at the Albert Einstein Institute in Potsdam, Germany, he helped develop the first implementation of the Cactus Computational Toolkit, which is used for numerical solutions to Einstein's equations of General Relativity and enables modeling of black holes, neutron stars, and boson stars. He also developed the I/O infrastructure for Cactus, including FlexIO, a high-performance self-describing file format for storing Adaptive Mesh Refinement data. John joined Berkeley Lab in 2000 and has worked in the Visualization Group, on the RAGE robot, which won an R&D 100 Award in 2001, and on various projects in collaboration with the LBL Future Technologies Group. He is a member of the DOE Exascale Steering Committee and a co-author of the landmark "View from Berkeley" paper as well as the DARPA Exascale Software Report.