by the ExMatEx team on January 4th, 2012
This document was originally published at Los Alamos as LALP-11-020.
Exascale computing presents an enormous opportunity for solving some of today’s most pressing problems, including clean energy production, nuclear reactor lifetime extension, and nuclear stockpile aging. At its core, each of these problems requires the prediction of material response to extreme environments. Our Center’s objective is to establish the interrelationship between software and hardware required for materials simulation at the exascale while developing a multiphysics simulation framework for modeling materials subjected to extreme mechanical and radiation environments. This will be accomplished via a focused effort in four primary areas:
The programming models and approaches developed to achieve this will be broadly applicable to a variety of multiscale, multiphysics applications beyond the materials science ones addressed here.
Our adaptive physics refinement technique is illustrated in Fig. 1 for a high strain-rate loading problem in which the coarser-scale model, for example a finite element method, spawns finer-scale crystal plasticity or atomistic models as needed when the standard empirical constitutive model is less accurate, for instance in the vicinity of shock fronts. Such a procedure may be carried through multiple levels of refinement, or applied to the time domain, using ab initio techniques to compute activation energies for a rate theory or kinetic Monte Carlo model.
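The refinement logic described above can be sketched in a few lines. This is a minimal illustration, not the Center's actual implementation: the model functions, the use of strain rate as the error indicator, and the threshold value are all hypothetical placeholders.

```python
# Sketch of adaptive physics refinement (hypothetical models and threshold):
# a coarse-scale loop uses a cheap empirical constitutive model where it is
# trusted, and spawns an expensive finer-scale model wherever an error
# indicator (here, high strain rate near a shock front) exceeds a threshold.

def empirical_stress(strain_rate):
    """Cheap empirical constitutive model (placeholder linear response)."""
    return 2.0 * strain_rate

def fine_scale_stress(strain_rate):
    """Stand-in for a finer-scale model (e.g. crystal plasticity or MD)."""
    return 2.0 * strain_rate + 0.5 * strain_rate ** 2  # adds nonlinearity

def element_stresses(strain_rates, shock_threshold=10.0):
    """Evaluate each element, refining only where the indicator trips."""
    stresses = []
    for rate in strain_rates:
        if rate > shock_threshold:
            # Indicator exceeded: spawn the finer-scale model for this element.
            stresses.append(fine_scale_stress(rate))
        else:
            stresses.append(empirical_stress(rate))
    return stresses
```

The same pattern nests recursively: a spawned fine-scale model can itself apply an indicator-and-refine step, giving the multiple levels of refinement described above.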
Our multi-institutional team will also leverage considerable experience in petascale simulation and vendor interaction to create flexible proxy applications (“apps”) that encompass both Gordon Bell Prize-winning single-scale applications, which achieve petascale performance using single program multiple data (SPMD) approaches, and a scale-bridging prototype that represents an asynchronous task-based multiple program multiple data (MPMD) programming model, which avoids bulk synchronous parallelism and is more likely to survive the transition to exascale. These two classes of proxy apps will target distinct hardware characteristics: node-level data structures, memory, and power management for the single-scale SPMD apps (e.g. molecular dynamics), and system-level data movement, fault management, and load balancing for the scale-bridging MPMD apps. The proxy apps are a condensation of the “real” apps: they capture the broader workflow but are built to readily explore strategies for data layout, solution algorithms, and overlaying fault and power management.
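The asynchronous task-based pattern behind the scale-bridging MPMD apps can be sketched with a standard task pool: the coarse-scale driver submits independent fine-scale tasks and consumes each result as soon as it completes, with no global barrier. All names here are illustrative assumptions; a real scale-bridging code would use a distributed task runtime rather than a single-node executor.

```python
# Sketch of the asynchronous task-based (MPMD-style) pattern: fine-scale
# tasks are launched independently and harvested in completion order,
# avoiding the bulk-synchronous barrier of a classic SPMD step.

from concurrent.futures import ThreadPoolExecutor, as_completed

def fine_scale_task(region_id):
    """Stand-in for an expensive fine-scale evaluation for one region."""
    return region_id, region_id * region_id  # placeholder "result"

def run_scale_bridging(region_ids, max_workers=4):
    """Spawn one task per refined region; gather results asynchronously."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(fine_scale_task, r) for r in region_ids]
        # as_completed yields each future when it finishes, in any order,
        # so slow tasks never stall the consumption of fast ones.
        for fut in as_completed(futures):
            region, value = fut.result()
            results[region] = value
    return results
```

Because results arrive out of order, load imbalance among fine-scale tasks does not idle the whole machine, which is the property that distinguishes this model from bulk synchronous parallelism.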
Borrowing from agile development concepts, we will establish and execute a continuous (i.e., throughout the project lifetime) algorithm/hardware modeling, evaluation, optimization, and synthesis loop (Fig. 2), including optimization for performance, memory and data movement, power, and resiliency. Proxy applications and performance models/simulators will be used to introduce a realistic domain workload into the exascale hardware and software stack development process at an early stage, and to help ensure that real scientific applications are ready when exascale platforms become available.
The proxy apps will be used to explore the breadth of algorithm space (including programming models and other implementation choices), and will be co-optimized with the emerging vendor architecture designs with respect to price, power (energy consumption), performance (time-to-solution), and resilience (robustness under node- and system-level jitter, faults, and other cross-cutting challenges) within the externally imposed constraints (“P3R” optimization).
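One way to frame the P3R trade-off is as a Pareto filter over candidate algorithm/hardware configurations scored on the four objectives. The sketch below is an assumed formulation, not the Center's method: each candidate is a tuple of (price, power, time-to-solution, failure rate), all treated as quantities to minimize.

```python
# Sketch of "P3R" co-optimization as Pareto filtering over candidate
# configurations. Objective tuples are (price, power, time, failure_rate),
# all minimized; the candidate values used in any example are invented.

def dominates(a, b):
    """True if a is no worse than b in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]
```

Externally imposed constraints (e.g. a power cap) would simply filter the candidate list before the Pareto step; the surviving front is the set of designs worth carrying into the next co-design iteration.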
We anticipate several potential areas of synergy with ASCR-supported exascale computer science research. These include, but are not limited to: