
WP3: Weather & Climate Benchmarks

Aim: WP3 will develop a hierarchy of benchmarking components representing the key elements in the workflow of weather and climate prediction systems, and will re-integrate and test code adaptations generated by the DSL toolchain. This work will establish a representative High Performance Climate and Weather benchmark (HPCW). HPCW will serve as a benchmark for (pre-)exascale applications of climate and weather codes and will facilitate communication with HPC hardware developers and vendors. The value of HPCW will be demonstrated across the range of available hardware architectures.

Approach and methodology: WP3 will define the HPCW based on representative Earth system models from which key dwarfs will have been extracted (in WP1). Moreover, WP3 will ensure reliable and automatic verification by developing routines that check the correctness of benchmark execution when different software implementations or different hardware options are explored. Several known approaches will be implemented and evaluated, following the methodological developments in WP1 and WP2 and also providing an evaluation option within the VVUQ framework of WP4. The HPCW benchmark will establish a comprehensive set of test cases and models featuring a number of representative algorithmic motifs, as well as system-sized workloads. The workload simulator Kronos (developed by the NextGenIO project) will be employed to create realistic operational scenarios for executing multiple workloads within a single benchmarking environment. This level of benchmarking is entirely new, and it will allow exploring the effects of complex resource contention that are not observable when single workloads are executed in isolation.
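
To illustrate the kind of automatic verification routine envisaged above, the sketch below (in Python) compares a newly produced benchmark output field against a trusted reference within a tolerance, so that runs generated by different software implementations or on different hardware can be flagged as pass or fail. All file names, the field layout and the tolerance values are hypothetical placeholders and do not form part of the actual HPCW specification.

    # Minimal sketch of an automatic correctness check for a benchmark run.
    # Bit-identical results cannot be expected across compilers, hardware or
    # DSL-generated code paths, so the candidate field is compared with a
    # trusted reference field within relative/absolute tolerances.
    # File names and tolerance values are hypothetical placeholders.
    import numpy as np

    def verify_field(candidate, reference, rel_tol=1e-9, abs_tol=1e-12):
        """Return True if candidate agrees with reference within tolerance."""
        if candidate.shape != reference.shape:
            return False
        # Element-wise test: |c - r| <= abs_tol + rel_tol * |r|
        return bool(np.allclose(candidate, reference, rtol=rel_tol, atol=abs_tol))

    if __name__ == "__main__":
        reference = np.load("reference_output.npy")   # trusted reference data
        candidate = np.load("candidate_output.npy")   # output of the run under test
        print("verification:", "PASS" if verify_field(candidate, reference) else "FAIL")

In practice, a check of this kind would be applied to each HPCW component after every run, so that a failed comparison becomes immediately visible within the benchmarking environment.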

Suitability of the research approach: WP3 closely involves HPC centres and the leading European infrastructure vendor to ensure the suitability of the benchmark design, as both user and vendor requirements with respect to HPC benchmark use and relevance will be addressed. In addition, the leading European models provide a comprehensive suite of current and future requirements.

Measures for Success of the Work Package / KPIs:

  • Number of selected algorithmic motifs (dwarfs) for which successful back-integration of DSL-toolchain-generated code into models can be demonstrated.
  • Number of delivered components of the HPCW benchmark.
  • Number of performance analyses (per pair benchmark code/hardware system).
  • Number of downloads of HPCW components from the portal.