Better, Faster and More Scalable: The March to Exascale
- Project length: 0h 57m
Clusters continue to scale in density, providing more nodes with more cores and more threads, all interconnected by high-speed fabric. Developing, tuning, and scaling Message Passing Interface* (MPI*) applications is now essential. According to the Exascale Computing Project, exascale supercomputers will process a quintillion (10¹⁸) calculations each second—more realistically simulating the processes involved in precision, compute-intensive usages (e.g., medicine, manufacturing, and climate). As part of the exascale race, the MPICH* source base from Argonne National Laboratory—not only a high-performance, widely portable implementation of MPI, but also the basis for the Intel® MPI Library—has been updated.
Join us to learn:
- How the Intel® MPI Library will optimize the MPICH source in 2018; and
- How the simple-to-use Intel® Application Performance Snapshot can help you quickly understand how your distributed and shared-memory applications are performing … and where to focus your optimization efforts.
Presenter: Dmitry Durnov
Dmitry Durnov is a senior software engineer on the Intel® MPI team at Intel Corporation. He is one of the lead developers, and his current focus is full-stack Intel® MPI product optimization for new Intel platforms (Intel® Xeon® Scalable processors, Intel® Xeon Phi™ processors, and Intel® Omni-Path Architecture).