- Increased complexity and performance potential of exascale computers require rethinking programming
- ExaStencils project aims to automate programming steps that currently must be performed manually
- Initial testing has been successful; others are encouraged to follow the example
Supercomputing can be difficult to fully understand. In fact, Christian Lengauer of the University of Passau believes that the average person has a hard time grasping what exascale will mean for the future of science and computing.
“Exascale is 10^18 floating-point operations per second (FLOPS),” says Lengauer. “If you had a performance of just 10^12 FLOPS and wanted to achieve it sequentially, you would have to complete one floating-point operation every picosecond, that is, every 10^(-12) seconds.”
“Now, just consider how far light travels in a vacuum in that time,” he continues. “Light travels from the earth to the moon in 1.3 seconds and to the sun in 8.3 minutes. In one picosecond, it covers just a third of a millimeter.”
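The arithmetic behind the comparison is easy to verify:

$$ t = \frac{1}{10^{12}\ \text{FLOPS}} = 10^{-12}\ \text{s}, \qquad d = c\,t \approx \left(3 \times 10^{8}\ \tfrac{\text{m}}{\text{s}}\right) \times 10^{-12}\ \text{s} = 0.3\ \text{mm}. $$

And at a full exascale rate of 10^18 FLOPS executed sequentially, each operation would get only 10^(-18) seconds, in which light travels about 0.3 nanometers, roughly the diameter of a single atom.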
The speed of an exascale computation is nearly incomprehensible, and reaching it requires serious changes in the way we currently do computing. To realize the performance an exascale computer offers, the software that drives it must address the machine’s architectural features individually.
Currently, computer scientists must manually reprogram high-performance computing (HPC) software for the specific platform being used to solve a problem. The complicated nature of HPC architectures, coupled with the high cost of running a supercomputer, makes this a poor strategy. Exascale computing’s increase in performance potential will only exacerbate the challenge. ExaStencils is one of the projects working on a solution.
Flexibility is key
As part of German Research Foundation (DFG) priority program SPPEXA, Lengauer leads a team of eight principal investigators, including Harald Köstler of Friedrich-Alexander University Erlangen-Nürnberg, to tackle this issue head-on. ExaStencils aims to automate the programming of partial differential equation solvers on structured grids.
Used in applications such as blood flow simulations and ab initio molecular dynamics, these compute-intensive algorithms repeatedly recompute each data point in a grid as a combination of the values of its neighboring points, an access pattern known as a stencil (sketched below).
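To make that pattern concrete, here is a minimal sketch in Python/NumPy (illustrative only, not ExaStencils code; the function name and grid setup are ours) of one Jacobi-style sweep of the classic five-point stencil:

```python
import numpy as np

def jacobi_sweep(u, f, h):
    """One Jacobi sweep of the 5-point stencil for -Laplace(u) = f
    on a uniform 2D grid with spacing h; only interior points update."""
    u_new = u.copy()
    u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]    # neighbors above/below
                              + u[1:-1, :-2] + u[1:-1, 2:]    # neighbors left/right
                              + h * h * f[1:-1, 1:-1])
    return u_new

# A solver applies many such sweeps over a (much larger) grid.
u = np.zeros((1025, 1025))   # initial guess, boundary fixed at 0
f = np.ones((1025, 1025))    # right-hand side
h = 1.0 / 1024               # grid spacing
for _ in range(100):
    u = jacobi_sweep(u, f, h)
```

Every point is recomputed from the same small neighborhood of values; it is this regular, repeated pattern that gives stencil codes their name.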
To complete such an experiment successfully, many parameters of the model, the numerical algorithm, and the underlying platform-dependent implementation must be set correctly and in the right order.
The problem is that changing any of these selections demands extensive manual reprogramming. Solvers running in an exascale environment will mirror the technology’s complexity, making any change to their parameters even more laborious and time-consuming.
Current supercomputers are so fast mainly because of their massive parallelism: many processors work on parts of a problem at the same time. Today’s fastest computers already have millions of computing cores, and exascale machines will be even more massively parallel, presenting a challenge for programmers, who will have to ensure that any change to their code won’t disrupt how the machine divides up large problems.
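On a single node, that parallelism means distributing the iterations of a stencil sweep across cores. Here is a hedged sketch of the same five-point sweep using Numba’s parallel loops; a real exascale code would combine such threading with message passing across many thousands of nodes:

```python
from numba import njit, prange

@njit(parallel=True)
def jacobi_sweep_parallel(u, f, h):
    """Same 5-point sweep, with rows distributed across CPU cores."""
    n, m = u.shape
    u_new = u.copy()
    for i in prange(1, n - 1):      # rows handled by different threads
        for j in range(1, m - 1):
            u_new[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                + u[i, j - 1] + u[i, j + 1]
                                + h * h * f[i, j])
    return u_new
```

Even this toy version hints at the difficulty: the result must not depend on how the rows are split among cores, and that property has to survive every change to the code.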
“We want to give application scientists a language in which they only specify major choices,” says Lengauer. “For instance, they will specify the solver, smoother, and so on, then click a few buttons for platform choices, boundary condition choices, and stencil patterns, then let the software technology derive the code that is specific to the platform and the problem they have. And in order to do this, you can't deal with a flat language.”
“Flat” is the key term here, and it has to do with levels of abstraction: a flat language offers only a single level of abstraction, forcing every detail of a solver to be expressed at the same level.
“We have four levels of abstraction,” says Lengauer. “At each successive level, you add more detail, but the way that it's added is automatically by a code generation engine based on expert domain knowledge.”
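The real engine and its layered language are far more elaborate, but a toy example conveys the idea of lowering a high-level specification into executable code. Everything below, the specification format and the generate_sweep helper, is hypothetical and for illustration only:

```python
def offset(name, d):
    """Render an index expression such as 'i', 'i+1', or 'j-1'."""
    return name if d == 0 else f"{name}{d:+d}"

def generate_sweep(spec):
    """Toy code generator: lower a high-level stencil specification
    into a C-style loop nest (hypothetical format, not ExaSlang)."""
    terms = " + ".join(
        f"{w} * u[{offset('i', di)}][{offset('j', dj)}]"
        for (di, dj), w in spec["stencil"].items()
    )
    return ("for (int i = 1; i < N - 1; ++i)\n"
            "  for (int j = 1; j < M - 1; ++j)\n"
            f"    u_new[i][j] = {terms};\n")

# The high-level "program" is just a description of choices --
# here, the stencil pattern and its weights.
jacobi_spec = {"stencil": {(-1, 0): 0.25, (1, 0): 0.25,
                           (0, -1): 0.25, (0, 1): 0.25}}
print(generate_sweep(jacobi_spec))
```

In the project’s actual pipeline, several such refinement steps run in sequence, one per level, injecting expert domain knowledge about the solver, the platform, and the boundary conditions at each one.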
Tests are promising
Although the ExaStencils team has not yet been able to test its approach on an exascale computer (none exists yet), trials on existing supercomputers have been promising.
“In ExaStencils, we are mainly concerned with achieving scalability and efficiency on the given platform,” says Köstler. “So we do scalability runs. In other projects, we would typically run applications such as blood flow or earth mantle simulations.”
There’s still a lot of work ahead, but ExaStencils is headed in the right direction. Lengauer remarks that, with the domain of elliptic partial differential equations on structured grids successfully demonstrated on the ExaStencils platform, the team is excited to see where else the approach can go. They encourage experts in other domains to attempt a similar approach and report their findings.
When exascale computing finally arrives in the next five to ten years, scientists will have access to computing speeds so fast they are nearly beyond our comprehension. But with researchers like Lengauer’s team and the other SPPEXA groups leading the way, scientists may achieve research results well beyond our current imagination.