
Another step on the stairs to exascale

Speed read
  • Scientists struggle with complex problems in quantum physics and quantum chemistry
  • Sparse linear algebra plays a vital role in solving these problems
  • A software library created to solve them is now publicly available

Every year, doctoral candidates defend their theses, hoping to earn their degrees and gain recognition for years of hard work. Within the computer science field, the German program Software for Exascale Computing (SPPEXA) honors PhD candidates whose research in HPC demonstrates “originality, significance, quality and clarity.”


The 2017 winner of the SPPEXA Best PhD Award was Moritz Kreutzer for his thesis, “Performance Engineering for Exascale-Enabled Sparse Linear Algebra Building Blocks.” Science Node interviewed Kreutzer at the ISC 2018 supercomputing conference to talk about his approach to solving major problems in high-performance computing (HPC).

Lonely matrix

Kreutzer’s thesis is part of a larger project to prepare for the advent of exascale computing. These machines, predicted to arrive around 2020, will be capable of at least one exaFLOPS, or a billion billion calculations every second. This powerful technology promises breakthroughs in many fields, allowing researchers to do things like fully simulate a human brain. Exascale is the next step in supercomputing, and organizations like SPPEXA are racing to create the software that will enable it.

More specifically, Kreutzer focuses on sparse linear algebra.

“Sparse linear algebra is a field of mathematics which is relevant for a big range of applications from scientific computing to scientific engineering,” says Kreutzer. “It usually occurs when you have sparsely or loosely coupled systems, which is the case in many applications like finite elements or quantum physics. Sparse Matrix-Vector Multiplication (SpMV) is one of the key operations that we have to solve there, which makes up the majority of run time for many problems.”

Essentially, improving SpMV will increase the speed with which computers can solve large problems. 
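To make the operation concrete, here is a minimal sketch of SpMV using the standard CSR (Compressed Sparse Row) storage format — a common textbook baseline, not Kreutzer's format — in plain Python. Only the stored nonzero entries are ever touched:

```python
def csr_spmv(values, col_idx, row_ptr, x):
    """Multiply a CSR-stored sparse matrix by a dense vector x."""
    n = len(row_ptr) - 1          # number of rows
    y = [0.0] * n
    for i in range(n):
        # row_ptr[i]..row_ptr[i+1] bounds the nonzeros of row i
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# The 3x3 matrix [[4, 0, 1], [0, 2, 0], [3, 0, 5]] in CSR form:
values  = [4.0, 1.0, 2.0, 3.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(csr_spmv(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [5.0, 2.0, 8.0]
```

The four zeros in the matrix are never stored and never multiplied — which is exactly why the memory layout of the nonzeros dominates SpMV performance.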

<strong>Sparse matrix.</strong> This matrix is “sparse,” meaning important values are dotted among a sea of zeros. Courtesy SPPEXA.

Functions like SpMV are the most prominent building blocks of sparse linear algebra, and they allow for a more efficient construction of many important algorithms. Benjamin Uekermann, program manager at SPPEXA, explains:

“With sparse matrices, the matrix-vector product (also often referred to as BLAS level 2) has a complexity of O(N), so you need around N steps to compute the product if the matrix has a size of N times N,” says Uekermann. “If the matrix was a full one, you would need O(N*N) operations. Thus, sparse operations have a tremendous algorithmic advantage compared to full operations. The classical downside is that the numerical approximation is not as good as when you apply ‘full’ discretization techniques. But normally the algorithmic advantages outweigh this disadvantage.”
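A back-of-envelope illustration of Uekermann's point, assuming a hypothetical matrix with five nonzeros per row (typical of simple stencil discretizations):

```python
N = 10_000
nnz_per_row = 5                  # assumed sparsity; stencil-like pattern

sparse_ops = N * nnz_per_row     # multiply-adds touching stored entries only
dense_ops  = N * N               # multiply-adds for a full N x N matrix

print(sparse_ops)                # 50000
print(dense_ops)                 # 100000000
print(dense_ops // sparse_ops)   # 2000 -> three orders of magnitude fewer ops
```

At exascale problem sizes, where N can reach billions, that gap is the difference between feasible and impossible.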

Kreutzer has developed a platform-agnostic storage format for high-performance general SpMV. Known as SELL-C-σ, it achieves high efficiency and performance portability. Applicable to all relevant HPC hardware deployments, in many test cases it surpassed device-specific formats and implementations.

In order to make his work accessible to a broader community, Kreutzer also developed a scalable open-source software library he calls GHOST.

“Being based on MPI+X parallelism with truly heterogeneous data-parallel execution and a holistic view of applications, algorithms, and implementations, [the GHOST library] delivers a unique feature set for highly performant sparse linear algebra on current and future supercomputers,” Kreutzer writes.

This sort of community-first mindset is exactly what accelerates scientific advancement, and it’s exciting to know that others will be able to build from Kreutzer’s work.

So why is this important?

All of Kreutzer’s efforts are directed at making certain HPC functions more efficient. SELL-C-σ, for example, outperforms device-specific storage formats for a wide variety of sparse matrices.

As part of the Equipping Sparse Solvers for Exascale (ESSEX) project, Kreutzer joins a larger body of work attempting to create an Exascale Sparse Solver Repository (ESSR) for sparse eigenvalue problems. Eventually, software such as this will run in exascale environments.

While we’re still waiting on that first exascale machine, it’s good to know that such intelligent people are working on it. What’s more, the professionals at SPPEXA were extremely impressed by Kreutzer’s work.

"All of the dissertations were really excellent,” says an anonymous reviewer partially responsible for the award decision. “I had a very tough time ranking them. In the end my reasoning for ranking Kreutzer first was the merit of the topic combined with the quality of the work. Sparse Linear Algebra building blocks for exascale computing is a topic with great applicability to a wide variety of problems."
