Summer is upon us, and that means it's time for the central event in the HPC community's calendar, ISC High Performance.
(Here's our preview of ISC 2017.)
We're fortunate to have had a chance to interview Wednesday's keynote speaker, Thomas Sterling.
Sterling is director of the Center for Research in Extreme Scale Technologies (CREST) at Indiana University and has been integral to the development of high-end computing, most notably through his groundbreaking work on the Beowulf cluster.
Over the next three weeks we'll publish installments from our interview with Sterling, beginning with this week's review of how we arrived at the present state of HPC.
Take us through a quick tour of the important developments in HPC over the last 30-40 years.
HPC has evolved from the first digital electronic computers of the 1940s (e.g., ENIAC, Colossus) to the mammoth many-core Chinese Sunway TaihuLight today, a gain in computational performance of roughly 100 trillion.
This wasn’t just a case of doing it wrong for a while and then suddenly getting it right in one of those rare 'aha' moments.
Rather, it reflected a steady progress of about 200X every decade through a succession of technology innovations matched by enhanced computer architectures, execution models, and programming interfaces.
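As a rough sanity check (our arithmetic, not Sterling's), compounding 200X per decade lines up with the overall figure: six decades of sustained 200X growth already reaches the order of 100 trillion.

```python
# Back-of-the-envelope check: compounding ~200X per decade.
gain_per_decade = 200
for decades in range(1, 8):
    total = gain_per_decade ** decades
    print(f"{decades} decades: {total:.2e}X")

# Six decades of 200X per decade gives 200**6 = 6.4e13,
# i.e. on the order of the ~100 trillion overall gain cited above.
```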
What other advances arose in the halo of these leaps in infrastructure?
Clock rates also improved as a result of these technological advances, from kilohertz initially to gigahertz today (a factor of a million).
Main memory capacities grew from a few kilobytes 60 years ago to petabytes today (conservatively, a factor of a billion).
Programming and software methods accompanied these technology and architecture advances, most notably Fortran, C, C++, MPI, OpenMP, and CUDA (to name a few), along with operating systems, mainly Unix and Linux.
In every case, HPC was driven to achieve the best possible performance on real-world applications in science, engineering, socio-economics, medicine, and security.
What must HPC overcome to reach its potential?
HPC is always reaching its potential within the bounds and limits of emerging technologies. The mid-1960s delivered 1 Megaflops; the 1970s hit 100 Megaflops; the 1980s entered the era of Gigaflops. 1 Teraflops was achieved in 1997, 1 Petaflops in 2008, and most recently 100 Petaflops last year.
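These milestones imply a remarkably steady compound growth rate. A quick back-of-the-envelope calculation (our numbers; the dates are approximate) shows what sustaining that trajectory requires each year:

```python
# Approximate flops milestones (year: flops), per the timeline above.
milestones = {1965: 1e6, 1976: 1e8, 1997: 1e12, 2008: 1e15, 2016: 1e17}

years = sorted(milestones)
start, end = years[0], years[-1]
factor = milestones[end] / milestones[start]

# Implied average annual growth over the whole span.
annual = factor ** (1 / (end - start))
print(f"{factor:.0e}X over {end - start} years ≈ {annual:.2f}X per year")
```

A factor of 10^11 over roughly half a century works out to about 1.64X per year, sustained across entirely different technology generations.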
Engineers have their sights set on 1 Exaflops between 2020 and 2024, depending on the nation deploying the machine.
Throughout this extended period of progress, all system architectures had to address the key challenges related to performance.
But beyond these engineering considerations that admittedly take up a lot of time in the field — and make up a lot of the fun — equally important challenges have to do with application programming interfaces and algorithms for user productivity.
The ability to move user applications between machines of different types, scales, and generations without extensive specialized tuning is also a challenge.
All of these issues are on the critical path to continued HPC performance gains, and they are interrelated: each must take the others into consideration.
Next week, Thomas Sterling forecasts the future of HPC:
"In the field of HPC, seven years from now has already happened."