
Time travel with Thomas Sterling

Summer is upon us, and that means it's time for the central event in the HPC community's calendar, ISC High Performance.

(Here's our preview of ISC 2017.)

We're fortunate to have had a chance to interview Wednesday's keynote speaker, Thomas Sterling. 

Sterling is director of the Center for Research in Extreme Scale Technologies (CREST) at Indiana University and has been integral to the development of high-end computing, most notably through his groundbreaking work on the Beowulf cluster.

Over the next three weeks we'll publish installments from our interview with Sterling, beginning with this week's review of how we arrived at the present state of HPC.


Take us through a quick tour of the important developments in HPC over the last 30-40 years.

HPC has evolved from the first digital electronic calculators (e.g., ENIAC, Colossus) in the 1940s to the mammoth many-core Chinese Sunway TaihuLight today, an approximate gain in computational performance of a factor of 100 trillion.

This wasn’t just a case of doing it wrong for a while and then suddenly getting it right in one of those rare 'aha' moments.

Epochs in Computation

  • 1940s — [vacuum tubes] initial von Neumann architectures, MIT Whirlwind
  • 1950s — [core memory, transistors] sequential instruction issue, e.g., IBM 7090
  • 1960s — [small scale integration DTL] execution pipeline, multiple ALU
  • 1970s — [medium scale integration (MSI) TTL/ECL, DRAM] vector, e.g., Cray-1
  • 1980s — [large scale integration CMOS] SIMD-array, PVP, e.g., CM-2, Cray Y-MP
  • 1990s — [very large scale integration] MPP, commodity clusters, e.g., Intel Touchstone Delta, Beowulf Linux Clusters
  • 2000s — [multi-core] many-SMP MPPs and clusters, e.g., IBM BG/Q
  • 2010s — [GPU] accelerator augmented MPPs and clusters, e.g., Cray XC40

Rather, it reflected steady progress of about 200X every decade through a succession of technology innovations matched by enhanced computer architectures, execution models, and programming interfaces.
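As a rough sanity check on the arithmetic (a sketch, not from the interview; the choice of six decade-scale transitions is an assumption), roughly 200X per decade compounds to about the 100-trillion-fold gain cited above:

```python
# Rough arithmetic check: ~200X gain per decade, compounded over the
# six decade transitions from roughly the 1950s to the 2010s.
per_decade_gain = 200
decade_transitions = 6  # assumed: e.g., 1950s -> 2010s

cumulative_gain = per_decade_gain ** decade_transitions
print(f"{cumulative_gain:.2e}")  # prints "6.40e+13", on the order of 100 trillion
```

At 6.4 x 10^13, six such jumps land within a factor of two of the quoted 100 trillion, which is as close as order-of-magnitude history gets.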

What other advances arose in the halo of these leaps in infrastructure?

Clock rates also improved as a result of these technological advances, from kilohertz initially to gigahertz today (a factor of a million).

Main memory capacities have grown from a few kilobytes 60 years ago to petabytes today (conservatively, a factor of a billion).

Programming and software methods accompanied these technology and architecture advances, most notably Fortran, C, C++, MPI, OpenMP, and CUDA (to name a few), along with operating systems, mainly Unix and Linux.

In every case, HPC was driven to achieve its best possible operation for real-world applications in science, engineering, socio-economics, medicine, and security.

What must HPC overcome to reach its potential?

HPC is always reaching its potential within the bounds and limits of emerging technologies. The mid-1960s delivered 1 Megaflops; the 1970s hit 100 Megaflops; the 1980s entered the era of Gigaflops. 1 Teraflops was achieved in 1997, 1 Petaflops in 2008, and most recently 100 Petaflops last year.
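Those milestones trace a strikingly steady exponential. A quick sketch (the milestone pairings and the mid-1960s date of 1965 are illustrative assumptions based on the figures quoted above) estimates the effective doubling time between them:

```python
import math

# Performance milestones quoted above: (year, flops).
# The exact year for "mid-1960s" is an assumption.
milestones = [
    (1965, 1e6),   # ~1 Megaflops
    (1997, 1e12),  # 1 Teraflops
    (2008, 1e15),  # 1 Petaflops
    (2016, 1e17),  # 100 Petaflops
]

# For each interval, solve growth = 2 ** (years / doubling_time).
for (y0, f0), (y1, f1) in zip(milestones, milestones[1:]):
    growth = f1 / f0
    doubling_years = (y1 - y0) * math.log(2) / math.log(growth)
    print(f"{y0}-{y1}: {growth:.0e}x, doubling every {doubling_years:.1f} years")
```

Every interval comes out between roughly 1 and 1.6 years per doubling, consistent with the ~200X-per-decade pace described earlier.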

Engineers have their sights set on 1 Exaflops between 2020 and 2024, depending on the nation deploying the machine.

Throughout this extended period of progress, all system architectures had to address the key challenges related to performance.

Key challenges to HPC performance

  • Starvation — Inadequacy of parallelism, either to keep all execution resources utilized or through poor balancing of work across a distributed system
  • Latency — Distance to remote resources for access of data or services
  • Overhead — Extra work needed to manage parallel physical resources and concurrent tasks
  • Contention — Delays due to sharing of logical and physical resources
  • Energy — Power costs $1 million per Megawatt-year (Worse, chips can hold more transistors than they can power without self-destructing.)
  • Reliability — Single-point failure modes multiply with system scale, yet machines are still expected to run without interruption for at least a week.
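The energy figure above is easy to verify with back-of-the-envelope arithmetic (a sketch; the ~$0.11/kWh electricity price is an assumed illustrative rate, not from the interview):

```python
# Back-of-the-envelope check on "power costs $1 million per Megawatt-year".
watts = 1e6                # 1 Megawatt of sustained draw
hours_per_year = 24 * 365  # 8,760 hours
price_per_kwh = 0.11       # assumed electricity price, USD/kWh

kwh_per_year = watts / 1000 * hours_per_year  # 8.76 million kWh
annual_cost = kwh_per_year * price_per_kwh
print(f"${annual_cost:,.0f} per Megawatt-year")  # ~ $963,600
```

At around $0.11/kWh, a sustained Megawatt indeed costs just under $1 million a year, before cooling overhead is even counted.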

But beyond these engineering considerations that admittedly take up a lot of time in the field — and make up a lot of the fun — equally important challenges have to do with application programming interfaces and algorithms for user productivity.

The ability to move user applications between machines of different types, scales, and generations without a lot of specialty tuning is also a challenge.

All of these issues lie on the critical path to further HPC performance gains, and they are interrelated: addressing any one requires taking the others into account.


Next week, Thomas Sterling forecasts the future of HPC:

"In the field of HPC, seven years from now has already happened."



Copyright © 2017 Science Node ™
