
Professor Sterling sifts the tea leaves

Here's part two of our interview with Professor Thomas Sterling. 

(You can find last week's episode here. And look here for our preview of ISC 2017.)

Sterling is director of the Center for Research in Extreme Scale Technologies (CREST) at Indiana University and has been integral to the development of high-end computing, most notably through his groundbreaking work on the Beowulf cluster.

This week he discusses what's in store for HPC, and why this all matters.


We've talked about how we arrived at the high-performance computers we now have. Now tell us where HPC is headed.

There is an oft-shared comment in the field of HPC that seven years from now has already happened. This is due to the phases of design, development, deployment, and program porting that must take place from the point of conception.

Days of future past. Reading the TOP500 list (https://www.top500.org) is like peering into the future of computer science. Next stop: Exascale. Courtesy TOP500.

We are already in that episodic sequence, which is also playing out in Japan, China, and Europe, though each with a very different approach.

It should be noted that the Chinese have held the top slot on the TOP500 list for more than two years with two different machines. They project that they will deploy the first exascale system by 2020.

The United States

The US, under the auspices of the 2015 Presidential Executive Order establishing the National Strategic Computing Initiative (NSCI), has initiated the Exascale Computing Program (ECP), which will deliver two distinct classes of exascale computer between 2022 and 2024.

The first will (probably) be integrated by IBM using the POWER9 (or a derivative) high-speed processor combined with the latest Nvidia GPU accelerators. It may be similar to the new Summit computer, implemented under the CORAL program and to be deployed within two years at Oak Ridge National Laboratory (ORNL).


The second will (probably) be integrated by Cray using the Intel Xeon Phi lightweight-core processor architecture derived from the upcoming Knights Hill chip, which will be employed in the Aurora computer, also implemented under the CORAL program, at Argonne National Laboratory (ANL) at about the same time.

Both systems will exhibit a performance of about 200 PetaFLOPS. They will advance the nation's mission-critical computing goals through the end of this decade and serve as testbeds in preparation for the delivery of systems capable of more than five times the performance of these two machines on real-world problems.

China, Europe, and Japan

The Chinese are now designing and fabricating their own processors, which gives them complete freedom to evolve a very different approach to supercomputer design, as they did with the TaihuLight system and its total of 10 million very fine-grain cores. But rumor has it that a different track is also being followed, using their own version of ARM processor cores.

The Europeans are less focused on developing their own hardware and are instead emphasizing the importance of system software, programming models, and end-purpose applications.

Examples of the latter are programs in graphene technology and in the Human Brain Project, both requiring exascale computing that they will probably acquire from other vendors.

Finally, the Japanese have a long tradition of developing extremely well engineered and balanced supercomputer systems. They are now developing the follow-on to their excellent K computer, produced by Fujitsu and deployed at RIKEN in Kobe with a delivered performance of 10 PetaFLOPS.

Their next system, in the 100 PetaFLOPS performance regime and unofficially dubbed 'Post-K', is under development at this time, with plans for their first exascale system (yes, called 'Post-Post-K') already in the works.

Speak to those who remain unconvinced about the worth of supercomputers. Where do you think their greatest significance lies?

Superlatives are dangerous to use when trying to convey reality as they tend to be dismissed as hyperbole. However, to respond to this very important question I must take this risk. And in spite of such extremes, I am likely to understate the case.

We are dealing with the synthesis of computing, communication, and information and its symbiosis with humanity to produce core knowledge and decisioning (any word can be verbed).


Hammer time. HPC is as revolutionary as the discovery of fire, says Thomas Sterling. Sterling will speak at ISC High Performance 2017 on Wednesday, June 21. Courtesy José-Manuel Benito Álvarez, CC BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5/legalcode).

To be concrete, I consider the supercomputer and its accoutrements to be no less important than the discovery of fire and the fabrication of the Acheulian hammer, both by our Homo erectus forebears.

Fire was largely responsible for drawing people and families into clans and tribes. Stone hammers enabled proto-humanity to modify its physical world, serving as a tipping point (no pun intended) leading to Homo sapiens.

The supercomputer is to information and the use of intelligence what the hammer was to the tangible and the creation of tools, artifacts, clothing, food (and probably social strife).

The supercomputer is the hammer of the mind. It explains the past, controls the present, and in some narrow cases predicts the future.


Next week, Professor Sterling reflects on being part of the HPC community and recalls some highlights from 32 years of ISC conferences:

“I should have brought a mongoose!” 


