Last week, almost 2,500 experts from industry, research, and academia gathered in the German city of Leipzig for the International Supercomputing Conference '13 (ISC'13). The event played host to the announcement of the new TOP500 list of the world's fastest supercomputers. Milky Way 2 (also known as Tianhe-2), located at the National University of Defense Technology (NUDT) in Changsha, China, was announced as the new winner. "The Milky Way 2 project lasted three years and required the work of more than 400 team members," says Kai Lu, vice dean of the School of Computer Science at NUDT. Boasting over 3 million cores and delivering almost 34 petaFLOPS on the Linpack benchmark, Milky Way 2 is nearly twice as fast as the previous winner, Titan, at Oak Ridge National Laboratory, US. Titan has now slipped to the number two spot on the list, with another US-based supercomputer, Sequoia, located at Lawrence Livermore National Laboratory, completing the top three. JUQUEEN at the Jülich Supercomputing Centre in Germany was ranked the fastest machine in Europe.
"Our projections still point towards reaching exascale systems by around 2019," says Erich Strohmaier of the US Department of Energy's Lawrence Berkeley National Laboratory, who gave an overview of the highlights of the new TOP500 list. Strohmaier, however, warns that increasing the power efficiency of supercomputing systems will continue to be a major challenge over the coming years: "If we don't start to have some new ideas about how to build supercomputers, we will truly be in trouble by the end of the decade."
"If you actually look at what people want to do, an exaflop is still not enough," says Bill Dally of NVIDIA and Stanford University, California, US, who gave a keynote speech on the future challenges of large-scale computing. "The appetite for performance is insatiable," he says, citing work across a number of research fields as evidence that performance is still the limiting factor on the exciting science that can potentially be done. "If we provide increased performance, people will always find interesting things to do with it."
"Moore's Law is alive and well" - but is that enough?
Stephen S. Pawlowski of Intel also gave a keynote speech at ISC'13, entitled 'Moore's Law 2020', in which he discussed the challenges computer scientists across the globe face in achieving exascale supercomputers by the end of the decade. "People are always saying that Moore's Law is coming to an end, but transistor dimensions will continue to scale two times every two years and improve performance, reduce power and reduce cost per transistor," he says. "Moore's Law is alive and well."
"But getting to Exascale by 2020 requires a performance improvement of two times every year," Pawlowski explains. "Key innovations were needed to keep us on track in the past: many core, wide vectors, low power cores, etc."
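A rough back-of-the-envelope sketch (not from the article) shows why doubling every year is the kind of pace Pawlowski is describing: starting from Milky Way 2's roughly 34-petaFLOPS Linpack figure, it takes about five consecutive annual doublings to cross the one-exaFLOPS mark.

```python
import math

# Linpack performance of Milky Way 2 (June 2013), in petaFLOPS (approximate)
start_pflops = 33.86
# One exaFLOPS expressed in petaFLOPS
exa_pflops = 1000.0

# Number of doublings needed to go from ~34 PFLOPS to 1 EFLOPS
doublings_needed = math.log2(exa_pflops / start_pflops)
print(f"Doublings needed: {doublings_needed:.1f}")

# Projected performance after n years of doubling every year
for year in range(1, 6):
    pflops = start_pflops * 2 ** year
    print(f"2013 + {year} years: {pflops:,.0f} PFLOPS")
```

At a doubling every year, the crossing lands around 2018; at the historical Moore's-Law cadence of a doubling every two years, the same climb would take nearly a decade, which is the gap the "key innovations" in the quote above were needed to close.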
"Going forward, scaling will be as much about material and structure innovation as dimension scaling," he adds. He cites potential future technologies, such as graphene, 3D chip stacking, nanowires, and photonics, as ways of achieving this.
Pawlowski argues for less focus on achieving a good score on the TOP500 list by optimising performance for the Linpack benchmark. Instead, he says, more focus is needed on creating machines suited to running scientific applications. "Moore's Law continues, but the formula for success is changing," concludes Pawlowski.