
Last week, iSGTW attended the International Supercomputing Conference (ISC'14) in Leipzig, Germany. The event featured a range of speakers representing a wide variety of research domains. Karlheinz Meier of Heidelberg University (Ruperto Carola), Germany, gave a keynote speech at the event. He spoke at length about Europe's Human Brain Project, of which he is co-director. Find out more about the project's work to develop brain-inspired computing technologies in our in-depth interviews with Meier and fellow co-directors Henry Markram and Richard Frackowiak.
Satoshi Matsuoka, a professor at the Global Scientific Information and Computing Center of the Tokyo Institute of Technology, Japan, and leader of the TSUBAME series of supercomputers, also gave a keynote speech. During this talk, he argued that "future extreme big-data workloads" may see supercomputer architectures converging with those of big data. Another high-profile speaker at the event from Japan was Hirofumi Tomita of the RIKEN Advanced Institute for Computational Science. He spoke about the work he and his colleagues have contributed to the Athena project on high-resolution global climate simulation and outlined his vision for the future of climate simulation as the global supercomputing community approaches the exascale era.
One of the highlights of the event was a keynote speech given by Klaus Schulten about large-scale computing in biomedicine and bioengineering. Schulten, who is director of the theoretical and computational biophysics group at the University of Illinois at Urbana-Champaign, US, gave an overview of the progress being made in simulating ever more complex biological systems. "The computer is often actually a microscope in the life sciences," says Schulten. He argues that computation is vital for tackling the significant and varied challenges that exist today in the life sciences, from understanding Alzheimer's disease to developing improved biofuels. Thanks to petascale supercomputers, we can now simulate organelles for the first time, but we need to be able to simulate living cells to solve problems like antibiotic resistance, explains Schulten. Simulating even simple cells will require much more computing power, he warns: to describe a small cell you need to simulate 110 billion atoms. "Simulating a cell will be the next big step - hopefully this will happen in my lifetime," he says. Read our recent feature article '64-million atom simulation - a new weapon against HIV' to find out how Schulten has been using the Blue Waters supercomputer to determine the chemical structure of the HIV capsid.
Research highlights recognized
During the event a number of high-profile prizes were awarded. The ISC award for the best research poster went to Truong Vinh Truong Duy of the Japan Advanced Institute of Science and Technology and the University of Tokyo, Japan. He presented work on OpenFFT, which is an open-source parallel library for computing three-dimensional 'fast Fourier transforms' (3-D FFTs).
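For readers unfamiliar with the transform at the heart of OpenFFT, the short sketch below illustrates what a 3-D FFT computes, using NumPy's serial fftn routine as a stand-in; it does not use OpenFFT's own interface. OpenFFT's contribution lies in distributing such grids efficiently across many processes, which this serial example does not attempt.

```python
# Minimal illustration of a three-dimensional fast Fourier transform (3-D FFT),
# the operation that OpenFFT parallelizes. This sketch uses NumPy's serial
# fftn/ifftn routines and is not based on OpenFFT's own API.
import numpy as np

# A small 3-D grid of sample data (e.g. a scalar field on a 32^3 mesh).
nx, ny, nz = 32, 32, 32
field = np.random.rand(nx, ny, nz)

# Forward 3-D FFT: decompose the field into its spatial frequency components.
spectrum = np.fft.fftn(field)

# The inverse transform recovers the original field (up to floating-point error).
recovered = np.fft.ifftn(spectrum)
print(np.allclose(field, recovered.real))  # True
```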

Meanwhile, both the Partnership for Advanced Computing in Europe (PRACE) and Germany's Gauss Centre for Supercomputing awarded prizes for the best research papers. The PRACE award went to a team from the Technical University of Munich and the Ludwig Maximilian University of Munich, Germany, for its work optimizing software used to simulate seismic activity in realistic three-dimensional Earth models. For a simulation under 'production conditions', the team is able to achieve a sustained performance of around 1.09 petaFLOPS on the SuperMUC supercomputer. Read more about this in an announcement posted on our site last month.
The GAUSS award went to a team from IBM Research and the Delft University of Technology in the Netherlands for their analysis of the compute, memory, and bandwidth requirements of the key algorithms to be used in the Square Kilometre Array radio telescope (SKA), which is set to begin the first phase of its construction in 2018. The work was presented by IBM Research's Erik Vermij, who used one of his slides to draw a creative analogy for the huge amount of data the SKA will produce: scaling by mass, if the 350 million photos posted to Facebook each day equate to a small rabbit, and the peak traffic on the Amsterdam Internet Exchange equates to a lion, then the hundreds of terabytes of data per second that the SKA will produce are the equivalent of a blue whale, the heaviest animal ever to have existed on Earth! From their analysis, Vermij and his colleagues concluded that simply waiting for new technology to arrive will not be enough: improvements of one to two orders of magnitude in power efficiency and compute capability are required. As such, the team argues that novel hardware and system architectures must be developed to match the needs and features of this unique project. Read more about the astronomical amounts of data the SKA is set to produce in our recent feature article 'Handling astronomical data from the world's largest telescope'.
Stagnation in supercomputing?
Another source of competition at the event was the announcement of the new TOP500 list of the world's fastest supercomputers. Last week's list held little in the way of surprises, with China's Tianhe-2 remaining the fastest supercomputer in the world by a significant margin. Titan at Oak Ridge National Laboratory in Tennessee, US, and Sequoia at Lawrence Livermore National Laboratory in California, US, remain the second and third fastest systems in the world. The Swiss National Supercomputing Centre's Piz Daint is once again Europe's fastest supercomputer and is also the most energy-efficient system in the top ten.

Perhaps the most surprising thing about last week's list was the lack of surprises - or indeed of much significant change at all. Only one new system entered the top ten (a new US government system was ranked in tenth place), and for the second consecutive list the overall growth rate of the systems' combined performance is at a historic low. Also, the lowest-ranked system on the new list sat at position 384 in the previous TOP500 list, which represents the lowest turnover rate in two decades. During the opening session of ISC'14, Erich Strohmaier, one of the co-authors of the list, explained that the age of systems on the list has increased markedly over recent years and suggested that tightening budgets in many countries since the global financial crisis began in 2008 may be the cause. "There is a big lack of money and it's not clear to me when that's going to change," he says.
Thomas Sterling of Indiana University, US, also used his keynote speech to highlight the lack of changes in the new TOP500 list. "The breaking news is that not much is new," says Sterling. "A 'typical' supercomputer is not much different from what it looked like last year." He argues that the high-performance computing community is currently at a point of inflection and says there is potential for "deep rethinking of HPC futures."