
Feature - SuperComputing 2010 comes to a close

Last week 10,000 people from around the world converged on the city of New Orleans to attend SuperComputing 2010.

Hot topics included (but were hardly limited to) climate change modeling, graphics processing units, and the rise of data-intensive science.

Climate change modeling

Keynotes, panels, and technical papers all sought to address the challenges facing climate modeling in the coming years. Some speakers suggested that exascale supercomputers enabled by graphics processing units will be necessary to run future climate models. But greater computational power on its own is not enough. A model that accurately described the Earth's climate would produce increasingly accurate results when run at increasingly high resolution - and draw on increasingly large quantities of computational power in the process, since computational cost rises with resolution. But as the panelists at "Pushing the Frontiers of Climate and Weather Models" pointed out, existing models are each tuned to a specific resolution, and become less accurate when that resolution is increased. Before the community can take advantage of the higher resolutions that greater computational resources enable, climate modelers will have to develop models that remain accurate at those resolutions.
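To see why cost climbs so steeply with resolution, consider a back-of-the-envelope sketch. The scaling rule below is an illustrative assumption, not a figure from the panel: refining the horizontal grid of an explicit model by a factor r multiplies the number of grid points by roughly r², while the CFL stability condition forces the timestep down by a factor r, so total cost grows roughly as r³.

```python
# Back-of-the-envelope cost scaling for an explicit climate model.
# Assumption (illustrative, not from the article): refining the horizontal
# grid by a factor r multiplies the grid-point count by r^2 (two horizontal
# dimensions), and the CFL condition shrinks the timestep by a factor r.

def relative_cost(refinement: float) -> float:
    """Relative compute cost of running at `refinement` times base resolution."""
    grid_points = refinement ** 2   # two horizontal dimensions
    timesteps = refinement          # CFL: smaller cells need smaller steps
    return grid_points * timesteps

for r in (1, 2, 4, 8):
    print(f"{r}x resolution -> ~{relative_cost(r):.0f}x compute cost")
```

Under this rough rule, doubling resolution costs about eight times as much computation, which is why higher-resolution runs are tied so tightly to next-generation machines.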

Graphics Processing Units

There remains a great deal of hype around the chips known as graphics processing units, or GPUs for short. Proponents argue that GPUs are the only way to reach the exascale at a reasonable cost in both money and energy. But during the panel "Toward Exascale Computing with Heterogeneous Architectures," NERSC director Kathy Yelick (standing in for John Shalf) pointed out several factors that are often overlooked. First, in some cases the benefits of GPUs are overstated because they are being compared against unoptimized CPU code. After extensive benchmarking, Yelick and her colleagues found that realistic speed-ups range from about 2.2x for memory-intensive code to 6.7x for compute-intensive code. Second, teaching developers and researchers to program in a new paradigm such as CUDA, and translating existing applications into it, is no small undertaking. Anyone who believes that something newer will come along soon may conclude that they are better off skipping CUDA entirely and waiting to transition to that future architecture when it arrives.
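There is a second reason kernel-level speed-ups overstate the payoff: only part of an application's runtime is typically GPU-accelerated. Amdahl's law makes this concrete. The sketch below plugs the panel's 2.2x and 6.7x kernel figures into the standard formula; the fraction of runtime accelerated, p, is a hypothetical parameter chosen for illustration.

```python
# Amdahl's law: if a fraction p of runtime is sped up by a factor s,
# the overall speedup is 1 / ((1 - p) + p / s).
# The 2.2 and 6.7 values are the kernel speedups cited on the panel;
# the accelerated fractions (0.5, 0.9) are hypothetical.

def overall_speedup(p: float, s: float) -> float:
    """Whole-application speedup when fraction p is accelerated by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

for s in (2.2, 6.7):
    for p in (0.5, 0.9):
        print(f"kernel {s}x, {p:.0%} accelerated -> "
              f"overall {overall_speedup(p, s):.2f}x")
```

Even a 6.7x kernel speedup yields well under 5x for the whole application unless nearly all of the runtime moves to the GPU, which underscores why porting effort, not just peak throughput, dominates the decision.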

Join the conversation

Do you have story ideas or something to contribute? Let us know!

Copyright © 2020 Science Node ™  |  Privacy Notice  |  Sitemap


