Tornadoes (twisting winds that descend from thunderheads) and derechos (straight-line winds that race ahead of a band of severe storms) are just two varieties of extreme weather whose frequency and severity are increasing. To keep up, climate models are running at ever-higher resolutions, which demand ever-greater processing speeds and new computer architectures.

As global climate models improve, they generate ever-larger volumes of data. Producing and analyzing that data can tax even the world's fastest supercomputers.
Decision makers need climate model projections at higher spatial resolutions and on more specific time scales than are currently available. Michael Wehner, a climate scientist at Lawrence Berkeley National Laboratory (LBNL) in California, US, focuses on extreme weather such as intense hurricanes, derechos, and atmospheric rivers – work in which computing challenges are a constant.
“In order to simulate these kinds of storms, you need high resolution climate models,” says Wehner. “My conclusion from a 100-kilometer model is that in the future we will see an increased number of hurricanes. But a more realistic simulation from a high-resolution, 25-kilometer model yields a significant difference; the total number of hurricanes will decrease, but the number of very intense storms will increase.”
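A rough rule of thumb explains why that finer grid is so costly: quartering the grid spacing multiplies the number of horizontal cells by sixteen, and the time step must usually shrink in proportion to the spacing, so refining a 100-kilometer model to 25 kilometers costs on the order of 64 times more to run. The sketch below illustrates that back-of-the-envelope scaling; the exponents are simplifying assumptions for illustration, not figures from Wehner's model.

```python
# Back-of-the-envelope cost scaling for refining a global climate model grid.
# Assumptions (illustrative only): cost grows with the square of horizontal
# refinement (more cells in each horizontal direction) and linearly with the
# shorter time step needed for numerical stability; vertical levels held fixed.

def relative_cost(coarse_km: float, fine_km: float) -> float:
    """Approximate cost multiplier when refining from coarse_km to fine_km."""
    refinement = coarse_km / fine_km      # e.g. 100 km -> 25 km gives 4x
    horizontal_cells = refinement ** 2    # 4x cells in each horizontal direction
    time_steps = refinement               # time step shrinks with grid spacing
    return horizontal_cells * time_steps

if __name__ == "__main__":
    print(f"100 km -> 25 km: ~{relative_cost(100, 25):.0f}x more compute")  # ~64x
```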
A dataset that would take 411 days to crunch on a single-processor computer takes just 12 days on the Hopper supercomputer, located at the US National Energy Research Scientific Computing Center (NERSC) at LBNL. Despite this advance, Wehner is looking for improved run times in the neighborhood of an hour.
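The arithmetic behind those run times is simple: Hopper's parallelism delivers roughly a 34-fold speedup over a single processor, and getting from 12 days down to about an hour would take nearly 300 times more. A minimal sketch of that calculation, using only the figures quoted above:

```python
# Speedup arithmetic for the run times quoted above (illustrative only).

serial_days = 411    # single-processor run time
hopper_days = 12     # run time on Hopper
target_hours = 1     # Wehner's goal

speedup_so_far = serial_days / hopper_days          # ~34x achieved
still_needed = hopper_days * 24 / target_hours      # ~288x more required

print(f"Speedup achieved on Hopper: ~{speedup_so_far:.0f}x")
print(f"Further speedup needed for a ~{target_hours}-hour run: ~{still_needed:.0f}x")
```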

Models that now produce petabytes (a quadrillion bytes) of data will soon produce exabytes (a quintillion bytes) – a thousand-fold increase. John Shalf, chief technology officer at NERSC, is leading an effort to achieve the exascale performance needed to keep pace. “Data explosion is occurring everywhere in the Department of Energy... genomics, experimental high-energy physics, light sources, climate … we need to rethink our computer design,” says Shalf.
What was once the most important factor in computer performance will soon become the least important. “In 2018 the cost of FLOPS – floating point operations per second – will be among the least expensive aspects of a machine, and the cost of moving data across the chip will be the most expensive. That's a perfect technology storm,” Shalf explains. In preparation, Shalf and his group are building models of future hardware to predict the performance of exascale systems – in effect, simulating the machines before they are built.
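One common way to capture the trade-off Shalf describes is a roofline-style estimate, in which a kernel's run time is bounded either by arithmetic throughput or by how quickly its data can be moved; as memory bandwidth lags behind FLOPS, the data-movement bound increasingly dominates. The sketch below is a generic illustration of that idea, not NERSC's actual co-design tooling, and its hardware parameters are hypothetical.

```python
# Roofline-style estimate: run time is limited either by floating-point
# throughput or by data movement, whichever bound is larger.
# All hardware parameters below are hypothetical, for illustration only.

def kernel_time(flops: float, bytes_moved: float,
                peak_flops: float, bandwidth: float) -> tuple[float, str]:
    """Return (seconds, limiting resource) for one kernel on one node."""
    compute_bound = flops / peak_flops        # time if arithmetic is the limit
    memory_bound = bytes_moved / bandwidth    # time if data movement is the limit
    if memory_bound >= compute_bound:
        return memory_bound, "data movement"
    return compute_bound, "floating point"

if __name__ == "__main__":
    # Hypothetical node: 10 TFLOP/s peak, 200 GB/s memory bandwidth.
    t, limit = kernel_time(flops=1e12, bytes_moved=4e11,
                           peak_flops=1e13, bandwidth=2e11)
    print(f"Estimated kernel time: {t:.2f} s, limited by {limit}")
```

In this toy case the kernel spends twenty times longer waiting on data than on arithmetic, which is exactly the imbalance Shalf expects future machines to magnify.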
“Tools of yesteryear are incapable of answering these questions on climate,” says Shalf's colleague Wes Bethel, who leads the visualization group in LBNL's Computational Research Division. But he stresses the good news: “Datasets are getting larger, but there's more interesting science and physics hidden in the data, which creates opportunities for asking more questions.”