- Complex economic models can't be calculated using conventional methods
- Piz Daint supercomputer successfully crunches the numbers using highly parallelized code
- Framework developed by researchers from University of Zürich and USI Lugano may help more economists access HPC resources
Economists have so far made little use of high-performance computing (HPC) in their research. Yet the complex interactions and hundreds of dimensions of economic models cannot be calculated using conventional methods.
Until now, researchers have instead formulated simplified models, but these often failed to answer the full question and sometimes produced false predictions.
At least, that's what Simon Scheidegger, senior assistant at the University of Zurich's Department of Banking and Finance, believes.
For example, a quantitative study of optimal monetary policy in the wake of a financial crisis is out of reach for conventional methods. But calculating high-dimensional models on a supercomputer is no easy task either.
Thankfully, recent developments in software and numerical analysis are making HPC more viable for the economic community.
The curse of dimensionality
Unlike physics models, in which time is treated as a fourth dimension alongside the three spatial dimensions, economic models have to handle ten or even a hundred times as many dimensions.
Even a "simple" model of pension insurance in a single country, which aims to depict the prosperity of its society at each year of age, clearly shows how quickly a higher dimensionality is reached: "If we assume that people will live to 80 years old on average and will be earning from the age of 20, and want to determine prosperity for each year of age, we already have 60 dimensions," explains Scheidegger.
What's more, people make their current decisions while taking into account future uncertainties. Ideally, a model should consider all these influences.
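To get a feel for why this matters, the following snippet shows how quickly a naive full tensor-product grid explodes as dimensions are added. The numbers used here (five grid points per dimension) are illustrative assumptions, not figures from the study.

```python
# Illustrative only: how a full tensor-product grid explodes with dimension.
# Five points per dimension is an assumed, very coarse resolution.

points_per_dim = 5
for dims in (3, 10, 60):
    total = points_per_dim ** dims
    print(f"{dims:>2} dimensions -> {total:.3e} grid points on a full grid")

# 60 dimensions already require ~8.7e41 points at this coarse resolution,
# far beyond any computer's memory, which is why sparse grids and model
# reduction are needed.
```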
There are two main sticking points in calculating such complex economic models. The first is the recursive approximation of the high-dimensional functions over many iteration steps. The second is that, within each of these steps, systems of non-linear equations must be solved at millions of grid points that describe the model. Calculating such a model can take hours and sometimes days of computing time, even on high-performance supercomputers like Piz Daint at the Swiss National Supercomputing Centre (CSCS).
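To illustrate what "recursively approximating policy functions" means in its simplest form, here is a minimal sketch of time iteration on a textbook one-dimensional growth model: a non-linear Euler equation is solved at every grid point, and the procedure repeats until the policy function stops changing. The model, its parameters, and the dense one-dimensional grid are stand-ins chosen for the example, not the study's actual setup, which works on adaptive sparse grids in many dimensions.

```python
# Minimal sketch of time iteration on a toy growth model (assumed setup).
import numpy as np
from scipy.optimize import brentq

alpha, beta = 0.36, 0.96                 # production curvature, discount factor
f = lambda k: k ** alpha                 # production function (full depreciation)
f_prime = lambda k: alpha * k ** (alpha - 1.0)

grid = np.linspace(0.05, 2.0, 200)       # grid of capital states
policy = 0.5 * f(grid)                   # initial guess: consume half of output

def euler_residual(c, k, policy):
    """Euler-equation residual at capital k for consumption c,
    using the current policy guess (linear interpolation) for tomorrow."""
    k_next = f(k) - c
    c_next = np.interp(k_next, grid, policy)
    return 1.0 / c - beta * f_prime(k_next) / c_next

for iteration in range(1000):            # outer recursion over policy guesses
    new_policy = np.empty_like(policy)
    for i, k in enumerate(grid):         # one non-linear equation per grid point
        upper = f(k) - 1e-8              # cannot consume more than output
        new_policy[i] = brentq(euler_residual, 1e-8, upper, args=(k, policy))
    error = np.max(np.abs(new_policy - policy))
    policy = new_policy
    if error < 1e-6:                     # stop once the policy function converges
        break

print(f"stopped after {iteration + 1} iterations, last policy update {error:.2e}")
```

In the models the researchers actually solve, the single capital state above becomes dozens of state variables, and the per-point equation systems are correspondingly larger, which is where the grid count runs into the millions.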
Nested model
To find a highly efficient solution method that can recursively calculate the economic decision-making rules (known as policy functions), the researchers combined so-called sparse grids with a high-dimensional model reduction framework.
"The resulting linear combination of sparse grids, which describe the model and thus the policy functions, are nested together like a Russian doll, and are lined up in such a way that they optimally approximate and describe the original high-dimensional space," says Scheidegger.
The beauty is that the code to calculate the individual grids and their combination is highly parallelized. Even in small models with "only" 50 dimensions, the method efficiently scales up to as many as 1,000 of Piz Daint’s nodes simultaneously.
In simple terms, the dimensional decomposition framework ensures that only the relevant grid points and dimensions that describe the model under consideration need to be calculated.
To further reduce the time needed to solve the equation systems and keep communication between the processors and the processes running on them efficient, the researchers also used a hybrid parallelization scheme combining the Message Passing Interface (MPI) with Intel Threading Building Blocks (TBB).
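A rough Python analogue of such a hybrid scheme is sketched below, with mpi4py standing in for MPI across nodes and a thread pool standing in for TBB within a node. The grid, the per-point solver, and all sizes are invented for illustration and are not the researchers' code.

```python
# Hybrid parallelization sketch: MPI ranks across nodes, threads within a rank.
from concurrent.futures import ThreadPoolExecutor
import numpy as np
from mpi4py import MPI

def solve_at_point(point):
    """Placeholder for solving the non-linear equation system at one grid point."""
    return np.sum(point ** 2)           # stand-in for the real per-point solve

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

grid_points = np.random.default_rng(42).random((10_000, 8))   # toy grid
my_points = np.array_split(grid_points, size)[rank]           # this rank's share

with ThreadPoolExecutor(max_workers=4) as pool:                # intra-node workers
    my_results = list(pool.map(solve_at_point, my_points))

all_results = comm.gather(my_results, root=0)                  # collect on rank 0
if rank == 0:
    total = sum(len(chunk) for chunk in all_results)
    print(f"solved {total} grid points across {size} MPI ranks")
```

Run with, for example, `mpiexec -n 4 python hybrid_sketch.py` (an assumed file name): each rank then works through its own share of grid points with its own pool of threads, mirroring the division of labour between MPI and TBB.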
Scheidegger and his colleagues have thus developed a method that properly accounts for the heterogeneity in these models and avoids oversimplification.
It also works generically, so it can be applied to a variety of issues, from public finance models such as state pensions to central bank models. "As is the case in computer-aided physics or chemistry, the new method should enable models in economics to be solved fundamentally, that is, ab initio, and then confronted with real-world data and adapted as necessary," says Scheidegger.
In many scientific fields, researchers now commonly rely upon high performance computing (HPC), in addition to theory and experimentation, to ask questions that would otherwise be impossible to address. Yet, as of today, economics rarely takes advantage of these computing resources.
Researchers at the University of Zürich and USI Lugano hope to change that by developing a framework that can be disseminated in forms that make it more easily accessible to users.