Fractals are famous for looking the same at all magnifications - as you zoom into the picture, it remains the same at smaller and smaller scales. However, other systems in nature, such as the blood vessels of the brain, are not so self-similar: delve into these at different scales and the picture can change dramatically. For example, to model a brain aneurysm, clinicians can image the geometry of the swelling in the cerebral blood vessel in fine detail using high-performance computing, move up a scale to set that in context within the brain's circulatory network (to work out how to bypass the affected area), and finally look at the position of the aneurysm and its effect on the whole body. Solving these sorts of problems was the focus of a session on multiscale modeling using e-Infrastructures, led by Andrew Emerson of Cineca and the MMM@HPC project at the recent eChallenges meeting in Lisbon.
Nature tends to present us with a continuum of length and time scales - in practice, modeling work tends to focus on a discrete set of scales, because researchers need different algorithms and application codes for each scale and physical model, and each of these scales differently. Due to the size of the datasets and the complexity involved, multiscale modeling often takes researchers into the realm of petascale computing, i.e. high-performance computers capable of one quadrillion floating point operations per second. These systems have many thousands or even hundreds of thousands of cores, so networking, power consumption and heat dissipation become important engineering constraints. Typically, petascale computers use a large number of low-power cores, or accelerators that combine high performance with low power consumption (e.g. graphics processing units), or a hybrid of both, so as to achieve the maximum number of flops per watt.
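To get a feel for why flops per watt is the deciding constraint at this scale, here is a back-of-the-envelope sketch in Python. The efficiency figures (in GFLOPS per watt) are purely illustrative assumptions, not measurements of any real machine.

```python
# Back-of-the-envelope power estimate for a petascale machine.
# The GFLOPS-per-watt figures below are illustrative assumptions only.

PETAFLOPS = 1e15  # one quadrillion floating point operations per second

def power_megawatts(target_flops, gflops_per_watt):
    """Power (MW) needed to sustain target_flops at a given efficiency."""
    watts = target_flops / (gflops_per_watt * 1e9)
    return watts / 1e6

# Hypothetical designs: conventional cores vs. an accelerator-heavy hybrid.
cpu_only = power_megawatts(PETAFLOPS, 0.5)    # assumed 0.5 GFLOPS/W
gpu_hybrid = power_megawatts(PETAFLOPS, 2.0)  # assumed 2.0 GFLOPS/W

print(f"CPU-only: {cpu_only:.1f} MW, GPU hybrid: {gpu_hybrid:.1f} MW")
```

Even with these made-up numbers, a fourfold gain in flops per watt cuts the machine's power draw from megawatts to a fraction of that, which is why low-power cores and accelerators dominate petascale designs.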
The problem is that most codes stop scaling as communication between the increasing number of cores becomes the rate-determining step. You also need to make sure that the codes you are using scale as you increase the problem size and the number of cores: to book time on a supercomputer, there is often a minimum level of scalability that you have to demonstrate.
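This communication bottleneck can be sketched with a toy strong-scaling model: compute time shrinks as cores are added, but communication cost grows with the core count. The cost model and its constants are illustrative assumptions, not measurements from any real application code.

```python
# Toy strong-scaling model: fixed problem size, growing core count p.
# Compute time shrinks as 1/p, while communication cost grows as
# log2(p), as in a tree-based reduction. Constants are assumptions.
import math

def parallel_efficiency(p, t_compute=1.0, t_comm=0.001):
    """Speedup over one core divided by p, under the toy cost model."""
    time_p = t_compute / p + t_comm * math.log2(p)
    return (t_compute / time_p) / p

for cores in [1, 64, 4096, 262144]:
    print(f"{cores:>7} cores: efficiency {parallel_efficiency(cores):.3f}")
```

Running the loop shows efficiency near 1.0 on a few cores but collapsing at hundreds of thousands, once the logarithmic communication term dwarfs the shrinking compute term - the point at which a code "stops scaling".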
The problems get even greater with exascale, which could lead to computers with the same power consumption as a small town. Projects like Mont-Blanc are focusing on using ARM processors to reduce power consumption, with the aim of using 30 times less power than current computers. The European Exascale Software Initiative (EESI) is bringing together industry and academia to drive the transition from petascale to exascale, as is the Dynamical Exascale Entry Platform (DEEP) project.
Clearly, this is an area where the multiscale modeling community will be keeping a close eye on developments. The case for exascale computing, rather than simply more petascale machines, still needs to be made, and it poses interesting questions for the community - will exascale merely give more capability to tackle larger data sets for similar problems? Or is there a whole new set of questions that exascale could tackle, giving top-to-bottom answers to multiscale problems we haven't even thought of yet? Where this might take us in the future is not yet clear.
This article was originally posted on the Gridcast blog.