
Understanding risk with 'environmental computing'

Climate change adaptation, urbanization management, and disaster risk reduction would all greatly benefit from a technical and conceptual framework that would enable the production of detailed, comprehensive simulation tools based on well-understood and trusted components. Image courtesy Norman Kuring, NASA/GSFC/Suomi NPP.

A growing number of initiatives, organizations, and application groups form an emerging field that we call 'environmental computing'. The field draws on established disciplines and research topics, such as traditional numerical modeling techniques, high-performance computing, and big data approaches. Applying these tools to support planning, decision-making, and operations in a broad range of 'societal-resilience issues' can help to uncover common issues, challenges, and opportunities.

There are several well-established cross-disciplinary fields merging computing with application domains into distinct specialties, such as medical informatics, which has grown into a field with thousands of practitioners and a worldwide multi-billion-euro marketplace for products and services. Practically any activity related to medical informatics (be it academic research or the marketing of products) needs to seamlessly integrate knowledge of IT and medical technology, as well as healthcare practices and regulations. Today, the need for this 'amalgamated foundation' of medical informatics may seem obvious. However, as late as the early 1980s (almost three decades after the initial efforts to bring computing into hospitals), the identity of the field was still very much in flux, and discussions about its exact scope, name, and articulated vision and mission continued for decades.

Where are we today with environmental computing?

Modeling, predicting, and explaining natural phenomena are among the earliest uses of computers. Computerized weather predictions have formed the basis of meteorological services for nearly 60 years, and seismology has relied heavily on computing for almost as long - whether for risk assessment or for (oil and gas) surveying purposes.

However, efforts to integrate different models into an analysis-tool framework (such as the ones developed by the DRIHM and MAPPER projects) that can encompass different phenomena and produce estimates of their likely combined impact have only taken off more recently. This work presents several challenges:

  • Technical: combining different models so that the whole multi-model system can be executed efficiently is far from trivial.
  • Procedural: making sure that the software components, configuration files, and data sources are in a coherent state requires a completely new level of cooperation and collaboration between researchers from different fields.
  • Semantic: describing consistently and concisely the (combined) accuracy and limitations of the predictive ability of these multi-model systems is crucial for their broader use.

At the moment, these challenges are addressed only in a relatively fragmented manner.
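The technical and semantic challenges above can be illustrated with a deliberately tiny sketch: one toy model's output feeds another toy model, and an explicit interface check keeps the coupling coherent. The model names, field names, and units here are purely hypothetical, invented for illustration, and do not come from any real environmental-computing framework such as DRIHM or MAPPER.

```python
# A toy sketch of multi-model coupling: a rainfall "model" feeds a
# runoff "model", with a metadata check on the shared interface.
# Everything here (names, fields, units) is illustrative only.

from dataclasses import dataclass

@dataclass
class Field:
    name: str       # variable the model produces, e.g. "precipitation"
    units: str      # units the downstream model must agree on
    values: list    # the simulated time series

def rainfall_model(hours: int) -> Field:
    # Stand-in for a meteorological model: a constant 2 mm/h of rain.
    return Field("precipitation", "mm/h", [2.0] * hours)

def runoff_model(rain: Field, runoff_coeff: float = 0.4) -> Field:
    # Stand-in for a hydrological model: a fixed fraction of rainfall
    # becomes surface runoff. The assertion is the kind of interface
    # consistency check a multi-model framework must automate.
    assert rain.name == "precipitation" and rain.units == "mm/h", \
        "incompatible upstream field: the coupling would silently misbehave"
    return Field("runoff", "mm/h", [v * runoff_coeff for v in rain.values])

rain = rainfall_model(hours=3)
flow = runoff_model(rain)
print(flow.values)  # 2.0 mm/h of rain times 0.4 gives 0.8 mm/h of runoff
```

Even in this toy form, the three challenge categories show up: executing the chain efficiently (technical), keeping the interface contract in sync as both models evolve (procedural), and describing what the combined 0.4 runoff coefficient does to the accuracy of the final estimate (semantic).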

Do we really need to launch a new discipline?

If the only goal is research, the answer could well be no. De facto standards for addressing the technical, procedural, and semantic challenges would eventually emerge from the different communities building more and more elaborate multi-model systems. However, climate change adaptation, urbanization management, and disaster risk reduction would all greatly benefit from a technical and conceptual framework that would enable the production of detailed, comprehensive simulation tools based on well-understood and trusted components.

As an example, the United Nations Office for Disaster Risk Reduction (UNISDR) currently produces its Global Assessment Report (GAR), which presents a global analysis of the progress made in reducing disaster risks, on a biennial basis. "Turning GAR into a more interactive monitoring tool is not just a question of additional computing and storage capacity - a more uniform approach to model development and integration is needed," notes Julio Serje, the coordinator of national disaster loss databases and online resources at UNISDR.

Reaching this goal would require steps forward similar to those that have brought medical research results into clinical care: insurance companies accept a cast being put on a broken foot without demanding a source-code review of the patient-record software with each reimbursement claim. Reaching a similar level of trust in environmental computing might make it possible to have meaningful discussions (and perhaps even reach agreements) about how to divide the burden of action on greenhouse gas emissions or flood risks - without having to reargue the validity, applicability, and accuracy of the computer simulations in each case.

What next?

Environmental computing is still in its early stages, with the exact scope, approach, and even the name still open for discussion. However, concrete activities are already contributing to the foundations of the field:

Three supercomputing centers with extensive backgrounds in environmental computing - located in Finland, Germany, and the Netherlands - have recently joined forces with a large number of environmental research groups to form the EnCompAS Centre of Excellence consortium.


Copyright © 2015 Science Node ™
