From 2011 to 2012

What happened in 2011 that will be significant for the world of scientific computing? How will 2012 differ? To find out, we consulted experts from around the world.

Bob Jones

Head of CERN openlab, CERN

Expertise:
Grids, policy, and real-time systems.

What happened in 2011 that you believe was significant?
Large-scale experimentation with cloud systems by the research community, which showed that the technology is suitable for a subset of workloads and that commercial cloud services are relevant, provided legal issues and financial costs are addressed.

How do you expect 2012 to differ from 2011?
The "community and sharing" issues of clouds will become more prominent such as how can I re-use my existing identity on cloud systems; how can I share my data and results with my peers in a secure manner etc.

Steven Newhouse

Director, European Grid Infrastructure (EGI.eu)

Expertise:
Federated Infrastructures, Grids, Clouds, Middleware

What happened in 2011 that you believe was significant?
The global economic situation is forcing real decisions to be made about value for money in e-Infrastructure. The last decade has provided generous investment to kick-start this community; we now need to identify which activities are proving most effective and continue them, which need to be stopped, and which need more investment. One area that the e-Infrastructure community has yet to fully embrace is the use of the commercial cloud. Even if it is not embraced for full-time operation, it can provide valuable lessons in operating model and philosophy that we should adopt wherever it makes sense to do so. I expect the knowledge transfer from this area into the e-Infrastructure community to have a lot of impact in future years.

How do you expect 2012 to differ from 2011?
For e-Infrastructures it will be a road to consolidation. The last decade has been an exciting arena of experimentation. I expect 2012 to see an increased effort in providing a consolidated e-Infrastructure platform to enable a new era of experimentation and development across different communities. This could include commercial cloud resources alongside publicly funded cloud resources, and a platform and mechanisms that allow user communities to more easily deploy the software and services they need to support their activities on the general e-Infrastructure. The drive we will see to federate cloud resources in many ways matches the drive we saw a decade ago to federate access to batch systems, something we have now come to know as the grid!

Ruth Pordes

Open Science Grid Executive Director
Fermilab Computing Division Associate Head for Grids and Outreach

Expertise:
Leadership and coordination of multi-disciplinary science and engineering teams.

What happened in 2011 that you believe was significant?
What is significant to me in all my roles is the incredible success of the global computing for the LHC experiments, the rapid and important scientific results that are being obtained that depend on this computing, and the continued success in the sharing of this computing with other sciences and communities.

How do you expect 2012 to differ from 2011?
Continued focus on getting to grips with the different needs, use scenarios, scales, opportunities, visions, and concepts surrounding data. (I believe we are still not grappling sufficiently with the synthesis, principles, and understandings needed to acquire and apply the right technologies and methods to the panoply of needs.)

Anything else you'd like to share?
The importance of a mutually understood vocabulary, an agreed-upon understanding of the conceptual landscape, and shared principles and practices of software and computational engineering increases more than linearly with the continuing uptick in multi-national science and research.

Jay Boisseau

Director, Texas Advanced Computing Center, The University of Texas at Austin
Director of User Services and Co-PI, US National Science Foundation's XSEDE

Expertise:
Primarily in HPC systems architectures and user support, but also portals, visualization and data-intensive computing.

What happened in 2011 that you believe was significant?
I think the two most exciting things in advanced computing last year were:

  • The acceptance (beyond hype) of accelerators as viable in production HPC systems for reasons of raw performance, performance per dollar, and performance per watt on certain applications. There is much work going on by NVIDIA and PGI with respect to CUDA, and by the OpenMP ARB, the new OpenACC effort, and the OpenCL community with respect to standards-based approaches that are more general across accelerator types (a minimal directive-based sketch follows this list). There is general agreement that hybrid computing is going to be a substantial fraction (though not all) of petascale computing beyond 10PF, and that exascale computing almost certainly has to incorporate design elements of the massively parallel accelerators we're using in larger petascale systems now.
  • The 'stabilization' and positioning for growth of Lustre. The open-source solution for large-scale parallel file systems was on shaky ground for a while after Oracle's acquisition of Sun, but companies like WhamCloud and Xyratex, as well as community efforts in the U.S. and Europe, seem to have not only stabilized but enhanced the future of Lustre. Lustre's future has never looked brighter, though there is less certainty about exactly how to scale it to exascale.
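
By way of illustration, here is a minimal sketch of the directive-based approach Boisseau alludes to: a plain C loop annotated with an OpenACC pragma. An OpenACC-capable compiler (such as PGI's) can offload the loop to a GPU and manage the host/device data movement, while a compiler without OpenACC support simply ignores the pragma and runs the loop on the CPU; exact clause support and compiler flags vary by implementation.

    #include <stdio.h>
    #include <stdlib.h>

    /* Vector add in the directive-based style: the pragma asks an
     * OpenACC-capable compiler to offload the loop to an accelerator
     * and to handle the copies between host and device memory. */
    void vec_add(const float *a, const float *b, float *c, int n) {
    #pragma acc kernels copyin(a[0:n], b[0:n]) copyout(c[0:n])
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }

    int main(void) {
        enum { N = 1 << 20 };
        float *a = malloc(N * sizeof *a);
        float *b = malloc(N * sizeof *b);
        float *c = malloc(N * sizeof *c);
        for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }
        vec_add(a, b, c, N);
        printf("c[42] = %.1f\n", c[42]); /* expect 126.0 */
        free(a); free(b); free(c);
        return 0;
    }

The appeal over an explicit CUDA kernel is that the annotated source remains ordinary, portable C; the directive is advice to the compiler rather than a rewrite of the algorithm.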

How do you expect 2012 to differ from 2011?
SC'11 showed that the processor variety in HPC has become rich again: Intel's and AMD's different approaches to x86 (in Sandy Bridge's and Interlagos' internal designs, respectively) are complemented by GPUs from NVIDIA and AMD; Power7 and PowerPC (BlueGene/Q) processors from IBM; MIPS- and SPARC-derived processors making a comeback; FPGAs in use by Convey and others; China's first two processor designs; and experimentation with TI's DSPs, ARM, and other processor architectures.

The trio of requirements, namely raw performance, performance per dollar for acquisition, and now performance per dollar for operations (performance per watt), has created an interesting parameter space for HPC systems designers! In 2012, Intel, the biggest microprocessor maker of them all, will introduce a major new line: MIC. MIC will be used initially in hybrid systems, but offers a very comprehensive set of programming models as well as tremendous performance potential. And I expect we'll hear more about China's offerings, and more about ARM64, and wouldn't be surprised to hear of at least one or two more very interesting processor architectures emerging for evaluation (if not use) in HPC systems.
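
For flavor, a minimal sketch of the offload style previewed for MIC, assuming Intel's Language Extensions for Offload as documented for early MIC compilers (clause spellings and fallback behavior varied by release): standard C with an offload pragma moving data explicitly, and ordinary OpenMP inside the offloaded region.

    #include <stdio.h>

    #define N (1 << 20)

    static float a[N];

    int main(void) {
        float sum = 0.0f;
        for (int i = 0; i < N; i++) a[i] = 1.0f;

        /* Ship the array to the coprocessor, run the block there with
         * ordinary OpenMP, and copy the scalar result back. A compiler
         * without offload support ignores the pragma, so the same code
         * runs entirely on the host. */
    #pragma offload target(mic) in(a) inout(sum)
        {
    #pragma omp parallel for reduction(+:sum)
            for (int i = 0; i < N; i++)
                sum += a[i];
        }

        printf("sum = %.1f (expected %d)\n", sum, N);
        return 0;
    }

The point of the example is the "comprehensive set of programming models" Boisseau mentions: the offloaded region is plain OpenMP, not a new kernel language.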

The other big thing in 2012 will be data-intensive computing, but so much has already been written on that, and so little is carved in stone that I'll leave that for others to discuss early in 2012.

