
New cloud computing testbeds to spur innovation

Image courtesy www.perspecsys.com.

Two new cloud computing testbeds, Chameleon and CloudLab, will enable the academic research community to develop and experiment with novel cloud architectures and pursue new cloud computing applications. The US National Science Foundation (NSF) announced the two $10 million projects in August as part of its CISE Research Infrastructure: Mid-Scale Infrastructure - NSFCloud program.

While most of the original concepts for cloud computing originated within the academic research community, industry has driven much of the design and architecture as clouds have grown in popularity. The NSF awards complement industry's efforts, and will enable academic researchers to experiment and advance architectures that can support a new generation of innovative applications, including real-time and safety-critical applications like those used in medical devices, power grids, and transportation systems.

"Just as NSFNet laid some of the foundations for the current Internet, we expect that the NSFCloud program will revolutionize the science and engineering for cloud computing," says Suzi Iacono, acting head of NSF's Directorate for Computer and Information Science and Engineering (CISE). "We are proud to announce support for these two new projects, which build upon existing NSF investments in the Global Environment for Network Innovations (GENI) testbed, and promise to provide unique and compelling research opportunities that would otherwise not be available to the academic community."

Chameleon

Chameleon will support the design, deployment, and initial operation of a large-scale, reconfigurable experimental environment for cloud research, co-located at the University of Chicago in Illinois, US, and The University of Texas at Austin, US.

Researchers will be able to configure slices of Chameleon as custom clouds, using pre-defined or custom software to test the efficiency and usability of different cloud architectures on problems ranging from machine learning and adaptive operating systems to climate simulations and flood prediction. The 650-node, five-petabyte testbed will allow 'bare-metal' access, an alternative to the virtualization technologies currently used to share cloud hardware, enabling experimentation with new virtualization technologies that could improve reliability, security, and performance.

"Like its namesake, the Chameleon testbed will be able to adapt itself to a wide range of experimental needs, from bare-metal reconfiguration to support for ready-made clouds," says Kate Keahey, a scientist at the Computation Institute at the University of Chicago and principal investigator for Chameleon. "Furthermore, users will be able to run those experiments on a large scale, which is critical for big data and big compute research. But we also want to go beyond the facility and create a community where researchers will be able to discuss new ideas, share solutions that others can build on, or contribute traces and workloads representative of real-life cloud usage."

What makes the testbed unique is Chameleon's support for heterogeneous computer architectures, including low-power processors, graphics processing units (GPUs), and field-programmable gate arrays (FPGAs), along with a variety of network interconnects and storage devices. Researchers can mix and match hardware, software, and networking components and test their performance. This flexibility is expected to benefit many scientific communities, including the growing field of cyber-physical systems, which integrates computation into physical infrastructure. The research team plans to add new capabilities in response to community demand or when innovative new products are released.

Other US partners on the Chameleon project (and their primary areas of expertise) include The Ohio State University (high performance interconnects), Northwestern University (networking), and The University of Texas at San Antonio (outreach).

CloudLab

The second NSFCloud project supports the development of CloudLab, a large-scale distributed infrastructure based at the University of Utah, US, Clemson University in South Carolina, US, and the University of Wisconsin-Madison, US. Each site will have unique hardware, architecture, and storage features, and will connect to the others via 100-gigabit-per-second connections on Internet2's advanced platform, supporting OpenFlow and other software-defined networking technologies.

"Today's clouds are designed with a specific set of technologies baked in, meaning some kinds of applications work well in the cloud, and some don't," says Robert Ricci, a research assistant professor of computer science at the University of Utah and principal investigator of CloudLab. "CloudLab will be a facility where researchers can build their own clouds and experiment with new ideas with complete control, visibility, and scientific fidelity. CloudLab will help researchers develop clouds that enable new applications with direct benefit to the public in areas of national priority such as real-time disaster response or the security of private data like medical records."

In total, CloudLab will provide approximately 15,000 cores and more than one petabyte of storage across its three data centers. Each center will comprise different hardware, enabling additional experimentation. The CloudLab team is partnering with HP, Cisco, and Dell to provide diverse, cutting-edge research platforms. Like Chameleon, CloudLab will feature bare-metal access. Over its lifetime, CloudLab is expected to run dozens of simultaneous virtual experiments and to support thousands of researchers.

Since 2000, Ricci and the CloudLab team have successfully operated Emulab, providing a network testbed where researchers can develop, debug, and evaluate their systems in a wide range of environments. The Chameleon team includes several members of FutureGrid, an NSF-supported testbed that enables researchers to experiment in grid and cloud security.

The goal of the NSFCloud program and the two new project awards is to advance the field of cloud computing broadly. In the second phase of the program, each cloud resource will become fully staffed and operational, fulfilling the mission of serving as an extensively used testbed by the research community.


Copyright © 2018 Science Node ™

