
Flexible cloud: Harnessing heterogeneous hardware resources

Speed read
  • An ever greater variety of applications now makes use of cloud computing.
  • Conventional commodity-based data centers are not always able to meet the increasing range of performance requirements.
  • The EC-funded HARNESS project seeks to devise cloud platform technologies for exploiting specialized resources.
  • The three-year project is now coming to an end, with the project's industrial partners already benefiting from the technology created.
  • Using the HARNESS technology, cloud tenants can describe their performance needs and constraints for their applications, so that the cloud operator can decide how and when to allocate the appropriate mix of commodity and specialized resources.
  • Alexander Wolf will present the project's outcomes at the ISC Cloud and Big Data event.

Alexander Wolf, a professor of computing at Imperial College London, UK, is set to speak about the HARNESS project at the ISC Cloud and Big Data event in Frankfurt, Germany, later this month. He tells Science Node about the work he and his project colleagues have been doing to make cloud computing more flexible…

When the HARNESS project was conceived more than three years ago, the dominant approach to constructing cloud data centers was based on the assembly of large numbers of relatively inexpensive personal computers, interconnected by standard IP routers, and supported by stock disk drives. This is consistent with what is still the current business model for public cloud computing, which uses commodity computation, communication, and storage resources to provide low-cost application hosting.

With an ever greater variety of applications now making use of cloud computing, such as those for online machine learning and scientific computing, we are finding that the conventional commodity-based data center struggles to meet their very different performance requirements. Those requirements seem to be satisfied only with the careful use of specialized resources in concert with the already available commodity resources. This, of course, leads cloud providers to face the daunting new challenge of managing a heterogeneous collection of resources in the cloud data center.

Our goal in creating the HARNESS project was to devise cloud platform technologies for exploiting specialized resources. However, we wanted to do this in a way that is still consistent with today’s platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS) layers. That is then the challenge facing the HARNESS project: how can we seamlessly integrate non-conventional resources into the cloud data center stack for the benefit of a much wider range of applications than is possible today?

An important point is that HARNESS is beneficial not only to applications but also to cloud operators, since it opens their market to wholly new classes of applications.

The primary tool for managing applications in the cloud is virtualization, which permits automated scaling of applications by a simple increase or decrease in the number of resources (e.g., CPUs) used during execution in response to changes in demand or load. As it turns out, however, this tool works well only for particular kinds of applications: those that process large amounts of bulk data in parallel without stringent latency constraints, or those that serve large numbers of interactive users whose transactions are stateless or at least short in duration.
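
For intuition, this conventional 'scale by number' logic can be caricatured in a few lines of Python; the function and thresholds below are hypothetical, for illustration only, and are not taken from any cloud platform:

```python
# Minimal sketch of conventional horizontal scaling: adjust only the
# *number* of identical commodity resources in response to load.
# Names and thresholds here are hypothetical, for illustration only.

def scale_replicas(current: int, cpu_utilization: float,
                   lo: int = 1, hi: int = 32) -> int:
    """Return a new replica count for a stateless service."""
    if cpu_utilization > 0.80:       # overloaded: double capacity
        return min(current * 2, hi)
    if cpu_utilization < 0.20:       # underused: halve capacity
        return max(current // 2, lo)
    return current                   # load is in the comfortable band

print(scale_replicas(4, 0.95))  # -> 8
print(scale_replicas(4, 0.10))  # -> 2
```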

Rather than just being able to scale an application by adjusting the number of resources, HARNESS enables an application to scale by adjusting the kinds of resources that are utilized. We have considered several new kinds of resources, including GPUs (graphics-processing units), FPGAs (field-programmable gate arrays), network routers based on ASICs (application-specific integrated circuits), GPNMs (general-purpose network middleboxes), and SSDs (solid-state drives).
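
To make the contrast concrete, here is a small illustrative sketch with invented names and numbers (it does not reproduce any HARNESS interface): the same application component is offered in several variants, each backed by a different kind of resource, and scaling 'by kind' means switching variants rather than adding replicas.

```python
# Hypothetical sketch, not the HARNESS data model: one application
# component offered in variants backed by different kinds of resources.
from dataclasses import dataclass

@dataclass
class Variant:
    resource_kind: str    # e.g. "cpu", "gpu", or "fpga"
    est_runtime_s: float  # predicted runtime for a reference workload
    cost_per_hour: float  # operator's hourly price for this resource kind

# Invented numbers for a matrix-multiply component, for illustration only.
variants = [
    Variant("cpu",  est_runtime_s=120.0, cost_per_hour=0.10),
    Variant("gpu",  est_runtime_s=15.0,  cost_per_hour=0.90),
    Variant("fpga", est_runtime_s=25.0,  cost_per_hour=0.60),
]

# Scaling "by kind": switch to a different variant instead of adding replicas.
fastest = min(variants, key=lambda v: v.est_runtime_s)
print(f"Fastest resource kind: {fastest.resource_kind}")  # -> gpu
```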

However, these new resources have quite different programming models compared to their conventional counterparts. Moreover, our understanding of how to predict application performance is relatively immature. Finally, there is little experience in virtualizing these resources to the same degree that, say, we have found ways to virtualize CPUs. In the HARNESS project we have attacked all these problems with several new technologies, and found ways to integrate those new technologies into otherwise conventional platforms, primarily OpenStack.

Cloud tenants can describe their performance needs and constraints for their applications, including an indication of when they will require results to be delivered or how much they are willing to pay, so that the cloud operator, based on the current state of the data center, can decide how and when to allocate the appropriate mix of commodity and specialized resources. We see this as a fundamental paradigm shift from the traditional resource-oriented view of cloud computing to one that we characterize as ‘results oriented’.
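
For concreteness, a results-oriented request of this kind might carry fields like the sketch below; the field names are invented for illustration and do not reproduce the actual HARNESS manifest format described later in the article.

```python
# Illustrative only: a tenant's results-oriented request as a plain Python
# dictionary. Field names are invented for this sketch and do not
# reproduce the actual HARNESS manifest schema.
tenant_manifest = {
    "application": "video-transcode",  # hypothetical application name
    "deadline_seconds": 3600,          # results needed within the hour
    "max_cost_eur": 5.00,              # willing to pay at most this much
    "input_size_gb": 40,
}
# The operator, not the tenant, then chooses the mix of commodity and
# specialized resources (CPUs, GPUs, FPGAs, SSDs, ...) that satisfies it.
```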

Programming something like a GPU or an FPGA is not easy. Fortunately, there are a good number of researchers working on this particular problem today, including members of the HARNESS team, so help is on the way. We can therefore assume that applications, or rather critical components of applications, will be available for execution in a variety of configurations, some conventional and some that use specialized resources, each of which yields a different trade-off between performance and cost.

Beyond programming lies the fundamental problem of cost/performance prediction. Here, HARNESS provides the ability to predict the performance of an application in all its many possible deployments by automatically exploring the space of configurations. The larger the configuration, or the more it relies on expensive resources, the higher its cost to reach a given performance goal. The application operator merely provides a description of the application (a ‘manifest’) and HARNESS does the rest.
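
To give a feel for that exploration, here is a toy Python sketch with made-up prices and an idealized linear-speedup model standing in for real performance prediction; it enumerates candidate configurations and keeps the cheapest one that still meets the tenant's deadline.

```python
# Toy sketch of configuration-space exploration, with made-up prices and
# an idealized model standing in for real performance prediction.
from itertools import product

resource_prices = {"cpu": 0.10, "gpu": 0.90, "fpga": 0.60}  # EUR per hour

def predict_runtime_s(kind: str, count: int) -> float:
    """Stand-in performance model: invented numbers, linear speedup."""
    base = {"cpu": 3600.0, "gpu": 360.0, "fpga": 700.0}[kind]
    return base / count

def cheapest_meeting_deadline(deadline_s: float):
    """Enumerate (kind, count) configurations and keep the cheapest one
    that meets the deadline, mirroring 'results oriented' selection."""
    best = None
    for kind, count in product(resource_prices, [1, 2, 4, 8]):
        runtime_s = predict_runtime_s(kind, count)
        if runtime_s > deadline_s:
            continue  # this configuration misses the deadline
        cost_eur = resource_prices[kind] * count * runtime_s / 3600.0
        if best is None or cost_eur < best[3]:
            best = (kind, count, runtime_s, cost_eur)
    return best

print(cheapest_meeting_deadline(600.0))  # -> ('gpu', 1, 360.0, 0.09)
```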

Finally, once a given configuration is chosen to meet some cost/performance trade-off goal, the platform will manage the deployment and execution of the application. Again, the platform makes use of the description of the application.

While the HARNESS project is now coming to a close, our industrial partners are already busy exploiting the technology we’ve created. We are currently pulling together a full, top-to-bottom demonstration of the HARNESS platform, its integration with OpenStack, and its deployment in three different cloud environments. Be sure to visit the HARNESS project website to find out more.


The ISC Cloud and Big Data 2015 conference will be held in Frankfurt on 28-30 September. If you’d like to find out more about the conference, including how to register to attend, please visit the event website.

HARNESS stands for ‘Hardware- and Network-Enhanced Software Systems for Cloud Computing’. This three-year project was launched on 1 October 2012 and received funding under the European Commission’s Seventh Framework Programme (FP7).
