
Feature - Les Robertson: six years at the head of the LCG


Les Robertson in 2001, "just before it [the LCG project] started," and in 1974, when he first arrived at CERN.
Images courtesy of Les Robertson

In this special feature iSGTW chats to Les Robertson, who recently stepped down after six years at the head of the Large Hadron Collider Computing Grid.

A hole in the funding bucket
Democratic and global
The Grid
Challenges so far: the big three
Our challenges for the future
Countdown to startup

In the beginning

Les Robertson arrived at CERN in 1974 to fix a problem. The European physics research laboratory had just purchased a new supercomputer. The problem, says Robertson, was that it didn't work.

"At that time customers fixed their own operating systems," he explains. "I arrived as an operating systems expert and stayed on."

Twenty-seven years later, Robertson began work on an entirely different problem: Preparations for the Large Hadron Collider were well underway, but the computing resources required to handle LHC data had been left behind.

A hole in the funding bucket

"Computing wasn't included in the original costs of the LHC," Robertson explains. "The story was that you wouldn't be able to estimate the costs involved, although the estimates we made at the time have proven to be more or less correct." This decision left a big hole in funding for IT crucial to the ultimate success of the LHC.

"We clearly required computing," says Robertson, "but the original idea was that it could be handled by other people."

In 2001, these "other people" had not stepped forward.

"There was no funding at CERN or elsewhere," Robertson says. "A single organization could never find the money to do it. We realized the system would have to be distributed."

CERN began asking countries to help. The charge was led by the UK, which contributed a substantial chunk of e-science funding, closely followed by Italy, which continues to supply substantial funding to CERN. Germany also contributed significant funding, and then, says Robertson, other countries followed suit.

"This money gave us a big boost," he says. "It allowed us to create something much bigger."

The Grid

In 1999 Harvey Newman from Caltech had initiated the MONARC project to look at distributed architectures that could integrate computing resources for the LHC, no matter where they were located. At around the same time, Carl Kesselman and Ian Foster carved a spot on the world stage for the Grid.

"Their book motivated the idea of doing distributed computing in a very general way," Robertson says. "It stimulated everyone's interest. We decided to ride the wave." But the Grid has not become the panacea, says Robertson. "It has become 250 different things, which has led to benefits and problems. Standards haven't emerged in the way we expected, nor have off the shelf products."


Some centers involved in the WLCG; clockwise from top left: the French Tier-1 in Lyon, the Asian Tier-1 in Taipei, the University of Wisconsin Tier-2 in the U.S., and the CERN Tier-0 in Switzerland.
Images courtesy of IN2P3, ASGC, UW-Madison and CERN

Democratic and global

A big success of the LCG has been the involvement of multiple centers from around the world.

"Different countries, universities, labs…We have over 110 Tier-2 centers up and running, some big and some very small, but all delivering resources to the experiments," Robertson explains. "Many of these are computing centers that haven't been a fundamental part of the experiments environment before, and we've all put a lot of effort into working as a collaboration, sensitizing people to what will be required when the first data starts to arrive. The advantage is that all these centers are now involved in the experiments and so there are many options for injecting new resources when they are required."


Challenges so far: the big three

When asked about the challenges he faced as head of the LCG project, Robertson laughs wryly. "There were several big problems," he says, "and they were all a bit the same."

Money
"Funding was certainly a problem, and the UK and Italy were especially important in providing people to get us started. We also benefited enormously from EGEE and OSG and their predecessors. As far as equipment is concerned, with the exception of ALICE, we have what we need for the first couple of years. After that there's still a lot of work to be done to build up resources as the data grows."

Collaboration
"I was surprised by the intensity of competition within the HEP community. This is collaboration, not a project with funding for people, so we all have to agree on what we do. People have had lots of good ideas, but in the end you have to do the practical thing. Achieving resolution has been harder than I expected."

Distant deadlines
"When the end is far away, there's a temptation to think of sophisticated, clever ways of doing things. But this is difficult when there is little experience and so you don't actually know what you need. Over the past year, the LHC has drawn closer to startup and this situation has changed. People have started to realize that we have to use what is available, because we want to do physics, and we need a solution."


Our challenges for the future

Immediate: Stabilizing operations
"The futures of HEP and the grid depend on what comes out of the LHC. It's very important that the LHC produces something quickly and that grid operations stabilize rapidly."

Mid-term: Managing the data
"We can physically move data around quite well, but the challenges of data placement and management are still being proven. How do we distribute the data, and how do you find out where it is? There are enormous challenges yet to come."

Long-term: Managing energy requirements
"Computing has been getting cheaper and cheaper. Now costs are going up because of power requirements. The cost of supplying energy will affect all large-scale computing. We will have to invest heavily in ways to improve efficiency."


Countdown to startup

So is Robertson confident that all will go according to the LCG plan when the first proton beams race through the LHC? He's hoping!

"There is a lot of work still to be done," he says. "This is new, this idea that you start a machine and the computing required is not all at the same place as the machine. It hasn't actually been done before. When the beams come, we don't know what will happen. Things will be chaotic, people will want things we didn't expect. But HEP is showing that this highly distributed environment is useable. Physicists are no longer dependent on CERN having all the funding or CERN deciding on priorities. We've created a democratic environment where you can plug in computer resources wherever you find it. In principle, that was the real goal of the grid."

- Cristy Burne, iSGTW
