Learning goes deep

Speed read
  • Deep neural networks have the potential to help with complex scientific problems
  • Researchers use Titan supercomputer to generate custom neural networks
  • Method is already proving its worth in studying neutrinos

Deep neural networks—a form of artificial intelligence—have demonstrated mastery of tasks once thought uniquely human. Successes have ranged from identifying animals in images to recognizing human speech to winning complex strategy games.

What makes a face? Neural networks have been successfully trained to recognize human faces. Courtesy Lottie, Alan Levine, Thomas Lasserre, USDA/Lance Cheung via Flickr.

Now, researchers are eager to apply this computational technique—commonly referred to as deep learning—to some of science’s most persistent mysteries. But scientific data often look very different from the everyday photos and speech recordings such networks are typically trained on.

To expand the benefits of deep learning for science, researchers need new tools that can build high-performing neural networks without demanding specialized expertise. Using the Titan supercomputer, a research team led by Robert Patton of the US Department of Energy’s Oak Ridge National Laboratory (ORNL) has developed an algorithm that can generate custom neural networks that match or exceed the performance of handcrafted artificial intelligence systems. By leveraging the graphics processing unit (GPU) power of the Cray XK7 Titan, these networks can be produced in a matter of hours, instead of the months required by conventional methods.

The research team’s algorithm, Multinode Evolutionary Neural Networks for Deep Learning (MENNDL), is designed to evaluate and optimize neural networks for unique datasets. Scaled across Titan’s 18,688 GPUs, MENNDL can test and train thousands of potential networks for a science problem simultaneously, eliminating poor performers and averaging high performers until an optimal network emerges.
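In outline, this kind of evolutionary search is a generate-evaluate-select loop. The sketch below is a minimal, single-machine illustration of that loop, not MENNDL's actual code: the search space, the mutation rule, and the fitness function (a stand-in for actually training each candidate network on real data) are all invented for the example.

```python
import random

# Hypothetical hyperparameter search space. MENNDL explores real network
# topologies; this stand-in uses a few numeric knobs for illustration.
SPACE = {
    "layers": range(2, 8),
    "filters": (16, 32, 64, 128),
    "learning_rate": (1e-2, 1e-3, 1e-4),
}

def random_network():
    """Sample one candidate configuration from the search space."""
    return {k: random.choice(list(v)) for k, v in SPACE.items()}

def fitness(net):
    # Stand-in for training and validating the network on real data.
    # Here we simply reward a made-up "sweet spot" configuration.
    return -abs(net["layers"] - 5) - abs(net["filters"] - 64) / 64

def mutate(net):
    """Copy a parent and randomly change one hyperparameter."""
    child = dict(net)
    key = random.choice(list(SPACE))
    child[key] = random.choice(list(SPACE[key]))
    return child

def evolve(generations=20, pop_size=16):
    population = [random_network() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]   # eliminate poor performers
        # Refill the population with mutated copies of the survivors.
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=fitness)

best = evolve()
```

Selection pressure keeps the best configurations while mutation keeps exploring; MENNDL runs this kind of loop with thousands of candidate networks evaluated in parallel rather than one at a time.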

“There’s no clear set of instructions scientists can follow to tweak networks to work for their problem.” ~ Steven Young

“With MENNDL, they no longer have to worry about designing a network. Instead, the algorithm can quickly do that for them, while they focus on their data and ensuring the problem is well-posed,” said research scientist Steven Young, a member of ORNL’s Nature Inspired Machine Learning team.

Pinning down parameters

Inspired by the brain’s web of neurons, the concept of artificial neural networks was first popularized in the 1940s. But because of limits in computing power, it wasn’t until recently that researchers could successfully train machines to independently interpret data.

Today’s neural networks can consist of thousands or millions of simple computational units—the “neurons.” During one common form of training, a network is assigned a task (e.g., to find photos with cats) and fed a set of labeled data (e.g., photos of cats and photos without cats).

Nature knows best. Neural networks are inspired by the human brain, but it's only recently that computing power has been sufficient to test the idea, originally conceived in the 1940s. Courtesy Unsplash/Jesse Orrico.

As the network pushes the data through each successive layer, it makes correlations between visual patterns and predefined labels, assigning values to specific features (e.g., whiskers and paws). These values contribute to the weights that define the network’s model parameters. During training, the weights are continually adjusted until the final output matches the targeted goal. Once the network learns to perform from training data, it can then be tested against unlabeled data.
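The weight-adjustment idea described above can be illustrated with the smallest possible "network": a single logistic neuron. The toy data, feature names, and learning rate below are invented for illustration; real networks have millions of weights and train on large labeled datasets, but the principle of nudging weights until outputs match labels is the same.

```python
import math

# Toy labeled data: each example is (has_whiskers, has_paws) and a label
# (1 = cat photo, 0 = no cat). Purely illustrative, not a real dataset.
data = [((1, 1), 1), ((1, 0), 1), ((0, 1), 0), ((0, 0), 0)]

w = [0.0, 0.0]   # one weight per feature
b = 0.0          # bias
lr = 0.5         # learning rate

def predict(x):
    """Probability the photo contains a cat, via a sigmoid unit."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

# Training: repeatedly nudge the weights so predictions move toward
# the labels (stochastic gradient descent on the logistic loss).
for _ in range(2000):
    for x, y in data:
        err = predict(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err
```

After training, the learned weights encode which features mattered: in this toy data only the "whiskers" feature predicts the label, so its weight dominates.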

Although many parameters of a neural network are determined during the training process, initial model configurations, known as hyperparameters (for example, the number and types of layers), must be set manually.

Finding the optimal set of hyperparameters can be the key to efficiently applying deep learning to an unusual dataset. “You have to experimentally adjust these parameters,” Young said. “We used an evolutionary algorithm on Titan to find the best hyperparameters for varying types of datasets.”

Unlocking that potential required some creative software engineering by Patton’s team. MENNDL homes in on a neural network’s optimal hyperparameters by assigning a neural network to each Titan node.

MENNDL relies on parallel computing to distribute data among nodes. As Titan works through individual networks, new data are fed to the system’s nodes asynchronously, so that each node is quickly assigned a new task and Titan stays busy combing through possible configurations.

“Designing the algorithm to work at that scale was one of the challenges,” Young said. “To really leverage the machine, we set up MENNDL to generate a queue of individual networks to send to the nodes for evaluation as soon as computing power becomes available.”
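The dispatch strategy Young describes, a queue of candidate networks handed to whichever node frees up first, can be sketched with Python's standard thread pool. The worker count, timings, and random scores below are stand-ins; on Titan each evaluation is a full network training run on a GPU node.

```python
import concurrent.futures
import random
import time

def evaluate(network_id):
    # Stand-in for training one candidate network on a node.
    # Real evaluations take minutes to hours, not milliseconds.
    time.sleep(random.uniform(0.01, 0.05))
    return network_id, random.random()   # (id, pretend validation score)

# A small pool of workers stands in for Titan's nodes. Candidates are
# queued and handed to whichever worker frees up first, so no node idles.
candidates = list(range(12))
results = {}
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(evaluate, nid) for nid in candidates]
    for fut in concurrent.futures.as_completed(futures):
        nid, score = fut.result()
        results[nid] = score

best = max(results, key=results.get)
```

Because results are consumed as they complete rather than in submission order, one slow evaluation never blocks the rest of the queue.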

To demonstrate MENNDL’s versatility, the team applied the algorithm to several datasets, training networks to identify sub-cellular structures for medical research, classify clouds in satellite images, and categorize high-energy physics data. The results matched or exceeded the performance of networks designed by experts.

Networking neutrinos

One science domain in which MENNDL is already proving its value is neutrino physics. Large detectors at DOE’s Fermi National Accelerator Laboratory (Fermilab) use high-intensity beams to study elusive neutrino reactions with ordinary matter. The devices capture a large sample of neutrino interactions that can be transformed into basic images through a process called “reconstruction.”

<strong>Exploring infinite space.</strong> Deep neural networks are improving the efficiency of research, such as in the MINERvA neutrino experiment. Courtesy Fermilab/Reidar Hahn.

Gabriel Perdue, an associate scientist at Fermilab, leads an effort to integrate neural networks into the classification and analysis of detector data. The work could improve the efficiency of some measurements, help physicists understand how certain they can be about their analyses, and lead to new avenues of inquiry.

Teaming up with Patton’s team, Fermilab researchers produced a competitive classification network in support of a neutrino scattering experiment called MINERvA (Main Injector Experiment for ν-A). The task, known as vertex reconstruction, required a network to analyze images and precisely identify the location where neutrinos interact with the detector—a challenge for events that produce many particles.

In only 24 hours, MENNDL produced optimized networks that outperformed handcrafted networks—an achievement that would have taken months for Fermilab researchers. To identify the high-performing network, MENNDL evaluated approximately 500,000 neural networks. The training data consisted of 800,000 images of neutrino events, steadily processed on 18,000 of Titan’s nodes.

“You need something like MENNDL to explore this effectively infinite space of possible networks, but you want to do it efficiently,” Perdue said. “What Titan does is bring the time to solution down to something practical.”

Perdue’s team is building on its deep learning success by applying MENNDL to additional high-energy physics datasets to generate optimized algorithms. In addition to improved physics measurements, the results could provide insight into how and why machines learn.

“We’re just getting started,” Perdue said. “I think we’ll learn really interesting things about how deep learning works, and we’ll also have better networks to do our physics. The reason we’re going through all this work is because we’re getting better performance, and there’s real potential to get more.”

Read the original article on ORNL's website.

Copyright © 2018 Science Node ™