February 18, 2016

Unlocking the potential of deep learning

By Courtney Hamilton
Pierre Baldi leads ICS’s machine learning research, which assists particle physics experiments at CERN.

At the European Organization for Nuclear Research, better known as CERN, the Large Hadron Collider (LHC) produces one petabyte of data per second. Of this, it records only roughly 6 gigabytes per second for processing, amassing a 100-petabyte archive since its first run in 2009. For context, 1 petabyte of data is enough to store the DNA of the entire population of the United States, and then clone them twice over, according to Computer Weekly. A library of all of the world’s texts, in all of the world’s languages, would contain nearly 50 petabytes of data. By Computer Weekly’s estimates, 100 petabytes would hold the complete memories of 80,000 people. That’s the amount of data CERN scientists are working with.
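
To see how aggressive that filtering is, here is a quick back-of-the-envelope calculation in Python, using the round figures quoted above rather than CERN’s exact numbers:

```python
# Back-of-the-envelope arithmetic with the round numbers quoted above.
produced_per_s = 1e15      # ~1 petabyte of collision data produced per second
recorded_per_s = 6e9       # ~6 gigabytes per second actually recorded
archive_total = 100e15     # ~100 petabytes amassed since 2009

print(f"fraction of data kept: 1 in {produced_per_s / recorded_per_s:,.0f}")
print(f"equivalent days of continuous recording in the archive: "
      f"{archive_total / recorded_per_s / 86_400:,.0f}")
```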

And that’s where new techniques in artificial intelligence and machine learning—especially deep learning—become very useful, says Pierre Baldi, Chancellor’s Professor of Computer Science and director of the Institute for Genomics & Bioinformatics. In November, Baldi traveled to CERN with his graduate student Peter Sadowski to lecture on deep learning at the groundbreaking Data Science @ LHC 2015 Workshop, forging links between particle physics and machine learning.

Deep learning gets its name from the many layers used in processing high-level abstractions. As Baldi explains, “Think about vision, for instance: we don’t recognize images immediately. We detect edges or contours; it’s a process that occurs in many stages. It’s a deep process that requires extracting features and combining them together. Deep learning is the idea that you can train this whole stack of processes together using data.”
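
To make the idea concrete, here is a minimal sketch in NumPy (a toy XOR problem, nothing to do with CERN’s data): a two-layer stack whose layers are trained together by letting gradients flow through the whole network, in exactly the sense Baldi describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, which no single linear layer can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A small "stack": a hidden layer that extracts features,
# and an output layer that combines them.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass through the whole stack.
    h = np.tanh(X @ W1 + b1)    # intermediate features ("edges/contours")
    p = sigmoid(h @ W2 + b2)    # final decision

    # Backward pass: gradients flow through every layer, so the
    # entire stack is trained together from the data.
    g_out = (p - y) / len(X)              # cross-entropy gradient at the output
    dW2, db2 = h.T @ g_out, g_out.sum(0)
    g_h = (g_out @ W2.T) * (1 - h**2)     # chain rule back through tanh
    dW1, db1 = X.T @ g_h, g_h.sum(0)

    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad               # one gradient-descent step

print(p.round(2).ravel())  # approaches [0, 1, 1, 0]
```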

Baldi and his collaborator Daniel Whiteson, a UCI associate professor of physics and astronomy, have shown that deep learning can improve our ability to detect exotic particles like the Higgs boson, and potentially dark matter. Roughly 80 percent of the matter in the universe is dark matter, and physicists have observed its gravitational effects. But because dark matter doesn’t interact with light, we don’t know what it is, exactly. Particle-collision experiments attempt to produce dark matter within the LHC, though its existence, if it were produced there, could only be inferred from the energy and momentum missing after a collision. With signals that faint, anything that sharpens a classifier helps, including a deep-learning byproduct called “dark knowledge,” described next.
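
A rough sketch of that signal-versus-background setup, with synthetic data standing in for real collision events (this is illustrative only, not the researchers’ actual pipeline): each event becomes a vector of kinematic features, and a deep network learns to separate signal from background.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic stand-in for collision events: 28 features per event (the
# public HIGGS benchmark uses 28 kinematic features), labeled signal (1)
# or background (0) by a made-up nonlinear rule.
n_events, n_features = 20_000, 28
X = rng.normal(size=(n_events, n_features))
w = rng.normal(size=n_features)
y = ((X @ w + X[:, 0] * X[:, 1]) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Several hidden layers are what make the classifier "deep".
clf = MLPClassifier(hidden_layer_sizes=(100, 100, 100),
                    max_iter=50, random_state=0).fit(X_tr, y_tr)

# AUC is a standard figure of merit for signal/background discrimination.
print("AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```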

Deep learning is also capable of creating “dark knowledge,” a form of knowledge that is not explicitly present in the training data. Consider a vision system trained to recognize a finite number of object categories, each one associated with a different output. “When a car is presented to the trained system, in the output, you’re getting all kinds of numbers. There will be a large number corresponding to the car category, because the system is well-trained to recognize the image of a car,” Baldi explains. “But it will also have some value for a truck, because a truck is somewhat similar to a car. The intensity for a vegetable, however, will be very low, because a vegetable has nothing to do with a car. So, dark knowledge is the information contained in the relative size of these numbers that goes beyond the fact of whether there is a car or not in the image. It tells you something about the degree of similarity between the categories.”
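
In code, the numbers Baldi describes are the outputs of a classifier’s softmax layer. The sketch below uses made-up scores for his car/truck/vegetable example; raising the softmax “temperature” T, as in Geoffrey Hinton and colleagues’ dark-knowledge work, spreads the distribution out so the similarity structure is easier to read off.

```python
import numpy as np

def softmax(logits, T=1.0):
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())           # subtract max for numerical stability
    return e / e.sum()

# Hypothetical output scores from a trained classifier shown a car photo.
logits = {"car": 9.0, "truck": 5.0, "vegetable": -2.0}

for T in (1.0, 4.0):                  # T > 1 "softens" the distribution
    probs = softmax(list(logits.values()), T)
    print(f"T={T}:", {k: round(p, 3) for k, p in zip(logits, probs)})
```

At T=1 the car category dominates almost completely; at T=4 the truck’s residual probability becomes visible, which is the “dark knowledge” about category similarity.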

Dark knowledge can be used to better train shallow learning systems—smaller and faster, but less accurate systems like those on your smartphone—fittingly tying together deep learning, dark knowledge and dark matter.
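
That transfer is typically done by training the small model to match the large model’s softened output distribution rather than the hard labels. Here is a minimal sketch of the distillation loss, with made-up logits, following the general recipe from Hinton et al.’s distillation paper rather than any CERN-specific method:

```python
import numpy as np

def softmax(logits, T=1.0):
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

T = 4.0                                        # softening temperature
teacher_logits = np.array([9.0, 5.0, -2.0])    # big, slow, accurate model
student_logits = np.array([4.0, 3.0, 1.0])     # small, fast model in training

soft_targets = softmax(teacher_logits, T)      # carries the similarity structure
student_probs = softmax(student_logits, T)

# Cross-entropy of the student against the teacher's soft targets;
# minimizing it over the student's weights transfers the dark knowledge.
loss = -(soft_targets * np.log(student_probs)).sum()
print(f"distillation loss: {loss:.3f}")
```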

While the link between deep learning and particle physics is picking up steam, there remain concerns. “There is always some level of anxiety when a new method comes into a field, but deep learning comes with additional anxieties because people view it as a ‘black box method,’” Baldi says. “It may be very good at recognizing the signature of Higgs bosons, but you don’t know how it does this, because the system learns from data through an opaque process. People find this a little unsettling.”

These groundbreaking new techniques upend principles physicists have relied on for decades, but there’s an additional challenge to implementing deep learning on a wide scale. “Deep learning requires a lot of computing resources to process such large amounts of data and to train deep-learning systems. It’s very intensive from a computational standpoint,” Baldi says. Naturally, funding such huge enterprises can be difficult.

Still, applying deep-learning techniques to particle physics experiments represents only the tip of the iceberg in the field’s useful applications. While deep learning has mainly been utilized in engineering—examples include Facebook’s face recognition technology, or Google’s use of deep learning to recognize speech and natural language—the field could be useful in all natural sciences. In chemistry, for example, it is being used to predict the outcome of reactions. In biology, it is being used to predict protein structures, or the effect of mutations.

Ultimately, Baldi says, “Deep learning is the greatest tool we have today to detect faint signals in data. The potential impact is great.”

Online Resources:

  • Data Science @ LHC 2015 Workshop (PDFs and recordings)

  • Pierre Baldi’s lecture, “Deep Learning and its Applications in Natural Science,” at the workshop (PDF and recording)

  • “Artificial intelligence called in to tackle LHC data deluge” article in Nature
