

How deep is your network?

Speed read
  • Artificial intelligence holds promise but may be misunderstood
  • AI, machine learning, and deep learning aren’t the same thing
  • AI is older than you think

Artificial intelligence (AI) is one of the most exciting technologies to watch right now. We’ve recently discussed the application of AI in everything from mechanical musical composition to reversing the American opioid crisis – and that’s just a small taste of what mechanized minds are capable of.

Deep sense. Yu Wang, senior AI scientist at Leibniz Supercomputing Centre, explains how deep neural networks became so popular. Recorded at ISC High Performance 2018 in Frankfurt, Germany.

Despite AI’s surge in popularity, many misconceptions remain. For example, did you know that AI began its journey more than 50 years ago? Or that the terms “machine learning” and “deep learning” aren’t interchangeable?

Thankfully, AI researcher Yu Wang of the Leibniz Supercomputing Centre is more than happy to share his knowledge. We caught up with him at ISC High Performance 2018 to set the record straight.

The AI family tree

“Artificial intelligence is a very general concept. It includes machine learning, which is a subset of artificial intelligence,” Wang says.

While some popular articles use terms like 'AI,' 'machine learning,' and 'deep learning' interchangeably, it's better to think of them as a set of Russian nesting dolls.

Nested intelligence. Wang likens AI to the largest doll in a nested set, with machine learning inside that, and deep learning inside that. Courtesy Bo Hughins. (CC BY-SA 2.0)

If the largest doll is AI – the simulation of human-level intelligence in a machine – the next doll inside is machine learning. This is where a system has an algorithm that enables its software to improve future guesses based on feedback. Google's attempt to use machine learning to create the perfect cookie is a good example.

However, nestled inside machine learning is deep learning.

“One subset of the machine learning techniques is called neural networks,” Wang says. “This technique got really hot recently because of the improvement in computational power and the amount of data we can access. Then these neural networks became deeper; that’s why they call it deep learning, or deep neural networks.”

Neural networks are systems built to mimic the neurons in your brain. The idea is to create a machine that can think and form conclusions much like a human does. Deep neural networks require an enormous amount of data as well as specific learning techniques to improve the mechanical mind.
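As a rough sketch of that analogy, here is a minimal artificial neuron in Python: a weighted sum of inputs squashed by an activation function, with several such neurons stacked into a layer. All of the weights and sizes below are invented purely for illustration, not taken from any real model.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs plus a bias,
    squashed by a sigmoid activation (loosely analogous to a biological
    neuron firing once its inputs are strong enough)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid: output between 0 and 1

def layer(inputs, weight_rows, biases):
    """A layer is many neurons reading the same inputs; stacking several
    layers is what makes a network 'deep'."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two inputs flow through a two-neuron hidden layer, then one output neuron.
hidden = layer([0.5, -1.0], [[0.9, 0.2], [-0.4, 0.7]], [0.1, 0.0])
output = neuron(hidden, [1.5, -2.0], 0.3)
print(round(output, 3))
```

In a real deep network the same pattern repeats across many layers and millions of weights, and the weights are learned from data rather than written by hand.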

“In deep learning, training a model means you have the data, you pass the data through this neural network, and then you modify it until your model converges to a certain level of accuracy,” Wang says.

“Basically, the training will get your model into a status which is satisfying for your applications. You can then use the model in the field. For example, if you use it to detect a certain cancer, then you put it on the instrument and, once the image comes out, your model will make a prediction out of this image of whether it’s cancer or not. This step is called inferencing.”
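Wang's train-then-infer cycle can be sketched in a few lines. This toy example trains a single logistic neuron on a small labeled dataset (logical OR, chosen purely for illustration) by repeatedly passing the data through the model and nudging the weights, then runs inference with the finished model; a real deep learning system does the same thing at vastly larger scale.

```python
import math

def predict(x, w, b):
    """Inference: pass an input through the model to get a score in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

# Toy labeled dataset: the logical OR function.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

# Training: pass the data through the model and modify the weights
# (gradient descent on the log-loss) until predictions match the labels.
w, b, lr = [0.0, 0.0], 0.0, 1.0
for _ in range(2000):
    for x, y in data:
        err = predict(x, w, b) - y   # prediction error drives the update
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b    -= lr * err

# Inference with the trained model: scores above 0.5 mean "true".
print([round(predict(x, w, b), 2) for x, _ in data])
```

Swap the toy dataset for labeled medical images and the single neuron for a deep network, and this is the same training/inferencing split Wang describes for cancer detection.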

Of course, AI comes with plenty of specialized nomenclature of its own and contains multiple fields of study. These terms, however, give a sense of the challenges that researchers are working to overcome.

A quick history lesson  

“Artificial intelligence has been around since the 1950s; it’s an old concept,” Wang says. “The original idea is just trying to mimic human intelligence to solve problems.”

While many had dreamed of mechanical minds before, Alan Turing’s proposal of the Turing Test in 1950 revolutionized how people thought about computers. Although this was the first modern mention of a test for what would eventually be known as AI, the phrase “artificial intelligence” was not used until an academic conference in 1956.

But the practical use of machine learning to solve real-world problems is quite recent.

Clippy, the (widely reviled) animated assistant for Microsoft Office 97, is an early example of machine learning. Wang notes, however, that because Clippy is based on a Bayesian network, not everyone may agree. Courtesy Microsoft.

“Machine learning has been popular since before the 1990s, and many big companies had started using machine learning techniques,” says Wang. “The most famous example is the assistant from Microsoft Word. When you try to click some buttons, this thing jumps out.”

Wang is talking about Clippy. This anthropomorphized paperclip was an animated personality of the Microsoft Office Assistant interface. Born out of the failure of Microsoft Bob – which used the interface of a virtual home strewn with your files and programs – Clippy began interrupting work sessions with the release of Office 97. Within a decade, Clippy would be cast aside by Microsoft.

Despite its short time in the spotlight, Clippy is seen by many as one of the biggest steps towards introducing the average consumer to artificial intelligence. The user was meant to converse with it as if it were human, which seems ordinary now but was revolutionary at the time. Clippy was early to the game that Amazon’s Alexa and Google Assistant eventually ended up winning.

Very few people saw Clippy’s potential for future systems. In the same way, we don’t know which current development may change AI forever. However, Wang has some theories.

“In supervised learning, you need data with labels,” Wang says. “You have to first have some examples, then you tell the machine ‘these are the examples.’ The future is unsupervised learning. The machines will just go out and learn by themselves. But that’s a dream; we are working on it.”
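The contrast Wang draws can be illustrated with a classic unsupervised algorithm: k-means clustering, which groups raw numbers into clusters with no labels attached at all. The data and the cluster count below are invented for illustration.

```python
def kmeans_1d(points, k=2, steps=20):
    """Unsupervised learning sketch: split numbers into k groups without
    ever being told which group any point belongs to -- the algorithm
    discovers the structure on its own."""
    centers = points[:k]  # naive initialization: first k points
    for _ in range(steps):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest center
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Heights in meters, with no labels -- the two groups emerge by themselves,
# with centers settling near 1.1 and 1.8.
print(kmeans_1d([1.1, 1.0, 1.2, 1.8, 1.9, 1.7]))
```

A supervised learner would need each height tagged "child" or "adult" up front; the unsupervised version finds the two groups anyway, which hints at why Wang calls label-free learning the field's dream.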


Copyright © 2019 Science Node ™  |  Privacy Notice  |  Sitemap
