- Artificial intelligence holds promise but may be misunderstood
- AI, machine learning, and deep learning aren’t the same thing
- AI is older than you think
Artificial intelligence (AI) is one of the most exciting technologies to watch right now. We’ve recently discussed the application of AI in everything from mechanical musical composition to reversing the American opioid crisis – and that’s just a small taste of what mechanized minds are capable of.
Despite AI’s surge in popularity, many misconceptions remain. For example, did you know that AI began its journey more than 60 years ago? Or that the terms “machine learning” and “deep learning” aren’t interchangeable?
Thankfully, AI researcher Yu Wang of the Leibniz Supercomputing Centre is more than happy to share his knowledge. We caught up with him at ISC High Performance 2018 to set the record straight.
The AI family tree
“Artificial intelligence is a very general concept. It includes machine learning, which is a subset of artificial intelligence,” Wang says.
While some popular articles use terms like ‘AI,’ ‘machine learning,’ and ‘deep learning’ interchangeably, it’s better to think of them as a set of Russian nesting dolls.
If the largest doll is AI – the simulation of human-level intelligence in a machine – the doll inside it is machine learning. This is where a system’s algorithm improves its future guesses based on feedback from data. Google’s attempt to use machine learning to create the perfect cookie is a good example.
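As a rough, minimal sketch of that feedback loop (our own illustration in Python, not anything from the interview), here is a one-parameter “model” that improves its future guesses from the errors of its past ones:

```python
# A toy "improve future guesses from feedback" loop: fit y ≈ w * x
# by nudging the parameter w whenever a guess misses. Illustrative only.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

w = 0.0                # the model's initial guess for the parameter
learning_rate = 0.05

for step in range(200):
    for x, y in data:
        guess = w * x
        error = guess - y                # feedback: how far off was the guess?
        w -= learning_rate * error * x   # adjust so the next guess is better

print(f"learned w = {w:.2f}")  # ends up close to 2.0
```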
However, nestled inside machine learning is deep learning.
“One subset of machine learning techniques is called neural networks,” Wang says. “This technique got really hot recently because of improvements in computational power and the amount of data we can access. Then these neural networks became deeper; that’s why they call it deep learning, or deep neural networks.”
Neural networks are systems built to mimic the neurons in your brain. The idea is to create a machine that can think and form conclusions much like a human does. Deep neural networks require an enormous amount of data as well as specific learning techniques to improve the mechanical mind.
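To make that a little more concrete, here is a minimal sketch of a two-layer network’s forward pass in NumPy; the layer sizes and random weights are arbitrary stand-ins for what training would normally produce:

```python
import numpy as np

def relu(z):
    """A simple activation: each artificial 'neuron' fires only above zero."""
    return np.maximum(0.0, z)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # layer 1: 3 inputs -> 4 neurons
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # layer 2: 4 neurons -> 2 outputs

x = np.array([0.5, -1.0, 2.0])  # one input example
hidden = relu(W1 @ x + b1)      # each neuron weighs its inputs, then fires (or not)
output = W2 @ hidden + b2       # stacking more such layers makes the network "deep"
print(output)
```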
“In deep learning, training a model means you have the data, you pass the data through this neural network, and then you modify it until your model converges to a certain level of accuracy,” Wang says.
“Basically, the training gets your model into a state that is satisfactory for your application. You can then use the model in the field. For example, if you use it to detect a certain cancer, you put it on the instrument and, once the image comes out, your model predicts from that image whether it’s cancer or not. This step is called inferencing.”
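As a hedged sketch of those two phases, the Python below first trains a single-“neuron” classifier (logistic regression, the simplest stand-in for the networks Wang describes) on made-up feature vectors, then runs inference on a new case. The “cancer” framing is only a placeholder; everything here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Training: pass data through the model, modify it until accuracy is acceptable ---
X = rng.normal(size=(200, 5))                # made-up feature vectors
true_w = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
y = (X @ true_w > 0).astype(float)           # synthetic labels ("cancer" / "not")

w = np.zeros(5)
for epoch in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))       # the model's current predictions
    w -= 0.1 * X.T @ (p - y) / len(y)        # modify the model based on its errors

# --- Inference: the trained model makes a prediction on a new, unseen case ---
new_case = rng.normal(size=5)                # e.g., features from a fresh image
prob = 1.0 / (1.0 + np.exp(-(new_case @ w)))
print("positive" if prob > 0.5 else "negative", f"(p = {prob:.2f})")
```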
Of course, AI comes with plenty of specialized nomenclature of its own and contains multiple fields of study. Still, these terms give a sense of the challenges that researchers are working to overcome.
A quick history lesson
“Artificial intelligence has been around since the 1950s; it’s an old concept,” Wang says. “The original idea was simply to mimic human intelligence to solve problems.”
While many had dreamed of mechanical minds before, Alan Turing’s proposal of the Turing Test in 1950 revolutionized how people thought about computers. Although this was the first modern mention of a test for what would eventually be known as AI, the phrase “artificial intelligence” was not used until an academic conference at Dartmouth College in 1956.
But the practical use of machine learning to solve real-world problems is quite recent.
“Machine learning has been popular since before the 1990s, and many big companies started using machine learning techniques,” says Wang. “The most famous example is the assistant from Microsoft Word. When you try to click some buttons, this thing jumps out.”
Wang is talking about Clippy. This anthropomorphized paperclip was an animated personality of the Microsoft Office Assistant interface. Born out of the failure of Microsoft Bob – which used the interface of a virtual home strewn with your files and programs – Clippy began interrupting work sessions with the release of Office 97. Within a decade, Clippy would be cast aside by Microsoft.
Despite its short time in the spotlight, Clippy is seen by many as one of the biggest steps towards introducing the average consumer to artificial intelligence. The user was meant to converse with it as if it were human, which seems ordinary now but was revolutionary at the time. Clippy was early to the game that Amazon’s Alexa and Google Assistant eventually ended up winning.
Very few people saw Clippy’s potential to shape future systems. In the same way, we don’t know which of today’s developments may change AI forever. However, Wang has some theories.
“In supervised learning, you need data with a label,” Wang says. “You have to first have some examples, then you tell the machine, ‘these are the examples.’ The future is unsupervised learning. The machines will just go out and learn by themselves. But that’s a dream; we are working on it.”
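To illustrate the difference Wang describes (a minimal sketch, assuming scikit-learn as the toolkit; the data and labels are invented): supervised learning is handed the labels, while unsupervised learning has to find structure on its own:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Two blobs of points standing in for two categories of examples.
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(5.0, 1.0, (50, 2))])

# Supervised: we tell the machine "these are the examples" by providing labels.
labels = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression().fit(X, labels)

# Unsupervised: no labels at all; the machine groups the data by itself.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

print(clf.predict(X[:3]))   # predictions learned from the labels
print(clusters[:3])         # groupings discovered without labels
```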