Ted Chiang is an award-winning science fiction writer and the author of Exhalation. His short story, “Story of Your Life,” was the basis for the Academy Award-nominated film Arrival. Science Node talked with him about the human fascination with AI—in fact and fiction.
Why do you think we are fascinated by AI?
People have been interested in artificial beings for a very long time. Ever since we’ve had lifelike statues, we’ve imagined how they might behave if they were actually alive. More recently, our ideas of how robots might act are shaped by our perception of how good computers are at certain tasks. The earliest calculating machines did things like computing logarithm tables more accurately than people could. The fact that machines became capable of doing a task which we previously associated with very smart people made us think that the machines were, in some sense, like very smart people.
How does our—let’s call it shared human mythology—of AI interact with the real forms of artificial intelligence we encounter in the world today?
The fact that we use the term “artificial intelligence” creates associations in the public imagination which might not exist if the software industry used some other term. AI has, in science fiction, referred to a certain trope of androids and robots, so when the software industry uses the same term, it encourages us to personify software even more than we normally would.
Is there a big difference between our fictional imaginary consumption of AI and what’s actually going on in current technology?
I think there’s a huge difference. In our fictional imagination “artificial intelligence” refers to something that is, in many ways, like a person. It’s a very rigid person, but we still think of it as a person. But nothing that we have in the software industry right now is remotely like a person—not even close. It’s very easy for us to attribute human-like characteristics to software, but that’s more of a reflection of our cognitive biases. It doesn’t say anything about the properties that the software itself possesses.
What’s happening now or in the near future with intelligent systems that really captures your interest?
What I find most interesting is not typically described as AI; it usually goes by the phrase “artificial life.” Some researchers are creating digital organisms with bodies and sense organs that allow them to move around and navigate their environment. Usually there’s some mechanism by which they can give rise to slightly different versions of themselves, and thus evolve over time. This avenue of research is really interesting because it could eventually result in software entities that have a lot of the properties we associate with living organisms. It’s still going to be a long way from anything that we consider intelligent, but it’s a very promising direction.
Over time, these entities might come to have the intelligence of an insect. Even that would be pretty impressive, because even an insect is good at a lot of things which Watson (IBM’s question-answering AI system) can’t do at all. An insect can navigate its environment and look for food and avoid danger. A lot of the things that we call common sense are outgrowths of the fact that we have bodies and live in the physical world. If a digital organism could have some of that, that would be a way of laying the groundwork for an artificial intelligence to eventually have common sense.
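The mechanism Chiang describes—embodied digital organisms that sense, move, and pass on mutated copies of themselves—can be illustrated with a toy evolutionary loop. The sketch below is purely illustrative: every name, number, and rule in it is invented for this example, and real artificial-life platforms such as Avida or Tierra are far richer. Here, organisms live on a one-dimensional strip of cells, sense the nearest food, move toward it at a genetically determined speed, and reproduce with small mutations when well fed.

```python
import random

# Toy "artificial life" loop, a minimal sketch of the idea described above.
# All names and parameters are hypothetical, invented for illustration.

WORLD = 20          # the world is a 1-D strip of cells 0..WORLD-1
FOOD = {3, 8, 15}   # cells that contain food

class Organism:
    def __init__(self, speed):
        self.speed = speed                 # "genome": max cells moved per step
        self.pos = random.randrange(WORLD)
        self.energy = 10

    def sense_and_move(self):
        # Rudimentary "sense organ": locate the nearest food cell.
        target = min(FOOD, key=lambda f: abs(f - self.pos))
        # Step toward it, clamped by the genetically determined speed.
        self.pos += max(-self.speed, min(self.speed, target - self.pos))
        self.energy -= 1                   # moving costs energy
        if self.pos in FOOD:
            self.energy += 5               # eating restores energy

    def reproduce(self):
        # Offspring inherit a slightly mutated genome.
        child_speed = max(1, self.speed + random.choice([-1, 0, 1]))
        return Organism(child_speed)

population = [Organism(speed=1) for _ in range(10)]
for generation in range(50):
    for org in population:
        org.sense_and_move()
    # Well-fed organisms reproduce; starved ones die off.
    offspring = [o.reproduce() for o in population if o.energy > 12]
    population = [o for o in population if o.energy > 0] + offspring
    population = population[:50]           # cap the population size

print("mean speed after evolution:",
      sum(o.speed for o in population) / len(population))
```

Nothing in this loop tells an organism to be a good forager; genomes that reach food faster waste less energy and simply leave more descendants. That is the sense in which such systems could lay the groundwork for an embodied kind of common sense.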
How do we teach an artificial intelligence the things we consider common sense?
Alan Turing once wrote that he didn’t know what would be the best way to create a thinking machine; it might involve teaching it abstract activities like chess, or it might involve giving it eyes and a body and teaching it the way you’d teach a child. He thought both would be good avenues to explore.
Historically, we’ve only tried that first route, and that has led to this idea that common sense is hard to teach, or that artificial intelligences lack common sense. I think if we had gone with the second route, we’d have a different view of things.
If you want an AI to be really good at playing chess, we’ve got that problem licked. But if you want something that can navigate your living room without constantly bumping into a coffee table, that’s a completely different challenge. If you want to solve that one, you’re going to need a different approach than the one we’ve used for grandmaster-level chess.
My cat’s really good in the living room but not so good at chess.
Exactly. Because your cat grew up with eyes and a physical body.
Since you’re someone who (presumably) spends a lot of time thinking about the social and philosophical aspects of AI, what do you think the creators of artificial beings should be concerned about?
I think it’s important for all of us to think about the greater context in which the work we do takes place. When people say, “I was just doing my job,” we tend not to consider that a good excuse when doing that job leads to bad moral outcomes.
When you as a technologist are being asked how to solve a problem, it’s worth thinking about, “Why am I being asked to solve this problem? In whose interest is it to solve this problem?” That’s something we all need to be thinking about no matter what sort of work we do.
Otherwise, if everyone simply keeps their head down and just focuses narrowly on the task at hand, then nothing changes.