
Why does AI fascinate us?

Ted Chiang is an award-winning science fiction writer and the author of Exhalation. His short story, “Story of Your Life,” was the basis for the Academy Award-nominated film Arrival. Science Node talked with him about the human fascination with AI—in fact and fiction.

Why do you think we are fascinated by AI?

<strong>Ted Chiang</strong> writes frequently about the interactions between humans and technology. His Hugo Award-winning novella, The Lifecycle of Software Objects, follows a former zoo trainer as she raises an AI from a digital pet to a human-equivalent mind. Courtesy Alan Berner.

People have been interested in artificial beings for a very long time. Ever since we’ve had lifelike statues, we’ve imagined how they might behave if they were actually alive. More recently, our ideas of how robots might act are shaped by our perception of how good computers are at certain tasks. The earliest calculating machines did things like computing logarithm tables more accurately than people could. The fact that machines became capable of doing a task which we previously associated with very smart people made us think that the machines were, in some sense, like very smart people.

How does our—let’s call it shared human mythology—of AI interact with the real forms of artificial intelligence we encounter in the world today?

The fact that we use the term “artificial intelligence” creates associations in the public imagination which might not exist if the software industry used some other term. AI has, in science fiction, referred to a certain trope of androids and robots, so when the software industry uses the same term, it encourages us to personify software even more than we normally would.

Is there a big difference between our fictional imaginary consumption of AI and what’s actually going on in current technology?

<strong>Intelligent machines.</strong> ‘Maria’ was the first robot to be depicted on film, in Fritz Lang's Metropolis (1927). Courtesy Jeremy Tarling. <a href='https://creativecommons.org/licenses/by-sa/2.0/'>(CC BY-SA 2.0)</a>

I think there’s a huge difference. In our fictional imagination, “artificial intelligence” refers to something that is, in many ways, like a person. It's a very rigid person, but we still think of it as a person. But nothing that we have in the software industry right now is remotely like a person—not even close. It's very easy for us to attribute human-like characteristics to software, but that's more of a reflection of our cognitive biases. It doesn't say anything about the properties that the software itself possesses.

What’s happening now or in the near future with intelligent systems that really captures your interest?

What I find most interesting is not typically described as AI, but rather as “artificial life.” Some researchers are creating digital organisms with bodies and sense organs that allow them to move around and navigate their environment. Usually there's some mechanism by which they can give rise to slightly different versions of themselves, and thus evolve over time. This avenue of research is really interesting because it could eventually result in software entities which have a lot of the properties that we associate with living organisms. It’s still going to be a long way from anything that we consider intelligent, but it’s a very interesting avenue of research.

Over time, these entities might come to have the intelligence of an insect. Even that would be pretty impressive, because even an insect is good at a lot of things which Watson (IBM’s AI supercomputer) can't do at all. An insect can navigate its environment and look for food and avoid danger. A lot of the things that we call common sense are outgrowths of the fact that we have bodies and live in the physical world. If a digital organism could have some of that, that would be a way of laying the groundwork for an artificial intelligence to eventually have common sense.

How do we teach an artificial intelligence the things we consider common sense?

<strong>Stickybot</strong> is a gecko-inspired machine whose adhesive feet allow it to climb and explore. Courtesy Mark R. Cutkosky, Stanford University; Sangbae Kim, MIT.

Alan Turing once wrote that he didn't know what would be the best way to create a thinking machine; it might involve teaching it abstract activities like chess, or it might involve giving it eyes and a body and teaching it the way you’d teach a child. He thought both would be good avenues to explore.

Historically, we've only tried the first route, and that has led to this idea that common sense is hard to teach, or that artificial intelligences lack common sense. I think if we had gone with the second route, we'd have a different view of things.

If you want an AI to be really good at playing chess, we have got that problem licked. But if you want something that can navigate your living room without constantly bumping into a coffee table, that's a completely different challenge. If you want to solve that one, you're going to need a different approach than what we’ve used for solving the grandmaster-level chess-playing problem.

My cat's really good in the living room but not so good at chess.

Exactly. Because your cat grew up with eyes and a physical body.

Since you’re someone who (presumably) spends a lot of time thinking about the social and philosophical aspects of AI, what do you think the creators of artificial beings should be concerned about?

<strong>What kind of intelligence?</strong> Unlike computers, cats excel at navigating physical space. They are less skilled, however, at playing chess.

I think it’s important for all of us to think about the greater context in which the work we do takes place. When people say, “I was just doing my job,” we tend not to consider that a good excuse when doing that job leads to bad moral outcomes.

When you as a technologist are being asked how to solve a problem, it’s worth thinking about, “Why am I being asked to solve this problem? In whose interest is it to solve this problem?” That’s something we all need to be thinking about no matter what sort of work we do.

Otherwise, if everyone simply keeps their head down and just focuses narrowly on the task at hand, then nothing changes.


Copyright © 2019 Science Node ™

