Nearly every week I read an article discussing the rise of robots and artificial intelligence. It’s estimated that automation will eliminate millions of jobs, perhaps even to the extent that many fields will no longer exist. Certainly, robots can effectively perform certain tasks that are repetitive, such as placing a camera into a new iPhone, or dangerous, such as delivering explosives to a suspect in a police standoff. But there are some tasks that robots will probably never be able to perform.
Self-driving cars are on the rise, and since I dislike driving, I’m all for them. We’re a long way from their widespread use, however. In March of 2018, a pedestrian in Arizona was struck and killed by a self-driving Uber car. In an analysis of the tragic incident, John M. Simpson, privacy and technology project director with Consumer Watchdog, said, “the robot cars cannot accurately predict human behavior.” Of course, even humans have difficulty accurately predicting human behavior; we often find ourselves utterly perplexed by something another person says or does. But the inability of robot cars to think the way humans do is a critical weakness, and one not likely to be overcome anytime soon.
Driving is not purely a mechanical activity; it is an extremely social one. When we drive, we interact with other drivers and with pedestrians. These interactions require social thinking: anticipating the needs or desires of other drivers and pedestrians and predicting what they might do. This is where the robot cars fall short.
Our ability to interpret and, by extension, predict the behavior of others is linked to the concept of Theory of Mind, which holds that, as humans, we can imagine the thoughts of another person even while holding a different thought in our own mind. In the case of the driver, we see a pedestrian walking in one direction but looking in another. The human eye might follow the pedestrian’s gaze and think, “Oh, she’s looking at the coffee shop sign. Although she’s heading in one direction, she may very quickly turn in another so she can go toward the coffee shop.” The robot does not experience Theory of Mind and is therefore limited in its ability to interpret human behavior and speculate about what the pedestrian might do. Some researchers are hard at work on how to address this limitation in the robots’ programming.
Social thinking is critical to understanding not only the thoughts of others, but also their feelings. In many fields, social thinking is essential. I work for an organization that provides behavioral health therapy services to children and adolescents. Recently, I talked with the company founder, a psychologist, about the importance of empathy in human interaction. It’s clear that, to be effective, therapists must show empathy for their clients. The concepts of empathy and Theory of Mind are closely connected, although they may involve different parts of the brain.
In a recent article in the print edition of Smithsonian, Thomas Dietterich, professor emeritus of computer science at Oregon State University, said, “If a computer tells you, ‘I know how you feel,’ it’s lying. It cannot have the same experiences that humans have and it is those experiences that ground our understanding of what it is like to be human.” A robot therapist will never be able to empathize with a human over the death of a parent because a robot has not experienced being born to human parents. A robot therapist cannot empathize with a teenage client because the robot has never had the experience of being a child. A robot therapist cannot relate to the pain of experiencing physical, emotional, or sexual abuse. Not having had those human experiences, the robot lacks the autobiographical memory that some researchers believe is critical to simulating the emotions of others.
Of course, the field of AI is in its infancy, and we may yet be able to program robots to anticipate some aspects of our behavior. The development of affective computing means that some AIs may be programmed to respond to human emotion or even to have their own feelings. Research has shown that many children, especially younger ones, are open to the idea of interacting with robots. The robots will likely be limited in their capacity, however; they may be able to coach us on the techniques of hitting a tennis ball, but they probably won’t be able to give us a meaningful pep talk on how to recover mentally after a lost match.
There’s no doubt that robots are here to stay, and they may perform very useful functions. (I’d love a robot that folds laundry!) A critical aspect of Theory of Mind is the ability to hold multiple perspectives at the same time and to switch between them. Can robots do that? Will they ever be able to? I’m not an expert on artificial intelligence, so I don’t know. But until robots are able to develop social thinking skills, they will not be able to serve effectively in fields where these skills are critical for success. And even if, in the future, there are competent robot therapists, these robots will never fully develop human social skills, simply because they aren’t human. As Aristotle’s Law of Identity states, “A is A.” A tree cannot be a cat. A robot cannot be a human being. Even if a robot is granted human rights, it remains a robot and not a human being.
How do you think automation will change the world? Share your thoughts in the comments section below.
© Katharine Spehar, 2017-2018.
Photo credit: www.pixabay.com