
Life and Love in the Uncanny Valley

There's a story I've seen about a philosopher who bet an engineer that he could make a robot that the engineer couldn't destroy. What the philosopher produced was a tiny little thing, covered in fur, that would squeak when touched -- and when threatened, would roll onto its back and look at the attacker with its big, glistening eyes. When the engineer lifted his hammer to smash the robot, he found that he couldn't. He paid the wager.*

Evolution has programmed us, for good reasons, to be responsive to "cute" creatures. Even the coldest heart melts at the sight of kittens playing or puppies sleeping, and while parents respond most quickly to their own children, we all have at least some positive response to the sight of a child. Given all of this, it wouldn't be surprising if our biological imperatives could be hijacked by things that are decidedly not puppies and babies -- but that approximate their look and behavior. Like, for example, a robot.

Sociologist Sherry Turkle has studied the effects of technology on society for years. Recently, she brought a collection of realistic robotic dolls called "My Real Baby" to nursing homes. Much to her surprise -- and dismay -- the seniors responded to these artificial dependents in ways that mirrored how they would interact with real living beings. They weren't fooled by the robots; they knew that these were devices. But the artificial beings' look and behavior elicited strong, generally positive emotions in the elderly recipients. Turkle describes it this way:

In bringing My Real Babies into nursing homes, it was not unusual for seniors to use the doll to re-enact scenes from their children’s youth or important moments in their relationships with spouses. Indeed, seniors were more comfortable playing out family scenes with robotic dolls than with traditional ones. Seniors felt social “permission” to be with the robots, presented as a highly valued and “grownup” activity. Additionally, the robots provided the elders something to talk about, a seed for a sense of community.

Turkle is bothered by the emotions these dolls -- and similar "therapeutic" robots, such as the Japanese Paro seal -- trigger in the adults interacting with them. She argues:

Relationships with computational creatures may be deeply compelling, perhaps educational, but they do not put us in touch with the complexity, contradiction, and limitations of the human life cycle. They do not teach us what we need to know about empathy, ambivalence, and life lived in shades of gray.

Turkle is particularly concerned with the issue of the "human life cycle." She worries about emotional bonds with beings that can't understand death, or themselves die. "What can something that does not have a life cycle know about your death, or about your pain?" she asks. She fears a disconnection from the reality of life when children and adults alike bond with machines that can't die. But this machine immortality may be a benefit, not a problem.

Many, likely most, of the seniors who embraced the robotic children were seriously depressed. Aging is often painful, physically and emotionally, and life in a nursing home -- even a good one -- can seem like the demoralizing final stop on one's journey. Seniors aren't the only ones who are depressed, of course. According to a recent World Health Organization study published in the Public Library of Science ("Projections of Global Mortality and Burden of Disease from 2002 to 2030"), depressive disorders are currently the fourth most common "burden of disease" globally, ranking right behind HIV/AIDS; moreover, the research group projects that depressive disorders will become the second most common burden of disease by 2030, above even heart disease. Depression is debilitating, saps productivity and creativity, and is all too often fatal. Medical and social researchers are only now starting to see the immensity of the problem of depression.

The ability of the therapeutic robots to reduce the effects of depression, therefore, should not be ignored. The seniors themselves describe how interacting with the robots makes them feel less depressed, either because they can talk about problems with a completely trustworthy partner, or because they see the robots as depressed as well, and seek to comfort and care for them. Concerns about whether the robots really feel depressed, or recognize (let alone care about) the human's feelings, appear to be secondary or nonexistent. Far more important is the benefit of helping someone in the depths of depression recover a sense of purpose and self.

If you were to look for a My Real Baby doll today, you'd be hard-pressed to find one. They were a flop as commercial toys, with a common reaction (at least among adults) being that they were "creepy." That kind of response -- "it's creepy" -- is a sign that the doll has fallen into the "Uncanny Valley," the point along the realism curve where an object looks alive enough to trigger biologically-programmed responses, but not quite alive enough to pass for human -- and as a result, can be unsettling or even repulsive. First suggested by Japanese robotics researcher Masahiro Mori in 1970, the Uncanny Valley concept may help to explain why games, toys, and animations with cartoony, exaggerated characters are often more successful than their "realistic" counterparts. Nobody would ever mistake a human character from World of Warcraft for a photograph, for example, while the human figures in EverQuest 2 look close enough to right to appear oddly wrong.

As work on robotics and interactive systems progresses, we'll find ourselves facing Creatures from the Uncanny Valley increasingly often. It's a subjective response, and the empathetic/creepy threshold seems to vary considerably from person to person. It's notable, and clearly worth more study, that the nursing home residents who received the My Real Baby dolls didn't have as strong an "Uncanny Valley" response as the general public seemed to. Regardless, it's important to remember that the Uncanny Valley isn't a bottomless pit; eventually, as realism improves further, the sense of a robot being "wrong" fades, and what's left is a simulacrum that just seems like another person.

The notion of human-looking robots made for love has a long history, but -- perhaps unsurprisingly -- by far the dominant emphasis has been on erotic love. And while it's true that many emerging technologies get their first serious use in the world of sexual entertainment, it's by no means clear that there's a real market for realistic interactive sex dolls. The social norms around sex, and the biological and social need for bonding beyond physical play, may well relegate realistic sex dolls to therapy and to assisting those who, for whatever reason, are never able to find a partner.

But that doesn't mean we won't see love dolls. Instead of sex-bots driving the industry, emotional companions for the aged and depressed may end up being the leading edge of the field of personal robotics. These would not be caregivers in the robot-nurse sense; instead, they'd serve as recipients of care provided by the human partner, as it is increasingly clear that the task of taking care of someone else can be a way out of the depths of depression. In this scenario, the robot's needs would be appropriate to the capabilities of the human, and the robot might in some cases serve as a health monitoring system, able to alert medical or emergency response personnel if needed. In an interesting counterpoint to Turkle's fear of humans building bonds with objects that cannot understand pain and death, these robots may well develop abundant, detailed knowledge of their partner's health conditions.

Turkle is also concerned about the robot's inability to get sick and die, as she believes that it teaches inappropriate lessons to the young and removes any acknowledgment of either the cycle of life or the meaning of loss and death. Regardless of one's views on whether death gives life meaning, it's clear that the sick, the dying, and the deeply depressed are already well-acquainted with loss. The knowledge that this being isn't going to disappear from their lives forever is for them a benefit, not a flaw.

We're accustomed to thinking about computers and robots as forms of augmentation: technologies that allow us to do more than our un-augmented minds and bodies could otherwise accomplish. But in our wonder at enhanced mental feats and physical efforts, we may have missed another important form of augmentation these technologies might provide. Emotional support isn't as exciting or as awe-inspiring as the more dramatic tasks we assign to machines, but it's a role that could very well help people who are at the lowest point of their lives. Sherry Turkle worries that emotional bonds with machines can diminish our sense of love and connection with other people; it may well be, however, that such bonds can help rebuild what has already been lost, making us more human, not less.

-=-=-=-=-


*(If anyone has the source of this story, I'd love a direct reference.)

Comments

It sounds like Turkle is adopting a position similar to the one taken by Joseph Weizenbaum in Computer Power and Human Reason. Weizenbaum was prompted to write the book after being somewhat horrified by people's reactions to his famous Eliza program.

I think AI will be the next step in human evolution. If humans program a biological computer (read: brain), it would therefore be in direct lineage to a biological human brain. It seems that robots are an ideal species, at least in theory. Anyway, as we approach the knee of the singularity curve (I recommend Kurzweil's book The Singularity is Near), pretty much anything is possible. In fact, we probably can't even begin to imagine what we will be ABLE to imagine in the coming iterations. I.e.: all of the above seems perfectly reasonable!

"There's a story I've seen about a philosopher who bet an engineer that he could make a robot that the engineer couldn't destroy ... He paid the wager."

I think you may be referring to "The Soul of the Mark III Beast", by Terrel Miedaner (it was included in The Mind's I). Unfortunately I don't have a copy handy, so I am largely going on memory plus a Google search.

Nice post -- a very insightful and pragmatic response to Turkle's concerns about using the Eliza Effect to facilitate relationships between humans and human artifacts. You are right to say that its proper use can make us more human, not less.

Regarding your anecdote concerning the engineer and the philosopher, I too believe the reference you may be seeking is Terrel Miedaner's _The Soul of Anna Klane_, which is the original source of the excerpt reprinted in _The Mind's I_ (i.e. "The Soul of the Mark III Beast"). For the time being, the excerpt can be read online here: http://gssq.blogspot.com/2005/03/soul-of-mark-iii-beast-excerpt-from.html

Thanks, Tom!

Simon, Tom

Thanks for your contributions and comments. Looks like my "Mark III Beast" chapter from "The Soul of Anna Klane" has been morphed a bit. The original critter had no fur--- (Fuzzy wuzzy wasn't fuzzy, was he?) Fake fur would not have been consistent with the stark simplicity of its creator's style.

"Anna Klane" was written as an introduction to ideas which may have been offered prematurely, 30 years ago. These are developed on the website: beon-cpt.com

They are for those curious about the origins of existence and the nature of consciousness--- curious enough to press the limits of their current beliefs.

Terrel Miedaner

