
Please Don't Kick the Robots

If you follow the futures blogosphere at all -- or just read BoingBoing -- you've undoubtedly seen the video of the "packbot" called Big Dog.

It's an interesting prototype, and a telling example of how rapidly we're moving into the robotic age. The use of four legs for mobility gives it a particularly sci-fi appearance -- as if, at any moment, a tiny flying drone could show up and wrap a cable around its legs. Its walking pattern is distinctly mechanical, except under one condition: when it's in trouble, it scrambles its legs to stay upright in an eerily animal-like way. I found Big Dog's efforts to recover from slipping on the ice fascinating. But I had a somewhat different reaction to its efforts to recover from being kicked: I felt a bit sick.

My reaction to seeing this robot kicked paralleled the one I would have had if I'd seen a video of a pack mule or a real big dog being kicked like that, and (from anecdotal conversations) I know I'm not the only one with that kind of immediate response. True, the shock wasn't nearly as strong as it would have been with a real animal, but it was definitely of the same character. It simply felt wrong.

I had a similar reaction when I learned that the "Pleo" robot dinosaur toy reacts to being picked up by the tail by crying out in apparent distress.

Pleo is also capable of getting upset—when you hold him upside down by his tail, Pleo lets out a panicky wail until you put him down on his feet.

This is where the emotional pull of Pleo—not in him, but in you—is apparent, because once placed safely on a flat surface, Pleo knows how to lay a guilt trip. Like a dog that has just been beaten, Pleo's tail trembles and goes down between his legs, all while he hangs his head and makes noises like a baby dinosaur sobbing. Oh, Herbert, I never meant to hold you upside down all those times. Please forgive me.

Like the author of the above review, my immediate, gut response mirrors what I would feel for a living animal. Intellectually, I know that it's a simple machine without any actual sense of pain or fear; emotionally, it's horrifying.

This response is, at least to an extent, hard-wired -- most of us react to the sight of an animal in distress with empathy for the creature and, if applicable, disgust for the person abusing it. Psychologists have long recognized that humans who lack this empathy for non-human animals are more likely to be abusive to other people. The behaviors of these robots -- the scrambling legs, the desperate cries -- mirror real animal behavior closely enough, at least for some of us, to elicit the same kind of empathy.

Some of this "mirror empathy" comes from the robots being biomorphic, that is, having animal-like appearances. Even if a Roomba let out panicky squeaks and flashed its lights when turned upside-down, for example, few of us would react as we would to seeing a turtle on its back. There's no biomorphism to the Roomba. And that's probably a good thing. After all, it's trying to carry out a particular task efficiently, and it probably wouldn't work as well if people constantly picked it up because it was so cute.

It strikes me that there's a likely split coming in the near-term evolution of robots that operate in human environments. Some robots, those meant to interact on a regular basis with humans, will likely take on stronger biomorphic appearances and behaviors, usually in order to deter abusive behavior. A small number of robots, intended to provide emotional support to the injured or depressed, may have human-like appearances. Other robots, meant to work more-or-less out of sight, will probably take on more camouflaged appearances, trying to avoid being noticed.

Note the "usually" above. I would expect that some human-interactive robots will be designed with biomorphic cues meant to elicit a response other than empathy. Fear, for example: a robot that triggers deeply-rooted responses to (say) spiders or snakes may be a better tool for the police or military than one that makes people think of puppies or ponies. Such a design wouldn't necessarily undermine its interactions with its own military or police units; we know that soldiers already form strong emotional attachments to completely non-biomorphic, remote-control robots.

I don't think it's likely that we'll stop having these kinds of emotional reactions to biomorphic (in appearance and/or behavior) robots. I think it's rather healthy that we do, actually. For one, it's an indicator that our sense of empathy remains strong and sensitive, and that seems quite a good thing. Another reason, however, is a bit more speculative. At some point, whether in the next decade or next century, we're likely to develop robots that really won't like being kicked. I'd rather not have them start to want to kick back.

Comments

I was hoping you wouldn't post an actual screenshot of the dreaded kicking, but you did at the end! :x

Liberate innocent robots from the evil clutches of kicking abuse!

Oh sure, lament its kicking now, but when you're old and it comes to STEAL YOUR MEDICINE, we'll see how you feel!

Thanks for the reminder, Howard. I'm taking out insurance right now.

"...few of us would react as we would to seeing a turtle on its back..."

Is this to be an empathy test?


But seriously, the closing remarks in this blog entry (about how the robot feels and reacts) remind me of an interesting essay I read a while ago. It was about the mythologies within SF, including "the Frankenstein myth", and was written by a British SF author who is also a Christian. (Yeah, kind of an odd mix.) Nevertheless, the author makes a good point that humankind's new creations should be friends and neighbors, not monsters to be shunned.

"And there are several ways we can read the Frankenstein Myth. There's a fairly straightforward one, which is simply that a parent has a responsibility to their child: that any life we bring into the world, whether by natural or unnatural means, becomes our responsibility and we must nurture it." And he says, build an ongoing relationship with it.

Philip Purser-Hallard: "Science Fiction as the Bible"
http://www.infinitarian.com/gbsfatb.html#3.1

I'll be more willing to accept robots in my life if they accept me in theirs.

There was a scene in one of those silly 1980s "Short Circuit" movies that I couldn't even watch as a kid. Basically, the robotic protagonist was at one point beaten by a group of thugs, and that scene literally made me cry. It made the inside of my skin hurt to watch. :/

All it's going to take is an intelligent system realizing that by being kicked, it has to use more resources to get its goals accomplished... and that is bad.

System Choice 1: Recover & return fire and use up even more resources killing the kicker so it never happens again.

System Choice 2: Have a discussion with you about personal space, using resources in hopes that you are a rational being.

Etc.

How can we work to avoid what kicks off System Choice 1 in ALL systems (Bio included)? Restrict resources to the point where such activities make no sense? Or educate?
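For what it's worth, the trade-off this comment describes is just cost-minimizing action selection. Here's a minimal sketch in Python; every response name and number is hypothetical, invented purely to illustrate the idea, not any real robot's control code:

    # A toy sketch of the cost-based choice described above. All names and
    # numbers are hypothetical. The system picks whichever response to being
    # kicked it expects to cost the fewest total resources.

    RESPONSES = {
        # response: (cost to act now, expected future cost if kicking continues)
        "return_fire": (9.0, 0.0),  # expensive now; assumes the kicking stops for good
        "negotiate":   (2.0, 1.0),  # cheap now; works only if the kicker is rational
        "ignore":      (0.5, 6.0),  # cheapest now; kicks keep draining resources
    }

    def choose_response(costs):
        """Return the response with the lowest total expected resource cost."""
        return min(costs, key=lambda response: sum(costs[response]))

    print(choose_response(RESPONSES))  # -> "negotiate", given these made-up numbers

Under these made-up numbers the system talks it out; shift the costs and System Choice 1 wins, which is the commenter's point about restricting resources.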
