
The real costs of making friends with robots

Dr Kate Darling misses her old pet Yoshi. But Yoshi isn't lost, or dead. He's actually just across the room on a shelf.

Yoshi is – or was until he broke – a Pleo, a robotic baby dinosaur that wakes up with puppy-dog eyes, roars delightedly when stroked and throws a Pleo paroxysm when held upside down.

"I would show it off to my friends and they'd be like, 'Oh, hold it up by the tail, see what it does'," Darling, a research specialist at MIT Media Lab, says.

"That started to bother me after a while and I'd tell them to put it back down. And then I would pet it to make it stop crying. That was so odd to me because I didn't see myself as that type of maternal person and also I knew exactly how the robot was working," Darling says.

Darling's fascination with her own emotional reaction to Yoshi triggered a research career that now sees her in global demand as a speaker on human-robot interaction.

It also raises questions about the costs of seeing robots as agents with feelings, explored in films such as Her and Ex Machina, and now HBO's TV extravaganza Westworld.


Should we befriend and bond with our robots, or keep them strictly as tools serving our predetermined ends? These questions aren't merely notional; the bots are here.

In Victoria, after Bialik College in Hawthorn East purchased a humanoid NAO robot for $15,000 last year, its students were quick to assign it a gender and a new name – Rosie.

The Association of Independent Schools of South Australia is nearing the end of a three-year trial that has deployed two NAO robots, Pink and Thomas, across early childhood centres and schools in roles ranging from helping out with programming to serving as a conversation starter in ethics discussions.

"The four-year-olds didn't see the robot as a computer. They didn't see it as an object, they saw it as another classmate," Dr Therese Keane, a senior lecturer in education at Swinburne Institute of Technology and a co-investigator on the trial, says.

"Each child had their little framed picture on the wall and they created one more spot and put the robot's picture there.

"They would sit around in circles and the robot would come and join their discussion and the students would look at the robot to see whether it was following too," Keane says.

It wasn't just these prep students; Keane saw dramatic personification of the robot right up into secondary school.

But Professor Rob Sparrow, from the Department of Philosophy at Monash University, counsels caution in how we introduce children to robotic technology.

"I do think they have dangers. In general, it's bad for us to have false beliefs. Treating machines as though they have feelings is encouraging us to live in ignorance," Sparrow says.

"Say the child believes the robot is their friend and as a result doesn't seek out other friends. Their false belief is part of what's wrong here." 

But do the children really believe the robot is its own person, or has it become part of the complex make-believe that populates every childhood?

Dr Rachel Severson, assistant professor in the Department of Psychology at the University of Montana, asks five- to 10-year-olds directly whether they have to pretend that a robot has emotions.

"The punch line is that kids do believe it, they aren't pretending," Severson says.

"They know it has sensors, wires and chips, and not skin, bones and blood. But they understand it to function in a similar way to a living thing, yet feel through a different structure."

For Sparrow this technological sleight of hand comes with dystopian portent.

"When you put that robot into your classroom it's as though you've invited a huge team of engineers to watch and teach your children. They are gathering data on your children. And they are educating your children in accordance with a philosophy you may never have realised was there," Sparrow says.

Mattel's Hello Barbie has attracted controversy along these lines since its release in 2015.

It's a Wi-Fi-enabled version of the classic toy that records the child's banter for later access by parents and the manufacturer, prompting one child protection advocate to relabel it "Surveillance Barbie".

"The agendas of the companies using these technologies should be scrutinised," Darling says.

"There are a lot of privacy and data security issues. Parents need to be cautious and exercise good judgment in what they let their child interact with," she says.

In the forthcoming book Robot Ethics 2.0, Darling writes that a child's emotional bond with their robot might also be leveraged to loosen parents' purse strings for software upgrades or in-app purchases.

Yet the world seems on the cusp of a new era of humanoid robots, and they are doing some good things.

NAO is teaching German to refugees, fostering emotional intelligence in children with autism and, in a program shortlisted for Australia's National Disability Awards, demonstrating rehab exercises to children with cerebral palsy and acquired brain injury at Melbourne's Royal Children's Hospital.

Meanwhile, in Japan, SoftBank's humanoid companion robot Pepper, touted as "the first robot designed to live with humans", has sold out its initial batches of 1000 units in just minutes.

"Based on your voice, the expression on your face, your body movements and the words you use, Pepper will interpret your emotion and offer appropriate content," the Softbank website boasts.

But it is the devilish detail of that "content" that Sparrow fears could lead to manipulation.

"We know that consumption behaviour is strongly associated with people's mood. Say the robot is designed to ask you if you'd like some chocolate when you feel sad. The engineers have shaped the product to elicit behaviours from you and you aren't necessarily cognisant of that process," Sparrow says.

But if our over-intense robot feelings are luring us into trouble, Darling's research suggests we may be able to dial them down by reframing the relationship as an impersonal one.

She asked participants to clobber a Hexbug Nano – a centipede-like minibot that resembles a walking nailbrush – with a mallet.

Participants whose bots were introduced with a personal back story (for example, "This is Frank. He likes to play and run around") hesitated significantly longer before sending the bug to oblivion than did people whose bot was described as "it" and an "object".

But other research suggests our tendency to anthropomorphise will be difficult to expunge.

Dr Julie Carpenter, research fellow in the Ethics and Emerging Sciences Group at California Polytechnic State University, details the kinship between bomb disposal personnel and their robots in her 2016 book Culture and Human-Robot Interaction in Militarized Spaces.

Carpenter describes an eerie disconnect in which highly professional operators, with a sophisticated grasp of their robots' machine substructure, still experienced "an affectionate sense of loss" when the machines were destroyed.

"I spoke to people at the companies that made these robots. They would get notes attached to the remains of a robot saying, 'We really want you to fix [robot's name] and send him back to us. We don't want a new robot,' " Carpenter says.

Given the intensity of our robot empathy, it's perhaps not surprising that the question of protection for robots should come up.

In February 2015, Boston Dynamics released a video of its all-terrain robotic dog "Spot" being kicked by a handler to demonstrate its self-righting capability.

Latching onto social media posts calling the action "wrong", some news organisations weighed in with articles questioning whether the practice could be cruel or unethical.

Is there any coherence at all to the idea of conferring legal protection on robots?

"Not for the sake of the robots," says Ryan Calo, assistant professor in the School of Law at the University of Washington and co-editor of the 2016 book Robot Law.

"Robots are not social entities in the deep sense. They are not going to have feelings any time soon," Calo says.

But Calo suggests our hardwired emotional reactions to robots should serve as a warning signal that robot abuse might portend human abuse, noting that people who harm animals are disproportionately likely to harm children.

"If you destroy a non-anthropomorphic machine the proper charge is destruction of property. I can imagine a situation where if you destroy something that feels like a person, the penalty is enhanced because of what it says about you. That might appear to protect the robot but ultimately it is for the sake of society at large," Calo says.

Back in Japan, Pepper's creator Kaname Hayashi is working on a new companion bot.

According to its website, Groove X will be the "only robot in this universe that heals your heart ... possible to be loved as your family, partner, or your loved one, instead of praised just because of its convenience or functionality".

Has Hayashi exited the hyperbolesphere, or could robots really challenge the primacy of human-human relationships?

"We're nowhere near having robots even close to approximating what a human relationship looks like. Even if some day in the future we did, we wouldn't lose the value we place on authenticity, in the same way that we like real diamonds over fake diamonds," Darling says.

Hayashi has reportedly said Groove X will be "cuter than BB-8" (the Star Wars droid) so prospective buyers may want to forearm themselves with a cautionary note from Sparrow: "We need media literacy when it comes to robots. In the same way that we learn advertisements on television aren't necessarily to be trusted, we also need to learn that when the robot says 'Have a nice day' it isn't out of genuine concern for our welfare."
