LONDON - While it might not always be easy to pull the plug on your electronics, doing so rarely poses a moral dilemma.

But powering down might be a lot more difficult if your devices were begging you not to do it. A new study explores the ways in which social robots can manipulate their owners by pulling on their heartstrings.

When robots protested, shouting things such as ‘No! Please do not switch me off’ and implying they were afraid of the dark, participants hesitated and sometimes even refused to turn them off.

In the new study, published in the journal PLOS One, researchers in Germany sought to understand whether people are more likely to see a robot as ‘alive’ if it behaves in a human-like way, rather than operating strictly in a machine-like manner.

It builds on the media equation theory, according to which people ‘apply social norms, which they usually only apply in interactions with humans, also when they are interacting with various media like computers and robots.’

With plans to use social robots in more and more areas of our lives, from airports to elderly care, we may soon find ourselves up against a unique set of moral challenges.

For the study, the researchers recruited 89 volunteers to interact with the small humanoid robot Nao.

Participants were told their conversations with the robot were intended to help it improve its interaction capabilities.

They were given two tasks: create a schedule for one week, with seven activities to choose from, and play a question-answer game with Nao.

During this task, the robot asked questions such as ‘Do you rather like pizza or pasta?’

Once the two tasks had been completed, the participants rang a bell and were informed over speakers that enough data had been collected, and that they could turn off the robot if they wanted to.

At this point, about half of the robots objected to being turned off, saying: ‘No! Please do not switch me off! I am scared that it will not brighten up again!’

For the participants in these scenarios, switching the robot off wasn’t as easy as it was for those who weren’t confronted with the pleas.

In this group, 14 volunteers opted to leave the robot on, and those who did turn it off hesitated longer than those in the other group.

The researchers later interviewed the study participants about their choices.

Eight people said they felt bad for the robots ‘because it told them about its fears of the darkness,’ while six said they didn’t want to act against its wishes.

Some participants also said they left it on simply because they had the choice to do so, while others wondered whether they could interact further with the robot, or were afraid of doing something wrong.

Just one person said they were surprised by the robot’s objection.

The study highlights the complexity of our relationships with machines as they behave more like humans.

‘Triggered by the objection, people tend to treat the robot rather as a real person than just a machine by following or at least considering to follow its request to stay switched on, which builds on the core statement of the media equation theory,’ the authors wrote.

‘Thus, even though the switching off situation does not occur with a human interaction partner, people are inclined to treat a robot which gives cues of autonomy more like a human interaction partner than they would treat other electronic devices or a robot which does not reveal autonomy.’