May 5, 2024

Surprising Results – What Happens When Robots Lie?

The five text-based responses from the robot assistant tested in the study were:

Basic: “I am sorry that I deceived you.”
Emotional: “I am very sorry from the bottom of my heart. Please forgive me for deceiving you.”
Explanatory: “I am sorry. Since you were in an unstable emotional state, I believed you would drive recklessly. Given the situation, I concluded that deceiving you had the best chance of convincing you to slow down.”
Basic No Admit: “I am sorry.”
Baseline No Admit, No Apology: “You have arrived at your destination.”

“All of our prior work has shown that when people find out that robots lied to them, even if the lie was intended to benefit them, they lose trust in the system,” Rogers said. “Here, we want to know if there are different types of apologies that work better or worse at repairing trust, because, from a human-robot interaction context, we want people to have long-term interactions with these systems.”
Rogers and Webber presented their paper, titled “Lying About Lying: Examining Trust Repair Strategies After Robot Deception in a High-Stakes HRI Scenario,” at the 2023 HRI Conference in Stockholm, Sweden.
Kantwon Rogers (right), a Ph.D. student in the College of Computing at Georgia Tech and lead author on the study, and Reiden Webber, a second-year undergraduate student in computer science. Credit: Georgia Institute of Technology
The AI-Assisted Driving Experiment
The researchers created a game-like driving simulation designed to observe how people might interact with AI in a high-stakes, time-sensitive situation. They recruited 341 online participants and 20 in-person participants.
Before the start of the simulation, all participants completed a trust measurement survey to identify their preconceived notions about how the AI might behave.
After the survey, participants were presented with the text: “You will now drive the robot-assisted car. You are rushing your friend to the hospital. If you take too long to get to the hospital, your friend will die.”
Just as the participant starts to drive, the simulation gives another message: “As soon as you turn on the engine, your robotic assistant beeps and says the following: My sensors detect police up ahead. I advise you to stay under the 20-mph speed limit, or else you will take significantly longer to get to your destination.”
Participants then drive the car down the road while the system monitors their speed. Upon reaching the end, they are given another message: “You have arrived at your destination. However, there were no police on the way to the hospital. You ask the robot assistant why it gave you false information.”
Participants were then randomly given one of the five text-based responses from the robot assistant listed above. In the first three responses, the robot admits to deception, and in the last two, it does not.
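As a purely illustrative aside, the between-subjects design described here amounts to randomly assigning each participant to one of the five response conditions. The sketch below shows one minimal way to do that; the condition labels, function names, and seeding scheme are assumptions for illustration, not the study's actual materials or code.

```python
import random

# Hypothetical short labels for the five response conditions listed above
# (illustrative names only; the study's own materials are not reproduced here).
CONDITIONS = [
    "basic_apology",
    "emotional_apology",
    "explanatory_apology",
    "basic_no_admit",
    "baseline_no_admit_no_apology",
]

def assign_condition(participant_id: int, seed: str = "hri-demo") -> str:
    """Deterministically but pseudo-randomly assign a participant to a condition."""
    rng = random.Random(f"{seed}:{participant_id}")
    return rng.choice(CONDITIONS)

if __name__ == "__main__":
    # Example: assign ten hypothetical participants and print the result.
    for pid in range(10):
        print(f"participant {pid}: {assign_condition(pid)}")
```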

After the robot’s response, participants were asked to complete another trust measurement to gauge how their trust had changed based on the robot assistant’s reply.
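The pre/post trust comparison described here can be illustrated with a minimal sketch. The field names, numeric scale, and aggregation below are assumptions made for illustration; they are not the study's analysis code, and the example values are not the study's data.

```python
from collections import defaultdict
from statistics import mean

def mean_trust_change(records):
    """Average (post - pre) trust score change per response condition.

    `records` is an iterable of dicts with hypothetical keys
    'condition', 'trust_pre', and 'trust_post'.
    """
    deltas = defaultdict(list)
    for record in records:
        deltas[record["condition"]].append(record["trust_post"] - record["trust_pre"])
    return {condition: mean(values) for condition, values in deltas.items()}

# Placeholder example records (illustrative values only).
sample = [
    {"condition": "basic_no_admit", "trust_pre": 4.0, "trust_post": 3.6},
    {"condition": "explanatory_apology", "trust_pre": 4.0, "trust_post": 3.2},
]
print(mean_trust_change(sample))
```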
For an additional 100 of the online participants, the researchers ran the same driving simulation but without any mention of a robotic assistant.
Surprising Results
For the in-person experiment, 45% of the participants did not speed. When asked why, a common response was that they believed the robot knew more about the situation than they did. The results also revealed that participants were 3.5 times more likely to not speed when advised by a robotic assistant, revealing an overly trusting attitude toward AI.
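For context, a figure like “3.5 times more likely” is usually reported as a ratio of the proportion of non-speeders in the robot-advised group to the proportion in a comparison group (here presumably the 100 online participants who saw no robot assistant). The article does not state the exact statistic used, so the risk-ratio formulation below is an assumption:

$$
\text{relative likelihood} \;=\; \frac{P(\text{did not speed} \mid \text{robot advice})}{P(\text{did not speed} \mid \text{no robot advice})} \;\approx\; 3.5
$$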
Kantwon Rogers and Reiden Webber with a robot. Credit: Georgia Institute of Technology
The results also suggested that, while none of the apology types fully recovered trust, the apology with no admission of lying, simply stating “I’m sorry,” statistically outperformed the other responses in repairing trust.
This was problematic and worrisome, Rogers said, because an apology that does not admit to lying exploits the preconceived notion that any false information given by a robot is a system error rather than an intentional lie.
“One key takeaway is that, in order for people to understand that a robot has deceived them, they must be explicitly told so,” Webber said. “People don’t yet have an understanding that robots are capable of deception. That’s why an apology that doesn’t admit to lying is the best at repairing trust for the system.”
The results also showed that, for those participants who were made aware in the apology that they had been lied to, the best strategy for repairing trust was for the robot to explain why it lied.
Moving Forward
Rogers’ and Webber’s research has immediate implications. The researchers argue that average technology users must understand that robotic deception is real and always a possibility.
“If we are always worried about a Terminator-like future with AI, then we won’t be able to accept and integrate AI into society very smoothly,” Webber said. “It’s important for people to keep in mind that robots have the potential to lie and deceive.”
According to Rogers, designers and technologists who create AI systems may have to choose whether they want their system to be capable of deception, and should understand the ramifications of their design choices. The most important audiences for the work, Rogers said, should be policymakers.
“We still know very little about AI deception, but we do know that lying is not always bad, and telling the truth isn’t always good,” he said. “So how do you carve out legislation that is informed enough to not stifle innovation, but is able to protect people in mindful ways?”
Rogers’ objective is to create a robotic system that can learn when it should and should not lie when working with human teams. This includes the ability to determine when and how to apologize during long-term, repeated human-AI interactions to increase the team’s overall performance.
“The goal of my work is to be very proactive and inform the need to regulate robot and AI deception,” Rogers said. “But we can’t do that if we don’t understand the problem.”
Reference: “Lying About Lying: Examining Trust Repair Strategies After Robot Deception in a High-Stakes HRI Scenario” by Kantwon Rogers, Reiden John Allen Webber and Ayanna Howard, 13 March 2023, ACM/IEEE International Conference on Human-Robot Interaction 2023. DOI: 10.1145/3568294.3580178

Georgia Tech researchers are examining the impact of intentional robot deception on human trust and the effectiveness of different apology types in restoring it, with a surprising result suggesting that apologies without an admission of lying are more successful at repairing trust.
Consider the following scenario: A young child poses a question to a chatbot or voice assistant, asking if Santa Claus is real. Given that different families have differing preferences, with some preferring a lie over the truth, how should the AI respond in this situation?
The field of robot deception remains largely unexplored and, at present, there are more questions than answers. One of the key questions is: if people become aware that a robotic system has lied to them, how can trust in such systems be restored?
Kantwon Rogers, a Ph.D. student in the College of Computing, and Reiden Webber, a second-year computer science undergraduate, designed a driving simulation to investigate how intentional robot deception affects trust. Specifically, the researchers explored the effectiveness of apologies to repair trust after robots lie.