“The robot told me to do it”: robots encourage risky behavior
Research shows that when a robot encourages risky behavior, most people comply with little hesitation. This raises concerns about the power of influence and suggestion wielded by algorithms and artificial intelligences such as Siri or Google Home.
New research has shown that robots can encourage people to take greater risks in a simulated gambling scenario than they would if nothing were influencing their behavior. The study deepens our understanding of whether robots' capacity to influence risk-taking could have ethical, practical and policy implications.
Robotic peer pressure?
Dr Yaniv Hanoch, associate professor of risk management at the University of Southampton, who led the study, explained: “We know that peer pressure can lead to higher risk-taking behavior, and it is crucial to find out whether machines can have a similar impact.”
This new research, published in the journal Cyberpsychology, Behavior, and Social Networking, involved 180 undergraduates taking the Balloon Analogue Risk Task (BART), a computer-based assessment in which participants press the spacebar on a keyboard to inflate a balloon displayed on the screen. With each press, the balloon inflates slightly and 1 cent is added to the player’s “temporary bank”.
Testing the robot’s influence on humans
Balloons can burst at random, in which case the player loses the money earned for that balloon; alternatively, the player can “cash out” before that happens and move on to the next balloon.
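The mechanics described above can be sketched in a few lines of Python. This is a minimal simulation, not the actual BART software: the per-pump burst probability of 1/64 and the 1-cent payout are illustrative assumptions, and the real task presents the choice interactively rather than fixing the number of pumps in advance.

```python
import random

def play_balloon(pumps_intended, burst_prob=1/64, cents_per_pump=1):
    """Simulate one BART balloon.

    The player pumps up to `pumps_intended` times, banking
    `cents_per_pump` per pump; each pump carries an assumed
    `burst_prob` chance that the balloon bursts, wiping out
    the temporary bank for that balloon.
    """
    bank = 0
    for _ in range(pumps_intended):
        if random.random() < burst_prob:
            return 0  # balloon burst: temporary bank is lost
        bank += cents_per_pump
    return bank  # player cashed out before a burst

if __name__ == "__main__":
    random.seed(1)  # reproducible run for illustration
    total = sum(play_balloon(pumps_intended=10) for _ in range(30))
    print(f"Earnings over 30 balloons: {total} cents")
```

The tension the task measures is visible in the parameters: more intended pumps mean a larger potential bank for each balloon, but also more chances for the balloon to burst and the bank to vanish.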
A third of the participants took the test in a room alone (the control group); a third took it alongside a robot that provided instructions but otherwise remained silent; and the final third, the experimental group, took the test with a robot that provided instructions and also offered encouraging statements such as “Why did you stop pumping?”
The results showed that the group encouraged by the robot took more risks, bursting their balloons significantly more often than the other groups. They also earned more money overall. There was no significant difference between the behavior of students with the silent robot and those with no robot at all.
The robot’s superior influence over natural instincts
Dr Hanoch said: “We saw participants in the control condition scale back their risk-taking behavior following a balloon explosion, whereas those in the experimental condition continued to take as much risk as before. This means the robot seems to exert a greater influence than participants’ direct experiences and instincts.”
The researcher now believes further studies are needed to see whether similar results emerge from human interaction with other artificial intelligence (AI) systems, such as digital assistants or on-screen avatars. For example, Alexa might nudge you to buy more on Amazon, or a Google Home might steer you away from political views that diverge from corporate orthodoxy.
Dr Hanoch concluded: “With the wide diffusion of AI technology and its interactions with humans, this is an area that requires urgent attention from the research community.”
“On the one hand, our findings could raise alarms about the possibility of robots causing damage by increasing risky behavior. On the other hand, our data indicates the possibility of using robots and AI in prevention programs, such as anti-smoking campaigns in schools, and with hard-to-reach populations, such as drug addicts.”