Lie-detecting AIs could trick people into making accusations

AI programs for detecting lies could make people more willing to accuse others of dishonesty, according to a new study.

The study, published in iScience, found that people who asked an AI to judge whether a statement was true or false were highly likely to trust its verdict afterwards.

“In our society, there are strong, well-established norms regarding accusations of lying,” says lead author Nils Köbis, a behavioral scientist at the University of Duisburg-Essen.

“It would take a lot of courage and evidence to openly accuse others of lying. However, our study shows that AI could become an excuse behind which people can conveniently hide in order to avoid being held accountable for the consequences of accusations.”

The researchers trained an algorithm on written statements from 986 volunteers, each of whom was asked to write one true and one false statement about their plans for the coming weekend.

The algorithm then correctly classified statements as true or false 66% of the time, significantly better than human judges, whose accuracy, the researchers note, rarely rises above chance.
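To make the training step concrete, here is a minimal, hypothetical sketch of the kind of supervised text classifier the study describes, written in Python with scikit-learn. The statements, labels and pipeline below are illustrative placeholders, not the authors’ actual model or data.

    # Hypothetical sketch: a supervised classifier trained on statements
    # labeled true or false. The study's real model is not reproduced here;
    # this uses a simple TF-IDF bag-of-words pipeline with logistic regression.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    # Toy placeholder statements about weekend plans (1 = true, 0 = false).
    statements = [
        "I will visit my parents on Saturday afternoon.",
        "I am going hiking with two friends on Sunday.",
        "I am flying to Mars for the weekend.",
        "I will run three marathons back to back on Saturday.",
    ]
    labels = [1, 1, 0, 0]

    # Hold out half the data for evaluation, keeping both classes in each split.
    X_train, X_test, y_train, y_test = train_test_split(
        statements, labels, test_size=0.5, stratify=labels, random_state=0)

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(X_train, y_train)

    # On the study's real corpus, a classifier of this general kind reached
    # about 66% accuracy, versus roughly chance-level accuracy for humans.
    print("accuracy:", model.score(X_test, y_test))

Any standard text-classification pipeline could stand in here; the point is simply that the model learns from labeled examples and is then scored on held-out statements, as the study’s algorithm was.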

The researchers then asked 2,040 other volunteers to read written statements and decide whether they were true or false.

Participants were divided into four groups: a control group that received no AI assistance; a forced group that was always shown the AI’s judgment of a statement before giving their own; a choice group that could request and receive an AI judgment; and a blocked group that could request a judgment but never received one.

In the forced group, participants were significantly more likely to judge a statement false than in the control and blocked groups. When the AI declared a statement true, only 13% of participants said it was false; when the AI declared it false, 40% said it was false.

Only a third of the participants in the choice group actually requested an AI judgment.

“That may be because of a very robust effect we have seen across studies: people are overconfident in their own lie-detection abilities, even though humans are really bad at it,” says Köbis.

But those who did request a judgment were quick to trust it: when the AI said a statement was false, 84% of the participants in that group agreed.

“This shows that once people have such an algorithm at their fingertips, they rely on it and perhaps change their behavior. If the algorithm calls something a lie, people are willing to go along with it. This is quite worrying and shows that we should be really careful with this technology,” says Köbis.

In their paper, the researchers argue that there is an “urgent need for a comprehensive policy framework” to govern lie-detecting AI, citing concerns about legal liability, public trust and the consequences of false accusations.

“There is huge hype around AI, and many people believe these algorithms are really potent and even objective. I’m really worried that this could lead people to rely on them too much, even when they don’t work that well,” says Köbis.
