
Can AI wrest us from the grip of conspiracy theories?

In “Durably reducing conspiracy beliefs through dialogues with AI,” researchers from the Sloan School of Management at MIT and the psychology department at Cornell found that an AI chatbot (GPT-4 Turbo, specifically), when instructed to “deliver well-argued, person-specific debunks,” reduced participants’ belief in a conspiracy theory by roughly 20% on average.

“This effect did not decay over two months,” write Thomas H. Costello, Gordon Pennycook, and David G. Rand. “It was consistently observed across a wide range of different conspiracy theories, and it occurred even for participants whose belief in conspiracy theories was deeply entrenched and of great importance to their identities.”
The researchers describe conspiracy theories as a conviction that secret, malevolent plots involving an array of powerful conspirators are behind public events — everything from the COVID-19 pandemic to election fraud, social unrest, and even aliens. “Despite the often quite implausible nature of many conspiracy theories, a large fraction of the world has come to believe them, including as much as 50% of the U.S. population.”

These people ain’t fools, either.

“Conspiracy theorists are not all likely to be simple-minded, mentally unwell folks — a portrait which is routinely painted in popular culture,” says Shauna Bowes, a doctoral student in clinical psychology at Emory University who led a study published by the American Psychological Association. That research found that people gravitate toward conspiracy theories out of an understandable desire to make sense of their environment and feel safe in it, but also out of a more insidious one: to feel that the community they identify with is superior to others.

This is why broaching these subjects with friends, colleagues, and loved ones can be a painful and infuriating exercise — but a very necessary one.

The AI debunking occurred over three rounds of conversation with the chatbot and began when the research subject explained their belief in their own words. In one such exchange, in which the subject was 100% convinced that the attacks on 9/11 had been staged by the U.S. government, the chatbot opened by acknowledging her point and offering counterevidence that was easy to check.
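For readers curious what that setup looks like in practice, here is a minimal sketch, assuming the OpenAI Python client; the system prompt, function names, and conversation wiring are illustrative assumptions rather than the researchers’ actual materials.

```python
# A minimal sketch of a multi-round debunking dialogue, assuming the OpenAI
# Python client (openai>=1.0). The prompt wording, function names, and round
# structure are illustrative, not the researchers' actual materials.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "The user believes a conspiracy theory. Respond respectfully, without "
    "insults, and deliver well-argued, person-specific counterevidence that "
    "addresses the specific reasons the user gives for their belief."
)

def debunking_dialogue(initial_statement, follow_ups):
    """Run three rounds of conversation, starting from the subject's own
    words, and return the chatbot's replies."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": initial_statement},
    ]
    replies = []
    for turn in range(3):  # three rounds, as described in the study
        response = client.chat.completions.create(
            model="gpt-4-turbo",
            messages=messages,
        )
        reply = response.choices[0].message.content
        replies.append(reply)
        messages.append({"role": "assistant", "content": reply})
        if turn < len(follow_ups):  # the subject's next response, if any
            messages.append({"role": "user", "content": follow_ups[turn]})
    return replies
```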

The exchanges were conversational in tone but deceptively rich in content.

“If you read the conversations the bots have, one thing they never do is insult the people they’re chatting with. And what they do do is a TREMENDOUS AMOUNT OF WORK,” says author Hank Green on Threads. “They pull together huge amounts of information to interface with specific concerns that the ‘nutjobs’ have. This is work that I, a human, would never do for a stranger.”

Of course, the counterargument is that calm, friendly chatbots are just as likely to radicalize as they are to deradicalize. Then again, radicalization is already easy to pull off. In an age when so many people believe wrong-headed things after “doing their own research,” it’s hard to argue that there isn’t room for improvement.

In the example of the 9/11 conspiracist, the subject’s certainty dropped from 100% to 40% after the three exchanges, providing a meaningful opening to new facts and, possibly, better relationships with others.

It’s worth considering how debunking prompts could be used in online searches, health care settings, and public policy debates. Why not pastoral settings or family gatherings? They could help any time conspiratorial beliefs pose a barrier to decision-making and community.

Even as a first try, the research is promising. “Refining systems like this could teach us a huge amount about deradicalization,” Green says. “Deradicalization is hard work that humans don’t like doing. Maybe there is something here.”

A version of this essay was originally published in the Equity Observer email newsletter.




