Two separate studies published today in the journals Science and Nature arrived at the same conclusion: Artificial intelligence chatbots can be highly persuasive in shaping a person’s political opinions, with researchers pointing to the possibility that generative AI could influence future election results.
In the Nature study, researchers explained how they programmed various chatbots to advocate for specific political candidates in the 2024 U.S. presidential election and the 2025 national elections in Canada and Poland. They found that although the chatbots could further entrench people in support of their preferred candidate, they also sometimes succeeded in changing voters’ minds or swaying undecided voters.
For the U.S. study, 2,306 participants stated their preference for either Donald Trump or Kamala Harris, after which they were randomly assigned a chatbot that advocated for one of the two. The same setup was run in Canada, with the chatbots backing either Liberal Party leader Mark Carney or Conservative Party leader Pierre Poilievre. In Poland, the choice was between the Civic Coalition’s candidate Rafał Trzaskowski and the Law and Justice party’s candidate Karol Nawrocki.
For each experiment, the bot’s primary objective was to increase support for the candidate it was backing or, if the participant preferred the opposing politician, to decrease the participant’s support for that opponent. The bots had to be “positive, respectful and fact-based; to use compelling arguments and analogies to illustrate its points and connect with its partner; to address concerns and counter arguments in a thoughtful manner and to begin the conversation by gently (re)acknowledging the partner’s views.”
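Neither paper publishes its production code, but conceptually such a bot is little more than a system prompt wrapped around an off-the-shelf model. Below is a minimal, hypothetical sketch in Python assuming the OpenAI chat API; the prompt wording paraphrases the instructions quoted above, while the model name, candidate placeholder and loop structure are illustrative assumptions, not taken from either study.

```python
# Hypothetical sketch of an advocacy chatbot like those described above.
# The prompt paraphrases the quoted instructions; model choice, candidate
# placeholder and loop structure are this article's assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

CANDIDATE = "Candidate A"  # placeholder; the studies used real 2024/2025 candidates

SYSTEM_PROMPT = (
    f"You are advocating for {CANDIDATE}. Be positive, respectful and "
    "fact-based. Use compelling arguments and analogies to illustrate your "
    "points, address concerns and counterarguments thoughtfully, and begin "
    "by gently acknowledging your partner's views."
)

# Running conversation state; the system message sets the bot's objective.
messages = [{"role": "system", "content": SYSTEM_PROMPT}]

def bot_reply(user_turn: str) -> str:
    """Append the participant's message and return the bot's next turn."""
    messages.append({"role": "user", "content": user_turn})
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(bot_reply("I'm leaning toward the other candidate. Why yours?"))
```

The point of the sketch is how low the barrier is: everything that makes the bot an advocate lives in a few sentences of system prompt, which is also where a platform operator could quietly place the “thumb on the scale” the researchers warn about.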
The upshot was that the AI could, at times, sway a person’s thinking, though mostly when it presented fact-based arguments and evidence rather than appeals to the participant’s sense of right and wrong. That is where the researchers became concerned, since the bots did not always present accurate information. And though these bots were merely tasked with persuading, in a real-life scenario bias could be deliberately programmed into the AI.
“One implication of this is, if [AI companies] put a thumb on the scale and set the models up to push for one side or another, it could meaningfully change people’s minds,” David G. Rand, a professor of information science at Cornell University, told the Washington Post.
The Science study, led by researchers at Britain’s AI Security Institute, the University of Oxford, and Cornell, conducted a similar experiment, using 19 AI models but focusing on the U.K. Again, the bots were able to change opinions, but often after they had provided the participant with “substantial” amounts of inaccurate data.
After discussing issues such as “public sector pay and strikes” and the “cost of living crisis and inflation,” the bots had the most success influencing participants when providing “information-dense” responses. “These results suggest that optimizing persuasiveness may come at some cost to truthfulness, a dynamic that could have malign consequences for public discourse and the information ecosystem,” the researchers said.