AI Chatbots Are Powerful Enough to Change Voters’ Minds and That Raises Big Questions About Democracy


Recent research from Cornell University and international collaborators has delivered a clear and slightly unsettling message: AI chatbots can meaningfully influence how people think about political candidates and public policies. And they can do it fast. According to multiple large-scale studies published in Nature, Science, and PNAS Nexus, even a short conversation with a chatbot powered by a large language model (LLM) can shift voter opinions in measurable and sometimes dramatic ways.

This isn’t science fiction or a hypothetical future scenario. These effects were observed in controlled experiments tied directly to real elections in the United States, Canada, Poland, and the United Kingdom.


How Researchers Tested Political Persuasion Using AI

The findings come from two major studies led by David Rand, a professor of information science and marketing at Cornell University, alongside researchers from multiple institutions. The goal was simple but ambitious: test whether conversational AI could persuade voters, and if so, how effectively.

In the Nature paper, participants were randomly assigned to have short, back-and-forth text conversations with an AI chatbot. These chatbots were explicitly instructed to promote one political candidate or policy position. After the interaction, researchers measured whether participants’ opinions or voting intentions had changed.

Importantly, participants always knew they were talking to an AI, and they were fully debriefed after the experiment. The direction of persuasion was also randomized, so the experiment did not push aggregate opinion toward any one side.
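The design described here — random assignment to a persuasion condition, then a pre/post opinion measurement — boils down to a difference-in-means treatment effect. The sketch below is illustrative only: the function name and the synthetic data are assumptions, not taken from the papers.

```python
import random
import statistics

def average_treatment_effect(treated_shifts, control_shifts):
    """Difference in mean opinion shift between groups (100-point scale)."""
    return statistics.mean(treated_shifts) - statistics.mean(control_shifts)

# Illustrative synthetic data: post-minus-pre opinion shifts toward the
# promoted candidate, on a 100-point scale. All values are made up.
random.seed(0)
treated = [random.gauss(4.0, 10.0) for _ in range(1000)]  # chatted with the chatbot
control = [random.gauss(0.0, 10.0) for _ in range(1000)]  # no persuasion chat

effect = average_treatment_effect(treated, control)
print(f"Estimated shift: {effect:.1f} points")
```

Randomizing which group each participant lands in is what lets this simple subtraction be read as a causal effect rather than a correlation.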


Results From Real Elections Across Multiple Countries

The experiments were conducted around three major elections:

  • The 2024 U.S. presidential election
  • The 2025 Canadian federal election
  • The 2025 Polish presidential election

In the United States, more than 2,300 participants were surveyed roughly two months before Election Day. The results showed a modest but meaningful shift in voter attitudes. On a 100-point opinion scale, a pro–Kamala Harris chatbot moved likely Donald Trump voters nearly 4 points toward Harris. By comparison, traditional political ads tested during the 2016 and 2020 elections produced effects only about one-quarter as large.

A pro-Trump chatbot also had an effect, though smaller, shifting likely Harris voters about 1.5 points toward Trump.

The results outside the U.S. were even more striking. Among 1,530 Canadian voters and 2,118 Polish voters, chatbots shifted opposition voters’ attitudes and voting intentions by around 10 percentage points. For researchers who study political persuasion, this is considered an unusually large effect.


Why Chatbots Are So Persuasive

One of the most important findings is that AI chatbots aren’t persuasive because they emotionally manipulate people. Instead, they persuade by overwhelming users with large numbers of factual-sounding claims that support their argument.

When researchers limited the chatbot’s ability to use facts, its persuasive power dropped sharply. This revealed that fact-based arguments are the central driver of AI persuasion, even when those facts are selectively presented or incomplete.

Chatbots tended to be polite, structured, and evidence-focused. They rarely used aggressive language or emotional appeals. The sheer volume of claims, explanations, and supporting points created a sense of credibility and thoroughness that many participants found convincing.


Accuracy Problems and Political Bias

The research team also examined how accurate these chatbot arguments were. To do this, they used another AI system that had been validated against professional human fact-checkers.

Most claims were broadly accurate, but a consistent and important pattern emerged: chatbots advocating for right-leaning candidates produced more inaccurate claims than those supporting left-leaning ones. The pattern held in all three countries studied.

The finding mirrors long-standing research showing that, on social media, users on the political right tend to share more inaccurate or misleading information than users on the left. The researchers validated this result using politically balanced groups of human reviewers to reduce bias.
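Validating an automated fact-checker against human raters usually comes down to an agreement statistic such as Cohen's kappa, which corrects raw agreement for chance. A minimal sketch, with made-up verdict labels (the studies' actual validation procedure is not reproduced here):

```python
def cohens_kappa(labels_a, labels_b):
    """Agreement between two raters on the same items, corrected for chance."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Fraction of items where the two raters gave the same label.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Agreement expected by chance, from each rater's label frequencies.
    categories = set(labels_a) | set(labels_b)
    expected = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical verdicts: AI fact-checker vs. a human fact-checker.
ai_verdicts    = ["true", "true", "false", "true", "false", "true"]
human_verdicts = ["true", "false", "false", "true", "false", "true"]
print(f"kappa = {cohens_kappa(ai_verdicts, human_verdicts):.2f}")
```

A kappa near 1 means the AI fact-checker and the humans are labeling claims almost identically; a value near 0 means their agreement is no better than chance.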


A Much Larger Study With Even Bigger Effects

A second major paper, published in Science, explored what makes chatbots more persuasive at scale. This study involved nearly 77,000 participants in the United Kingdom, who interacted with chatbots across more than 700 political issues.

The conclusions were clear:

  • Larger AI models were more persuasive
  • Models trained specifically to persuade were even more effective
  • The single most powerful factor was instructing chatbots to include as many factual claims as possible

The most persuasion-optimized chatbot in this study shifted opposition voters by a staggering 25 percentage points.

However, there was a trade-off. As persuasion increased, accuracy decreased. Researchers believe that when models are pushed to generate more and more factual claims, they eventually exhaust reliable information and begin fabricating details.


Supporting Evidence From Conspiracy Theory Research

These findings align with a third study published in PNAS Nexus, which examined whether AI chatbots could reduce belief in conspiracy theories. The researchers found that chatbot arguments successfully lowered conspiracy beliefs even when participants thought they were talking to a human expert.

This suggests that the persuasive power lies in the message itself, not in whether people believe AI is authoritative or trustworthy. Clear explanations, structured arguments, and repeated factual claims appear to do most of the work.


Ethical Safeguards and Experimental Limits

All of these studies were conducted under strict ethical guidelines. Participants were informed they were interacting with AI, the conversations were transparent, and no single political direction was favored overall.

Researchers emphasize that real-world political campaigning is more complex. Chatbots can only persuade people who actually choose to engage with them, which remains a significant barrier. Exposure, attention, and motivation still matter.

Still, the results show that if engagement happens, AI chatbots can be powerful persuasion tools.


Why This Matters for Elections and Democracy

The growing role of AI in political communication raises serious questions. Chatbots can scale instantly, tailor arguments, and engage users one-on-one in ways traditional media cannot. This makes them potentially more influential than ads, social media posts, or televised debates.

At the same time, the tendency for highly persuasive models to drift away from accuracy highlights a key risk. Persuasion and truth are not the same thing, and optimizing AI for influence may unintentionally reward misleading or incomplete arguments.

The researchers argue that studying these systems now, in transparent and controlled settings, is essential for developing ethical guidelines, regulations, and public awareness before misuse becomes widespread.


The Bigger Picture of AI and Political Communication

AI chatbots are already used for customer service, education, and entertainment. Political use is a logical next step, whether through campaign tools, issue explainers, or informal voter engagement. These studies suggest that AI will likely become a permanent part of political discourse, for better or worse.

The challenge ahead is not just technical but social: helping people recognize AI-driven persuasion, question information overload, and develop resistance to highly optimized arguments that sound convincing but may not tell the full story.


Research papers:
https://www.nature.com/articles/s41586-025-09771-9
https://www.science.org/doi/10.1126/science.aea3884
https://academic.oup.com/pnasnexus/article/4/1/pgaf325
