A New AI-Powered Social Media Tool Shows How Tweaking Feed Algorithms Can Reduce Political Hostility


A team of researchers from Stanford University, the University of Washington, and Northeastern University has built a new tool that offers a surprisingly simple way to cool down political hostility on social media. Instead of blocking posts or requiring cooperation from platforms like X (formerly Twitter), the tool quietly reorders a user's feed. Posts containing antidemocratic attitudes or strong partisan animosity are pushed lower, allowing users to see them later rather than immediately.

What makes this especially interesting is that the tool doesn't depend on any direct participation from X. It works as a browser-based interface that sits on top of the user's existing feed. Behind the scenes, the system uses a large language model to identify content that expresses extreme partisan negativity: things like wishing harm on political opponents, rejecting democratic norms, or expressing open hostility toward bipartisan cooperation.
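To make the idea concrete, here is a minimal Python sketch of the core mechanism: score each post for hostile or antidemocratic content and push flagged posts toward the bottom of the feed instead of deleting them. The keyword-based scorer and all names below are illustrative stand-ins; the actual tool relies on a large language model classifier, and this sketch does not reproduce the researchers' released code.

```python
# Illustrative sketch only: defer (not remove) posts flagged as hostile.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str

def hostility_score(text: str) -> float:
    """Stand-in for the LLM classifier: returns a value in [0, 1],
    where higher means stronger antidemocratic or hostile content."""
    hostile_markers = ("deserve to suffer", "enemies of the people", "lock them all up")
    return 1.0 if any(m in text.lower() for m in hostile_markers) else 0.0

def rerank(posts: list[Post], threshold: float = 0.5) -> list[Post]:
    """Keep the platform's original order within each group, but move
    posts above the hostility threshold below the rest of the feed."""
    calm = [p for p in posts if hostility_score(p.text) < threshold]
    hostile = [p for p in posts if hostility_score(p.text) >= threshold]
    return calm + hostile  # nothing is removed, only deferred

feed = [
    Post("1", "They deserve to suffer for what they did to this country."),
    Post("2", "City council posted the new budget meeting schedule."),
    Post("3", "Great turnout at the bipartisan town hall last night!"),
]
print([p.post_id for p in rerank(feed)])  # -> ['2', '3', '1']
```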

The researchers tested this tool on roughly 1,200 participants over a 10-day period during the 2024 U.S. election. Users agreed to let the tool modify the order of posts in their feed. Some experienced a feed where negative political content was downranked. Others saw such content upranked, making it more prominent. A third group saw an unchanged feed for comparison.

The results were clear: users who saw less antidemocratic and hostile political content reported feeling warmer toward members of the opposing political party. On average, their views shifted by two points on a 100-point scale, a change researchers say typically takes years to occur at the national level. Interestingly, the effect was bipartisan, showing up among both conservative and liberal participants.

The tool also reduced emotional negativity, including feelings of anger and sadness, suggesting that simply changing when users encounter polarizing posts, rather than removing them, can have a measurable effect on their mood and political tolerance. Users exposed to more aggressive political content, on the other hand, showed a corresponding increase in hostility.

The system's design is based on previous sociological research on what constitutes antidemocratic attitudes. That includes content promoting extreme measures against political opponents, dismissing facts that contradict one's party, or openly endorsing actions that undermine democratic norms. The research team deliberately avoided removing any posts, focusing instead on reordering the feed to prevent what one researcher described as emotional hijacking: those immediate negative reactions that can reinforce political division.

The implications go beyond reducing polarization. Michael Bernstein, one of the lead researchers at Stanford, emphasized that this project demonstrates how users and independent researchers can meaningfully understand and influence ranking algorithms, something historically under the tight control of social media companies. This opens the door to a future where users might customize algorithmic behavior based on their own priorities rather than accepting whatever the default ranking system presents.

To support broader experimentation, the team released the tool's code publicly. That means developers and researchers can build their own ranking systems for purposes such as improving mental health, reducing harassment exposure, or amplifying certain types of positive content. While this particular study focused on political animosity, the mechanism of AI-assisted reranking can theoretically be applied to many other areas where online content affects user well-being.

Understanding Why Algorithmic Ranking Matters

Social media platforms typically highlight posts that generate strong engagement. Unfortunately, content that provokes outrage or hostility often performs extremely well under engagement-based ranking. This incentive structure means polarizing posts naturally rise to the top of a user's feed, even when those posts distort the emotional climate and push people toward extreme reactions.

This study shows that algorithmic tweaks, performed externally and without platform permission, can noticeably change user perceptions. Instead of seeing hostile political content right away, users might encounter it later, after already engaging with less inflammatory material. This softens the emotional impact and reduces the tendency to react out of anger or fear.
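As a rough, made-up illustration of that difference (the numbers and weights below are not from the study), compare an engagement-first ordering with one that subtracts a hostility penalty so flagged posts sink rather than disappear:

```python
# Toy example: engagement-first ranking vs. ranking with a hostility penalty.
posts = [
    # (post_id, engagement_score, hostility_score from a classifier)
    ("outraged-rant", 950, 0.9),
    ("local-news",    400, 0.0),
    ("policy-thread", 620, 0.1),
]

def engagement_rank(items):
    # Highest engagement first: outrage tends to win under this rule.
    return sorted(items, key=lambda p: -p[1])

def downranked(items, penalty=1000):
    # Subtract a penalty proportional to hostility so flagged posts
    # drop toward the bottom of the feed without being removed.
    return sorted(items, key=lambda p: -(p[1] - penalty * p[2]))

print([p[0] for p in engagement_rank(posts)])  # ['outraged-rant', 'policy-thread', 'local-news']
print([p[0] for p in downranked(posts)])       # ['policy-thread', 'local-news', 'outraged-rant']
```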

While a two-point shift on a feelings scale may sound modest, researchers emphasized that such a change is meaningful. Studies of political attitudes show that small, consistent shifts in affective polarization can influence voting patterns, openness to dialogue, and trust in democratic institutions.

Limitations and Considerations

Although the findings were promising, the study does have limits:

• It was restricted to web users, not mobile app users, which represents only a portion of X's total audience.
• The experiment lasted 10 days, so researchers do not yet know how long the positive effects might last.
• Emotional improvements measured during the experiment did not necessarily persist after it ended.

Still, these constraints do not diminish the core proof of concept: algorithmic reranking is both possible and impactful, even without social media platform cooperation.

Why This Matters for the Future of Social Platforms

One of the most important takeaways is the possibility of user-controlled algorithms. Today, major platforms decide what users see, using proprietary models that cannot be inspected or modified. This study challenges the assumption that users are powerless in this process.

If browser-based reranking tools become more common, users might one day adjust sliders or settings to prioritize the kind of content they want: less hostility, more factual information, fewer repetitive posts, or more diverse perspectives. Developers could also design tools targeting specific problems like misinformation, harassment, or mental health strain.

This approach also has implications for regulation. Policymakers often struggle with balancing free expression and online safety. A system like this offers a middle path: no censorship, but more control for the individual.

More About Antidemocratic Attitudes

Because the study centers on identifying antidemocratic and hostile partisan content, it's useful to understand the categories researchers used. These included:
• Statements endorsing violence or punishment against political opponents.
• Messages rejecting democratic processes or bipartisan cooperation.
• Posts that deliberately dismiss factual information because it benefits the opposing party.
• Expressions supporting the idea of abandoning democratic norms to favor one's own political side.

These categories come from a body of sociological work that highlights how such attitudes predict lower trust in institutions and higher support for extreme political actions. By identifying and downranking this specific type of content, the tool effectively reduces its emotional impact.
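For illustration, the four categories listed above could be folded into a classifier prompt along the following lines. The wording is hypothetical and not taken from the study's released prompts; it only shows how a large language model might be asked to apply these criteria.

```python
# Hypothetical prompt construction for an LLM-based content classifier.
CATEGORIES = [
    "endorses violence or punishment against political opponents",
    "rejects democratic processes or bipartisan cooperation",
    "dismisses factual information because it benefits the opposing party",
    "supports abandoning democratic norms to favor one's own side",
]

def build_prompt(post_text: str) -> str:
    criteria = "\n".join(f"- {c}" for c in CATEGORIES)
    return (
        "You are labeling social media posts for antidemocratic or hostile "
        "partisan content. Flag a post if it does any of the following:\n"
        f"{criteria}\n\n"
        f"Post: {post_text!r}\n"
        "Answer with FLAGGED or OK."
    )

print(build_prompt("Anyone who votes for them is an enemy of the country."))
```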

Broader Implications for AI and Social Media

This study fits into a growing conversation about how AI models can shape online environments. Large language models excel at classifying nuanced text, making them suitable for tasks like detecting hostility, misinformation, or emotional tone. At the same time, giving users algorithmic autonomy aligns with increasing public demand for transparency and control.

While this research does not eliminate political tension, it demonstrates that even small adjustments in content ranking can nudge people toward healthier interactions. In a digital landscape dominated by engagement metrics, that's an encouraging development.

Research Paper:
https://www.science.org/doi/10.1126/science.adu5584
