Americans Are Vastly Overestimating Online Toxicity, and the Reality Is Far Less Grim


For years, many people have felt that the internet—especially social media—has become overwhelmingly hostile. Comment sections feel aggressive, misinformation seems everywhere, and it often looks like a majority of users are contributing to the problem. But new research suggests this gloomy picture may be deeply misleading. According to a recent study published in PNAS Nexus, there are far fewer online trolls than most Americans believe, and this widespread misunderstanding is shaping how people feel about society itself.

What the Study Looked At

The research was conducted by Angela Y. Lee, Eric Neumann, and their colleagues, who wanted to understand how people perceive harmful online behavior compared to how often it actually occurs. To do this, they surveyed 1,090 American adults using the online research platform CloudResearch Connect. Participants were asked to estimate how common toxic behavior and misinformation are on major platforms like Reddit and Facebook.

The researchers then compared these perceptions with platform-level data from prior large-scale studies, which track actual user behavior rather than impressions or anecdotes. The contrast between belief and reality turned out to be striking.

The Huge Gap Between Perception and Reality

One of the most eye-opening findings involved Reddit. Survey participants estimated that 43% of Reddit users post severely toxic comments. In reality, platform data show that only about 3% of Reddit users are responsible for such content, meaning participants overestimated the share of highly toxic users by a factor of roughly 13.

A similar pattern appeared with Facebook and misinformation. Participants guessed that 47% of Facebook users share false or misleading news, when the actual figure is closer to 8.5%. Misinformation is certainly a real problem, but the belief that nearly half of users spread falsehoods overstates its prevalence by a factor of more than five.
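As a quick sanity check, the overestimation factors follow directly from the percentages quoted above. A minimal sketch in Python (note that the rounded figures give a Reddit ratio closer to 14; the study's roughly thirteenfold figure presumably reflects unrounded estimates):

```python
# Perceived vs. measured rates quoted in this article (percent).
perceived = {
    "Reddit: severely toxic posters": 43.0,
    "Facebook: misinformation sharers": 47.0,
}
actual = {
    "Reddit: severely toxic posters": 3.0,
    "Facebook: misinformation sharers": 8.5,
}

for group, guess in perceived.items():
    factor = guess / actual[group]
    print(f"{group}: {guess}% perceived vs. {actual[group]}% actual "
          f"(~{factor:.1f}x overestimate)")
```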

What makes these results especially interesting is that they weren’t driven by ignorance or inability to recognize bad content.

People Can Spot Toxic Content but Still Misjudge Its Scale

In part of the study, participants completed a signal detection task, where they were shown examples of online content and asked to identify which posts were toxic or harmful. Most participants performed reasonably well. They could recognize aggressive language, harassment, and misinformation when it was presented to them directly.

Despite this, they still believed that a large portion of users were responsible for such behavior. This suggests that the issue isn’t about misunderstanding what toxicity looks like—it’s about misunderstanding how many people are actually doing it.
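The article does not describe how the task was scored, but signal detection performance is conventionally summarized with the sensitivity index d′, the gap between the z-transformed hit rate and false-alarm rate. A minimal sketch with made-up response counts (the participant and the counts are hypothetical, not from the study):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

# Hypothetical participant: correctly flags 40 of 50 toxic posts
# and wrongly flags 5 of 50 benign posts.
print(f"d' = {d_prime(40, 10, 5, 45):.2f}")  # ~2.12: good discrimination
```

A d′ near zero would mean guessing; values around 2 indicate the kind of reliable discrimination the participants showed.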

The Loud Minority Effect

According to the researchers, a key reason for this distortion is what can be described as a vocal minority problem. A small group of highly active users tends to generate a disproportionate amount of toxic or harmful content. Because these users post frequently, comment aggressively, and engage in heated debates, their voices dominate visibility.

Algorithms can amplify this effect. Content that sparks strong reactions—anger, outrage, or shock—is more likely to be promoted, shared, or discussed. As a result, people encounter toxic content often, even though it comes from a relatively small number of accounts. Over time, this repeated exposure creates the illusion that such behavior is the norm rather than the exception.
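A toy simulation makes the mechanism concrete. Suppose, in line with the roughly 3% figure above, that 3% of users post toxic content, but (by assumption, not from the study) they post ten times as often and engagement-based ranking gives their posts triple the reach:

```python
import random

random.seed(42)

# Illustrative parameters; only the 3% figure comes from the article.
N_USERS = 10_000
TOXIC_SHARE = 0.03        # matches the ~3% Reddit figure cited earlier
TOXIC_POST_RATE = 10      # assumption: toxic users post 10x more often
ALGO_BOOST = 3            # assumption: provocative posts get 3x the reach

# Generate posts: True = toxic post, False = ordinary post.
posts = []
for _ in range(N_USERS):
    is_toxic_user = random.random() < TOXIC_SHARE
    posts.extend([is_toxic_user] * (TOXIC_POST_RATE if is_toxic_user else 1))

# Simulate a feed in which boosted posts are proportionally more visible.
weights = [ALGO_BOOST if toxic else 1 for toxic in posts]
feed = random.choices(posts, weights=weights, k=1_000)

print(f"Toxic users:         {TOXIC_SHARE:.0%}")
print(f"Toxic posts overall: {sum(posts) / len(posts):.0%}")
print(f"Toxic posts in feed: {sum(feed) / len(feed):.0%}")
```

Even in this crude model, 3% of users end up behind roughly half of what a reader actually sees, which is how a small minority starts to feel like the norm.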

How This Misperception Affects People’s Worldview

The study didn’t stop at measuring misperceptions. The researchers also explored how these beliefs influence emotions and attitudes about society. They found that people who believe toxic behavior is widespread tend to feel more pessimistic overall.

Specifically, participants who overestimated online toxicity were more likely to believe that society is experiencing moral decline. They also assumed that many Americans are comfortable with, or even supportive of, harmful online content. These beliefs can erode trust, increase cynicism, and make social interactions—both online and offline—feel more adversarial.

Correcting the Record Makes a Difference

In one experiment within the study, participants were given accurate information about how rare severe toxicity actually is. The effects of this correction were immediate and measurable.

After learning the real numbers, participants reported feeling more positive, less pessimistic about societal values, and less convinced that most people support harmful online behavior. They were also more likely to believe that most Americans actually want less toxicity online, not more.

This suggests that simply providing clear, evidence-based context can help counteract the emotional toll caused by distorted perceptions.

Why Our Brains Fall for This Trap

From a psychological standpoint, humans are naturally drawn to negative information. Hostile comments and shocking misinformation stand out more than polite discussions or accurate posts. This tendency, known as negativity bias, makes negative experiences more memorable and influential than neutral or positive ones.

When combined with constant exposure to social media, this bias can dramatically skew perception. Even if the majority of users are respectful, their behavior blends into the background. Meanwhile, toxic posts feel louder, more frequent, and more representative than they truly are.

Why This Research Matters

Understanding the true scale of online toxicity has important implications. If people believe that most users are hostile or dishonest, they may disengage, withdraw from public conversations, or adopt a more defensive and aggressive tone themselves. This can create a self-reinforcing cycle, where pessimism breeds more negativity.

By highlighting that harmful content largely comes from a small minority, this research offers a more hopeful perspective. It suggests that the internet is not as broken as it often feels—and that many users share similar desires for respectful, constructive spaces.

Broader Implications for Platforms and Users

For social media platforms, these findings reinforce the importance of addressing repeat offenders rather than treating all users as equally problematic. Targeted moderation aimed at highly prolific toxic accounts could significantly reduce the overall visibility of harmful content.
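To see why targeting repeat offenders has so much leverage, consider a sketch under one loud assumption: that per-account posting volume is heavy-tailed (here, Pareto-distributed), a common stylized model of platform activity but not something this study measured:

```python
import random

random.seed(0)

# Assumption: per-account post counts follow a heavy-tailed Pareto
# distribution, so a handful of prolific accounts dominate output.
post_counts = sorted(
    (int(random.paretovariate(1.2)) for _ in range(10_000)), reverse=True
)

total = sum(post_counts)
top_1_percent = sum(post_counts[:100])
print(f"Top 1% of accounts produce {top_1_percent / total:.0%} of all posts")
```

Under a distribution like this, suspending or rate-limiting a small set of prolific offenders removes a disproportionate share of content, which is exactly the leverage targeted moderation exploits.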

For everyday users, the takeaway is equally important. Encountering toxic posts does not mean most people think that way. In reality, the majority are silent, respectful, or simply engaging without causing harm.

The Bottom Line

The internet may feel more toxic than ever, but this new research suggests that perception is doing much of the damage. While harmful content is real and should not be dismissed, it is far less representative of the average user than most people believe. Recognizing this gap between belief and reality could go a long way toward reducing cynicism and restoring a sense of shared social values.

Research paper: https://academic.oup.com/pnasnexus/article/4/12/pgaf310/8377954
