How Personalized Algorithms Can Distort What We Learn and How We See the World
Personalized algorithms shape nearly everything we interact with online — from the videos we watch to the articles we read. But a new study suggests that these same algorithms can quietly distort our understanding of unfamiliar topics and give us a false sense of confidence about what we think we know. This research, led by Giwon Bahg during his doctoral work in psychology at The Ohio State University, takes a direct look at how algorithm-driven content can influence learning, exploration, and generalization, even when people approach a topic with no prior knowledge at all.
The study was published in the Journal of Experimental Psychology: General and investigates a fundamental question: What happens when an algorithm decides what information we see as we try to learn something completely new?
Below is a clear breakdown of what the researchers found, how they tested it, and why the results matter for anyone using online platforms that personalize content — which, at this point, is essentially all of us.
How the Study Was Designed
To understand the effects of algorithmic personalization on learning, the researchers created a fully fictional environment so participants would enter with zero background knowledge. This removed real-world biases and allowed the team to see how algorithms shape learning from scratch.
Participants were introduced to imaginary crystal-like alien species. Each alien had six distinct features, and these features varied across different alien types. For example, one feature might be a square box that appeared dark black for some alien categories and pale gray for others. The participants’ task was simple: learn how to identify and categorize these aliens based on the hidden features.
Every alien’s features were initially covered by gray boxes. The only way to uncover a feature was to click on it.
The experiment involved two main groups:
- Full Sampling Group:
Participants here had access to all features and were instructed to sample everything in order to get a complete picture. Their experience mimicked a non-personalized, open-exploration environment.
- Algorithm-Guided Group:
In this group, participants chose features to click, but a personalized algorithm decided which study items would appear next. The algorithm was designed to predict and serve the features participants were most likely to continue sampling. It subtly encouraged them to keep clicking the same types of features. Although participants could technically explore any feature they wanted, the algorithm nudged them toward repetitive behavior.
This setup allowed the researchers to mimic how recommendation systems work on platforms like YouTube, TikTok, Netflix, and Instagram — systems that feed users more of what they’ve shown interest in, even if the initial choice was random.
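To make that feedback loop concrete, here is a minimal sketch in Python of how such a system can narrow exploration. The feature names, the click-count weighting, and the `recommend` function are illustrative assumptions of ours, not the study's actual algorithm:

```python
import random
from collections import Counter

# Illustrative sketch only -- not the study's actual algorithm.
# Six hypothetical alien features; the "algorithm" surfaces whichever
# feature types the learner has clicked most often so far.
FEATURES = ["antenna", "shell", "core", "spike", "glow", "base"]

def recommend(click_history, k=3):
    """Return k features to display, weighted toward past clicks."""
    counts = Counter(click_history)
    # Every feature keeps a baseline weight of 1, so nothing is ever
    # hidden outright -- but early clicks quickly dominate the weights.
    weights = [1 + counts[f] for f in FEATURES]
    return random.choices(FEATURES, weights=weights, k=k)

history = []
for _ in range(20):                       # twenty learning trials
    shown = recommend(history)            # the algorithm picks what appears
    history.append(random.choice(shown))  # the learner clicks one of them

print(Counter(history))  # a few feature types soon crowd out the rest
```

Even though every feature stays technically available, the weights drift toward whatever happened to be clicked first, which is exactly the kind of nudge the algorithm-guided participants experienced.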
What the Researchers Found
The results were surprisingly clear — and concerning.
1. Algorithm-Guided Participants Explored Less
Those guided by the personalized algorithm examined fewer features overall and followed a consistently narrow sampling pattern. Instead of exploring the full range of alien characteristics, they gravitated toward a limited subset.
This means the algorithm effectively channeled their learning path, even though it never prevented them from exploring other features.
2. Their Understanding Became Distorted
When tested on new alien examples they had not seen before, these participants often misidentified the alien types. Their judgments were based on the limited features they had repeatedly seen, leading them to make overgeneralized assumptions.
In other words, they believed the world of aliens worked a certain way because the algorithm showed them only a slice of the full picture.
3. They Were More Confident When They Were Wrong
Perhaps the most striking finding:
Participants who learned through personalized algorithms were more confident in their incorrect answers than their correct ones.
This finding highlights a dangerous combination — limited exposure plus high confidence. In real-world scenarios, this could contribute to misinformation spread, hardened beliefs, and reduced curiosity.
Why These Findings Matter
Personalized algorithms are everywhere. Anytime we click, watch, like, or pause, we leave behind tiny traces of preference. Algorithms pick up these traces and quickly shift to feeding us more of what appears to match our interests.
The researchers offer a simple example:
If someone who has never watched movies from a certain country tries to explore them, a streaming platform will recommend a handful of films. If the person randomly chooses an action-thriller, the platform will assume they love that genre and continue recommending similar films. Soon, the viewer may form a distorted impression of the entire country’s cinema — or even its culture — based solely on this narrow subset.
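A rough way to picture that spiral is the toy model below, which is an assumption for illustration rather than a description of any real platform's recommender. A single, essentially random first pick gets boosted and then dominates what the viewer is shown afterward:

```python
import random
from collections import Counter

# Toy model of the streaming example above -- illustrative only,
# not how any real platform works.
GENRES = ["action-thriller", "drama", "comedy", "documentary", "romance"]
preference = {genre: 1.0 for genre in GENRES}   # the platform starts with no signal

def next_recommendations(n=5):
    """Sample n recommendations in proportion to current preference weights."""
    weights = [preference[genre] for genre in GENRES]
    return random.choices(GENRES, weights=weights, k=n)

watched = []
for _ in range(15):
    shown = next_recommendations()
    choice = random.choice(shown)     # the viewer picks from whatever is shown
    preference[choice] *= 1.5         # the platform boosts the chosen genre
    watched.append(choice)

print(Counter(watched))  # the genre chosen early tends to dominate the history
```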
This mirrors what happened in the alien experiment. Even when people intended to learn broadly, algorithmic nudging made their learning selective and biased.
The study highlights three major concerns:
1. Early Bias Formation
People can form biased beliefs immediately, even without any initial stance or knowledge. The algorithm doesn’t wait for expertise; it starts shaping perception from the first click.
2. Overconfidence Can Amplify the Problem
When individuals strongly believe incorrect ideas, they may be less likely to seek out new information or challenge their assumptions. Overconfidence powered by limited data is a recipe for misunderstanding.
3. Implications for Children and Young Learners
The researchers specifically worry about young users who consume large amounts of algorithm-curated content. These users may confuse recommendation-driven repetition with reliable knowledge, shaping their worldview in ways they cannot detect.
How This Relates to Broader Algorithmic Behavior
Personalized algorithms are designed to maximize engagement, which often means showing more of what a user has already clicked on. This can create several well-known effects:
Filter Bubbles
Users see only content that aligns with their behavior, leading to narrow exposure.
Echo Chambers
Repetitive exposure to similar information strengthens existing beliefs.
Generalization Errors
People infer broad rules or patterns from limited, biased samples.
What the new study adds is a crucial point:
These distortions happen even when the topic is brand new and neutral, not just in political or emotionally charged contexts.
This suggests algorithmic influence goes deeper than previously assumed. It can subtly shape the very process of exploration and learning.
Why You Should Care
If you’re someone who regularly learns new things online — whether through videos, articles, or recommendation feeds — this research is highly relevant. It suggests that:
- Your first few clicks can define the trajectory of what you learn next.
- Algorithms may limit the diversity of information you see, even if you believe you’re exploring freely.
- You might feel more certain of your understanding than the information you have actually seen justifies.
Being aware of this influence is the first step toward counteracting it. A simple strategy is intentionally seeking out diverse sources, broader categories, and contrasting viewpoints, especially when learning something new.
In an age where more people rely on digital platforms to learn, stay informed, and form opinions, understanding how these algorithms shape our perception is essential.
Research Paper
Algorithmic Personalization of Information Can Cause Inaccurate Generalization and Overconfidence
https://doi.org/10.1037/xge0001763