Observation vs. Expectation – Do We See Reality or What We Expect to See?

You’ve probably heard it said a thousand times: “Observation is theory-laden.” But I want to push that idea a little further—not just that theory shapes what we think we see, but that it actually changes what we literally see. I’m talking about perceptual shifts, not just interpretive ones.

This isn’t just philosophy-of-science wallpaper. 

It matters if we want to understand how scientific knowledge is built, especially when different observers working with the same instruments end up “seeing” different things.

I want to dive into a few striking cases—some historical, some psychological—that suggest observation isn’t neutral ground at all. In fact, once you have a theory, you may not be able to see things the old way ever again.

So let’s start with a few moments from the history of science that make the whole “theory-ladenness” claim feel not just real, but kind of mind-bending.

When Scientists Literally Saw Differently

One of the things I love about digging into history is how often you stumble across moments when someone saw something new—not just in the sense of noticing it, but in the very literal, perceptual sense. And here’s the kicker: what they saw was shaped by what they believed.

Let’s kick off with William Herschel in 1781. Most of us were taught that he “discovered” Uranus. But that’s not quite right. 

What he actually saw was a fuzzy object through a telescope—nothing about it screamed “planet.” At first, he labeled it a comet. 

That made perfect sense within his Newtonian framework: comets were known, planets weren’t expected out there. 

But over time, as the object’s motion didn’t match typical comet paths, and as theoretical expectations shifted, the object took on new meaning. It became “a planet.” Not because it suddenly looked different—but because the category it could belong to had changed.

This isn’t just retrospective labeling—it’s a change in perceptual interpretation. You might say, “Sure, but that’s just semantics.” But is it? 

As Ian Hacking pointed out, categories don’t just sort things; they affect what counts as real.

Another example I keep coming back to is Friedrich Kekulé’s discovery of the benzene ring structure. 

Yes, I know it was a dream. But what’s fascinating is why his dream took the form it did: a snake biting its own tail. The idea of a ring wasn’t floating randomly in his head—it emerged from theoretical tensions in the known bonding behavior of carbon atoms at the time. 

What that dream did was snap his conceptual understanding into a visual, almost sensory, experience. The form of the molecule became visible to him precisely because he was primed for it by theory.

There’s also Alexander Fleming and penicillin. Lots of people had mold in their Petri dishes. It wasn’t rare. But what Fleming saw wasn’t just contamination—it was possibility. 

Why? 

Because his expectations were framed by germ theory, and he was looking at bacterial inhibition through a very specific lens. That’s not just good luck or brilliance (though it’s partly both). It’s a case of expectation altering salience—literally changing which parts of a scene grab your attention.

What these stories have in common isn't just flashes of insight; each shows how theory opens up new visual possibilities. Once you start thinking in a new framework, it's like you've been handed a new set of perceptual tools. 

And that means that even the most “direct” observation—staring through a microscope, looking at the stars, peering into a Petri dish—is never just about data collection. It’s about interpretation layered into perception itself.

Now, I know some of you might be thinking, “Sure, but this is all after-the-fact re-description. People saw what they saw, and only later did theory get layered on.” I get that. But let's be honest: our brains don't work in linear, modular stages. The boundaries between perception and cognition are blurrier than we often admit.

And that brings me to the next thing I want to explore—how modern cognitive science actually backs up this idea, showing that perception isn’t some neutral sensory pipeline, but something much more dynamic, responsive, and yes, theory-laden right down to the level of neural firing.

Coming up next: what predictive coding and visual neuroscience can tell us about how theories literally shape what we see.

How the Brain Helps Prove the Philosophers Right

Let’s shift gears from history to neuroscience, because this is where things get wild. 

A lot of us in philosophy of science are familiar with the idea that observation is conceptually influenced—but when you start looking at the neuroscience of perception, it turns out the brain literally builds your visual experience based on what it expects.

This is where predictive coding enters the picture. If you're not already deep into the neuroscience side, here's the basic idea: instead of passively taking in sensory data, the brain actively predicts what it's going to see, and then compares that prediction to the incoming data.

It’s constantly minimizing the “prediction error.” So your visual experience isn’t just a reflection of the world—it’s a dynamic negotiation between expectation and evidence.

Here’s the key point: expectations are not afterthoughts—they’re built into perception at the ground level.
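To make the idea concrete, here's a minimal sketch of prediction-error minimization. This is a toy illustration, not a model of any actual neural circuit; the function name, parameter values, and precision weights are all invented for the example. The percept settles on a compromise between the prior expectation and the sensory input, each weighted by how much the system trusts it.

```python
def perceive(sensory_input, prior, prior_precision=4.0,
             sensory_precision=1.0, steps=50, lr=0.1):
    """Settle on a percept by iteratively reducing prediction error.

    The percept is pulled toward both the prior expectation and the
    sensory input, each weighted by its precision (inverse variance).
    """
    percept = prior  # start from what the system expects to see
    for _ in range(steps):
        # two error signals: deviation from expectation, deviation from input
        prior_error = prior - percept
        sensory_error = sensory_input - percept
        # gradient step that shrinks the total precision-weighted error
        percept += lr * (prior_precision * prior_error +
                         sensory_precision * sensory_error)
    return percept

# With a strong prior, the final percept sits far from the raw stimulus:
# input of 1.0 plus a confident prior of 0.0 settles near 0.2, not 1.0.
print(perceive(sensory_input=1.0, prior=0.0))
```

The point of the sketch is the weighting: when the prior's precision dominates, the "evidence" barely moves the percept. That is expectation built into perception at the ground level, in miniature.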

Perception Is Inference, Not Recording

Take a study by Summerfield and Egner (2009). They showed that when subjects expected a certain kind of face to appear (e.g., happy vs. neutral), their visual cortex actually responded differently even when the exact same image was presented. The same pixels on the screen, but different neural activation—because the brain was preloading the expectation.

Even more compelling is work in early visual areas like V1 and V2. 

Traditionally, these were thought to be purely bottom-up—just processing raw sensory input. But now we know they’re modulated by feedback loops from higher cortical areas. 

So if you’re expecting to see a vertical line, your brain may pre-activate the neurons tuned to vertical orientations—even before the stimulus arrives.

This is a game-changer. It means theory-ladenness isn’t just happening at the level of description or interpretation—it’s baked into perception.

Top-Down Effects Aren’t Optional

Critics like Firestone and Scholl (2016) have argued that most so-called “top-down” effects on perception are really just post-perceptual: they don’t change what we see, only how we think about it. 

But their criteria for distinguishing perception from cognition are, honestly, a bit conservative. 

When you start taking Bayesian models seriously—especially those that treat perception as hierarchical probabilistic inference—the idea that we can cleanly separate “seeing” from “thinking” falls apart fast.

Even attention—what you notice, what gets filtered out—is deeply influenced by prior beliefs. And if attention modulates early visual processing (which it does), then theory-ladenness goes deeper than skeptics want to admit.

Language, Categories, and Seeing

Let’s also talk about language. Gary Lupyan’s research has shown that hearing a label before seeing an object can alter how quickly and accurately people recognize it.

For example, if you hear “zebra” before seeing an ambiguous animal image, your visual system is biased toward that category. This isn’t just interpretation—there’s faster detection in visual search tasks, and altered perceptual sensitivity.

Language isn’t just tagging perception—it’s shaping it in real time.
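The label-priming effect can be rendered as a toy Bayesian update. Everything here is invented for illustration (the categories, the likelihood numbers, the prior shift): an ambiguous image whose features slightly favor "horse," and a prior nudged upward by hearing the word "zebra" first.

```python
# How well the ambiguous image's features fit each category
# (invented numbers: the image slightly favors "horse")
likelihood = {"zebra": 0.4, "horse": 0.6}

def recognize(prior):
    """Posterior over categories, given a prior set before viewing."""
    unnorm = {c: prior[c] * likelihood[c] for c in likelihood}
    total = sum(unnorm.values())
    return {c: unnorm[c] / total for c in unnorm}

# No label heard: the evidence wins, and "horse" is favored.
flat = recognize({"zebra": 0.5, "horse": 0.5})

# "Zebra" heard first: the same image now reads as a zebra.
primed = recognize({"zebra": 0.8, "horse": 0.2})
```

Identical pixels, identical likelihoods; only the prior differs, and the winning category flips. On this picture, a spoken label isn't commentary on a finished percept; it's an input to the inference that produces the percept.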

So What?

So why does this matter? 

Because it strengthens the case that scientific observation isn’t just “theory-laden” in a soft sense—it’s theory-entangled all the way down. The expectations of the observer don’t just influence their conclusions; they may change what appears in consciousness in the first place.

Which brings us to a practical question: If our seeing is so malleable, how can we ever hope for objective science?

That’s where we turn next—by breaking down the different kinds of theory-ladenness, we can figure out where the risks are greatest, and where we might still salvage some common ground.

Not All Theory-Ladenness Is the Same: A Useful Breakdown

One of the problems with how theory-ladenness gets discussed is that it’s usually treated like a single phenomenon: either observation is theory-laden, or it’s not. But that binary framing hides a lot of important nuance.

So let’s break this down. Here’s a taxonomy of theory-ladenness I’ve found helpful when trying to explain why it’s not just a philosophical quirk, but a critical issue for epistemology, methodology, and even lab design.

1. Perceptual Theory-Ladenness

This is the most radical and controversial form: your theory changes what you literally see. Ambiguous figures (like the duck-rabbit) are a good demonstration of this. What’s interesting is that you can’t see both interpretations at the same time—you flip between them. 

Similarly, in scientific practice, a shift in theoretical perspective can make one interpretation “pop out” while others vanish from view. You see the double helix instead of two strange curves. You see blood circulation instead of random flows.

And once you’ve seen it one way, it’s very hard to unsee it.

2. Instrumental Theory-Ladenness

This one is more common and easier to accept: scientific instruments don’t just record—they interpret. Think of Galileo’s telescope. People argued he wasn’t seeing celestial bodies, but optical artifacts. 

Or think about cloud chambers—you need a theory to interpret the tracks as evidence of particles. The same data can be read entirely differently depending on your assumptions about what kinds of entities exist.

So instruments aren’t neutral windows; they’re theory-guided interfaces.

3. Linguistic Theory-Ladenness

This is the Sapir-Whorf side of things. The idea here is that what we can describe influences what we can notice or even distinguish.

For example, some languages divide up color space differently, and speakers actually show different perceptual sensitivities in visual discrimination tasks.

In science, linguistic framing can shape what counts as “an observation” in the first place. If a community lacks the conceptual vocabulary to describe a phenomenon, it often goes unrecorded entirely.

4. Attentional Theory-Ladenness

This one flies under the radar, but it’s a big deal. What you pay attention to in a scene is not random—it’s guided by your training, your research questions, and your theoretical goals.

Think about radiologists. 

When shown complex images, trained experts fixate on very different parts of the image than novices do—because they’ve learned what to look for. That’s theory-ladenness at the level of selective attention. And attention, as we know, influences what even gets processed visually.

5. Normative Theory-Ladenness

Finally, we have cases where the significance of an observation is theory-dependent. 

For example, an anomaly only shows up as an anomaly if your theory tells you what counts as normal. Think of Kuhn’s notion of “puzzle-solving”—if your paradigm doesn’t have a slot for a new result, it doesn’t register as important. It’s just noise.

This form of theory-ladenness doesn’t necessarily change perception, but it changes the epistemic weight we assign to what we see.


By teasing these apart, we can be more precise about when theory-ladenness is a problem—and when it might just be the cost of doing science in a structured way.

Next up, we’ll talk about how this ties into diversity of perspectives—not as a political add-on, but as an epistemic necessity.

Why We Actually Need Different Viewpoints to See Clearly

Let’s be honest: when people talk about the need for diversity in science, it often gets framed in terms of fairness, representation, or ethics. And those are important. But there’s another, less appreciated reason to care about diversity—it helps correct for theory-ladenness.

If observation is shaped by expectation, and expectation is shaped by background assumptions, then it follows that having multiple backgrounds helps us see more.

Longino’s Argument for Pluralism

Philosopher of science Helen Longino has made this case powerfully. Her idea of “transformative criticism” isn’t just about debate—it’s about socially distributed epistemic labor.

Different scientists bring different assumptions to the table, which means they’ll notice different things, question different assumptions, and interpret the same data in different ways. 

And that’s good. 

Objectivity, on this view, is a product of active disagreement—not its absence.

What This Looks Like in Practice

Take feminist critiques of primatology. Before those voices entered the field, interpretations of animal behavior were often steeped in patriarchal assumptions—aggression was highlighted, nurturing was ignored. 

The animals didn’t change. The lens did. And once the field diversified, whole new behaviors became visible, studied, and explained.

Or consider how indigenous ecological knowledge has expanded our understanding of biodiversity and sustainability. 

That’s not just cultural data—it’s epistemic access to realities Western science had ignored.

AI, Perception, and Bias

Here’s a twist: this applies to machine perception, too. In computer vision and AI, we’re learning the hard way that training data shapes what systems “see.” 

If your dataset is narrow, your model becomes blind to everything outside that frame. The same goes for humans.

If your lab is homogeneous, your science is, too.

Epistemic Humility and Methodological Pluralism

This all points to the value of epistemic humility.

If none of us sees the whole picture—because we’re all theory-laden in different ways—then the only way forward is collaboration across perspectives. That’s not a compromise. That’s the best route to robust knowledge.

This doesn’t mean anything goes. But it does mean we should stop pretending that one perspective—especially our own—is the “view from nowhere.” There’s no God’s-eye view. 

But a well-structured chorus of perspectives? 

That gets us closer.


Final Thoughts

If there’s one thing I hope you’re taking away, it’s this: theory-ladenness isn’t just a philosophical slogan—it’s a lived reality of scientific work.

We don’t just interpret the world through our theories—we see it that way. The good news is, that’s not a fatal flaw. It’s a call to build science in a way that acknowledges our partial perspectives and leverages them.

The more eyes, the better. Especially when those eyes see differently.