The Role of Observation – Can We Trust What We See?
Observation gets treated like the sacred bedrock of science. We trust it, rely on it, build entire theories around it. But is that trust always deserved?
I’ve been thinking a lot lately about how much of what we “see” is actually shaped before we ever look through a microscope or telescope. We tend to act like instruments extend our senses in some clean, neutral way—but they don’t. Our tools, our training, our expectations—they all shape what we’re able to notice, and even what counts as “data.”
This isn’t just some abstract philosophical quibble either. There’s a growing tension in modern science between the appearance of observation and the reality of interpretation. And I think it’s worth digging into.
So let’s take a closer look—first by going back in time a bit to see where this tension really started to show.
Observation Isn’t Pure: It’s Always Theory-Soaked
We’ve all heard the phrase “theory-laden observation.” It’s tossed around so often that it almost sounds cliché now—but it’s still profoundly true, and I’d argue, still underappreciated in everyday scientific work.
Let me show you why.
A Historical Case – Telescopes and the Moon
When Galileo first pointed his telescope at the moon, he saw craters and mountains. That sounds obvious to us now, but to his contemporaries—who were still invested in Aristotelian cosmology—it was scandalous.
Why?
Because the theory said celestial bodies were perfect spheres.
So, some early observers literally couldn’t see the roughness; they interpreted the shadows as optical illusions or lens defects. The same sensory input led to wildly different conclusions because of different theoretical commitments.
That’s not just historical trivia—it’s a reminder: what we observe is always shaped by what we expect to observe.
Microscopy and Spontaneous Generation
Jump ahead a few decades, deeper into the 17th century. Early microscopists like Robert Hooke and Antonie van Leeuwenhoek opened up the microbial world. But again, what they saw was immediately filtered through competing theories.
Some used microscopic observations to support spontaneous generation (life arising from non-life), because they interpreted wriggling animalcules as proof of life springing from “vital forces” in decaying material. Others saw the same organisms and argued the exact opposite—that these were contaminants, not proof of generation.
Same images, opposite takeaways. Sound familiar?
Seeing Planets That Weren’t There
Here’s another weird example I love: the 19th-century astronomers who were convinced there was a planet called “Vulcan” orbiting between Mercury and the Sun. Multiple astronomers saw it.
They logged sightings.
They even plotted trajectories. But as Einstein later showed, the anomalous precession of Mercury’s perihelion didn’t require a hidden planet; it fell straight out of general relativity.
So what happened?
Were they lying?
Deluded?
Nope. They saw what they expected to see—literally. The theoretical framework created a perceptual readiness that made a non-existent object visible.
This is where it gets juicy: if theory can make us see things that aren’t there, how confident can we be in what we see through today’s even more complex tools?
The “Objectivity” Problem
You might say, “Sure, but we’ve got better checks now.” And that’s true. We have calibration routines, peer review, cross-validation, automated systems. But none of that removes the conceptual lens.
The moment you decide where to point a telescope, what resolution to scan at, which signal counts as noise—you’re already swimming in theory.
What’s more, modern instruments don’t just extend vision—they transform it. An electron microscope doesn’t let us “see small things.” It shows us contrast patterns generated by scattering, which we then interpret into images of cells or atoms. There’s no unmediated seeing happening here. The image is a model.
We’ve moved from “I see it with my own eyes” to “the data pipeline generated an interpretable artifact.” That’s not a bad thing—it’s just a different thing. But let’s not pretend it’s neutral.
Why This Still Matters
You might already buy all this in principle. But how often do we apply it to our own research habits? How often do we stop and ask, “Am I actually observing something, or am I confirming what I thought I’d see?”
If we want to be intellectually honest—and push science forward—we have to be willing to doubt our own eyes. Or at least, recognize that our “eyes” now include stacks of software, assumptions, and algorithmic tweaks.
And to me, that makes observation more interesting—not less. Because now, instead of pretending we’re detached observers, we can start thinking more carefully about how we’re co-producing what we call reality.
Next up: let’s talk about modern tools and why “seeing” something in 2025 is a very different beast than it used to be.
Seeing Through Machines: How Modern Tools Mediate Reality
Let’s talk about the machines.
We love to say instruments “extend our senses.” And sure, that’s technically true—but it massively undersells what’s really happening. In modern science, instruments don’t just enhance perception—they transform it. Sometimes they generate the very thing we claim to observe.
Let me walk you through a few examples that really drive this home.
Electron Microscopes: Seeing What’s Not There (Exactly)
Take the scanning electron microscope (SEM). We often act like it’s a super-powered eyeball. But what it actually gives us is a map of surface topography, built point by point from the secondary electrons knocked loose as a focused beam scans the sample.
It’s then color-mapped, contrast-boosted, and cleaned up by software.
The image looks tactile and “real,” but it’s a processed output, not a direct view. No photons, no color, no “seeing” in the traditional sense.
When students or even researchers say, “We saw a virus particle,” I can’t help but ask: what do you mean by saw?
What you saw was a data product—visualized, interpreted, and refined. The raw signal was unintelligible until theory and computation stepped in.
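To make that concrete, here’s a minimal sketch (in Python, on hypothetical data) of the kind of post-processing every SEM image quietly goes through. The noise model, percentile cutoffs, and colormap are all my invented choices, not any vendor’s actual pipeline; the point is just that contrast and color are decisions, not observations.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical raw detector output: a 2-D array of electron counts.
# Real SEM software does far more (drift correction, denoising, etc.);
# this only shows that the "image" is a transformed signal.
rng = np.random.default_rng(0)
raw = rng.poisson(lam=40, size=(256, 256)).astype(float)

# Contrast stretch: map the 2nd-98th percentile range onto [0, 1].
lo, hi = np.percentile(raw, [2, 98])
stretched = np.clip((raw - lo) / (hi - lo), 0.0, 1.0)

# False color: electrons carry no color, so any colormap is a choice.
plt.imshow(stretched, cmap="inferno")
plt.title("A processed 'image' of raw electron counts")
plt.colorbar(label="normalized signal")
plt.show()
```

None of these steps is dishonest; they’re what makes the signal legible at all. But every one of them is a choice.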
Gravitational Waves: Observation Without Sight
Or let’s go bigger: LIGO. The discovery of gravitational waves was a landmark moment, and deservedly so. But think about how we observed them.
No one saw a wave. No one saw two black holes merge.
What actually happened: two 4-kilometer interferometers detected a distortion thousands of times smaller than the width of a proton, and that signal was processed through a mountain of filters and matched against waveform templates derived from general relativity’s predictions.
Only when the data matched certain waveform models did we say, “Yes, that’s a detection.”
We didn’t just observe gravitational waves—we recognized them. There’s a difference.
The pattern didn’t declare itself as obvious truth. It was flagged as meaningful because of a pre-existing framework.
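Here’s a toy illustration of that “recognition” step. This is not LIGO’s actual pipeline, just the bare logic of matched filtering with made-up numbers: notice that the detection score only exists relative to a template you chose in advance.

```python
import numpy as np

rng = np.random.default_rng(42)
fs = 4096                          # toy sample rate in Hz
t = np.arange(0, 1.0, 1 / fs)

# Stand-in "template": a rising-frequency chirp, loosely evoking the
# inspiral waveforms general relativity predicts for merging black holes.
template = np.sin(2 * np.pi * (30 + 120 * t) * t)

# Simulated strain data: that same chirp buried in much louder noise.
data = 0.3 * template + rng.normal(scale=1.0, size=t.size)

# Matched filtering at its simplest: project the data onto the template.
# You can only "detect" shapes you already have a model for.
def match_score(x):
    return np.dot(x, template) / np.linalg.norm(template)

print(f"score with signal present: {match_score(data):6.2f}")
print(f"score with noise only:     {match_score(rng.normal(size=t.size)):6.2f}")
```

A high score doesn’t mean you saw a wave; it means the data resembles a waveform you asked about. LIGO’s real analysis runs enormous banks of such templates and then does far more careful statistics, but the epistemic structure is the same.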
Exoplanets: Observing Absences, Not Planets
Same with exoplanets. One of the main ways we detect them is via the transit method—watching a star’s brightness dip as a planet crosses in front of it. Again, we don’t actually see the planet.
We observe a loss of light, run some models, and infer a body must be there.
Kepler data is a great example. It’s noise-heavy, and identifying a planet requires aggressive data smoothing, detrending, and probabilistic vetting.
And even then, we often say something like, “We’re 99% confident this dip corresponds to a planetary transit.”
That’s not seeing. That’s statistical inference plus faith in the models.
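If you want to see how little “seeing” is involved, here’s a bare-bones sketch of the transit idea on synthetic data. The period, depth, and cadence are all invented for illustration; real pipelines (e.g., box least squares) scan thousands of trial periods and then vet candidates statistically.

```python
import numpy as np

rng = np.random.default_rng(7)
time = np.linspace(0.0, 10.0, 2000)       # days, hypothetical cadence

# Synthetic light curve: flat star plus noise, with a 0.5% periodic dip.
flux = 1.0 + rng.normal(scale=0.002, size=time.size)
flux[(time % 2.5) < 0.1] -= 0.005         # period 2.5 d, duration 0.1 d

# Phase-fold at a trial period and compare in-transit flux to the rest.
def dip_significance(period, duration):
    inside = (time % period) < duration
    depth = flux[~inside].mean() - flux[inside].mean()
    err = flux[~inside].std() / np.sqrt(inside.sum())
    return depth / err

print(f"true period (2.5 d):  {dip_significance(2.5, 0.1):5.1f} sigma")
print(f"wrong period (1.7 d): {dip_significance(1.7, 0.1):5.1f} sigma")
```

Fold at the right period and a “planet” leaps out; fold at the wrong one and it dissolves into noise. The planet lives in the folding.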
Medical Imaging: fMRI and the Illusion of Precision
One more: functional MRI. I’ve seen these colorful brain maps used to justify everything from cognitive biases to consumer behavior theories. But the spatial resolution is coarse (voxels of 2–3 mm), the hemodynamic signal lags neural activity by seconds, and there’s heavy pre-processing at every step.
The famous “dead salmon” experiment (2009) showed that if you don’t correct for multiple comparisons, you can detect brain activity in a dead fish. That was hilarious—but also deeply revealing.
It means our observations are only as valid as the statistical and computational choices we make. The “observation” is an inference embedded in layers of theory and math.
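The salmon result is easy to reproduce in spirit. Here’s a small simulation (pure noise, no fish required) showing how testing enough voxels manufactures “activations” unless you correct for multiple comparisons:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_voxels, n_scans = 100_000, 30

# Pure noise: no brain, no salmon, no signal anywhere.
data = rng.normal(size=(n_voxels, n_scans))

# Test every voxel for "activation" (mean significantly nonzero).
t_stat, p_val = stats.ttest_1samp(data, popmean=0.0, axis=1)

alpha = 0.001
print(f"uncorrected 'active' voxels: {(p_val < alpha).sum()}")             # ~100
print(f"Bonferroni 'active' voxels:  {(p_val < alpha / n_voxels).sum()}")  # ~0
```

At p < 0.001 per voxel, roughly a hundred of a hundred thousand silent voxels will light up by chance alone. Bonferroni is a blunt fix (real fMRI work uses subtler corrections), but it makes the point: the “observation” depends on a statistical decision.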
So where does that leave us?
It means modern scientific observation is often a data-construction process. We’re not just seeing—we’re building visibility. Tools don’t just reveal—they define. And that forces us to rethink what it means to say something is “observed” in the first place.
How Theory Shapes What We Notice (and What We Ignore)
Now that we’ve seen how tools mediate our access to data, let’s go deeper into the role of theory—because it’s not just about the gadgets. It’s about the mental models we bring to every stage of research.
Science Doesn’t Just Observe—It Frames
We like to believe that observation precedes theory. But in practice, it rarely does.
What gets labeled “observation” is almost always shaped by prior assumptions: where to look, what to measure, what’s signal vs. noise. This isn’t corruption—it’s necessity. There’s just too much data otherwise.
But it has consequences.
Quantum Mechanics: When the Theory IS the Observation
The classic case is quantum mechanics. Take the double-slit experiment. Whether or not you observe the electron’s path changes the outcome. That’s not an illusion—it’s fundamental.
And yet, even here, how we “observe” depends on the setup, which is chosen based on interpretive frameworks (Copenhagen, many-worlds, pilot-wave, etc.). Observation in quantum physics isn’t just about measuring—it’s about enacting a reality.
As Bohr said, “The procedure of measurement has an essential influence on the conditions on which the very definition of the physical quantities in question rests.”
So what does that say about the idea of objective observation?
Cosmology: Building the Sky We See
Modern cosmology is another striking example. We assume a ΛCDM (Lambda Cold Dark Matter) model. It’s the standard. But when we build sky surveys or model redshifts, those assumptions get baked into the data processing.
We’re not just capturing photons—we’re cleaning, calibrating, interpreting, and fitting them to models. A bump in the data? Could be cosmic inflation. Or it could be instrument noise. The decision comes down to theoretical plausibility.
In other words: what we see in the cosmos depends on what we believe about the cosmos.
Bayesian Data Analysis: Theory as a Prior
In statistics, this idea is formalized. Bayesian inference explicitly requires you to start with a prior belief. Data doesn’t speak for itself—it updates belief states.
Scientists using Bayesian methods must choose priors carefully. But that also means observation is inseparable from belief structures.
This isn’t a flaw—it’s honest. But it dismantles the myth of “just the facts.”
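A tiny worked example makes this vivid. Below, two analysts see the same seven successes in ten trials but start from different priors. The scenario and numbers are invented; the Beta-Binomial pair is just the cleanest conjugate case.

```python
from scipy import stats

# Same data for everyone: 7 "detections" in 10 runs (invented numbers).
successes, trials = 7, 10

# Two analysts, two priors over the underlying rate theta.
priors = {
    "skeptic, Beta(1, 9)":  (1, 9),   # expects theta near 0.1
    "agnostic, Beta(1, 1)": (1, 1),   # uniform over [0, 1]
}

for name, (a, b) in priors.items():
    # Beta prior + binomial likelihood -> Beta posterior, in closed form.
    posterior = stats.beta(a + successes, b + trials - successes)
    print(f"{name}: posterior mean = {posterior.mean():.2f}")
```

Same data, defensibly different conclusions (0.40 vs. 0.67). The prior isn’t a bug in the analysis; it’s the formal residue of everything the analyst believed before looking.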
What’s Noise? What’s Discovery?
I’ve been in lab meetings where one person calls a signal “anomalous” and another calls it “the result.” The difference? Expectations. One person is primed to find novelty; the other’s defending a known theory.
Even at the cutting edge, we’re constantly filtering based on what fits.
So maybe the deeper question isn’t just can we trust what we see, but: who gets to decide what counts as seeing?
Five Ways to Think About Whether We Can Really Trust Observation
Okay, we’ve pulled apart observation from multiple angles—historical, technological, theoretical. Now let’s zoom out and look at how different philosophical perspectives try to make sense of all this.
I’m going to lay out five ways experts think about scientific observation, and why each one has something valuable to offer. Think of these as interpretive tools, not dogmas.
1. Instrumental Realism
This is the most straightforward: instruments extend our senses, and if they give reproducible results across labs and contexts, that’s enough to trust them.
If three different interferometers detect a gravitational wave, or a particle shows up in the expected energy range in a collider—we believe it exists. End of story.
This works well in physics and engineering. But it doesn’t explain why instruments sometimes agree on things that later turn out to be wrong. (See: Vulcan.)
2. Constructive Empiricism (van Fraassen)
Here’s a more cautious take. Bas van Fraassen argues that science doesn’t need to tell us what’s real. It only needs to deliver models that are empirically adequate—that is, they correctly predict observable phenomena.
Under this view, we don’t commit to whether quarks or strings “exist.” We just care that the theories involving them work. It’s agnostic, and kind of refreshing.
But it also raises the question: what counts as observable anymore? Especially when so much is mediated?
3. Critical Realism
This one says: yes, there’s a real world out there. But our access to it is always partial and shaped by our tools, language, and social context.
Theories help us get at the real structure of the world, even if imperfectly. This approach is popular in social sciences but gaining traction in physics too.
It encourages humility—trust the data, but not too much.
4. Phenomenological Skepticism
This is the radical one. Observation, according to this view, is always subjective—there’s no escaping our embeddedness. You’re not an eye floating in space. You’re a body, in a lab, trained in a culture, primed by experience.
This doesn’t mean truth is impossible—it just means we have to be aware of the filters. Observation is situated.
It’s not mainstream in physics, but it’s hugely influential in philosophy of science and STS (science and technology studies).
5. Agential Realism (Barad)
Karen Barad takes it even further. In her framework, observer and observed aren’t separate. They’re entangled. Measuring something isn’t a passive act—it’s an intra-action that produces the object and the observer simultaneously.
In that light, you can’t ask, “Do we observe reality?” because reality emerges through observation.
Yeah, it’s mind-bending—but increasingly relevant in quantum contexts.
Final Thoughts
So, can we trust what we see?
Well—yes, and no. Observation is still the lifeblood of science. But it’s not the transparent window we sometimes pretend it is. It’s more like a lens—curved, complex, and in constant need of calibration.
If we want to get closer to truth, we don’t need to throw out observation. We just need to treat it with the curiosity, skepticism, and philosophical depth it deserves.
Because the question isn’t whether we can trust our eyes—it’s how we can learn to see better.