Models and Maps: How Simplified Theories Explain a Complex World

It’s a bit weird that we keep using models we know aren’t true. 

The Bohr atom? 

Totally outdated.

Rational agents in economics? 

Yeah, good luck finding one in the wild. 

And yet, we keep teaching them, citing them, and sometimes even building on them. 

Why?

Because models aren’t supposed to be perfect. 

They’re supposed to be useful, not true. George Box said it best: “All models are wrong, but some are useful.” 

But I think most of us already know that. The deeper, trickier question is: why are these “wrong” models so often the foundation of right ideas? What does that say about how we do science—or even what we think science is?

In this piece, I want to look at how simplification powers discovery, when it breaks down, and why some models earn a place in our toolbox even after we know they’re broken.

Simple Models That Changed Everything

Let me start with one of the most obviously “wrong” models we still love: Bohr’s model of the atom.

The idea that electrons orbit the nucleus like planets around the sun? 

Totally busted. Quantum mechanics blew that out of the water a century ago. 

But here’s the thing: the Bohr model explained the spectral lines of hydrogen before quantum mechanics could even explain itself. It introduced quantized orbits, which led to quantum jumps, which eventually helped shape the modern quantum framework.
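To see how much that “wrong” picture actually buys you, here is a minimal sketch in Python. It uses nothing beyond the Rydberg formula that falls straight out of Bohr’s quantized orbits; the constant is the standard textbook value, and the rest is arithmetic.

```python
# Bohr's quantized orbits give hydrogen energy levels proportional to -1/n^2.
# An electron dropping from level n to level 2 emits a photon whose wavelength
# follows the Rydberg formula -- the visible (Balmer) lines of hydrogen.

RYDBERG = 1.0973731568e7  # Rydberg constant, in 1/m

def balmer_wavelength_nm(n):
    """Wavelength (nm) of the photon emitted in the n -> 2 transition."""
    inverse_wavelength = RYDBERG * (1 / 2**2 - 1 / n**2)  # in 1/m
    return 1e9 / inverse_wavelength

for n in range(3, 7):
    print(f"n={n} -> 2: {balmer_wavelength_nm(n):6.1f} nm")
# ~656, 486, 434, 410 nm -- the observed visible hydrogen lines.
```

Those are the Balmer lines, to within a fraction of a nanometer. Not bad for a planetary cartoon.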

Was it wrong? Absolutely. But it was the right kind of wrong—the kind that gives you traction, helps you think, and sets you up for the next leap.

And this isn’t a one-off. Physics is full of these simplifications that are scientifically obsolete but historically essential. 

Take Galileo’s frictionless planes. You’d never find one in the real world, but that idealized surface let us isolate acceleration from all the messiness of resistance. Or think about ideal gases. Do they exist? No. But they’ve taught generations how thermodynamics works.

So what’s really going on here?

The Power of Strategic Ignorance

All of these models share one thing: they ignore reality—but do it on purpose. They carve away complexity until you’re left with something solvable. Something thinkable. And that’s not a flaw; it’s a strategy. It’s the strategy.

In philosophy of science, we sometimes frame this as “idealization.” But I think that word doesn’t quite capture how bold—and how clever—these moves are. We’re not just smoothing over details. 

We’re engineering cognitive leverage. We’re crafting tools that aren’t real, but that make real discovery possible.

Bohr’s atom let us visualize something invisible. Galileo’s ramp let us isolate a principle that was being drowned in friction. These aren’t failures to represent the world—they’re acts of imagination that make the world intelligible.

Models as Scaffolding

There’s a fascinating idea in cognitive science called scaffolding—the notion that temporary supports can help build permanent knowledge. And that’s exactly what these early models do.

For instance, even though the Bohr atom is wrong, students still find it easier to grasp than a probabilistic electron cloud. It builds intuition. And once you’ve got that, you can tear it down and move on to the Schrödinger equation.

Same with Newtonian mechanics.

It’s not accurate at relativistic speeds, but it still gets you to the Moon. And more importantly, it teaches you how to think about forces and motion. It’s a mental framework that works within its domain.

So maybe we need to stop thinking of these models as lies or mistakes. They’re more like maps that don’t show every street, but still get you where you’re going.

We Don’t Need Truth. We Need Traction.

There’s a deeper philosophical point here, and I think it’s worth stressing: truth is not the only goal in science. Predictive power, explanatory clarity, conceptual accessibility—these all matter too.

And sometimes, the truth is just too messy to be useful. So we build simplified stand-ins—not because we’re lazy or naive, but because they’re the only way in. They give us traction, and that’s what moves science forward.

When Models Fail (and Why That Actually Helps Us)

Let’s be honest—models don’t always just simplify. Sometimes they distort. Sometimes they break. And when they do, the results can be pretty serious. But here’s the twist: those failures aren’t just cautionary tales. They’re productive.

A failed model doesn’t just tell us that we got something wrong. It often tells us exactly what kind of complexity we were ignoring—and why that complexity matters. In fact, some of the biggest scientific and technological advances have come directly from hitting the limits of oversimplified models.

Let’s walk through a few examples where simplifications weren’t just off—they caused major blind spots.


1. Linear Models in Climate Science

Linear models are attractive. They’re neat, they behave, they’re easy to analyze. But the climate system? 

It doesn’t care about your math preferences. It’s nonlinear to its core.

For years, early models underestimated positive feedback loops—melting ice reducing albedo, methane release from thawing permafrost, and so on. Those aren’t “corrections”; they’re structural features of a nonlinear, chaotic system. And linear models—even sophisticated ones—simply weren’t built to see them coming.

The result? Policy advice that sounded confident but was, in hindsight, wildly over-optimistic. The model failed because it didn’t match the system’s complexity—and that failure was a wake-up call.
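If that failure mode feels abstract, here is a deliberately crude toy in Python. It is not a climate model; the forcing, threshold, and feedback numbers are invented purely to show the shape of the problem: fit the early, quiet years and extrapolate linearly, and you miss what happens once a feedback switches on.

```python
# Toy system: a steady "forcing" nudges temperature up each year, and past a
# threshold a self-reinforcing feedback kicks in (think ice loss lowering albedo).
# All numbers are invented for illustration -- this is not a climate model.

def simulate(years=100, forcing=0.02, threshold=1.0, feedback=0.03):
    temps = [0.0]
    for _ in range(years):
        t = temps[-1]
        amplification = feedback * t if t > threshold else 0.0  # the nonlinearity
        temps.append(t + forcing + amplification)
    return temps

temps = simulate()
linear_extrapolation = temps[20] / 20 * 100  # project the first 20 years forward
print(f"linear extrapolation at year 100 : {linear_extrapolation:.1f}")
print(f"nonlinear simulation at year 100 : {temps[-1]:.1f}")
# The linear story and the actual trajectory agree early on -- and then they don't.
```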


2. The Efficient Market Hypothesis (EMH)

Economists will know this one well. The EMH assumes that all actors are rational and that prices instantly reflect all available information. That’s elegant. It’s clean. It’s… fiction.

Leading up to the 2008 financial crisis, risk models built on EMH assumptions were being used to price derivatives and design financial instruments. The model said that diversification would smooth out risk. But in reality, irrational behavior, feedback loops, and unmodeled correlations led to systemic collapse.
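Here is a tiny Monte Carlo sketch of that specific failure, using numpy. It is not any real pricing model, and every number in it is made up; the point is only that a loss that looks essentially impossible under an independence assumption becomes routine once you add a single shared stress factor.

```python
import numpy as np

# 100 assets, each with a small chance of blowing up. Under independence,
# pooling them makes a big portfolio loss vanishingly rare. Add a common
# "market stress" factor that drags assets down together, and the tail risk
# reappears. All parameters are invented for illustration.

rng = np.random.default_rng(0)
n_assets, n_trials = 100, 100_000

def fraction_defaulting(correlated):
    stress = rng.standard_normal(n_trials)             # one shared shock per trial
    idio = rng.standard_normal((n_trials, n_assets))    # per-asset shocks
    shocks = 0.8 * stress[:, None] + 0.6 * idio if correlated else idio
    defaults = shocks < -2.0                             # an asset "blows up"
    return defaults.mean(axis=1)                         # fraction of portfolio lost

for correlated in (False, True):
    losses = fraction_defaulting(correlated)
    tail = (losses > 0.10).mean()  # probability that over 10% of assets default
    print(f"correlated={correlated}: P(>10% of assets default) ~ {tail:.4f}")
```

Under independence the tail probability comes out effectively zero; with the shared factor it lands somewhere around a few percent. Same assets, same individual risk; only the correlation assumption changed.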

The beauty of EMH—its formal simplicity—was also its fatal flaw. And what’s worse, the model’s apparent rigor gave decision-makers false confidence. It didn’t just mislead—it concealed the fact that it was misleading.


3. Hardy-Weinberg Equilibrium in Population Genetics

Hardy-Weinberg is a classic. But its assumptions—no selection, no mutation, no migration, random mating, and an effectively infinite population—are almost never met in real populations. That’s not a problem if you know what you’re doing. But if you treat H-W equilibrium as a realistic baseline, you’re going to miss the complexity of actual evolutionary dynamics.

In small populations, for example, genetic drift and stochasticity can dominate, but those don’t show up in the idealized model. This led to early misunderstandings in conservation biology, especially around inbreeding and population viability.
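To make the mismatch concrete, here is a quick Wright–Fisher-style simulation in Python. The population size, starting frequency, and number of generations are arbitrary; the point is that the idealized model says the allele frequency should sit still forever, while small populations wander, fix, and lose alleles entirely on their own.

```python
import random

# Hardy-Weinberg: with no selection, mutation, or migration, random mating,
# and an effectively infinite population, allele frequency p never changes.
# In a small population, random sampling of gametes (drift) takes over.
# All parameters below are arbitrary and purely illustrative.

def drift(p0=0.5, pop_size=50, generations=200, seed=0):
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        # Each of the 2N gene copies in the next generation is drawn at
        # random from the current allele pool (binomial sampling).
        copies = sum(rng.random() < p for _ in range(2 * pop_size))
        p = copies / (2 * pop_size)
        if p in (0.0, 1.0):
            break  # allele fixed or lost; without mutation there's no way back
    return p

finals = [drift(seed=s) for s in range(20)]
absorbed = sum(p in (0.0, 1.0) for p in finals)
print("Idealized prediction: p stays at 0.50 indefinitely")
print(f"Drift in 20 small populations: {absorbed}/20 have already fixed or lost the allele")
```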


4. SIR Models in Epidemiology

During COVID, SIR models (Susceptible-Infectious-Recovered) were everywhere. They’re simple and powerful—but they also ignore social structure, mobility networks, and behavioral adaptation.

By treating everyone as equally likely to infect everyone else, they missed crucial heterogeneity: superspreaders, clustered networks, changing policies. In the early days, this led to both under- and over-estimates of outbreak dynamics. Later approaches, such as agent-based models, filled in some of these gaps, but it took time—and missteps.
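For anyone who hasn’t met the model, the entire machinery fits in a few lines. Here is a minimal sketch with made-up parameters: one transmission rate, one recovery rate, everyone mixing with everyone. That homogeneity is exactly the simplification at issue.

```python
# Minimal SIR: one transmission rate (beta), one recovery rate (gamma), and a
# perfectly mixed population. No superspreaders, no networks, no behavior
# change. The parameters are illustrative, not fitted to any real outbreak.

def sir(beta=0.3, gamma=0.1, days=160, i0=1e-4):
    s, i, r = 1.0 - i0, i0, 0.0     # fractions: susceptible, infectious, recovered
    peak_i, peak_day = i, 0
    for day in range(1, days + 1):
        new_infections = beta * s * i   # everyone is equally exposed to everyone
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        if i > peak_i:
            peak_i, peak_day = i, day
    return peak_i, peak_day, r

peak_i, peak_day, final_r = sir()
print(f"basic reproduction number R0 = {0.3 / 0.1:.1f}")   # matches the defaults above
print(f"peak: {peak_i:.1%} of the population infectious on day {peak_day}")
print(f"final size: {final_r:.1%} ever infected")
```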


5. Early Neural Models in Cognitive Science

For decades, the brain was modeled as a set of isolated signal-processing units—like early feedforward neural nets. But real cognition is embodied, dynamic, and interactive. Brains don’t just process input—they regulate movement, emotion, and attention, and they adapt based on lived experience.

Overly simplified models of cognition led to false expectations about AI, as well as misunderstandings about human learning and intelligence. The field is still recovering from those early blinders.


So What Do These Failures Teach Us?

Each of these examples shows how a model’s assumptions become liabilities when we try to generalize them beyond their intended scope. But that’s not the end of the story.

In each case, failure didn’t end the conversation—it pushed it forward. We had to develop better models, or even whole new ways of thinking (nonlinear systems, behavioral economics, embodied cognition, network-based epidemiology). That’s growth.

What’s exciting to me is that models fail in specific, instructive ways. And if we’re paying attention, we can use that to diagnose the mismatch between our theories and the world.

So yeah, models fail. That’s not the problem. The problem is when we forget they can.

Why Models Don’t Need to Be “True” to Be Powerful

Here’s a question I love throwing at a room full of scientists: Do you actually believe your models are “real”? You’d be surprised how many thoughtful people hesitate.

The more you dig into modeling, the more you realize: the relationship between a model and the world isn’t representational in any naive sense. It’s not like holding up a mirror. It’s more like building a prosthetic—a tool that lets you interact with something you can’t touch directly.


Realism, Anti-Realism, and the Messy Middle

This brings us into familiar philosophical territory: realism vs. anti-realism. If your model gets the numbers right, does that mean it’s telling you something “true” about the world?

Some say yes—scientific realists argue that good models must be latching onto real structures out there. Others—instrumentalists—say a model doesn’t need to be true, just useful.

But there’s a third view I think more of us should take seriously: structural realism. This view, championed by John Worrall, says that while the entities our theories posit may come and go (think of the luminiferous aether), the structures we uncover tend to survive. Bohr’s atom? Wrong about electrons as planets—but right about quantized energy levels.

So even “wrong” models might capture something structurally real. That’s the sweet spot.


The Map Is Not the Territory—And That’s Good

We’ve all heard the metaphor: the map is not the territory. But Borges took it to the extreme with “On Exactitude in Science,” describing a map so detailed it matched the territory 1:1—and was completely useless.

A model that tries to include everything becomes indistinguishable from the system itself—and just as hard to understand. So models have to lie. They have to omit, distort, simplify. That’s the whole point.

What matters is whether they lie in a way that reveals something.


Epistemic Opacity and Complexity

In fields like AI and systems biology, we’re dealing with models so complex that no single person can fully understand them. This is epistemic opacity—the idea that our tools have outgrown our cognitive capacity.

Does that mean we should give up? Not at all. It just means we need better ways to track what the model is doing, even if we don’t fully grasp every line of code or equation. And maybe we need to redefine what understanding even means in this context.


The Work That Models Actually Do

We often treat models as passive descriptions. But they’re active tools—they generate hypotheses, guide experiments, clarify assumptions. Nancy Cartwright calls them “nomological machines”—engines that produce regularities under specific conditions.

They’re not just mirrors of nature. They’re constructive instruments. And once you see that, the question isn’t “Is the model true?” but “What work is it doing?”

Modeling the World with a Toolbox, Not a Single Lens

If there’s one mistake we keep repeating in science, it’s assuming there’s one best model for a given system. That’s comforting. But it’s almost never true.

The truth is, complex systems demand multiple perspectives. And if we want to understand them, we need to embrace model pluralism—not just in theory, but in actual practice.


1. Ensemble Modeling

This is standard in weather forecasting now. Instead of relying on a single simulation, forecasters run dozens (or hundreds) of them with slightly perturbed initial conditions, parameters, or model physics. The goal isn’t to find the “right” model—it’s to explore a range of plausible futures.
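The mechanics are simple enough to sketch. The “model” below is just a toy growth equation with one uncertain parameter, nothing like a real weather ensemble, but the logic is the same: run it many times and report the spread, not a single number.

```python
import random

# Ensemble idea in miniature: perturb the uncertain parameter, run many
# simulations, and report the spread instead of one "best" trajectory.
# The toy model and all numbers here are placeholders, not a real forecast.

def toy_forecast(growth_rate, steps=30, x0=1.0):
    x = x0
    for _ in range(steps):
        x *= 1.0 + growth_rate
    return x

rng = random.Random(42)
runs = sorted(toy_forecast(rng.gauss(0.05, 0.02)) for _ in range(500))

low, median, high = runs[25], runs[250], runs[475]   # ~5th, 50th, 95th percentiles
print(f"single 'best guess' run : {toy_forecast(0.05):.2f}")
print(f"ensemble 5th-95th range : {low:.2f} to {high:.2f} (median {median:.2f})")
```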

This approach is being adopted in genomics, AI, even finance. And it shows that predictive power doesn’t come from certainty—it comes from diversity.


2. Multi-Scale Models

Systems in biology, chemistry, and ecology often operate across scales—from molecules to ecosystems. No single model spans them all, so we build models at different levels and link them together.

In cancer research, for example, you might have a molecular model of cell signaling, a tissue-level model of tumor growth, and a population-level model of patient response. Each captures something crucial the others miss.
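Here is a deliberately cartoonish sketch of that architecture in Python. Every function and number below is a placeholder for what would, in reality, be a serious sub-model; the point is just how outputs at one scale become inputs at the next.

```python
# Toy multi-scale chain: a "molecular" model sets a growth signal, a "tissue"
# model turns that into tumor size over time, and a "population" model turns
# tumor size into an expected response rate. Everything here is a placeholder.

def molecular_model(drug_dose):
    """Cell-signaling level: more drug, less growth signal (toy saturation curve)."""
    return 1.0 / (1.0 + drug_dose)

def tissue_model(growth_signal, days=30, size0=1.0):
    """Tissue level: tumor volume grows (or shrinks) with the signal."""
    size = size0
    for _ in range(days):
        size *= 1.0 + 0.05 * (growth_signal - 0.5)   # shrinks if signal < 0.5
    return size

def population_model(final_size):
    """Population level: smaller tumors -> higher expected response rate."""
    return max(0.0, min(1.0, 1.1 - 0.5 * final_size))

for dose in (0.0, 1.0, 4.0):
    signal = molecular_model(dose)
    size = tissue_model(signal)
    response = population_model(size)
    print(f"dose={dose}: growth signal {signal:.2f}, "
          f"tumor size x{size:.2f}, response rate {response:.0%}")
```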


3. Agent-Based Models (ABMs)

Sometimes you don’t want to model averages. You want to model behavior—individual, quirky, adaptive behavior. ABMs let you do that by simulating the actions of many agents interacting over time.

These are increasingly common in epidemiology, economics, traffic systems, even political science. And they’re often better at capturing emergent phenomena than traditional equation-based models.
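Here is a stripped-down example in Python of the kind of thing an averaged model can’t express. Every agent gets its own contact count, a few are “superspreaders,” and whether an outbreak takes off depends heavily on who happens to be infected first. All of the numbers are invented.

```python
import random

# Tiny agent-based sketch: each agent has its own contact count; roughly 5%
# are high-contact "superspreaders". The same average transmission rate can
# produce a fizzle or a large outbreak depending on who is infected first --
# an individual-level, emergent effect. All numbers are invented.

def outbreak_size(first_case_is_superspreader, n_agents=2000, p_transmit=0.2, seed=0):
    rng = random.Random(seed)
    contacts = [40 if rng.random() < 0.05 else 4 for _ in range(n_agents)]
    contacts[0] = 40 if first_case_is_superspreader else 4
    infected = {0}
    frontier = [0]
    while frontier:
        newly_infected = []
        for agent in frontier:
            for _ in range(contacts[agent]):       # each contact is a random agent
                other = rng.randrange(n_agents)
                if other not in infected and rng.random() < p_transmit:
                    infected.add(other)
                    newly_infected.append(other)
        frontier = newly_infected
    return len(infected)

for super_start in (True, False):
    sizes = [outbreak_size(super_start, seed=s) for s in range(20)]
    large = sum(size > 200 for size in sizes)
    print(f"first case is a superspreader={super_start}: "
          f"{large}/20 runs became large outbreaks")
```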


4. Model-Based Reasoning (MBR)

Philosophers like Nancy Nersessian and Ronald Giere argue that models are not representations so much as thinking tools. They shape how we reason, what we notice, what we ignore. They’re part of our cognitive infrastructure.

So choosing a model isn’t just about data or math—it’s about what kind of questions you want to ask.


5. Meta-Modeling and Uncertainty

Here’s the frontier: modeling the models. In AI and Bayesian statistics, there’s growing work on quantifying model uncertainty—not just error in predictions, but uncertainty about the structure of the model itself.

This helps us avoid overconfidence and lets us make better decisions under deep uncertainty—something we badly need in domains like climate risk, economic policy, and public health.
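One concrete, low-tech version of this is Bayesian-style model averaging. The sketch below is a heavily condensed illustration, not a recipe: two hand-rolled candidate models, synthetic data, and a BIC approximation standing in for the full model evidence.

```python
import math
import random

# Meta-modeling in miniature: instead of picking one model, score several and
# carry the uncertainty forward. Two toy candidates (flat mean vs. linear trend)
# are compared via a BIC approximation, and predictions are averaged with the
# resulting weights. The data are synthetic; everything here is illustrative.

random.seed(3)
xs = list(range(20))
ys = [0.3 * x + random.gauss(0, 1.5) for x in xs]   # truth: a weak trend plus noise

def gaussian_log_lik(residuals):
    n = len(residuals)
    sigma2 = sum(r * r for r in residuals) / n
    return -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)

# Model A: constant mean (1 parameter). Model B: straight line (2 parameters).
mean_y = sum(ys) / len(ys)
res_a = [y - mean_y for y in ys]

x_mean = sum(xs) / len(xs)
slope = sum((x - x_mean) * (y - mean_y) for x, y in zip(xs, ys)) / sum((x - x_mean) ** 2 for x in xs)
intercept = mean_y - slope * x_mean
res_b = [y - (intercept + slope * x) for x, y in zip(xs, ys)]

n = len(ys)
bic_a = -2 * gaussian_log_lik(res_a) + 1 * math.log(n)
bic_b = -2 * gaussian_log_lik(res_b) + 2 * math.log(n)

# BIC-based weights approximate posterior model probabilities.
wa, wb = math.exp(-0.5 * bic_a), math.exp(-0.5 * bic_b)
wa, wb = wa / (wa + wb), wb / (wa + wb)

x_new = 30
prediction = wa * mean_y + wb * (intercept + slope * x_new)
print(f"P(flat model) ~ {wa:.2f},  P(trend model) ~ {wb:.2f}")
print(f"model-averaged prediction at x={x_new}: {prediction:.2f}")
```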


Pluralism Without Relativism

Let me be clear: model pluralism doesn’t mean “anything goes.” Not all models are equally valid. But it does mean recognizing that different models are valid for different purposes, and we’re better off when we can toggle between them.

It’s like switching tools in a lab. You don’t throw away the microscope just because you also have a spectrometer.


Final Thoughts

So yes—all models are wrong. But the good ones? 

They’re wrong in ways that work. They teach us, stretch us, force us to revise. They’re not just representations, they’re invitations to think differently.

If we treat models as living tools—evolving, contextual, and plural—we don’t just do better science. We understand more deeply what science is actually for: not perfect mirrors of nature, but practical ways of navigating it.

Thanks for sticking with me. Now—what models are you rethinking these days?