The Problem of Induction – Why One White Swan Doesn’t Prove All Swans Are White

We’ve all taught or explained Hume’s problem more times than we can count: just because every swan you’ve seen is white doesn’t mean the next one will be. But here’s the thing: I think we sometimes sell this short.

We talk about it as if it’s just a little puzzle, a quirky epistemological glitch, but it’s actually a philosophical wrecking ball aimed right at the foundations of empirical reasoning.

This isn’t just about science being tentative or probabilistic. It’s about whether we have any rational grounds for thinking the future will resemble the past. That’s a much deeper cut. 

And I think even we, as people steeped in this stuff, sometimes underestimate how radical Hume’s challenge really is—and how often it’s mischaracterized. 

So let’s start by clearing away some of the most common misunderstandings that still creep into serious discussions.

Five Ways We Keep Getting Hume Wrong

Let’s be honest—we’ve all come across (or maybe even contributed to) some of these. They’re subtle, easy to slip into, and they blunt the sharpness of Hume’s actual argument. So here are five persistent misreadings I think we should call out.


1. Hume Wasn’t Arguing That Regularities Don’t Exist

This one crops up all the time, usually in the form of “Well, of course there’s regularity in nature—just look at physics!” But Hume’s not denying that we observe regularities. He’s saying: Why should we expect them to continue?

He doesn’t doubt that the sun has always risen; he doubts that we’re rationally justified in assuming it will rise tomorrow. That’s the twist: it’s not the patterns that are in question, it’s our confidence in their persistence. Regularities are brute facts of experience; the leap to expecting them to continue is where things fall apart.


2. Induction’s Success Doesn’t Justify It

This is the classic circularity trap. Someone points out that induction works—look at airplanes, vaccines, semiconductors! But Hume’s point is precisely that we can’t use the success of induction to justify induction, because that’s… just another inductive argument.

To put it more sharply: “Induction has worked well in the past, therefore it will continue to work” is itself an inductive claim. And we’re back where we started.

It’s like trying to prove a compass always points north by noticing that it’s pointed north every time you’ve checked. Sure, that’s consistent—but it’s not a logical proof. There’s no deductive path from past success to future reliability.


3. Pattern Recognition Isn’t the Same as Inductive Justification

This one’s become more common in the machine learning era. Neural networks “learn” patterns in data and generalize them with amazing accuracy. But we have to be careful here: this is statistical generalization, not epistemic justification.

What these models do is inductive in practice, but they don’t solve the problem of induction. They just sidestep it by optimizing for predictive performance. And that’s fine! But we shouldn’t confuse performance with philosophical security.

If anything, ML models highlight the problem: they can overfit, they can fail dramatically when the environment shifts, and they never know they’re right. They’re brute-force pattern recognizers—useful, but epistemologically blind.


4. Bayesian Reasoning Doesn’t Magically Solve the Problem

I’ve had this debate more times than I can count. Bayesianism is elegant, sure. It gives us a formal model for updating beliefs based on new evidence. But it doesn’t escape Hume’s challenge.

Why? Because Bayesian reasoning requires a prior, and the choice of that prior is itself a deep inductive assumption. Even Jeffreys priors, which aim to be “noninformative,” still build assumptions about the structure of the problem into the analysis.

In other words: you can’t Bayesian your way out of this. You’re just formalizing your inductive assumptions more clearly. And Hume’s challenge applies just as forcefully to a formally defined belief update rule as it does to a gut feeling.
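
To make that concrete, here’s a minimal sketch in plain Python, using a Beta-Binomial model of the swan case. The specific priors (a uniform Beta(1,1) and a skeptical Beta(1,10)) are illustrative assumptions on my part, not canonical choices:

```python
# Beta-Binomial sketch: same evidence, different priors, different conclusions.
# The prior parameters below are illustrative, not canonical choices.

def posterior_predictive(alpha, beta, successes, trials):
    """P(next observation is a success) under a Beta(alpha, beta) prior
    after observing `successes` out of `trials` (Beta-Binomial conjugacy)."""
    return (alpha + successes) / (alpha + beta + trials)

observed_white, observed_total = 20, 20  # twenty swans, all white

uniform = posterior_predictive(1, 1, observed_white, observed_total)
skeptical = posterior_predictive(1, 10, observed_white, observed_total)

print(f"P(next swan is white | uniform Beta(1,1) prior):    {uniform:.3f}")
print(f"P(next swan is white | skeptical Beta(1,10) prior): {skeptical:.3f}")
# Same data, same update rule, different degrees of belief. The inductive
# commitment lives in the prior (and in the unstated assumption that swan
# observations are exchangeable draws from a stable process).
```

Same evidence, same impeccable update rule, different conclusions. Nothing in the formalism tells you which prior the world deserves.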


5. Hume’s Appeal to Habit Isn’t a Satisfying Explanation

Some folks think Hume’s just giving a psychological account: we expect the future to resemble the past because we’re conditioned that way. Okay, maybe that’s part of it—but that’s not a solution, that’s a confession.

Explaining why we believe something isn’t the same as showing that we should. Hume’s challenge is normative, not descriptive. He’s saying: even if our brains are wired to expect continuity, we’re not epistemically entitled to it.

This is like saying, “People believe in luck because of superstition.” Sure, but that doesn’t mean they’re justified in doing so.


So yeah—Hume’s not just tossing a little puzzle into the mix. He’s calling into question whether our entire way of reasoning about the world has any rational footing at all. And while we’ve done a lot to formalize and refine induction since Hume, I think the core problem is still very much alive. Sometimes I think we’ve just gotten better at not thinking about it too hard.

Three Ways of Wrestling with Hume – Popper, Carnap, and Goodman

Alright, now that we’ve cleared up what Hume was actually saying, let’s look at what some of the major 20th-century thinkers did with it. 

Popper, Carnap, and Goodman each took different routes in dealing with the problem of induction—and not one of them fully solved it. But how they tried (and where they fell short) is pretty revealing.


Popper’s Radical Escape Hatch: Just Say No to Induction

Let’s start with Popper. His move is bold: just abandon induction entirely. In The Logic of Scientific Discovery, he argues that science isn’t about confirming theories—it’s about falsifying them. We don’t verify general claims like “All swans are white”; we just test them and look for counterexamples. One black swan, and the claim is out.

It sounds clean—and it kind of is. But there are some cracks.

First, scientists do make predictions and expect regularities. They expect gravity to work tomorrow the way it did today. Falsificationism sidesteps that expectation. In real-world science, we often work with degrees of belief, not binary falsification.

Second, Popper smuggles in induction through the back door. Think about it: if you falsify theory A and move to theory B, you’re still assuming that your experimental method is reliable across time and space. 

That assumption—that the same setup will yield the same kind of outcome—is itself an inductive one.

So even when Popper claims to have cut induction out, he ends up leaning on it implicitly.


Carnap and the Quest for Logical Probability

Carnap tried something different. He wanted to formalize inductive reasoning using logical probability. In Logical Foundations of Probability and the work that followed, he develops systems that assign numerical degrees of confirmation to hypotheses based on evidence.

This feels satisfying. Instead of saying “induction is unjustified,” Carnap says, “Here’s how to quantify what the evidence does justify.”

But then comes the problem: which system of inductive logic is the “right” one?

Carnap quickly realized that different choices about language, predicates, and background assumptions lead to different confirmation functions. There was no canonical system—just an infinite family of equally consistent options.

If that sounds familiar, it’s because it echoes the modern Bayesian worry: how do you justify your priors? Carnap was trying to ground confirmation in logical structure, but the result was too fragile to ground scientific inference in any absolute sense.

He gave us beautiful tools, but not a justification.


Goodman’s Riddle: The Problem Isn’t Just What We Infer, But How We Say It

And then there’s Nelson Goodman, who arguably deepened Hume’s problem with his “new riddle of induction.” His famous grue/bleen example shows that the real problem might lie in our choice of predicates.

Let me remind you: something is grue if it’s first observed before time T and is green, or if it’s not observed before T and is blue. So if all emeralds examined so far (all before T) are green, they’re also all grue; the evidence supports both generalizations equally well. Why, then, do we project “green” rather than “grue” into the future?
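
If it helps to see the riddle mechanically, here’s a toy sketch in Python, with an arbitrary cutoff year standing in for T and invented observation records:

```python
# A toy version of Goodman's riddle: two predicates, one body of evidence.
# T, the emeralds, and the observation years are all made up for illustration.

T = 2030  # the cutoff time in Goodman's definition (arbitrary here)

def green(stone):
    return stone["color"] == "green"

def grue(stone):
    """Grue: first observed before T and green, OR not observed before T and blue."""
    if stone["first_observed"] < T:
        return stone["color"] == "green"
    return stone["color"] == "blue"

# Every emerald anyone has examined so far (all before T, all green):
evidence = [{"color": "green", "first_observed": year} for year in (1900, 1975, 2024)]

print(all(green(e) for e in evidence))  # True
print(all(grue(e) for e in evidence))   # True -- the evidence fits both equally

# Yet the two generalizations disagree about an emerald first examined after T:
# "all emeralds are green" predicts green; "all emeralds are grue" predicts blue.
```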

Goodman’s point is that inductive inference depends on what we treat as “natural” properties, but our sense of which predicates are natural isn’t itself justified by logic. It’s based on entrenchment, history, linguistic norms—stuff that’s disturbingly contingent.

In a way, Goodman turns Hume’s problem into a linguistic and conceptual one: not just “why do we expect regularities?” but “why do we expect these regularities?”


So to sum up:

  • Popper tried to get rid of induction and didn’t quite succeed.
  • Carnap tried to formalize it and ended up multiplying systems without justification.
  • Goodman made the whole thing even messier by showing how our conceptual framework affects what we project.

None of them really “solved” Hume’s problem. But they each exposed a different layer of how deep the issue goes. And they pushed us to recognize that inductive reasoning isn’t just a philosophical puzzle—it’s a live question about how we think, reason, and build knowledge.

Why Machine Learning Didn’t Kill Hume (and Might Prove Him Right)

So let’s talk about modern times—because I’ve heard a lot of people say something like this:

“Come on, machine learning proves induction works. We train on past data, predict the future, and it works!”

And sure, in some sense, that’s true. ML models absolutely generalize from past data. But here’s the kicker: they don’t “solve” the problem of induction—they just make it louder.

All Models Have Inductive Bias

Every machine learning algorithm—whether it’s a decision tree, a deep neural net, or a Bayesian model—bakes in assumptions. These are called inductive biases, and they’re the way models “prefer” some patterns over others.

For instance:

  • A linear model assumes relationships are linear.
  • A convolutional neural net assumes spatial locality and translation invariance.
  • A transformer assumes that token order matters only through positional encodings and that context is captured by attention-weighted aggregation.

These biases aren’t justified by logic. They’re guesses—educated, pragmatic, often effective—but guesses nonetheless.

And here’s the Humean twist: those guesses only work because the world happens to cooperate. But there’s no guarantee it will continue to do so.
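
To make the bias visible, here’s a minimal sketch with two toy learners whose assumptions I’ve chosen arbitrarily: a cubic polynomial fit and a one-nearest-neighbour rule. The data and the extrapolation point are invented for illustration.

```python
# Two learners, two inductive biases, one training set.
# Both fit the same points; they disagree as soon as we leave them.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 5, 20)
y_train = np.sin(x_train) + rng.normal(0, 0.1, x_train.shape)

# Bias 1: "the relationship is a cubic polynomial"
cubic = np.poly1d(np.polyfit(x_train, y_train, deg=3))

# Bias 2: "nearby inputs have nearby outputs" (1-nearest-neighbour)
def nearest_neighbour(x):
    return y_train[np.argmin(np.abs(x_train - x))]

for x in (2.5, 8.0):  # one point inside the training range, one well outside
    print(f"x={x}: cubic={cubic(x):+.2f}, 1-NN={nearest_neighbour(x):+.2f}")
# Inside the training range the two roughly agree. At x=8 the cubic follows its
# polynomial trend to a value nowhere near sin(8), while 1-NN just repeats the
# last label it saw. Neither bias is "justified"; each is a bet about which
# regularities continue.
```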


ML Models Can Fail Spectacularly When the Distribution Shifts

One thing ML makes painfully obvious is this: generalization depends on the training data being representative. When that breaks—when the world changes, or when new data comes from a different distribution—performance collapses.

This is called distribution shift, and it’s a very real version of Hume’s worry. Imagine a model trained to detect tumors from X-rays suddenly getting X-rays from a new machine—it can misclassify everything. Not because it’s “bad,” but because it assumed the past was like the future—and that assumption failed.
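
Here’s a toy sketch of the same failure mode. All the numbers are invented, and the “model” is nothing more than a learned threshold:

```python
# Distribution shift in miniature: a threshold learned on yesterday's data,
# applied to today's. Every number here is made up for illustration.
import numpy as np

rng = np.random.default_rng(1)

def make_data(mean_neg, mean_pos, n=1000):
    x = np.concatenate([rng.normal(mean_neg, 1, n), rng.normal(mean_pos, 1, n)])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x, y

# Training world: negatives cluster around 0, positives around 3.
x_tr, y_tr = make_data(0.0, 3.0)
threshold = (x_tr[y_tr == 0].mean() + x_tr[y_tr == 1].mean()) / 2  # ~1.5

def accuracy(x, y):
    return ((x > threshold) == y).mean()

print(f"accuracy on the training distribution: {accuracy(*make_data(0.0, 3.0)):.2f}")
# Deployment world: everything has drifted upward by 3 units.
print(f"accuracy after the shift:               {accuracy(*make_data(3.0, 6.0)):.2f}")
# The rule was fine for the world it saw; the world it meets is another matter.
```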

So ML doesn’t solve induction; it dramatizes its fragility.


Predictive Success Isn’t Epistemic Justification

Let me put it bluntly: ML is not doing epistemology. It doesn’t claim that its predictions are true or justified. It just minimizes a loss on training data and hopes the result carries over to a held-out set.

You can have a model that’s extremely good at predicting, but totally incapable of explaining anything. It might not even know what features are causally relevant—it just picks up statistical patterns. That’s useful, but it’s not the same as understanding.

So when people say that ML makes Hume irrelevant, I’d argue the opposite: ML forces us to confront his problem every time a model breaks in the wild.


And Yet… We Keep Doing It Anyway

This is what fascinates me. Despite knowing all this, we still train models, run experiments, build forecasts. Why? Because it works well enough. Because we have to. Because uncertainty doesn’t make prediction useless—it just makes it fragile.

In other words: we live with Hume’s problem every day—we just build around it rather than resolving it.

How Science Lives with Inductive Uncertainty

So how do scientists—actual practicing scientists—deal with the fact that the entire enterprise rests on shaky epistemic ground?

They don’t resolve it. They manage it.

Here are five ways science pragmatically works around the problem of induction without pretending to solve it.


1. Theories Are Always Provisional

No one in physics thinks general relativity is The Final Word. It works incredibly well, but everyone knows it breaks down at quantum scales. Theories are tools—not sacred truths.

This attitude mirrors Hume: science doesn’t prove things true forever. It just finds models that work until something better comes along.


2. Replication Is About Robustness, Not Truth

Why do we replicate experiments? Not to prove absolute truth, but to test whether results hold across contexts. Replication is a proxy for inductive stability—if something holds in different labs, with different instruments, then maybe it’s not just noise.

But again, that “maybe” is key. Replication is a confidence booster—not a foundation.


3. Statistics Embraces Uncertainty

P-values, confidence intervals, error bars—all of these are ways of quantifying our inductive doubt. They’re not magical safeguards. They’re just formal acknowledgments that our inferences might fail.
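
Take the humble confidence interval. Here’s a minimal sketch, using simulated measurements and the usual normal approximation, with the inductive assumptions spelled out in the comments:

```python
# A 95% confidence interval is a formal confession of inductive doubt, not a
# guarantee. The "measurements" below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(2)
sample = rng.normal(loc=10.0, scale=2.0, size=40)  # pretend these are lab readings

mean = sample.mean()
sem = sample.std(ddof=1) / np.sqrt(len(sample))   # standard error of the mean
low, high = mean - 1.96 * sem, mean + 1.96 * sem  # normal approximation

print(f"estimated mean: {mean:.2f}, 95% CI: ({low:.2f}, {high:.2f})")
# Everything above assumes the 40 readings are i.i.d. draws from a stable
# process, i.e. that tomorrow's measurements behave like today's. That
# assumption is exactly the one Hume says we cannot non-circularly justify.
```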

Even Bayesian stats, with all its elegance, still relies on prior assumptions that echo Hume’s worry.


4. Pluralism of Models

Look at climate science or economics. Researchers often use multiple models with different assumptions, knowing none is perfect. Why? Because truth isn’t the point—resilience is.

If different models converge on similar predictions, we start to trust the result—not because it’s proven, but because it survives scrutiny from multiple angles.
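
As a toy illustration of that convergence check, here are three deliberately crude forecasters applied to an invented series; the agreement test is the point, not the forecasters themselves:

```python
# A crude version of "trust what survives multiple models": three toy
# forecasters for the same (invented) series, and a check on how far apart
# their predictions land.
import numpy as np

series = np.array([2.1, 2.4, 2.8, 3.1, 3.5, 3.9])  # made-up observations

forecasts = {
    "persistence (last value)": series[-1],
    "mean of recent values":    series[-3:].mean(),
    "linear trend":             np.poly1d(np.polyfit(np.arange(len(series)), series, 1))(len(series)),
}

for name, value in forecasts.items():
    print(f"{name:26s} -> {value:.2f}")

spread = max(forecasts.values()) - min(forecasts.values())
print(f"spread across models: {spread:.2f}")
# The smaller the spread, the more the models corroborate one another.
# Agreement is still not proof; it just means the prediction survives
# more than one set of assumptions.
```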


5. Practical Humility

This might be the most important one. Good scientists know they don’t have final answers. They expect to be wrong eventually. They treat theories like maps: useful, maybe even beautiful, but never complete.

This isn’t just good practice—it’s a kind of epistemic wisdom. It’s Humean through and through.


Final Thoughts

So here we are, nearly 300 years after Hume, and his problem still haunts everything from lab experiments to deep learning models.

We haven’t solved it. But we’ve gotten really good at living with it—at building systems and institutions that function not despite uncertainty, but through it.

Maybe that’s the real lesson here: you don’t have to justify every tool to use it wisely. But you should never forget that the ground beneath it is always a little shaky.

And for philosophers like us, that’s not a bug—it’s the whole point.