What do scientific theories actually tell us about the world?

One of the exciting things about learning is that a new understanding in one area often sheds light on what might seem like a completely separate topic.  For me, information about how the brain works appears to have shed new light on a question in the philosophy of science, where there has long been a debate about the epistemic nature of scientific theories.

Spacetime lattice (Image credit: mysid via Wikipedia)

One camp holds that scientific theories reflect reality, at least to some level of approximation.  So when we talk about space being warped in general relativity, or the behavior of fermions and bosons, there is actually something “out there” that corresponds to those concepts.  There is something actually being warped, and there actually are tiny particles and/or waves that are being described in particle physics.  This camp is scientific realism.

The opposing camp believes that scientific theories are only frameworks we build to predict observations.  The stories we tell ourselves associated with those predictive frameworks may or may not correspond to any underlying reality.  All we can know is whether the theory successfully makes its predictions.  This camp is instrumentalism.

The vast majority of scientists are realists.  This makes sense when you consider the motivation needed to spend hours of your life in a lab doing experiments, or to endure the discomforts and hazards of field work.  It’s pretty hard for geologists to visit the Antarctic for samples, or for biologists to crawl through the mud for specimens, if they don’t see themselves as being in some way in pursuit of truth.

But the instrumentalists tend to point out all the successful scientific theories that could accurately predict observations, at least for a time, but were eventually shown to be wrong.

The prime example is Ptolemy’s ancient theory of the universe, a precise mathematical model of the Aristotelian view of geocentrism, the idea that the Earth is the center of the universe with everything revolving around it.  For centuries, Ptolemy’s model accurately predicted naked eye observations of the heavens.

But we know today that it is completely wrong.  As Copernicus pointed out in the 1500s, the Earth orbits the sun.  Interestingly, many science historians have pointed out that Copernicus’ model actually wasn’t any better at making predictions than Ptolemy’s, at least until Galileo started making observations through a telescope.  Indeed, the first printing of Copernicus’ theory had a preface, added by the theologian Andreas Osiander, probably hoping to head off controversy, saying the ideas presented might only be a predictive framework unrelated to actual reality.

For a long time, I was agnostic between realism and instrumentalism.  Emotionally, scientific realism is hard to shake.  Without it, science seems little more than an endeavor to lay the groundwork for technology, for practical applications of its findings.  Many instrumentalists are happy to see it in that light; a lot of them are philosophers, theologians, and others who may be less than thrilled with the implications of scientific findings.

However, I do think it’s important for scientists, and anyone assessing scientific theories, to be able to put on the instrumentalist cap from time to time, to conservatively assess which parts of a theory are actually predictive, and which may just be speculative baggage.

But here’s the thing.  Often what we’re really talking about here is the difference between the raw mathematics of a theory, and its language description, including the metaphors and analogies we use to understand it.  The idea is that the mathematics might be right, but the rest wrong.

But the language part of a theory is a description of a mental understanding of what’s happening.  That understanding is a model we build in our brains, a neural firing pattern that may or may not be isomorphic with patterns in the world.  And as I’ve discussed in my consciousness posts, the model building mechanism evolved for an adaptive purpose: to make predictions.

In other words, the language description of a theory is itself a predictive model.  Its predictions may not be as precise as the mathematical portions, and they may not be currently testable in the same manner as the mathematics (assuming those mathematics are actually testable; I’m looking at you, string theorists), but the description still makes predictions.

To use the Ptolemy example above: the language model did make predictions.  It’s just that many of them couldn’t be tested until telescopes became available.  Once they could, the Ptolemy model quickly fell from favor.  (At least it was quick on historical time scales.  It wasn’t quick enough to avoid making Galileo’s final years miserable.)  As many have pointed out, it isn’t that Copernicus’ model made precisely right predictions, but that it was far less wrong than Ptolemy’s.

When you think about it, any mental model we hold makes predictions.  The predictions might not be testable, currently or ever, but they’re still there.  Even religious or metaphysical beliefs make predictions, such as whether we’ll wake up in an afterlife after we die.  They’re just predictions we may never be able to test in this world.

This means that the distinction between scientific realism and instrumentalism is an artificial one.  It’s really just a distinction between aspects of a theory that can be tested, and the currently untestable aspects.  Often the divide is between the mathematical portions and the language portions, but the only real difference there is that the mathematical predictions are precise, whereas the language ones are less precise, to varying degrees.

Of course, I’m basing this insight on a scientific theory about how the brain works.  If that theory eventually ends up failing in its predictions, it might have implications for the epistemic point I’m making here, for the revision to our model of scientific knowledge I think is warranted.

And idealists might note that I’m also assuming that brains exist, that they, along with the rest of the external world, aren’t an illusion.  I have to concede the point: even if this understanding makes accurate and useful predictions, within idealism it still wouldn’t be mapping to actual reality.  But given that I’m also assuming that all you other minds exist out there, it’s a stipulation I’m comfortable with.

As always, it might be that I’m missing something.  If so, I hope you’ll set me straight in the comments.

Falsifiability is useful, but a matter of judgment

The Black Swan is the state bird of Western Australia (Photo credit: Wikipedia)

Our discussions last week on Jim Baggott’s book, ‘Farewell to Reality’, and Sean Carroll’s Edge response, left me pondering falsifiability, the idea that theories should be falsifiable in order to be considered science.

Falsifiability is a criterion identified by the philosopher Karl Popper.  Popper was arguing against a conception held at the time by the logical positivists, known as verificationism, the idea that something couldn’t be considered scientific unless it was a verifiable proposition.

To be verifiable, a proposition needed positive empirical evidence for its truth.  If a proposition wasn’t verifiable, the logical positivists relegated it to metaphysics, which they regarded as meaningless, literal non-sense (that is, literally not of the senses).

Many philosophers saw verificationism as too stringent, noting that it would cut out too much of what was then considered legitimate science.  A number of alternate criteria were proposed, but Popper’s caught on.

Unlike verificationism, falsifiability doesn’t require positive evidence for every assertion, but it does require that the assertion have the possibility of being proven wrong.  This seems like a hair-splitting difference when you first hear it, but the distinction is important.

The classic example is black swans.  If we only ever observe white swans, then we might form a theory that all swans are white.  According to verificationism, that theory is not science, since we haven’t proven that all swans are white.  But proving it could only be done by observing every swan that exists, has ever existed, or will ever exist.  It is far too stringent a criterion.

Falsifiability accepts the theory that all swans are white as scientific because it has the possibility of being disproven the moment a black swan is observed.  Note that falsifiability doesn’t mean we necessarily have control over when the contradictory evidence might arise, only that the possibility exists.
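To make the asymmetry explicit, here’s a minimal sketch in standard first-order logic (the Swan and White predicates are just stand-ins for illustration).  No finite number of white swan sightings can verify the universal claim

$$\forall x\,\big(\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)\big)$$

but a single observation satisfying the existential claim

$$\exists x\,\big(\mathrm{Swan}(x) \land \lnot\mathrm{White}(x)\big)$$

refutes it outright.  Verification and falsification are logically asymmetric, and that asymmetry is what Popper’s criterion exploits.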

This actually turns out to be a critical part of falsifiability.  Your theory doesn’t have to be falsifiable under controllable experimental conditions (although it’s certainly a good thing if it is); it only has to be falsifiable in principle.

Popper argued that if you’re not talking about something falsifiable, then you’re talking about philosophical concepts such as metaphysics.  But unlike the logical positivists, Popper didn’t discount metaphysics, pointing out that what is metaphysics in one century might be science in future centuries.  A good example here is atomism, which was a metaphysical concept for the ancient Greek philosophers, but became a scientific one in the modern age.

The problem, of course, is that what is falsifiable in principle is a matter of judgment.  Popper famously used his criterion to mark Marxism, psychoanalysis, and natural selection as not being scientific, although he later changed his mind about natural selection.  (Note that he was talking about natural selection, not evolution overall.)

Popper discounted Marxism and psychoanalysis because the theories were so flexible that they could be used to explain anything in a post-hoc manner, but couldn’t be used to make predictions.

Falsifiability has a lot going for it, but it’s not a simple criterion.  To understand why, consider what happened after the planet Uranus was discovered.  The planet’s orbit was found not to be in strict accord with Newtonian physics, yet no one at the time declared Newtonian physics falsified.  Instead, scientists continued to assume that Newtonian physics was correct and used it to deduce the existence of yet another planet, Neptune.

Of course, eventually phenomena were observed that Newtonian physics couldn’t explain, such as the precession of Mercury’s orbit.  Using the logic that worked for Neptune, some astronomers predicted the existence of another planet closer to the sun, Vulcan (no relation to the Star Trek version), which was never found to exist.

Despite this fact, scientists didn’t abandon Newtonian physics until Albert Einstein formulated a better theory, general relativity.  (It’s worth noting that Newtonian physics remains approximately correct enough that NASA still uses it for most of its spacecraft flight planning.)

This reluctance of scientists to abandon a well established theory until a better one comes along was observed by Thomas Kuhn.  Kuhn noted that in the history of science, a major theory is updated as necessary to accord with new observations and adhered to until a new theory supplants it, in what he termed a paradigm shift, such as the one from a Newtonian universe to an Einsteinian one.

So, where does this leave falsifiability as a criterion for whether or not something is science?  I think as long as we remember that we’re talking about falsifiability in principle, not necessarily in practice, it remains a useful concept.  But, as I mentioned above, falsifiability in principle is a matter of judgment, one that often has to be made by scientists themselves.

However, I think falsifiability remains an important metric.  Without it, we’re left with notions like science being whatever scientists decide it is, a criterion that would only strengthen critics of science who are unhappy that it doesn’t accept things like the paranormal, new age spirituality, and many other ambiguous or ill-defined concepts.

Falsifiability has also become important in law, as a principle used to distinguish science from religious or other forms of thought, particularly in cases involving creationism or intelligent design, neither of which pass the falsifiability test.

Does that mean that string theory and related concepts aren’t science?  Again, I think it’s a matter of judgment.  As long as string theorists are striving to find a falsifiable theory, an argument can be made that they’re doing prospective science.  However, the decades-long failure to produce such theories is causing many to lose patience with that enterprise.

However, many other ideas, such as multiverses that are completely causally disconnected from us, are not falsifiable, even in principle.  They are metaphysics.  As Popper said, exploring such concepts may have value, but it will be the value of philosophical contemplation rather than of empirical science.

So, falsifiability remains a useful criterion, but whether or not a theory meets that criterion is a matter of judgment.

Ask Ethan #17: The Burden of Proof – Starts With A Bang

Perhaps no word in the English language generates as much misunderstanding as the word theory. In scientific circles, this word has a very specific meaning that’s different from everyday use, and — as a theoretical astrophysicist myself — I feel it’s my duty to help explain exactly what we mean when we use it.

via Ask Ethan #17: The Burden of Proof – Starts With A Bang.

We often hear young earth creationists, climate change deniers, and other opponents of science use variations of the phrase, “It’s just a theory”.  If you’re tempted to think a theory is any fanciful unproven notion, then you might consider reading Ethan Siegel’s description of what a scientific theory actually is, particularly in relation to scientific laws or hypotheses.