One of the exciting things about learning is that a new understanding in one area often sheds light on what might seem like a completely separate topic. For me, information about how the brain works appears to have shed new light on a question in the philosophy of science, where there has long been a debate about the epistemic nature of scientific theories.
One camp holds that scientific theories reflect reality, at least to some level of approximation. So when we talk about space being warped in general relativity, or the behavior of fermions and bosons, there is actually something “out there” that corresponds to those concepts. There is something actually being warped, and there actually are tiny particles and/or waves that are being described in particle physics. This camp is scientific realism.
The opposing camp believes that scientific theories are only frameworks we build to predict observations. The stories we tell ourselves associated with those predictive frameworks may or may not correspond to any underlying reality. All we can know is whether the theory successfully makes its predictions. This camp is instrumentalism.
The vast majority of scientists are realists. This makes sense when you consider the motivation needed to spend hours of your life in a lab doing experiments, or to endure the discomforts and hazards of field work. It’s pretty hard for geologists to visit Antarctica for samples, or for biologists to crawl through the mud for specimens, if they don’t see themselves as being, in some way, in pursuit of truth.
But the instrumentalists tend to point out all the successful scientific theories that could accurately predict observations, at least for a time, but were eventually shown to be wrong.
The prime example is Ptolemy’s ancient theory of the universe, a precise mathematical model of the Aristotelian view of geocentrism, the idea that the Earth is the center of the universe with everything revolving around it. For centuries, Ptolemy’s model accurately predicted naked eye observations of the heavens.
But we know today that it is completely wrong. As Copernicus pointed out in the 1500s, the Earth orbits around the sun. Interestingly, many science historians have pointed out that Copernicus’ model actually wasn’t any better at making predictions than Ptolemy’s, at least until Galileo started making observations through a telescope. Indeed, the first printing of Copernicus’ theory had a preface from someone, probably hoping to head off controversy, saying the ideas presented might only be a predictive framework unrelated to actual reality.
For a long time, I was agnostic between realism and instrumentalism. Emotionally, scientific realism is hard to shake. Without it, science seems little more than an endeavor to lay the groundwork for technology, for practical applications of its findings. Many instrumentalists are happy to see it in that light. A lot of instrumentalists tend to be philosophers, theologians, and others who may be less than thrilled with the implications of scientific findings.
However, I do think it’s important for scientists, and anyone assessing scientific theories, to be able to put on the instrumentalist cap from time to time, to conservatively assess which parts of a theory are actually predictive, and which may just be speculative baggage.
But here’s the thing. Often what we’re really talking about here is the difference between the raw mathematics of a theory, and its language description, including the metaphors and analogies we use to understand it. The idea is that the mathematics might be right, but the rest wrong.
But the language part of a theory is a description of a mental understanding of what’s happening. That understanding is a model we build in our brains, a neural firing pattern that may or may not be isomorphic with patterns in the world. And as I’ve discussed in my consciousness posts, the model building mechanism evolved for an adaptive purpose: to make predictions.
In other words, the language description of a theory is itself a predictive model. Its predictions may not be as precise as the mathematical portions, and they may not be currently testable in the same manner as the mathematics (assuming those mathematics are actually testable; I’m looking at you, string theorists), but it will still make predictions.
Using the Ptolemy example above, the language model did make predictions. It’s just that many of its predictions couldn’t be tested until telescopes became available. Once they could, the Ptolemy model quickly fell from favor. (At least it was quick on historical time scales. It wasn’t quick enough to avoid making Galileo’s final years miserable.) As many have pointed out, it wasn’t that Copernicus’ model made precisely right predictions, but it was far less wrong than Ptolemy’s.
When you think about it, any mental model we hold makes predictions. The predictions might not be testable, currently or ever, but they’re still there. Even religious or metaphysical beliefs make predictions, such as whether we’ll wake up in an afterlife after we die. They’re just predictions we may never be able to test in this world.
This means that the distinction between scientific realism and instrumentalism is an artificial one. It’s really just a distinction between aspects of a theory that can be tested, and the currently untestable aspects. Often the divide is between the mathematical portions and the language portions, but the only real difference there is that the mathematical predictions are precise, whereas the language ones are less precise, to varying degrees.
Of course, I’m basing this insight on a scientific theory about how the brain works. If that theory eventually ends up failing in its predictions, it might have implications for the epistemic point I’m making here, for the revision to our model of scientific knowledge I think is warranted.
And idealists might note that I’m also making the assumption that brains exist, that along with the rest of the external world they aren’t an illusion. I have to concede that’s true, and even if this understanding makes accurate useful predictions, within idealism, it still wouldn’t be mapping to actual reality. But given that I’m also assuming that all you other minds exist out there, it’s a stipulation I’m comfortable with.
As always, it might be that I’m missing something. If so, I hope you’ll set me straight in the comments.