I’ve been thinking again about the realism vs anti-realism debate, about what scientific theories actually tell us about the world. Historically in the philosophy of science, the debate is between realists, who see scientific theories as at least approximate representations of reality, and instrumentalists or anti-realists, who see those theories as mere frameworks for predicting observations, unable to tell us anything more than that.
Most scientists are realists, at least about their own theories. It’s not unusual for people to be realists about some theories and anti-realists about others, depending on how they feel about a theory’s implications. For example, someone can be a realist about germ theory but an anti-realist about the quantum wave function.
One thing I’ve been struggling with lately is the binary notion involved here: real vs anti-real. As I’ve noted before, when it comes to something like the wave function, I think some degree of realism is hard to avoid, but that doesn’t necessarily mean accepting full-scale realism. That middle position is hard to talk about within a dichotomy between real and anti-real.
It leads me to wonder what the real issue is here. What’s actually at stake in these debates? The answer might be the scope of the theory: how broad or narrow its domain of applicability is. Someone holding an instrumentalist or anti-real stance toward a theory will tend to think its scope is narrow. On their view, it works for the observations we’re able to test it against, but applying it to unobservable outcomes pushes it beyond its bounds.
At first glance, the anti-real stance seems epistemically responsible. A core tenet of science is testing a model’s predictions, so using the model to say things beyond our ability to test might seem problematic, flirting with the danger of black swans. And yet it’s very common in science to take a well-tested model and assume its structure continues beyond what we can observe, to apply it beyond testable boundaries.
For example, geologists use radiometric dating to determine the age of rocks. It’s how we know the age of the Earth, or of fossils. We can test the measurements against things whose age we do know, such as historical artifacts. But for rocks and fossils from hundreds of millions to billions of years ago, we’re assuming that the decay rates and frameworks we can verify hold well beyond the range of those verifications.
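For the curious, the arithmetic behind these dates is straightforward. In the standard textbook formulation (assuming, for simplicity, a sample that started with none of the daughter isotope), the age t follows from the ratio of daughter atoms D to remaining parent atoms P:

$$ t = \frac{1}{\lambda}\ln\!\left(1 + \frac{D}{P}\right), \qquad \lambda = \frac{\ln 2}{t_{1/2}} $$

where t_{1/2} is the isotope’s half-life. The extrapolation at issue is the assumption that the decay constant λ has stayed constant over timescales vastly longer than anything we can directly check.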
Anyone who’s ever debated a young earth creationist has probably heard the objection that this is an overreach of the theory. Do they have a point? A geologist would point out that there were many indications that the Earth was far more ancient before radiometric dating came along. There’s also what appears to be corroborating evidence from astronomy and other sources, although these other sources are vulnerable to the same charge: they involve using theories to make predictions far outside their testable domains.
In short, much of what science tells us about the world gets called into question if we’re consistent in an anti-real stance. Of course, as noted above, most people aren’t consistent about this. The question is how we decide which stance to take toward any particular theory. There’s a danger of just choosing based on what threatens or flatters our metaphysical preferences.
Are there more rational justifications for accepting the predictions of theories beyond their testable domain? I alluded to one above: multiple independent theories converging on and corroborating each other’s predictions. That seems to increase the probability that they’re accurate in those domains.
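One way to make “increase the probability” precise, on a Bayesian reading (my gloss, not something the debate itself settles): for a hypothesis H and lines of evidence E_1 and E_2 that are independent given H and given ¬H, the odds update as

$$ \frac{P(H \mid E_1, E_2)}{P(\neg H \mid E_1, E_2)} = \frac{P(E_1 \mid H)}{P(E_1 \mid \neg H)} \cdot \frac{P(E_2 \mid H)}{P(E_2 \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)} $$

Each independent line of corroboration multiplies the odds, which is why agreement between radiometric, astronomical, and other methods counts for more than any one of them alone.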
Another is to look for indications of the limits of the theory in the data we do have. The classic example is comparing two predictions from Newtonian mechanics: the existence of Neptune outside Uranus’ orbit, and the existence of Vulcan inside Mercury’s, in both cases inferred from anomalies in the known planet’s orbit. The Neptune prediction was validated a few years after it was made, but the Vulcan one never was, because the limit of Newtonian theory had been reached. General relativity would be required to explain the anomalous precession of Mercury’s orbit.
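To put a number on that limit (standard textbook figures, nothing original here): general relativity predicts an extra perihelion advance per orbit of

$$ \Delta\phi = \frac{6\pi G M}{c^2 a (1 - e^2)} $$

where M is the Sun’s mass, a the planet’s semi-major axis, and e its orbital eccentricity. For Mercury this works out to about 43 arcseconds per century, matching the anomalous residual that the hypothetical Vulcan had been invented to explain.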
It’s well recognized that Newtonian mechanics continues to have a very practical domain of applicability, so much so that NASA continues to use it for most of its mission planning. General relativity is more accurate but much harder to work with. And as someone once pointed out to me, even the ancient geocentric Ptolemaic model retains a domain of applicability for some purposes, such as casual amateur astronomy.
This search for a theory’s boundaries is why I find the experiments testing the limits of quantum superposition, entanglement, and other aspects of the wave function so interesting. If the real debate is about the wave function’s scope, then testing its limits will hopefully give us hints about how far we can follow its implications. As noted a few posts back, right now the data doesn’t seem to be giving us any indications of those limits. But of course that could change at any time.
Interestingly, this scoping issue seems to apply not just to scientific theories but to philosophical ones as well, though I think I’ll save that for another post.
What do you think? Does thinking about this in terms of scope, of domain of applicability, make more sense than a simple real vs anti-real dichotomy? If not, what do you think the issues are with it?