I’ve often noted here the importance of predictions, both for our primal understanding of reality, such as how to get to the refrigerator in your house, and for scientific theories. In truth, every understanding of reality involves predictions. Arguably a fundamental aspect of consciousness is prediction.
Of course, not every notion involves testable predictions. That’s often said to be what separates science from metaphysics. For example, various religions argue that we’ll have an afterlife. These are predictions, just not ones that we’ll ever be able to test. (Short of dying.)
But the border between science and metaphysics (or other forms of philosophy) is far blurrier than any simple rule of thumb can capture. Every scientific theory has a metaphysical component. (See the problem of induction.) And today’s metaphysics may be tomorrow’s science. Theories are often a complex mix of testable and untestable assertions, with the untestable sometimes being ferociously controversial.
Anyway, Sabine Hossenfelder recently did a post arguing that scientific predictions are overrated. After giving some (somewhat contrived) examples of meaningless predictions, and discussing unnecessary assumptions in poor theories, she makes this point:
To decide whether a scientific theory is any good what matters is only its explanatory power. Explanatory power measures how much data you can fit from which number of assumptions. The fewer assumption you make and the more data you fit, the higher the explanatory power, and the better the theory.
I think this is definitely true. But how do we know whether a theory has “explanatory power”, that it “fits the data”? We need to look at the theory’s mathematics or rules and see what they say about that data. One way to describe what we’re looking for is… accurate predictions of the data.
Hossenfelder is using the word “prediction” to refer only to assertions about the future, or about other things nobody knows yet. But within the context of the philosophy of science, that’s a narrow view of the word. Most of the time, when people talk about scientific predictions, they’re not just talking about predictions of what has yet to be observed, but also predictions of existing observations.
What Hossenfelder is actually saying is that we shouldn’t require a theory to make that narrow kind of prediction; accounting for existing data can be enough. If we want to be pedantic about it, we can call these assertions about existing data retrodictions.
(We could also use “postdiction” but that word has a negative connotation in skeptical literature, referring to mystics falsely claiming to have predicted an event before it happens.)
Indeed, for us to have any trust in a theory’s predictions about the unknown, it first must have a solid track record of making accurate retrodictions, of fitting the existing data. And to Hossenfelder’s point, if all a theory makes are retrodictions, it still might be providing substantial insight.
There is a danger here of just-so stories, theories which explain the data but only give an illusion of providing insight. Hossenfelder’s point about measuring the ratio of assumptions to explanation, essentially valuing a theory’s parsimony, offers some protection against that. But as she admits, it’s more complicated than that.
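As an aside, this assumptions-versus-fit tradeoff is the same intuition that model-selection tools like the Akaike information criterion (AIC) formalize: score a model by how well it fits the data, then penalize it for every parameter it uses. Here’s a minimal sketch of the idea (my own illustration, not anything from Hossenfelder’s post), comparing a one-parameter constant model against a two-parameter line on synthetic data:

```python
import math
import random

# Synthetic data from a linear process with noise (fixed seed for repeatability).
random.seed(0)
xs = [i / 10 for i in range(50)]
ys = [2 * x + 1 + random.gauss(0, 0.3) for x in xs]
n = len(xs)

def rss(preds):
    """Residual sum of squares: how badly the model misses the data."""
    return sum((y - p) ** 2 for y, p in zip(ys, preds))

def aic(rss_value, k):
    """AIC: a fit term plus a penalty of 2 per parameter (assumption)."""
    return n * math.log(rss_value / n) + 2 * k

# Model A: a constant (1 parameter) -- predicts the mean everywhere.
mean_y = sum(ys) / n
aic_const = aic(rss([mean_y] * n), k=1)

# Model B: a line (2 parameters), fit by ordinary least squares.
mean_x = sum(xs) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x
aic_line = aic(rss([slope * x + intercept for x in xs]), k=2)

# Lower AIC wins: the line's extra parameter is more than paid for by its fit.
print(aic_line < aic_const)  # True
```

The penalty term is what keeps a just-so model, one with as many knobs as data points, from winning automatically: extra assumptions have to earn their keep in improved fit.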
For example, naively using her criteria, the interpretation of quantum mechanics we should all adopt is Everett’s many-worlds interpretation. It makes fewer assumptions than any other interpretation. (It’s the consequences, not the assumptions, that people object to.) But the fact that none of the interpretations currently make unique and testable predictions (or retrodictions) is what should prevent our accepting any particular one as the right one.
So, in general, I think Hossenfelder is right. I just wish she’d found another way to articulate it. Because now anytime someone talks about the need for testable predictions, using the language most commonly used to describe both predictions and retrodictions, people are going to cite her post to argue that no such thing is needed.