At Aeon, Nevin Climenhaga makes some interesting points about probability. After describing different interpretations of probability, one involving the frequency with which an event will occur, another involving its propensity to occur, and a third involving our confidence it will occur, he describes how, given a set of identical facts, each of these interpretations can lead to different numbers for the probability. He also describes how each interpretation has its problems.

He then proposes what he calls the “degree of support” interpretation. This recognizes that probabilities are relative to the information we consider. That is, when we express a probability of X, we are expressing that probability in relation to some set of data. If we take away or add new data, the probability will change.

This largely matches my own intuition of probability: that it is always (or almost always) relative to a certain perspective, a particular vantage point. If I ask what the probability of rain tomorrow is, you can give an answer before looking up the weather report, based on what you know at that moment. It might not be a particularly precise probability, but you can still estimate it from where you live and your experience of how often it typically rains there. Of course, once you look at the weather report, you’ll likely adopt the probabilities it provides (unless the forecast where you live has historically been unreliable).

(One possible exception to probabilities being relative is quantum physics. In non-deterministic interpretations, quantum probabilities might be objective (although that depends on your interpretation of the interpretation 🙂 ). But in deterministic interpretations, they would still be relative to our perspective.)

Every so often I do a post discussing the probability of something, such as the probability of there being other intelligent life in our galaxy. It’s not unusual for someone to comment that we don’t know enough to estimate any probabilities, and that the whole exercise is therefore pointless. But if probabilities are relative, this position is wrong.

Of course, my estimated probabilities may be wrong, but if so, the correct way to address that is in relation to the data being considered: offer additional data that may change the probability, or point out why some (or all) of the data should not be considered when making the estimate.

But if we have a perspective, then we have the ability to estimate probabilities from that perspective. If our perspective is one of complete ignorance, the probability should reflect it. Maybe we can only say the probability of something being true is 50%, that is, that it has an equal chance of being true or false. Or if the proposition is one of ten equally plausible outcomes, then each might be more along the lines of 10% probable.

But it doesn’t take much knowledge to shift a probability. In 1600, a natural philosopher could probably rationally argue that, based on what was then known, the probability of the heliocentric model of the solar system being true was only 50%. But after Galileo’s blurry telescopic observations a few years later, along with confirmations by other observers, the probability shifted dramatically, so much so that by Newton’s time in the latter part of that century, the probability had shot up much higher.

Does that mean the natural philosopher in 1600 was wrong in his probabilities? No, because relative to his perspective at the time, those were the probabilities. He would only have been wrong if he hadn’t used the data available to him in making his estimate, had used it incorrectly, or had insisted due to ideological commitments that the probability was zero.

So we’re always in a position to estimate probabilities. We may not be in a position to do so *precisely*, since that usually requires a lot of data, but the argument that we should never try strikes me as invalid. The only valid argument is whether or not we’re doing it correctly based on what is then known.

Unless of course I’m missing something?

Probabilities reflect our own uncertainties, rather than intrinsic uncertainties (quantum mechanics excluded). They are a neat mathematical way of giving rigour to vagueness.

That’s an excellent way of putting it!

Probability is tricky. For one, it strongly depends on how much we know. E.g. if you consider the probability that the plane you are about to board may crash, and you base your opinion only on statistical data, you may come up with a number of one in a million or so. But if you know that the plane has faulty gear, the probability will be far higher.

But, to make things more complicated, you also need to know how much you can trust the data you have. I.e., the probability that the data itself is incorrect, which is expressed as a “confidence level”.

An interesting example is a number from statistical quality control called “LTPD” (lot tolerance percent defective). If you test 231 samples and have 0 failures, you can expect no more than 1% defects in the lot with a “confidence level” of 90% (i.e. a 10% probability that there will be more than 1% defects). But if you want to be 99% confident, you can only guarantee no more than 2% defects. And if you are OK with a 30% probability of being wrong, you can guarantee no more than 0.5% defects. You can play with these numbers here: https://www.maximintegrated.com/en/design/tools/calculators/general-engineering/ltpd.cfm
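Those numbers fall out of the standard zero-failure binomial model. A minimal sketch (assuming that model; the linked calculator may round slightly differently):

```python
import math

def ltpd_bound(n_samples: int, confidence: float) -> float:
    """Upper bound on the lot defect rate after testing n_samples
    with zero failures, at the stated confidence level.

    If the true defect rate were p, the chance of seeing zero
    failures in n independent samples is (1 - p)**n.  The bound is
    the p at which that chance drops to alpha = 1 - confidence.
    """
    alpha = 1.0 - confidence
    return 1.0 - alpha ** (1.0 / n_samples)

# The numbers from the comment: 231 samples, zero failures.
for conf in (0.90, 0.99, 0.70):
    bound = ltpd_bound(231, conf)
    print(f"{conf:.0%} confident: at most {bound:.2%} defective")
```

With 231 samples the three confidence levels give bounds of roughly 1%, 2%, and 0.5%, matching the figures above.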

And, of course, there is an example of Bayes’ theorem applied to medical tests, well explained here: https://youtu.be/CKyD5seCt7k

The matter is actually more complicated when the probabilities of a false negative and a false positive are different.

Good point about confidence levels. I thought about bringing them up, but felt like it would have clouded the point. But they’re important if we’re trying to be rigorous.

The issue, of course, is that all knowledge is ultimately probabilistic. We never have 100% certitude about anything. So when we use that knowledge to estimate probabilities, we end up with probabilities of the probabilities, confidence levels, and all the rest.

Probability is a book I leave firmly closed!

Are you 100% sure about that?

Relatively.

It’s starting to sound subjective at that point. Which is fine; a life form has to work out its odds (if it works out anything) for its survival (or what it thinks is involved in its survival). But yeah, it comes from a perspective, even though the word ‘probability’ makes it sound like absolute knowledge. Which is what it always feels like, of course, because without perspective the subjective cannot be distinguished from the objective (and the brain starts out with a default zero perspective on itself). Each interpretation of the same set of facts can either be taken as slowly starting to map oneself from more of a position outside of oneself, or it can be taken as some kind of absolute knowledge, with the discrepancy between interpretations written off.

I think you can make it less subjective by focusing on the data. For example, weather forecasts are expressed as probabilities using meteorological data that is prescribed and standard. Any meteorologist using that data and the standard techniques will produce the same forecast. The measure of those standard models is the accuracy of their predictions: if they say there’s an 80% chance of rain given certain measurements, then we’d expect any sample of 10 days with those conditions to have about 8 days where it rained.
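That calibration check can be sketched with a simulated forecast log (hypothetical data, assuming a well-calibrated forecaster for illustration):

```python
import random

random.seed(0)

# Hypothetical log of (forecast probability of rain, did it rain?).
# We simulate a well-calibrated forecaster: on "80%" days it really
# rains with probability 0.8, and so on for the other bins.
log = []
for _ in range(10_000):
    p = random.choice([0.2, 0.5, 0.8])    # the forecast issued that day
    log.append((p, random.random() < p))  # whether it actually rained

# Calibration check: among days with the same forecast, how often did it rain?
for p in (0.2, 0.5, 0.8):
    outcomes = [rained for pred, rained in log if pred == p]
    print(f"forecast {p:.0%}: rained {sum(outcomes) / len(outcomes):.1%} "
          f"of {len(outcomes)} days")
```

For a calibrated forecaster each bin’s observed frequency should sit close to the stated probability; a systematic gap is exactly the kind of historical unreliability that would justify discounting the forecast.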

But anytime someone says there is probability X that Y will happen, we should expect them to provide the data and model they’re using to make that statement.

Reblogged this on General Neuroscience.
