What is Bayes’s Theorem?

Without getting too deep into the maths, Bayes’ theorem basically tells us how to update our beliefs based on the new evidence we see. When you dive into Bayes’ theorem, you’re going to come across this well-known equation: p(A|B) = p(A)*p(B|A)/p(B), but I prefer to write it like this: p(H|E) = p(H)*p(E|H)/p(E). Here, H stands for “hypothesis” and E is for “evidence”. Let’s break it down.
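If it helps to see that equation as code, here's a minimal sketch in Python; the function and argument names are just my own labels, not anything standard:

```python
# A direct transcription of p(H|E) = p(H) * p(E|H) / p(E).
# The names are illustrative, not from any library.
def posterior(p_h, p_e_given_h, p_e):
    """Update a prior belief p(H) into a posterior p(H|E)."""
    return p_h * p_e_given_h / p_e
```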

The p(H|E) part is what we call the posterior probability. It’s all about our belief **after** we take a look at the evidence (that’s why the “|E” part is there). This is the number we’re trying to figure out.

Then, there’s p(H), which is the prior probability. This one tells us about our belief **before** we’ve seen any evidence.

So, we start with the prior and we want to get to the posterior, right? To do this, we need a kind of "adjuster", and that's where p(E|H)/p(E) comes in. This is the likelihood ratio that modifies our belief. Basically, it's about how likely we are to see this evidence if we assume the hypothesis is true (which, to be clear, we can never know for certain, but we assume it for the sake of the calculation). Say the evidence would occur just as often whether the hypothesis is true or not, meaning p(E|H) = p(E). In this case, the likelihood ratio would be 1, so your prior belief doesn't change and your posterior probability stays the same. But if the evidence is twice as likely to show up when the hypothesis is true, meaning p(E|H)/p(E) = 2, then you double your prior belief.

Here’s where it gets a bit tricky though: doubling your prior doesn’t always lead to a major shift in the posterior. Like, if your prior belief was 1%, even with a likelihood ratio of 2, your posterior only bumps up to 2%. So, it’s not always as significant as it might seem.
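To check that arithmetic quickly (the 1% prior and the ratio of 2 are the hypothetical numbers from the paragraph above):

```python
prior = 0.01                # a hypothetical 1% prior belief
likelihood_ratio = 2.0      # p(E|H) / p(E): evidence twice as likely if H is true

posterior = prior * likelihood_ratio
print(posterior)            # 0.02 -> doubled, but still only a 2% belief
```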

Let's look at a practical example: You're a doctor testing for a blood-borne disease. From past experience, you know that 2 out of every 100 people in your population have the disease. Everyone who actually has the disease tests positive, so the test never misses a true case. However, it also returns a positive result for 5 out of every 100 people who don't have the disease. Now, say a random person tests positive. What's the probability that this person actually has the disease?

Sure, you could just crunch the numbers and get the answer, but let’s try to understand the intuition behind it. You start with your prior belief before the test, which is 2%. This means, without any additional information, there’s a 2% chance that this person has the disease.

Then, you run the test and it comes back positive. Given what you know about the test, you can expect about 7% of all tests to come back positive: 2% true positives, plus false positives from 5% of the remaining 98%, which is about 4.9%, for a total of 6.9%, call it 7%.

But what you're really interested in is how much you should adjust your initial belief based on this new piece of evidence. So, you calculate the likelihood ratio. If the person really does have the disease, they're guaranteed to test positive, so p(E|H) = 1. That makes this result about 14.29 times more likely than getting a positive test by chance, because p(E|H)/p(E) = 1/0.07 ≈ 14.29 (closer to 14.5 if you keep the exact 6.9%).

But we're not done yet. Even though a positive test is about 14 times more likely for a person with the disease, it doesn't mean they're guaranteed to have it. You've got to remember that this disease is pretty rare to start with. So, even with a positive test result, the posterior probability only jumps to 2% × 14.29 ≈ 28.6% (about 29% without the rounding). That's a lot higher than 2%, but still less than a coin flip.
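If you'd rather let the computer do the crunching, here's the same example as plain Python arithmetic (no rounding along the way, which is why the final number comes out at about 29% rather than 28.6%):

```python
# The numbers come straight from the example: 2% prevalence,
# every sick person tests positive, 5% false-positive rate.
p_disease = 0.02                    # prior, p(H)
p_pos_given_disease = 1.0           # p(E|H): a sick person always tests positive
p_pos_given_healthy = 0.05          # false-positive rate among healthy people

# Law of total probability: p(E) = p(E|H)p(H) + p(E|not H)p(not H)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))    # 0.069, roughly 7%

likelihood_ratio = p_pos_given_disease / p_pos       # about 14.5
posterior = p_disease * likelihood_ratio             # about 0.29

print(f"p(positive) = {p_pos:.3f}")                  # 0.069
print(f"posterior   = {posterior:.3f}")              # 0.290
```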

In practice, we’d likely run more tests, and each new posterior probability would become the prior probability for the next test.
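Here's a small sketch of that sequential updating, assuming (purely for illustration) that the repeated tests are independent and share the characteristics of the test above:

```python
# Each test's posterior becomes the prior for the next test.
def update(prior, p_pos_given_disease=1.0, p_pos_given_healthy=0.05):
    p_pos = p_pos_given_disease * prior + p_pos_given_healthy * (1 - prior)
    return prior * p_pos_given_disease / p_pos

belief = 0.02                        # start from the 2% prevalence
for test in range(3):                # three positive tests in a row
    belief = update(belief)
    print(f"after positive test {test + 1}: {belief:.3f}")
# after positive test 1: 0.290
# after positive test 2: 0.891
# after positive test 3: 0.994
```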

I hope this explanation and example show that Bayes' theorem is all about adjusting our beliefs in light of new evidence.
