I'm having trouble understanding likelihood ratios and diagnostic tests.
August 1, 2013 8:48 AM   Subscribe

I'm struggling to understand likelihood ratios (LR) in the context of diagnostic tests, and why a positive LR is influenced by the sensitivity of the test.

I know that:
      1. It's a tool to get from a pre-test probability to a post-test probability.
      2. It is defined as the (percentage of people with the disease who test positive) divided by (the percentage of people without the disease who test positive).
      3. Or, alternatively, a positive LR is equal to sensitivity / (1 - specificity) (a quick numeric sketch follows this list).
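
A quick numeric sketch of point 3, with made-up numbers (nothing clinical, just to pin down the formula):

```python
# Hypothetical test characteristics, invented purely to illustrate the formula.
sensitivity = 0.90   # P(test positive | disease present)
specificity = 0.80   # P(test negative | disease absent)

positive_lr = sensitivity / (1 - specificity)    # definition 3 above
negative_lr = (1 - sensitivity) / specificity    # the companion ratio for a negative result

print(round(positive_lr, 2))    # 4.5   -> a positive result multiplies the pre-test odds by 4.5
print(round(negative_lr, 3))    # 0.125 -> a negative result shrinks the pre-test odds eightfold
```
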
What I don't understand is: why is a positive LR dependent on the sensitivity of the test?

For example, let's say I want to diagnose someone with Hairy Face Syndrome, and my diagnostic test is that the clouds open up and God comes down from the heavens and tells me "this man has Hairy Face Syndrome!"

This is, understandably, a very *specific* test, but a very poorly sensitive one.

The way I've always understood likelihood ratios is that the positive likelihood ratio helps you interpret what to do with a positive result, not how useful searching for a positive result is likely to be.

Therefore, why does the low sensitivity of waiting for an act of God negatively impact the likelihood ratio and, consequently, your interpretation of it when it does, miraculously, happen?
posted by cacofonie to Science & Nature (6 answers total)
 
Best answer: I wonder if the issue is that you're conceptualizing sensitivity and specificity in a way that isn't fully correct? You say your word-of-god test is very specific but not very sensitive. Why? That would mean that god NEVER comes down and declares an unhairy person hairy (no false positives). But for it to be poorly sensitive, it means that there are lots of hairy-faces walking around who, for whatever reason, god decides not to talk about (lots of false negatives).

In other words, specificity and sensitivity are really just mathematical expressions of the rates of false positives and false negatives, NOT a reference to how 'targeted' or 'narrowly-focused' a test is.

And because you can have a test that is very sensitive but not specific, or both sensitive AND specific, etc., etc., LR is just a way to get both attributes (sensitivity and specificity) into one number so you can make easy judgments. All the decisions you make based on an LR, you could also make by looking at sensitivity and specificity - it's just a way of summarizing them. And because it's just a way of summarizing them, it varies when they vary.
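
A minimal sketch of that summarizing role, with invented numbers (none of them from this thread): very different sensitivity/specificity trade-offs each collapse into a single LR+, and the LR+ moves whenever either input moves.

```python
def positive_lr(sensitivity, specificity):
    """LR+ = P(positive | disease) / P(positive | no disease)."""
    return sensitivity / (1 - specificity)

# Hypothetical tests, numbers invented for illustration only.
print(positive_lr(0.95, 0.60))   # very sensitive, not very specific -> ~2.4
print(positive_lr(0.60, 0.95))   # very specific, not very sensitive -> ~12
print(positive_lr(0.95, 0.95))   # strong on both                    -> ~19
```

Note that the middle case (poor sensitivity, high specificity) still has a big LR+, because (1 - specificity) is small too; low sensitivity only drags the LR+ down when the false-positive rate doesn't shrink along with it.
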
posted by Ausamor at 9:22 AM on August 1, 2013


This link might help explain a bit.
posted by inertia at 11:25 AM on August 1, 2013


Best answer: I think your biggest impediment in understanding the positive LR is that you have chosen a poor example. In your example of God intervening by yelling down a proclamation, the assumption is that no matter what the condition of the patient, you can expect a negative test result practically all the time. In the example, the sensitivity is practically zero and the specificity is practically 1, so (1 - specificity) is practically zero. Your positive LR is then a ratio of two fantastically small probabilities, so it fails to capture the intuition of the problem.
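
To put rough numbers on that (all invented; the only claim is arithmetic): when both sensitivity and (1 - specificity) are practically zero, the LR+ is a ratio of two tiny numbers, and it depends entirely on how those tiny numbers compare, which the thought experiment never specifies.

```python
# Invented near-zero values for the word-of-god test.
sensitivity = 1e-9   # god almost never speaks up, even about the truly hairy-faced

# How often does god falsely accuse the unhairy? The thought experiment doesn't say.
for one_minus_specificity in (1e-9, 1e-12, 0.0):
    try:
        print(sensitivity / one_minus_specificity)   # 1.0, then ~1000, then an error
    except ZeroDivisionError:
        print("LR+ is infinite (no false positives at all)")
```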

The worked example grid near the bottom of the Wikipedia article has a much more realistic scenario. One thing to notice is that sensitivity and specificity (the columns) are both functions of the test alone, not of the prevalence of the condition in the people being tested. But the PPV and NPV (going across) depend on the prevalence of the condition in the people being tested (the prior probability of having the condition). In the example, the probability that a person has the condition before screening is 1.48%.

The positive LR is most easily understood (to me) like this. Your positive test results come from both true positives and false positives.

Borrowing numbers from the example on Wikipedia, let's say you mix together 100 people who had the condition and 100 people who did not, and choose one person at random. Your odds of having the condition versus not having it are 100:100, or 1:1. If you test this person and get a positive result, what are the new odds they have the condition? Well, for the 100 people with the condition we get 67 true-positive results (on average), and for the 100 people without the condition we expect 9 false positives. So, with a positive result, the odds our patient has the condition are E[#TP]/E[#FP], or 67/9.

Notice that the odds the patient has the condition are equal to the positive likelihood ratio BUT ONLY IF we start with an exactly equal degree of belief that the patient has the condition versus does not have the condition (the 100:100 starting point). Clearly, the more sensitive the test is, the greater the number of true positive results we will see coming from those 100 people with the condition, and the more specific the test is, the fewer false positives we will see coming from the 100 people without the condition. The equation for the problem above would be (100/100) * (67/9) = (67/9). To convert the final answer to a probability that the person has the disease, we would have to compute (67/9) = x/(1-x) and solve for x (basically converting from odds to a probability).
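
Here is that computation as a small script (just a sketch; the 0.67 and 0.09 are the 67-per-100 and 9-per-100 figures from the Wikipedia example quoted above):

```python
# Start with 100 people with the condition and 100 without: prior odds of 1:1.
with_condition, without_condition = 100, 100

sensitivity = 0.67           # expected true positives per 100 people with the condition
false_positive_rate = 0.09   # (1 - specificity): expected false positives per 100 without

expected_tp = with_condition * sensitivity               # 67
expected_fp = without_condition * false_positive_rate    # 9

prior_odds = with_condition / without_condition          # 1.0
posterior_odds = prior_odds * (expected_tp / expected_fp)
posterior_prob = posterior_odds / (1 + posterior_odds)   # solve odds = x / (1 - x) for x

print(round(posterior_odds, 2))   # ~7.44, i.e. 67:9
print(round(posterior_prob, 2))   # ~0.88
```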

The value of the pLR is that it is uncoupled from the starting odds, which in the first example were 100:100.

The computation can be repeated imagining that we are drawing people from the general population in which a person has a 1.48% probability of having the condition. Setting up the equation in terms of odds, we have (1.48/98.52) * (67/9) = x/(1-x). Solving for x should yield ~10%, the PPV of the test.
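
And the same arithmetic at the 1.48% pre-test probability, again as a sketch (only the odds-to-probability conversion is new here):

```python
prevalence = 0.0148
prior_odds = prevalence / (1 - prevalence)   # 1.48 : 98.52

positive_lr = 0.67 / 0.09                    # sensitivity / (1 - specificity)

posterior_odds = prior_odds * positive_lr
posterior_prob = posterior_odds / (1 + posterior_odds)

print(round(posterior_prob, 2))   # ~0.1, i.e. the ~10% PPV of the test
```
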
posted by Maxwell_Smart at 12:36 PM on August 1, 2013 [1 favorite]


Response by poster: Wow, thanks for the answers, they're so helpful! Thanks especially for pointing out the misconception. I often take things to extremes to force them to make sense, and that's how I survived undergraduate math, but sometimes it gets me into trouble.

It's a bit clearer with your explanation, Maxwell, but maybe you could help with the example that got me thinking about this to begin with:

Someone on the wards made the following statement:

" When auscultating for severe aortic stenosis, the absence of a murmur radiating to the right sternal border has a negative likelihood ratio of 0.1. However, I don't find it useful since its so rare to find a murmur that doesn't radiate"

The second sentence implies that the test has a low sensitivity. But sensitivity is a core part of the LR formula, so that seems to contradict the first sentence, which is where my confusion began.
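
To make the first sentence concrete for myself (the pre-test probabilities are invented; only the LR of 0.1 comes from the quote), here is what that number does when you do get to apply it:

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Convert probability to odds, apply the LR, convert back to a probability."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

negative_lr = 0.1   # quoted above for the absence of a radiating murmur
for pre in (0.2, 0.5, 0.8):   # invented pre-test probabilities of severe aortic stenosis
    print(pre, "->", round(post_test_probability(pre, negative_lr), 3))
# 0.2 -> 0.024, 0.5 -> 0.091, 0.8 -> 0.286
```
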
posted by cacofonie at 2:21 PM on August 1, 2013


Response by poster: I guess the other takeaway point is that,

Even if I'm sitting there with God saying that this patient has HFS, the fact that he has allowed billions of other cases of HFS to pass by without speaking should influence me to be more skeptical.
posted by cacofonie at 2:24 PM on August 1, 2013


"The second sentence implies that the test has a low sensitivity".....

Not necessarily - it could just have a low prevalence.
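
A rough sketch of that distinction (every number invented): hold the test's sensitivity and specificity fixed, and how often the negative finding turns up still shifts with the prevalence of the disease in the people you're examining, so rarity of the finding by itself doesn't pin down sensitivity.

```python
def rate_of_negative_finding(prevalence, sensitivity, specificity):
    """P(negative result) = P(disease) * (1 - sens) + P(no disease) * spec."""
    return prevalence * (1 - sensitivity) + (1 - prevalence) * specificity

# Invented characteristics, held fixed; note they give LR- = (1 - 0.95) / 0.5 = 0.1.
sens, spec = 0.95, 0.5
for prevalence in (0.1, 0.3, 0.6):
    print(prevalence, "->", round(rate_of_negative_finding(prevalence, sens, spec), 3))
# 0.1 -> 0.455, 0.3 -> 0.365, 0.6 -> 0.23
```
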
posted by Ausamor at 6:56 AM on August 2, 2013

