# How can a risk calculation have a false positive?

April 2, 2009 8:48 AM

How can an estimate of a health risk (eg 1 in 500) have a false positive?

Yesterday we were told we have a very high risk of Down Syndrome based on the Nuchal translucency test. We are currently doing some reading and biding our time, waiting till we can have the maternal blood test done for a better estimate of risk. (I'm including our current situation here to clarify the question, but am not looking for discussion on the various other tests that are available, just the weird stats question below)

Wikipedia (and other information we've come across) says "Nuchal scanning alone detects 62% of all Down Syndrome with a false positive rate of 5.0%"

But if your risk is provided post-scan/screening test as 1:50 or 1:30,000 (or whatever) - how can you have a false positive? Or are "false positives" only generated when the risk is very very high - eg when the measurement is such that doctors say a pregnant woman has a 50:50 chance, or nearly 100% chance of Down Syndrome? For that matter, how does it claim to detect 62% of cases - again, when they only identify a risk? (If you say someone has a 1:50,000 chance, does that mean you have identified that case if it turns out to be true?)

I'm sure this must be some statistical rule but it doesn't make any sense to us!


I'm no statistician but my reading is that the 62% is the odds that IF Down syndrome exists, it will be detected.

The 5% is the odds that IF Down Syndrome does not exist, it will be falsely reported as detected.

So, the Hit Rate is 0.62. The False Alarm rate is 0.05.

If we assume your prior probability is 0.01 (one out of a hundred women) then that means according to my handy Bayesian calculator, that if the test says positive, then your actual probability of having Downs is 11%.

So, despite having a positive result, the odds would be 89% that you DON'T have Downs. Further tests would help narrow this down. Please don't use my numbers as definitive. I'm only guessing on the prior probability, which is an important factor.
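The arithmetic behind that 11% figure is just Bayes' rule; here it is as a short Python sketch (the 1% prior is vacapinta's guess, not a medical figure):

```python
# Bayes' rule: P(Downs | positive) =
#   P(positive | Downs) * P(Downs)
#   -------------------------------------------------------------
#   P(positive | Downs) * P(Downs) + P(positive | healthy) * P(healthy)

hit_rate = 0.62          # P(positive | Downs): the detection rate
false_alarm_rate = 0.05  # P(positive | healthy): the false positive rate
prior = 0.01             # assumed prior probability of Downs (a guess!)

posterior = (hit_rate * prior) / (
    hit_rate * prior + false_alarm_rate * (1 - prior)
)
print(round(posterior, 3))  # 0.111, i.e. about 11%
```

Swap in a different prior and the posterior moves a lot, which is exactly why the guessed prior is the weak link in the calculation.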

posted by vacapinta at 9:14 AM on April 2, 2009 [1 favorite]


Rate of false positives has nothing to do with the probability of the condition and everything to do with the accuracy of the test. Essentially (and using the classic definition of Type 1 error), a 5.0% false positive rate means that for every 20 people who don't have the condition and are tested, one will test positive. The probability of having the condition could be 1 in 2 or 1 in 2 million, but the false positive rate is only concerned with those who don't actually have the condition to begin with.
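In code terms, the 5% conditions only on the people who don't have the condition (a minimal sketch with a made-up group size):

```python
# 1,000 tested people who do NOT have the condition; a 5% false positive
# rate means about 1 in 20 of them will still test positive, regardless
# of how rare or common the condition is overall.
unaffected_tested = 1_000
false_positive_rate = 0.05

expected_false_positives = unaffected_tested * false_positive_rate
print(expected_false_positives)  # 50.0 -- one per 20 unaffected people
```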

On preview, vacapinta has it.

posted by The Michael The at 9:18 AM on April 2, 2009


The test measures the thickness of the nuchal translucency. If the thickness is greater than expected, then the pregnancy is more likely to be affected. The more it deviates from the expectation, the greater the chance of an affected pregnancy. If it deviates just slightly, then there is not much chance of an affected pregnancy. There is always some uncertainty as to how likely it is that the pregnancy is affected, but depending on the magnitude of the deviation, you can tell how likely it is.

The uncertainty must be quantified. One way of quantifying the uncertainty is to use an odds ratio, which is far more informative for an individual test result, and easier for a layperson to interpret.

Another manner of quantifying the uncertainty is to use a binary classification—choose some point on the thickness scale at which you will say that all values below are negatives and all values above are positive. Since the results can be ambiguous, you will obviously get some false positives (FP) and false negatives (FN). If you set this point such that the false positive rate (FP/(FP+TN)) is constant, you might then see what the true positive rate or detection rate (TP/(TP+FN)) is at that point. The better the test is at discriminating the two possibilities, the higher the detection rate will be for a fixed false positive rate. This is a way of measuring the test's statistical power. A more powerful and accurate test would have a higher detection rate at the fixed 5 percent false positive rate.

Wikipedia has more discussion about the receiver operating characteristic curves that are used to analyze the performance of these tests, which may be helpful in understanding the problem.
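The fixed-cutoff idea can be sketched with invented numbers (the distributions below are made up purely for illustration; they are not real nuchal measurements):

```python
import random

random.seed(0)

# Hypothetical thickness measurements: healthy pregnancies cluster lower,
# affected pregnancies cluster higher, with overlap (made-up distributions).
healthy = [random.gauss(1.5, 0.5) for _ in range(10_000)]
affected = [random.gauss(2.5, 0.7) for _ in range(10_000)]

# Pick the cutoff so that 5% of healthy cases fall above it,
# i.e. fix the false positive rate at 5%.
cutoff = sorted(healthy)[int(0.95 * len(healthy))]

# The detection rate is then whatever fraction of affected cases
# that same cutoff happens to catch.
detection_rate = sum(m > cutoff for m in affected) / len(affected)
print(f"cutoff={cutoff:.2f}, detection rate={detection_rate:.0%}")
```

Sliding the cutoff up or down trades false positives against missed cases; plotting detection rate against false positive rate over all cutoffs gives exactly the ROC curve mentioned above.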

You should not use vacapinta's calculations. Really the only number you should be concerned with is the odds ratio that was already given to you. It should incorporate all of the information about the uncertainty of the test.

posted by grouse at 9:18 AM on April 2, 2009 [2 favorites]


If you look at the citation for that claim in Wikipedia, it leads you to an abstract of the article from which the claim is derived. That abstract says:

*Detection and false-positive rates were computed in two ways: statistical modelling and directly. The cut-off risk was 1 in 250 at term.*

posted by iminurmefi at 9:21 AM on April 2, 2009 [1 favorite]

A nuchal translucency test performed by a hospital/clinic doesn't give a yes or no answer; it gives a **probability** of whether the foetus has Down's syndrome. If I'm understanding the OP correctly, the question is: Provided they were given a number like 1/500, what does a "false positive" mean?

IANAstatistician, but an appropriate analogy might be a news report of a poll whose error is +/- 3%, 19 times out of 20. That is, in 95% of cases (19/20) the error will fall within 3%; the 95% is the confidence level.

Therefore, in your case, they might be 95% confident that the risk is 1 in 500. Don't quote me on the numbers, though.

posted by Simon Barclay at 10:05 AM on April 2, 2009 [1 favorite]


Highlighting grouse's last sentence for truth: **Really the only number you should be concerned with is the odds ratio that was already given to you. It should incorporate all of the information about the uncertainty of the test.**

These 62% and 5% numbers can be confusing to interpret -- that's why you're getting all these long explanations here. The odds ratio (1:500 or whatever) has done all the interpretation for you. It should be read exactly as it seems. If the ratio is 1:500, then for 500 couples with characteristics and test results like yours, 1 baby will have Down Syndrome.

posted by wyzewoman at 10:43 AM on April 2, 2009


Hopefully I can make the probabilities easy to understand. It's a bit mathsy, but bear with me!

A "Detection rate" of 62% means that 62% of all embryos with Downs will have a positive result from this test.

A "False positive rate" of 5% means that 5% of all

So if we screened 100 healthy embryos and 100 with Downs, we would expect:

--- 62 Downs embryos correctly found

--- 38 Downs embryos missed

--- 95 healthy embryos correctly cleared

--- 5 healthy embryos incorrectly marked as Downs carriers.

So that's what the false positive rate is. However -- and this is the bit many people find hard to grasp -- *this is almost meaningless unless we know how common Downs is in the population*.

So here's the last step: If Downs syndrome actually occurs once in 800 embryos*, that changes the numbers. Let's look at 10,000 tests to make the maths easier.

In 10,000 embryos we'd expect roughly 13 to actually have Downs and 9987 to be healthy (because [1 in 800]*10,000 = 12.5). So in the tests we expect:

--- 8 Downs embryos correctly found (62% of 13)

--- 5 Downs embryos missed (38% of 13)

--- 9488 healthy embryos correctly cleared (95% of 9987)

--- **499 healthy embryos incorrectly marked as Downs carriers** (5% of 9987)

So for this test, the fact that you have a positive result means that the chances of your embryo actually carrying Downs is something like 1.6%, or 1.6:100 (8 true Downs embryos among 507 positive results). The ratio that your doctor gave you reflects this number, and is the only one you need to worry about.
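To make that 10,000-test arithmetic concrete, here is the same calculation as a short Python sketch (the 1-in-800 prevalence is the rough illustrative figure used above):

```python
import math

n = 10_000
prevalence = 1 / 800      # rough illustrative figure, not a personal risk
detection_rate = 0.62
false_positive_rate = 0.05

downs = math.ceil(n * prevalence)   # 13 affected embryos (12.5, rounded up)
healthy = n - downs                 # 9987 healthy embryos

true_positives = round(detection_rate * downs)          # 8 correctly found
false_negatives = downs - true_positives                # 5 missed
false_positives = round(false_positive_rate * healthy)  # 499 false alarms
true_negatives = healthy - false_positives              # 9488 correctly cleared

# Of all the positive results, the fraction that are genuinely affected:
p_downs_given_positive = true_positives / (true_positives + false_positives)
print(f"{p_downs_given_positive:.1%}")  # prints 1.6%
```

Note how the 499 false alarms from the huge healthy group swamp the 8 true detections; that imbalance is the whole reason a "positive" screen can still mean a low absolute risk.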

If the hospital has given you a number different from mine, trust them of course! The proportion of embryos carrying Downs varies widely with parents' age, health and, slightly, race. So instead of my 1/800, they'll know enough about you to start from a more accurate probability.

As a brief summary:

Detection rate="Sensitivity"=proportion of actual positives that get marked positive by the test.

False positive rate=1-"Specificity"=proportion of actual negatives that get marked positive by the test.

Odds ratio=chance that a sample with a positive result really is positive. Slightly counter-intuitive, but it **is the only number that you, as a patient, need to worry about**.

*This is an approximate figure: the real probability will vary with your and your partner's age, overall health and race, in roughly that order of importance. I know some background here but I am NOT a medical doctor.

posted by metaBugs at 11:03 AM on April 2, 2009 [6 favorites]

A "Detection rate" of 62% means that 62% of all embryos with Downs will have a positive result from this test.

A "False positive rate" of 5% means that 5% of all

*healthy*embryos will also have a positive result from this test.So if we screened 100 healthy embryos and 100 with Downs, we would expect:

--- 62 Downs embryos correctly found

--- 38 Downs embryos missed

--- 95 healthy embryos correctly cleared

--- 5 healthy embryos incorrectly marked as Downs carriers.

So that's what the false positive rate is. However -- and this is the bit many people find hard to grasp --

*this is almost meaningless unless we know how common Downs is in the population*.So here's the last step: If Downs syndrome actually occurs once in 800 embryos*, that changes the numbers. Let's look at 10,000 tests to make the maths easier.

In 10,000 embryos we'd expect roughly 13 to actually have downs and 9987 to be healthy (because [1 in 800]*10,000 = 12.5). So in the tests we expect:

--- 8 Downs embryos correctly found (62% of 13)

--- 5 Downs embryos missed (38% of 13)

--- 9488 healthy embryos correctly cleared (95% of 9987)

---

**499 healthy embryos incorrectly marked as Downs carriers**(5% of 9987)So for this test, the fact that you have a positive result means that the chances of your embryo actually carrying Downs is something like 1.6%, or 1.6:100 (8 true Downs embryos among 507 positive results). The ratio that your doctor gave you reflects this number, and is the only one you need to worry about.

If the hospital has given you a number different from mine, trust them of course! The proportion of embryos carrying Downs varies widely with parents' age, health and, slightly, race. So instead of my 1/800, they'll know enough about you to start from a more accurate probability.

As a brief summary:

Detection rate="Sensitivity"=proportion of actual positives that get marked positive by the test.

False positive rate=1-"Specificity"=proportion of actual negatives that get marked positive by the test.

Odds ratio=chance that a sample with a positive result really is positive. Slightly counter-intuitive but

**is the only number that you, as a patient, need to worry about**.*This is an approximate figure: the real probability will vary with your and your partner's age, overall health and race, in roughly that order of importance. I know some background here but I am NOT a medical doctor.

posted by metaBugs at 11:03 AM on April 2, 2009 [6 favorites]

Please make them do another NT scan. Ours was positive, the head of maternal fetal medicine told us that we had a 95% chance of having a DS child (talk about giving out bad info). I went home and started researching and realized that they had not been able to get a good side view so the tech had done a reading from the top of my son's head. You can't tell if they are bending their neck forward so that is not a good way to get a measurement. Instead of having me come back another day to try and get a good reading, they just went with what they got and gave us that news. We went home devastated but determined to get the best medical care for our child. After realizing the mistake, I threw 400 fits, made them repeat and the next test was perfect. The Dr was very smug and said that it was "refreshing to see patients who take a hand in their own care." He was a pompous ass and pissed that I second guessed him. Our son was born 27 weeks later with no DS, healthy and squalling.

Metabugs has a great explanation of the stats but I wanted to tell you our story. They should have offered to repeat the scan and if not, make them. You have to have the best information when dealing with the health of your child. Be sure that you are comfortable that the scan was done correctly. There is lots of room for error with this test. Best of luck...

posted by pearlybob at 2:00 PM on April 2, 2009 [1 favorite]


This thread is closed to new comments.