who could possibly be against comparing effectiveness?
November 17, 2010 3:44 PM
Evidence-based medicine/comparative effectiveness skeptics?
Any serious, scholarly critiques of the movement toward evidence-based medicine and comparative effectiveness research? Maybe examining situations like these:
- Your doctor wants to try a treatment that, based on experience, she thinks may work for you, but research shows that it helps only 3 percent of patients. Turns out that those 3 percent correspond to people of your age/gender/ethnicity/body type (hence your doctor's intuition.)
- Related -- a treatment prolongs life by 10 years for 5 percent of the population and not at all for the rest. This is reported as "6 extra months of life."
- Screening for a disease doesn't improve life expectancy because people who test negative become complacent about their health and discontinue other healthy behaviors. Those who continue their healthy behaviors would have benefited from the screening.
I'd also be interested in something from the other side, explaining why the above situations would never happen.
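For concreteness, the arithmetic behind the second bullet as a small Python sketch (the 5% and 10-year figures are purely illustrative, as above):

responders = 0.05                      # 5% of patients benefit (illustrative)
gain_months_per_responder = 10 * 12    # 10 years of added life, in months
average_gain = responders * gain_months_per_responder
print(average_gain)                    # 6.0 -- the headline "6 extra months of life",
                                       # even though 95% of patients gain nothing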
Best answer: Parachute use to prevent death and major trauma related to gravitational challenge.
That paper is hilarious, but it's also a solid critique of overly zealous demand for evidence. Is that the kind of thing you are looking for? It seems to me that the examples you are giving are simply calls for more conscientious applications of the principles of evidence-based medicine.
posted by gmarceau at 4:12 PM on November 17, 2010 [2 favorites]
What proj said.
proj: Is the answer 99%? Examples like yours keep me up all night wondering if I got it right, and I would much rather enjoy my Les Mis concert instead...
posted by SMPA at 4:13 PM on November 17, 2010
Best answer: Your examples are things that are gradually being addressed within EBM, either by the move towards personalised medicine, or by efforts to measure effectiveness, not just efficacy.
You might like to start reading papers by JP Ioannidis, who has been gradually highlighting the failures in current EBM, and who was recently profiled at length in the Atlantic.
posted by roofus at 4:18 PM on November 17, 2010
Best answer: In a slightly different vein, this is a live debate in mental health care, where there is a big push toward evidence based practice. The people who are writing against it, and arguing for practice-based evidence, are Scott Miller, Barry Duncan, Bruce Wampold, and Michael Lambert.
The argument goes something like this: Mental health treatment modalities that are tested tend to be tested when 1) they are easiest to test, and 2) the researchers have a strong allegiance to the treatment. It turns out that the first excludes many viable therapies that are not easily manualized, and the second tends to really affect outcomes and interpretation of the data. Further, even within the movement for evidence-based mental health practice, the results of modality studies tend to be misused. For example, cognitive behavioral therapy (CBT) will be shown to effectively treat depression in a group of patients with major depressive disorder. Those patients, unlike most patients, are selected to have one diagnosis--major depressive disorder--with no other complicating factors. In other words, they have good social supports, jobs, they aren't losing their house, they don't have substance abuse issues, etc. The truth is, though, that most people present for mental health treatment with several concurrent diagnosable problems, such as depression and anxiety, or depression and alcoholism. EBP proponents tend to promote CBT for treatment of all patients with depression, even though by their own criteria the only people who we know can be successfully treated with CBT are those with uncomplicated major depressive disorder. (In this example.)
There's a further problem, which is this: The same CBT study shows that the modality is effective for 80% of those with uncomplicated major depressive disorder. What happens to the other 20%? EBP proponents don't say. Perhaps they're just SOL.
The argument on the other side is that EBP proponents are measuring the wrong thing. That is, they believe they are measuring the effects of the particular modality they are studying, but they're actually measuring the effect of mental health treatment in general. All modalities work about equally well.
This is, obviously, different from medical studies. For one thing, therapy studies are not blinded. However, it does get to some of what you're asking about.
posted by OmieWise at 4:34 PM on November 17, 2010 [3 favorites]
(Sorry if this is off-topic, but I do think it's an excellent illustration of the problem the OP is asking about.)
SMPA: The answer is (assuming that I've done the math right) 0.9%. That is, the odds that a positive test under those conditions actually indicates that the disease is present are less than one percent.
Imagine you have 100,000 patients who have the same risk profile as the patient.
You can assume that 0.1%, or 100 of them, are HIV positive.
You can assume that 99.9%, or 999,900 of them are HIV negative.
Of the ones who are positive, 99 will test positive and 1 will get a false negative on a test with 99% accuracy.
Of the ones who are negative, 989,901 will test negative and 9,999 will get a false positive.
That means that if your test is positive, the odds that you are actually positive are about 0.9% (99/(99+9,999)).
In other words, if you're in a very low risk group and you test positive, get another test. Get more than one. Because the odds are extremely, extremely high that the test is wrong, even if it's a very good test.
posted by decathecting at 4:43 PM on November 17, 2010 [1 favorite]
Best answer: Steven Novella, the Director of General Neurology at Yale Medical School, is a friendly but vocal critic of EBM. A member of the skeptical movement, he argues mainly that EBM fails to consider the prior probability of claims, so inherently silly ideas are given too much credit if there are enough positive studies. He coined the term "science-based medicine" as an alternative, which seems to have gotten some traction, and he has a blog with several other physicians on the topic. Not quite the angle of criticism that you seem to be looking for, but you might find it interesting anyway.
posted by abcde at 4:44 PM on November 17, 2010 [2 favorites]
Your terminology (evidence-based practices and comparative effectiveness) is at odds with at least two of your three examples. If you're not hip to the difference between comparative effectiveness and cost effectiveness, that might be a block in your search for critiques.
Your first two examples--where a treatment has been proven effective (against what you don't say, but let's assume the next-best treatment rather than no treatment) but only for a small subset of the population--are situations in which comparative effectiveness reviews would generally support the use of that treatment. After all, a six-month increase in average lifespan is indeed an increase, and thus validates the treatment as superior to the next-best alternative. Comparative effectiveness and evidence-based practices (EBPs) do not, as a rule, take into account the cost of the treatment; they are only concerned with the clinical efficacy or effectiveness. Now a cost-effectiveness review might look at those same two treatments and decide that the expense per quality-adjusted life year is too high, and thus not recommend it. So any given medical practice or treatment might be classified as:
1. not evidence-based/not supported by comparative effectiveness [no way these treatments are ever cost-effective]
2. is evidence-based/supported by comparative effectiveness--that is, it works, at least for some people--but is not cost-effective
3. is evidence-based/supported by comparative effectiveness review and is also supported by cost-effectiveness reviews
posted by iminurmefi at 4:50 PM on November 17, 2010
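To make the effectiveness-versus-cost-effectiveness distinction concrete, here is a minimal Python sketch of the cost-per-QALY arithmetic such a review might run; the incremental cost, utility weight, and threshold below are hypothetical numbers, not anything from iminurmefi's comment:

incremental_cost = 80_000     # extra cost vs. the next-best treatment, in dollars (hypothetical)
life_years_gained = 0.5       # the "6 extra months" averaged over all patients
utility_weight = 0.8          # quality-of-life weight for those months (hypothetical)
threshold_per_qaly = 100_000  # willingness-to-pay per QALY (hypothetical)

qalys_gained = life_years_gained * utility_weight    # 0.4 QALYs
cost_per_qaly = incremental_cost / qalys_gained      # 200,000 dollars per QALY
print(cost_per_qaly, cost_per_qaly <= threshold_per_qaly)   # 200000.0 False

So a comparative-effectiveness review could still endorse the treatment (it does extend average lifespan), while a cost-effectiveness review applying this threshold would not.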
You can find the solution to a similar-style Bayes' theorem problem about AIDS testing here (PDF). The numbers above were just made up, and you don't actually have enough information to solve the problem there.
posted by proj at 4:51 PM on November 17, 2010
Response by poster: proj is right, I'm really looking for critiques of how these practices -- in addition to cost effectiveness analyses, which I should have mentioned explicitly -- do or might work in practice, given current research methodologies and ways that we simplify experimental results when turning them into concrete guidelines. There are some great suggestions in this thread; I have a lot of reading to do.
posted by Ralston McTodd at 5:24 PM on November 17, 2010
Comparative effectiveness and personalized medicine: evolving together or apart?
posted by Wordwoman at 5:29 PM on November 17, 2010
The link in roofus' comment wasn't working.
The link for the article on Ioannidis is here
posted by sien at 6:12 PM on November 17, 2010
Numbers are wrong there. I (think I've) fixed it:
Imagine you have 100,000 patients who have the same risk profile as the patient.
You can assume that 0.1%, or 100 of them, are HIV positive.
You can assume that 99.9%, or 99,900 of them are HIV negative.
Of the ones who are positive, 99 will test positive and 1 will get a false negative on a test with 99% accuracy.
Of the ones who are negative, 98,901 will test negative and 999 will get a false positive.
That means that if your test is positive, the odds that you are actually positive are about 9% (99/(99+999)).
posted by alexei at 6:42 PM on November 17, 2010 [1 favorite]
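The same result falls out of Bayes' theorem directly; here is a minimal Python check using the thread's made-up numbers, assuming "99% accurate" means both 99% sensitivity and 99% specificity:

prevalence = 0.001    # 0.1% of people with this risk profile are actually positive
sensitivity = 0.99    # P(test positive | infected)
specificity = 0.99    # P(test negative | not infected), assumed equal here

p_test_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_infected_if_positive = sensitivity * prevalence / p_test_positive
print(round(p_infected_if_positive, 3))   # 0.09 -- about 9%, i.e. 99 / (99 + 999)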
This is one of my major research interests. Some articles to read:
Causality, Mathematical Models and Statistical Association: Dismantling Evidence-Based Medicine, Journal of Evaluation in Clinical Practice, Paul Thompson.
Cigarettes, Cancer, and Statistics by Sir Ronald Fisher. (Mostly about the link between smoking and lung cancer, but there are some good general lessons in here about the impossibility of true randomization in RCTs.)
The essence of EBM, Brendan M. Reilly, British Medical Journal
What kind of evidence is it that Evidence-Based Medicine advocates want health care providers and consumers to pay attention to? R. Brian Haynes, BMC Health Services Research
The Use of Statistical Methods in the Analysis of Clinical Studies, David Salsburg, Journal of Clinical Epidemiology
posted by mellifluous at 7:45 PM on November 17, 2010 [2 favorites]
Thanks, Alexei. I figured I'd get a wrong number in there someplace and screw it up, and indeed I did, by adding an extra digit on the first count.
I think that one of the big takeaways from this sort of analysis and from some of the stuff you point out in your question is, as proj mentioned, that it matters how the results of studies are reported. If I tell you that my test is 99% effective, that sounds different from saying that 90% of all positives are false. If I say that my drug gives people 6 months longer to live, that sounds very different from saying that it has no effect on outcomes for 95% of patients.
A big part of the problem is that science journalism in general and medical journalism specifically are really, really poor. It's not because those journalists aren't smart or because they want to mislead people. It's because most of them were English majors and haven't taken a science class since 11th grade and haven't studied statistics or epidemiology or any other relevant sub-fields at all. They're also asked to produce 600-word summaries of thousand-page studies on very, very short deadlines. The articles end up parroting the press releases. I'm not blaming the writers. But I am saying that a lot of what passes for medical journalism really gives you no indication of what studies actually say or how doctors are actually using them.
If you're really interested in this topic, I'd suggest starting by reading some of the studies you're interested in. Even the abstracts will be better than the press releases for figuring out what's really going on. But any decent doctor is not going to decide what drug to give you based on a press release published in the local paper about comparative effectiveness. So read what they're reading (or should be reading).
posted by decathecting at 8:09 PM on November 17, 2010
Your first point gets at conditional probabilities -- this is a classic example used to demonstrate Bayes' theorem -- Pretend you are a white, heterosexual woman with no history of intravenous drug use, blood transfusions, or unprotected sex. An HIV test that is accurate 99% of the time indicates that you are HIV-positive. However, the prevalence of HIV among people with your characteristics is only about 0.1% (these are made-up numbers). What is the probability that the test is correct and you actually have HIV? Many, many, many practicing physicians get this problem wrong.
As you point out in your second bullet point, it may be that outliers are driving some of the findings, but methodical methodologists (woo!) should be good at finding this out. Your third bullet point is much harder to test for: it requires longitudinal data, which are notoriously hard to come by, and depends on a specific psychological mechanism (complacency following a clean bill of health).
None of these counts against evidence-based medicine; they are all merely problems of interpretation, solved by improved numeracy among medical researchers and practitioners. Physicians who don't have time to keep current with all of the research, however, often fall back on "first, do no harm" and thus may over-prescribe certain regimens (of potential interest is today's post on the NYT Wellness blog).
posted by proj at 4:07 PM on November 17, 2010 [1 favorite]