How do you study a study?
April 27, 2010 1:32 PM

How can a layperson find out if a study has been discredited, or else disputed?

I've been reading several (non-fiction) books lately where this is the underlying theme: Study X is performed and is flawed for many reasons, but its conclusions are intuitive, shocking, or politically expedient at the time it was conducted, so it gets repeated over and over until it's "fact" in a discipline, to the point that questioning it will get you laughed at, even though other experiments are showing something isn't quite right. Then, within the past decade or two, a ton of research proves the study's conclusions wrong, so the public hears conflicting information but has no idea what to believe. Meanwhile the media/government/academia hasn't entirely caught up, because there are too many things for people to keep up with, or the media and government are unknowingly pushing outdated science and it's hard to overcome this, or a bunch of stubborn academics refuse to accept that the old study was wrong, or (very often) people are so specialized academically that they have to spend all their time keeping up with developments in their narrower field and miss things in related fields that could be vital, etc. The bottom line is that something is wrong, and the information showing it exists, but people don't know it's wrong because they don't have that information.

That this happens is unfortunate but seemingly inevitable, and that's easy to accept. However, it makes it difficult for a layperson to make sense of the conflicting information. Sometimes there's literally no way for a layperson to figure out what might have gone wrong, and sometimes it's even difficult for a specialist in the field to figure it out. For example, one study I read about made some outrageous claims about (iirc) the drug ecstasy literally eating through your brain, but it could never be replicated, even by the same researcher, and later it was revealed that the lab that supplied the ecstasy had probably given the researcher a different chemical, one that did produce those results. The brain scans from that study still appeared on government anti-drug websites for years and years after the study was discredited. I remember seeing those same brain scans in my DARE classes as a child. However, if I hadn't happened upon this book about scientific missteps, whose author was a scientist who made a point of figuring out precisely what happened, I would never have known that study was discredited. I assume that the majority of people who were exposed to its conclusions don't know it was discredited.

So my question is this: if a layperson is reading a book or article and a specific study is mentioned, what resources are available to them that will allow them to see if a study has been discredited or disputed?

It's difficult enough for a layperson to access studies as it is, and that's only helpful if you want to look at the methodology or see if their conclusions match the results. But for something like the above, how is anyone supposed to know that study has been discredited? If you pull up a copy of the study, it's not going to have a big red "DISCREDITED" stamp on it. If you've cited the study before, no one automatically mails you updates about it; the government didn't even realize it had been discredited.

Books that really break these things down are helpful, but hard to find. "The internet" is an answer, but it's too broad for what I'm asking for. Is there a place that keeps track of developments that happen to a particular study after it's published? Are there particular websites that are better for looking up this sort of thing than others? Is there a secret Google-fu maneuver that will get better results? Any resources that are particular to certain disciplines?

Similarly, what are the easiest ways for a layperson to get access to the text of studies? Every time I've tried it's been a hassle; so many online sources are pay only, and it's difficult to find most of the journals at a regular library because there are so many of them now.
posted by Nattie to Education (22 answers total) 21 users marked this as a favorite
 
You can search Google Scholar for the article of interest, and then click the "Cited by" link. Also, the relevant page on the journal's website is likely to have a similar link.

There was a previous question about finding copies of journal articles online.
posted by James Scott-Brown at 1:54 PM on April 27, 2010


Is there a place that keeps track of developments that happen to a particular study after it's published?

No.

Are there particular websites that are better for looking up this sort of thing than others?

No.

Is there a secret Google-fu maneuver that will get better results? Any resources that are particular to certain disciplines?

Other than searching for "[terms from original study] disconfirm" (or related terms like "discredit" or "wrong"), no.

what are the easiest ways for a layperson to get access to the text of studies?

Go to a university library.
posted by ROU_Xenophobe at 2:01 PM on April 27, 2010 [1 favorite]


As for what you're looking for, a centralized bibliography of all studies referencing one another: ultimately the answer is no, there is no such thing. If you're willing to do some work, though, PubMed shows a sidebar of Related Articles for each paper.

Sorry.
posted by Sophie1 at 2:02 PM on April 27, 2010


Seconding Google Scholar!

If you live near a college/university library, see if you're allowed to use their computers to look up articles in (paid-subscription) databases. Ask the reference librarian.
posted by mareli at 2:03 PM on April 27, 2010


For explaining recent medicine/health research that has been over-hyped or misrepresented by the popular media, the NHS-run Behind the Headlines ("your guide to the science that makes the news") reports are helpful.

If you pull up a copy of the study, it's not going to have a big red "DISCREDITED" stamp on it.
If it's so wrong that it's been retracted, it may well have a highly-visible watermark saying so. The PDF of Wakefield's Lancet paper (Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children) has a big red "RETRACTED" stamp across each page.
posted by James Scott-Brown at 2:08 PM on April 27, 2010


If you google the study title, oftentimes, the author will have a free copy on his or her website. Failing that, emailing the author a "reprint request" will often yield a copy.

Also, simple googling will tend to reveal controversies if they exist. Read all studies carefully and critically; many times, the flaws are obvious if you do this. There are basic errors like:

confusing correlation and cause ("marijuana use causes heroin use")

lack of control groups in drug studies, or use of anecdote as data ("case studies" are just anecdotes; they can illustrate something that happened once, but not more than that)

sample selection issues: I just saw a recent study that claimed methadone during pregnancy may cause eye troubles in kids because some doctors saw a bunch of kids with eye problems whose mothers had used methadone. It didn't use a case-control technique comparing the frequency of the problems in methadone-exposed infants to those who weren't exposed, so it can't tell you anything about anything, really.

another sample selection issue, the "clinician's error": in the methadone study cited above, it's rather likely that the mothers whose children have eye problems will take them to the doctor. This tells us nothing about the others (probably way more common) who don't have eye problems and so never turn up at the doctor's with the problem. Similarly, doctors may believe a disease is always chronic because they don't see the patients who get better or whose symptoms weren't severe enough to make them seek help!

small sample sizes / lack of replication: if the sample is small, results could be due to chance (a quick simulation below illustrates this).

Stuff like this can help you avoid taking studies that are likely to be discredited as sound. Of course, any of this stuff can be early information on a path that does eventually include replication, or the type of study that can show cause and effect.
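Here's a minimal, hypothetical simulation of that small-sample point (not based on any study mentioned in this thread): both groups are drawn from the same population, so any observed difference is pure chance, yet with only a handful of subjects per group the apparent "effect" can look substantial.

```python
# A minimal, hypothetical simulation: both groups come from the SAME population,
# so any observed difference is pure chance. With tiny groups, large apparent
# "effects" show up anyway; with large groups, they don't.
import numpy as np

rng = np.random.default_rng(0)

def chance_difference(n, trials=10_000):
    """Mean and worst-case group difference (in SD units) across null experiments."""
    diffs = []
    for _ in range(trials):
        a = rng.normal(0, 1, n)   # "treatment" group, no real effect exists
        b = rng.normal(0, 1, n)   # "control" group, same distribution
        diffs.append(abs(a.mean() - b.mean()))
    return np.mean(diffs), np.max(diffs)

for n in (5, 50, 500):
    typical, worst = chance_difference(n)
    print(f"n={n:3d} per group: typical chance gap={typical:.2f} SD, worst={worst:.2f} SD")
```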
posted by Maias at 2:14 PM on April 27, 2010


I check Google Scholar first when I want to see what papers cite a particular paper. I might also check Web of Science, which costs money but many academic libraries have it, and is also needlessly difficult to use.

None of these services will be exhaustive.
posted by grouse at 2:15 PM on April 27, 2010


Response by poster: Thanks so far, these are all great answers!
posted by Nattie at 2:22 PM on April 27, 2010


Following up on Maias's comment, Peter Norvig's article Warning Signs in Experimental Design and Interpretation is a good overview of common problems to look for when reading research.
posted by James Scott-Brown at 2:26 PM on April 27, 2010 [6 favorites]


Correct me if I'm wrong, Nattie, but it sounds like you're not asking "how do I research this particular topic" (because you're not proficient in academic research), but rather "is there something like MetaFilter for academic research where the results are obvious, along with back-references so that I know if this paper is BS without having to read every paper that has cited the one I am looking at".

Sadly, I don't think there is such a thing.
posted by ArgentCorvid at 2:29 PM on April 27, 2010


I agree with most of the advice here, but there are a few other things you should consider. Your picture of a new study coming out and completely discrediting an earlier one isn't that accurate. Science is about an accumulation of knowledge and deciding which way the evidence leads. There are times when a study is retracted (e.g. for gross misconduct); you can watch for journals to put out notices like this, but it's rare.

To get a better view of a subject, you might want to look at review articles or meta-analyses. The latter is a study that pools data from a number of previous studies, while the former is a peer-reviewed summary of a particular subject by a specialist. For medical studies, for example, Cochrane Reviews are often held in high regard.
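To give a rough sense of what the pooling step in a meta-analysis does, here is a minimal fixed-effect (inverse-variance) sketch with made-up effect estimates and standard errors; real meta-analyses such as Cochrane Reviews do far more (heterogeneity checks, bias assessment, and so on).

```python
# Minimal fixed-effect (inverse-variance) pooling with made-up numbers.
# Each study contributes an effect estimate and a standard error; more precise
# studies (smaller SE) get proportionally more weight in the pooled estimate.
import math

studies = [          # (effect estimate, standard error), all hypothetical
    (0.30, 0.15),
    (0.10, 0.08),
    (0.25, 0.20),
]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.2f}, 95% CI = "
      f"({pooled - 1.96 * pooled_se:.2f}, {pooled + 1.96 * pooled_se:.2f})")
```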
posted by Midnight Rambler at 2:59 PM on April 27, 2010 [2 favorites]


Awesome question. I don't have a source for discredited studies, I just wanted to say that I wish more people had your critical thinking skills.

Jokingly: Assume every study is in dispute.

Reality: Critical thinking is our best weapon. As Maias mentioned above, watch out for things like correlation where there is no mechanism. Another example: breastfeeding causes kids to be smarter. Maybe, maybe not. Breastfeeding is more in vogue right now than it was when I was a baby. All the "educated" moms are doing it, if they physically can. How do we know that a particular study isn't accidentally biasing itself because the well-educated-and-breastfeeding moms are also more likely to be doing lots of early learning activities with their children? So this is correlation without mechanism. If someone studies a nutrient (or nutrients) in breast milk that is absent from formula, and shows that it helps brain development, that would be the mechanism.
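A quick hypothetical simulation of that worry (all numbers invented, not from any breastfeeding study): if a third factor drives both the "treatment" and the outcome, the two end up correlated even though one doesn't cause the other.

```python
# Hypothetical confounding demo (numbers invented): "enrichment" (e.g. early
# learning activities at home) drives BOTH breastfeeding and later test scores.
# Breastfeeding itself has zero effect here, yet it still correlates with scores.
import numpy as np

rng = np.random.default_rng(1)
n = 5000

enrichment = rng.normal(0, 1, n)                      # the hidden confounder
breastfed = (enrichment + rng.normal(0, 1, n)) > 0    # more likely if enriched
scores = 100 + 5 * enrichment + rng.normal(0, 5, n)   # depends ONLY on enrichment

gap = scores[breastfed].mean() - scores[~breastfed].mean()
print(f"observed score gap, breastfed vs. not: {gap:.1f} points")
print("true causal effect of breastfeeding in this model: 0 by construction")
```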

As for getting access to the articles, a layperson could try Open Access and also PubMed, although much of PubMed still links to articles that cost money. There's a big push right now - okay, maybe a medium-sized push - to get the results of all studies that are funded by taxpayers to be free to said taxpayers.

Also try RePORT, which is a searchable database of currently funded projects in the USA, and the tabs at the top of a given study proposal will link you to the results, if any.

One more is Clinical Trials in the USA. Research.gov is another USA site where you can look things up.

Unfortunately, I am really addressing "Where do I find the results of studies?" more than "Where do I find out which studies have been discredited?"
posted by Knowyournuts at 4:44 PM on April 27, 2010


We have a guide to deciding when to change your behavior based on new medical research at stats.org here.

The rest of the site also has resources that may be helpful in evaluating research claims.
posted by Maias at 5:03 PM on April 27, 2010 [1 favorite]


Midnight Rambler's advice, particularly regarding meta-analyses and Cochrane Reviews (assuming you're interested in medical research), is spot on. The only issue is the level of understanding you need before some of these papers make sense. That is, there is often a lot of nuance in the results, and whilst this may be spelt out in easily understandable language, it also may not be.

Another issue is what we mean by 'wrong' or 'discredited'. Different experimental methodologies may produce different results; both may be right, but one might be more relevant than the other.

There are also studies published that report significant findings (and 'significant' in science means a very specific thing: that the results have been statistically examined and that the reported relationship is only 5% (or whatever the level of statistical significance being used is) likely to have occurred by chance), but that the effect size, the actual difference between the treatment and the control group, is vanishingly small. So the finding might be true, but the actual difference is too small to worry about.
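To make that concrete, here is a tiny hypothetical sketch (invented numbers): with a large enough sample, a trivially small true difference still produces a very small p-value, even though the effect size is negligible.

```python
# Hypothetical illustration: with a large enough sample, a negligible true
# difference (0.2 points on a scale where the SD is 15) yields a tiny p-value,
# yet the effect size (Cohen's d) is only about 0.01.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 500_000                                    # very large groups
control = rng.normal(100.0, 15.0, n)
treated = rng.normal(100.2, 15.0, n)           # true difference: 0.2 points

t_stat, p_value = stats.ttest_ind(treated, control)
cohens_d = (treated.mean() - control.mean()) / np.sqrt(
    (treated.var(ddof=1) + control.var(ddof=1)) / 2)

print(f"p-value = {p_value:.2g}")
print(f"Cohen's d = {cohens_d:.3f}")
```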

Or it may be that a different statistical analysis suggests that the reported relationship isn't actually there. The study may be technically correct, but the results may not actually represent the true state of the world (or the newer study may not).

All this is just saying that it's often more complicated than old studies simply being discredited. It's more accurate to say that our knowledge develops and often changes. So I don't think a big easy encyclopaedia of discredited science is really possible.
posted by damonism at 6:01 PM on April 27, 2010


Ben Goldacre's Bad Science gives a lot of useful advice on how to read things critically, whether it's science reporting, 'scientific' claims being made by politicians or publicists, or the actual science. An example: how to spot overstatements, as when a surrogate experimental outcome is taken by synecdoche for an unproven clinical outcome (e.g. "Reduces antioxidants!" for "Reduces antioxidants when added to petri dish, but does not appear to produce same effect in your bloodstream!"). You can see how such a claim might appear in any of the spheres I've just mentioned: "The compound tested was found to reduce antioxidants" [study: yes, but so what? in what conditions?]; "Scientists have announced that Compound A reduces antioxidants" [newspaper report; again, so what?]; "Our [miracle pill/food product] reduces antioxidants!" [publicity; that's supposed to make me pay thirty 'feelgood dollars' per packet].

This is not answering your question. However, when you've got a question about a specific study, the forums on his website might well be a good place to pose it. Book and forums are both centred on medical science (and the UK), but by no means exclusively.

On Preview: Goldacre also talks about meta-analysis and Cochrane Reviews.
posted by lapsangsouchong at 7:01 PM on April 27, 2010


More than ever, bloggers are acting as a vanguard of science and calling out studies that are blatantly wrong. Obviously you should consider the source carefully, but doing a google search for the name of the article may turn up some nice critiques from people who know what they're talking about.
posted by chrisamiller at 7:28 PM on April 27, 2010


As an academic librarian, I say email, phone, or go to your local college or university and ask a reference librarian there, especially if they've got one specializing in the area you're looking at (health science, regular science, etc.). Most of the higher-ed schools I've worked at are happy to help community members, although they might prefer it if you didn't tie up the ref librarians during high-traffic times like finals. (And an email inquiry would allow them to work on it during their down time.)

Once you've got the citation information, go to your local library and ask to have it sent to you via interlibrary loan -- unless your local library is one of the annoying ones that charge for ILL or don't participate in it, it shouldn't be a problem.

Also check to see if your local library is part of any resource-sharing programs. For example, many Texas libraries are part of TexShare, which allows participating libraries access to a number of databases, and allows people with TexShare cards from participating libraries to check out books at other TexShare libraries. I'm fairly sure there are similar programs elsewhere, although I don't know where or what their terms are.
posted by telophase at 7:49 AM on April 28, 2010


(and 'significant' in science means a very specific thing: that the results have been statistically examined and that the reported relationship is only 5% (or whatever the level of statistical significance being used is) likely to have occurred by chance)

Not to pick on you, damonism, but "statistically significant" does mean a very specific thing, and that is not it. In the case of a significance level of 0.05, it means that results as extreme as or more extreme than the reported results would occur no more than 5% of the time if the null hypothesis were true. This is a very common error, currently listed as #2 among the frequent misunderstandings of the p-value on Wikipedia.

In this interview in Science, Harvard biostatistics chair Victor De Gruttola ("passionate about his p-values") says about this error: "It's not that. It's really not like that. It's the difference between I own the house or the house owns me. It's two different concepts."

In other words, your formulation is P(H0 | D = data), but p is actually P(D ≥ data | H0). The former seems like the more useful quantity to measure, but it is impossible to obtain within the frequentist statistical framework used in classical hypothesis testing.
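A tiny, hypothetical coin-flip example of the distinction (made-up data, not from anything in this thread): the p-value answers "how often would results at least this extreme occur if H0 is true?", not "how likely is H0 given the results?"

```python
# What a p-value measures, in miniature (made-up data): the probability of
# results at least this extreme IF the null hypothesis is true, which is not
# the same as the probability that the null hypothesis is true given the results.
from scipy import stats

heads, flips = 61, 100                                # hypothetical observation
p_one_sided = stats.binom.sf(heads - 1, flips, 0.5)   # P(X >= 61 | fair coin)

print(f"P(at least {heads} heads in {flips} flips | coin is fair) = {p_one_sided:.3f}")
# P(coin is fair | 61 heads) would require a prior and a Bayesian calculation;
# the p-value alone does not give it.
```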
posted by grouse at 9:16 AM on April 28, 2010 [1 favorite]


One thing you could do is look into which journal published the paper. Some journals are known for being the best: everything they publish is rigorously peer reviewed, and they also publish letters to the editor from other experts in the field giving their opinion on the study after the fact. These journals also require that anything published in them have a thorough limitations discussion, in which the authors admit and describe every last thing that could possibly be un-perfect with the study and how it could alter the conclusions. Then there are journals that have a reputation for publishing just about anything. I have myself (along with many colleagues) published scientific literature, and there is definitely a list we go down: try to get it published with Journal A; if they don't accept it, move down to Journal B, and so on.

No study is perfect. You could find the best-designed study in history and still pick it apart and question its conclusions. Ideally, become well-versed in study design and read the study yourself! As others have mentioned, university libraries will have them.
posted by Carol@ILPoisonCenter at 11:10 AM on April 28, 2010


Not to pick on you, damonism, but "statistically significant" does mean a very specific thing, and that is not it.

I stand corrected then. In my defence, it has been 14 years since I last took a statistics course :-)
posted by damonism at 5:36 PM on April 28, 2010


Not quite what you're asking, but I have a site, Science Rumors, which I'm setting up to collaboratively gather and rank information about scientific studies. Basically my goal is for the layperson to have a place to go to learn/debunk/research, and for it to act as a filter where you can see only the studies that have been verified to some extent or other. I'm planning to do most of this based on metadata about the study, its publication, its methodology, etc. I also want to track whether a study is a confirmation or refutation of an earlier study/theory, or purely new data.

Your desire to look at a study and see what later studies have affected its usefulness is cool, and while it's not the focus of the site, I'll have to make sure to do it as much as the data we're gathering allows.

Unfortunately it's still a placeholder site, but I'm committed to getting it rolling soon. If anyone would like to be contacted once I've got it going, or would like to get involved, please do let me know.
posted by vsync at 5:58 PM on April 28, 2010 [1 favorite]


Response by poster: Thanks again everyone, I appreciate all these answers. And vsync, hell yes I would love to be contacted when you get it rolling; that sounds like a cool idea!
posted by Nattie at 2:17 PM on May 28, 2010


This thread is closed to new comments.