I need examples of really bad research.
September 21, 2008 7:35 PM   Subscribe

I am looking for examples of really bad research.

I am the teaching assistant for an undergraduate research methods course and I am collecting examples of some really bad research to show students. These experiments can be bad in any number of ways: bad design, faulty reasoning, misuse of statistics, poor writing, and so forth.

So, MetaFilter, do you have a favourite example of bad research? If you have recommendations for blogs or websites that address issues like this, that would be great, too. (I recently came across a nice blog discussion on a paper that claims that global warming causes suicide rates to increase...)

These examples are meant to be fun for the students, so the more cringe-worthy, the better. Bonus points if they are related to cognitive science.

Thanks in advance!
posted by tickingclock to Science & Nature (49 answers total) 37 users marked this as a favorite
 
Well, there should be lots of stuff at Bad Science.

Of course there's always the Annals of Improbable Research, although none of their articles are to be taken seriously.
posted by grouse at 7:49 PM on September 21, 2008 [2 favorites]


Maybe "the Marriage Crunch" -- that a woman of such-and-such an age is more likely to be struck by lightening than to get married.
posted by ClaudiaCenter at 7:53 PM on September 21, 2008


These prescription drugs have been taken off the market in the past 10-12 years because of serious, often lethal side effects.

Rezulin: Given fast-track approval by the Food and Drug Administration (FDA), Rezulin was linked to 63 confirmed deaths and probably hundreds more. "We have real trouble," an FDA physician wrote in 1997, just a few months after Rezulin's approval. The drug wasn't taken off the market until 2000.

Lotronex: Over the concerns of one of its own officers, the FDA approved Lotronex in February 2000. By the time it was withdrawn 9 months later, the FDA had received reports of 93 hospitalizations, multiple emergency bowel surgeries, and 5 deaths.

Propulsid: A top-selling drug for many years, this drug was linked to hundreds of cases of heart arrhythmias and over 100 deaths.

Redux: Taken by millions of people for weight loss after its approval in April 1996, Redux was soon linked to heart valve damage and a disabling, often lethal pulmonary disorder. Taken off the market in September 1997.

Pondimin: A component of Fen-Phen, the diet fad drug. Approved in 1973, Pondimin's link to heart valve damage and a lethal pulmonary disorder wasn't recognized until shortly before its withdrawal in 1997.

Duract: This painkiller was taken off the market when it was linked to severe, sometimes fatal liver failure.

Seldane: America's and the world's top-selling antihistamine for a decade. It took the FDA 5 years to recognize that Seldane was causing cardiac arrhythmias, blackouts, hospitalizations, and deaths, and another 8 years to take it off the market.

Hismanal: Approved in 1988 and soon known to cause cardiac arrhythmias, the drug was finally taken off the market in 1999.

Posicor: Used to treat hypertension, the drug was linked to life-threatening drug interactions and more than 100 deaths.

Raxar: Linked to cardiac toxicities and deaths.
posted by netbros at 7:59 PM on September 21, 2008


There's the classic case of Blondlot's N rays.

(hey, and those Seldane-induced arrhythmias were fun!)
posted by scruss at 8:05 PM on September 21, 2008 [1 favorite]


I think the original research on Cold Fusion by Pons and Fleischmann is a superb example of bad research, especially when compared to the Jones work.
posted by Class Goat at 8:10 PM on September 21, 2008


From this past week: Why are killing rampages increasing? See the comments for many of the flaws (notably: not controlled for population, no definition of the key term, too small a sample size, reporting bias).
posted by Jaltcoh at 8:18 PM on September 21, 2008


This isn't very specific, but Freakonomics (both the blog and the book) deals with fallacies and misinformation in a rather digestible manner.
posted by dhammond at 8:23 PM on September 21, 2008


There are the bioethics issues involved in Jesse Gelsinger's case at the University of Pennsylvania - he was an 18-year-old who signed up for a gene therapy trial and ended up dying of an "unusual and deadly immune system response that led to multiple-organ failure".

At the moment, I don't have access to PubMed, but if you need article links (and not just ones to a CBS news article from 1999), let me know and I'll look them up at work tomorrow.

There are loads of experiments that falsified their own data (clones, bottled urine, etc.).

Does this fall under the types of bad science you seek?
posted by oreonax at 8:23 PM on September 21, 2008


There are lots of creationist journals, which are sure to be chock full of bad research. Perhaps check out the Institute for Creation Research or Answers Research Journal.
posted by Flunkie at 8:26 PM on September 21, 2008 [1 favorite]


In the 'fun' category, the folks that give out the Ig Nobel awards should have something for you.
posted by ArgentCorvid at 8:27 PM on September 21, 2008




It's not research, but Feynman on cargo cult science has some great insight into bad science (and it's funny).
posted by qxntpqbbbqxl at 8:31 PM on September 21, 2008 [2 favorites]


Polywater is the classic example.
posted by lalochezia at 8:35 PM on September 21, 2008


I think this one would keep your students entertained... Miller et al. (2007) studied strippers' wages and found that they made more money during ovulation, and concluded that this was evidence of human estrus. It's not as obviously flawed as some of the research out there, but it could be good fodder for a discussion of how researchers tend to ignore other possible mechanisms that could account for their observations, as well as the limitations of specific methodologies (the researchers never actually ASKED the strippers how their menstrual cycles affected how they felt, for example).
posted by pluckemin at 8:41 PM on September 21, 2008


There are some good examples of crap statistical interpretation and so on in and among the various sidebars, problem sets, and examples in Triola's Elementary Statistics, 10th Edition. (It's the textbook my stats class is using, and it has a fairly heavy emphasis on critical reasoning.)
posted by fairytale of los angeles at 8:44 PM on September 21, 2008


Watson and Rayner's Little Albert experiment.
posted by Marisa Stole the Precious Thing at 8:55 PM on September 21, 2008


Kinsey apparently had some pretty messed-up stuff going on. I can dig what he was doing for the times, definitely ahead of his time, but some of it was just straight-up nasty. I guess he used to interview pedophiles, and I heard from some guy (some guy is my fav source of info) that one of them had performed sex acts on over 30 minors and Kinsey kept it secret from the university and the police.
posted by BrnP84 at 9:00 PM on September 21, 2008


Read anything by Malcolm Gladwell.
posted by boots77 at 9:24 PM on September 21, 2008




The Vicary study that claimed subliminal messages worked to sell Coke and popcorn. It was fabricated but led to legislative action.

(Milgram is arguably bad research only on ethical grounds; it was successfully replicated recently.)
posted by ALongDecember at 9:58 PM on September 21, 2008


Response by poster: Wow, leave it to MeFi -- I knew I could trust you! It's been less than 2.5 hours since I posted my question, and already I've got hours' worth of links to read.

Just to clarify, I am not looking for examples of ethical breaches. Ethics in science is an interesting topic and definitely worthy of discussion, but not for the specific purpose of guarding students against bad reasoning.

netbros, that's an amazingly long list of prescription drugs that were taken off the market. I am actually more interested in the faulty reasoning that led to these drugs being approved for the general public in the first place -- do you know where I could find that kind of information?

oreonax, the Jesse Gelsinger case is interesting, but it's not exactly what I'm looking for. However, if you have specific examples of "experiments that falsified their own data (clones, bottled urine, etc.)," that would be great.

I'll have more things to say once I've had a chance to read all the links. Again, thanks everyone -- and keep the answers coming!
posted by tickingclock at 10:15 PM on September 21, 2008


What about the old classic observation failures like Percival Lowell's canals on Mars? Or Blondlot's N-Rays?
posted by i_am_joe's_spleen at 10:25 PM on September 21, 2008


We must not forget Franz Joseph Gall's research regarding Phrenology.
posted by karizma at 10:42 PM on September 21, 2008


There was the AIDS researcher at UW who falsified his data.
posted by messylissa at 10:44 PM on September 21, 2008


I'm sorry I can't be more specific about this but my memory is a bit dim. The "nature versus nurture" argument among certain scholars is a venerable one. One idea early on was to take advantage of a fortuitous opportunity: studying cases of twins who were raised separately.

This all happened something like 50 or 60 years ago. At the time, some adoption agencies thought that it would be more difficult to find adoptive parents for two kids at a time, so when they ended up with twin babies they'd split them up and put each in a separate home. In most of those cases the kids grew up without knowing they had a twin. But the records still existed, and there was one particular researcher who tracked down several dozen pairs of twins and did a bunch of testing on them to see how similar they were.

It turned out he was very much a zealot for the "nature" side of the argument, and here's where my memory gets somewhat indistinct. His study showed immense similarities for the twins. Later it was shown that a lot of his data was faked.

(Does this ring a bell with anyone? Fill in details and correct mistakes, please.)
posted by Class Goat at 10:49 PM on September 21, 2008


Cyril Burt is your man, Class Goat.
posted by i_am_joe's_spleen at 11:02 PM on September 21, 2008


There was a pretty famous case in the literature of auditory attention. The idea behind attention is that if some prior information about some characteristic of a task-relevant stimulus is known, then that information can be used to enhance all incoming sensory information having that characteristic, and suppress information without that characteristic. For example, if a person was instructed to listen for and press a button in response to a slightly higher-pitched beep in the right ear only, and to ignore beeps in the left ear, the brain response to beeps in the right ear will be greater than to beeps in the left ear.

One really interesting question was the level at which this attentional effect takes place. Does an irrelevant stimulus get suppressed really early on? Or does all incoming sensory information get processed equally well, with some sort of differential selection taking place almost "at the end of the pipe," right before things enter conscious awareness? [As a side note, electrophysiological studies of humans have found attention effects as early as 20-50 ms post-stimulus -- basically as soon as incoming auditory information hits the auditory cortex.]

A few investigators who thought they could find an attention effect very early on recorded from the cochlear nucleus of an awake cat. They measured activity from the cochlear nucleus in response to beeps played in each ear of a cat. To make the cat pay attention to one side or the other, they used a mouse lure. Sure enough, the investigators found their early attention effect-- more activity when the mouse was on the same side of the cat as the beep, and less when the mouse was on the opposite side.

Shortly afterward, a retraction was printed in Science. There was a confound in the study. The cat was rotating its ear toward the mouse (and speaker) on one side, and away from the sound on the opposite side. Repeating the experiment with the ears temporarily paralyzed completely eliminated the effect.

Hernández-Peón, R., Scherrer, H., & Jouvet, M. (1956). Modification of electric activity in cochlear nucleus during "attention" in unanesthetized cats. Science, 123(3191), 331-332. doi:10.1126/science.123.3191.331
posted by Maxwell_Smart at 11:15 PM on September 21, 2008 [1 favorite]


Educational research routinely includes studies with no control group for the pre/post comparison (when one could easily have been included). So when a study finds that Head Start "works" for inner-city kids, it's a dubious finding: the year between ages 4 and 5 is a fifth of a child's life, so of course they're getting smarter for plenty of other reasons!
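A minimal simulation sketch of that point, in Python with made-up numbers (not real Head Start data): every kid gains roughly ten points from maturation alone, the program itself adds nothing, and only the comparison against a control group reveals that.

import numpy as np

# Made-up numbers: every child's score rises ~10 points from age 4 to 5
# through maturation alone; the "program" contributes nothing.
rng = np.random.default_rng(0)
n = 500

def scores(n):
    pre = rng.normal(100, 15, n)           # score at age 4
    post = pre + 10 + rng.normal(0, 5, n)  # score at age 5: pure maturation
    return pre, post

pre_t, post_t = scores(n)  # kids enrolled in the program
pre_c, post_c = scores(n)  # comparable kids who weren't

naive_gain = (post_t - pre_t).mean()               # looks like the program "worked"
vs_control = naive_gain - (post_c - pre_c).mean()  # program effect comes out near zero

print(f"pre/post gain with no control group: {naive_gain:.1f} points")
print(f"gain relative to the control group:  {vs_control:.1f} points")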

Also, the classic bad-research example (you probably know this one already): ice cream causes murder! The two really are correlated, so the first task is showing that correlation doesn't imply causation (people really don't get, on any deep level, the difference between correlation and causation). Put the example to the class and let them work it out. The likely reason for the correlation: heat, which is correlated with aggression. When I pointed this out in my research class it really impressed this girl. So much so that she followed me out of her own party at the end of the semester and . . . yeah.
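If you want the class to see the confound in numbers rather than words, here's a minimal sketch (entirely invented numbers): heat drives both ice cream sales and assaults, so the two correlate strongly even though neither causes the other, and the correlation drops toward zero once you hold temperature roughly fixed.

import numpy as np

rng = np.random.default_rng(1)
days = 365
temp = rng.normal(20, 8, days)                          # daily temperature, the lurking variable
ice_cream = 50 + 3.0 * temp + rng.normal(0, 10, days)   # sales driven by heat
assaults = 5 + 0.4 * temp + rng.normal(0, 2, days)      # aggression driven by heat

print("corr(ice cream, assaults), all days:",
      round(np.corrcoef(ice_cream, assaults)[0, 1], 2))

# Hold temperature roughly fixed: look only at days in a narrow band.
band = (temp > 18) & (temp < 22)
print("same correlation within an 18-22 degree band:",
      round(np.corrcoef(ice_cream[band], assaults[band])[0, 1], 2))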
posted by No New Diamonds Please at 11:23 PM on September 21, 2008


Alan Sokal's brilliant paper, "Transgressing the Boundaries: Toward a Transformative Hermeneutics of Quantum Gravity," illustrates the danger of accepting something as true because it appears in a journal and is too complicated to understand.

Metafilter demonstrates the danger of using Wikipedia too heavily in your work.
posted by Mike1024 at 1:03 AM on September 22, 2008 [2 favorites]


For faulty reasoning, bad design, bad controls and tortured nonsense-statistics, I like Arthur Kellermann's articles about guns and violence. Read about his most famous article (the one that gave us the "your gun is 43 times more likely to kill you than save you" factoid) here and here.
posted by K.P. at 2:23 AM on September 22, 2008


Water memory.
posted by kisch mokusch at 3:07 AM on September 22, 2008


Assign excerpts from Stephen Jay Gould's "The Mismeasurement of Man"... especially the chapter where he revisits the claim that European skulls hold more seed than African ones (and therefore more braaaaains!) and argues that the seeds were being packed extra tight into the white skulls to get that dubious data.
posted by availablelight at 5:01 AM on September 22, 2008 [1 favorite]


If it's a social science research methods course -- Dewey Defeats Truman!
posted by piro at 5:16 AM on September 22, 2008


Hummer greener than Prius?
posted by brandman at 5:28 AM on September 22, 2008


The 9/11 "Truth" conspiracy theorists "research".
posted by yeti at 5:38 AM on September 22, 2008


This book, “Corrupted Science: Fraud, Ideology, and Politics in Science,” might have some useful stuff - a longish summary was recently posted at boingboing.
posted by chr1sb0y at 7:30 AM on September 22, 2008


There is a lot of pseudoscience associated with many of the claims of the Young Earth Creationist movement. Flood geology is a good place to start. See also: God of the Gaps

You might also want to show some clips from Ben Stein's Expelled: No Intelligence Allowed
posted by Alison at 7:49 AM on September 22, 2008


Shinichi Fujimura's faked archaeology discoveries probably count as forged data.
posted by reptile at 7:53 AM on September 22, 2008


I'd start by telling the joke about the scientist who's experimenting on frogs.

He yells "JUMP!" at the frog and the frog jumps one meter.

Then he cuts off one of the frog's legs, yells "JUMP!" and the frog jumps half a meter.

Then he cuts off another of the frog's legs, yells "JUMP!" and the frog jumps a fifth of a meter.

Then he cuts off a third leg, yells "JUMP!" and the frog does not jump. He yells "JUMP!" again, and the frog does not jump.

"Aha!" he says. "I have my result!" So he carefully writes in his lab book: "When three legs are removed, a frog becomes deaf."

This joke explains so much bad science.
posted by Sidhedevil at 8:25 AM on September 22, 2008 [2 favorites]


less pirates = global warming

Also try searching PubMed for either "erratum" or "withdrawn". That produces a lot of utter clangers.
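If you'd rather do that search programmatically, a sketch like the one below should work via NCBI's E-utilities (this uses PubMed's retracted-publication filter rather than a free-text search for "erratum", and the endpoint and parameter names are from memory, so double-check them against the current E-utilities docs):

import json
import urllib.parse
import urllib.request

# Sketch: count PubMed records flagged as retracted publications.
base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    "term": '"Retracted Publication"[Publication Type]',
    "retmode": "json",
}
with urllib.request.urlopen(base + "?" + urllib.parse.urlencode(params)) as resp:
    result = json.load(resp)

print("retracted publications indexed:", result["esearchresult"]["count"])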
posted by roofus at 8:40 AM on September 22, 2008


My personal favorite is from this Bad Science column - a study that purports to link Down syndrome to "Orientals".

As a cognitive scientist, I feel that much of the field of experimental philosophy falls under the broad umbrella of bad research. Sorry, guys. Most studies are poorly designed, and the analyses are typically conducted by people who don't know statistics.
posted by dropkick queen at 9:20 AM on September 22, 2008


There's Dr. Hwang who was disgraced when he "falsified data for 9 of the 11 patient-derived embryonic stem cell colonies he reported in Science this June" (from the NYTimes article linked).

Here's an LA Times article about falsified data involving fabricated "interviews, urine samples and urine sample records" from 2007. (Here is the NIH notice file on him. I imagine you could find more by digging through their old records.)

At the risk of sounding catty, this paper makes some serious leaps in rationale that confound me greatly, as well as making some incomplete-to-wrong statements about IFNg (if you can't get to the link, it's D. K. Sarkar et al., PNAS, July 1, 2008, vol. 105, no. 26, pp. 9105-9110).

Some technologies are more art than science and require a solid understanding of the proper controls, etc., before you can trust the results. Fields of view in histology/microscopy are easily manipulated and adjusted - this is fairly easy to illustrate with images found online if you're so inclined.

Flow cytometry is entirely dependent on believing that the operator used the appropriate controls (always check the materials and methods). If you have access to a flow cytometry core nearby, they can most likely show you, or generate, wildly different-looking graphs from the exact same cells to illustrate this.

Erm, I'm still not sure I've helped but these are what I think might answer your question on a rather blurry Monday morning.
posted by oreonax at 9:43 AM on September 22, 2008


I was also going to mention The Mismeasure of Man and find I was beaten to it - only coming in to correct the title. It's an excellent, excellent book, and details how racist Victorian and Edwardian prejudices ultimately trickled down into wide-ranging events and issues that still affect us today. IQ testing? Originally meant to find students in need of individualized tutoring in French schools, not to label everyone's brains for life. Oh, and it's interesting how American social studies classes never mention that whole involuntary sterilization of the disabled and minorities thing... The last sterilization was in 1981 - less than thirty years ago!
posted by bettafish at 11:00 AM on September 22, 2008


This might not be 100% what you're asking for. It's bad research in the most literal sense possible, but not bad research in the sense that data was falsified. A poorly done literature search was one of the factors that led scientists to trial a drug in a form that was known to be toxic to humans. One person died, a death that could have been prevented had due diligence been done on the form of the drug used. It also sounds like the investigator ignored some warning signs in earlier test subjects, before a later test subject died. See Hopkins faults safety lapses by the Baltimore Sun.
posted by lillygog at 1:39 PM on September 22, 2008


From what I recall, Bob Rosenthal's book Essentials of Behavioral Research is chock full of great examples of bad research and how to do things right. It even has a psychological bent.

It's been quite a few years, and I may be partly conflating what he taught us in the class with what was in the book, but your university library almost certainly has it, and it's worth checking out.
posted by CruiseSavvy at 4:10 PM on September 22, 2008


I read a paper in one of my early psych methods classes that really stuck with me: "The Relative Potency of the Nursery School and the Statistical Laboratory in Boosting the IQ" by F. L. Goodenough and K. M. Maurer (a 1940 Journal of Educational Psychology paper reproduced in J. J. Jenkins and D. G. Paterson's (1961) Studies in Individual Differences).

The basic story is that another group published a paper showing how effective their instructional system was at raising the IQ of low-ability kids. Goodenough and Maurer quite rightly retorted that, even while doing absolutely nothing beneficial, you can show apparent improvement in any low-scoring group thanks to measurement error (and the resulting regression to the mean), and that you can only evaluate gain scores relative to a comparable control group. I found it compelling, and I live in fear of making such a stupid mistake in my own statistics. The best part, of course, is the sarcastic title of the paper, which you only appreciate after reading the article.
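Their point is easy to demonstrate numerically; here's a minimal sketch with made-up numbers (no real IQ data): true ability never changes, yet the group selected for low pretest scores "improves" on retest purely through regression to the mean.

import numpy as np

# True ability is fixed; each test score is true ability plus measurement error.
rng = np.random.default_rng(2)
n = 10_000
true_iq = rng.normal(100, 15, n)
test1 = true_iq + rng.normal(0, 8, n)  # pretest
test2 = true_iq + rng.normal(0, 8, n)  # posttest, same ability, fresh error

low = test1 < 85                       # "low ability" group, selected on the pretest
gain = test2[low].mean() - test1[low].mean()
print(f"apparent IQ gain in the low-scoring group after doing nothing: {gain:.1f} points")
# A control group selected the same way would show the same "gain", which is
# why gain scores only mean something relative to such a control group.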

Also, if you don't mind an example that isn't a formal research article, there's a humdinger from a blogger linked by the Chronicle of Higher Ed:
http://newsroom.ucla.edu/portal/ucla/election-blog-post.aspx?id=1219
Their analysis of the correlation of presidential IQ and performance is idiotic for several reasons (note: I'm an IQ fan, so I have no problem with associating intelligence and life outcomes). The most important problem is that the historical estimates of IQ are almost certainly confounded with the estimates of presidential success. If your students can see past the politics, this article could lead to a great discussion about construct validity in research.
posted by parkerjackson at 7:12 PM on September 22, 2008


The Stanford Prison Experiment? I saw the video in college. Ugh.
posted by ostranenie at 6:11 AM on September 24, 2008


I like the examples in which the criteria of the group being "studied" already contain the characteristic that the study is supposed to evaluate. Does that make sense? I don't think I'm saying it right. When the input already contains the output.

Here's an example -- the California prison system did a study to determine whether and how to change their numeric security system. Inmates with security scores of 0 to 18 get to live in Level I housing (lower security), inmates with 19 to 27 are assigned to Level II housing, and so on, all the way up to 52 points or more, which is Level IV.

The study used as its data a "Form 839" -- the screening form for inmates at reception centers. The Form 839 contains a yes/no check box for "mental illness." The study found a small but statistically significant correlation between "mental illness" on the form and later disruptive behavior (disciplinary actions, called 115s). So then all of the new people coming in to reception with the "yes" box checked got an automatic 4 extra points on their security scores.

The problem was that the 839s being studied had no definition of how to check yes/no on the "mental illness" box, and in fact it is very likely that the yes/no was often checked based on people having psychiatric conditions that were noticeable and disruptive.* So ... right, people with a disruptive condition are going to later have disruptive disciplinary problems, duh. (A toy simulation of that circularity is sketched below.)

*During the study period, the reception centers were not actually screening incoming inmates for mental health conditions in any systematic way. The data used reported far lower levels of psychiatric conditions than would be expected or even plausible. Our hypothesis is that at the time of the study, and previously, the staff was simply labeling persons who were disruptive or who "read" mentally ill (e.g. shouting, throwing, assaulting) as being mentally ill.

Later, after a screening-for-psychiatric-conditions system was implemented, the number of "yes" boxes spiked up, and all of those people got 4 extra points. It took many years for us lawyers to get the CDCR to rescind the 4-points policy. Now there are thousands of inmates who still carry those 4 points and need to get them removed.
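For what it's worth, that circularity is easy to reproduce in a toy simulation (hypothetical probabilities, nothing from the actual CDCR data): if the "mental illness" box mostly gets checked for people who are already visibly disruptive, it will "predict" later discipline no matter what.

import numpy as np

rng = np.random.default_rng(3)
n = 20_000
disruptive = rng.random(n) < 0.15  # the latent trait the screeners actually notice

# The checkbox tracks visible disruption, not any real psychiatric screening.
box_checked = rng.random(n) < np.where(disruptive, 0.6, 0.02)

# Later disciplinary write-ups (115s) depend on the same disruptive trait.
discipline = rng.random(n) < np.where(disruptive, 0.5, 0.05)

print(f"discipline rate, box checked:   {discipline[box_checked].mean():.2f}")
print(f"discipline rate, box unchecked: {discipline[~box_checked].mean():.2f}")
# The box "predicts" discipline only because both are proxies for the same
# visible disruptiveness -- the input already contains the output.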

Thanks, social scientists.
posted by ClaudiaCenter at 9:12 AM on September 24, 2008


Not so much bad research, but a well-laid-out example of how leading questions bias survey response, from DemFromCT on DailyKos. Timely, too, as it's about the bank bailout.

Polling the Bailout: Wording of Questions Matters
posted by fairytale of los angeles at 9:48 AM on September 24, 2008


This thread is closed to new comments.