Credible scientific studies?
January 17, 2009 9:15 AM   Subscribe

What scientific studies should you believe?

Very open ended question, I know. "Should."

If you were in an argument over whether Vitamin C truly helps prevent colds, or whether dowsers are for real, you would likely mention the study you recently read about on Wikipedia that linked to the journal Nature, and maybe feel pretty good about yourself. Credible source, no?

Or maybe you mentioned the study on racism that was in Newsweek or Time or the New York Times (science, not politics).

I'm very much a skeptical person but I think skepticism will only take you so far before you start using it as an excuse for anything you don't want to believe so that it will keep your internal beliefs intact.

I sent a study on racism to some friends recently that was published in Time. It mentioned how the study was executed and something about a white man brushing past a black man and exclaiming a racist remark, and one responded with "Well, I couldn't know if when he bumped into him he had a solemn face or the actor was a bad actor or if the scientists were interested in the result..." and it goes on.

My response to this is: "You got all this from text? From an email?"

I mean, he talks about the credibility of a study that was described to a reporter, which the reporter reduced to an article, which was probably changed by an editor... and you BELIEVE this article describes perfectly what went on? Yet you don't believe the study was credible? Is Time not credible enough to believe they evaluated credible scientists?

Who do you NOT believe to be credible? The scientists? The journalist? The editor? Or the email sender?

Sorry for lack of clarity, but here goes: Humans are flawed and perceptions are likely to wreak havoc in all kinds of situations, but as far as the scientific method is concerned, I think there is an advantage to understanding it... but what of those who report the studies? The scientists themselves, and the magazines or journals that publish them.

All that we are left with is what journal or magazine has a record of getting things done as accurately as possible so that the original message is as uncorrupted as possible to the end user.

How do you approach truth when certainty is always lacking? How do you argue your position in a debate with as few fallacies as possible?

But more specifically, what names can you trust?
posted by theholotrope to Science & Nature (30 answers total) 3 users marked this as a favorite
 
Well, for one, there are degrees of belief. I think everyone forms conclusions based on weighted consideration of all available evidence. So, if you think the source is dubious, you don't necessarily have to completely disregard the result of a study, but it might not count for much.

For me, I think the publication goes a long way toward giving a study credibility: Science, Nature, PNAS, NY Times, etc. That's credibility to me. Of course, there's still the possibility that the studies are flawed. For us lay-people, the best we can do is to accept the judgement of people we trust.
posted by mpls2 at 9:25 AM on January 17, 2009


Academic journals are peer reviewed. That means a panel of experts in whatever highly specific field the article pertains to evaluates the findings/arguments/experiments/etc. and determines whether or not they are worthy of being published. Some journals are more reputable than others, but you can bet that if an article is in a publication as lofty as Science or Nature, it has been heavily scrutinized and likely has merit.

Newspaper articles are not subject to peer review. Even in a paper like the New York Times, there is no panel that reviews the article. It's targeted at a lay population, and necessarily does not contain the kind of rigorous detail that scientific journals do.

If you want to know what names to trust, you'll need to become familiar with the reputations of the scientists in question. Journalists, however, are not scientists. Some report scientific findings more accurately and adeptly than others, but they're still journalists.
posted by solipsophistocracy at 9:35 AM on January 17, 2009


Are you asking about studies, or about interpretive articles, written for popular magazines, about the studies? And what do you mean by "trust?" Do you mean to ask which publications reliably publish their content in good faith and are not corrupt? Or, do you mean to ask which publications subject their content to the most rigorous scrutiny? Older studies are always being superseded by newer, better studies. That doesn't mean that the old studies were bad, even if the conclusions that were drawn at the time were flat wrong. Science strives to be at the cutting edge of our expanding understanding of the world, but the studies it produces get left behind in time as the cutting edge moves on. With science, absolute truth isn't on the table.
posted by jon1270 at 9:44 AM on January 17, 2009


Best answer: As an academic, and a researcher, I'm not sure "belief" is the most appropriate way to frame how I think of the concept. Scientific papers are opinions, often (and hopefully) well informed opinions that rely on empirical evidence to demonstrate something about some phenomenon.

I think most good scientists (at least the ones I associate with) understand that knowledge, even scientific knowledge, is contingent on the limitations of whatever epistemology it relies on for its validity. Science isn't "reality," it models reality as best it can and seeks to apply these models in useful ways. Scientific journals don't embody knowledge any more than a scientist does, but they do circulate it. Knowledge, seen in this way, isn't a pure construct, but a communication across time and space. As it moves from one locus (or, say, journal or scientist) to another, it changes, even if in imperceptible ways. It can be made better, it can degrade until it stops circulating.

You can bet that as a complicated knowledge construct (say, some high-level theory) that is well understood within a group of people circulates outside that group, it is presented in less nuanced ways. No reporter is going to understand it as well as one of the people formulating it, and most people who invoke it to do some sort of rhetorical work for them aren't going to understand it even as well as the reporter who talked to the scientist to write the article.

BUT this isn't a bad thing, it's just the way it is. It's one of the limitations of knowledge. It's why you can't just get on the Internet, read some shit, and be the expert. Expertise happens in more complicated ways, and needs more complicated inputs than any one medium or mode of information delivery can provide. So to answer your explicit question of who is credible, I say to know that, you really need to understand why these people are invoking the knowledge they are invoking, what they want to do with it, what they have contributed to it, what the limitations of their own understanding are, etc. Each of them could be credible depending on these things, or each of them could be wrong, no matter what their expertise.

I might frame the question, rather than whether I believe them, as how useful I think their ideas are for my purposes, and how influential I think others might find their ideas in the way I want to use them.
posted by mrmojoflying at 9:44 AM on January 17, 2009 [11 favorites]


Even in a paper like the New York Times, there is no panel that reviews the article. It's targeted at a lay population, and necessarily does not contain the kind of rigorous detail that scientific journals do.

The NYT is not likely to cite a study from a non-credible source, for obvious reasons.
posted by mpls2 at 9:46 AM on January 17, 2009


The NYT is not likely to cite a study from a non-credible source, for obvious reasons.

Agreed, but the journalist's interpretation of a credible source is not necessarily going to paint the most accurate picture of the research. In the article (from the NYT, by the way) that is the subject of this recent FPP for example, the journalist basically ignores the pertinent scientific finding and goes on to spend the majority of the article speculating about what an "anti-love vaccine" would be like. It is an example of terrible scientific reporting. Just because the source is credible doesn't mean the interpretation of the findings will be.
posted by solipsophistocracy at 9:52 AM on January 17, 2009


The farther the results are from expectations, the greater the amount of evidence required.
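One way to make this rule of thumb concrete is Bayes' rule in odds form: your posterior odds for a claim are your prior odds times the likelihood ratio of the evidence. A minimal sketch (the numbers are illustrative assumptions, not anything from the thread):

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# The lower your prior for a claim, the stronger the evidence needed to
# make the posterior favorable.

def posterior_odds(prior_odds, likelihood_ratio):
    """Update the odds in favor of a claim given evidence of a given strength."""
    return prior_odds * likelihood_ratio

# A mundane claim (prior odds 1:1) is made credible by modest evidence:
print(posterior_odds(1.0, 10.0))   # 10:1 in favor

# An extraordinary claim ("dowsing works", prior odds 1:10000) needs far
# stronger evidence to reach the same posterior:
print(posterior_odds(1e-4, 10.0))  # still 1000:1 against
```

The same study (a likelihood ratio of 10) moves a mundane claim to near-certainty but barely dents an extraordinary one, which is exactly the asymmetry described above.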
posted by ook at 10:00 AM on January 17, 2009


skepticism will only take you so far before you start using it as an excuse for anything you don't want to believe so that it will keep your internal beliefs intact.

This is not what skepticism is. What you're describing is some sort of emotional crutch situation and in my own anecdotal experience this simply doesn't happen. People who are skeptical, who are free thinkers, who evaluate the material before them in a skeptical manner are not likely to be the same people who are so concerned with their internal beliefs that they dismiss obvious evidence to the contrary. It just doesn't work that way.

I sent a study on racism to some friends recently that was published in Time.

No. You sent an article which probably cited a study. Time doesn't publish academic studies. As such you are relying on the journalist to interpret the findings of the study for you the layperson. This is often just fine, but particularly in the fields of science and social science journalists have been known to misinterpret or misrepresent academic studies. Caveat Lector.

My response to this is: "You got all this from text? From an email?"

Your example, or your retelling of it here, is confusing and it's unclear what your friends were actually taking issue with.

you BELIEVE this article describes perfectly what went on? Yet you don't believe the study was credible?

I don't understand what your problem is. Poor studies are commissioned and published all the time at all levels of academia. Likewise, reporters with deadlines to meet may accidentally or willfully misinterpret such studies. We're talking about a writer who may not be an expert in a given field attempting to distill a narrative out of an academic research paper which may be many dozens, if not hundreds, of pages long.

Mistakes may be made at the level of the study, or at the level of the journalist. Further, many studies are simply inconclusive, producing a body of data from which journalists may pick and choose their own conclusions.

Is Time not credible enough to believe they evaluated credible scientists?

It doesn't work that way. With rare exception the journalist probably doesn't know the people behind the study, or perhaps even the institution. For every rock star institution that drips name recognition and credibility, there are thousands of other institutions out there producing research. There is no rating system or database a journalist can look up to find out how credible a scientist is. The best they can do is google around and try to find out what other work the academics have conducted, where they teach, who is paying for the study, etc.

Sorry for lack of clarity

Indeed, these stream-of-consciousness, one-too-many-bong-hits AskMes are tedious.

Humans are flawed and perceptions are likely to wreak havoc in all kinds of situations

Humans are pattern-seeking animals. We're not "flawed"; we're just good at what we do.

All that we are left with is what journal or magazine has a record of getting things done as accurately as possible so that the original message is as uncorrupted as possible to the end user.

I think you're a little too caught up in this medium-is-the-message type of thinking...

Look. Say my department does a study. It's 200 pages long, filled with charts, graphs, equations, really boring shit. The conclusions of the study, however, are useful to a reporter who is working on a story about... I dunno... gang violence. So he cites our study, or a portion of it, or maybe even just one data point, and from that he writes an 800-word piece for some doctor's-office drivel like Time.

Then you come along and get all distracted by your own navel gazing and questioning how "flawed" humans are, and, "OH MY GOD HOW CAN WE REALLY KNOW ANYTHING?!?" And suddenly you think you're really just a brain floating in a tank somewhere and all of this is the Matrix... and well...

Just read the fucking study. Google it. If Time found it, so can you.

That's the beauty of the scientific method... not that it's some magic system of understanding, but that it's understandable by anyone who takes the time to read it.

In other words, if you read the study, and it sounds fishy to you, guess what? You can do your own study. You can produce your own body of work that either confirms or discredits someone's findings, and if the study is credible at all, it will tell you right there on the page how they found out what they found out...
posted by wfrgms at 10:11 AM on January 17, 2009 [5 favorites]


You know, my class on experiment design answered a lot of your questions in surprising and nonintuitive ways. These days, I look for correct application of statistical principles to data whose measurement was fairly straightforward.

It's harder to find than it sounds.
posted by ikkyu2 at 10:27 AM on January 17, 2009 [1 favorite]


Well, there was a commentary (or was it a theoretical paper) I saw referred to, probably at ScienceBlogs, which stated that around three-quarters of published positive results of statistical significance were likely false positives. Someone with a better memory help me out here. This was within the last 2-3 years.

Just read the fucking study. Google it. If Time found it, so can you.

Not necessarily. I've known of a few studies publicized even before Advance Publication, not to mention the $30+ charge that a non-subscribing layperson would usually have to pay to read a single paper.
posted by Gyan at 10:29 AM on January 17, 2009


If you find out who conducted the study, there's a good chance that you can contact the corresponding author via email to receive a copy if the charge is really steep.

Indeed, these stream-of-consciousness, one-too-many-bong-hits AskMes are tedious.

This cannot be stated strongly enough.
posted by solipsophistocracy at 10:44 AM on January 17, 2009 [1 favorite]


I never trust journalists to interpret studies for me. If I read an article about a study whose topic I am interested in, I look up the actual study myself. I don't always read the whole thing, but the abstract or introduction will describe how the study was conducted and how the results were interpreted to draw a conclusion. I decide based on those factors whether the study is credible.

Journalists are not trained to evaluate the ways in which studies are conducted. They usually only read press releases about the studies that are at best written by study authors who want to bolster their findings, and at worst written by interest groups who want the public to believe a certain thing and are willing to overstate or manipulate study results to do so. I prefer to investigate for myself.
posted by decathecting at 10:45 AM on January 17, 2009


Most biological/medical peer-reviewed papers must be critically evaluated no matter who wrote them or where they were published. It is important to scrutinize the methodology of any experiment and the statistical analyses performed on the results to determine if the outcome is statistically significant (it's all about the P-value). If something is published in Nature, Science, JAMA, or the New England Journal, most likely it has already been thoroughly evaluated. However, that should not stop each reader from evaluating it themselves (that's what makes them scientists). I assume the same is true within the physical sciences. I'm not entirely sure how these things are evaluated within the social sciences, although I'm certain there are some similarities.

As for newspapers, media reports, and wikipedia, they are all biased. It is fine to get your information from these sources, but keep in mind they were often written by someone for some reason. Part of what makes you an intelligent human being is understanding that and critically thinking about it.
posted by ruwan at 11:14 AM on January 17, 2009


I sent a study on racism to some friends recently that was published in Time. It mentioned how the study was executed and something about a white man brushing past a black man and exclaiming a racist remark, and one responded with "Well, I couldn't know if when he bumped into him he had a solemn face or the actor was a bad actor or if the scientists were interested in the result..." and it goes on.

My response to this is: "You got all this from text? From an email?"


This says to me that your friend is using very good scientific critique. S/he hears of this study and how it was carried out, and immediately thinks of factors that might have influenced the results. It's how peer editing and scientific review works. It doesn't mean s/he disbelieves the results, it just means that these are things that hopefully the experimenter thought of as well and controlled for. All s/he is doing is thinking of questions, which can be answered by reading the actual text of the actual published study.

Now, if the study DIDN'T compensate for all of these issues s/he thought of that might influence the results, then and only then is that grounds for skepticism.
posted by CTORourke at 11:51 AM on January 17, 2009


Indeed, these stream-of-consciousness, one-too-many-bong-hits AskMes are tedious.

This cannot be stated strongly enough.


Awww... come on, folks, the asker presented an honestly naive question that is typical for the aged 16-24 crowd, or older if one hasn't received a rigorous post-secondary education - really nothing that extraordinary, or that complicated to lend insight to. Personally, I appreciated the question, for all its imperfections, because it presented someone actually asking the kind of question that I would want a novice researcher asking. Are the disciplines cranky today, or what :).
posted by mrmojoflying at 12:05 PM on January 17, 2009 [1 favorite]


And by novice, I mean GenEd novice, not M.S. novice.
posted by mrmojoflying at 12:06 PM on January 17, 2009


Agreeing with a number of people here about the credibility of various sources and greater weight for peer-reviewed sources. I'd add a couple of things here, though they may not be much comfort to the original poster.

First, it should be remembered that there is an enormous gap between popular expectations of the scientific process and the scientific process itself. In particular, I think most people think of science as providing once-and-for-all confirmations of important claims and then storing them in a warehouse or something. While it does so for many minor details, the big important explanatory claims (even the sort that the original post mentions) are deeply interwoven with each other and constantly being challenged, revised, dropped, recast, refined and so on. It's not that science is failing in this respect, either. Its greatest epistemological asset (imho) is its restlessness, its tendency to poke and tear itself apart in search of something better. That's good for the kinds of questions it approaches, but to someone more attuned to everyday questions, it sounds like evasion. That doesn't mean that all hypotheses are equally legitimate by any stretch, but rarely, if ever, is anything as completely settled as your friends seem to want things to be (though their challenges to the methods are themselves a healthy thing, assuming they're not merely being contrary).

Second, you can't overstress the matter of a journalist's interpretation and repackaging of provisional conclusions in accessible terms. Neuroscience (the only thing I read that is legitimately science, rather than popular interpretations) is particularly susceptible to this for various reasons. Someone comes out with a study finding that there is increased oxygen flow to a region of the amygdala when researchers show subjects still photos from a horror movie, and I guarantee you that Yahoo! will have a 200-word article on it that says "Fear happens in your amygdala!" As a general rule of thumb, I would say be much more wary of any study that claims (or is presented by others as claiming) anything definitive about social behavior or the mind. Not that there isn't real science on these things, just that the gap between the real stuff and what popular sources say about it is extra-wide.
posted by el_lupino at 12:32 PM on January 17, 2009 [1 favorite]


The simple answer is that unless you have the expertise to read and evaluate the methodology and results of a study yourself, you will always be dependent on someone else to evaluate such things for you. Any time you read an article about a study in the popular media (i.e., Time or the NYT), you are reading the study first filtered through the eyes of its own authors, who are always susceptible to overstating and misinterpreting their own data, further filtered through the writer of the secondary article, who is faced with sexing up (and dumbing down) his own writing and a deadline, often in the absence of the methodological training required to do what they're doing right.

Counting on peer review only helps a bit, but tons of crap manuscripts get accepted every day even into good journals. I spot complete stinkers published in JAMA all the time, and that's purported to be a top-tier medical journal.

This is one reason why, as an academic physician, even after getting my medical degree and a bachelor's in chemistry and physics, I went back to get a master's in clinical research. I needed even more training in study design and statistical analysis before I felt confident as both a researcher and critical reader of the literature. Making sense of studies really is much more challenging than one might suspect.
posted by drpynchon at 12:33 PM on January 17, 2009


Well, there was a commentary (or was it a theoretical paper) I saw referred to, probably at ScienceBlogs, which stated that around three-quarters of published positive results of statistical significance were likely false positives

That would probably be "Why Most Published Research Findings Are False" (link)

See also "Why Current Publication Practices May Distort Science" (link)
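The arithmetic at the heart of that first paper fits in a few lines: among results that clear the p < alpha bar, the fraction reflecting a real effect depends on the prior probability that tested hypotheses are true and on study power. A sketch with illustrative numbers (my assumptions, not figures from the paper):

```python
# Positive predictive value of a "significant" finding: of all studies that
# report p < alpha, what fraction actually found a real effect?

def positive_predictive_value(prior, power, alpha):
    """prior: P(hypothesis true); power: P(detect | true); alpha: P(false positive | false)."""
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# If 1 in 11 tested hypotheses is true and studies are underpowered (20%),
# then at alpha = 0.05 most "positive" findings are false:
print(positive_predictive_value(prior=1 / 11, power=0.20, alpha=0.05))  # ~0.29
```

With well-powered studies of plausible hypotheses the same formula gives a high positive predictive value, which is why the prior and the power, not just the p-value threshold, determine how much a published positive result is worth.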
posted by PercussivePaul at 12:35 PM on January 17, 2009


By the way, if you are trying to find information that gives you a place to stand, try to read review or survey papers. Good surveys give you a sense of the history of the field and all the debates that have occurred over the years, and after reading them you will hopefully have some conception of the current state of the art, meaning you will know what concepts are generally considered to be true or settled by practitioners in the field, and what new ideas are appearing but still being explored and debated. A good place to start for the physical and social sciences (if you have a university subscription) is Annual Reviews.
posted by PercussivePaul at 12:43 PM on January 17, 2009


I should also have said, survey papers have a further advantage in that they evaluate all of the studies for you. It's not realistic to expect a layman to dig up sociology papers and be able to assess the quality of the study.
posted by PercussivePaul at 12:49 PM on January 17, 2009


Here is the article in Time magazine. The headline says: "Racist attitudes are still Ingrained."
Here is the article in Science magazine, to which the Time article refers.
Here is the webpage of the main author of the Science article.
Here is her list of publications, where one can download a pdf version of the article.
Here is an excerpt from the abstract: "...The present research demonstrates that although people predicted that they would be very upset by a racist act, when people actually experienced this event they showed relatively little emotional distress..."
posted by metastability at 1:21 PM on January 17, 2009 [1 favorite]


How to Identify a "Good" Scientific Journal (essay with link to podcast version)

Especially read the part near the end where he recommends checking this Wikipedia list for top-notch journals (including the further Wikipedia lists that are linked to within that list).
posted by Jaltcoh at 1:25 PM on January 17, 2009


and you BELIEVE this article [Time] describes perfectly what went on?

According to the Time article:

The study, by researchers at Yale University and Toronto's York University, involved 120 nonblack students who were told they were being recruited for an experiment on team-oriented problem-solving. They were broken into three groups. The members of the first group were individually placed in a room with a black actor and a white actor, both posing as fellow participants in the study, and watched as the black actor slightly bumped the white actor while leaving the room. After the black actor had left, the white actor played out one of three scenarios, saying, "I hate it when black people do that," "Clumsy n______" or nothing at all. None of the people in the two other study groups experienced the interactions directly; one group watched them on video and the other simply read about them.

According to the actual study:


...we assigned 120 participants who self-identified their race/ethnicity (e.g., black, Asian, Pakistani) to the role of “experiencer” or “forecaster” and exposed them to an incident involving no racial slur, a moderate racial slur, or an extreme racial slur. Because our goal was to examine how people who do not belong to the target group respond to racial slurs, black participants were not included in this study (22). Upon entering the laboratory, the experimenter introduced the experiencers to two male confederates—one black and one white—who posed as fellow participants, and then the experimenter exited the room. Shortly thereafter, the black confederate left the room, ostensibly to retrieve his cell phone, and gently bumped the white confederate’s knee on his way out. In the control condition, this incident passed without comment. In the moderate slur condition, once the black confederate had left the room, the white confederate remarked, “Typical, I hate it when black people do that.” In the extreme racial slur condition, the white confederate stated, “clumsy ‘N word.’” Within minutes, the black confederate returned, followed by the experimenter, who asked everyone to complete an initial survey, which included items assessing current affect. Next, the experimenter asked the real participant to select one of the confederates as a partner for a subsequent anagram task and to report their choice orally to the experimenter. Finally, all participants completed the anagram task in another room with the person they had selected. In the forecaster condition, participants were presented with a detailed description of the events that experiencers actually encountered. Forecasters were asked to predict in writing how they would feel if they were in the experiencer’s position and to predict which confederate they would choose as a partner.
posted by metastability at 1:44 PM on January 17, 2009


Lots of good replies already. A few points I want to add:

I've known of a few studies publicized even before Advance Publication...

These I do not put much stock in. Nor anything in New Scientist, which is notorious for being chock-full of unreviewed research.

In general, you should not rely on re-reporting of scientific research published outside the scientific literature, such as in general-interest magazines and newspapers, or Wikipedia. Summaries within the scientific literature can be quite useful, however. The most popular journals like Science and Nature include one-page "News and Views" summaries of some of their research articles, which are written using less specialist vocabulary, and can put the research into context more objectively than the article's authors. Review articles are very useful as well. Meta-analyses of multiple studies can be even more useful than the original articles they get their data from.

it's all about the P-value

Well, no. The p-value is a measure of statistical significance, but statistical significance does not necessarily mean scientific significance. Furthermore, statistics and p-values are often misused, so a significant p-value does not even necessarily mean statistical significance.
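A toy calculation makes the first point concrete: with a large enough sample, an effect far too small to matter still produces a tiny p-value. A sketch using the normal approximation for a two-sample test (all numbers invented for illustration):

```python
import math

def two_sample_p_value(mean_diff, sd, n):
    """Approximate two-sided p-value for a difference in means between
    two groups of size n, using the normal approximation."""
    standard_error = sd * math.sqrt(2.0 / n)
    z = mean_diff / standard_error
    return math.erfc(abs(z) / math.sqrt(2.0))  # two-sided tail probability

# A 0.1-point difference against sd = 10 is practically negligible...
print(two_sample_p_value(0.1, 10.0, 50))         # large p: not "significant"
print(two_sample_p_value(0.1, 10.0, 5_000_000))  # tiny p: "significant" anyway
```

Both runs test the same negligible effect; only the sample size changes, which is why a small p-value by itself says nothing about whether a finding matters.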

I think it's funny that one of the actors in the scenario is called a "white confederate."
posted by grouse at 2:12 PM on January 17, 2009 [2 favorites]


"Who do you NOT believe to be credible? The scientists? The journalist? The editor? Or the email sender?"

The long chain of communication. Any one of them could have misunderstood something, misinterpreted it, or cherry-picked the trivial aside that supports the point they wanted to make.
posted by jacalata at 2:11 AM on January 18, 2009


Huh. This post seems appropriate.
posted by converge at 5:31 AM on January 18, 2009


Also, I'm very glad that wfrgms has epistemology worked out for himself and everyone else. I need a new priest and life coach.
posted by converge at 5:34 AM on January 18, 2009


Take a class in epidemiology.
posted by tiburon at 1:39 PM on January 18, 2009


This is a hard problem. I've thought about it a lot, and written somewhat less; doing more is a Long-Term Project.

You might benefit from Edward Tufte's Beautiful Evidence, which includes advice on consuming presentations. Tufte's other books are excellent, too.

Nature and Science both have "News" sections where the issue's technical articles are summarized for readers from other disciplines. Some stories are written by freelance journalists or the journal's editorial staff, but many are written by Real Experts, often (probably) a referee for the corresponding technical article. A year's subscription to one of these journals, spent skimming most or all of these summaries, would give you a good feeling for how practicing scientists deal with the questions you raise.
posted by fantabulous timewaster at 2:08 PM on January 22, 2009

