Fallacies and Errors in Inferential Statistics
January 26, 2013 12:17 AM

I have recently been introduced to the concept of pseudoreplication as a mistake that people often make when using inferential statistics to evaluate treatment outcomes. My field (evolutionary and conservation biology) makes heavy use of inferential statistics, including techniques that are vulnerable to pseudoreplication, yet nowhere in my formal education have I been taught how poor experimental design and lack of statistical rigor can lead to fallacies like this. My personal statistical proficiency is poor, but I am working to remedy that. To that end, could folks help me by identifying and ideally explaining whatever other potential pitfalls you can think of, and how they can be avoided through careful experimental design and data analysis?
posted by Scientist to Science & Nature (5 answers total) 13 users marked this as a favorite
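For concreteness, here is a minimal sketch of why pseudoreplication bites (the design and numbers are invented for illustration): two treatments with four tanks each and ten fish per tank, and no true effect. Treating each fish as an independent replicate pushes the false-positive rate far above the nominal 5%; analyzing tank means does not.

```python
# Minimal sketch (invented design): pseudoreplication inflates the Type I error rate.
# Two treatments, 4 tanks each, 10 fish per tank, and NO true treatment effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_tanks, n_fish = 2000, 4, 10
tank_sd, fish_sd = 1.0, 1.0          # tank-to-tank and within-tank variation

def simulate_group():
    """One treatment group under H0: shared tank effects plus individual fish noise."""
    tank_means = rng.normal(0.0, tank_sd, n_tanks)
    return rng.normal(np.repeat(tank_means, n_fish), fish_sd)

false_pos_fish = false_pos_tank = 0
for _ in range(n_sims):
    a, b = simulate_group(), simulate_group()

    # Wrong: treat every fish as an independent replicate (pseudoreplication).
    false_pos_fish += stats.ttest_ind(a, b).pvalue < 0.05

    # Better: the tank is the experimental unit, so test tank means.
    a_tanks = a.reshape(n_tanks, n_fish).mean(axis=1)
    b_tanks = b.reshape(n_tanks, n_fish).mean(axis=1)
    false_pos_tank += stats.ttest_ind(a_tanks, b_tanks).pvalue < 0.05

print("False-positive rate, fish as replicates:", false_pos_fish / n_sims)  # well above 0.05
print("False-positive rate, tank means:        ", false_pos_tank / n_sims)  # near 0.05
```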
 
Are you familiar with Bayes' theorem, and especially the base rate fallacy?
posted by wayland at 12:49 AM on January 26, 2013 [1 favorite]
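For anyone who hasn't met it, here is a minimal sketch of the base rate fallacy with invented numbers: a test with 99% sensitivity and 95% specificity, applied to a condition with 1% prevalence, still produces mostly false positives, because Bayes' theorem weights the likelihood by the base rate.

```python
# Base rate fallacy, minimal sketch with made-up numbers.
# Bayes' theorem: P(D | +) = P(+ | D) * P(D) / P(+)
prevalence = 0.01      # P(D): base rate of the condition
sensitivity = 0.99     # P(+ | D)
specificity = 0.95     # P(- | not D), so P(+ | not D) = 0.05

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(condition | positive test) = {p_disease_given_positive:.3f}")
# ~0.167: despite a "99% sensitive" test, most positives are false positives,
# because the 1% base rate dominates.
```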


My personal statistical proficiency is poor, but I am working to remedy that.

Hearing a little more about this would make it easier to offer specific advice. The fact that you're even worried about pseudoreplication at all suggests that things might not be as bad as you think?

To that end, could folks help me by identifying and ideally explaining whatever other potential pitfalls you can think of, and how they can be avoided through careful experimental design and data analysis?

It sounds like you're asking how to get the right answers and avoid getting the wrong ones :) Although I can definitely sympathize with your desire not to send a mistake out with your name on it in the short run, I think it's ultimately more difficult to do science by treating statistics as a list of do's and don'ts. You might want to consider signing up for a statistics course, preferably a grad course for statisticians rather than a service course. This could be especially feasible if you're a grad student, but might be worth thinking about even if you're not. Rutherford, I once read, checked himself into freshman chem around the time he started work on the gold foil business. This was after he won the Nobel, mind you.

Anyway, two more things. You're probably already familiar with multiple hypothesis testing; it seems like it's gotten some much-needed attention in recent years, but I still see issues with it all the time. A candidate from Prestigious U recently gave a job talk in our department that strongly suggested they had never heard of it.
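(To put numbers on that, a minimal sketch with a made-up batch of twenty true-null comparisons: test them all at alpha = 0.05 and the chance of at least one false positive is around 64%; a Bonferroni correction brings the family-wise rate back down.)

```python
# Minimal sketch (hypothetical setup): the multiple comparisons problem.
# Test m independent true-null hypotheses at alpha = 0.05 and watch the
# family-wise error rate climb; a Bonferroni correction restores it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, m, n, alpha = 2000, 20, 30, 0.05

any_hit_raw = any_hit_bonf = 0
for _ in range(n_sims):
    # m independent two-sample comparisons, all with zero true effect
    pvals = np.array([
        stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
        for _ in range(m)
    ])
    any_hit_raw += (pvals < alpha).any()
    any_hit_bonf += (pvals < alpha / m).any()   # Bonferroni-corrected threshold

print("Family-wise error rate, uncorrected:", any_hit_raw / n_sims)   # ~1 - 0.95**20 ≈ 0.64
print("Family-wise error rate, Bonferroni: ", any_hit_bonf / n_sims)  # ≈ 0.05
```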

A last point, which might be more generally applicable, is to try to develop some kind of generative model of your data and check yourself against it. The first problem discussed in Lazic 2010, for example, could have been avoided by generating like 100000 replicates of the experiment under H0 in silico and comparing the distribution of the test statistic to the expected distribution given the df. Sometimes the model is too complex for that to be feasible, and sometimes it's overkill. But it's a thought.
posted by lambdaphage at 12:55 AM on January 26, 2013 [3 favorites]
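A minimal sketch of the in-silico check described above, with an invented two-group design: simulate the experiment many times under H0, collect the t statistic each time, and compare its quantiles to the theoretical t distribution with the degrees of freedom you think you have. Large discrepancies mean the analysis isn't doing what you think it is (wrong df, non-independence, and so on).

```python
# Minimal sketch (made-up design): simulate the experiment under H0 and compare
# the distribution of the test statistic to its theoretical reference.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n_per_group = 100_000, 8
t_stats = np.empty(n_sims)

for i in range(n_sims):
    a = rng.normal(size=n_per_group)   # H0: both groups drawn from the same distribution
    b = rng.normal(size=n_per_group)
    t_stats[i] = stats.ttest_ind(a, b).statistic

df = 2 * n_per_group - 2
# Compare simulated quantiles with the theoretical t distribution.
for q in (0.90, 0.95, 0.975, 0.99):
    print(f"q = {q}: simulated {np.quantile(t_stats, q):5.2f}  "
          f"theoretical {stats.t.ppf(q, df):5.2f}")
# If these diverge badly, the analysis is not doing what you think it is.
```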


The differences in differences fallacy (previously).
posted by googly at 2:02 AM on January 26, 2013
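If this refers to the common pitfall of comparing a significant result to a non-significant one and concluding that the two effects differ, here is a minimal sketch with invented estimates: effect A clears p < 0.05, effect B does not, yet a direct test of A against B shows no reliable difference between them.

```python
# Minimal sketch (invented numbers): "significant vs. not significant" is not
# itself a significant difference. Two estimated effects with standard errors:
import numpy as np
from scipy import stats

effect_a, se_a = 2.5, 1.0    # z = 2.5 -> "significant"
effect_b, se_b = 1.4, 1.0    # z = 1.4 -> "not significant"

def two_sided_p(z):
    """Two-sided p-value for a standard normal test statistic."""
    return 2 * stats.norm.sf(abs(z))

print("p for A alone:", round(two_sided_p(effect_a / se_a), 3))   # ≈ 0.012
print("p for B alone:", round(two_sided_p(effect_b / se_b), 3))   # ≈ 0.162

# The right question: is A different from B?
diff = effect_a - effect_b
se_diff = np.sqrt(se_a**2 + se_b**2)
print("p for A vs. B:", round(two_sided_p(diff / se_diff), 3))    # ≈ 0.44 — no evidence they differ
```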


Simpson's paradox is a favorite...
posted by paultopia at 12:55 PM on January 26, 2013
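Simpson's paradox in a minimal sketch, with made-up counts: one treatment has the higher success rate within every stratum, but the ranking flips once the strata are pooled, because the treatments were applied to different mixes of easy and hard cases.

```python
# Minimal sketch (made-up counts): Simpson's paradox.
# Treatment A wins within each stratum, yet loses after pooling, because
# A was mostly given the hard cases and B the easy ones.
counts = {
    #               (successes, trials)
    ("A", "easy"): (18, 20),
    ("A", "hard"): (32, 80),
    ("B", "easy"): (64, 80),
    ("B", "hard"): (6, 20),
}

for stratum in ("easy", "hard"):
    for treat in ("A", "B"):
        s, n = counts[(treat, stratum)]
        print(f"{stratum:>4} cases, treatment {treat}: {s}/{n} = {s / n:.2f}")

for treat in ("A", "B"):
    s = sum(v[0] for (t, _), v in counts.items() if t == treat)
    n = sum(v[1] for (t, _), v in counts.items() if t == treat)
    print(f"pooled, treatment {treat}: {s}/{n} = {s / n:.2f}")

# A is better in every stratum (0.90 vs 0.80, 0.40 vs 0.30),
# but B looks better pooled (0.70 vs 0.50): condition on the confounder.
```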


I think you probably have these covered, but if I were to compile a list of statistical pitfalls I would definitely put these two maxims near the top:

Absence of evidence is not evidence of absence. (In other words, don't interpret a negative result as proving the null.)

Correlation does not equal causation. (Especially with observational studies, think carefully about the potential influence of variables not included in the analysis.)
posted by cnanderson at 7:15 PM on January 26, 2013
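To put a number on the first maxim, here is a minimal sketch with an invented effect size: a real half-standard-deviation effect studied with ten subjects per group comes out "not significant" most of the time. The negative result reflects low power, not absence of an effect, which is why confidence intervals and up-front power analyses matter.

```python
# Minimal sketch (made-up effect size): absence of evidence != evidence of absence.
# A real effect of 0.5 SD, studied with n = 10 per group, is usually "not significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_sims, n, true_effect = 5000, 10, 0.5

significant = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_effect, 1.0, n)
    if stats.ttest_ind(treated, control).pvalue < 0.05:
        significant += 1

power = significant / n_sims
print(f"Power at n = {n} per group: {power:.2f}")   # roughly 0.18
print(f"So about {1 - power:.0%} of such studies get p > 0.05 despite a real effect.")
```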


This thread is closed to new comments.