# Ever search for evidence of absence instead of finding absence of evidence?

December 23, 2012 12:57 PM Subscribe

Is scientific research ever organized to search for evidence of absence by reversing the null hypothesis? If not, why not?

It occurs to me that I can't think of any studies that do so, and that in the fields in which I'm interested (mostly medicine), people instead reject a hypothesis only after one or more published studies have failed to find a statistically significant correlation. Is there validity in rejecting hypotheses based on (repeated?) failure to find positive results, rather than choosing a correlative relationship as the null hypothesis?

I can imagine that, in many fields, you wouldn't want to set up research this way because you end up with a weak conclusion, but then I can imagine other research where a weak conclusion might be all that they can get (like, say, studying homeopathy).

Do many studies include enough raw data that they could be reinterpreted with a reversed null like this? Are there some reasons why reinterpreting the data would be an invalid way to do an experiment? If not, wouldn't it be a good idea for somebody to do some meta-research with these reversed nulls, to for instance say something stronger about the ineffectiveness of something like homeopathy?

Read about empirical Bayesian analysis, where prior evidence is added to the inference. This evidence can include what has not been observed: a banal example would be inferring the likelihood the sun will not rise tomorrow, taking into account all previous observations of prior sunrises in recorded history.

posted by Blazecock Pileon at 2:00 PM on December 23, 2012 [1 favorite]
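The sunrise example above is the classic illustration of Laplace's rule of succession, which is what that kind of empirical Bayesian inference reduces to in the simplest case. A minimal sketch (the sunrise count below is an illustrative assumption, not a real figure):

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Laplace's rule of succession: posterior probability that the next
    trial succeeds, given a uniform prior on the unknown success rate."""
    return Fraction(successes + 1, trials + 2)

# With n straight successes observed, the posterior probability of a
# failure on the next trial is 1/(n + 2): tiny, but never exactly zero.
n_sunrises = 1_825_000  # roughly 5000 years of daily records (illustrative)
p_no_sunrise = 1 - rule_of_succession(n_sunrises, n_sunrises)
```

The point for the question at hand: the prior never lets the probability hit zero, so accumulating non-events yields graded evidence of absence rather than a hard proof.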

See Gallistel's (very important IMO) paper, The Importance of Proving the Null [pdf].

posted by advil at 2:03 PM on December 23, 2012 [2 favorites]

Yes, if I understand what you're asking, there indeed are designs for drug trials like non-inferiority trials, where the intent is basically to establish that drug A, say, is *not* less efficacious than drug B, as opposed to 'superiority trials' where you want to prove that drug A *is* efficacious compared to drug B (or placebo). In non-inferiority trials, you're basically setting out to prove the usual null that A & B have the same (= no difference) treatment effect -- this is what superiority trials try to reject.

Can't help you with the rest of your question(s), though, but you might be interested in taking a look at this paper as a starting point: Why Most Published Research Findings Are False.

posted by un petit cadeau at 2:05 PM on December 23, 2012 [2 favorites]
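The non-inferiority design described above can be sketched in a few lines. This is a simplified illustration under a normal approximation, with hypothetical numbers; real trials pre-specify the margin and handle variance estimation more carefully:

```python
from statistics import NormalDist

def noninferiority_test(mean_a, mean_b, se_diff, margin, alpha=0.025):
    """One-sided non-inferiority test under a normal approximation.
    H0: drug A is worse than drug B by at least `margin`.
    Rejecting H0 supports the claim "A is not inferior to B"."""
    z = (mean_a - mean_b + margin) / se_diff
    p_value = 1 - NormalDist().cdf(z)
    # Equivalent criterion, as usually reported: the lower bound of the
    # one-sided (1 - alpha) confidence interval must exceed -margin.
    ci_lower = (mean_a - mean_b) - NormalDist().inv_cdf(1 - alpha) * se_diff
    return p_value < alpha, ci_lower > -margin
```

The two returned criteria are algebraically equivalent for matching alpha; the confidence-interval form is the one regulators typically ask for.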

Thanks for the answers.

The non-inferiority link suggests that there are problems with organizing a study this way; the "proving the null" link suggests that raw data isn't often made available, that the data can be re-analyzed when it is, and that, at the least, accepting the null on the basis of a single (non-Bayesian) analysis is not valid.

Unless I'm reading those incorrectly, which is a real possibility, in which case I hope you'll let me know :)

posted by nathan v at 10:20 AM on December 24, 2012
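The Gallistel paper's central move — quantifying evidence *for* the null rather than merely failing to reject it — can be sketched with the BIC-based Bayes factor approximation from Wagenmakers (2007). A minimal sketch for a one-sample t-test (the function name and inputs are illustrative):

```python
import math

def bic_bayes_factor_null(t_stat: float, n: int) -> float:
    """Approximate Bayes factor in favor of the null (BF01) for a
    one-sample t-test with n observations, via the BIC approximation.
    Values well above 1 count as positive evidence FOR the null,
    not just a failure to reject it."""
    return math.sqrt(n) * (1 + t_stat**2 / (n - 1)) ** (-n / 2)
```

With t = 0 and n = 100 this gives BF01 = 10, i.e. the data are ten times more likely under the null than under the alternative — exactly the kind of "deciding for the null" that a p-value alone can never deliver.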

One way to do this would be to compare the distribution of outcomes to the expected distribution implied by the null hypothesis, and show that they are indeed very close.

Another way might be to collect a large enough sample to show not only that the confidence interval for the effect includes zero, but also that the confidence interval is very narrow, so that there is only a tiny bit of probability in the region that implies any substantively significant benefit.

posted by ROU_Xenophobe at 1:48 PM on December 23, 2012 [2 favorites]

This thread is closed to new comments.
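The narrow-confidence-interval idea in that last comment is formalized as equivalence testing, usually via two one-sided tests (TOST). A minimal sketch under a normal approximation, with a hypothetical function name and margin:

```python
from statistics import NormalDist

def tost_equivalence(effect, se, margin, alpha=0.05):
    """Two one-sided tests (TOST): declare the effect 'equivalent to zero'
    only if it is significantly greater than -margin AND significantly
    less than +margin. Normal approximation assumed."""
    z_lower = (effect + margin) / se   # tests H0: effect <= -margin
    z_upper = (effect - margin) / se   # tests H0: effect >= +margin
    p_lower = 1 - NormalDist().cdf(z_lower)
    p_upper = NormalDist().cdf(z_upper)
    return max(p_lower, p_upper) < alpha
```

Note that a noisy study with a wide interval fails this test even when the point estimate is exactly zero — which is precisely the distinction between absence of evidence and evidence of absence.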