How to best publish lack of findings
April 7, 2014 7:03 AM   Subscribe

I completed my Masters dissertation (in the UK) which used a research-area standard measure to compare two populations of subjects. Nothing I hypothesised was found to be significant, and actually, even with heavy stratifying and manipulation, I couldn't find any links. I did, however, find a problem with the measure. Its creator is still doing research. How can I best share this problem to encourage others to look at it?

We ended up with a relatively large number of subjects for the topic area (more than 100). Whilst I am all for publishing everything, I don't think my dissertation itself would be of interest to anyone, and I would struggle to turn it into something useful myself.

If it makes a difference, the masters was in osteopathy, and the general topic area is interoceptive awareness, palpation and mindfulness.
posted by fizban to Science & Nature (10 answers total) 1 user marked this as a favorite
 
Is this the sort of thing that the 'Journal of Articles in Support of the Null Hypothesis' might be interested in?
posted by piato at 7:14 AM on April 7, 2014


I think it's interesting in that, if your hypotheses were sound, others would be likely to make the same assumptions and also be proven wrong. There's a story in that: you are showing that conceptions don't line up with realities. Also, significance isn't everything. There's a lot to be accomplished in mixed-methods analysis, so perhaps focus a bit more on the qualitative side of things, again showing the mismatch between beliefs/assumptions and the statistical results. And lastly, look into that measure and see if there's something you can contribute about theoretical perspectives on methodology and tools. Good luck!
posted by iamkimiam at 7:36 AM on April 7, 2014 [2 favorites]


PLOS One.
posted by googly at 7:47 AM on April 7, 2014 [1 favorite]


Well, what's your goal?
It sounds like you want to critique the measure. If that's the case, you need to go where the people using that measure are. And they aren't in PLOS One or a journal for unsupported findings; they are in the journal you found the measure in.

When people critique a measure, they usually propose and validate a new one. Sometimes it starts a change and sometimes it doesn't. And in my experience the critics are students of the original measure's author, who know the measure inside and out, have used it many times in different environments, and can really speak to its conceptual and methodological drawbacks.

So for now I'd say the easiest thing you could do is a blog post. When students Google the measure they might see your blog post.

But as far as a formal critique you're likely to need more than what you've got.

Maybe search around to see if there are other critics and email them?
posted by k8t at 7:57 AM on April 7, 2014 [1 favorite]


Does your institution publish master's theses online? I think PLoS ONE or a pre-print server is a fine place to put this, for two reasons. First, at the current stage of work it will be tough to get this into a journal where it will be widely read. OTOH, PLoS articles will show up in Google/PubMed searches and in citation trees for the measure. If your finding is novel, it would be wasteful to the community to have more people using that measure without knowing the limitation. Second, if your dataset has reasonable power to detect the association of interest, it's important that the literature not consist of only marginally significant findings when somebody goes to do a systematic review or meta-analysis. You haven't mentioned the power of your study. Are your CIs tight enough to rule out interesting associations? If your dataset is small (100 may be small in absolute terms even if large relative to other studies), then don't bother with an article along the lines of "my study had no power and found nothing."
posted by a robot made out of meat at 8:26 AM on April 7, 2014
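

For anyone wanting to sanity-check the power/CI point above, here is a minimal sketch in Python (using scipy and statsmodels). The group sizes, means, and SDs below are hypothetical placeholders, not values from the actual study, and the measure itself is left abstract; swap in your own summary statistics.

    import numpy as np
    from scipy import stats
    from statsmodels.stats.power import TTestIndPower

    # Hypothetical two-group summary statistics (NOT from the actual study)
    n1, n2 = 55, 50            # group sizes (total > 100, as in the question)
    mean1, mean2 = 3.2, 3.0    # group means on the measure
    sd1, sd2 = 0.9, 1.0        # group standard deviations

    # 95% CI for the difference in means (Welch, unequal variances)
    diff = mean1 - mean2
    se = np.sqrt(sd1**2 / n1 + sd2**2 / n2)
    # Welch-Satterthwaite degrees of freedom
    df = (sd1**2 / n1 + sd2**2 / n2)**2 / (
        (sd1**2 / n1)**2 / (n1 - 1) + (sd2**2 / n2)**2 / (n2 - 1))
    tcrit = stats.t.ppf(0.975, df)
    lo, hi = diff - tcrit * se, diff + tcrit * se
    print(f"diff = {diff:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")

    # Power to detect a "medium" standardized effect (Cohen's d = 0.5)
    # at alpha = 0.05 with these group sizes
    power = TTestIndPower().solve_power(effect_size=0.5, nobs1=n1,
                                        ratio=n2 / n1, alpha=0.05)
    print(f"power for d = 0.5: {power:.2f}")

If the CI comfortably spans effect sizes you would consider interesting, the data can't distinguish "no effect" from "an effect we couldn't detect", which is exactly the distinction a robot made out of meat is drawing.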


It can often be better not to publish than to publish in a shit journal. Just food for thought.

Otherwise, null results really shouldn't be an issue, so long as you use those results to build out or dismantle the theoretical reasoning behind the study.
posted by jjmoney at 9:47 AM on April 7, 2014


Apologies for glibness, but you could start calling it "The Fizban Conjecture."
posted by rhizome at 9:54 AM on April 7, 2014


Best answer: The problem with the measure: is it something that would likely turn up in other people's data if they re-analyzed it, or is it something that only affected you because of the design of your study? Is it likely to affect the outcomes of many past studies if they were re-analyzed? Is there some sort of correction to the measure that you'd like to propose, or is the problem not something you'd know how to fix? If you can pin down your desired outcome well enough to make it a succinct title and abstract, then it's probably publishable. Even if your data didn't support a good experimental conclusion, an evaluation of a standard metric is something useful to the community; just be sure that you rephrase all your data in the context of the metric and not in the context of your original hypothesis.

Talk to your advisor about whether it would be feasible to submit a short article to the most relevant peer-reviewed journal, with a title like "[Precision] issues with [Measure Name] in the context of [Your Data Type]". Many journals have a "quick letters" section, or a special submission guideline for articles under 4 pages. You could just report the problem that you found and propose that other people who have used this measurement go back and redo their results, or propose that people use a modified measure in future, or propose that future studies of your particular type use a different measure entirely, or whatever is appropriate.
posted by aimedwander at 10:22 AM on April 7, 2014


Response by poster: The problem is definitely one that could be found in other people's data, if it existed for them, using the base data they already have to capture in order to use the measure.

aimedwander - that is exactly the sort of thing I wondered about the feasibility of. Would I also contact the person who created it, or would that be strange? I would happily describe the measure and issue here, but maybe that would be wrong as well?

I am clearly very new to the publishing side of things...
posted by fizban at 12:00 PM on April 7, 2014


Unfortunately I'm no expert in publication, either; I just had a kind of spotty grad school career that necessitated some interesting discussions of whether and how to publish one of our experiments. It was all very much based on my adviser's expertise and familiarity with the journals, so I haven't got a good leg to stand on offering advice. The best I can say is: talk to the senior researchers involved, and know that they'll probably disagree with each other to some extent, meaning that whatever you choose to do will be at least somewhat right.

As far as contacting the creator of the metric, I think that would be reasonable. (And no, my opinion is not gospel here, either!) Send an email asking whether situations like [X] were considered when the metric was designed, and whether he's been in contact with anyone who's noticed issues related to [Y]. You needn't spell out your research or invite him to help you, but an email would help establish (a) that you're a decent person not trying to undermine him or his metric, and (b) that this isn't a known problem that nobody happened to have mentioned.

Best of luck with all this!
posted by aimedwander at 8:12 PM on April 10, 2014

