Comments on: How really really really reliable is this test?
http://ask.metafilter.com/77149/How-really-really-really-reliable-is-this-test/
Question: How really really really reliable is this test?
http://ask.metafilter.com/77149/How-really-really-really-reliable-is-this-test
[statsfilter] How to elegantly measure test-retest reliability after multiple (i.e. 4) repetitions of the same test? <br /><br /> In order to examine the temporal stability of a neuropsychological measure, we thought it would be wayyyy smart to give subjects said test four times over a period of 2 weeks. In our opinion it's a good test with little to no practice effect, so scores should be stable not just at test-retest but at test-retest-retest-retest. You get the idea. The problem is that the standard Spearman-Brown reliability<br>
coefficient is designed only for a single-retest scenario. From what I can gather from this <a href="http://www2.chass.ncsu.edu/garson/pa765/reliab.htm">page</a>, I can use a two-way random ICC, treating the test sessions as different raters, and get a sense of the reliability over all 4 measurements. Does anyone know if this is indeed the proper test to be using? Any examples of a published psychology paper doing test-retest reliability for more than 2 tests?

posted by Smegoid at Mon, 26 Nov 2007 14:48:23 -0800 | tags: statistics, reliability, icc
http://ask.metafilter.com/77149/How-really-really-really-reliable-is-this-test#1146000
Crikey, sorry about the bad grammar above. Not enough preview action.

posted by Smegoid at Mon, 26 Nov 2007 14:49:28 -0800
http://ask.metafilter.com/77149/How-really-really-really-reliable-is-this-test#1146336
I think the ICC is certainly a good way to go for this type of reliability analysis. I can't think of any references off the top of my head for papers that report reliabilities this way, but they shouldn't be too hard to find. You want to be looking for "interrater reliability" and "multiple raters" as keywords in your search for supporting references, but really the approach you're suggesting isn't controversial at all (at least in my field).<br>
<br>
Also, if I remember correctly, Cohen's Kappa can be extended to more than 2 raters -- so that might be worth looking into. All of the methods of calculating reliability should yield comparable results, unless there's something really funky with your data.

posted by nixxon at Mon, 26 Nov 2007 18:31:01 -0800
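For concreteness, the two-way random ICC discussed above (ICC(2,1) in Shrout and Fleiss's notation: two-way random effects, absolute agreement, single measure) can be computed directly from the ANOVA mean squares. A minimal sketch in Python with NumPy, treating the test sessions as "raters" -- the function name and data layout here are just illustrative, not anything from the thread:

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    scores: (n_subjects, k_sessions) array of test scores, sessions as 'raters'.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-session means

    # Mean squares from the two-way ANOVA decomposition (no replication)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between sessions
    sse = np.sum((scores - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

For an n-subjects by 4-sessions score matrix, `icc_2_1(scores)` gives the single-administration reliability across all four sessions; values near 1 mean subjects keep their relative (and absolute) standing each time they take the test.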
http://ask.metafilter.com/77149/How-really-really-really-reliable-is-this-test#1146465
Simple regression r^2, or an ANOVA result.<br>
<br>
Your idea is that the jth score for the ith person (y_ij) equals some underlying number specific to that person (a_i) plus an error (e_ij). The estimator for a_i is the average over j of y_ij. You then look at the ratio of the variance of the residuals from this model to the variance of all the data. The null hypothesis is that there is no a_i -- that is, the test is completely unreliable, which is the same as there being just one mean for everybody's tests.<br>
<br>
If you don't like to think of it as a regression, it's the same as a one-way ANOVA testing whether the subject means are equal. Super-duper power to detect that the means aren't the same is equivalent to the test being reliable (it tends very strongly to return a value close to the expected value every time).

posted by a robot made out of meat at Mon, 26 Nov 2007 20:19:33 -0800
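The one-way model sketched above maps onto the one-way ICC (ICC(1,1)): the between-subject mean square against the within-subject mean square, with the F ratio testing the "no subject effect" null. A minimal sketch of both quantities, assuming NumPy and an illustrative function name:

```python
import numpy as np

def icc_one_way(scores):
    """One-way ICC and F statistic for the model y_ij = a_i + e_ij.

    scores: (n_subjects, k_sessions) array.
    Returns (icc, f): ICC(1,1) and the F ratio testing H0 of no subject effect.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)

    # Between-subject and within-subject mean squares (one-way ANOVA)
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msw = np.sum((scores - row_means[:, None]) ** 2) / (n * (k - 1))

    f_stat = msb / msw                          # large F = reliable test
    icc = (msb - msw) / (msb + (k - 1) * msw)   # fraction of variance from subjects
    return icc, f_stat
```

This is exactly the "one mean for everybody" null: if subjects carry no stable level a_i, the between-subject mean square is no bigger than the within-subject one, F is near 1, and the ICC is near (or below) zero.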
http://ask.metafilter.com/77149/How-really-really-really-reliable-is-this-test#1147096
I suppose I should add that the Pearson correlation is this same quantity in the two-variable case. If you want to do the regression, you add a dummy variable for each person but one; that variable equals one if the test belongs to that person, zero otherwise. Regression would be a nice place to start because you can add a coefficient for alternative testing conditions, or test whether people are just getting better (or worse) at the test over time.

posted by a robot made out of meat at Tue, 27 Nov 2007 10:22:23 -0800
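The dummy-variable regression described above can be set up with plain least squares, and adding a session term then tests for a practice effect directly. A minimal sketch with simulated data -- every number here (20 subjects, a 0.5-point-per-session drift, noise SD of 2) is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_sess = 20, 4

# Simulated scores: a stable per-person level plus a small practice effect
person_level = rng.normal(50.0, 10.0, n_subj)
scores = (person_level[:, None] + 0.5 * np.arange(n_sess)
          + rng.normal(0.0, 2.0, (n_subj, n_sess)))

y = scores.ravel()
subj = np.repeat(np.arange(n_subj), n_sess)   # which person each score belongs to
sess = np.tile(np.arange(n_sess), n_subj)     # session number 0..3

# Design matrix: intercept, one dummy per person except the first,
# and a linear session term capturing practice effects
X = np.column_stack([
    np.ones_like(y),
    (subj[:, None] == np.arange(1, n_subj)).astype(float),
    sess.astype(float),
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
practice_slope = beta[-1]   # per-session change, holding person constant
```

A clearly positive `practice_slope` would mean scores drift upward with repeated testing -- exactly the practice effect the poster wants to rule out before claiming temporal stability.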
http://ask.metafilter.com/77149/How-really-really-really-reliable-is-this-test#1147367
Thanks for the input. <br>
<br>
Nixxon: It's rare to find the ICC used to examine test-retest reliability (that, or my PubMed search skills are appalling), but I did find a few cases. I also found a measure called Fleiss's Kappa, which is the multi-rater version of Cohen's Kappa. I might try that out, though at the moment the ICC seems fine.<br>
<br>
Robot: I hadn't thought of it as a regression. We tried simple one-way ANOVAs to look at scores across testing periods; in one test scores improve slightly (not all that surprising) and in another they're stable. The only issue I have with treating this as an ANOVA is that reviewers will want a more standard reliability statistic. Looking at this from a GLM perspective is interesting; I'm going to try that out. <br>
<br>
Thanks!

posted by Smegoid at Tue, 27 Nov 2007 13:18:42 -0800