# Statistical analysis for a prospective cohort study with quantifiable outcomes?

November 15, 2012 12:57 PM Subscribe

What kind of statistical analysis should I use to compare outcomes between the two groups of a prospective cohort study, one receiving an intervention and one serving as the control?

Sorry, more specifically: my friend and I are performing a prospective cohort study with two groups of 15 athletes each. The goal is to provide one (randomized) cohort with an intervention that we think will prevent injuries; we will then compare outcomes by giving everyone in both the intervention and the control group a validated test that is a good predictor of injuries. The test output is quantifiable: either a single number for each subject, or a set of 3-5 numbers. Once we have test results for both groups, what kind of analysis would make the most sense?

Just to be clear, we are not tracking injuries, so there is no 'survival' time as in clinical trials. We are simply using the validated test as a proxy to extrapolate likelihood for injury. The test numbers are the outcome. Is there some specific justification we can provide for our (quite low) n, aside from the fact that this is just what we consider doable at this point?

Normally for something like this I would sit down and try to read more on the internet, but we're short on time as we unexpectedly have to submit this for IRB approval with an organization by tomorrow.

If assignment is random, and you're not worried about noncompliance or anything, a simple comparison of means (t-test) should work. However, you should probably give everyone the test at the beginning as well, to verify that the indicator is not significantly different between groups at baseline.
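A minimal sketch of that t-test comparison, using scipy. All the scores below are made up for illustration; the group means and spread are assumptions, not real data.

```python
# Hypothetical example: compare post-intervention balance scores between
# two randomized groups of 15 athletes with an independent-samples t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=50, scale=8, size=15)  # simulated control scores
treated = rng.normal(loc=56, scale=8, size=15)  # simulated intervention scores

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

The same call on the baseline scores would serve as the suggested check that the groups start out comparable.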

posted by theodolite at 1:10 PM on November 15, 2012

You might want to consider using the Mann-Whitney U-test (also known as the Wilcoxon rank-sum test) too. Can you tell us more about the properties of your outcome variable?

posted by un petit cadeau at 1:15 PM on November 15, 2012

Well, what is your outcome exactly? Are you looking to show that the intervention is non-inferior to (i.e., at least as good as) no intervention? Or are you looking to show that the intervention is superior to doing nothing? I'd recommend a non-inferiority analysis using propensity scores.

posted by floweredfish at 1:26 PM on November 15, 2012

I forgot to mention, we will be establishing a baseline by testing in the beginning as well.

The proposed outcome is improved performance on a balance test which has a measurable, linear output; the higher the number, the better you have performed.

posted by legospaceman at 1:44 PM on November 15, 2012

*Once we have test results for both groups, what kind of analysis would make the most sense to use?*

Measure the important determinants of the outcome at baseline (with the pre-test) and at follow-up, choosing ones you think matter and could plausibly change over the course of the study. After the fact: 1) demonstrate that important determinants have about the same frequency in both groups (this just shows that randomization worked); 2) do a t-test (or a rank-based relative like MWU); 3) do a regression with treatment and changes in measured determinants as predictors, and the change in your test score as the outcome. I would not use propensity scores or other complex causal inference tools, since you have randomized controls and need to submit tomorrow.

*Is there some specific justification we can provide for our (quite low) n, aside from the fact that this is just what we consider doable at this point?*

If you have a reasonable idea of how variable the short-term change in your test is likely to be, you can say how large a treatment effect you can exclude with a null result. This is the flip side of a sample size calculation; you can use minimum detectable difference calculators like these.
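A rough version of that minimum-detectable-difference calculation, using the standard normal approximation for a two-sample comparison. The standard deviation of 5 points is a placeholder assumption; plug in whatever variability you expect for your test.

```python
# Minimum detectable difference (two-sided test, two equal groups):
#   MDD = (z_{1-alpha/2} + z_{power}) * sd * sqrt(2 / n_per_group)
from math import sqrt
from scipy.stats import norm

def min_detectable_diff(n_per_group, sd, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return (z_alpha + z_power) * sd * sqrt(2.0 / n_per_group)

# With 15 athletes per group and an assumed sd of 5 points:
mdd = min_detectable_diff(15, 5.0)
print(f"minimum detectable difference: {mdd:.1f} points")  # about 5.1 points
```

So with n = 15 per group you could honestly say the study is powered only to detect fairly large effects, which is itself a defensible statement for the IRB.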

posted by a robot made out of meat at 2:00 PM on November 15, 2012 [1 favorite]

If you are establishing baselines as well, wouldn't a repeated-measures AN(C)OVA with two time points and simple between-groups factor (possibly with baseline as covariate) be appropriate?
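With only two time points, the ANCOVA variant of that suggestion reduces to regressing the follow-up score on the baseline score plus a group indicator. This sketch uses simulated numbers (the 4-point group effect and the noise level are assumptions), fit with ordinary least squares.

```python
# ANCOVA-style analysis: followup ~ intercept + baseline + group.
# coef[2] is the baseline-adjusted group effect.
import numpy as np

rng = np.random.default_rng(1)
n = 15
group = np.r_[np.ones(n), np.zeros(n)]            # 1 = intervention, 0 = control
baseline = rng.normal(50, 8, size=2 * n)          # simulated pre-test scores
followup = 10 + 0.8 * baseline + 4.0 * group + rng.normal(0, 4, size=2 * n)

X = np.column_stack([np.ones(2 * n), baseline, group])
coef, *_ = np.linalg.lstsq(X, followup, rcond=None)
print(f"baseline-adjusted group effect: {coef[2]:.2f}")
```

Adjusting for baseline this way usually gives more power than comparing raw follow-up scores, since between-subject variability is absorbed by the covariate.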

posted by Keter at 5:00 PM on November 15, 2012

posted by Keter at 5:00 PM on November 15, 2012

I'd suggest using whatever tests are commonly used in the literature you are citing.

Different fields have different preferred statistical methods and quirks and using something unfamiliar means your peer reviewers will be scratching their heads both when reviewing your grant and when you try to publish what you have done. Always make it as easy as possible for the consumers of your work to understand what you have done.

posted by srboisvert at 7:01 PM on November 15, 2012

If your outcome is a continuous variable and you have two groups (control and intervention), you would use a t-test (mentioned previously) if the outcome variable is normally distributed, and a Mann-Whitney U (also mentioned previously) if it is skewed. A t-test compares means; a Mann-Whitney compares medians. If the outcome does not approximate a normal distribution, it violates a major assumption of the t-test, which is that the data are normally distributed. That is why you would use a Mann-Whitney instead.
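That decision rule can be sketched as a normality check followed by the appropriate test. The Shapiro-Wilk screen, the 0.05 cutoff, and the data are all illustrative assumptions, not a prescription.

```python
# Check each group for normality, then pick t-test vs Mann-Whitney U.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
control = rng.normal(50, 8, size=15)  # simulated control scores
treated = rng.normal(55, 8, size=15)  # simulated intervention scores

if min(stats.shapiro(control).pvalue, stats.shapiro(treated).pvalue) > 0.05:
    result = stats.ttest_ind(treated, control)
    print(f"t-test: p = {result.pvalue:.3f}")
else:
    result = stats.mannwhitneyu(treated, control)
    print(f"Mann-Whitney U: p = {result.pvalue:.3f}")
```

With n = 15 per group, formal normality tests have little power, so eyeballing a histogram or Q-Q plot is at least as informative as the Shapiro-Wilk p-value.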

Caveat: with such a small sample size you are unlikely to find a statistically significant difference between groups.

posted by lulu68 at 10:08 PM on November 15, 2012



posted by brainmouse at 1:00 PM on November 15, 2012