Why leave out that one error bar?
January 24, 2006 3:37 PM Subscribe
A statistics / scientific convention question. I've noticed in scientific journals that often, when a set of data is presented with values normalized to one of the sample groups (that group's value being arbitrarily set to 1, 10, 100, or whatever, to simplify interpretation), the variability/error data for that one sample group is left out. Is there a good statistical reason for that, or is it just some random convention with no good reason?
posted by shoos to science & nature (17 answers total)
Here's an example: you have a set of data on the height of trees according to their age (say trees that are 5, 10, and 20 years old). You calculate the mean height and standard deviation for each age group. For whatever reason, you want to normalize the means of all three groups to the 5-year-old group, setting that group's value to 1. My question is why people would not show the standard deviation (adjusted for the normalization) for the 5-year-old group along with those for the other two groups.
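For what it's worth, the normalization being described is straightforward to sketch. This is a minimal illustration with made-up tree heights (the numbers are purely hypothetical): divide every group's mean by the reference group's mean, and scale each group's standard deviation by that same factor so it stays on the normalized scale.

```python
import statistics

# Hypothetical tree heights in meters for three age groups (made-up data).
heights = {
    5:  [2.1, 2.5, 1.9, 2.3],
    10: [4.8, 5.2, 4.5, 5.0],
    20: [9.1, 8.7, 9.5, 9.0],
}

means = {age: statistics.mean(h) for age, h in heights.items()}
sds = {age: statistics.stdev(h) for age, h in heights.items()}

# Normalize every mean to the 5-year-old group (so its mean becomes 1),
# and scale each group's SD by the same constant factor.
ref = means[5]
norm_means = {age: m / ref for age, m in means.items()}
norm_sds = {age: s / ref for age, s in sds.items()}

for age in sorted(heights):
    print(f"{age:>2} yr: mean = {norm_means[age]:.2f}, SD = {norm_sds[age]:.2f}")
```

Note that the reference group still has a nonzero SD after this rescaling, so there is something to plot; the scaling just treats the reference mean as a fixed constant rather than propagating its own sampling variability.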