How many times do I need to weigh a given object to overcome the scale's inaccuracy?
June 18, 2008 1:30 PM

If I have a scale that is accurate to .1 grams and I want it to be accurate to, say, .05 grams, how many measurements do I need to take to do this? Can I even do this?

The digital readout goes to hundredths, even though the claimed accuracy is in tenths. Would such inaccuracy follow a normal distribution? Could I correct for this by measuring the object, say, 1,000 times to get a better estimate? I feel as if there's a way to do this, and that it might have been on a chemistry final at some point in my life.
posted by geoff. to Science & Nature (22 answers total) 1 user marked this as a favorite
 
At first glance, it seems you could simply measure the item more times to acquire a larger sample, then use a simple average to improve the final measurement. The problem I see is that the scale was likely manufactured to that +/- .1 tolerance; I imagine the manufacturer took a large number of samples during design (if not when it left the factory), so increasing your sample size would likely give the same outcome. If the error is sometimes zero and other times varies by up to .1 in either direction, then yes, increasing the sample size will work. Because we don't have a measure of the variance, though, it is difficult to give a precise formula for attaining +/- .05 accuracy.
posted by mcarthey at 1:40 PM on June 18, 2008


There is accuracy and then there is precision. If you weigh something 100 times and it is the same every time to within .01 grams, you have a precision of .01 grams. The reading may not be accurate, though, as it will only be accurate to .1 grams. Your scale is likely more precise than it is accurate, so weighing something multiple times will not help you here.
posted by caddis at 1:41 PM on June 18, 2008


As a further illustration: if you weigh something that has an actual weight of 10.0000 grams and you get 100 readings between 10.08 and 10.10, you still don't know the actual weight to better than .1 gram.
posted by caddis at 1:44 PM on June 18, 2008


Seems to me that you could only increase precision, not accuracy, by doing that. You would be getting a very precise estimate of a number that's still only accurate to 0.1g. (on preview, what caddis said)

If you have an object of known mass, you could determine the accuracy of your specific scale by repeated measurements. It may actually be better than 0.1g.
posted by zennie at 1:45 PM on June 18, 2008


It's more likely a calibration error than a random error. That is, all results may be multiplied by 1.01, so 10g appears as 10.1g. In this case you can weigh your item 100 times and get the same reading each time, but it will still be inaccurate.

You can also get nonlinearity errors, where one item weighs 10g but, cut in half, both halves weigh 4.9g, because the sensing element isn't perfect. In this case, again, you can take all the repeated readings you want, have them all come back the same, and still not be correct.
posted by Mike1024 at 1:48 PM on June 18, 2008
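
A toy simulation in Python of the distinction Mike1024 is drawing; the 1% gain error and the noise level here are invented for illustration. Averaging beats down the random part but leaves the calibration error untouched:

```python
import random

TRUE_MASS = 10.0    # grams
GAIN_ERROR = 1.01   # hypothetical 1% calibration error
NOISE_SD = 0.05     # hypothetical random noise, in grams

def read_scale():
    """One reading from a scale with both a gain error and random noise."""
    return TRUE_MASS * GAIN_ERROR + random.gauss(0, NOISE_SD)

readings = [read_scale() for _ in range(1000)]
mean = sum(readings) / len(readings)

# The mean converges to about 10.1 g, not 10.0 g: averaging has
# removed the noise, but the calibration error survives intact.
print(f"mean of 1000 readings: {mean:.3f} g (true mass: {TRUE_MASS} g)")
```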


It's not obvious what the distribution of any inaccuracy would be, unfortunately. Taking sufficient measurements (however many that would be) could get rid of noise and random error, but a systematic error wouldn't show up unless you could calibrate against another object of known mass.
posted by Lady Li at 1:49 PM on June 18, 2008


you could calibrate it with another object of known mass

Like these.
posted by TedW at 1:59 PM on June 18, 2008


Get a set of known masses and weigh them. Plot the measurements as a function of their real weight, use the slope to remove any multiplicative error, and the extrapolated offset to remove any constant error. I honestly think a calibration like this would be fun.
posted by kiltedtaco at 2:02 PM on June 18, 2008
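
A minimal sketch of that calibration in Python with NumPy; the reference masses and readings below are made-up numbers, not data from any particular scale:

```python
import numpy as np

# Hypothetical reference masses (g) and what the scale read for each.
true_mass = np.array([10.0, 20.0, 50.0, 100.0, 200.0])
reading   = np.array([10.1, 20.3, 50.6, 101.1, 202.1])

# Fit reading = slope * true_mass + offset.
slope, offset = np.polyfit(true_mass, reading, 1)

def corrected(raw):
    """Invert the fitted line to map a raw reading back to a mass."""
    return (raw - offset) / slope

print(f"slope = {slope:.4f}, offset = {offset:.4f}")
print(f"a raw reading of 75.90 g corrects to {corrected(75.9):.2f} g")
```

The slope soaks up any multiplicative error and the offset any constant error, exactly as described; readings that sit off the fitted line hint at nonlinearity that a straight line can't fix.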


Repeating measurements will reduce the random error, but will not reduce any systematic errors.
posted by swordfishtrombones at 2:24 PM on June 18, 2008


I do believe I came across this rule in my unfinished Chemistry degree (more likely in University than High School), but damned if I can remember it. I do remember we tended to weigh things three times for labs. Kind of a "measure twice, cut once" rule, I guess. Sorry I can't be more precise!
posted by rhizome at 2:34 PM on June 18, 2008


Are you doing this once (like weighing a piece of precious metal) or trying to divide a bunch of material into little piles, each with an equal mass?

The thing is, there is error due to noise (as in I measure a 100g weight ten times and get results from 99g to 101g) and systematic error (as in I measure a 100g weight ten times and get results from 104.99g to 105.01g).

You can factor out noise by making a bunch of measurements and taking the average.

For systematic error you're going to need a check weight. Better still, two or three, because, using my example above, the 100g weight was off by 5 grams, but a 60g weight might only be off by 2.5 grams, and so on.

Calibration weights are painfully expensive, but the mass of most coins is held pretty constant. So if you can find yourself some nice new coinage and their official weights, you can at least do a reasonable calibration check.
posted by Kid Charlemagne at 3:41 PM on June 18, 2008


Get a set of known masses and weigh them. Plot the measurements as a function of their real weight, use the slope to remove any multiplicative error, and the extrapolated offset to remove any constant error. I honestly think a calibration like this would be fun.

We did this in my physics class two years ago, as kind of an "introduction to the scientific method" type thing. We were surprised to find that for several of the scales we used (we had to do five trials with five different weights on five different scales, blah), the accuracy was not what the manufacturer claimed.

It is a fun little thing to do, and it's the best way to improve the accuracy of readings from a standard scale. After the multiplicative/additive errors were corrected, I think one team got one of their scales to be accurate to like +-.005 kg or something. Not bad.
posted by Precision at 3:56 PM on June 18, 2008


Best answer: There is accuracy and then there is precision.

There's also repeatability. Sometimes you get measuring devices which will yield the same wrong answer with very little variation.

It really depends enormously on what the source of the error is. In some kinds of measurements, the source of error is Gaussian noise. In those cases you can take multiple measurements and average them, and neutralize the noise, because the noise lands on a normal curve.

But sometimes there is some sort of consistent systematic error. For instance, if you had a balance scale where the pivot was off center, so that one arm was slightly longer than the other, it would always give the wrong answer. Averaging multiple measurements won't do you any good at all.

That latter case is one in which repeatability is good but accuracy is poor. In the case of the off-center balance, it's a multiplier error. But there are more complicated cases, where the sensor is non-linear; that's common with thermocouples, for example.

The only way to correct that kind of error is to take a large number of measurements at different known values, and develop an error curve for the sensor. Sometimes, if you're lucky, you can do a curve fit and develop a formula for it. In other cases you're stuck with doing a table search and interpolation.

Then when you take a measurement of an unknown value, you know from the table, or the formula, how far off you are.

With respect to your scale, there's no way to know what the source of imprecision is. If it's a balance scale, it could be due to machining precision, which would make the error consistent. If it's a spring scale, it could be due to quality issues with the spring, which would also be consistent. If it's using a quartz sensor, it's probably an issue of sensor non-linearity, and the quoted accuracy is a function of how much effort they put into correcting it -- and the residual error will also be systematic and consistent.
posted by Class Goat at 4:02 PM on June 18, 2008
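
A sketch of that table-based correction in Python; the calibration table below is hypothetical, and np.interp does the table search and linear interpolation in one call (it assumes the readings increase monotonically):

```python
import numpy as np

# Hypothetical calibration table: for each known true mass (g),
# the reading the scale actually gave.
known_true    = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
known_reading = np.array([0.0, 25.4, 50.5, 75.3, 100.2])

def corrected(raw):
    """Map a raw reading back to a mass by interpolating the table."""
    return np.interp(raw, known_reading, known_true)

print(f"a raw reading of 60.00 g corrects to {corrected(60.0):.2f} g")
```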


I would agree, and add this. Don't know how well it would work, but it seems theoretically possible.

You put your object on the scale and it reads 10.1 grams. Then add objects that weigh .01 grams each until the display changes to 10.2. If it took 3 objects (.03 grams) to move it up to 10.2, then your thing weighs 10.17. I think.
posted by gjc at 7:58 PM on June 18, 2008
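
gjc's arithmetic works out if the display truncates to the next-lower 0.1 g rather than rounding; under that assumption, here is a small Python simulation of the trick (the true mass is, of course, unknown in practice):

```python
TRUE_MASS = 10.17   # grams; the value we are trying to recover
STEP = 0.01         # grams per added object

def display(mass):
    """A scale that truncates its display to 0.1 g.
    (Work in integer hundredths to dodge float rounding.)"""
    return (round(mass * 100) // 10) / 10

start = display(TRUE_MASS)   # shows 10.1
n = 0
while display(TRUE_MASS + n * STEP) == start:
    n += 1

# The display ticked over at start + 0.1, so the true mass sits
# just below that threshold, minus the mass we added.
estimate = (start + 0.1) - n * STEP
print(f"display ticked after {n} objects; estimate = {estimate:.2f} g")
```

If the display rounds instead of truncating, the same trick works but the tick-over threshold is at start + 0.05, so the inferred weight shifts down by .05 grams.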


There's also repeatability.
which is called precision
posted by caddis at 5:00 AM on June 19, 2008


a primer
posted by caddis at 5:03 AM on June 19, 2008


Best answer: You can never trust the last digit on a digital display: there is always some Schmitt triggering that makes it round off incorrectly rather than flicker. (Or it flickers, in which case you have to know something about the noise anyway.) The fact that your scale has more digits than its claimed accuracy is promising. You may find that the last digit only shows 0 and 5, or only even final digits, or suchlike.

How you would calibrate your scale (or determine that it can't be calibrated well enough; either is possible) depends on what you need it for. Often in precision work, if you think hard enough, you can find another quantity you're dealing with whose mass is more important than the mass of the International Standard Kilogram In Paris. This is the whole reason the balance scale was invented (long before the ISKiP, at that): you compare the masses of the two things you care about, get them equal, and then check the equality by swapping the pans. If your pans don't balance both ways, you know which way to adjust them, and by how much.

To answer your specific question: Do N repeated measurements with any old test mass, plot them vs. time, and make sure the best-fit slope is consistent with zero. If their histogram is a good fit to a Gaussian with mean μ and standard deviation σ, the error on the mean is σ/√N. This tells you about stability and repeatability. If you want to know the mean reading 20 times better than the error on a single reading, you will need 400 measurements. You will know 10% of the way through whether the normal distribution is a grossly acceptable fit or not.
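
In code, that error-on-the-mean calculation looks like the sketch below, with simulated Gaussian readings standing in for the real scale; 400 readings with a 0.1 g single-reading scatter pin the mean down to about 0.005 g:

```python
import numpy as np

# Simulated stand-in for 400 real readings: mean 10 g, sigma 0.1 g.
readings = np.random.normal(10.0, 0.1, size=400)

sigma = readings.std(ddof=1)                    # scatter of one reading
error_on_mean = sigma / np.sqrt(len(readings))  # sigma / sqrt(N)

print(f"mean = {readings.mean():.4f} g +/- {error_on_mean:.4f} g")
```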

To check linearity you need several test weights. Use a marker to write numbers on some coins (a few grams each) or, if you want lighter test weights, pieces of cardstock; or both. Put an empty bucket on the scale to get it near the range where you care about linearity. Put the test weights on a few times in different orders: record weights for #1, #1 and 2, #1,2,3,...,10; #2,3,4; #3,6,9; however they come out of the pile. You'll get a big overdetermined matrix of single and combined weights that you can invert to find best values for the (scale's reading of the) mass of each test weight. Now, for each of your measurements, plot the "predicted" reading using the best-fit weights against the actual reading. Your plot should be a straight line with slope 1.
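
The matrix inversion step can be done as an ordinary least-squares solve; a sketch with three hypothetical test weights and six weighings (a real run would use more of both):

```python
import numpy as np

# Each row of A flags which test weights were on the scale;
# b holds the corresponding (hypothetical) scale readings in grams.
A = np.array([[1, 0, 0],    # weight 1 alone
              [0, 1, 0],    # weight 2 alone
              [0, 0, 1],    # weight 3 alone
              [1, 1, 0],    # weights 1 + 2
              [0, 1, 1],    # weights 2 + 3
              [1, 1, 1]],   # all three
             dtype=float)
b = np.array([5.03, 7.01, 9.02, 12.02, 16.05, 21.07])

# Best-fit values for each weight from the overdetermined system.
w, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print("best-fit weights (g):", w)

# Predicted vs. actual readings; on a linear scale, plotting one
# against the other should give a straight line of slope 1.
print("predicted:", A @ w)
print("actual:   ", b)
```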

If you really need a "grams" calibration, you can get insulin syringes with 10 μL "units" marked on them from the pharmacy. Distilled water weighs 1 mg per μL at most temperatures.
posted by fantabulous timewaster at 6:44 PM on June 19, 2008 [1 favorite]


There's also repeatability.
which is called precision


No, that's not correct. Precision is a function of the measurement scale. For instance, if it's a mechanical readout it will be a function of the fineness of the graticule. If it's digital, precision is a function of the size of the A/D converter or the fineness of the graticule in a digital encoder.

Repeatability is something else. You can have high precision but low repeatability.

The measure of repeatability is the standard deviation over a large number of measurements of the same known target. If the standard deviation is small then repeatability is good.
posted by Class Goat at 9:36 PM on June 19, 2008


did you even read the primer?
posted by caddis at 4:26 AM on June 20, 2008


The measure of repeatability is the standard deviation over a large number of measurements of the same known target.

Precision is a description of uncertainty of measurements taken under the same conditions, or in practice "how close together" measurements are. Repeatability as defined above would be a way to describe precision using the standard deviation statistic. Significant digits are another more crude way to describe precision in a continuous scale.

I didn't read the primer.
posted by zennie at 10:53 AM on June 20, 2008


Caddis, I spent most of my career designing test and measurement equipment of various kinds. If Wikipedia says that "precision" and "repeatability" are the same thing, then Wikipedia is wrong, because that's not how we used the terms.

Precision is granularity of the measurement. Repeatability is the standard deviation on a large number of measurements of the same target. And accuracy is how close the answer is to being correct. Each of those is a different thing.
posted by Class Goat at 2:32 AM on June 22, 2008



