# What is the difference between floating point accuracy and precision?

January 2, 2012 2:54 PM

What is the difference between floating point accuracy and precision?

While Tomorrowful's definitions apply to the general terms precision and accuracy, that is not how they are used in relation to floating point numbers.

Accuracy is indeed how close a floating point calculation comes to the real value. However, precision in floating point refers to the number of bits used to make calculations. Floating point calculations are entirely repeatable and consistently the same regardless of precision.

posted by JackFlash at 3:07 PM on January 2, 2012 [3 favorites]

Tomorrowful's explanation is correct for broader scientific discussion that relates computations back to the real world. But, specifically regarding floating point math (with or without a GPU), the terms "accuracy" and "precision" are used interchangeably by computer scientists.

posted by wutangclan at 3:08 PM on January 2, 2012

Accuracy is about how close a value is to what it is meant to be.

Precision is about how exactly we can specify it.

1. If you ask Bill Gates how much money he has, and he says 243,456,152,001,923 cents, he is being very precise - but we don't know whether he is at all accurate.

2. If he said "56 billion" that's a lot less precise, but (according to Google at any rate) it's a lot more accurate.

With integers, we can represent very precise numbers (as in example 1), but to store very precise numbers that are very big or very small, we need a lot of storage - the bigger the number, the more storage.

With floating point, we deliberately lose precision in order to be able to store very big or small numbers with a consistent amount of storage (computer programmers like consistency). So a floating point number is a lot more like example 2.

Unfortunately, having low precision means that as we do arithmetic with floating point, the low precision causes small errors in accuracy. The errors accumulate as we manipulate the floating point numbers, so if we don't take care we can end up with very wrong results.

For example, imagine I'm storing decimal floating point with one significant digit, and I want to divide 1 by 3 and then multiply by 3 again.

1/3 = 0.3

0.3 * 3 = 0.9

My lack of precision has lost me some accuracy. So, the precision here is about how many digits I allowed myself for the calculation. The accuracy is about the difference between my final result and the correct result.

I can often get better accuracy with the same precision, by spotting operations that will lose accuracy and doing them at the end of the calculation.
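The one-significant-digit experiment above can be reproduced directly with Python's standard decimal module, which lets you set the working precision in significant digits (a quick editorial sketch, not part of the original answer):

```python
from decimal import Decimal, getcontext

getcontext().prec = 1  # work with just one significant digit
x = Decimal(1) / Decimal(3)
print(x)      # 0.3
print(x * 3)  # 0.9 -- the limited precision shows up as an accuracy error
```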

posted by emilyw at 3:15 PM on January 2, 2012 [5 favorites]

All the above answers are right, but I think there's a simpler explanation. In FP math where I work (embedded control):

* Accuracy is how close to the real value you can represent. 1/2 can be represented in base-2 floating point with perfect accuracy (1.0 * 2^-1). However, 9/10 cannot.

* Precision is how many bits you have in your floating-point representation. So a standard IEEE-754 float has four bytes of precision. (Some are spent on the mantissa, some on the exponent.)

More precision gets you more accuracy, so the two are closely linked, but in my field they imply different things. Precision means storage space, complexity, etc., where accuracy means "how good is it." We might say, "oh, we can afford to do that with single-precision (i.e., four-byte) floats," and "there's not enough accuracy to make that stable; we might have to move to fixed-point."
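A quick way to see the representability point in Python (converting a float to Decimal reveals the exact binary value actually stored; this sketch is an editorial addition, not part of the original answer):

```python
from decimal import Decimal

# Decimal(float) exposes the exact value a binary float stores
print(Decimal(0.5))  # 0.5 -- 1/2 is exactly 1.0 * 2**-1
print(Decimal(0.9))  # 0.90000000000000002220... -- 9/10 has no exact binary form
```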

posted by introp at 3:37 PM on January 2, 2012

Accuracy refers to how "correct" a measurement is; how close it is to the accepted value.

Precision refers to how exactly a measurement is reported or how closely repeated measurements will agree.

A measurement can be precise but inaccurate.

A measurement can be imprecise but accurate.

Precision can be smaller or larger than the smallest significant digit of the measurement:

56 +/- 0.1 must be somewhere between 55.9 and 56.1

56.2 +/- 5 must be somewhere between 51.2 and 61.2

That second case (a measurement stated more exactly than its +/- precision) is not so common, but it can happen. Notice that the extra 0.2 does make a difference to the range of possible values.

posted by Lanark at 3:59 PM on January 2, 2012

FP math is tricky. It's full of non-intuitive traps.

If you keep adding 1 to a single precision FP number (24-bit significand, 8-bit exponent), it stops incrementing at 2^24, around 16.8 million. That's the point where your precision becomes worse than 1.
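You can simulate this in Python by forcing values through single precision with the struct module (an editorial sketch; Python's own floats are double precision, so we round-trip through the 4-byte `'f'` format to mimic a single):

```python
import struct

def to_f32(x):
    # round a Python double to IEEE-754 single precision and back
    return struct.unpack('<f', struct.pack('<f', x))[0]

big = 16_777_216.0  # 2**24: the gap between adjacent singles reaches 2 here
print(to_f32(big - 1) == big - 1)  # True: below 2**24, a step of 1 survives
print(to_f32(big + 1) == big)      # True: adding 1 is rounded away
```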

In FP math your accuracy depends on the precision in fundamental ways. It's very easy to end up in a situation where you're dividing a very large number by a very small one and the result is basically worthless.

There's also an issue with the libraries. Single precision libraries round off after every step, because they're so imprecise. It's a known problem.

But in binary, the fraction 1/10 is infinitely repeating, as 1/3 is in decimal. So if you don't round off, and your precision is finite, in binary 1 / 10 * 10 = 0.99999... This doesn't happen with the single precision libraries because they round after every step.
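Plain Python (double precision) shows this effect too. Interestingly, the single multiplication 0.1 * 10 happens to round back to exactly 1.0, but repeated addition lets the rounding error in binary 0.1 accumulate (editorial sketch):

```python
total = sum([0.1] * 10)  # add ten copies of the rounded binary 0.1
print(total)             # 0.9999999999999999
print(total == 1.0)      # False
```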

But the double precision libraries were also written in the 1970's, when people often paid for their computer time by the CPU cycle. And on average double precision math takes 4 times the CPU cycles as single precision if you don't have one of those nifty coprocessors that was invented in the 1980's.

As a result, the double precision libraries don't round off automatically; they assume you'd probably rather wait until you've done all the steps of your calculation before doing that.

So every generation is afflicted with smart people who decide to "throw double precision at it," thinking they have the coprocessor and CPU cycles to get "all the precision they need," and what they end up with is a lot of weird unexpected rounding errors.

My advice is to avoid using floating point math at all unless absolutely necessary. As one pioneer put it IIRC, "if you don't know how to solve your problem with integer math, you don't know how to solve your problem."

posted by localroger at 4:38 PM on January 2, 2012

In C++, when you set the precision of an output, you're simply setting the number of decimal places displayed. The lower the precision, the lower the potential accuracy (unless it's full of trailing zeros, in which case you'd have high accuracy achieved with low precision). So here's my best attempt at defining this in layman's terms:

- Accuracy: How close you are to the "true" value.

- Precision: How much information you have about that specific value.
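The display-precision idea above works the same way in Python as with C++'s setprecision: format specifiers only change how many digits are shown, not the value that is stored (editorial sketch):

```python
x = 2 / 3
print(f"{x:.2f}")   # 0.67 -- two digits shown: low display precision
print(f"{x:.17g}")  # same stored value, many more digits shown
```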

posted by samsara at 5:04 PM on January 2, 2012

localroger's answer is pretty good, though I'm not sure the OP's question is well-posed. Those terms mean something, but it's not a comparable thing. Precision, for IEEE floating point (by far the most common kind), is fixed, and refers to the number of bytes used to represent a single number. For instance, as listed above, eight bytes is "double" precision.

Accuracy depends drastically on the details of the computation, and would be measured as a percent difference or fraction for a calculation. For example, the fractional error in computing (1/10)^2 is about 2*10^-16, so you could say it was that accurate, or about 2*10^-14% accurate. On the other end of things, sin(pi) is 1.2246467991473532e-16, so the accuracy is zero. The accuracy obviously depends intimately on the precision used for the calculation, but fundamentally they are different things, measured in different units.

posted by wnissen at 8:06 PM on January 2, 2012
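Both of wnissen's examples can be checked directly in Python (editorial sketch; the Fraction trick gives an exact measure of the relative error):

```python
import math
from fractions import Fraction

# fractional error of (1/10)**2 relative to the true value 1/100
sq = (1 / 10) ** 2
err = abs(Fraction(sq) - Fraction(1, 100)) / Fraction(1, 100)
print(float(err) < 1e-15)  # True: roughly 2e-16, a tiny relative error

# sin(pi) should be 0; the ~1.2e-16 result is pi's own rounding error
print(math.sin(math.pi))
```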

Accuracy is how close a measurement is to the real value; precision is how consistently the measurement comes back the same way every time you make it under identical conditions.

posted by Tomorrowful at 3:01 PM on January 2, 2012 [1 favorite]

This thread is closed to new comments.