Rounding and Significant Digits
July 9, 2024 12:04 AM   Subscribe

When you are considering rounding numbers, is there a difference between 10.3 and 10.30? Specifics inside!

What I'm wondering is whether the trailing 0 on 10.30 means that you should round to the hundredths place, whereas the lack of a zero on 10.3 means you should round to the tenths place. Is there a rule for this?

I don't have any practical application for this question, but I will make up an imaginary one to be specific. Let's say there is a cat obesity tester that says 'Measurements of 10.3 and above mean your cat is obese', and another tester says that 'Measurements of 10.30 and above mean your cat is obese'. My imaginary cat receives a score of 10.275. Would I round up to 10.3 for the first one, but to 10.28 for the second one based on the number of significant digits displayed?

Is there any kind of specific rule that governs this? I realize it is incredibly unimportant, but I can't stop thinking about it, so please help.

(I don't actually have a cat, that question is entirely imaginary, sorry)
posted by Literaryhero to Science & Nature (19 answers total) 1 user marked this as a favorite
Yes! You got it.
posted by lokta at 12:57 AM on July 9 [1 favorite]

When I see numbers, I assume they're given to me with as much precision as is meaningful. When comparing, I wouldn't round at all: 10.297 is less than 10.3, 10.30, and 10.300. But the fact is that no medical indicator has a super cut-and-dry threshold; if your measurement is very close to the cut-off, it's probably worth taking this factor seriously, and it's reasonable to take preventative care. It also matters what the range for a particular indicator is, though: if a "normal" cat is 10.25 and an obese cat is 10.30, then 10.275 isn't so bad. If a normal cat's readings are around 6 and the cat in question is at 10.275, then that's pretty darn close.

Anyway I hope your imaginary cat is okay, and I wish you had shared pictures.
posted by aubilenon at 12:59 AM on July 9 [8 favorites]

I used to teach remedial math to tech college students, many of whom had just scraped the math qualification to get in. I did bang on about not including trailing figures unless they were confident about them. It was silly to report the average of a sample of 10 measurements as 10.297 cm if the measuring device was only accurate to the millimetre: 10.3 cm was _more_ accurate, because it reflected the reality of the situation better. But I didn't make a huge song and dance about it, because some of them struggled with "average" and how to calculate it at all.
posted by BobTheScientist at 1:20 AM on July 9 [5 favorites]

If the number 10.30 was produced as a measurement by some data-generating process, there should also be some corresponding uncertainty to the measurement. It isn't obvious what that uncertainty might be from the number itself.

If the number was produced by a measurement made by a trained experimental scientist, perhaps they might communicate the uncertainty in the measurement by writing something like 10.30 ± 0.02 , to communicate that they wouldn't be surprised if the true value they're measuring was actually 10.28 or 10.32, but they're pretty confident it was less than 10.35, etc.
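That "value ± uncertainty" convention can be sketched in code. The helper below is hypothetical (the name `fmt_measurement` and the restriction to 0 < sigma < 1 are my assumptions, not anything from this thread); it keeps digits down to the decimal place of the uncertainty's first significant figure:

```python
import math

def fmt_measurement(value, sigma):
    """Format "value ± sigma", keeping digits down to the decimal
    place of sigma's first significant figure.  A sketch of one
    common lab convention; assumes 0 < sigma < 1."""
    # e.g. sigma = 0.02 -> leading digit in the hundredths place -> 2
    digits = -int(math.floor(math.log10(sigma)))
    return f"{value:.{digits}f} ± {sigma:.{digits}f}"

print(fmt_measurement(10.297, 0.02))   # 10.30 ± 0.02
print(fmt_measurement(10.297, 0.005))  # 10.297 ± 0.005
```

Note that the first result keeps its trailing zero: with an uncertainty of ± 0.02, the hundredths digit is meaningful even when it happens to be zero.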

Suppose the measurement was produced by a process with a lot of variability in it, and someone familiar with that process estimated the variability as ± 100. It'd be a bit strange to write 10.30 ± 100 as the uncertainty is so large compared to the measurement -- we wouldn't be surprised if the true value was actually -83.1 or 46.1. It might be reasonable to summarise this as 10 ± 100 or even 1e1 ± 100 .

If a value of 10.30 was produced by someone who isn't versed in significant digits, or by some process you don't understand, then arguably all bets are off and it isn't really possible to infer anything from the presence or absence of a trailing zero.

Searching for terms like "uncertainty propagation" finds related textbook material: see the pages on uncertainties in measurements, significant figures, and propagation of error in the Supplemental_Modules_(Analytical_Chemistry) / Quantifying_Nature sections.
posted by are-coral-made at 3:38 AM on July 9

Best answer: I take 10.30 to mean you have information about the second decimal place but it happens to be zero. You're not leaving it out because I might need to know that it's zero. 10.3 indicates that you don't have that information and one decimal place is as precise as you can be, whether or not I need to know.
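As a quick sanity check of that reading, here is the poster's cat score rounded to each implied precision, using Python's `decimal` module (plain binary floats store 10.275 as something slightly smaller than 10.275, which can silently flip the hundredths result, so string-based decimals are the safe way to demonstrate this):

```python
from decimal import Decimal, ROUND_HALF_UP

score = Decimal("10.275")
# Threshold written as 10.3: round to tenths.
print(score.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP))   # 10.3
# Threshold written as 10.30: round to hundredths.
print(score.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 10.28
```

Rounded to tenths, the imaginary cat just hits the 10.3 threshold; rounded to hundredths, it stays below 10.30.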

In manufacturing there is absolutely a difference between the two as dimensions, for example. A part measuring 10.32 mm fits only one of those specs: it rounds to 10.3, but against a dimension specified as 10.30 mm it is (probably) out of spec.

Precision vs accuracy might be an interesting related rabbit hole to disappear down.
posted by deadwax at 5:59 AM on July 9 [5 favorites]

Best answer: It's worth noting that in the other direction, with places before the decimal point, zeroes are less informative. If I give a measurement of "2300", the potential error is really unclear: is it accurate only to the hundreds place, or to the tens, or the ones (presumably no more accurately than that)? 2305 or 2347 don't have this problem; both are communicated unambiguously as accurate to the nearest one. Likewise, 2300.0 is accurate to the nearest tenth, but for numbers which end in pre-decimal-point zeroes and aren't accurate to at least the first decimal place, the number alone is fundamentally ambiguous. This can be worked around by providing the number along with accuracy context ("2300 to the nearest ten" communicates "between 2295 and 2305"), or by using a variant form which does have decimal places: "2.30 thousand" and the scientific-notation form "2.30×10^3" both communicate that nearest-ten accuracy.
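That scientific-notation workaround is easy to demonstrate. Python's standard `e` format spec (nothing special, just the builtin format mini-language) prints one digit before the point and exactly as many after as you ask for, so the significant-figure count is explicit:

```python
# A bare "2300" is ambiguous; scientific notation is not.
for sig_figs in (2, 3, 4):
    # precision is digits after the point, so sig figs minus one
    print(f"{2300:.{sig_figs - 1}e}")
# 2.3e+03   (2 significant figures)
# 2.30e+03  (3 significant figures)
# 2.300e+03 (4 significant figures)
```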
posted by jackbishop at 6:46 AM on July 9 [4 favorites]

There are a lot of context-driven cues for accuracy as well, in this latter case. A whole number less than 100, used to count something discrete rather than measure something continuous, would in most contexts be taken as an exact value; if I say "here's a bag of 40 apples", most people would not consider that a truthful description of a bag containing 35 apples. (Of course, if you buy such a bag, which is usually actually sold by weight, the packaging will likely hedge the count with language like "approximately" or "around".) But even for countable objects, sufficiently large quantities are typically treated as having the same inherent inaccuracy as continuous measurements: for instance, if I were to say "there were 14 billion one-dollar bills in circulation in 2022*", most folks would not take that as meaning exactly 14,000,000,000 individual one-dollar notes, but rather as an approximation.

*In actual fact, the data source I took that from said "14.3 billion", which unambiguously communicates accuracy to the nearest hundred million, but I supposed a more careless phrasing for purposes of illustration.
posted by jackbishop at 6:57 AM on July 9

Most of the answers so far assume that whoever provided you the numbers knows what they are doing. I find that is not a safe assumption. Many people just pass along whatever Excel, Google, or their calculator reports; most aren't doing a rigorous error-bounds estimate or propagating significant digits. For example, it doesn't matter whether you type 10.30 or 10.30000 or 10.3 into Excel: it's going to display as 10.3.
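The same display behaviour shows up in Python: a plain float can't remember how many trailing zeros you typed, while the standard-library `decimal` module, given string input, does (a small sketch, not tied to any particular spreadsheet):

```python
from decimal import Decimal

# A float forgets the trailing zero you typed:
print(10.30)           # prints 10.3
print(10.30 == 10.3)   # True: they are the very same float

# Decimal, built from strings, keeps the digits as written:
print(Decimal("10.30"))                      # 10.30
print(Decimal("10.30") == Decimal("10.3"))   # True: equal in value...
print(Decimal("10.30").as_tuple().exponent)  # -2: ...but 2 places kept
```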

In your imaginary cat example, if it were that close you would not rely on ambiguous rounding arcana to make a decision, you would talk to your vet about what the various inputs are to that number, and how they apply to your cat in particular. (And if that "obesity" calculation is anything close to how BMI is calculated and reported for humans, mostly just ignore it in favor of better measures anyway)
posted by mrgoldenbrown at 7:45 AM on July 9 [2 favorites]

IMO: it has to do with the significance of the digits after the decimal, same as it does before the decimal. Programmers put some thought into such things, but often the scales were determined a long time ago; some are kinda dumb and some are really dumb.

There is also a 'cultural' component to this. More decimals seems more precise to the layperson, and therefore more 'trustworthy', even though often they are estimates and total made-up nonsense. It depends on what the number is being used for!

In your example: is there a significant difference between 10.275 and 10.30? Let's do a small experiment:
An average cat weighs 20 lbs. As fractions of that, 10.275/20 = .51375 vs 10.3/20 = .515, a difference of about .001, which comes to 0.025 lbs, or well under an ounce. Is that significant? No, it is not. You cannot maintain your cat's weight to less than 1 ounce.

Now let's apply the same .001 difference to a distance calculation (if my math is right). Over 1 km, a difference of .001 is off by about 1 meter. Close enough for a person? Yes. How about a gun shot? No.
posted by The_Vegetables at 8:25 AM on July 9 [1 favorite]

You wouldn't want to round 10.4 down to 10 because that would give a misleading diagnosis, so yes the measurement scale you are given does imply a certain number of "significant digits" are required.
posted by grog at 9:01 AM on July 9

Best answer: Yep! Chiming in to back up the answers saying that your interpretation of rounding with significant digits is correct.

For chemists, the rule is that the last digit should be uncertain --> you might read the volume in this graduated cylinder as 12.34 mL and I might read it as 12.35 mL. We agree on the tens, ones, and tenths of a millilitre, but we read the last digit differently. So this tool (the graduated cylinder) has an uncertainty of ± 0.01 mL. Every time we use this graduated cylinder, we would measure to the hundredths of a millilitre: it is precise to the hundredths of a millilitre. The accurate measurement would be however much volume is truly in that cylinder (hopefully really close to 12.34 or 12.35 mL).

But lots of people randomly cut or add decimal places for all sorts of reasons. So I never trust anyone's decimal places, and I live my life accordingly.

In my personal and professional life, I ignore rounding rules so as to ensure safety margins in things that matter. If I am estimating taxes due in the future, I round up a couple of hundred bucks to make sure the money is there when I need it. When I award final grades, I round up an entire percentage point for any students on the cusp of a grade boundary: so if my gradebook says a kiddo earned 84.6% (letter grade: B) --> the computer rounds the final grade to 85% --> I correct it to read 86% (letter grade: A). There is no chance my grading was accurate to better than 2%, so if a student's (computer-rounded) grade is within 1% of the next letter grade up, I round up.
posted by Sauter Vaguely at 9:18 AM on July 9 [1 favorite]

perhaps an example with pi would help:
Approximating numbers by rounding is not quite as straightforward as truncation...
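A minimal illustration of that distinction in Python (stdlib only; the specific values here are mine, not from the linked example). Rounding looks at the dropped digits, while truncation simply discards them:

```python
import math

print(round(3.149, 2))                # 3.15 (rounds the dropped 9 up)
print(math.trunc(3.149 * 100) / 100)  # 3.14 (just chops it off)

# With pi, the two happen to agree at 2 decimal places:
print(round(math.pi, 2))              # 3.14
```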
posted by HearHere at 10:48 AM on July 9

Best answer: Significant digits is actually a term (sometimes also significant figures). You and everyone else above have it right.

However, do note there are cultural and other factors that mean that not everyone means that exact thing when writing something like 10.30 - or even more 10.34.

Also, if you get into really serious scientific writing, they won't just assume "significant digits". They will actually specify or clarify the actual uncertainty, confidence interval, or standard deviation in some way.

Just as a random example, here is a table from an article about stellar distances that lists uncertainty ranges like "+/-31" or "+/-181".

But also in that same article you will see different values they are discussing written like these:

* f = 0.02
* α = 1.811
* Z/X = 0.0230
* Z0 = 0.01876
* Y0 = 0.26896

You bet your darn tootin' those numbers are consciously written with the proper number of decimal places to show with what precision each number is known.

If you were to look up the source papers for those figures, you would probably find the uncertainty laid out even more precisely. But for routine work with numbers and calculations, it is usually sufficient to know each number to its proper significant digits, plug those into your calculator or computer when running equations, and then use the significant digits of the inputs to decide how many digits of the final result are significant.

So that is good enough for everyday use in many situations. If you need more precise determination of output error ranges based on input error ranges, you are looking at the study of Error Analysis and the details of that can be quite complicated and daunting indeed.
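That everyday workflow can be sketched in a few lines. `round_sig` is a hypothetical helper I'm introducing here, not a standard-library function; it implements the usual rule of thumb that a product keeps no more significant figures than its least precise input:

```python
import math

def round_sig(x, n):
    """Round x to n significant figures (hypothetical helper,
    not part of any standard library)."""
    if x == 0:
        return 0.0
    # Shift the rounding position by the magnitude of x.
    return round(x, n - 1 - int(math.floor(math.log10(abs(x)))))

# Both inputs carry 3 significant figures, so the result should too:
area = 10.3 * 4.27         # raw product is 43.981, give or take
                           # a binary-representation wobble
print(round_sig(area, 3))  # 44.0
```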
posted by flug at 2:22 PM on July 9 [1 favorite]

Yes, the way I learned it doing chemistry, that usually indicates the number of significant digits you should use, but sometimes it means which instrument you should use. So maybe a little different.

If the procedure said to measure out 100 ml of some liquid, then I could use a big fat beaker and fill it to the 100 ml line, good enough. If it said 100.0 ml, that meant to measure it out with a graduated cylinder marked off in 1 ml increments. But after that I would use 100 in the math: accurate to +/- 1 ml, not +/- 0.1 ml. It could really be 100.2, depending on how good my eyeballing was.
posted by ctmf at 5:11 PM on July 9 [1 favorite]

I guess the difference is, you can generally read an analog instrument to 1/2 of the marked increments. So a graduated cylinder with 1 ml increments is only accurate to +/- 0.5 ml. Still, that was the convention: the ".0" meant "exactly", not an extra tenth-of-a-ml of precision. I think I'm explaining this badly.
posted by ctmf at 5:17 PM on July 9

I only really ever saw that convention when the number was a round number and the precision was kind of ambiguous. 100: is that 1 significant figure or 3? When they meant 1 significant figure it was written 100, and 100.0 meant "no really, an even 100, not 99 or 101."
posted by ctmf at 5:25 PM on July 9

You can solve some ambiguity around round numbers with scientific notation! 1*10^2 vs 1.00*10^2
posted by aubilenon at 8:17 PM on July 9

Response by poster: Thanks everyone, I will consider my curiosity sated. It seems like there are various guidelines depending on the situation but no specifically hard and fast rules. Good enough! :)
posted by Literaryhero at 11:18 PM on July 9

I don't know if this is an obsolete and/or British thing, but decimal places/significant digits with trailing zeroes indicated additional precision. On programming forums, you'd occasionally get someone angry that adding 10.30 to 10.30 didn't return 20.60 … then you'd have to have the hard conversation about floating point probably not being able to represent exactly that number anyway.
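That hard conversation usually starts with the textbook case, 0.1 + 0.2 (a sketch; the `decimal` module is the standard-library way to get the arithmetic those forum posters expected, trailing zeros included):

```python
from decimal import Decimal

# Binary floats can't represent 0.1, 0.2, or 0.3 exactly:
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False

# Decimal arithmetic on string inputs keeps the written precision:
print(Decimal("10.30") + Decimal("10.30"))  # 20.60
```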
posted by scruss at 4:19 PM on July 14
