Computers and Irrational Numbers
March 30, 2018 6:56 AM

How do computers "think" about irrational numbers? If I tell my computer: √2 ^ 2, does it first calculate the square root out to a certain decimal place, then square it, and round the answer up to 2? Or does it somehow understand the irrationality of √2 and not perform that part of the operation? I don't have a CS background, so this question might be pretty naive. Links to articles/discussions are welcome; my googling didn't go so well.
posted by Think_Long to Science & Nature (17 answers total) 6 users marked this as a favorite
 
Best answer: There are two main ways this is handled.

At the chip level, the processor, you have a limited number of bits to work with. Most machines are 64 bit, and they may have 128 bit double-precision types. What that means is that you can only stuff so much information into those 64 or 128 bits. The relevant wiki is the IEEE spec, which explains how you represent decimal values on the chip and do math with them. The double precision wiki entry helps too.

At the software level: there are libraries (software) that have arbitrary precision/length. The relevant wiki is the arbitrary-precision arithmetic entry. The big idea is that chip-level math is fast, but because you can only fit so much info into 64/128 bits, you will have round-off issues and errors. The libraries sacrifice speed for getting the right answer.
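As a rough illustration of that trade-off (this uses Python, whose integers are arbitrary precision in software while its floats are 64-bit hardware doubles):

>>> (10**50 + 1) - 10**50     # arbitrary-precision integers keep every digit
1
>>> (1e50 + 1) - 1e50         # 64-bit floats can't, so the +1 is lost to round-off
0.0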
posted by k5.user at 7:14 AM on March 30, 2018


Best answer: You might want to look into the IEEE standards for floating-point arithmetic. The Wikipedia article is here and the full (very long and very technical) document is here. If you search for this topic, you'll find a lot of other explanations and examples; this is a key concept for any undergraduate math or computer science program that includes significant computation.
posted by ElKevbo at 7:15 AM on March 30, 2018


Best answer: What Every Computer Scientist Should Know About Floating-Point Arithmetic

(I took a class that spent a good month on this subject and I'm actually not 100% sure what the answer to your example is! I think it's theoretically possible that a compiler for a lazy language could be sophisticated enough to recognize a square root squared and skip the actual computation of the square root, but I'm not sure if any programming language actually does this.)
posted by perplexion at 7:25 AM on March 30, 2018 [2 favorites]


Best answer: Software designed to carry out calculations via symbolic mathematics, such as computer algebra systems, is probably going to handle specific cases like roots, i.e. "understand the irrationality of √2 and not perform that part of the operation," as you say.

I would think that they aren't going to have a general way to represent just any irrational number, but maybe someone more familiar with that stuff can address the question.
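As one concrete illustration (the answer doesn't name a specific package, but the Python library SymPy is a freely available computer algebra system), the square root stays symbolic and the simplification is exact:

>>> import sympy
>>> sympy.sqrt(2)        # kept as an exact symbolic object, not a decimal approximation
sqrt(2)
>>> sympy.sqrt(2) ** 2   # the system knows (sqrt(2))^2 is exactly 2
2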
posted by XMLicious at 7:27 AM on March 30, 2018 [2 favorites]


Best answer: Computers typically store floating point values as binary numbers, not decimal, and they are of limited precision; no computer can store the exact value of an irrational number. And yes, that means for calculations, very tiny imprecisions will crop up. For the most part, these imprecisions will be so tiny that they will not matter, but sometimes they can be important, and sometimes it can be catastrophic if the programmer does not take them into account.

For example, on my desktop computer, this is what I get when I calculate (√2)²:
2.0000000000000004440892098500626161694526672363281250
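One way to see that same exact stored value yourself, using Python's standard math and decimal modules (Decimal of a float shows precisely what the 64-bit float holds):

>>> import math
>>> from decimal import Decimal
>>> Decimal(math.sqrt(2) ** 2)
Decimal('2.000000000000000444089209850062616169452667236328125')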
posted by 1970s Antihero at 7:29 AM on March 30, 2018 [2 favorites]


Best answer: Or to clarify my answer: it's tough to answer the question "how does the computer think about numbers" because the computer's "thinking process" is pretty complex. You write a program that *you* can understand, then the computer "translates" it into a list of instructions that the computer can execute on hardware. So there are actually two levels of "thinking" here: the first when the computer tries to understand what you're telling it to do, and the second when it goes ahead and does what it thinks you told it to do.

At the point where the computer is actually executing instructions, all of the above answers are correct: an 'irrational' number is always going to be a series of bits representing a rounded, imprecise version of that number.

But the "translation" process also counts as part of the thinking process, and that's where it's a little more complicated. You could actually write your own "translator" (actually known as a compiler) and tell the computer that "√x ^ 2" is "x". So, the process would look something like -> person types "√2 ^ 2" -> compiler notices the pattern and replaces it with "x" in the list of instructions it gives to the computer -> the computer, executing that list of instructions, never uses an irrational number because it has no idea it's there. It just gives X, which is 2.

But again, I'm not sure how many compilers, if any, do this. I suspect some languages designed for math (like computer algebra systems) do, but that's just speculation on my part.
posted by perplexion at 7:53 AM on March 30, 2018


Response by poster: Okay, cool. From what I gather, machines are only able to consider irrationals as discrete numbers out to whatever their processing limit is set at.

So, I imagine any mathematician working on a theoretical level needs to use or develop a compiler with the pre-coded patterns that have already been identified as perplexion described. In that case, then the machine still isn't considering an irrational number holistically*, just recognizing the pattern for when it might become rational?

*I realize the computer doesn't actually "think" about things this way, I'm just trying to grasp its most base level logic I guess
posted by Think_Long at 8:26 AM on March 30, 2018


Best answer: A small correction to a comment above: a 64-bit float is a double-precision floating point number. Single-precision floats are 32-bit. Some CPUs implement extended-precision variants (the Intel floating point unit does 80-bit arithmetic internally, for instance, which can affect results). 128-bit floats are sometimes called quadruple precision.
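To see how much the precision level matters, here is a small sketch (assuming NumPy is installed, since it exposes explicit 32- and 64-bit float types):

import numpy as np

# (sqrt(2))^2 at two hardware precisions; neither is exactly 2, but the
# single-precision error is many orders of magnitude larger
single = np.sqrt(np.float32(2.0)) ** 2   # 32-bit: ~7 significant decimal digits
double = np.sqrt(np.float64(2.0)) ** 2   # 64-bit: ~16 significant decimal digits
print(single, double)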

Think_Long: modern compilers are certainly capable of the kind of optimising transformations you're thinking of. E.g. for the code
#include <cmath>
float pow2(float arg) {
  return pow(sqrt(arg), 2.0f);   // (sqrt(arg))^2, written naively
}
g++ -O -ffast-math will emit a function which simply returns its argument:
$ cat math.S 
	.file	"math.cc"
	.text
	.globl	_Z4pow2f
	.type	_Z4pow2f, @function
_Z4pow2f:
.LFB272:
	.cfi_startproc
	rep ret
	.cfi_endproc
.LFE272:
	.size	_Z4pow2f, .-_Z4pow2f
	.ident	"GCC: (Debian 7.3.0-13) 7.3.0"
	.section	.note.GNU-stack,"",@progbits
However, the -ffast-math option is required: the compiler is otherwise not allowed to make transformations which alter the results of a function, even when those results differ from 'mathematical reality' purely because of loss of precision during intermediate operations. The compiler can make quite radical transformations to the input code if it can spot these kinds of optimisations, but you can't really rely on it doing so, and knowing the rules and how they apply to mathematical operations is in fact quite a deep topic, as people have hinted above.

More generally, symbolic algebra packages will happily make these kinds of simplifications because they work at a different level of abstraction than a C++ compiler does.
posted by pharm at 8:54 AM on March 30, 2018 [1 favorite]


Best answer: Generally speaking, most languages understand the following:

Integers: Whole numbers, possibly < 0.
Floating-point numbers: Machine approximations of decimal numbers.

Optionally, some languages will include the following:
Fixed-point numbers: 3.0004, exact to a certain number of decimal places. Truncated beyond that level of precision.
Rational numbers: {1/3, 1/2, 2/3} Any ratio of two integers.
Imaginary numbers: Arithmetic involving √-1.

Functions that produce square roots almost always create one of the above approximations of the irrational number. Languages generally follow order of operations, making approximations as they go. When coding math, it's important to recognize what numeric representations you're feeding into operations and functions, and what numeric representations you expect to get out of them.

As previously noted, computer languages generally don't perform algebraic analysis to recognize that (√x)² = x, although there are exceptions.
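For instance, several of the representations in the list above are available in standard Python, and you can see how they behave differently (just an illustration, not tied to any particular language this answer had in mind):

>>> from fractions import Fraction
>>> Fraction(1, 3) * 3          # exact rational arithmetic
Fraction(1, 1)
>>> (1 / 3) * 3                 # floating point; this one happens to round back to 1.0
1.0
>>> complex(0, 1) ** 2          # arithmetic involving √-1
(-1+0j)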
posted by GenderNullPointerException at 9:31 AM on March 30, 2018


Best answer: From what I gather, machines are only able to consider irrationals as discrete numbers out to whatever their processing limit is set at.

No, that's not the case in general. That's true of IEEE floating point stuff, which is what most people above are talking about. I think you're getting more answers from computer folks than math folks, but they should be careful about what they are ignoring. There's a whole world of symbolic computation, aka "computer algebra". Stuff like Mathematica, Maple, Sage/Maxima, etc. This is hinted at by XMLicious and a few others above, but it bears more attention.

These computer algebra systems commonly "think" of expressions like 2^0.5 just like you should: it's the number that, when squared, gives you two. And software systems made to work with this stuff will be able to handle algebraic numbers* like root two as conceptual objects without using any decimal approximation.

So the answer to your question is very much "it depends". If nobody is using any special tools, then the IEEE floating point stuff applies. But lots and lots of people use a variety of tools that are specifically designed to avoid the problems associated with using decimal approximations to numbers like root two! For things like web browsing and video games etc, it usually doesn't matter if we have exact analytic results for this stuff. But sometimes it does, and clever people have worked very hard to make it possible for computers to treat numbers like root two as exact concepts, rather than as decimal or binary numbers.

So, I imagine any mathematician working on a theoretical level needs to use or develop a compiler...
No. TL;DR: while some mathematicians work on developing this type of software, or prefer to hand-roll their own, most anyone who needs to do this stuff professionally will be using one of the top 5 big names in computer algebra/symbolic computation.

*Most irrational numbers that can be named or easily written are algebraic numbers. Most people only ever deal with two numbers that are irrational and not algebraic: e and pi (though there are of course infinitely many more, they just rarely come up outside of number theory). Maxima uses floats for e and pi, but only when evaluated.
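To make that concrete (using the Python library SymPy as one example of such a system; the Mathematica/Maple/Sage tools mentioned above behave similarly):

>>> import sympy
>>> sympy.cos(sympy.pi)     # pi is kept as an exact symbol, so this is exactly -1
-1
>>> sympy.pi.evalf(30)      # a decimal approximation appears only when you ask for one
3.14159265358979323846264338328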
posted by SaltySalticid at 10:19 AM on March 30, 2018 [12 favorites]


Best answer: Yeah, this can have significant consequences: anyone who tests "equality" of floating-point numbers in computer code is eventually going to land in trouble, because it notoriously gives false negatives once you process the numbers at all. Numbers which "should" be equal to each other often aren't, simply because of round-off error, and best practice is generally to test not whether floats are equal, but whether they differ by some amount far below the threshold at which two numbers might be intentionally different.

Integer arithmetic doesn't have this problem, of course. Fixed-precision can, although it usually doesn't.
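In Python, for example, that kind of tolerance-based comparison is built in as math.isclose (a sketch, not something the answer specifically referred to):

>>> import math
>>> math.sqrt(2) ** 2 == 2.0               # exact equality fails because of round-off
False
>>> math.isclose(math.sqrt(2) ** 2, 2.0)   # true within a small relative tolerance
True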
posted by jackbishop at 10:24 AM on March 30, 2018


Response by poster: I think "Computer Algebra" was the term that I was searching for, that seems to be the exact thing I've been puzzling over. Really interesting answers, everyone, thanks for the different perspectives!
posted by Think_Long at 11:54 AM on March 30, 2018 [1 favorite]


Best answer: There is at least one other option that hasn't been mentioned here: exact real arithmetic. Here, numbers are represented as programs that can produce a value to any requested degree of precision. (For example, you specify n, and the program returns the represented value to n decimal places.) Computations can be done with these, and the result inspected with whatever degree of precision is required. I believe that this has been used in practical numerical calculations, but I don't know much about that. They also have some nice theoretical implications, relating computability theory to real analysis.
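A toy sketch of the idea in Python (real exact-real-arithmetic libraries are far more sophisticated; this just shows a "number" that is really a program you query for digits):

from math import isqrt

def sqrt2(n):
    # return sqrt(2) as a string with n decimal places, computed with exact
    # integer arithmetic: isqrt(2 * 10^(2n)) is floor(sqrt(2) * 10^n)
    digits = str(isqrt(2 * 10 ** (2 * n)))
    return digits[0] + "." + digits[1:]

print(sqrt2(10))   # 1.4142135623
print(sqrt2(50))   # ask again whenever you need more precision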
posted by eruonna at 2:01 PM on March 30, 2018


Best answer: Just as an illustration, using Python 2.7.13 (you can invoke "python" at a terminal window and try it out yourself too):
>>> import math
>>> math.sqrt(2.0**2) - (math.sqrt(2.0))**2
-4.440892098500626e-16

posted by RedOrGreen at 2:04 PM on March 30, 2018


Best answer: -4.440892098500626e-16

This may seem like a weird value, but it is actually a 1-bit error, exactly 2⁻⁵¹.
posted by 1970s Antihero at 4:15 PM on March 30, 2018 [1 favorite]


Best answer: A couple of comments:

Floating point numbers are probably hexadecimal (pretty much equivalent to binary), i.e. totally base 16 and converted to base 10 only if anyone needs to see the value.

These numbers have a lot of trouble even with very common fractions, just as we all have trouble writing 1/3 as a decimal: 0.333333333...333333. It's not just transcendental numbers that aren't exact. I worked with a DEC minicomputer which, if you asked it to print the number 6, would print 5.99999.
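You can see this directly in Python: 0.1 and 0.2 have no exact binary representation, just as 1/3 has no exact decimal one, and the decimal module will show you what a float really stores:

>>> 0.1 + 0.2
0.30000000000000004
>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')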

Historically, numbers have been represented in lots of different ways, and computations have been done in different ways. In some early computers, multiplication was done pretty much the same way as kids are taught in 3rd grade using a multiplication table. Exponents were calculated by taking logarithms (using a table), multiplying, and anti-logging.

IBM computers used "packed decimal" numbers for money computations because it offered efficient use of storage combined with exact financial computation, with no fractions of a cent carried along.
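The same idea is available in software today; Python's decimal module, for instance, does exact decimal arithmetic of the kind you want for money:

>>> from decimal import Decimal
>>> Decimal('0.10') + Decimal('0.20')   # exact, no stray fractions of a cent
Decimal('0.30')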
posted by SemiSalt at 10:32 AM on April 1, 2018


Floating point numbers are probably hexadecimal (pretty much equivalent to binary), i.e. totally base 16 and converted to base 10 only if anyone needs to see the value.
I'm no numerical expert, but the last machines I heard of that used hexadecimal floating-point were System 360/370 systems. A binary mantissa always uses all the bits available (in fact, since you know the first bit is a one, you actually get one bit "for free"), but a hex mantissa sometimes "wastes" up to three (four, counting the "free" bit) bits compared to binary.
posted by Gilgamesh's Chauffeur at 6:03 PM on April 4, 2018

