October 5, 2005 9:25 AM Subscribe

What's the sum of all integers? I remember from somewhere in the distant past of my math education that you can sum up all the integers and get a wide variety of different results. For instance, you can write the sum as S = 0 + (1 + -1) + (2 + -2) + ... where each pair cancels to 0, so S must be 0. But you can also write it as S = 0 + 1 + (2 + -1) + (3 + -2) + ... where each pair adds up to 1, so S must be infinity. With a little more work it seems like you could also get other answers. My question is: are these answers correct, or are all but one of them mistakes like the numerous "proofs" that 3=2? If they're all correct, how does it make sense that the same numbers can add up to different things when you get to infinity? If they're not all correct, what's the fundamental mistake?

posted by jacobm to Science & Nature (29 answers total) 2 users marked this as a favorite

IANAM but the mistake is that you don't "get to infinity"; there is no "sum of all integers". You can make assertions that are true of all integers, and assertions that are true of the set of all integers, but you cannot perform a calculation on all the integers (or all the reals, or all the complex numbers,...) and get a well-defined result.

posted by nicwolff at 9:36 AM on October 5, 2005

What you are describing is a supertask. Supertasks are frequently fraught with paradoxes such as this one, which is similar to Thompson's Lamp (see link above for details).

posted by justkevin at 9:39 AM on October 5, 2005

The sum of an infinite set of things doesn't, properly speaking, exist. What we talk about instead is the *sequence* of *partial sums*.

There is a notion of what it means for a sequence to converge to a number. For example, the sequence 1/2, 1/3, 1/4, .... converges to zero. This means that for any little number (traditionally denoted ε), eventually there's a member of the sequence that's smaller than ε.

Now, if I've got a bunch of things that I want to add up, I can construct a sequence out of them as follows: the first member of the sequence is just the first thing; the second is the sum of the first two; the third is the sum of the first three, and so on. Now I've got a sequence, and I can ask whether it converges.

For example, consider the numbers of the form 1/2^n, where n = 0, 1, 2, .... So we've got 1, 1/2, 1/4, 1/8, 1/16, .... The partial sums are: 1, 3/2, 7/4, 15/8, 31/16, ... These partial sums converge to 2. So we write:

1 + 1/2 + 1/4 + 1/8 + ... = 2.

This is really just shorthand for all the above stuff about partial sums, though.
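The partial-sum bookkeeping above is easy to check numerically. A small Python sketch (not part of the original comment) that builds the sequence of partial sums for 1 + 1/2 + 1/4 + ... and watches it approach 2:

```python
# Illustrative sketch: partial sums of the geometric series 1 + 1/2 + 1/4 + ...

def partial_sums(terms):
    """Yield the running partial sums of an iterable of numbers."""
    total = 0.0
    for t in terms:
        total += t
        yield total

geometric = [1 / 2**n for n in range(20)]
sums = list(partial_sums(geometric))
print(sums[:5])   # [1.0, 1.5, 1.75, 1.875, 1.9375]
print(sums[-1])   # within about 2e-6 of 2
```

The printed sequence is exactly the 1, 3/2, 7/4, 15/8, ... described above, and the last entry is as close to 2 as 20 terms allow.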

Here's a weird thing: shuffling the order of the things you're adding up can **change** whether their sequence of partial sums converges! Strange but true. For example, 1 - 1/2 + 1/3 - 1/4 + 1/5 - ... can be reordered to add up to **any** number you want, or to fail to converge at all.

The example you give is another instance of this.
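The rearrangement fact mentioned above (the Riemann series theorem) can be demonstrated with a greedy algorithm: add positive terms of 1 - 1/2 + 1/3 - ... until you overshoot a chosen target, then negative terms until you undershoot, and repeat. A hypothetical Python sketch, not from the original comment:

```python
# Greedy rearrangement of the alternating harmonic series 1 - 1/2 + 1/3 - ...
# so that its partial sums approach an arbitrary target.

def rearranged_partial_sum(target, n_terms):
    """Pick terms greedily: positives (1, 1/3, 1/5, ...) while at or below
    the target, negatives (-1/2, -1/4, ...) while above it.
    Returns the partial sum after n_terms picks."""
    pos, neg = 1, 2       # next odd / even denominator to use
    total = 0.0
    for _ in range(n_terms):
        if total <= target:
            total += 1 / pos
            pos += 2
        else:
            total -= 1 / neg
            neg += 2
    return total

print(rearranged_partial_sum(1.5, 100_000))    # close to 1.5
print(rearranged_partial_sum(-2.0, 100_000))   # close to -2.0
```

Because the terms shrink to zero, each overshoot and undershoot gets smaller, so the rearranged partial sums home in on whatever target you chose.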

posted by gleuschk at 9:41 AM on October 5, 2005

I think I started to be suspicious of string theory when I read Polchinski's book on it and saw the formula, right there in Chapter 1, that the sum of all positive integers is -1/12. Of course, there's a huge number of caveats & justifications involved in this, but still.

posted by Johnny Assay at 9:53 AM on October 5, 2005

The sum of an infinite series is defined if and only if the series converges. Roughly speaking, that means the sum of the first N terms in the series grows closer and closer to a fixed value as N becomes larger.

For an example of sum of an infinite series that is defined, consider the series:

1/2 + 1/4 + 1/8 + 1/16 + 1/32....

The sum of the first term is 1/2. The sum of the first two terms is 3/4. The sum of the first three terms is 7/8. The sum of the first four terms is 15/16. And so forth. As we add more and more terms, the sum becomes closer and closer to 1, without ever reaching it. (To generalize, the sum of the first N terms in this series is 1 - 1/2^{N}.) In fact, we can get the sum as close as we like to 1 (as long as we don't want to reach 1 itself) by adding up a finite number of terms in the series. To put it another way, the sum exceeds 1-x, where x is any positive number, no matter how small (but remember that "positive" implies "non-zero"), in a finite number of terms of the series.

Thus, we say that the sum of the infinite series above is 1.

Now consider your series: 0 + 1 + (-1) + 2 + (-2) + 3 + (-3) + 4 + (-4)...

The sum of the first term is 0. The sum of the first two terms is 1. The sum of the first three terms is 0. The sum of the first four terms is 2. The sum of the first five terms is 0. The sum of the first six terms is 3.

This series does not converge, as the consecutive sums are not getting closer and closer to any particular number. It's not enough that every other consecutive sum is zero; it's still not converging. Thus, this sum is undefined.

For more, you may be interested in reading the wikipedia articles Series, Convergent series, and Divergent series.

It may also be helpful to meditate on another series, similar to yours but a bit simpler: what is the sum of 1 + (-1) + 1 + (-1) + 1 + (-1) + 1 + (-1)...?

On preview: um, pretty much the same thing gleuschk said. I'd only add that the "switching the order around changes the answer" only works for sequences that converge, but do not absolutely converge (again, see the wikipedia article). Unless I'm mistaken, the sequence you give *isn't* an example of that, because that sequence doesn't converge in any sense; there's no ordering of the integers, AFAIK, which would cause the consecutive sums to converge.
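As a quick numerical illustration of both divergent examples in this thread (a sketch, not part of the original comment), here are the partial sums of Grandi's series 1 - 1 + 1 - 1 + ... and of the question's paired series 0 + 1 + (-1) + 2 + (-2) + ...:

```python
# Partial sums of two divergent series: one oscillates forever between
# two values, the other has unbounded peaks. Neither settles on a limit.

from itertools import chain

def running_sums(terms):
    """Return the list of partial sums of a finite list of numbers."""
    total = 0
    out = []
    for t in terms:
        total += t
        out.append(total)
    return out

grandi = [(-1)**n for n in range(10)]    # 1, -1, 1, -1, ...
print(running_sums(grandi))              # 1, 0, 1, 0, ... never settles

paired = [0] + list(chain.from_iterable((n, -n) for n in range(1, 6)))
print(running_sums(paired))              # 0, 1, 0, 2, 0, 3, ... peaks grow
```

In both cases the partial sums fail the convergence test: Grandi's bounce between 0 and 1, and the paired series keeps returning to 0 while its peaks grow without bound.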

posted by DevilsAdvocate at 10:20 AM on October 5, 2005

Here's a short, entertaining paper by a logician who indicates that your problem "threatens the foundations of transfinite arithmetic." I'm not sure if the paradox has an agreed-upon solution.

(Second time I've linked to Peter Suber today.)

posted by painquale at 10:21 AM on October 5, 2005 [1 favorite]

The numbers in the middle are negligible, but at either end of the sequence you get infinity + (-infinity), the result of which is undefined, and therefore S is also undefined. The thing about undefined numbers is that all numbers are [sort of] equal to them, so any proof that provides a value for S is [sort of] correct.

posted by cillit bang at 10:22 AM on October 5, 2005

Also, note that the fact that *some* sums of infinite series have well-defined results demonstrates the fallacy of nicwolff's assertion that "...you cannot perform a calculation on all the integers (or all the reals, or all the complex numbers,...) and get a well-defined result." For a more prosaic example of a calculation on all the real numbers, note that in calculus you frequently calculate the integral of a function from -∞ to +∞ and the result is often well-defined.

posted by DevilsAdvocate at 10:32 AM on October 5, 2005

I should have allowed both shuffling and grouping. As I wrote it, you're right -- the sequence doesn't converge for any ordering. (I think.) I was trying to steer away from technical terms like conditional, absolute, etc.

The article painquale links to is essentially the Ross-Littlewood paradox, mentioned in the supertask article linked earlier. I disagree that this question is an example of a supertask -- it's a strictly mathematical issue, and supertasks seem to me to get most of their interest from physical/computational considerations.

cillit bang's answer is (sorry, cb) nonsensical.

posted by gleuschk at 10:33 AM on October 5, 2005

(oops, I cut off the important clause in quoting you, DA. Should say, "for sequences that converge **but do not converge absolutely**." sorry.)

posted by gleuschk at 10:46 AM on October 5, 2005

DevilsAdvocate is correct. If you expect one answer, convergence is an a priori requirement for applying an operation on a partial set from an infinite series. If you receive multiple answers from applying operations on a partial set of an infinite series, your series is divergent. Convergence tests help establish conditions when your series is convergent or divergent.

posted by Rothko at 10:51 AM on October 5, 2005

No. As pointed out above, the relevant property is absolute convergence. See the link to the Riemann Series Theorem.

posted by gleuschk at 11:00 AM on October 5, 2005

The great British number theorist G.H. Hardy beautifully sums up the modern way of thinking about this genre of problem in his book, *Divergent Series* (1948):

"...it does not occur to a modern mathematician that a collection of mathematical symbols should have a 'meaning' until one has been assigned to it by definition. It was not a triviality even to the greatest mathematicians of the eighteenth century. They had not the habit of definition: it was not natural to them to say, in so many words, 'by X we *mean* Y.' ... it is broadly true to say that mathematicians before Cauchy asked not 'How shall we *define*

1 - 1 + 1 - 1 + ...

but

'What *is* 1 - 1 + 1 - 1 + ...'

and that this habit of mind led them into unnecessary perplexities and controversies which were often really verbal."

In other words: "are these answers correct, or are all but one of them mistakes" is not the question a modern mathematician would ask: it does not even really make sense until we make a *decision* about what we mean when we extend the symbol "+" -- which after all is born to apply to only *two* arguments -- to apply, instead, to infinitely many. A modern mathematician would say: is there a mathematically congenial way to assign a particular value to the sum of all integers? And the answer, to my knowledge, is "there is no overwhelming favorite among the ways that you could do so -- thus, we do not choose to assign any particular value to this infinite sum."

posted by escabeche at 11:36 AM on October 5, 2005

That said, I should say that *one* popular way to assign values to sums like this is to add an extra parameter. For instance, we can define a function

zeta(s) = 1^-s + 2^-s + 3^-s + ...

called the *Riemann zeta function*; this infinite sum has an uncontroversial value whenever s is a real number greater than 1, but not when s is a real number less than or equal to 1. On the other hand, one can define a function f(s) in a totally different way (by means of an integral) and find that, for all real numbers greater than 1, f(s) = zeta(s). But the difference is that f(s), unlike zeta(s), is well defined for *all* values of s. So *one* way to say what we mean by

1 + 2 + 3 + 4 + ....

is: well, I'd like to compute zeta(-1), but that infinite sum doesn't have a well-defined value. On the other hand, whenever both make sense, f(s) = zeta(s); so f is a good substitute for zeta; so wouldn't it be pretty reasonable to assign the value f(-1) to the sum above? And f(-1) is -1/12. And that is where the "physicists' summation" referred to by Johnny Assay comes from. The relevant keywords if you want to read more are "Riemann zeta function" and "analytic continuation."
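The "uncontroversial value whenever s is greater than 1" part is easy to check by brute force. A Python sketch (not from the original comment) truncating the series at s = 2, where the sum is known to equal pi^2/6 -- note that getting zeta(-1) = -1/12 requires the analytic continuation f(s), not this truncated sum, which would just blow up:

```python
# Truncated Dirichlet series for zeta(s); meaningful only for s > 1,
# where the series actually converges.

import math

def zeta_partial(s, n_terms):
    """Sum of k^(-s) for k = 1..n_terms."""
    return sum(k ** -s for k in range(1, n_terms + 1))

approx = zeta_partial(2, 100_000)
print(approx)             # close to zeta(2)
print(math.pi**2 / 6)     # the known value, ~1.6449
```

The two printed values agree to several decimal places; the leftover error is roughly the size of the first omitted term's tail, about 1/100000.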

posted by escabeche at 11:42 AM on October 5, 2005

I have to make one more comment, which is that one thing this is *not* is a paradox. If you are committed to the proposition "every infinite series has a well-defined sum", then the example you present provides difficulties for you. But that proposition is *not* part of standard mathematical practice, so there is no paradox. It is no more problematic than the observation that there are different reasonable answers to the question, "What is the sum of a sheep and a frog?"

posted by escabeche at 11:45 AM on October 5, 2005

Well yeah, I overgeneralized up top there. For another thing, presumably you can sum the product of each integer times zero. [Checks to make sure that he really got 800 on the math SATs, realizes that was a quarter-century ago, wonders how many brain cells his sleep apnea is killing.]

posted by nicwolff at 12:30 PM on October 5, 2005

Everyone that gave great answers above seems to have failed to mention one term that might be useful (or might not). The business of finding whether or not an infinite sum converges or diverges is a business intimately related to finding a limit (hence the epsilon introduced above).

Adding up the series 1/2^{n}, n = 1...infinity, works because the limit of the partial sums is defined. (Finding that the series converges means first realizing the formula for the partial sum, S_{n} = (2^{n} - 1)/2^{n}, and then determining that the limit of that as *n* -> infinity is 1; thus we say that the series converges.)

In the example you gave, Σ n, n = 1...infinity, the limit is not defined. An easy way to see why is that for the sum of all positive integers, each partial sum is simply getting bigger; it is not tending towards some limit. Each time you increase *n*, the amount added gets bigger. In my example above, Σ 1/2^{n}, n = 1...infinity, each time you increase *n*, it adds an even smaller amount than it did previously. This can give one some small intuition about it.

This issue can get a little tricky for some folks, because they are missing the definition issue above, so I think *escabeche*'s quote is particularly germane. We define the sum of all numbers of the form 1/2^{n} to be 1. But what that definition really says is that for any number you give me very close to 1, I can find an *n* such that the partial sum is closer to 1. That definition is paramount, not some cosmic sense of one being the other. It's not really the same as, say, 2 + 2 = 4 (well, it is, but folks believe that 2 + 2 = 4 is true in some cosmic sense, rather than that simply being another definition). There is a more complicated definition here, and that trips folks up (as it seems to have the logician above).

Sorry, this may only add a very small bit to what's already been said, but I love talking about math.

posted by teece at 12:48 PM on October 5, 2005

escabeche, that seems pretty weak... y=x and y=|x| have the same value for all x>=0, that doesn't mean we can take the two functions as interchangeable.

posted by Chuckles at 1:36 PM on October 5, 2005

It's quite common, Chuckles, and it's done as a matter of convention and definition. See the Gamma function for another example. x! = 1*2*3*...*x. But what about 1.4!? Well, with thinking just like for the zeta function, gamma(1) = 1! *, gamma(10) = 10!, so we might as well define gamma(1.4) = 1.4!. That's all that is being talked about.

This is a useful definition, and causes no problems. You could just say that 1.4! is not defined, but the gamma function provides a really useful and logical way to define it for any real number.

* Gamma is actually off by one from the factorial, so gamma(x) = (x-1)!, but that is no big deal.
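Python's standard library happens to expose exactly this function as `math.gamma`, which makes the off-by-one convention easy to check (a sketch, not part of the original comment):

```python
# math.gamma is the Gamma function: gamma(n) = (n-1)! at positive integers,
# and it interpolates smoothly between them, giving a value for "1.4!".

import math

print(math.gamma(5), math.factorial(4))   # both 24: gamma(n) = (n-1)!
print(math.gamma(2.0), math.gamma(3.0))   # 1! = 1 and 2! = 2
print(math.gamma(2.4))                    # a reasonable value for 1.4!
```

The last value lands between 1! = 1 and 2! = 2, which is exactly the interpolation property that makes Gamma the standard extension of the factorial.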

posted by teece at 1:48 PM on October 5, 2005

Yep -- and the difference is that |x| is not a *meromorphic* function. I didn't want to go too much into complex analysis in a MeFi post, but I will simply say that among all functions of a complex variable, the ones which are *meromorphic* are the ones we most admire: so x is a perfectly good meromorphic function, as is the function f described in my post above; and indeed it is a theorem that if there is a meromorphic function f which agrees with some mystery function g in some range, then f is the *only* such function. Without meromorphicity, no uniqueness: x, |x|, 1/2(x+|x|), ... all agree with x on the range [0, infinity), but only x is meromorphic; so it is the "right" choice if we want to extend this function to negative numbers.

/morethanyouprobablywantedtoknow

posted by escabeche at 1:52 PM on October 5, 2005

meromorphism eh - that is exactly what I wanted to know. Never heard of it (or if I have I have forgotten completely), but it does seem to justify the statement you were making. Thanks!

posted by Chuckles at 2:04 PM on October 5, 2005

The sum of all integers is the highest integer, plus 1, divided by 2.

posted by megatherium at 4:08 PM on October 5, 2005

Sorry, let me rephrase:

The sum of all integers is the highest integer, multiplied by that integer plus 1, divided by 2.

posted by megatherium at 4:16 PM on October 5, 2005 [1 favorite]

The sum of all integers in the sequence 1, 2, ... x, where x is any positive integer, you mean? There is no "highest integer".

posted by Orange Goblin at 5:35 PM on October 5, 2005

Tangentially, question for Johnny Assay (and, indirectly, Polchinski, I guess) - negative one twelfth? How the hey does he justify that?

posted by wanderingmind at 10:56 PM on October 5, 2005

See escabeche's explanation upthread, wanderingmind.

posted by gleuschk at 5:26 AM on October 6, 2005

This thread is closed to new comments.
