# 3.2: Series Anomalies


**Learning Objectives**

- Explain series convergence and anomalies

Up to this point, we have been somewhat frivolous in our approach to series. This mirrors the approach of eighteenth-century mathematicians, who ingeniously exploited calculus and series to provide mathematical and physical results that were virtually unobtainable before. Mathematicians were eager to push these techniques as far as they could to obtain their results, and they often showed good intuition regarding what was mathematically acceptable and what was not. However, as the envelope was pushed, questions about the validity of the methods surfaced.

As an illustration consider the series expansion

\[\frac{1}{1+x} = 1 - x + x^2 - x^3 + \cdots\]

If we substitute \(x = 1\) into this equation, we obtain

\[\frac{1}{2} = 1 - 1 + 1 - 1 + \cdots\]

If we group the terms as \((1-1)+(1-1)+\cdots\), the series would equal \(0\). The regrouping \(1 + (-1 + 1) + (-1 + 1) + \cdots\) provides an answer of \(1\). This violation of the associative law of addition did not escape the mathematicians of the 1700s. In his 1760 paper *On Divergent Series*, Euler said:

*Notable enough, however, are the controversies over the series \(1 - 1 + 1 - 1 +\) etc., whose sum was given by Leibniz as \(\frac{1}{2}\), although others disagree . . . Understanding of this question is to be sought in the word “sum;” this idea, if thus conceived - namely, the sum of a series is said to be that quantity to which it is brought closer as more terms of a series are taken - has relevance only for the convergent series, and we should in general give up this idea of sum for divergent series. On the other hand, as series in analysis arise from the expansion of fractions or irrational quantities or even of transcendentals, it will, in turn, be permissible in calculation to substitute in place of such series that quantity out of whose development it is produced.*

Even with this formal approach to series, an interesting question arises: the series for the antiderivative of \(\frac{1}{1+x}\) does converge at \(x = 1\), while the original series does not. Specifically, taking the antiderivative of the above series term by term, we obtain

\[\ln (1+x) = x - \frac{1}{2}x^2 + \frac{1}{3}x^3 - \cdots\]

If we substitute \(x = 1\) into this series, we obtain \(\ln 2 = 1 - \frac{1}{2} + \frac{1}{3} - \cdots\). It is not hard to see that such an alternating series converges. The following picture shows why. In this diagram, \(S_n\) denotes the partial sum \(S_n = 1 - \frac{1}{2} + \frac{1}{3} - \cdots + \frac{(-1)^{n+1}}{n}\).

**Figure \(\PageIndex{1}\):** Series convergence.

From the diagram we can see \(S_2 \leq S_4 \leq S_6 \leq \cdots \leq S_5 \leq S_3 \leq S_1\) and \(S_{2k+1} - S_{2k} = \frac{1}{2k+1}\). It seems that the sequence of partial sums will converge to whatever is in the “*middle*.” Our diagram indicates that it is \(\ln 2\) in the middle but actually this is not obvious. Nonetheless it is interesting that one series converges for \(x = 1\) but the other does not.
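The nesting of partial sums shown in the diagram can be checked numerically. The following Python sketch (ours, not part of the original text) computes the first several partial sums and verifies the two interleaved chains:

```python
# Partial sums S_n = 1 - 1/2 + 1/3 - ... + (-1)^(n+1)/n of the alternating
# harmonic series: the even-indexed sums increase, the odd-indexed sums
# decrease, and every even sum lies below every odd sum.
def S(n):
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

evens = [S(n) for n in range(2, 21, 2)]    # S_2, S_4, ..., S_20
odds = [S(n) for n in range(1, 20, 2)]     # S_1, S_3, ..., S_19
assert evens == sorted(evens)              # S_2 <= S_4 <= ...
assert odds == sorted(odds, reverse=True)  # ... <= S_3 <= S_1
assert max(evens) < min(odds)              # the two chains never cross
```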

Use the fact that

\[1 - \frac{1}{2} + \frac{1}{3} - \cdots + \frac{(-1)^{2k+1}}{2k} \leq \ln 2 \leq 1 - \frac{1}{2} + \frac{1}{3} - \cdots + \frac{(-1)^{2k+2}}{2k+1}\]

to determine how many terms of the series \(\sum_{n=1}^{\infty } \frac{(-1)^{n+1}}{n}\) should be added together to approximate \(\ln 2\) to within \(0.0001\) without actually computing what \(\ln 2\) is.
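One hedged way to approach this exercise computationally: the bracketing above says the even and odd partial sums trap \(\ln 2\), and consecutive partial sums differ by the next term, so the first omitted term bounds the error without any knowledge of \(\ln 2\). A Python sketch of that reasoning (the variable names are ours):

```python
# The sandwich S_2k <= ln 2 <= S_{2k+1} gives |S_2k - ln 2| <= 1/(2k+1),
# so we look for the first k where that gap drops to the tolerance.
tolerance = 0.0001

k = 1
while 1.0 / (2 * k + 1) > tolerance:
    k += 1
n_terms = 2 * k  # number of terms in the even partial sum S_2k
print(n_terms)
```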

There is an even more perplexing situation brought about by these examples. An infinite sum such as \(1 - 1 + 1 - 1 + \cdots\) appears not to satisfy the associative law for addition. And while a convergent series such as \(1 - \frac{1}{2} + \frac{1}{3} - \cdots\) does satisfy the associative law, it does not satisfy the commutative law. In fact, it fails to satisfy it rather spectacularly.

A generalization of the following result was stated and proved by Bernhard Riemann in 1854.

**Theorem \(\PageIndex{1}\)**

Let \(a\) be any real number. There exists a rearrangement of the series \(1 - \frac{1}{2} + \frac{1}{3} - \cdots\) which converges to \(a\).

This theorem shows that a series is most decidedly not a great big sum. It follows that a power series is not a great big polynomial.

To set the stage, consider the harmonic series

\[\sum_{n=1}^{\infty }\frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \cdots\]

Even though the individual terms in this series converge to \(0\), the series still diverges (to infinity) as evidenced by the inequality

\[\begin{align*} \left (1 + \frac{1}{2} \right ) + \left (\frac{1}{3} + \frac{1}{4} \right ) + \left (\frac{1}{5} + \frac{1}{6}+ \frac{1}{7} + \frac{1}{8} \right ) + \left (\frac{1}{9} + \cdots + \frac{1}{16} \right ) + \cdots & > \frac{1}{2} + \left (\frac{1}{4} + \frac{1}{4} \right ) + \left (\frac{1}{8} + \frac{1}{8}+ \frac{1}{8} + \frac{1}{8} \right ) + \left (\frac{1}{16} + \cdots + \frac{1}{16} \right ) + \cdots\\ &= \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \frac{1}{2} + \cdots \\ &= \infty \\ \end{align*}\]
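The grouping argument above implies the concrete bound \(H_{2^m} \geq 1 + \frac{m}{2}\) for the partial sums of the harmonic series, which can be spot-checked numerically. A short Python sketch (ours, not part of the original text):

```python
# The grouping bound above implies H_{2^m} >= 1 + m/2, so the partial sums
# of the harmonic series eventually exceed any fixed number.
def harmonic(n):
    """Partial sum H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

for m in range(1, 15):
    assert harmonic(2 ** m) >= 1 + m / 2.0
```

Notice how slow the divergence is: exceeding \(1 + \frac{m}{2}\) requires \(2^m\) terms.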

Armed with this fact, we can see why Theorem \(\PageIndex{1}\) is true. First note that

\[-\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots = -\frac{1}{2}\left ( 1 + \frac{1}{2} + \frac{1}{3} + \cdots \right )= -\infty\]

and

\[1 + \frac{1}{3} + \frac{1}{5} + \cdots \geq \frac{1}{2} + \frac{1}{4} +\frac{1}{6} + \cdots = \infty\]

This says that if we add enough terms of \(-\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots\) we can make such a sum as small as we wish, and if we add enough terms of \(1 + \frac{1}{3} + \frac{1}{5} + \cdots\) we can make such a sum as large as we wish. This provides us with the general outline of the proof. The trick is to add just enough positive terms until the sum is just greater than \(a\). Then we start to add on negative terms until the sum is just less than \(a\). Picking up where we left off with the positive terms, we add on just enough positive terms until we are just above \(a\) again. We then add on negative terms until we are below \(a\). In essence, we are bouncing back and forth around \(a\). If we do this carefully, then we can get this rearrangement to converge to \(a\). The notation in the proof below gets a bit hairy, but keep this general idea in mind as you read through it.
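The bouncing strategy just described can be sketched directly in Python (our illustration, with a sample target of \(0.3\); the function name is ours):

```python
# Riemann's rearrangement scheme for the alternating harmonic series:
# add positive terms 1, 1/3, 1/5, ... until the running sum exceeds the
# target, then negative terms -1/2, -1/4, ... until it drops below, repeat.
def rearranged_partial_sums(target, steps):
    total = 0.0
    pos, neg = 1, 2          # next odd and even denominators to use
    sums = []
    for _ in range(steps):
        if total <= target:
            total += 1.0 / pos
            pos += 2
        else:
            total -= 1.0 / neg
            neg += 2
        sums.append(total)
    return sums

sums = rearranged_partial_sums(0.3, 100000)
```

After many steps the running sum hovers near the target, since each overshoot is bounded by the (ever-shrinking) term just added.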

Let \(O_1\) be the first odd integer such that \(1 + \frac{1}{3} + \frac{1}{5} + \cdots +\frac{1}{O_1} > a\). Now choose \(E_1\) to be the first even integer such that

\[-\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} < a - \left (1 + \frac{1}{3} + \frac{1}{5} + \cdots +\frac{1}{O_1} \right )\]

Thus

\[1 + \frac{1}{3} + \frac{1}{5} + \cdots +\frac{1}{O_1} -\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} < a\]

Notice that we still have \(\frac{1}{O_1+2} + \frac{1}{O_1+4} + \cdots = \infty\). With this in mind, choose \(O_2\) to be the first odd integer with

\[\frac{1}{O_1+2} + \frac{1}{O_1+4} + \cdots + \frac{1}{O_2} > a - \left ( 1 + \frac{1}{3} + \frac{1}{5} + \cdots +\frac{1}{O_1} -\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} \right )\]

Thus we have

\[a < 1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} -\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} + \frac{1}{O_1+2} + \frac{1}{O_1+4} + \cdots + \frac{1}{O_2}\]

Furthermore, since

\[1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} -\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} + \frac{1}{O_1+2} + \frac{1}{O_1+4} + \cdots + \frac{1}{O_2 - 2} < a\]

we have

\[\left |1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} -\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} + \frac{1}{O_1+2} + \frac{1}{O_1+4} + \cdots + \frac{1}{O_2} - a \right | < \frac{1}{O_2}\]

In a similar fashion choose \(E_2\) to be the first even integer such that

\[1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} -\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} + \frac{1}{O_1+2} + \frac{1}{O_1+4} + \cdots + \frac{1}{O_2} - \frac{1}{E_1+2} - \frac{1}{E_1+4} - \cdots - \frac{1}{E_2} < a\]

Since

\[1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} -\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} + \frac{1}{O_1+2} + \frac{1}{O_1+4} + \cdots + \frac{1}{O_2} - \frac{1}{E_1+2} - \frac{1}{E_1+4} - \cdots - \frac{1}{E_2-2} > a\]

then

\[\left |1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} -\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} + \frac{1}{O_1+2} + \frac{1}{O_1+4} + \cdots + \frac{1}{O_2} - \frac{1}{E_1+2} - \frac{1}{E_1+4} - \cdots - \frac{1}{E_2} - a \right | < \frac{1}{E_2}\]

Again choose \(O_3\) to be the first odd integer such that

\[a < 1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} -\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} + \frac{1}{O_1+2} + \frac{1}{O_1+4} + \cdots + \frac{1}{O_2} - \frac{1}{E_1+2} - \frac{1}{E_1+4} - \cdots - \frac{1}{E_2} + \frac{1}{O_2+2} + \frac{1}{O_2+4} + \cdots + \frac{1}{O_3}\]

and notice that

\[\left | 1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} -\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} + \frac{1}{O_1+2} + \frac{1}{O_1+4} + \cdots + \frac{1}{O_2} - \frac{1}{E_1+2} - \frac{1}{E_1+4} - \cdots - \frac{1}{E_2} + \frac{1}{O_2+2} + \frac{1}{O_2+4} + \cdots + \frac{1}{O_3} - a \right | < \frac{1}{O_3}\]

Continue defining \(O_k\) and \(E_k\) in this fashion. Since \(\lim_{k \to \infty } \frac{1}{O_k} = \lim_{k \to \infty } \frac{1}{E_k} = 0\), it is evident that the partial sums

\[1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} -\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} + \frac{1}{O_1+2} + \frac{1}{O_1+4} + \cdots + \frac{1}{O_2} + \cdots - \frac{1}{E_{k-2}+2} - \frac{1}{E_{k-2}+4} - \cdots - \frac{1}{E_{k-1}} + \frac{1}{O_{k-1}+2} + \frac{1}{O_{k-1}+4} + \cdots + \frac{1}{O_k}\]

and

\[1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{O_1} -\frac{1}{2} - \frac{1}{4} - \frac{1}{6} - \cdots - \frac{1}{E_1} + \frac{1}{O_1+2} + \frac{1}{O_1+4} + \cdots + \frac{1}{O_2} + \cdots + \frac{1}{O_{k-1}+2} + \frac{1}{O_{k-1}+4} + \cdots + \frac{1}{O_k} - \frac{1}{E_{k-1}+2} - \frac{1}{E_{k-1}+4} - \cdots - \frac{1}{E_k}\]

must both converge to \(a\). Furthermore, every other partial sum of the rearranged series is trapped between two such extreme partial sums. This forces the entire rearranged series to converge to \(a\).
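The cut-off indices in the proof can be traced numerically. The Python sketch below (our illustration; the target \(a = 1.5\) and variable names are ours) finds \(O_1\), \(E_1\), and \(O_2\) exactly as the proof prescribes:

```python
# Trace the proof's first cut-offs for the sample target a = 1.5.
# O_k and E_k are denominators, as in the text, not counts of terms.
a = 1.5
total = 0.0

pos = 1
while total <= a:            # O_1: first odd denominator with positive block > a
    total += 1.0 / pos
    pos += 2
O1 = pos - 2

neg = 2
while total >= a:            # E_1: first even denominator bringing the sum below a
    total -= 1.0 / neg
    neg += 2
E1 = neg - 2

while total <= a:            # O_2: resume the odd terms where we left off
    total += 1.0 / pos
    pos += 2
O2 = pos - 2

print(O1, E1, O2)
```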

The next two problems are similar to the one above, but are notationally easier since we don’t need to worry about converging to an actual number. We only need to make the rearrangement grow (or shrink) without bound.

Show that there is a rearrangement of \(1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots\) which diverges to \(\infty\).

Show that there is a rearrangement of \(1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots\) which diverges to \(-\infty\).
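One possible strategy for the first of these problems, sketched in Python (ours, not part of the original text): after each integer milestone, insert a single negative term, and since the positive terms alone diverge, every milestone is eventually passed.

```python
# Sketch: one positive block big enough to pass each integer milestone,
# followed by a single negative term, yields partial sums growing without bound.
def divergent_rearrangement_sums(milestones):
    total = 0.0
    pos, neg = 1, 2
    sums = []
    for goal in range(1, milestones + 1):
        while total <= goal:
            total += 1.0 / pos   # positive terms 1, 1/3, 1/5, ...
            pos += 2
        total -= 1.0 / neg       # one negative term -1/2, -1/4, ...
        neg += 2
        sums.append(total)
    return sums

s = divergent_rearrangement_sums(5)
```

Every term of the original series is eventually used exactly once, so this really is a rearrangement, and each recorded sum exceeds its milestone minus at most \(\frac{1}{2}\).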

It is fun to know that we can rearrange some series to make them add up to anything we like, but there is a more fundamental idea at play here. That the negative terms of the alternating Harmonic Series *diverge* to negative infinity and the positive terms *diverge* to positive infinity make the *convergence* of the alternating series very special.

Consider: first we add \(1\). This is one of the positive terms, so our sum is starting to increase without bound. Next we add \(-\frac{1}{2}\), which is one of the negative terms, so our sum has turned around and is now starting to decrease without bound. Then another positive term is added: increasing without bound. Then another negative term: decreasing. And so on. The convergence of the alternating Harmonic Series is the result of a delicate balance between a tendency to run off to positive infinity and back to negative infinity. When viewed in this light it is not really too surprising that rearranging the terms can destroy this delicate balance.

Naturally, the alternating Harmonic Series is not the only such series. Any such series is said to converge “*conditionally*” – the condition being the specific arrangement of the terms.

To stir the pot a bit more, some series do satisfy the commutative property. More specifically, one can show that any rearrangement of the series \(1 - \frac{1}{2^2} + \frac{1}{3^2} - \cdots\) must converge to the same value as the original series (which happens to be \(\int_{x=0}^{1} \frac{\ln (1+x)}{x}dx \approx 0.8224670334\)). Why does one series behave so nicely whereas the other does not?
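The contrast between the two series can be illustrated numerically. A finite truncation only hints at the infinite statement, since any finite sum is trivially rearrangement-invariant, but it does show a shuffled version landing on the same value; the Python sketch below is ours, not part of the text:

```python
import random

# 1 - 1/2^2 + 1/3^2 - ... converges absolutely; a shuffled truncation
# recovers the same value (about 0.8224670334), unlike rearrangements of
# the alternating harmonic series, which can converge anywhere.
terms = [(-1) ** (n + 1) / n ** 2 for n in range(1, 100001)]

original = sum(terms)
random.seed(1)
shuffled = terms[:]
random.shuffle(shuffled)
rearranged = sum(shuffled)

assert abs(original - rearranged) < 1e-9
assert abs(original - 0.8224670334) < 1e-6
```

The decisive difference, as the next section makes precise, is that here the positive and negative parts each converge on their own.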

Issues such as these and, more generally, the validity of using the infinitely small and infinitely large certainly existed in the 1700’s, but they were overshadowed by the utility of the calculus. Indeed, foundational questions raised by the above examples, while certainly interesting and of importance, did not significantly deter the exploitation of calculus in studying physical phenomena. However, the envelope eventually was pushed to the point that not even the most practically oriented mathematician could avoid the foundational issues.