3.3: Convergence Tests
It is very common to encounter series for which it is difficult, or even virtually impossible, to determine the sum exactly. Often you try to evaluate the sum approximately by truncating it, i.e. having the index run only up to some finite \(N\text{,}\) rather than infinity. But there is no point in doing so if the series diverges. So you would like to at least know if the series converges or diverges. Furthermore you would also like to know what error is introduced when you approximate \(\sum_{n=1}^\infty a_n\) by the “truncated series” \(\sum_{n=1}^N a_n\text{.}\) That is called the truncation error. There are a number of “convergence tests” to help you with this.
The Divergence Test
Our first test is very easy to apply, but it is also rarely useful. It just allows us to quickly reject some “trivially divergent” series. It is based on the observation that
 by definition, a series \(\sum_{n=1}^\infty a_n\) converges to \(S\) when the partial sums \(S_N=\sum_{n=1}^N a_n\) converge to \(S\text{.}\)
 Then, as \(N\rightarrow\infty\text{,}\) we have \(S_N\rightarrow S\) and, because \(N-1\rightarrow\infty\) too, we also have \(S_{N-1}\rightarrow S\text{.}\)
 So \(a_N=S_N-S_{N-1}\rightarrow S-S=0\text{.}\)
This tells us that, if we already know that a given series \(\sum a_n\) is convergent, then the \(n^{\rm th}\) term of the series, \(a_n\text{,}\) must converge to \(0\) as \(n\) tends to infinity. In this form, the test is not so useful. However the contrapositive of the statement is a useful test for divergence.
If the sequence \(\big\{a_n\big\}_{n=1}^\infty\) fails to converge to zero as \(n\rightarrow\infty\text{,}\) then the series \(\sum_{n=1}^\infty a_n\) diverges.
Let \(a_n=\frac{n}{n+1}\text{.}\) Then
\[ \lim_{n\rightarrow\infty} a_n =\lim_{n\rightarrow\infty}\frac{n}{n+1} =\lim_{n\rightarrow\infty}\frac{1}{1+\frac{1}{n}} =1\ne 0 \nonumber \]
So the series \(\sum_{n=1}^\infty \frac{n}{n+1}\) diverges.
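As a numerical sanity check (a Python sketch, not part of the original text), we can watch the terms \(a_n=\frac{n}{n+1}\) approach \(1\) rather than \(0\), and watch the partial sums grow without bound, just as the divergence test predicts.

```python
# Sketch: the terms a_n = n/(n+1) approach 1, not 0, so the divergence
# test applies and the partial sums S_N grow without bound.
N = 100000
a_N = N / (N + 1)
S_N = sum(n / (n + 1) for n in range(1, N + 1))

print(a_N)  # very close to 1
print(S_N)  # roughly N minus a logarithm: huge, and still growing with N
```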
The divergence test is a “one way test”. It tells us that if \(\lim_{n\rightarrow\infty}a_n\) is nonzero, or fails to exist, then the series \(\sum_{n=1}^\infty a_n\) diverges. But it tells us absolutely nothing when \(\lim_{n\rightarrow\infty}a_n=0\text{.}\) In particular, it is perfectly possible for a series \(\sum_{n=1}^\infty a_n\) to diverge even though \(\lim_{n\rightarrow\infty}a_n=0\text{.}\) An example is \(\sum_{n=1}^\infty \frac{1}{n}\text{.}\) We'll show in Example 3.3.6, below, that it diverges.
Now while convergence or divergence of series like \(\sum_{n=1}^\infty \frac{1}{n}\) can be determined using some clever tricks — see the optional §3.3.9 — it would be much better to have methods that are more systematic and rely less on being sneaky. Over the next subsections we will discuss several methods for testing series for convergence.
Note that while these tests will tell us whether or not a series converges, they do not (except in rare cases) tell us what the series adds up to. For example, the test we will see in the next subsection tells us quite immediately that the series
\begin{gather*} \sum_{n=1}^\infty \frac{1}{n^3} \end{gather*}
converges. However it does not tell us its value.
The Integral Test
In the integral test, we think of a series \(\sum_{n=1}^\infty a_n\text{,}\) that we cannot evaluate explicitly, as the area of a union of rectangles, with \(a_n\) representing the area of a rectangle of width one and height \(a_n\text{.}\) Then we compare that area with the area represented by an integral, that we can evaluate explicitly, much as we did in Theorem 1.12.17, the comparison test for improper integrals. We'll start with a simple example, to illustrate the idea. Then we'll move on to a formulation of the test in general.
Visualise the terms of the harmonic series \(\sum_{n=1}^\infty\frac{1}{n}\) as a bar graph — each term is a rectangle of height \(\frac{1}{n}\) and width \(1\text{.}\) The limit of the series is then the limiting area of this union of rectangles. Consider the sketch on the left below.
It shows that the area of the shaded columns, \(\sum_{n=1}^4\frac{1}{n}\text{,}\) is bigger than the area under the curve \(y=\frac{1}{x}\) with \(1\le x\le 5\text{.}\) That is
\begin{align*} \sum_{n=1}^4 \frac{1}{n} & \ge \int_1^5 \frac{1}{x}\, d{x} \end{align*}
If we were to continue drawing the columns all the way out to infinity, then we would have
\begin{align*} \sum_{n=1}^\infty \frac{1}{n} & \ge \int_1^\infty \frac{1}{x}\, d{x} \end{align*}
We are able to compute this improper integral exactly:
\begin{align*} \int_1^\infty \frac{1}{x} \, d{x} &= \lim_{R \to \infty} \Big[ \log x \Big]_1^R = \lim_{R \to \infty} \log R = +\infty \end{align*}
That is, the area under the curve diverges to \(+\infty\) and so the area represented by the columns must also diverge to \(+\infty\text{.}\)
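The rectangle-versus-area comparison can be checked numerically. The following Python sketch (not part of the original text) verifies that the \(N^{\rm th}\) harmonic partial sum dominates \(\int_1^{N+1}\frac{dx}{x}=\log(N+1)\text{,}\) which grows without bound.

```python
import math

# The N-th harmonic partial sum dominates the integral of 1/x from 1
# to N+1, because each rectangle of height 1/n sits above the curve
# on [n, n+1].
for N in (10, 1000, 100000):
    H_N = sum(1.0 / n for n in range(1, N + 1))
    assert H_N >= math.log(N + 1)
    print(N, H_N, math.log(N + 1))
```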
It should be clear that the above argument can be quite easily generalised. For example the same argument holds mutatis mutandis for the series
\begin{gather*} \sum_{n=1}^\infty \frac{1}{n^2} \end{gather*}
Indeed we see from the sketch on the right above that
\begin{align*} \sum_{n=2}^N \frac{1}{n^2} &\le \int_1^N \frac{1}{x^2}\, d{x} \end{align*}
and hence
\begin{gather*} \sum_{n=2}^\infty \frac{1}{n^2} \leq \int_1^\infty \frac{1}{x^2}\, d{x} \end{gather*}
This last improper integral is easy to evaluate:
\begin{align*} \int_1^\infty \frac{1}{x^2}\, d{x} &= \lim_{R\to\infty} \left[ -\frac{1}{x} \right]_1^R\\ &= \lim_{R\to\infty} \left( 1 - \frac{1}{R} \right) = 1 \end{align*}
Thus we know that
\begin{gather*} \sum_{n=1}^\infty \frac{1}{n^2} = 1+ \sum_{n=2}^\infty \frac{1}{n^2} \leq 2. \end{gather*}
and so the series must converge.
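As a numerical sketch (in Python, not part of the original text), we can watch the partial sums of \(\sum \frac{1}{n^2}\) increase while staying bounded, consistent with convergence. (The full sum is known, from Euler's solution of the Basel problem, to equal \(\pi^2/6\text{;}\) that exact value is quoted only to check the computation.)

```python
import math

# The partial sums of sum 1/n^2 increase but stay bounded, creeping up
# toward the known value pi^2/6 ≈ 1.64493.
partial = 0.0
for n in range(1, 100001):
    partial += 1.0 / (n * n)

print(partial)  # just shy of pi^2/6
```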
The above arguments are formalised in the following theorem.
Let \(N_0\) be any natural number. If \(f(x)\) is a function which is defined and continuous for all \(x\ge N_0\) and which obeys
 \(f(x)\ge 0\) for all \(x\ge N_0\) and
 \(f(x)\) decreases as \(x\) increases and
 \(f(n)=a_n\) for all \(n\ge N_0\text{.}\)
Then
\[ \sum_{n=1}^\infty a_n\text{ converges }\iff \int_{N_0}^\infty f(x)\ dx\text{ converges} \nonumber \]
Furthermore, when the series converges, the truncation error
\[ \bigg|\sum_{n=1}^\infty a_n-\sum_{n=1}^N a_n\bigg|\le \int_N^\infty f(x)\ dx\qquad\text{for all $N\ge N_0$} \nonumber \]

Let \(I\) be any fixed integer with \(I \gt N_0\text{.}\) Then
 \(\sum_{n=1}^\infty a_n\) converges if and only if \(\sum_{n=I}^\infty a_n\) converges — removing a fixed finite number of terms from a series cannot impact whether or not it converges.
 Since \(a_n\ge 0\) for all \(n\ge I \gt N_0\text{,}\) the sequence of partial sums \(s_\ell=\sum_{n=I}^\ell a_n\) obeys \(s_{\ell+1} = s_\ell+a_{\ell+1} \ge s_\ell\text{.}\) That is, \(s_\ell\) increases as \(\ell\) increases.
 So \(\big\{s_\ell\big\}\) must either converge to some finite number or increase to infinity. That is, either \(\sum_{n=I}^\infty a_n\) converges to a finite number or it is \(+\infty\text{.}\)
Look at the figure above. The shaded area in the figure is \(\sum_{n=I}^\infty a_n\)
 because the first shaded rectangle has height \(a_I\) and width \(1\text{,}\) and hence area \(a_I\) and
 the second shaded rectangle has height \(a_{I+1}\) and width \(1\text{,}\) and hence area \(a_{I+1}\text{,}\) and so on
This shaded area is smaller than the area under the curve \(y=f(x)\) for \(I-1\le x \lt \infty\text{.}\) So
\[ \sum_{n=I}^\infty a_n \le \int_{I-1}^\infty f(x)\ dx \nonumber \]
and, if the integral is finite, the sum \(\sum_{n=I}^\infty a_n\) is finite too. Furthermore, the desired bound on the truncation error is just the special case of this inequality with \(I=N+1\text{:}\)
\begin{gather*} \sum_{n=1}^\infty a_n - \sum_{n=1}^N a_n =\sum_{n=N+1}^\infty a_n \le \int_N^\infty f(x)\ dx \end{gather*}
For the “divergence case” look at the figure above. The (new) shaded area in the figure is again \(\sum_{n=I}^\infty a_n\) because
 the first shaded rectangle has height \(a_I\) and width \(1\text{,}\) and hence area \(a_I\) and
 the second shaded rectangle has height \(a_{I+1}\) and width \(1\text{,}\) and hence area \(a_{I+1}\text{,}\) and so on
This time the shaded area is larger than the area under the curve \(y=f(x)\) for \(I\le x \lt \infty\text{.}\) So
\[ \sum_{n=I}^\infty a_n \ge \int_I^\infty f(x)\ dx \nonumber \]
and, if the integral is infinite, the sum \(\sum_{n=I}^\infty a_n\) is infinite too.
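The theorem's truncation-error bound is easy to see in action. Here is a Python sketch (not part of the original text) with \(f(x)=\frac{1}{x^2}\text{:}\) the tail of \(\sum\frac{1}{n^2}\) beyond \(n=N\) is at most \(\int_N^\infty\frac{dx}{x^2}=\frac{1}{N}\text{.}\) The exact value \(\pi^2/6\) of the full sum is quoted only to check the bound; it is not needed to apply the theorem.

```python
import math

# Truncation error bound for sum 1/n^2: the tail past n = N is at most
# the integral of 1/x^2 from N to infinity, which equals 1/N.
N = 1000
S_N = sum(1.0 / (n * n) for n in range(1, N + 1))
tail_bound = 1.0 / N               # value of the integral from N to infinity
exact = math.pi ** 2 / 6           # known value of the full sum

assert 0.0 <= exact - S_N <= tail_bound
print(S_N, exact - S_N, tail_bound)
```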
Now that we have the integral test, it is straightforward to determine for which values of \(p\) the series
\begin{gather*} \sum_{n=1}^\infty \frac{1}{n^p} \end{gather*}
converges.
Let \(p \gt 0\text{.}\) We'll now use the integral test to determine whether or not the series \(\sum_{n=1}^\infty\frac{1}{n^p}\) (which is sometimes called the \(p\)-series) converges.
 To do so, we need a function \(f(x)\) that obeys \(f(n)=a_n=\frac{1}{n^p}\) for all \(n\) bigger than some \(N_0\text{.}\) Certainly \(f(x)=\frac{1}{x^p}\) obeys \(f(n)=\frac{1}{n^p}\) for all \(n\ge 1\text{.}\) So let's pick this \(f\) and try \(N_0=1\text{.}\) (We can always increase \(N_0\) later if we need to.)
 This function also obeys the other two conditions of Theorem 3.3.5:
 \(f(x) \gt 0\) for all \(x\ge N_0=1\) and
 \(f(x)\) decreases as \(x\) increases because \(f'(x)=-p\frac{1}{x^{p+1}} \lt 0\) for all \(x\ge N_0=1\text{.}\)
 So the integral test tells us that the series \(\sum_{n=1}^\infty\frac{1}{n^p}\) converges if and only if the integral \(\int_1^\infty\frac{dx}{x^p}\) converges.
 We have already seen, in Example 1.12.8, that the integral \(\int_1^\infty\frac{dx}{x^p}\) converges if and only if \(p \gt 1\text{.}\)
So we conclude that \(\sum_{n=1}^\infty\frac{1}{n^p}\) converges if and only if \(p \gt 1\text{.}\) This is sometimes called the \(p\)test.
 In particular, the series \(\sum_{n=1}^\infty\frac{1}{n}\text{,}\) which is called the harmonic series, has \(p=1\) and so diverges. As we add more and more terms of this series together, the terms we add, namely \(\frac{1}{n}\text{,}\) get smaller and smaller and tend to zero, but they tend to zero so slowly that the full sum is still infinite.
 On the other hand, the series \(\sum_{n=1}^\infty\frac{1}{n^{1.000001}}\) has \(p = 1.000001 \gt 1\) and so converges. This time as we add more and more terms of this series together, the terms we add, namely \(\frac{1}{n^{1.000001}}\text{,}\) tend to zero (just) fast enough that the full sum is finite. Mind you, for this example, the convergence takes place very slowly — you have to take a huge number of terms to get a decent approximation to the full sum. If we approximate \(\sum_{n=1}^\infty\frac{1}{n^{1.000001}}\) by the truncated series \(\sum_{n=1}^N\frac{1}{n^{1.000001}}\text{,}\) we make an error of at most
\begin{align*} \int_N^\infty \frac{dx}{x^{1.000001}} & = \lim_{R\rightarrow\infty} \int_N^R \frac{dx}{x^{1.000001}}\\ & = \lim_{R\rightarrow\infty} -\frac{1}{0.000001} \Big[\frac{1}{R^{0.000001}}-\frac{1}{N^{0.000001}}\Big]\\ & =\frac{10^6}{N^{0.000001}} \end{align*}
This does tend to zero as \(N\rightarrow\infty\text{,}\) but really slowly.
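Just how slowly can be seen by evaluating the error bound \(10^6/N^{0.000001}\) for some enormous values of \(N\text{.}\) The short Python sketch below (not part of the original text) shows that even for astronomically large \(N\) the bound has barely decreased from \(10^6\text{.}\)

```python
# The error bound 10^6 / N^0.000001 from the example, evaluated at some
# very large N: it decays to zero, but absurdly slowly.
def error_bound(N):
    return 1e6 / N ** 0.000001

for N in (10 ** 6, 10 ** 12, 10 ** 100):
    print(N, error_bound(N))
```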
We now know that the dividing line between convergence and divergence of \(\sum_{n=1}^\infty\frac{1}{n^p}\) occurs at \(p=1\text{.}\) We can dig a little deeper and ask ourselves how much more quickly than \(\frac{1}{n}\) the \(n^{\rm th}\) term needs to shrink in order for the series to converge. We know that for large \(x\text{,}\) the function \(\log x\) is smaller than \(x^a\) for any positive \(a\) — you can convince yourself of this with a quick application of L'Hôpital's rule. So it is not unreasonable to ask whether the series
\begin{gather*} \sum_{n=2}^\infty \frac{1}{n \log n} \end{gather*}
converges. Notice that we sum from \(n=2\) because when \(n=1\text{,}\) \(n\log n=0\text{.}\) And we don't need to stop there. We can analyse the convergence of this sum with any power of \(\log n\text{.}\)
Let \(p \gt 0\text{.}\) We'll now use the integral test to determine whether or not the series \(\sum\limits_{n=2}^\infty\frac{1}{n(\log n)^p}\) converges.
 As in the last example, we start by choosing a function that obeys \(f(n)=a_n=\frac{1}{n(\log n)^p}\) for all \(n\) bigger than some \(N_0\text{.}\) Certainly \(f(x)=\frac{1}{x(\log x)^p}\) obeys \(f(n)=\frac{1}{n(\log n)^p}\) for all \(n\ge 2\text{.}\) So let's use that \(f\) and try \(N_0=2\text{.}\)
 Now let's check the other two conditions of Theorem 3.3.5:
 Both \(x\) and \(\log x\) are positive for all \(x \gt 1\text{,}\) so \(f(x) \gt 0\) for all \(x\ge N_0=2\text{.}\)
 As \(x\) increases both \(x\) and \(\log x\) increase and so \(x(\log x)^p\) increases and \(f(x)\) decreases.
 So the integral test tells us that the series \(\sum\limits_{n=2}^\infty\frac{1}{n(\log n)^p}\) converges if and only if the integral \(\int_2^\infty\frac{dx}{x (\log x)^p}\) converges.
 To test the convergence of the integral, we make the substitution \(u=\log x\text{,}\) \(du=\frac{dx}{x}\text{.}\)
\begin{gather*} \int_2^R \frac{dx}{x (\log x)^p} =\int_{\log 2}^{\log R}\frac{du}{u^p} \end{gather*}
We already know that the integral \(\int_1^\infty\frac{du}{u^p}\text{,}\) and hence the integral \(\int_2^R \frac{dx}{x (\log x)^p}\) as \(R\rightarrow\infty\text{,}\) converges if and only if \(p \gt 1\text{.}\)
So we conclude that \(\sum\limits_{n=2}^\infty\frac{1}{n(\log n)^p}\) converges if and only if \(p \gt 1\text{.}\)
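For a concrete instance, here is a Python sketch (not part of the original text) with \(p=2\text{:}\) by the integral bound, the tail of the sum past \(n=2\) is at most \(\int_2^\infty \frac{dx}{x(\log x)^2} = \frac{1}{\log 2}\text{,}\) so the partial sums stay below \(a_2+\frac{1}{\log 2}\text{.}\)

```python
import math

# With p = 2, the tail of sum 1/(n (log n)^2) past n = 2 is bounded by
# the integral from 2 to infinity, whose value is 1/log 2. So every
# partial sum is at most a_2 + 1/log 2.
p = 2
bound = 1.0 / (2 * math.log(2) ** p) + 1.0 / math.log(2)

partial = sum(1.0 / (n * math.log(n) ** p) for n in range(2, 200001))
assert partial < bound
print(partial, bound)
```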
The Comparison Test
Our next convergence test is the comparison test. It is much like the comparison test for improper integrals (see Theorem 1.12.17) and is true for much the same reasons. The rough idea is quite simple. A sum of larger terms must be bigger than a sum of smaller terms. So if we know the big sum converges, then the small sum must converge too. On the other hand, if we know the small sum diverges, then the big sum must also diverge. Formalising this idea gives the following theorem.
Let \(N_0\) be a natural number and let \(K \gt 0\text{.}\)
 If \(|a_n|\le K c_n\) for all \(n\ge N_0\) and \(\sum\limits_{n=0}^\infty c_n\) converges, then \(\sum\limits_{n=0}^\infty a_n\) converges.
 If \(a_n\ge K d_n\ge0 \) for all \(n\ge N_0\) and \(\sum\limits_{n=0}^\infty d_n\) diverges, then \(\sum\limits_{n=0}^\infty a_n\) diverges.

We will not prove this theorem here. We'll just observe that it is very reasonable. That's why there are quotation marks around “Proof”. For an actual proof see the optional section 3.3.10.
 If \(\sum\limits_{n=0}^\infty c_n\) converges to a finite number and if the terms in \(\sum\limits_{n=0}^\infty a_n\) are smaller than the terms in \(\sum\limits_{n=0}^\infty c_n\text{,}\) then it is no surprise that \(\sum\limits_{n=0}^\infty a_n\) converges too.
 If \(\sum\limits_{n=0}^\infty d_n\) diverges (i.e. adds up to \(\infty\)) and if the terms in \(\sum\limits_{n=0}^\infty a_n\) are larger than the terms in \(\sum\limits_{n=0}^\infty d_n\text{,}\) then of course \(\sum\limits_{n=0}^\infty a_n\) adds up to \(\infty\text{,}\) and so diverges, too.
The comparison test for series is also used in much the same way as is the comparison test for improper integrals. Of course, one needs a good series to compare against, and often the series \(\sum n^{-p}\) (from Example 3.3.6), for some \(p \gt 0\text{,}\) turns out to be just what is needed.
We could determine whether or not the series \(\sum_{n=1}^\infty\frac{1}{n^2+2n+3}\) converges by applying the integral test. But it is not worth the effort. Whether or not any series converges is determined by the behaviour of the summand for very large \(n\text{.}\) So the first step in tackling such a problem is to develop some intuition about the behaviour of \(a_n\) when \(n\) is very large.
 Step 1: Develop intuition. In this case, when \(n\) is very large, \(n^2\gg 2n \gg 3\) so that \(\frac{1}{n^2+2n+3}\approx\frac{1}{n^2}\text{.}\) We already know, from Example 3.3.6, that \(\sum_{n=1}^\infty\frac{1}{n^p}\) converges if and only if \(p \gt 1\text{.}\) So \(\sum_{n=1}^\infty\frac{1}{n^2}\text{,}\) which has \(p=2\text{,}\) converges, and we would expect that \(\sum_{n=1}^\infty\frac{1}{n^2+2n+3}\) converges too.
 Step 2: Verify intuition. We can use the comparison test to confirm that this is indeed the case. For any \(n\ge 1\text{,}\) \(n^2+2n+3 \gt n^2\text{,}\) so that \(\frac{1}{n^2+2n+3}\le\frac{1}{n^2}\text{.}\) So the comparison test, Theorem 3.3.8, with \(a_n=\frac{1}{n^2+2n+3}\) and \(c_n=\frac{1}{n^2}\text{,}\) tells us that \(\sum_{n=1}^\infty\frac{1}{n^2+2n+3}\) converges.
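The term-by-term comparison in Step 2 can also be confirmed numerically. The Python sketch below (not part of the original text) checks the inequality for a range of \(n\) and shows the partial sums staying bounded.

```python
# Term-by-term check of the comparison 1/(n^2+2n+3) <= 1/n^2, followed
# by a look at the (bounded) partial sums of the smaller series.
N = 100000
assert all(1.0 / (n * n + 2 * n + 3) <= 1.0 / (n * n) for n in range(1, N + 1))

partial = sum(1.0 / (n * n + 2 * n + 3) for n in range(1, N + 1))
print(partial)  # bounded above by the sum of 1/n^2
```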
Of course the previous example was “rigged” to give an easy application of the comparison test. It is often relatively easy, using arguments like those in Example 3.3.9, to find a “simple” series \(\sum_{n=1}^\infty b_n\) with \(b_n\) almost the same as \(a_n\) when \(n\) is large. However it is pretty rare that \(a_n\le b_n\) for all \(n\text{.}\) It is much more common that \(a_n\le K b_n\) for some constant \(K\text{.}\) This is enough to allow application of the comparison test. Here is an example.
As in the previous example, the first step is to develop some intuition about the behaviour of \(a_n\) when \(n\) is very large.
 Step 1: Develop intuition. When \(n\) is very large,
 \(n\gg \cos n\) so that the numerator \(n+\cos n\approx n\) and
 \(n^3 \gg \frac{1}{3}\) so that the denominator \(n^3-\frac{1}{3}\approx n^3\text{.}\)
So when \(n\) is very large
\[ a_n=\frac{n+\cos n}{n^3-\frac{1}{3}}\approx\frac{n}{n^3}=\frac{1}{n^2} \nonumber \]
We already know from Example 3.3.6, with \(p=2\text{,}\) that \(\sum_{n=1}^\infty\frac{1}{n^2}\) converges, so we would expect that \(\sum_{n=1}^\infty\frac{n+\cos n}{n^3-\frac{1}{3}}\) converges too.
 Step 2: Verify intuition. We can use the comparison test to confirm that this is indeed the case. To do so we need to find a constant \(K\) such that \(a_n=\frac{n+\cos n}{n^3-1/3}\) is smaller than \(\frac{K}{n^2}\) for all \(n\text{.}\) A good way to do that is to factor the dominant term (in this case \(n\)) out of the numerator and also factor the dominant term (in this case \(n^3\)) out of the denominator.
\[ a_n=\frac{n+\cos n}{n^3-\frac{1}{3}} =\frac{n}{n^3}\ \frac{1+\frac{\cos n}{n}}{1-\frac{1}{3n^3}} =\frac{1}{n^2}\ \frac{1+\frac{\cos n}{n}}{1-\frac{1}{3n^3}} \nonumber \]
So now we need to find a constant \(K\) such that \(\frac{1+\frac{\cos n}{n}}{1-\frac{1}{3n^3}}\) is smaller than \(K\) for all \(n\ge 1\text{.}\)
 First consider the numerator \(1+(\cos n)\frac{1}{n}\text{.}\) For all \(n\ge 1\)
 \(\frac{1}{n}\le 1\) and
 \(\cos n\le 1\)
So the numerator \(1+(\cos n)\frac{1}{n}\) is always smaller than \(1+(1)\frac{1}{1}=2\text{.}\)
 Next consider the denominator \(1\frac{1}{3n^3}\text{.}\)
 When \(n\ge 1\text{,}\) \(\frac{1}{3n^3}\) lies between \(\frac{1}{3}\) and \(0\) so that
 \(1-\frac{1}{3n^3}\) is between \(\frac{2}{3}\) and \(1\) and consequently
 \(\frac{1}{1-\frac{1}{3n^3}}\) is between \(\frac{3}{2}\) and \(1\text{.}\)
 As the numerator \(1+(\cos n)\frac{1}{n}\) is always smaller than \(2\) and \(\frac{1}{1-\frac{1}{3n^3}}\) is always smaller than \(\frac{3}{2}\text{,}\) the fraction
\[ \frac{1+\frac{\cos n}{n}}{1-\frac{1}{3n^3}} \le 2\Big(\frac{3}{2}\Big) =3 \nonumber \]
We now know that
\[ a_n =\frac{1}{n^2}\ \frac{1+\frac{\cos n}{n}}{1-\frac{1}{3n^3}} \le \frac{3}{n^2} \nonumber \]
and, since we know \(\sum_{n=1}^\infty n^{-2}\) converges, the comparison test tells us that \(\sum_{n=1}^\infty\frac{n+\cos n}{n^3-1/3}\) converges.
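The constant \(K=3\) found above can be spot-checked numerically. Here is a Python sketch (not part of the original text) verifying the bound \(a_n\le \frac{3}{n^2}\) over a range of \(n\text{.}\)

```python
import math

# Spot-check of the comparison bound: with a_n = (n + cos n)/(n^3 - 1/3),
# the argument above gives a_n <= 3/n^2 for all n >= 1.
def a(n):
    return (n + math.cos(n)) / (n ** 3 - 1.0 / 3.0)

assert all(a(n) <= 3.0 / n ** 2 for n in range(1, 10001))
partial = sum(a(n) for n in range(1, 10001))
print(partial)  # the partial sums settle down to a finite value
```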
The last example was actually a relatively simple application of the comparison theorem — finding a suitable constant \(K\) can be really tedious. Fortunately, there is a variant of the comparison test that completely eliminates the need to explicitly find \(K\text{.}\)
The idea behind this isn't too complicated. We have already seen that the convergence or divergence of a series depends not on its first few terms, but just on what happens when \(n\) is really large. Consequently, if we can work out how the series terms behave for really big \(n\) then we can work out if the series converges. So instead of comparing the terms of our series for all \(n\text{,}\) just compare them when \(n\) is big.
Let \(\sum_{n=1}^\infty a_n\) and \(\sum_{n=1}^\infty b_n\) be two series with \(b_n \gt 0\) for all \(n\text{.}\) Assume that
\[ \lim_{n\rightarrow\infty}\frac{a_n}{b_n}=L \nonumber \]
exists.
 If \(\sum_{n=1}^\infty b_n\) converges, then \(\sum_{n=1}^\infty a_n\) converges too.
 If \(L\ne 0\) and \(\sum_{n=1}^\infty b_n\) diverges, then \(\sum_{n=1}^\infty a_n\) diverges too.
In particular, if \(L\ne 0\text{,}\) then \(\sum_{n=1}^\infty a_n\) converges if and only if \(\sum_{n=1}^\infty b_n\) converges.

(a) Because we are told that \(\lim_{n\rightarrow\infty}\frac{a_n}{b_n}=L \text{,}\) we know that,
 when \(n\) is large, \(\frac{a_n}{b_n}\) is very close to \(L\text{,}\) so that \(\Big|\frac{a_n}{b_n}\Big|\) is very close to \(|L|\text{.}\)
 In particular, there is some natural number \(N_0\) so that \(\Big|\frac{a_n}{b_n}\Big|\le |L|+1\text{,}\) for all \(n\ge N_0\text{,}\) and hence
 \(|a_n|\le Kb_n\) with \(K=|L|+1\text{,}\) for all \(n\ge N_0\text{.}\)
 The comparison Theorem 3.3.8 now implies that \(\sum_{n=1}^\infty a_n\) converges.
(b) Let's suppose that \(L \gt 0\text{.}\) (If \(L \lt 0\text{,}\) just replace \(a_n\) with \(-a_n\text{.}\)) Because we are told that \(\lim_{n\rightarrow\infty}\frac{a_n}{b_n}=L \text{,}\) we know that,
 when \(n\) is large, \(\frac{a_n}{b_n}\) is very close to \(L\text{.}\)
 In particular, there is some natural number \(N\) so that \(\frac{a_n}{b_n}\ge \frac{L}{2}\) for all \(n\ge N\text{,}\) and hence
 \(a_n\ge Kb_n\) with \(K=\frac{L}{2} \gt 0\text{,}\) for all \(n\ge N\text{.}\)
 The comparison Theorem 3.3.8 now implies that \(\sum_{n=1}^\infty a_n\) diverges.
The next two examples illustrate how much of an improvement the above theorem is over the straight comparison test (though of course, we needed the comparison test to develop the limit comparison test).
Set \(a_n= \frac{\sqrt{n+1}}{n^2-2n+3}\text{.}\) We first try to develop some intuition about the behaviour of \(a_n\) for large \(n\) and then we confirm that our intuition was correct.
 Step 1: Develop intuition. When \(n\gg 1\text{,}\) the numerator \(\sqrt{n+1}\approx \sqrt{n}\text{,}\) and the denominator \(n^2-2n+3\approx n^2\) so that \(a_n\approx \frac{\sqrt{n}}{n^2}=\frac{1}{n^{3/2}}\) and it looks like our series should converge by Example 3.3.6 with \(p=\frac{3}{2}\text{.}\)
 Step 2: Verify intuition. To confirm our intuition we set \(b_n=\frac{1}{n^{3/2}}\) and compute the limit
\[ \lim_{n\rightarrow\infty}\frac{a_n}{b_n} =\lim_{n\rightarrow\infty}\frac{\frac{\sqrt{n+1}}{n^2-2n+3}}{\frac{1}{n^{3/2}}} =\lim_{n\rightarrow\infty}\frac{n^{3/2}\sqrt{n+1}}{n^2-2n+3} \nonumber \]
Again it is a good idea to factor the dominant term out of the numerator and the dominant term out of the denominator.
\[ \lim_{n\rightarrow\infty}\frac{a_n}{b_n} =\lim_{n\rightarrow\infty}\frac{n^2\sqrt{1+\frac{1}{n}}} {n^2\big(1-\frac{2}{n}+\frac{3}{n^2}\big)} =\lim_{n\rightarrow\infty}\frac{\sqrt{1+\frac{1}{n}}} {1-\frac{2}{n}+\frac{3}{n^2}} =1 \nonumber \]
We already know that the series \(\sum_{n=1}^\infty b_n =\sum_{n=1}^\infty\frac{1}{n^{3/2}}\) converges by Example 3.3.6 with \(p=\frac{3}{2}\text{.}\) So our series converges by the limit comparison test, Theorem 3.3.11.
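The limit \(L=1\) can also be observed numerically. The Python sketch below (not part of the original text) evaluates the ratio \(a_n/b_n\) for increasing \(n\) and watches it approach \(1\text{.}\)

```python
import math

# The ratio a_n/b_n from the example, with a_n = sqrt(n+1)/(n^2 - 2n + 3)
# and b_n = 1/n^(3/2), tends to the limit L = 1 as n grows.
def ratio(n):
    a_n = math.sqrt(n + 1) / (n * n - 2 * n + 3)
    b_n = 1.0 / n ** 1.5
    return a_n / b_n

for n in (10, 1000, 100000):
    print(n, ratio(n))
```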
We can also try to deal with the series of Example 3.3.12, using the comparison test directly. But that requires us to find \(K\) so that
\begin{align*} \frac{\sqrt{n+1}}{n^2-2n+3} & \leq \frac{K}{n^{3/2}} \end{align*}
We might do this by examining the numerator and denominator separately:
 The numerator isn't too bad since for all \(n \geq 1\text{:}\)
\begin{align*} n+1 &\leq 2n \qquad \text{and so}\\ \sqrt{n+1} &\leq \sqrt{2n} \end{align*}
 The denominator is quite a bit more tricky, since we need a lower bound, rather than an upper bound, and we cannot just write \(n^2-2n+3 \ge n^2\text{,}\) which is false. Instead we have to make a more careful argument. In particular, we'd like to find \(N_0\) and \(K'\) so that \(n^2-2n+3\ge K'n^2\text{,}\) i.e. \(\frac{1}{n^2-2n+3}\le\frac{1}{K'n^2}\text{,}\) for all \(n \geq N_0\text{.}\) For \(n\ge 4\text{,}\) we have \(2n = \frac{1}{2}\cdot 4\cdot n\le \frac{1}{2}\cdot n\cdot n=\frac{1}{2}n^2\text{.}\) So for \(n\ge 4\text{,}\)
\begin{align*} n^2-2n+3 & \geq n^2 - \frac{1}{2}n^2 + 3 \ge \frac{1}{2} n^2 \end{align*}
Putting the numerator and denominator back together we have
\begin{align*} \frac{\sqrt{n+1}}{n^2-2n+3} & \leq \frac{\sqrt{2n}}{n^2/2} = 2\sqrt{2}\frac{1}{n^{3/2}} \qquad\text{for all $n\ge 4$} \end{align*}
and the comparison test then tells us that our series converges. It is pretty clear that the approach of Example 3.3.12 was much more straightforward.
The Alternating Series Test
When the signs of successive terms in a series alternate between \(+\) and \(-\text{,}\) like for example in \(\ 1-\frac{1}{2} +\frac{1}{3}-\frac{1}{4}+ \cdots\ \text{,}\) the series is called an alternating series. More generally, the series
\[ A_1-A_2+A_3-A_4+\cdots =\sum_{n=1}^\infty (-1)^{n-1} A_n \nonumber \]
is alternating if every \(A_n\ge 0\text{.}\) Often (but not always) the terms in alternating series get successively smaller. That is, \(A_1\ge A_2 \ge A_3 \ge \cdots\text{.}\) In this case:
 The first partial sum is \(S_1=A_1\text{.}\)
 The second partial sum, \(S_2=A_1-A_2\text{,}\) is smaller than \(S_1\) by \(A_2\text{.}\)
 The third partial sum, \(S_3=S_2+A_3\text{,}\) is bigger than \(S_2\) by \(A_3\text{,}\) but because \(A_3\le A_2\text{,}\) \(S_3\) remains smaller than \(S_1\text{.}\) See the figure below.
 The fourth partial sum, \(S_4=S_3-A_4\text{,}\) is smaller than \(S_3\) by \(A_4\text{,}\) but because \(A_4\le A_3\text{,}\) \(S_4\) remains bigger than \(S_2\text{.}\) Again, see the figure below.
 And so on.
So the successive partial sums oscillate, but with ever decreasing amplitude. If, in addition, \(A_n\) tends to \(0\) as \(n\) tends to \(\infty\text{,}\) the amplitude of oscillation tends to zero and the sequence \(S_1\text{,}\) \(S_2\text{,}\) \(S_3\text{,}\) \(\cdots\) converges to some limit \(S\text{.}\)
This is illustrated in the figure below.
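The oscillation just described can also be seen numerically. The Python sketch below (not part of the original text) computes the first eight partial sums of the alternating harmonic series \(1-\frac{1}{2}+\frac{1}{3}-\cdots\text{:}\) the odd-indexed partial sums decrease, the even-indexed ones increase, and every even partial sum stays below every odd one.

```python
# Partial sums of the alternating harmonic series: the odd partial sums
# decrease, the even partial sums increase, and the two interleave
# around the limit.
partial_sums = []
s = 0.0
for n in range(1, 9):
    s += (-1) ** (n - 1) / n
    partial_sums.append(s)

odd = partial_sums[0::2]   # S_1, S_3, S_5, S_7
even = partial_sums[1::2]  # S_2, S_4, S_6, S_8

assert odd == sorted(odd, reverse=True)   # decreasing
assert even == sorted(even)               # increasing
assert max(even) < min(odd)
print(partial_sums)
```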
Here is a convergence test for alternating series that exploits this structure, and that is really easy to apply.
Let \(\big\{A_n\big\}_{n=1}^\infty\) be a sequence of real numbers that obeys
 \(A_n\ge 0\) for all \(n\ge 1\) and
 \(A_{n+1}\le A_n\) for all \(n\ge 1\) (i.e. the sequence is monotone decreasing) and
 \(\lim_{n\rightarrow\infty}A_n=0\text{.}\)
Then
\[ A_1-A_2+A_3-A_4+\cdots=\sum\limits_{n=1}^\infty (-1)^{n-1} A_n =S \nonumber \]
converges and, for each natural number \(N\text{,}\) \(S-S_N\) is between \(0\) and (the first dropped term) \((-1)^N A_{N+1}\text{.}\) Here \(S_N\) is, as previously, the \(N^{\rm th}\) partial sum \(\sum\limits_{n=1}^N (-1)^{n-1} A_n\text{.}\)

We shall only give part of the proof here. For the rest of the proof see the optional section 3.3.10. We shall fix any natural number \(N\) and concentrate on the last statement, which gives a bound on the truncation error (which is the error introduced when you approximate the full series by the partial sum \(S_N\))
\begin{align*} E_N &= S-S_N= \sum_{n=N+1}^\infty (-1)^{n-1} A_n\\ & = (-1)^N\Big[A_{N+1}-A_{N+2} +A_{N+3}-A_{N+4}+\cdots\Big] \end{align*}
This is of course another series. We're going to study the partial sums
\[ S_{N,\ell} = \sum_{n=N+1}^\ell (-1)^{n-1} A_n = (-1)^N\sum_{m=1}^{\ell-N} (-1)^{m-1} A_{N+m} \nonumber \]
for that series.
 If \(\ell' \gt N+1\text{,}\) with \(\ell'-N\) even,\begin{align*} (-1)^N S_{N,\ell'}&=\overbrace{(A_{N+1}-A_{N+2})}^{\ge 0} +\overbrace{(A_{N+3}-A_{N+4})}^{\ge 0}+\cdots\\ & \hskip1in+\overbrace{(A_{\ell'-1}-A_{\ell'})}^{\ge 0}\\ & \ge 0\\ \\ \text{and}\\ (-1)^N S_{N,\ell'+1}&=\overbrace{(-1)^N S_{N,\ell'}}^{\ge 0} +\overbrace{A_{\ell'+1}}^{\ge 0} \ge 0 \end{align*} This tells us that \((-1)^N S_{N,\ell}\ge 0\) for all \(\ell \gt N+1\text{,}\) both even and odd.
 Similarly, if \(\ell' \gt N+1\text{,}\) with \(\ell'-N\) odd,
\begin{align*} (-1)^N S_{N,\ell'}&=A_{N+1}-(\overbrace{A_{N+2}-A_{N+3}}^{\ge 0}) -(\overbrace{A_{N+4}-A_{N+5}}^{\ge 0})-\cdots\\ & \hskip1in -\overbrace{(A_{\ell'-1}-A_{\ell'})}^{\ge 0}\\ & \le A_{N+1}\\ (-1)^NS_{N,\ell'+1}&=\overbrace{(-1)^N S_{N,\ell'}}^{\le A_{N+1}} -\overbrace{A_{\ell'+1}}^{\ge 0} \le A_{N+1} \end{align*}
This tells us that \((-1)^N S_{N,\ell}\le A_{N+1}\) for all \(\ell \gt N+1\text{,}\) both even and odd.
So we now know that \(S_{N,\ell}\) lies between its first term, \((-1)^NA_{N+1}\text{,}\) and \(0\) for all \(\ell \gt N+1\text{.}\) While we are not going to prove it here (see the optional section 3.3.10), this implies that, since \(A_{N+1}\rightarrow 0\) as \(N\rightarrow\infty\text{,}\) the series converges and that
\[ S-S_N=\lim_{\ell\rightarrow\infty} S_{N,\ell} \nonumber \]
lies between \((-1)^NA_{N+1}\) and \(0\text{.}\)
We have already seen, in Example 3.3.6, that the harmonic series \(\sum_{n=1}^\infty\frac{1}{n}\) diverges. On the other hand, the series \(\sum_{n=1}^\infty(-1)^{n-1}\frac{1}{n}\) converges by the alternating series test with \(A_n=\frac{1}{n}\text{.}\) Note that
 \(A_n=\frac{1}{n}\ge 0\) for all \(n\ge 1\text{,}\) so that \(\sum_{n=1}^\infty(-1)^{n-1}\frac{1}{n}\) really is an alternating series, and
 \(A_n=\frac{1}{n}\) decreases as \(n\) increases, and
 \(\lim\limits_{n\rightarrow\infty}A_n =\lim\limits_{n\rightarrow\infty}\frac{1}{n}=0\text{.}\)
so that all of the hypotheses of the alternating series test, i.e. of Theorem 3.3.14, are satisfied. We shall see, in Example 3.5.20, that
\begin{align*} \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n} &= \log 2. \end{align*}
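Taking that value on faith for the moment, the alternating series test's error bound can be checked numerically. The Python sketch below (not part of the original text) verifies that truncating the series after \(N\) terms misses \(\log 2\) by at most the first dropped term, \(\frac{1}{N+1}\text{.}\)

```python
import math

# Error bound for the alternating harmonic series: the N-th partial sum
# differs from log 2 by at most the first dropped term, 1/(N+1).
for N in (10, 100, 1000):
    S_N = sum((-1) ** (n - 1) / n for n in range(1, N + 1))
    assert abs(math.log(2) - S_N) <= 1.0 / (N + 1)
    print(N, S_N)
```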
You may already know that \(e^x=\sum_{n=0}^\infty\frac{x^n}{n!} \text{.}\) In any event, we shall prove this in Example 3.6.5, below. In particular
\begin{gather*} \frac{1}{e}=e^{-1} = \sum_{n=0}^\infty\frac{(-1)^n}{n!} = 1- \frac{1}{1!} +\frac{1}{2!}- \frac{1}{3!} +\frac{1}{4!} - \frac{1}{5!}+\cdots \end{gather*}
is an alternating series and satisfies all of the conditions of the alternating series test, Theorem 3.3.14:
 The terms in the series alternate in sign.
 The magnitude of the \(n^{\rm th}\) term in the series decreases monotonically as \(n\) increases.
 The \(n^{\rm th}\) term in the series converges to zero as \(n\rightarrow\infty\text{.}\)
So the alternating series test guarantees that, if we approximate, for example,
\begin{gather*} \frac{1}{e} \approx \frac{1}{2!}-\frac{1}{3!} +\frac{1}{4!}-\frac{1}{5!}+\frac{1}{6!}-\frac{1}{7!} +\frac{1}{8!}-\frac{1}{9!} \end{gather*}
then the error in this approximation lies between \(0\) and the next term in the series, which is \(\frac{1}{10!}\text{.}\) That is
\begin{gather*} \frac{1}{2!}-\frac{1}{3!} +\frac{1}{4!}-\frac{1}{5!}+\frac{1}{6!}-\frac{1}{7!} +\frac{1}{8!}-\frac{1}{9!} \le \frac{1}{e} \qquad\qquad\qquad\qquad\\ \qquad\qquad\qquad\qquad \le \frac{1}{2!}-\frac{1}{3!} +\frac{1}{4!}-\frac{1}{5!}+\frac{1}{6!}-\frac{1}{7!} +\frac{1}{8!}-\frac{1}{9!}+\frac{1}{10!} \end{gather*}
so that
\begin{gather*} \frac{1}{ \frac{1}{2!}-\frac{1}{3!} +\frac{1}{4!}-\frac{1}{5!}+\frac{1}{6!}-\frac{1}{7!} +\frac{1}{8!}-\frac{1}{9!}+\frac{1}{10!}} \le e \qquad\qquad\qquad\qquad\\ \qquad\qquad\qquad\qquad \le \frac{1}{ \frac{1}{2!}-\frac{1}{3!} +\frac{1}{4!}-\frac{1}{5!}+\frac{1}{6!}-\frac{1}{7!} +\frac{1}{8!}-\frac{1}{9!}} \end{gather*}
which, to seven decimal places says
\begin{gather*} 2.7182816 \le e \le 2.7182837 \end{gather*}
(To seven decimal places \(e=2.7182818\text{.}\))
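These bounds are easy to reproduce in a few lines of Python (a sketch, using only the partial sums written out above):

```python
import math

# Partial sum 1/2! - 1/3! + ... + 1/8! - 1/9!, i.e. sum_{n=0}^{9} (-1)^n/n!
S = sum((-1) ** n / math.factorial(n) for n in range(10))

# S <= 1/e <= S + 1/10!, so the reciprocals bound e from both sides.
lower = 1 / (S + 1 / math.factorial(10))
upper = 1 / S
assert lower <= math.e <= upper
print(f"{lower:.7f} <= e <= {upper:.7f}")
```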
The alternating series test tells us that, for any natural number \(N\text{,}\) the error that we make when we approximate \(\frac{1}{e}\) by the partial sum \(S_N= \sum_{n=0}^N\frac{(-1)^n}{n!}\) has magnitude no larger than \(\frac{1}{(N+1)!}\text{.}\) This tends to zero spectacularly quickly as \(N\) increases, simply because \((N+1)!\) increases spectacularly quickly as \(N\) increases^{ 13}. For example \(20!\approx 2.4\times 10^{18}\text{.}\)
We will shortly see, in Example 3.5.20, that if \(-1 \lt x\le 1\text{,}\) then
\[ \log(1+x) = x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\cdots = \sum_{n=1}^\infty (-1)^{n-1}\frac{x^n}{n} \nonumber \]
Suppose that we have to compute \(\log\frac{11}{10}\) to within an accuracy of \(10^{12}\text{.}\) Since \(\frac{11}{10}=1+\frac{1}{10}\text{,}\) we can get \(\log\frac{11}{10}\) by evaluating \(\log(1+x)\) at \(x=\frac{1}{10}\text{,}\) so that
\begin{align*} \log\frac{11}{10} & = \log\Big(1+\frac{1}{10}\Big) =\frac{1}{10}- \frac{1}{2\times 10^2} +\frac{1}{3\times 10^3}- \frac{1}{4\times 10^4}+\cdots\\ & = \sum_{n=1}^\infty (-1)^{n-1}\frac{1}{n\times 10^n} \end{align*}
By the alternating series test, this series converges. Also by the alternating series test, approximating \(\log\frac{11}{10}\) by throwing away all but the first \(N\) terms
\begin{align*} \log\frac{11}{10} & \approx \frac{1}{10}- \frac{1}{2\times 10^2} +\frac{1}{3\times 10^3}- \frac{1}{4\times 10^4}+\cdots +(-1)^{N-1}\frac{1}{N\times 10^N}\\ & = \sum_{n=1}^{N} (-1)^{n-1}\frac{1}{n\times 10^n} \end{align*}
introduces an error whose magnitude is no more than the magnitude of the first term that we threw away.
\[ |\text{error}| \le \frac{1}{(N+1)\times 10^{N+1}} \nonumber \]
To achieve an error that is no more than \(10^{12}\text{,}\) we have to choose \(N\) so that
\[ \frac{1}{(N+1)\times 10^{N+1}} \le 10^{12} \nonumber \]
The best way to do so is simply to guess — we are not going to be able to manipulate the inequality \(\frac{1}{(N+1)\times 10^{N+1}} \le \frac{1}{10^{12}}\) into the form \(N\le \cdots\text{,}\) and even if we could, it would not be worth the effort. We need to choose \(N\) so that the denominator \((N+1)\times 10^{N+1}\) is at least \(10^{12}\text{.}\) That is easy, because the denominator contains the factor \(10^{N+1}\) which is at least \(10^{12}\) whenever \(N+1\ge 12\text{,}\) i.e. whenever \(N\ge 11\text{.}\) So we will achieve an error of less than \(10^{12}\) if we choose \(N=11\text{.}\)
\[ \frac{1}{(N+1)\times 10^{N+1}}\bigg|_{N=11} = \frac{1}{12\times 10^{12}} \lt \frac{1}{10^{12}} \nonumber \]
This is not the smallest possible choice of \(N\text{,}\) but in practice that just doesn't matter — your computer is not going to care whether or not you ask it to compute a few extra terms. If you really need the smallest \(N\) that obeys \(\frac{1}{(N+1)\times 10^{N+1}} \le \frac{1}{10^{12}}\text{,}\) you can next just try \(N=10\text{,}\) then \(N=9\text{,}\) and so on.
\begin{align*} \frac{1}{(N+1)\times 10^{N+1}}\bigg|_{N=11} &= \frac{1}{12\times 10^{12}} \lt \frac{1}{10^{12}}\\ \frac{1}{(N+1)\times 10^{N+1}}\bigg|_{N=10} &= \frac{1}{11\times 10^{11}} \lt \frac{1}{10\times 10^{11}} = \frac{1}{10^{12}}\\ \frac{1}{(N+1)\times 10^{N+1}}\bigg|_{N=9} &= \frac{1}{10\times 10^{10}} = \frac{1}{10^{11}} \gt \frac{1}{10^{12}} \end{align*}
So in this problem, the smallest acceptable \(N=10\text{.}\)
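To see the error estimate in action, here is a short Python sketch (the function name is ours) that sums the first \(N=10\) terms and compares against \(\log 1.1\) from the standard library:

```python
import math

def log_11_over_10(N):
    """Partial sum sum_{n=1}^N (-1)^(n-1)/(n * 10^n) of the series for log(11/10)."""
    return sum((-1) ** (n - 1) / (n * 10 ** n) for n in range(1, N + 1))

# With N = 10, the alternating series test guarantees an error of at most
# 1/(11 * 10^11) < 10^(-12).
assert abs(log_11_over_10(10) - math.log(1.1)) < 1e-12
```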
The Ratio Test
The idea behind the ratio test comes from a reexamination of the geometric series. Recall that the geometric series
\begin{gather*} \sum_{n=0}^\infty a_n = \sum_{n=0}^\infty a r^n \end{gather*}
converges when \(|r| \lt 1\) and diverges otherwise. So the convergence of this series is completely determined by the number \(r\text{.}\) This number is just the ratio of successive terms — that is \(r = a_{n+1}/a_n\text{.}\)
In general the ratio of successive terms of a series, \(\frac{a_{n+1}}{a_n}\text{,}\) is not constant, but depends on \(n\text{.}\) However, as we have noted above, the convergence of a series \(\sum a_n\) is determined by the behaviour of its terms when \(n\) is large. In this way, the behaviour of this ratio when \(n\) is small tells us nothing about the convergence of the series, but the limit of the ratio as \(n\to\infty\) does. This is the basis of the ratio test.
Let \(N\) be any positive integer and assume that \(a_n\ne 0\) for all \(n\ge N\text{.}\)
 If \(\lim\limits_{n\rightarrow\infty}\Big|\frac{a_{n+1}}{a_n}\Big| = L \lt 1\text{,}\) then \(\sum\limits_{n=1}^\infty a_n\) converges.
 If \(\lim\limits_{n\rightarrow\infty}\Big|\frac{a_{n+1}}{a_n}\Big| = L \gt 1\text{,}\) or \(\lim\limits_{n\rightarrow\infty}\Big|\frac{a_{n+1}}{a_n}\Big| = +\infty\text{,}\) then \(\sum\limits_{n=1}^\infty a_n\) diverges.
Beware that the ratio test provides absolutely no conclusion about the convergence or divergence of the series \(\sum\limits_{n=1}^\infty a_n\) if \(\lim\limits_{n\rightarrow\infty}\Big|\frac{a_{n+1}}{a_n}\Big| = 1\text{.}\) See Example 3.3.22, below.
 Proof

(a) Pick any number \(R\) obeying \(L \lt R \lt 1\text{.}\) We are assuming that \(\Big|\frac{a_{n+1}}{a_n}\Big|\) approaches \(L\) as \(n\rightarrow\infty\text{.}\) In particular there must be some natural number \(M\) so that \(\Big|\frac{a_{n+1}}{a_n}\Big|\le R\) for all \(n\ge M\text{.}\) So \(|a_{n+1}|\le R\,|a_n|\) for all \(n\ge M\text{.}\) In particular
\begin{align*} |a_{M+1}| & \ \le\ R\,|a_M|\\ |a_{M+2}| & \ \le\ R\,|a_{M+1}| & \le\ R^2 \,|a_M|\\ |a_{M+3}| & \ \le\ R\,|a_{M+2}| & \le\ R^3 \,|a_M|\\ &\vdots\\ |a_{M+\ell}| &\le R^\ell \,|a_M| \end{align*}
for all \(\ell\ge 0\text{.}\) The series \(\sum_{\ell=0}^\infty R^\ell \,|a_M|\) is a geometric series with ratio \(R\) smaller than one in magnitude and so converges. Consequently, by the comparison test with \(a_n\) replaced by \(A_\ell = |a_{M+\ell}|\) and \(c_n\) replaced by \(C_\ell= R^\ell \, |a_M|\text{,}\) the series \(\sum\limits_{\ell=1}^\infty |a_{M+\ell}| =\sum\limits_{n=M+1}^\infty |a_n|\) converges. So the series \(\sum\limits_{n=1}^\infty a_n\) converges too.
(b) We are assuming that \(\Big|\frac{a_{n+1}}{a_n}\Big|\) approaches \(L \gt 1\) as \(n\rightarrow\infty\text{.}\) In particular there must be some natural number \(M \gt N\) so that \(\Big|\frac{a_{n+1}}{a_n}\Big|\ge 1\) for all \(n\ge M\text{.}\) So \(|a_{n+1}|\ge |a_n|\) for all \(n\ge M\text{.}\) That is, \(|a_n|\) increases as \(n\) increases as long as \(n\ge M\text{.}\) So \(|a_n|\ge |a_M| \gt 0\) for all \(n\ge M\) and \(a_n\) cannot converge to zero as \(n\rightarrow\infty\text{.}\) So the series diverges by the divergence test.
Fix any two nonzero real numbers \(a\) and \(x\text{.}\) We have already seen in Example 3.2.4 and Lemma 3.2.5 — we have just renamed \(r\) to \(x\) — that the geometric series \(\sum_{n=0}^\infty a x^n\) converges when \(|x| \lt 1\) and diverges when \(|x|\ge 1\text{.}\) We are now going to consider a new series, constructed by differentiating^{ 14} each term in the geometric series \(\sum_{n=0}^\infty a x^n\text{.}\) This new series is
\[ \sum_{n=0}^\infty a_n\qquad\text{with}\quad a_n = a\, n\, x^{n-1} \nonumber \]
Let's apply the ratio test.
\begin{align*} \Big|\frac{a_{n+1}}{a_n}\Big| &= \Big|\frac{a\, (n+1)\, x^n}{a\, n\, x^{n-1}}\Big| = \frac{n+1}{n} |x| = \Big(1+\frac{1}{n}\Big) |x| \rightarrow L=|x|\quad\text{as $n\rightarrow\infty$} \end{align*}
The ratio test now tells us that the series \(\sum_{n=0}^\infty a\, n\, x^{n-1}\) converges if \(|x| \lt 1\) and diverges if \(|x| \gt 1\text{.}\) It says nothing about the cases \(x=\pm 1\text{.}\) But in both of those cases \(a_n=a\,n\,(\pm 1)^{n-1}\) does not converge to zero as \(n\rightarrow\infty\) and the series diverges by the divergence test.
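Numerically, the ratio \(\big(1+\frac{1}{n}\big)|x|\) settles down to \(|x|\) quite quickly. A sketch, with the hypothetical values \(a=1\) and \(x=0.7\text{:}\)

```python
def ratio(n, x, a=1.0):
    """|a_{n+1}/a_n| for the differentiated geometric series, a_n = a*n*x^(n-1)."""
    a_n = a * n * x ** (n - 1)
    a_next = a * (n + 1) * x ** n
    return abs(a_next / a_n)

# The ratio equals (1 + 1/n)*|x| and decreases toward L = |x| = 0.7 < 1,
# so the ratio test gives convergence for this x.
assert abs(ratio(1000, 0.7) - 0.7) < 1e-3
```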
Notice that in the above example, we had to apply another convergence test in addition to the ratio test. This will be commonplace when we reach power series and Taylor series — the ratio test will tell us something like
The series converges for \(|x| \lt R\) and diverges for \(|x| \gt R\text{.}\)
Of course, we will still have to determine what happens when \(x=+R\) and \(x=-R\text{.}\) To determine convergence or divergence in those cases we will need to use one of the other tests we have seen.
Once again, fix any two nonzero real numbers \(a\) and \(X\text{.}\) We again start with the geometric series \(\sum_{n=0}^\infty a x^n\) but this time we construct a new series by integrating^{ 15} each term, \(a x^n\text{,}\) from \(x=0\) to \(x=X\) giving \(\frac{a}{n+1} X^{n + 1}\text{.}\) The resulting new series is
\[ \sum_{n=0}^\infty a_n\qquad\text{with }a_n = \frac{a}{n+1} X^{n + 1} \nonumber \]
To apply the ratio test we need to compute
\begin{align*} \Big|\frac{a_{n+1}}{a_n}\Big| &= \bigg|\frac{\frac{a}{n+2} X^{n + 2}}{\frac{a}{n+1} X^{n + 1}}\bigg| = \frac{n+1}{n+2} |X| = \frac{1+\frac{1}{n}}{1+\frac{2}{n}} |X| \rightarrow L=|X|\quad\text{as $n\rightarrow\infty$} \end{align*}
The ratio test now tells us that the series \(\sum_{n=0}^\infty \frac{a}{n+1} X^{n + 1}\) converges if \(|X| \lt 1\) and diverges if \(|X| \gt 1\text{.}\) It says nothing about the cases \(X=\pm 1\text{.}\)
If \(X=1\text{,}\) the series reduces to
\[ \sum_{n=0}^\infty \frac{a}{n+1} X^{n + 1}\bigg|_{X=1} =\sum_{n=0}^\infty \frac{a}{n+1} =a\sum_{m=1}^\infty \frac{1}{m}\qquad\text{with }m=n+1 \nonumber \]
which is just \(a\) times the harmonic series, which we know diverges, by Example 3.3.6.
If \(X=-1\text{,}\) the series reduces to
\[ \sum_{n=0}^\infty \frac{a}{n+1} X^{n + 1}\bigg|_{X=-1} =\sum_{n=0}^\infty (-1)^{n+1}\frac{a}{n+1} \nonumber \]
which converges by the alternating series test. See Example 3.3.15.
In conclusion, the series \(\sum_{n=0}^\infty \frac{a}{n+1} X^{n + 1}\) converges if and only if \(-1\le X \lt 1\text{.}\)
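As a numerical sanity check: setting \(x=-X\) in the \(\log(1+x)\) series quoted earlier in this section suggests the sum should be \(-a\log(1-X)\) for \(-1\le X \lt 1\text{.}\) A sketch with the hypothetical values \(a=2\) and \(X=\frac{1}{2}\text{:}\)

```python
import math

def integrated_partial(a, X, N):
    """Partial sum sum_{n=0}^{N} a/(n+1) * X^(n+1) of the integrated geometric series."""
    return sum(a / (n + 1) * X ** (n + 1) for n in range(N + 1))

# For |X| < 1 the terms shrink geometrically, so 200 terms is far more than
# enough to match -a*log(1 - X) to double precision.
a, X = 2.0, 0.5
assert abs(integrated_partial(a, X, 200) - (-a) * math.log(1 - X)) < 1e-12
```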
The ratio test is often quite easy to apply, but one must always be careful when the limit of the ratio is \(1\text{.}\) The next example illustrates this.
In this example, we are going to see three different series that all have \(\lim_{n\rightarrow\infty}\Big|\frac{a_{n+1}}{a_n}\Big| = 1\text{.}\) One is going to diverge and the other two are going to converge.
 The first series is the harmonic series
\[ \sum_{n=1}^\infty a_n\qquad\text{with }a_n = \frac{1}{n} \nonumber \]
We have already seen, in Example 3.3.6, that this series diverges. It has\[ \Big|\frac{a_{n+1}}{a_n}\Big| = \bigg|\frac{\frac{1}{n+1}}{\frac{1}{n}}\bigg| = \frac{n}{n+1} = \frac{1}{1+\frac{1}{n}} \rightarrow L=1\quad\text{as $n\rightarrow\infty$} \nonumber \]
 The second series is the alternating harmonic series
\[ \sum_{n=1}^\infty a_n\qquad\text{with }a_n = (-1)^{n-1}\frac{1}{n} \nonumber \]
We have already seen, in Example 3.3.15, that this series converges. But it also has\[ \Big|\frac{a_{n+1}}{a_n}\Big| = \bigg|\frac{(-1)^n\frac{1}{n+1}}{(-1)^{n-1}\frac{1}{n}}\bigg| = \frac{n}{n+1} = \frac{1}{1+\frac{1}{n}} \rightarrow L=1\quad\text{as $n\rightarrow\infty$} \nonumber \]
 The third series is
\[ \sum_{n=1}^\infty a_n\qquad\text{with }a_n = \frac{1}{n^2} \nonumber \]
We have already seen, in Example 3.3.6 with \(p=2\text{,}\) that this series converges. But it also has\[ \Big|\frac{a_{n+1}}{a_n}\Big| = \bigg|\frac{\frac{1}{(n+1)^2}}{\frac{1}{n^2}}\bigg| = \frac{n^2}{(n+1)^2} = \frac{1}{(1+\frac{1}{n})^2} \rightarrow L=1\quad\text{as $n\rightarrow\infty$} \nonumber \]
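A quick numerical illustration of why \(L=1\) is inconclusive: the ratios for \(\frac{1}{n}\) (divergent series) and \(\frac{1}{n^2}\) (convergent series) are both indistinguishable from \(1\) for large \(n\text{.}\) A sketch:

```python
def ratio(a, n):
    """|a(n+1)/a(n)| for a given term function a."""
    return abs(a(n + 1) / a(n))

harmonic = lambda n: 1 / n       # the series diverges
basel = lambda n: 1 / n ** 2     # the series converges

# At n = 1000 both ratios are within about 0.002 of 1, yet the two
# series behave completely differently.
assert abs(ratio(harmonic, 1000) - 1) < 0.0021
assert abs(ratio(basel, 1000) - 1) < 0.0021
```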
Let's do a somewhat artificial example that forces us to combine a few of the techniques we have seen.
Consider the series \(\sum_{n=1}^\infty \frac{(-3)^n \sqrt{n+1}}{2n+3}\, x^n\text{.}\) Again, the convergence of this series will depend on \(x\text{.}\)
 Let us start with the ratio test — so we compute \[\begin{align*} \left|\frac{a_{n+1}}{a_n}\right| &= \left|\frac{(-3)^{n+1} \sqrt{n+2}\, (2n+3)\, x^{n+1} }{(-3)^n \sqrt{n+1}\, (2n+5)\, x^n} \right|\\ &= 3 \cdot \frac{\sqrt{n+2}}{\sqrt{n+1}} \cdot \frac{2n+3}{2n+5} \cdot |x|\\ \end{align*}\]
So in the limit as \(n \to \infty\) we are left with
\begin{align*} \lim_{n\to\infty} \left|\frac{a_{n+1}}{a_n}\right| &= 3 |x| \end{align*}  The ratio test then tells us that if \(3|x| \gt 1\) the series diverges, while when \(3|x| \lt 1\) the series converges.
 This leaves us with the cases \(x=+\frac{1}{3}\) and \(x=-\frac{1}{3}\text{.}\)
 Setting \(x=\frac{1}{3}\) gives the series
\begin{gather*} \sum_{n=1}^\infty \frac{ (-1)^n \sqrt{n+1}}{2n+3} \end{gather*}
The fact that the terms alternate here suggests that we use the alternating series test. That will show that this series converges provided \(\frac{\sqrt{n+1}}{2n+3}\) decreases as \(n\) increases. So we define the function\begin{align*} f(t) &= \frac{\sqrt{t+1}}{2t+3} \end{align*}
(which is constructed by replacing the \(n\) in \(\frac{\sqrt{n+1}}{2n+3}\) with \(t\)) and verify that \(f(t)\) is a decreasing function of \(t\text{.}\) To prove that, it suffices to show its derivative is negative when \(t\geq 1\text{:}\)\begin{align*} f'(t) &= \frac{(2t+3)\cdot \frac{1}{2} \cdot(t+1)^{-1/2} - 2\sqrt{t+1} }{(2t+3)^2}\\ &=\frac{(2t+3) - 4(t+1) }{2 \sqrt{t+1} (2t+3)^2}\\ &= -\frac{2t+1}{2 \sqrt{t+1} (2t+3)^2} \end{align*}
When \(t \geq 1\) this is negative and so \(f(t)\) is a decreasing function. Thus we can apply the alternating series test to show that the series converges when \(x=\frac{1}{3}\text{.}\)  When \(x = -\frac{1}{3}\) the series becomes
\begin{gather*} \sum_{n=1}^\infty \frac{\sqrt{n+1}}{2n+3}. \end{gather*}
Notice that when \(n\) is large, the summand is approximately \(\frac{\sqrt{n}}{2n}\) which suggests that the series will diverge by comparison with \(\sum n^{-1/2}\text{.}\) To formalise this, we can use the limit comparison theorem:\begin{align*} \lim_{n \to \infty} \frac{\sqrt{n+1}}{2n+3}\bigg/ \frac{1}{ n^{1/2} } &= \lim_{n \to \infty} \frac{\sqrt{n} \cdot \sqrt{1+1/n}}{n(2+3/n)} \cdot n^{1/2}\\ &= \lim_{n \to \infty} \frac{n \cdot \sqrt{1+1/n}}{n(2+3/n)}\\ &= \frac{1}{2} \end{align*}
So, since this ratio has a finite nonzero limit and the series \(\sum n^{-1/2}\) diverges, we know that our series also diverges.
So in summary, the series converges when \(-\frac{1}{3} \lt x \leq \frac{1}{3}\) and diverges otherwise.
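For the convergent endpoint \(x=\frac{1}{3}\text{,}\) the hypotheses of the alternating series test can also be checked numerically. A sketch (the function name is ours):

```python
def A(n):
    """Magnitude of the n-th term at x = 1/3: sqrt(n+1)/(2n+3)."""
    return (n + 1) ** 0.5 / (2 * n + 3)

# The A(n) are positive, strictly decreasing (as the derivative computation
# showed), and tend to zero, roughly like 1/(2*sqrt(n)).
assert all(A(n) > A(n + 1) > 0 for n in range(1, 10000))
assert A(10 ** 8) < 1e-3
```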
Convergence Test List
We now have half a dozen convergence tests:
 Divergence Test
 works well when the \(n^{\mathrm{th}}\) term in the series fails to converge to zero as \(n\) tends to infinity
 Alternating Series Test
 works well when successive terms in the series alternate in sign
 don't forget to check that successive terms decrease in magnitude and tend to zero as \(n\) tends to infinity
 Integral Test
 works well when, if you substitute \(x\) for \(n\) in the \(n^{\mathrm{th}}\) term you get a function, \(f(x)\text{,}\) that you can integrate
 don't forget to check that \(f(x)\ge 0\) and that \(f(x)\) decreases as \(x\) increases
 Ratio Test
 works well when \(\frac{a_{n+1}}{a_n}\) simplifies enough that you can easily compute \(\lim\limits_{n\rightarrow\infty}\big|\frac{a_{n+1}}{a_n}\big|=L\)
 this often happens when \(a_n\) contains powers, like \(7^n\text{,}\) or factorials, like \(n!\)
 don't forget that \(L=1\) tells you nothing about the convergence/divergence of the series
 Comparison Test and Limit Comparison Test
 works well when, for very large \(n\text{,}\) the \(n^{\mathrm{th}}\) term \(a_n\) is approximately the same as a simpler term \(b_n\) (see Example 3.3.10) and it is easy to determine whether or not \(\sum_{n=1}^\infty b_n\) converges
 don't forget to check that \(b_n\ge 0\)
 usually the Limit Comparison Test is easier to apply than the Comparison Test
Optional — The Leaning Tower of Books
Imagine that you are about to stack a bunch of identical books on a table. But you don't want to just stack them exactly vertically. You want to build a “leaning tower of books” that overhangs the edge of the table as much as possible.
How big an overhang can you get? The answer to that question, which we'll now derive, uses a series!
 Let's start by just putting book #1 on the table. It's the red book labelled “\(B_1\)” in the figure below.
Use a horizontal \(x\)-axis with \(x=0\) corresponding to the right hand edge of the table. Imagine that we have placed book #1 so that its right hand edge overhangs the end of the table by a distance \(x_1\text{.}\)
 In order for the book to not topple off of the table, we need its center of mass to lie above the table. That is, we need the \(x\)-coordinate of the center of mass of \(B_1\text{,}\) which we shall denote \(\bar X(B_1)\text{,}\) to obey
\[ \bar X(B_1) \le 0 \nonumber \]
Assuming that our books have uniform density and are of length \(L\text{,}\) \(\bar X(B_1)\) will be exactly half way between the right hand end of the book, which is at \(x=x_1\text{,}\) and the left hand end of the book, which is at \(x=x_1-L\text{.}\) So\[ \bar X(B_1) =\frac{1}{2} x_1+\frac{1}{2}(x_1-L) = x_1-\frac{L}{2} \nonumber \]
Thus book #1 does not topple off of the table provided
\[ x_1\le\frac{L}{2} \nonumber \]
 Now let's put books #1 and #2 on the table, with the right hand edge of book #1 at \(x=x_1\) and the right hand edge of book #2 at \(x=x_2\text{,}\) as in the figure below.
 In order for book #2 to not topple off of book #1, we need the center of mass of book #2 to lie above book #1. That is, we need the \(x\)-coordinate of the center of mass of \(B_2\text{,}\) which is \(\bar X(B_2)=x_2-\frac{L}{2}\text{,}\) to obey
\[ \bar X(B_2) \le x_1 \iff x_2-\frac{L}{2} \le x_1 \iff x_2\le x_1+\frac{L}{2} \nonumber \]
 Assuming that book #2 does not topple off of book #1, we still need to arrange that the pair of books does not topple off of the table. Think of the pair of books as the combined red object in the figure
In order for the combined red object to not topple off of the table, we need the center of mass of the combined red object to lie above the table. That is, we need the \(x\)-coordinate of the center of mass of the combined red object, which we shall denote \(\bar X(B_1\cup B_2)\text{,}\) to obey
\[ \bar X(B_1\cup B_2) \le 0 \nonumber \]
The center of mass of the combined red object is the weighted average^{ 16} of the centers of mass of \(B_1\) and \(B_2\text{.}\) As \(B_1\) and \(B_2\) have the same weight,
\begin{align*} \bar X(B_1\cup B_2) &= \frac{1}{2}\bar X(B_1) +\frac{1}{2}\bar X(B_2) = \frac{1}{2}\Big(x_1-\frac{L}{2}\Big) +\frac{1}{2}\Big(x_2-\frac{L}{2}\Big)\\ &= \frac{1}{2}(x_1+x_2) -\frac{L}{2} \end{align*}
and the combined red object does not topple off of the table if
\[ \bar X(B_1\cup B_2) =\frac{1}{2}(x_1+x_2) -\frac{L}{2} \le 0 \iff x_1+x_2\le L \nonumber \]
In conclusion, our two-book tower survives if
\begin{gather*} x_2\le x_1+\frac{L}{2}\qquad\text{and}\qquad x_1+x_2\le L \end{gather*}
In particular we may choose \(x_1\) and \(x_2\) to satisfy \(x_2 = x_1+\frac{L}{2}\) and \(x_1+x_2 = L\text{.}\) Then, substituting \(x_2 = x_1+\frac{L}{2}\) into \(x_1+x_2 = L\) gives
\[ x_1 + \Big(x_1+\frac{L}{2}\Big) = L \iff 2x_1 = \frac{L}{2} \iff x_1 = \frac{L}{2}\Big(\frac{1}{2}\Big),\quad x_2 = \frac{L}{2}\Big(1+\frac{1}{2}\Big) \nonumber \]
 Before considering the general “\(n\)-book tower”, let's now put books #1, #2 and #3 on the table, with the right hand edge of book #1 at \(x=x_1\text{,}\) the right hand edge of book #2 at \(x=x_2\text{,}\) and the right hand edge of book #3 at \(x=x_3\text{,}\) as in the figure below.
 In order for book #3 to not topple off of book #2, we need the center of mass of book #3 to lie above book #2. That is, we need the \(x\)-coordinate of the center of mass of \(B_3\text{,}\) which is \(\bar X(B_3)=x_3-\frac{L}{2}\text{,}\) to obey
\[ \bar X(B_3) \le x_2 \iff x_3-\frac{L}{2} \le x_2 \iff x_3\le x_2+\frac{L}{2} \nonumber \]
 Assuming that book #3 does not topple off of book #2, we still need to arrange that the pair of books, book #2 plus book #3 (the red object in the figure below), does not topple off of book #1.
In order for this combined red object to not topple off of book #1, we need the \(x\)-coordinate of its center of mass, which we denote \(\bar X(B_2\cup B_3)\text{,}\) to obey
\[ \bar X(B_2\cup B_3) \le x_1 \nonumber \]
The center of mass of the combined red object is the weighted average of the centers of mass of \(B_2\) and \(B_3\text{.}\) As \(B_2\) and \(B_3\) have the same weight,
\begin{align*} \bar X(B_2\cup B_3) &= \frac{1}{2}\bar X(B_2) +\frac{1}{2}\bar X(B_3) = \frac{1}{2}\Big(x_2-\frac{L}{2}\Big) +\frac{1}{2}\Big(x_3-\frac{L}{2}\Big)\\ &= \frac{1}{2}(x_2+x_3) -\frac{L}{2} \end{align*}
and the combined red object does not topple off of book #1 if
\[ \frac{1}{2}(x_2+x_3) -\frac{L}{2} \le x_1 \iff x_2+x_3\le 2x_1+L \nonumber \]
 Assuming that book #3 does not topple off of book #2, and also that the combined book #2 plus book #3 does not topple off of book #1, we still need to arrange that the whole tower of books, book #1 plus book #2 plus book #3 (the red object in the figure below), does not topple off of the table.
In order for this combined red object to not topple off of the table, we need the \(x\)-coordinate of its center of mass, which we denote \(\bar X(B_1\cup B_2\cup B_3)\text{,}\) to obey
\[ \bar X(B_1\cup B_2\cup B_3) \le 0 \nonumber \]
The center of mass of the combined red object is the weighted average of the centers of mass of \(B_1\) and \(B_2\) and \(B_3\text{.}\) As they all have the same weight,
\begin{align*} \bar X(B_1\cup B_2\cup B_3) &= \frac{1}{3}\bar X(B_1) +\frac{1}{3}\bar X(B_2) +\frac{1}{3}\bar X(B_3)\\ &= \frac{1}{3}\Big(x_1-\frac{L}{2}\Big) +\frac{1}{3}\Big(x_2-\frac{L}{2}\Big) +\frac{1}{3}\Big(x_3-\frac{L}{2}\Big)\\ &= \frac{1}{3}(x_1+x_2+x_3) -\frac{L}{2} \end{align*}
and the combined red object does not topple off of the table if
\[ \frac{1}{3}(x_1+ x_2+x_3) -\frac{L}{2} \le 0 \iff x_1+ x_2+x_3\le \frac{3L}{2} \nonumber \]
In conclusion, our three-book tower survives if
\begin{gather*} x_3\le x_2+\frac{L}{2}\qquad\text{and}\qquad x_2+x_3\le 2x_1 + L \qquad\text{and}\qquad x_1+ x_2+x_3\le \frac{3L}{2} \end{gather*}
In particular, we may choose \(x_1\text{,}\) \(x_2\) and \(x_3\) to satisfy
\begin{align*} x_1+ x_2+x_3&= \frac{3L}{2}\qquad\text{and}\\ x_2+x_3&= 2x_1 + L \qquad\text{and}\\ x_3 &= \frac{L}{2} + x_2 \end{align*}
Substituting the second equation into the first gives
\begin{gather*} 3x_1 +L = \frac{3L}{2} \implies x_1 = \frac{L}{2}\Big(\frac{1}{3}\Big) \end{gather*}
Next substituting the third equation into the second, and then using the formula above for \(x_1\text{,}\) gives
\begin{gather*} 2x_2 +\frac{L}{2} = 2x_1+L = \frac{L}{3} + L \implies x_2 = \frac{L}{2}\Big(\frac{1}{2}+\frac{1}{3}\Big) \end{gather*}
and finally
\begin{gather*} x_3 = \frac{L}{2} + x_2 = \frac{L}{2}\Big(1+\frac{1}{2}+\frac{1}{3}\Big) \end{gather*}
 We are finally ready for the general “\(n\)-book tower”. Stack \(n\) books on the table, with book \(B_1\) on the bottom and book \(B_n\) at the top, and with the right hand edge of book #\(j\) at \(x=x_j\text{.}\) The same center of mass considerations as above show that the tower survives if
\begin{align*} \bar X(B_n) &\le x_{n-1} & x_n-\frac{L}{2}&\le x_{n-1}\\ \bar X(B_{n-1}\cup B_n) &\le x_{n-2} & \frac{1}{2}(x_{n-1}+x_n)-\frac{L}{2}&\le x_{n-2}\\ &\ \ \vdots &\quad\vdots\\ \bar X(B_3\cup\cdots\cup B_n) &\le x_2& \frac{1}{n-2}(x_3+\cdots+x_n)-\frac{L}{2}&\le x_2\\ \bar X(B_2\cup B_3\cup\cdots\cup B_n) &\le x_1& \frac{1}{n-1}(x_2+x_3+\cdots+x_n)-\frac{L}{2}&\le x_1\\ \bar X(B_1\cup B_2\cup B_3\cup\cdots\cup B_n) &\le 0 & \frac{1}{n}(x_1+x_2+x_3+\cdots+x_n)-\frac{L}{2}&\le 0 \end{align*}
In particular, we may choose the \(x_j\)'s to obey\begin{align*} \frac{1}{n}(x_1+x_2+x_3+\cdots+x_n)& = \frac{L}{2}\\ \frac{1}{n-1}(x_2+x_3+\cdots+x_n)&= \frac{L}{2} + x_1\\ \frac{1}{n-2}(x_3+\cdots+x_n)&= \frac{L}{2} + x_2\\ &\ \ \vdots\\ \frac{1}{2}(x_{n-1}+x_n)&= \frac{L}{2} + x_{n-2}\\ x_n&= \frac{L}{2} + x_{n-1} \end{align*}
Substituting \(x_2+x_3+\cdots+x_n=(n-1) x_1 +\frac{L}{2}(n-1)\) from the second equation into the first equation gives\begin{align*} \frac{1}{n}\Big\{\overbrace{x_1+(n-1) x_1}^{n x_1} +\frac{L}{2}(n-1)\Big\} = \frac{L}{2} &\implies x_1 +\frac{L}{2}\Big(1-\frac{1}{n}\Big) = \frac{L}{2}\\ &\implies x_1 = \frac{L}{2}\Big(\frac{1}{n}\Big) \end{align*}
Substituting \(x_3+\cdots+x_n=(n-2) x_2+\frac{L}{2}(n-2)\) from the third equation into the second equation gives\begin{align*} &\frac{1}{n-1}\Big\{\overbrace{x_2+(n-2) x_2}^{(n-1)x_2} +\frac{L}{2}(\overbrace{n-2}^{(n-1)-1})\Big\} = \frac{L}{2} +x_1 =\frac{L}{2}\Big(1+\frac{1}{n}\Big)\\ &\hskip1in\implies x_2 + \frac{L}{2}\Big(1-\frac{1}{n-1}\Big) =\frac{L}{2}\Big(1+\frac{1}{n}\Big)\\ &\hskip1in\implies x_2 = \frac{L}{2}\Big(\frac{1}{n-1}+\frac{1}{n}\Big) \end{align*}
Just keep going. We end up with\begin{align*} x_1 &= \frac{L}{2}\Big(\frac{1}{n}\Big)\\ x_2 &= \frac{L}{2}\Big(\frac{1}{n-1}+\frac{1}{n}\Big)\\ x_3 &= \frac{L}{2}\Big(\frac{1}{n-2}+\frac{1}{n-1}+\frac{1}{n}\Big)\\ &\ \ \vdots\\ x_{n-2} &= \frac{L}{2}\Big(\frac{1}{3}+\cdots+\frac{1}{n}\Big)\\ x_{n-1} &= \frac{L}{2}\Big(\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{n}\Big)\\ x_n &= \frac{L}{2}\Big(1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{n}\Big) \end{align*}
Our overhang is \(x_n = \frac{L}{2}\big(1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{n}\big)\text{.}\) This is \(\frac{L}{2}\) times the \(n^{\rm th}\) partial sum of the harmonic series \(\sum_{m=1}^\infty\frac{1}{m}\text{.}\) As we saw in Example 3.3.6 (the \(p\) test), the harmonic series diverges. So, as \(n\) goes to infinity \(1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{n}\) also goes to infinity. We may make the overhang as large^{ 17} as we like!
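The harmonic growth is painfully slow, as a short Python sketch (function names ours) makes concrete:

```python
def overhang(n):
    """Overhang x_n of an n-book tower, in units of the book length L:
    (1/2)(1 + 1/2 + ... + 1/n)."""
    return 0.5 * sum(1 / m for m in range(1, n + 1))

def books_needed(target):
    """Smallest n whose overhang exceeds `target` book lengths."""
    n, h = 0, 0.0
    while 0.5 * h <= target:
        n += 1
        h += 1 / n
    return n

# One book length of overhang needs only 4 books, but two book lengths
# already need 31 -- the book count grows exponentially with the target.
assert books_needed(1) == 4
assert books_needed(2) == 31
```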
Optional — The Root Test
There is another test that is very similar in spirit to the ratio test. It also comes from a reexamination of the geometric series
\begin{gather*} \sum_{n=0}^\infty a_n = \sum_{n=0}^\infty a r^n \end{gather*}
The ratio test was based on the observation that \(r\text{,}\) which largely determines whether or not the series converges, could be found by computing the ratio \(r = a_{n+1}/a_n\text{.}\) The root test is based on the observation that \(|r|\) can also be determined by looking at the \(n^{\rm th}\) root of the \(n^{\rm th}\) term with \(n\) very large:
\[ \lim_{n\to\infty}\sqrt[n]{\big|ar^n\big|} =|r|\lim_{n\to\infty}\sqrt[n]{|a|} =|r|\qquad\text{if $a\ne 0$} \nonumber \]
Of course, in general, the \(n^{\rm th}\) term is not exactly \(ar^n\text{.}\) However, if for very large \(n\text{,}\) the \(n^{\rm th}\) term is approximately proportional to \(r^n\text{,}\) with \(|r|\) given by the above limit, we would expect the series to converge when \(|r| \lt 1\) and diverge when \(|r| \gt 1\text{.}\) That is indeed the case.
Assume that
\[ L = \lim_{n\to\infty}\sqrt[n]{|a_n|} \nonumber \]
exists or is \(+\infty\text{.}\)
 If \(L \lt 1\text{,}\) then \(\sum\limits_{n=1}^\infty a_n\) converges.
 If \(L \gt 1\text{,}\) or \(L=+\infty\text{,}\) then \(\sum\limits_{n=1}^\infty a_n\) diverges.
Beware that the root test provides absolutely no conclusion about the convergence or divergence of the series \(\sum\limits_{n=1}^\infty a_n\) if \(\lim\limits_{n\rightarrow\infty}\sqrt[n]{|a_n|} = 1\text{.}\)

(a) Pick any number \(R\) obeying \(L \lt R \lt 1\text{.}\) We are assuming that \(\sqrt[n]{|a_n|}\) approaches \(L\) as \(n\rightarrow\infty\text{.}\) In particular there must be some natural number \(M\) so that \(\sqrt[n]{|a_n|}\le R\) for all \(n\ge M\text{.}\) So \(|a_n|\le R^n\) for all \(n\ge M\) and the series \(\sum\limits_{n=1}^\infty a_n\) converges by comparison to the geometric series \(\sum\limits_{n=1}^\infty R^n\text{.}\)
(b) We are assuming that \(\sqrt[n]{|a_n|}\) approaches \(L \gt 1\) (or grows unboundedly) as \(n\rightarrow\infty\text{.}\) In particular there must be some natural number \(M\) so that \(\sqrt[n]{|a_n|}\ge 1\) for all \(n\ge M\text{.}\) So \(|a_n|\ge 1\) for all \(n\ge M\) and the series diverges by the divergence test.
We have already used the ratio test, in Example 3.3.23, to show that this series converges when \(|x| \lt \frac{1}{3}\) and diverges when \(|x| \gt \frac{1}{3}\text{.}\) We'll now use the root test to draw the same conclusions.
 Write \(a_n= \frac{ (-3)^n \sqrt{n+1}}{2n+3}x^n\text{.}\)
 We compute
\begin{align*} \sqrt[n]{|a_n|} &= \sqrt[n]{ \bigg|\frac{ (-3)^n \sqrt{n+1}}{2n+3}x^n\bigg|}\\ &= 3 |x|\big(n+1\big)^{\frac{1}{2n}} \big(2n+3\big)^{-\frac{1}{n}} \end{align*}
 We'll now show that the limit of \(\big(n+1\big)^{\frac{1}{2n}}\) as \(n\to\infty\) is exactly \(1\text{.}\) To do so, we first compute the limit of the logarithm.
\begin{align*} \lim_{n\to\infty}\log \big(n+1\big)^{\frac{1}{2n}} &=\lim_{n\to\infty}\frac{\log \big(n+1\big)}{2n} \qquad&\text{now apply Theorem }{\text{3.1.6}}\\ &=\lim_{t\to\infty}\frac{\log \big(t+1\big)}{2t}\\ &=\lim_{t\to\infty}\frac{\frac{1}{t+1}}{2} \qquad&\text{by l'Hôpital}\\ &=0 \end{align*}
So\begin{gather*} \lim_{n\to\infty}\big(n+1\big)^{\frac{1}{2n}} =\lim_{n\to\infty}\exp\big\{\log \big(n+1\big)^{\frac{1}{2n}}\big\} = e^0=1 \end{gather*}
An essentially identical computation also gives that \(\lim_{n\to\infty}\big(2n+3)^{\frac{1}{n}} = e^0=1\text{.}\)  So
\begin{gather*} \lim_{n\to\infty}\sqrt[n]{|a_n|} = 3 |x| \end{gather*}
and the root test also tells us that if \(3|x| \gt 1\) the series diverges, while when \(3|x| \lt 1\) the series converges.
We have done the last example once, in Example 3.3.23, using the ratio test and once, in Example 3.3.26, using the root test. It was clearly much easier to use the ratio test. Here is an example that is most easily handled by the root test.
Write \(a_n= \big(\frac{n}{n+1}\big)^{n^2}\text{.}\) Then
\begin{align*} \sqrt[n]{|a_n|} &= \sqrt[n]{ \Big(\frac{n}{n+1}\Big)^{n^2}} = \Big(\frac{n}{n+1}\Big)^{n} = \Big(1+\frac{1}{n}\Big)^{-n} \end{align*}
Now we take the limit,
\begin{align*} \lim_{n\to\infty}\Big(1+\frac{1}{n}\Big)^{-n} &=\lim_{X\to\infty}\Big(1+\frac{1}{X}\Big)^{-X} \qquad&\text{by Theorem }{\text{3.1.6}}\\ &=\lim_{x\to 0}\big(1+x\big)^{-1/x} \qquad&\text{where $x=\frac{1}{X}$}\\ &= e^{-1} \end{align*}
by Example 3.7.20 in the CLP-1 text with \(a=-1\text{.}\) As the limit is strictly smaller than \(1\text{,}\) the series \(\sum_{n=1}^\infty \big(\frac{n}{n+1}\big)^{n^2}\) converges.
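Numerically, the \(n^{\rm th}\) roots \(\big(1+\frac{1}{n}\big)^{-n}\) are already close to \(\frac{1}{e}\approx 0.37\) for moderate \(n\text{.}\) A sketch (the function name is ours):

```python
import math

def nth_root_of_term(n):
    """n-th root of a_n = (n/(n+1))^(n^2), which simplifies to (1 + 1/n)^(-n)."""
    return (n / (n + 1)) ** n

# The roots converge to 1/e < 1, so the root test gives convergence.
assert abs(nth_root_of_term(10 ** 6) - 1 / math.e) < 1e-6
```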
To draw the same conclusion using the ratio test, one would have to show that the limit of
\begin{align*} \frac{a_{n+1}}{a_n} &= \Big(\frac{n+1}{n+2}\Big)^{(n+1)^2} \Big(\frac{n+1}{n}\Big)^{n^2} \end{align*}
as \(n\rightarrow\infty\) is strictly smaller than 1. It's clearly better to stick with the root test.
Optional — Harmonic and Basel Series
The Harmonic Series
The series
\begin{gather*} \sum_{n=1}^\infty \frac{1}{n} \end{gather*}
that appeared in Warning 3.3.3, is called the Harmonic series^{ 18}, and its partial sums
\begin{align*} H_N &= \sum_{n=1}^N \frac{1}{n} \end{align*}
are called the Harmonic numbers. Though these numbers have been studied at least as far back as Pythagoras, the divergence of the series was first proved in around 1350 by Nicholas Oresme (1320–5 – 1382), though the proof was lost for many years and rediscovered by Mengoli (1626–1686) and the Bernoulli brothers (Johann 1667–1748 and Jacob 1655–1705).
Oresme's proof is beautiful, and all the more remarkable in that it was produced more than 300 years before calculus was developed by Newton and Leibniz. It starts by grouping the terms of the harmonic series carefully:
\begin{align*} & \sum_{n=1}^\infty \frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} + \cdots\\ &= 1 + \frac{1}{2} + \left( \frac{1}{3} + \frac{1}{4} \right) + \left( \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} \right) + \left( \frac{1}{9} + \frac{1}{10} + \cdots + \frac{1}{15} + \frac{1}{16} \right) + \cdots\\ & \gt 1 + \frac{1}{2} + \left( \frac{1}{4} + \frac{1}{4} \right) + \left( \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} \right) + \left( \frac{1}{16} + \frac{1}{16} + \cdots + \frac{1}{16} + \frac{1}{16} \right) + \cdots\\ &= 1 + \frac{1}{2} + \left( \frac{2}{4} \right) + \left( \frac{4}{8} \right) + \left( \frac{8}{16} \right) + \cdots \end{align*}
So one can see that this is \(1 + \frac{1}{2} +\frac{1}{2}+\frac{1}{2} +\frac{1}{2} +\cdots\) and so must diverge^{ 19}.
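Oresme's bound can be checked by direct computation. The following sketch (added for illustration) verifies that the partial sum \(H_{2^k}\) is at least \(1+\frac{k}{2}\text{,}\) which is exactly what the grouping argument guarantees:

```python
# Oresme's grouping gives H_{2^k} >= 1 + k/2, so the harmonic
# numbers grow without bound.
def harmonic(N):
    """Partial sum H_N = 1 + 1/2 + ... + 1/N."""
    return sum(1.0 / n for n in range(1, N + 1))

for k in range(1, 11):
    H = harmonic(2 ** k)
    assert H >= 1 + k / 2
    print(f"H_(2^{k:>2}) = {H:.4f} >= 1 + k/2 = {1 + k / 2}")
```

Note how slowly the sums grow: the bound forces divergence, but only logarithmically.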
There are many variations on Oresme's proof — for example, using groups of two or three. A rather different proof relies on the inequality
\begin{gather*} e^x \gt 1 + x \qquad \text{ for $x \gt 0$} \end{gather*}
which follows immediately from the Taylor series for \(e^x\) given in Theorem 3.6.7. From this we can bound the exponential of the Harmonic numbers:
\begin{align*} e^{H_n} &= e^{1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots + \frac{1}{n}}\\ &= e^1 \cdot e^{1/2} \cdot e^{1/3} \cdot e^{1/4} \cdots e^{1/n}\\ & \gt (1+1)\cdot(1+1/2)\cdot(1+1/3)\cdot(1+1/4)\cdots(1+1/n)\\ &= \frac{2}{1} \cdot \frac{3}{2} \cdot \frac{4}{3} \cdot \frac{5}{4} \cdots \frac{n+1}{n}\\ &= n+1 \end{align*}
Since \(e^{H_n}\) grows unboundedly with \(n\text{,}\) the harmonic series diverges.
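A quick numerical sanity check (ours, not part of the proof) of the inequality \(e^{H_n} \gt n+1\):

```python
import math

# Verify e^(H_n) > n + 1 for the first twenty harmonic numbers.
H = 0.0
for n in range(1, 21):
    H += 1.0 / n
    assert math.exp(H) > n + 1
print("e^(H_n) > n + 1 holds for n = 1, ..., 20")
```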
The Basel Problem
The problem of determining the exact value of the sum of the series
\begin{gather*} \sum_{n=1}^\infty \frac{1}{n^2} \end{gather*}
is called the Basel problem. The problem is named after the home town of Leonhard Euler, who solved it. One can use telescoping series to show that this series must converge. Notice that
\begin{align*} \frac{1}{n^2} & \lt \frac{1}{n(n-1)} = \frac{1}{n-1} - \frac{1}{n} \end{align*}
Hence we can bound the partial sum:
\begin{align*} S_k=\sum_{n=1}^k \frac{1}{n^2} & \lt 1 + \sum_{n=2}^k \frac{1}{n(n-1)} && \text{avoid dividing by $0$}\\ &= 1 + \sum_{n=2}^k \left(\frac{1}{n-1} - \frac{1}{n} \right) && \text{which telescopes to}\\ &= 1 + 1 - \frac{1}{k} \end{align*}
Thus, as \(k\) increases, the partial sum \(S_k\) increases (the series is a sum of positive terms), but is always smaller than \(2\text{.}\) So the sequence of partial sums converges.
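Numerically, the partial sums behave exactly as this argument predicts; a short illustrative sketch (ours):

```python
import math

# Partial sums S_k of the Basel series: increasing, always below 2,
# and approaching pi^2/6 = 1.644934...
for k in [10, 100, 10_000]:
    S_k = sum(1.0 / n ** 2 for n in range(1, k + 1))
    assert S_k < 2
    print(f"k={k:>6}: S_k = {S_k:.6f}")
print(f"pi^2/6    = {math.pi ** 2 / 6:.6f}")
```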
Mengoli posed the problem of evaluating the series exactly in 1644 and it was solved — not entirely rigorously — by Euler in 1734. A rigorous proof had to wait another 7 years. Euler used some extremely cunning observations and manipulations of the sine function to show that
\begin{align*} \sum_{n=1}^\infty \frac{1}{n^2} &= \frac{\pi^2}{6}. \end{align*}
He used the Maclaurin series
\begin{align*} \sin x &= x - \frac{x^3}{6} + \frac{x^5}{120} - \cdots \end{align*}
and a product formula for sine
\begin{align} \begin{split} \sin x &= x \cdot \left(1-\frac{x}{\pi} \right) \cdot \left(1 + \frac{x}{\pi} \right) \cdot \left(1-\frac{x}{2\pi} \right) \cdot \left(1 + \frac{x}{2\pi} \right) \cdot \left(1-\frac{x}{3\pi} \right) \cdot \left(1 + \frac{x}{3\pi} \right) \cdots\\ &= x \cdot \left(1-\frac{x^2}{\pi^2} \right) \cdot \left(1 - \frac{x^2}{4\pi^2} \right) \cdot \left(1-\frac{x^2}{9\pi^2} \right) \cdots \end{split}\label{eqn_sinProdFormula}\tag{\(\star\)} \end{align}
Extracting the coefficient of \(x^3\) from both expansions gives the desired result. The proof of the product formula is well beyond the scope of this course. But notice that at least the values of \(x\) which make the left hand side of (\(\star\)) zero, namely \(x=n\pi\) with \(n\) integer, are exactly the same as the values of \(x\) which make the right hand side of (\(\star\)) zero^{ 20}.
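For the curious, here is a sketch of that coefficient extraction (we fill in this one step; it is not needed elsewhere). Multiplying out the second line of (\(\star\)) and keeping only the terms of degree at most three gives
\begin{gather*} \sin x = x - \Big(\frac{1}{\pi^2} + \frac{1}{4\pi^2} + \frac{1}{9\pi^2} + \cdots\Big)x^3 + \cdots = x - \Big(\sum_{n=1}^\infty \frac{1}{n^2}\Big)\frac{x^3}{\pi^2} + \cdots \end{gather*}
while the Maclaurin series gives \(-\frac{1}{6}\) as the coefficient of \(x^3\text{.}\) Equating the two coefficients gives \(\frac{1}{\pi^2}\sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{6}\text{,}\) which is the desired result.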
This approach can also be used to compute \(\sum_{n=1}^\infty n^{-2p}\) for \(p=1,2,3,\cdots\) and show that they are rational multiples^{ 21} of \(\pi^{2p}\text{.}\) The corresponding series of odd powers are significantly nastier and getting closed form expressions for them remains a famous open problem.
Optional — Some Proofs
In this optional section we provide proofs of two convergence tests. We shall repeatedly use the fact that any sequence \(a_1\text{,}\) \(a_2\text{,}\) \(a_3\text{,}\) \(\cdots\text{,}\) of real numbers which is increasing (i.e. \(a_{n+1}\ge a_n\) for all \(n\)) and bounded (i.e. there is a constant \(M\) such that \(a_n\le M\) for all \(n\)) converges. We shall not prove this fact^{ 22}.
We start with the comparison test, and then move on to the alternating series test.
Let \(N_0\) be a natural number and let \(K \gt 0\text{.}\)
 If \(a_n\le K c_n\) for all \(n\ge N_0\) and \(\sum\limits_{n=0}^\infty c_n\) converges, then \(\sum\limits_{n=0}^\infty a_n\) converges.
 If \(a_n\ge K d_n\ge0 \) for all \(n\ge N_0\) and \(\sum\limits_{n=0}^\infty d_n\) diverges, then \(\sum\limits_{n=0}^\infty a_n\) diverges.

(a) By hypothesis \(\sum_{n=0}^\infty c_n\) converges. So it suffices to prove that \(\sum_{n=0}^\infty [Kc_n-a_n]\) converges, because then, by our Arithmetic of series Theorem 3.2.9,
\[ \sum_{n=0}^\infty a_n = \sum_{n=0}^\infty K c_n - \sum_{n=0}^\infty [Kc_n-a_n] \nonumber \]
will converge too. But for all \(n\ge N_0\text{,}\) \(Kc_n-a_n\ge 0\) so that, for all \(N\ge N_0\text{,}\) the partial sums
\[ S_N = \sum_{n=0}^N [Kc_n-a_n] \nonumber \]
increase with \(N\text{,}\) but never get bigger than the finite number \(\sum\limits_{n=0}^{N_0} [Kc_n-a_n] + K \sum\limits_{n=N_0+1}^\infty c_n\text{.}\) So the partial sums \(S_N\) converge as \(N\rightarrow\infty\text{.}\)
(b) For all \(N \gt N_0\text{,}\) the partial sum
\[ S_N = \sum_{n=0}^N a_n \ge \sum_{n=0}^{N_0} a_n + K\hskip10pt\sum_{n=N_0+1}^N\hskip10pt d_n \nonumber \]
By hypothesis, \(\sum_{n=N_0+1}^N d_n\text{,}\) and hence \(S_N\text{,}\) grows without bound as \(N\rightarrow\infty\text{.}\) So \(S_N\rightarrow\infty\) as \(N\rightarrow\infty\text{.}\)
Let \(\big\{a_n\big\}_{n=1}^\infty\) be a sequence of real numbers that obeys
 \(a_n\ge 0\) for all \(n\ge 1\) and
 \(a_{n+1}\le a_n\) for all \(n\ge 1\) (i.e. the sequence is monotone decreasing) and
 \(\lim_{n\rightarrow\infty}a_n=0\text{.}\)
Then
\[ a_1-a_2+a_3-a_4+\cdots=\sum\limits_{n=1}^\infty (-1)^{n-1} a_n =S \nonumber \]
converges and, for each natural number \(N\text{,}\) \(S-S_N\) is between \(0\) and (the first dropped term) \((-1)^N a_{N+1}\text{.}\) Here \(S_N\) is, as previously, the \(N^{\rm th}\) partial sum \(\sum\limits_{n=1}^N (-1)^{n-1} a_n\text{.}\)

Let \(2n\) be an even natural number. Then the \(2n^{\rm th}\) partial sum obeys
\begin{align*} S_{2n}&=\overbrace{(a_1-a_2)}^{\ge 0} +\overbrace{(a_3-a_4)}^{\ge 0}+\cdots +\overbrace{(a_{2n-1}-a_{2n})}^{\ge 0}\\ &\le\overbrace{(a_1-a_2)}^{\ge 0} +\overbrace{(a_3-a_4)}^{\ge 0}+\cdots +\overbrace{(a_{2n-1}-a_{2n})}^{\ge 0} +\overbrace{(a_{2n+1}-a_{2n+2})}^{\ge 0}\\ & =S_{2(n+1)} \end{align*}
and
\begin{align*} S_{2n}&=a_1-(\overbrace{a_2-a_3}^{\ge 0}) -(\overbrace{a_4-a_5}^{\ge 0})-\cdots -\overbrace{(a_{2n-2}-a_{2n-1})}^{\ge 0} -\overbrace{a_{2n}}^{\ge 0}\\ &\le a_1 \end{align*}
So the sequence \(S_2\text{,}\) \(S_4\text{,}\) \(S_6\text{,}\) \(\cdots\) of even partial sums is a bounded, increasing sequence and hence converges to some real number \(S\text{.}\) Since \(S_{2n+1} = S_{2n} +a_{2n+1}\) and \(a_{2n+1}\) converges to zero as \(n\rightarrow\infty\text{,}\) the odd partial sums \(S_{2n+1}\) also converge to \(S\text{.}\) That \(S-S_N\) is between \(0\) and (the first dropped term) \((-1)^N a_{N+1}\) was already proved in §3.3.4.
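The error bound is easy to see in action. As an illustrative example (ours, not part of the proof), take \(a_n=\frac{1}{n}\text{,}\) for which the alternating series converges to \(\log 2\):

```python
import math

# Alternating harmonic series: sum of (-1)^(n-1)/n equals log(2).
# Check the alternating series error bound |S - S_N| <= a_{N+1}.
S = math.log(2)
S_N = 0.0
for n in range(1, 101):
    S_N += (-1) ** (n - 1) / n
    assert abs(S - S_N) <= 1 / (n + 1)
print("|S - S_N| <= 1/(N+1) verified for N = 1, ..., 100")
```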
Exercises
Stage 1
Select the series below that diverge by the divergence test.
(A) \(\displaystyle\sum_{n=1}^\infty \frac{1}{n}\)
(B) \(\displaystyle\sum_{n=1}^\infty \frac{n^2}{n+1}\)
(C) \(\displaystyle\sum_{n=1}^\infty \sin n\)
(D) \(\displaystyle\sum_{n=1}^\infty \sin (\pi n)\)
Select the series below whose terms satisfy the conditions to apply the integral test.
(A) \(\displaystyle\sum_{n=1}^\infty \frac{1}{n}\)
(B) \(\displaystyle\sum_{n=1}^\infty \frac{n^2}{n+1}\)
(C) \(\displaystyle\sum_{n=1}^\infty \sin n\)
(D) \(\displaystyle\sum_{n=1}^\infty \frac{\sin n+1}{n^2}\)
Suppose there is some threshold after which a person is considered old, and before which they are young.
Let Olaf be an old person, and let Yuan be a young person.
 Suppose I am older than Olaf. Am I old?
 Suppose I am younger than Olaf. Am I old?
 Suppose I am older than Yuan. Am I young?
 Suppose I am younger than Yuan. Am I young?
Below are graphs of two sequences with positive terms. Assume the sequences continue as shown. Fill in the table with conclusions that can be made from the direct comparison test, if any.
if \(\sum a_n\) converges  if \(\sum a_n\) diverges  
and if \(\{a_n\}\) is the red series  then \(\sum b_n\) \(\Rule{2cm}{1pt}{0pt}\)  then \(\sum b_n\) \(\Rule{2cm}{1pt}{0pt}\) 
and if \(\{a_n\}\) is the blue series  then \(\sum b_n\) \(\Rule{2cm}{1pt}{0pt}\)  then \(\sum b_n\) \(\Rule{2cm}{1pt}{0pt}\) 
For each pair of series below, decide whether the second series is a valid comparison series to determine the convergence of the first series, using the direct comparison test and/or the limit comparison test.
 \(\displaystyle\sum_{n=10}^{\infty} \frac{1}{n-1},\) compared to the divergent series \(\displaystyle\sum_{n=10}^{\infty} \frac{1}{n}.\)
 \(\displaystyle\sum_{n=1}^{\infty} \frac{\sin n}{n^2+1},\) compared to the convergent series \(\displaystyle\sum_{n=1}^{\infty} \frac{1}{n^2}.\)
 \(\displaystyle\sum_{n=5}^{\infty} \frac{n^3+5n+1}{n^6-2},\) compared to the convergent series \(\displaystyle\sum_{n=5}^{\infty} \frac{1}{n^3}.\)
 \(\displaystyle\sum_{n=5}^{\infty} \frac{1}{\sqrt{n}},\) compared to the divergent series \(\displaystyle\sum_{n=5}^{\infty} \frac{1}{\sqrt[4]n}.\)
Suppose \(a_n\) is a sequence with \(\displaystyle\lim_{n \to \infty}a_n = \frac{1}{2}\text{.}\) Does \(\displaystyle\sum_{n=7}^\infty a_n\) converge or diverge, or is it not possible to determine this from the information given? Why?
What flaw renders the following reasoning invalid?
Q: Determine whether \(\displaystyle\sum_{n=1}^\infty \dfrac{\sin n}{n}\) converges or diverges.
A: First, we will evaluate \(\displaystyle\lim_{n \to \infty} \dfrac{\sin n}{n}\text{.}\)
 Note \(\dfrac{-1}{n} \leq \dfrac{\sin n}{n} \leq \dfrac{1}{n}\) for \(n \ge 1\text{.}\)
 Note also that \(\displaystyle\lim_{n \to \infty}\frac{-1}{n}=\displaystyle\lim_{n \to \infty}\frac{1}{n}=0\text{.}\)
 Therefore, by the Squeeze Theorem, \(\displaystyle\lim_{n \to \infty} \dfrac{\sin n}{n}=0\) as well.
So, by the divergence test, \(\displaystyle\sum_{n=1}^\infty \dfrac{\sin n}{n}\) converges.
What flaw renders the following reasoning invalid?
Q: Determine whether \(\displaystyle\sum_{n=1}^\infty \left(\sin(\pi n)+2\right)\) converges or diverges.
A: We use the integral test. Let \(f(x)=\sin(\pi x)+2\text{.}\) Note \(f(x)\) is always positive, since \(\sin(\pi x)+2 \geq -1+2 =1\text{.}\) Also, \(f(x)\) is continuous.
\begin{align*} \int_1^\infty [\sin(\pi x)+2] \, d{x} &= \lim_{b \to \infty}\int_1^b [\sin(\pi x)+2 ] \, d{x}\\ &=\lim_{b \to \infty} \left[\left.-\frac{1}{\pi}\cos(\pi x)+2x \right|_1^b\right]\\ &=\lim_{b \to \infty}\left[ -\frac{1}{\pi}\cos(\pi b)+2b +\frac{1}{\pi}(-1)-2\right]\\ &=\infty \end{align*}
By the integral test, since the integral diverges, also \(\displaystyle\sum_{n=1}^\infty\left( \sin(\pi n)+2\right)\) diverges.
What flaw renders the following reasoning invalid?
Q: Determine whether the series \(\displaystyle\sum_{n=1}^\infty \dfrac{2^{n+1}n^2}{e^n+2n}\) converges or diverges.
A: We want to compare this series to the series \(\displaystyle\sum_{n=1}^\infty \dfrac{2^{n+1}}{e^n}\text{.}\) Note both this series and the series in the question have positive terms.
First, we find that \(\dfrac{2^{n+1}n^2}{e^n+2n} \gt \dfrac{2^{n+1}}{e^n}\) when \(n\) is sufficiently large. The justification for this claim is as follows:
 We note that \(e^n(n^2-1) \gt n^2-1 \gt 2n\) for \(n\) sufficiently large.
 Therefore, \(e^n \cdot n^2 \gt e^n+2n\)
 Therefore, \(2^{n+1}\cdot e^n \cdot n^2 \gt 2^{n+1}(e^n+2n)\)
 Since \(e^n+2n\) and \(e^n\) are both expressions that work out to be positive for the values of \(n\) under consideration, we can divide both sides of the inequality by these terms without having to flip the inequality. So, \(\dfrac{2^{n+1}n^2}{e^n+2n} \gt \dfrac{2^{n+1}}{e^n}\text{.}\)
Now, we claim \(\displaystyle\sum_{n=1}^\infty \dfrac{2^{n+1}}{e^n}\) converges.
Note \(\displaystyle\sum_{n=1}^\infty \dfrac{2^{n+1}}{e^n}= 2\displaystyle\sum_{n=1}^\infty \dfrac{2^{n}}{e^n}= 2\displaystyle\sum_{n=1}^\infty \left(\dfrac{2}{e}\right)^n\text{.}\) This is a geometric series with \(r=\frac{2}{e}\text{.}\) Since \(2/e \lt 1\text{,}\) the series converges.
Now, by the Direct Comparison Test, we conclude that \(\displaystyle\sum_{n=1}^\infty \dfrac{2^{n+1}n^2}{e^n+2n}\) converges.
Which of the series below are alternating?
(A) \(\displaystyle\sum_{n=1}^\infty \sin n\)
(B) \(\displaystyle\sum_{n=1}^\infty \frac{\cos(\pi n)}{n^3}\)
(C) \(\displaystyle\sum_{n=1}^\infty \frac{7}{(-n)^{2n}}\)
(D) \(\displaystyle\sum_{n=1}^\infty \frac{(-2)^n}{3^{n+1}}\)
Give an example of a convergent series for which the ratio test is inconclusive.
Imagine you're taking an exam, and you momentarily forget exactly how the inequality in the ratio test works. You remember there's a ratio, but you don't remember which term goes on top; you remember there's something about the limit being greater than or less than one, but you don't remember which way implies convergence.
Explain why
\[ \lim_{n \to \infty}\left|\frac{a_{n+1}}{a_{n}}\right| \gt 1 \nonumber \]
or, equivalently,
\[ \lim_{n \to \infty}\left|\frac{a_n}{a_{n+1}}\right| \lt 1 \nonumber \]
should mean that the sum \(\sum\limits_{n=1}^\infty a_n\) diverges (rather than converging).
Give an example of a series \(\displaystyle\sum_{n=a}^\infty a_n\text{,}\) with a function \(f(x)\) such that \(f(n)=a_n\) for all whole numbers \(n\text{,}\) such that:
 \(\displaystyle\int_a^\infty f(x)\,\, d{x}\) diverges, while
 \(\displaystyle\sum_{n=a}^\infty a_n\) converges.
Suppose that you want to use the Limit Comparison Test on the series \(\displaystyle \sum_{n=0}^{\infty} a_n\) where \(\displaystyle a_n = \frac{2^n+n}{3^n+1}\text{.}\)
Write down a sequence \(\{b_n\}\) such that \(\displaystyle \lim\limits_{n\to\infty} \frac{a_n}{b_n}\) exists and is nonzero. (You don't have to carry out the Limit Comparison Test)
Decide whether each of the following statements is true or false. If false, provide a counterexample. If true provide a brief justification.
 If \(\displaystyle\lim_{n\rightarrow\infty}a_n=0\text{,}\) then \(\sum\limits_{n=1}^{\infty} a_n\) converges.
 If \(\displaystyle\lim_{n\rightarrow\infty}a_n=0\text{,}\) then \(\sum\limits_{n=1}^{\infty} (-1)^{n\mathstrut} a_n\) converges.
 If \(0\le a_n \le b_n\) and \(\sum\limits_{n=1}^{\infty} b_n\) diverges, then \(\sum\limits_{n=1}^{\infty} a_n\) diverges.
Stage 2
Does the series \(\displaystyle \sum_{n=2}^\infty \frac{n^2}{3n^2+\sqrt n}\) converge?
Determine, with explanation, whether the series \(\displaystyle \sum_{k=1}^\infty \frac{5^k}{4^k+3^k}\) converges or diverges.
Determine whether the series \(\displaystyle\sum_{n=0}^\infty\frac{1}{n+\frac{1}{2}}\) is convergent or divergent. If it is convergent, find its value.
Does the following series converge or diverge? \(\displaystyle\sum_{k=1}^\infty\frac{1}{\sqrt{k}\sqrt{k+1}}\)
Evaluate the following series, or show that it diverges: \(\displaystyle\sum_{k=30}^\infty 3(1.001)^k\text{.}\)
Evaluate the following series, or show that it diverges: \(\displaystyle\sum_{n=3}^\infty \left(\frac{1}{5}\right)^n\text{.}\)
Does the following series converge or diverge? \(\displaystyle\sum_{n=7}^\infty \sin(\pi n)\)
Does the following series converge or diverge? \(\displaystyle\sum_{n=7}^\infty\cos(\pi n)\)
Does the following series converge or diverge? \(\displaystyle\sum_{k=1}^\infty \frac{e^k}{k!}\text{.}\)
Evaluate the following series, or show that it diverges: \(\displaystyle\sum_{k=0}^\infty\frac{2^k}{3^{k+2}}\text{.}\)
Does the following series converge or diverge? \(\displaystyle\sum_{n=1}^\infty\frac{n!n!}{(2n)!}\text{.}\)
Does the following series converge or diverge? \(\displaystyle\sum_{n=1}^\infty\frac{n^2+1}{2n^4+n}\text{.}\)
Show that the series \(\displaystyle\sum_{n=3}^\infty \frac{5}{n(\log n)^{3/2}}\) converges.
Find the values of \(p\) for which the series \(\displaystyle{\sum_{n=2}^\infty \frac{1}{n(\log n)^p}}\) converges.
Does \({\displaystyle\sum_{n=1}^\infty\frac{e^{-\sqrt{n}}}{\sqrt{n}}}\) converge or diverge?
Use the comparison test (not the limit comparison test) to show whether the series
\[ \sum_{n=2}^{\infty} \frac{\sqrt{3 n^2 - 7}}{n^{3}} \nonumber \]
converges or diverges.
Determine whether the series \(\displaystyle\sum_{k=1}^\infty\frac{ \root{3}\of{k^4+1} } {\sqrt{k^5+9}}\) converges.
Does \(\displaystyle\sum_{n=1}^\infty\frac{n^4 2^{n/3}}{(2n+7)^4}\) converge or diverge?
Determine, with explanation, whether each of the following series converge or diverge.
 \(\displaystyle\sum_{n=1}^\infty\frac{1}{\sqrt{n^2+1}}\)
 \(\displaystyle\sum_{n=1}^\infty\frac{n\cos(n\pi)}{2^n}\)
Determine whether the series
\[ \sum_{k=1}^\infty\frac{k^4-2k^3+2}{k^5+k^2+k} \nonumber \]
converges or diverges.
Determine whether each of the following series converge or diverge.
 \(\displaystyle\sum_{n=2}^\infty\frac{n^2+n+1}{n^5-n}\)
 \(\displaystyle\sum_{m=1}^\infty\frac{3m+\sin\sqrt{m}}{m^2}\)
Evaluate the following series, or show that it diverges: \(\displaystyle\sum_{n=5}^\infty \frac{1}{e^n}\text{.}\)
Determine whether the series \(\displaystyle\sum_{n=2}^\infty\frac{6}{7^n}\) is convergent or divergent. If it is convergent, find its value.
Determine, with explanation, whether each of the following series converge or diverge.
 \(1+\frac{1}{3}+\frac{1}{5}+\frac{1}{7}+\frac{1}{9}+\cdots\text{.}\)
 \({\displaystyle\sum_{n=1}^\infty \frac{2n+1}{2^{2n+1}}}\)
Determine, with explanation, whether each of the following series converges or diverges.
 \({\displaystyle \sum_{k=2}^\infty \frac{\root{3}\of{k}}{k^2k}}\text{.}\)
 \({\displaystyle \sum_{k=1}^\infty \frac{k^{10}10^k(k!)^2}{(2k)!}}\text{.}\)
 \({\displaystyle \sum_{k=3}^\infty \frac{1}{k(\log k) (\log\log k)}}\text{.}\)
Determine whether the series \(\displaystyle\sum_{n=1}^\infty\frac{n^3-4}{2n^5-6n}\) is convergent or divergent.
What is the smallest value of \(N\) such that the partial sum \(\displaystyle\sum_{n=1}^N\frac{(-1)^n}{n\cdot 10^n}\) approximates \(\displaystyle\sum_{n=1}^\infty\frac{(-1)^n}{n\cdot 10^n}\) within an accuracy of \(10^{-6}\text{?}\)
It is known that \(\displaystyle \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n^2} = \frac{\pi^2}{12}\) (you don't have to show this). Find \(N\) so that \(S_N\text{,}\) the \(N^{\rm th}\) partial sum of the series, satisfies \(\left| \frac{\pi^2}{12} - S_N \right| \le 10^{-6}\text{.}\) Be sure to say why your method can be applied to this particular series.
The series \(\displaystyle \sum_{n=1}^\infty \frac{(-1)^{n+1}}{(2n+1)^2}\) converges to some number \(S\) (you don't have to prove this). According to the Alternating Series Estimation Theorem, what is the smallest value of \(N\) for which the \(N^{\rm th}\) partial sum of the series is at most \(\frac{1}{100}\) away from \(S\text{?}\) For this value of \(N\text{,}\) write out the \(N^{\rm th}\) partial sum of the series.
Stage 3
A number of phenomena roughly follow a distribution called Zipf's law. We discuss some of these in Questions 52 and 53.
Determine, with explanation, whether the following series converge or diverge.
 \(\displaystyle\sum_{n=1}^\infty\frac{n^n}{9^n n!}\)
 \(\displaystyle\sum_{n=1}^\infty\frac{1}{n^{\log n}}\)
(a) Prove that \(\displaystyle \int_2^\infty\frac{x+\sin x}{1+x^2}\ \, d{x}\) diverges.
(b) Explain why you cannot conclude that \(\displaystyle\sum\limits_{n=1}^\infty \frac{n+\sin n}{1+n^2}\) diverges from part (a) and the Integral Test.
(c) Determine, with explanation, whether \(\displaystyle\sum\limits_{n=1}^\infty \frac{n+\sin n}{1+n^2}\) converges or diverges.
Show that \(\displaystyle\sum\limits_{n=1}^\infty\frac{e^{-\sqrt{n}}}{\sqrt{n}}\) converges and find an interval of length \(0.05\) or less that contains its exact value.
Suppose that the series \(\displaystyle\sum\limits_{n=1}^\infty a_n\) converges and that \(1 \gt a_n\ge 0\) for all \(n\text{.}\) Prove that the series \(\displaystyle\sum\limits_{n=1}^\infty \frac{a_n}{1-a_n}\) also converges.
Suppose that the series \(\sum\limits_{n=0}^{\infty}(1-a_n)\) converges, where \(a_n \gt 0\) for \(n=0,1,2,3,\cdots\text{.}\) Determine whether the series \(\sum\limits_{n=0}^\infty 2^n a_n\) converges or diverges.
Assume that the series \(\displaystyle\sum_{n=1}^\infty\frac{na_n-2n+1}{n+1}\) converges, where \(a_n \gt 0\) for \(n = 1, 2, \cdots\text{.}\) Is the following series
\begin{gather*} \log a_1 + \sum_{n=1}^\infty \log\Big(\frac{a_n}{a_{n+1}}\Big) \end{gather*}
convergent? If your answer is NO, justify your answer. If your answer is YES, evaluate the sum of the series \(\log a_1 + \sum\limits_{n=1}^\infty \log\big(\frac{a_n}{a_{n+1}}\big)\text{.}\)
Prove that if \(a_n\ge 0\) for all \(n\) and if the series \(\displaystyle\sum_{n=1}^\infty a_n\) converges, then the series \(\displaystyle\sum_{n=1}^\infty a^2_n\) also converges.
Suppose the frequency of word use in a language has the following pattern:
The \(n\)th most frequently used word accounts for \(\dfrac{\alpha}{n}\) percent of the total words used. So, in a text of 100 words, we expect the most frequently used word to appear \(\alpha\) times, while the second-most-frequently used word should appear about \(\frac{\alpha}{2}\) times, and so on.
If books written in this language use \(20,000\) distinct words, then the most commonly used word accounts for roughly what percentage of total words used?
Suppose the sizes of cities in a country adhere to the following pattern: if the largest city has population \(\alpha\text{,}\) then the \(n\)th largest city has population \(\frac{\alpha}{n}\text{.}\)
If the largest city in this country has 2 million people and the smallest city has 1 person, then the population of the entire country is \(\sum_{n=1}^{2 \times 10^6}\frac{2\times 10^6}{n}\text{.}\) (For many \(n\)'s in this sum \(\frac{2\times 10^6}{n}\) is not an integer. Ignore that.) Evaluate this sum approximately, with an error of no more than 1 million people.
 The authors should be a little more careful making such a blanket statement. While it is true that it is not wise to approximate a divergent series by taking \(N\) terms with \(N\) large, there are cases when one can get a very good approximation by taking \(N\) terms with \(N\) small! For example, the Taylor remainder theorem shows us that when the \(n^{\rm th}\) derivative of a function \(f(x)\) grows very quickly with \(n\), Taylor polynomials of degree \(N\), with \(N\) large, can give bad approximations of \(f(x)\), while the Taylor polynomials of degree one or two can still provide very good approximations of \(f(x)\) when \(x\) is very small. As an example of this, one of the triumphs of quantum electrodynamics, namely the computation of the anomalous magnetic moment of the electron, depends on precisely this. A number of important quantities were predicted using the first few terms of divergent power series. When those quantities were measured experimentally, the predictions turned out to be incredibly accurate.
 The field of asymptotic analysis often makes use of the first few terms of divergent series to generate approximate solutions to problems; this, along with numerical computations, is one of the most important techniques in applied mathematics. Indeed, there is a whole wonderful book (which, unfortunately, is too advanced for most Calculus 2 students) devoted to playing with divergent series called, unsurprisingly, “Divergent Series” by G.H. Hardy. This is not to be confused with the “Divergent” series by V. Roth set in a post-apocalyptic dystopian Chicago. That latter series diverges quite dramatically from mathematical topics, while the former does not have a film adaptation (yet).
 We have discussed the contrapositive a few times in the CLP notes, but it doesn't hurt to discuss it again here (or for the reader to quickly look up the relevant footnote in Section 1.3 of the CLP1 text). At any rate, given a statement of the form “If A is true, then B is true” the contrapositive is “If B is not true, then A is not true”. The two statements in quotation marks are logically equivalent — if one is true, then so is the other. In the present context we have “If (\(\sum a_n\) converges) then (\(a_n\) converges to \(0\)).” The contrapositive of this statement is then “If (\(a_n\) does not converge to 0) then (\(\sum a_n\) does not converge).”
 This series converges to Apéry's constant \(1.2020569031\dots\text{.}\) The constant is named for Roger Apéry (1916–1994) who proved that this number must be irrational. This number appears in many contexts including the following cute fact — the reciprocal of Apéry's constant gives the probability that three positive integers, chosen at random, do not share a common prime factor.
 Latin for “Once the necessary changes are made”. This phrase still gets used a little, but these days mathematicians tend to write something equivalent in English. Indeed, English is pretty much the lingua franca for mathematical publishing. Quidquid erit.
 This series, viewed as a function of \(p\), is called the Riemann zeta function, \(\zeta(p)\text{,}\) or the Euler–Riemann zeta function. It is extremely important because of its connections to prime numbers (among many other things). Indeed Euler proved that \(\zeta(p) = \sum_{n=1}^\infty \frac{1}{n^p} = \prod_{{\rm P}\text{ prime}} \left(1 - {\rm P}^{-p} \right)^{-1}\text{.}\) Riemann showed the connections between the zeros of this function (over complex numbers \(p\)) and the distribution of prime numbers. Arguably the most famous unsolved problem in mathematics, the Riemann hypothesis, concerns the locations of zeros of this function.
 We could go even further and see what happens if we include powers of \(\log(\log(n))\) and other more exotic slow growing functions.
 Go back and quickly scan Theorem 3.3.5; to apply it we need to show that \(\frac{1} {n^2+2n+3}\) is positive and decreasing (it is), and then we need to integrate \(\int \frac{1}{x^2+2x+3}\, d{x}\text{.}\) To do that we reread the notes on partial fractions, then rewrite \(x^2+2x+3 = (x+1)^2+2\) and so \(\int_1^\infty \frac{1}{x^2+2x+3}\, d{x} = \int_1^\infty \frac{1}{(x+1)^2+2}\, d{x} \cdots\) and then arctangent appears, etc etc. Urgh. Okay — let's go back to the text now and see how to avoid this.
 To understand this consider any series \(\sum_{n=1}^\infty a_n\text{.}\) We can always cut such a series into two parts — pick some huge number like \(10^6\text{.}\) Then \(\sum_{n=1}^\infty a_n = \sum_{n=1}^{10^6} a_n + \sum_{n=10^6+1}^\infty a_n \text{.}\) The first sum, though it could be humongous, is finite. So the left hand side, \(\sum_{n=1}^\infty a_n\text{,}\) is a welldefined finite number if and only if \(\sum_{n=10^6+1}^\infty a_n\text{,}\) is a welldefined finite number. The convergence or divergence of the series is determined by the second sum, which only contains \(a_n\) for “large” \(n\text{.}\)
 The symbol “\(\gg\)” means “much larger than”. Similarly, the symbol “\(\ll\)” means “much less than”. Good shorthand symbols can be quite expressive.
 This is very similar to how we computed limits at infinity way way back near the beginning of CLP1.
 Really, really tedious. And you thought some of those partial fractions computations were bad …
 The interested reader may wish to check out “Stirling's approximation”, which says that \(n!\approx \sqrt{2\pi n}\left(\frac {n}{e}\right)^{n}\text{.}\)
 We shall see later, in Theorem 3.5.13, that the function \(\sum_{n=0}^\infty a n x^{n-1}\) is indeed the derivative of the function \(\sum_{n=0}^\infty a x^n\text{.}\) Of course, such a statement only makes sense where these series converge — how can you differentiate a divergent series? (This is not an allusion to a popular series of dystopian novels.) Actually, there is quite a bit of interesting and useful mathematics involving divergent series, but it is well beyond the scope of this course.
 We shall also see later, in Theorem 3.5.13, that the function \(\sum_{n=0}^\infty \frac{a}{n+1} x^{n + 1}\) is indeed an antiderivative of the function \(\sum_{n=0}^\infty a x^n\text{.}\)
 It might be a good idea to review the beginning of §2.3 at this point.
 At least if our table is strong enough.
 The interested reader should use their favourite search engine to read more on the link between this series and musical harmonics. You can also find interesting links between the Harmonic series and the so-called “jeep problem” and also the problem of stacking a tower of dominoes to create an overhang that does not topple over.
 The grouping argument can be generalised further and the interested reader should look up Cauchy's condensation test.
 Knowing that the left and right hand sides of (\(\star\)) are zero for the same values of \(x\) is far from the end of the story. Two functions \(f(x)\) and \(g(x)\) having the same zeros need not be equal. It is certainly possible that \(f(x)=g(x)\cdot A(x)\) where \(A(x)\) is a function that is nowhere zero. The interested reader should look up the Weierstrass factorisation theorem.
 Searchengine your way to “Riemann zeta function”.
 It is one way to state a property of the real number system called “completeness”. The interested reader should use their favourite search engine to look up “completeness of the real numbers”.