8.4: Alternating Series
 What is an alternating series?
 Under what conditions does an alternating series converge? Why?
 How well does the \(n\)th partial sum of a convergent alternating series approximate the actual sum of the series? Why?
So far, we've considered series with exclusively nonnegative terms. Next, we consider series that have some negative terms. For instance, the geometric series
\[ 2 - \frac{4}{3} + \frac{8}{9} - \cdots + 2 \left(-\frac{2}{3} \right)^n + \cdots\text{,} \nonumber \]
has \(a = 2\) and \(r = -\frac{2}{3}\text{,}\) so that every other term alternates in sign. This series converges to
\[ \frac{a}{1-r} = \frac{2}{1-\left(-\frac{2}{3}\right)} = \frac{6}{5}\text{.} \nonumber \]
In Preview Activity \(\PageIndex{1}\) and our following discussion, we investigate the behavior of similar series where consecutive terms have opposite signs.
Preview Activity 8.3.1 showed how we can approximate the number \(e\) with linear, quadratic, and other polynomial approximations. We use a similar approach in this activity to obtain linear and quadratic approximations to \(\ln(2)\text{.}\) Along the way, we encounter a type of series that is different than most of the ones we have seen so far. Throughout this activity, let \(f(x) = \ln(1+x)\text{.}\)
 Find the tangent line to \(f\) at \(x=0\) and use this linearization to approximate \(\ln(2)\text{.}\) That is, find \(L(x)\text{,}\) the tangent line approximation to \(f(x)\text{,}\) and use the fact that \(L(1) \approx f(1)\) to estimate \(\ln(2)\text{.}\)
 The linearization of \(\ln(1+x)\) does not provide a very good approximation to \(\ln(2)\) since \(1\) is not that close to \(0\text{.}\) To obtain a better approximation, we alter our approach; instead of using a straight line to approximate \(\ln(2)\text{,}\) we use a quadratic function to account for the concavity of \(\ln(1+x)\) for \(x\) close to \(0\text{.}\) With the linearization, the function's value and slope at \(x=0\) agree with the linearization's value and slope there. We will now make a quadratic approximation \(P_2(x)\) to \(f(x) = \ln(1+x)\) centered at \(x=0\) with the property that \(P_2(0) = f(0)\text{,}\) \(P'_2(0) = f'(0)\text{,}\) and \(P''_2(0) = f''(0)\text{.}\)
 Let \(P_2(x) = x - \frac{x^2}{2}\text{.}\) Show that \(P_2(0) = f(0)\text{,}\) \(P'_2(0) = f'(0)\text{,}\) and \(P''_2(0) = f''(0)\text{.}\) Use \(P_2(x)\) to approximate \(\ln(2)\) by using the fact that \(P_2(1) \approx f(1)\text{.}\)
 We can continue approximating \(\ln(2)\) with polynomials of larger degree whose derivatives agree with those of \(f\) at \(0\text{.}\) This makes the polynomials fit the graph of \(f\) better for more values of \(x\) around \(0\text{.}\) For example, let \(P_3(x) = x - \frac{x^2}{2}+\frac{x^3}{3}\text{.}\) Show that \(P_3(0) = f(0)\text{,}\) \(P'_3(0) = f'(0)\text{,}\) \(P''_3(0) = f''(0)\text{,}\) and \(P'''_3(0) = f'''(0)\text{.}\) Taking a similar approach to preceding questions, use \(P_3(x)\) to approximate \(\ln(2)\text{.}\)
 If we used a degree \(4\) or degree \(5\) polynomial to approximate \(\ln(1+x)\text{,}\) what approximations of \(\ln(2)\) do you think would result? Use the preceding questions to conjecture a pattern that holds, and state the degree \(4\) and degree \(5\) approximations.
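The polynomial approximations in this preview activity are easy to check numerically. The sketch below (a Python illustration, not part of the original activity) evaluates the degree-\(n\) polynomial \(P_n(x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots + (-1)^{n+1}\frac{x^n}{n}\) at \(x = 1\) and compares the results with \(\ln(2)\).

```python
import math

def P(n, x):
    """Degree-n polynomial approximation to ln(1 + x) centered at 0:
    P_n(x) = x - x^2/2 + x^3/3 - ... + (-1)^(n+1) x^n / n."""
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, n + 1))

# Evaluate at x = 1 to approximate ln(2) = ln(1 + 1).
for n in range(1, 6):
    print(n, P(n, 1))
print("ln(2) =", math.log(2))
```

The printed values reproduce the linear approximation \(1\), the quadratic approximation \(0.5\), and so on, while \(\ln(2) \approx 0.6931\).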
The Alternating Series Test
Preview Activity \(\PageIndex{1}\) gives us several approximations to \(\ln(2)\text{.}\) The linear approximation is \(1\text{,}\) and the quadratic approximation is \(1 - \frac{1}{2} = \frac{1}{2}\text{.}\) If we continue this process, cubic, quartic (degree \(4\)), quintic (degree \(5\)), and higher degree polynomials give us the approximations to \(\ln(2)\) in Table \(\PageIndex{1}\).
linear  \(1\)  \(1\)
quadratic  \(1 - \frac{1}{2}\)  \(0.5\)
cubic  \(1 - \frac{1}{2} + \frac{1}{3}\)  \(0.8\overline{3}\)
quartic  \(1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4}\)  \(0.58\overline{3}\)
quintic  \(1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5}\)  \(0.78\overline{3}\)
The pattern here shows that \(\ln(2)\) can be approximated by the partial sums of the infinite series
\[ \sum_{k=1}^{\infty} (-1)^{k+1} \frac{1}{k} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots\text{,} \nonumber \]
where the alternating signs are indicated by the factor \((-1)^{k+1}\text{.}\) We call such a series an alternating series.
Using computational technology, we find that the sum of the first 100 terms in this series is 0.6881721793. As a comparison, \(\ln(2) \approx 0.6931471806\text{.}\) This shows that even though the series (\(\PageIndex{1}\)) converges to \(\ln(2)\text{,}\) it must do so quite slowly, since the sum of the first 100 terms isn't particularly close to \(\ln(2)\text{.}\) We will investigate the issue of how quickly an alternating series converges later in this section.
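The figures quoted above can be reproduced directly. This short Python check (an illustration, not part of the text) computes partial sums of the series and shows how slowly the error shrinks as more terms are added.

```python
import math

def S(n):
    """n-th partial sum of the alternating series 1 - 1/2 + 1/3 - 1/4 + ..."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

print(S(100))            # ≈ 0.6881721793, matching the value quoted above
print(math.log(2))       # ≈ 0.6931471806
for n in (100, 1000, 10000):
    print(n, abs(math.log(2) - S(n)))  # errors shrink slowly, roughly like 1/(2n)
```

Even after ten thousand terms the partial sum agrees with \(\ln(2)\) only to about four decimal places, which makes the slow convergence concrete.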
An alternating series is a series of the form
\[ \sum_{k=0}^{\infty} (-1)^k a_k\text{,} \nonumber \]
where \(a_k \gt 0\) for each \(k\text{.}\)
We have some flexibility in how we write an alternating series; for example, the series
\[ \sum_{k=1}^{\infty} (-1)^{k+1} a_k\text{,} \nonumber \]
whose index starts at \(k = 1\text{,}\) is also alternating. As we will soon see, there are several very nice results that hold for alternating series, while alternating series can also demonstrate some unusual behavior.
It is important to remember that most of the series tests we have seen in previous sections apply only to series with nonnegative terms. Alternating series require a different test.
Remember that, by definition, a series converges if and only if its corresponding sequence of partial sums converges.
 Calculate the first few partial sums (to 10 decimal places) of the alternating series
\[ \sum_{k=1}^{\infty} (-1)^{k+1}\frac{1}{k}\text{.} \nonumber \]
Label each partial sum with the notation \(S_n = \sum_{k=1}^{n} (-1)^{k+1}\frac{1}{k}\) for an appropriate choice of \(n\text{.}\)
 Plot the sequence of partial sums from part (a). What do you notice about this sequence?
Activity \(\PageIndex{2}\) illustrates the general behavior of any convergent alternating series. We see that the partial sums of the alternating harmonic series oscillate around a fixed number that turns out to be the sum of the series.
Recall that if \(\lim_{k \to \infty} a_k \neq 0\text{,}\) then the series \(\sum a_k\) diverges by the Divergence Test. From this point forward, we will thus only consider alternating series
\[ \sum_{k=1}^{\infty} (-1)^{k+1} a_k \nonumber \]
in which the sequence \(a_k\) consists of positive numbers that decrease to \(0\text{.}\) The \(n\)th partial sum \(S_n\) is
\[ S_n = \sum_{k=1}^{n} (-1)^{k+1} a_k\text{.} \nonumber \]
Notice that
 \(S_2 = a_1 - a_2\text{,}\) and since \(a_1 \gt a_2\) we have \(0 \lt S_2 \lt S_1 \text{.}\)
 \(S_3 = S_2+a_3\) and so \(S_2 \lt S_3\text{.}\) But \(a_3 \lt a_2\text{,}\) so \(S_3 \lt S_1\text{.}\) Thus, \(0 \lt S_2 \lt S_3 \lt S_1 \text{.}\)
 \(S_4 = S_3-a_4\) and so \(S_4 \lt S_3\text{.}\) But \(a_4 \lt a_3\text{,}\) so \(S_2 \lt S_4\text{.}\) Thus, \(0 \lt S_2 \lt S_4 \lt S_3 \lt S_1 \text{.}\)
 \(S_5 = S_4+a_5\) and so \(S_4 \lt S_5\text{.}\) But \(a_5 \lt a_4\text{,}\) so \(S_5 \lt S_3\text{.}\) Thus, \(0 \lt S_2 \lt S_4 \lt S_5 \lt S_3 \lt S_1 \text{.}\)
This pattern continues as illustrated in Figure \(\PageIndex{1}\) (with \(n\) odd) so that each partial sum lies between the previous two partial sums.
Note further that the absolute value of the difference between the \((n-1)\)st partial sum \(S_{n-1}\) and the \(n\)th partial sum \(S_n\) is
\[ \left\lvert S_n - S_{n-1} \right\rvert = a_n\text{.} \nonumber \]
Because the sequence \(\{a_n\}\) converges to \(0\text{,}\) the distance between successive partial sums becomes as close to zero as we'd like, and thus the sequence of partial sums converges (even though we don't know the exact value to which it converges).
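The nesting of partial sums described above can be observed numerically. This sketch (a Python illustration not in the original text, using the alternating harmonic series as its example) verifies that each partial sum lies strictly between the previous two and that successive partial sums differ by exactly \(a_n\).

```python
# Partial sums S_1, ..., S_20 of the alternating harmonic series.
a = [1 / k for k in range(1, 21)]        # positive terms a_k decreasing to 0
S = []
total = 0.0
for k, term in enumerate(a, start=1):
    total += (-1) ** (k + 1) * term
    S.append(total)                      # S[i] is the partial sum S_{i+1}

# Each partial sum lies strictly between the previous two.
for n in range(2, len(S)):
    lo, hi = sorted((S[n - 2], S[n - 1]))
    assert lo < S[n] < hi

# The gap between successive partial sums is exactly the next term a_n.
for n in range(1, len(S)):
    assert abs(abs(S[n] - S[n - 1]) - a[n]) < 1e-15

print(S[:5])   # the nested pattern S_2 < S_4 < S_5 < S_3 < S_1 is visible
```

Because every new partial sum lands inside the interval spanned by the previous two, and those intervals shrink to zero width, the sequence of partial sums is forced to converge.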
The preceding discussion has demonstrated the truth of the Alternating Series Test.
Given an alternating series \(\sum (-1)^k a_k \text{,}\) if the sequence \(\{a_k\}\) of positive terms decreases to 0 as \(k \to \infty\text{,}\) then the alternating series converges.
Note that if the limit of the sequence \(\{a_k\}\) is not 0, then the alternating series diverges.
Which series converge and which diverge? Justify your answers.
 \(\displaystyle \displaystyle\sum_{k=1}^{\infty} \frac{(-1)^k}{k^2+2}\)
 \(\displaystyle \displaystyle\sum_{k=1}^{\infty} \frac{(-1)^{k+1}2k}{k+5}\)
 \(\displaystyle \displaystyle\sum_{k=2}^{\infty} \frac{(-1)^{k}}{\ln(k)}\)
Estimating Alternating Sums
If the series converges, the argument for the Alternating Series Test also provides us with a method to determine how close the \(n\)th partial sum \(S_n\) is to the actual sum of the series. To see how this works, let \(S\) be the sum of a convergent alternating series, so
\[ S = \sum_{k=1}^{\infty} (-1)^{k+1} a_k\text{.} \nonumber \]
Recall that the sequence of partial sums oscillates around the sum \(S\) so that
\[ \lvert S - S_n \rvert \lt \lvert S_{n+1} - S_n \rvert = a_{n+1}\text{.} \nonumber \]
Therefore, the value of the term \(a_{n+1}\) provides an error estimate for how well the partial sum \(S_n\) approximates the actual sum \(S\text{.}\) We summarize this fact in the statement of the Alternating Series Estimation Theorem.
If the alternating series \(\sum_{k=1}^{\infty} (-1)^{k+1}a_k\) has positive terms \(a_k\) that decrease to zero as \(k \to \infty\text{,}\) and \(S_n = \sum_{k=1}^{n} (-1)^{k+1}a_k\) is the \(n\)th partial sum of the alternating series, then
\[ \left\lvert \sum_{k=1}^{\infty} (-1)^{k+1}a_k - S_n \right\rvert \leq a_{n+1}\text{.} \nonumber \]
Determine how well the \(100\)th partial sum \(S_{100}\) of
\[ \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} \nonumber \]
approximates the sum of the series.
 Answer

If we let \(S\) be the sum of the series \(\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k}\text{,}\) then we know that
\[ \left\lvert S_{100} - S \right\rvert \lt a_{101}\text{.} \nonumber \]Now
\[ a_{101} = \frac{1}{101} \approx 0.0099\text{,} \nonumber \]so the 100th partial sum is within 0.0099 of the sum of the series. We have discussed the fact (and will later verify) that
\[ S = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} = \ln(2)\text{,} \nonumber \]and so \(S \approx 0.693147\) while
\[ S_{100} = \sum_{k=1}^{100} \frac{(-1)^{k+1}}{k} \approx 0.6881721793\text{.} \nonumber \]We see that the actual difference between \(S\) and \(S_{100}\) is approximately \(0.0049750013\text{,}\) which is indeed less than \(0.0099\text{.}\)
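The arithmetic in this example can be verified directly. The following check (an illustration, not part of the text) confirms that the error in \(S_{100}\) really is below the bound \(a_{101}\).

```python
import math

# S_100 and the estimation-theorem bound a_101 for the alternating harmonic series.
S_100 = sum((-1) ** (k + 1) / k for k in range(1, 101))
S = math.log(2)        # the exact sum, as discussed in the text
a_101 = 1 / 101        # the bound from the Alternating Series Estimation Theorem

error = abs(S - S_100)
print(error)           # ≈ 0.0049750013
print(error < a_101)   # True: the actual error respects the bound
```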
Determine the number of terms it takes to approximate the sum of the convergent alternating series
to within 0.0001.
Absolute and Conditional Convergence
A series such as
\[ 1 - \frac{1}{4} - \frac{1}{9} + \frac{1}{16} + \frac{1}{25} + \frac{1}{36} - \frac{1}{49} - \frac{1}{64} - \frac{1}{81} - \frac{1}{100} + \cdots \nonumber \]
whose terms are neither all nonnegative nor alternating is different from any series that we have considered so far. The behavior of such a series can be rather complicated, but there is an important connection between a series with some negative terms and series with all positive terms.
 Explain why the series
\[ 1 - \frac{1}{4} - \frac{1}{9} + \frac{1}{16} + \frac{1}{25} + \frac{1}{36} - \frac{1}{49} - \frac{1}{64} - \frac{1}{81} - \frac{1}{100} + \cdots \nonumber \]
must have a sum that is less than the series
\[ \sum_{k=1}^{\infty} \frac{1}{k^2}\text{.} \nonumber \]  Explain why the series
\[ 1 - \frac{1}{4} - \frac{1}{9} + \frac{1}{16} + \frac{1}{25} + \frac{1}{36} - \frac{1}{49} - \frac{1}{64} - \frac{1}{81} - \frac{1}{100} + \cdots \nonumber \]
must have a sum that is greater than the series
\[ -\sum_{k=1}^{\infty} \frac{1}{k^2}\text{.} \nonumber \]  Given that the terms in the series
\[ 1 - \frac{1}{4} - \frac{1}{9} + \frac{1}{16} + \frac{1}{25} + \frac{1}{36} - \frac{1}{49} - \frac{1}{64} - \frac{1}{81} - \frac{1}{100} + \cdots \nonumber \]
converge to 0, what do you think the previous two results tell us about the convergence status of this series?
As the example in Activity \(\PageIndex{5}\) suggests, if a series \(\sum a_k\) has some negative terms but \(\sum \lvert a_k \rvert\) converges, then the original series, \(\sum a_k\text{,}\) must also converge. That is, if \(\sum \lvert a_k \rvert\) converges, then so must \(\sum a_k\text{.}\)
As we just observed, this is the case for the series (\(\PageIndex{2}\)), because the corresponding series of the absolute values of its terms is the convergent \(p\)-series \(\sum \frac{1}{k^2}\text{.}\) But there are series, such as the alternating harmonic series \(\sum (-1)^{k+1} \frac{1}{k}\text{,}\) that converge while the corresponding series of absolute values, \(\sum \frac{1}{k}\text{,}\) diverges. We distinguish between these behaviors by introducing the following language.
Consider a series \(\sum a_k\text{.}\)
 The series \(\sum a_k\) converges absolutely (or is absolutely convergent) provided that \(\sum \lvert a_k \rvert\) converges.
 The series \(\sum a_k\) converges conditionally (or is conditionally convergent) provided that \(\sum \lvert a_k \rvert\) diverges and \(\sum a_k\) converges.
In this terminology, the series (\(\PageIndex{2}\)) converges absolutely while the alternating harmonic series is conditionally convergent.
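The distinction can be seen numerically. This sketch (a Python illustration not in the text) contrasts the partial sums of the two series of absolute values: for \(\sum \frac{1}{k^2}\) they stay bounded below the known limit \(\pi^2/6\), while for \(\sum \frac{1}{k}\) they grow without bound, like \(\ln(n)\).

```python
import math

N = 100_000

# Series of absolute values for the two cases discussed above.
abs_p_series = sum(1 / k**2 for k in range(1, N + 1))   # bounded: approaches pi^2/6
abs_harmonic = sum(1 / k for k in range(1, N + 1))      # unbounded: grows like ln(N)

print(abs_p_series, math.pi**2 / 6)   # partial sums stay below pi^2/6
print(abs_harmonic, math.log(N))      # exceeds ln(N) for every N
```

Of course, no finite computation proves convergence or divergence; the point is only to make the contrast between the two behaviors visible.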
 Consider the series \(\sum (-1)^k \frac{\ln(k)}{k}\text{.}\)
 Does this series converge? Explain.
 Does this series converge absolutely? Explain what test you use to determine your answer.
 Consider the series \(\sum (-1)^k \frac{\ln(k)}{k^2}\text{.}\)
 Does this series converge? Explain.
 Does this series converge absolutely? Hint: Use the fact that \(\ln(k) \lt \sqrt{k}\) for large values of \(k\) and then compare to an appropriate \(p\)series.
Conditionally convergent series turn out to be very interesting. If the sequence \(\{a_n\}\) decreases to 0, but the series \(\sum a_k\) diverges, the conditionally convergent series \(\sum (1)^k a_k\) is right on the borderline of being a divergent series. As a result, any conditionally convergent series converges very slowly. Furthermore, some very strange things can happen with conditionally convergent series, as illustrated in some of the exercises.
Summary of Tests for Convergence of Series
We have discussed several tests for convergence/divergence of series in our sections and in exercises. We close this section of the text with a summary of all the tests we have encountered, followed by an activity that challenges you to decide which convergence test to apply to several different series.
 Geometric Series

The geometric series \(\sum ar^k\) with ratio \(r\) converges for \(-1 \lt r \lt 1\) and diverges for \(\lvert r \rvert \geq 1\text{.}\)
The sum of the convergent geometric series \(\displaystyle \sum_{k=0}^{\infty} ar^k\) is \(\frac{a}{1-r}\text{.}\)
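As a concrete check (an illustration not in the text), the geometric series from the start of this section, with \(a = 2\) and \(r = -\frac{2}{3}\), should sum to \(\frac{a}{1-r} = \frac{2}{1 + \frac{2}{3}} = \frac{6}{5}\).

```python
a, r = 2, -2 / 3

# A long partial sum versus the closed-form value a / (1 - r).
partial = sum(a * r**k for k in range(200))
closed_form = a / (1 - r)

print(partial, closed_form)   # both ≈ 1.2
```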
 Divergence Test

If the sequence \(a_n\) does not converge to 0, then the series \(\sum a_k\) diverges.
This is the first test to apply because the conclusion is simple. However, if \(\lim_{n \to \infty} a_n = 0\text{,}\) no conclusion can be drawn.
 Integral Test

Let \(f\) be a positive, decreasing function on an interval \([c,\infty)\) and let \(a_k = f(k)\) for each positive integer \(k \geq c\text{.}\)
 If \(\int_c^{\infty} f(t) \ dt\) converges, then \(\sum a_k\) converges.
 If \(\int_c^{\infty} f(t) \ dt\) diverges, then \(\sum a_k\) diverges.
Use this test when \(f(x)\) is easy to integrate.
 Direct Comp. Test

(see Exercise 4 in Section 8.3)
Let \(0 \leq a_k \leq b_k\) for each positive integer \(k\text{.}\)
 If \(\sum b_k\) converges, then \(\sum a_k\) converges.
 If \(\sum a_k\) diverges, then \(\sum b_k\) diverges.
Use this test when you have a series with known behavior that you can compare to — this test can be difficult to apply.
 Limit Comp. Test

Let \(a_n\) and \(b_n\) be sequences of positive terms. If
\[ \displaystyle \lim_{k \to \infty} \frac{a_k}{b_k} = L \nonumber \]for some positive finite number \(L\text{,}\) then the two series \(\sum a_k\) and \(\sum b_k\) either both converge or both diverge.
Easier to apply in general than the comparison test, but you must have a series with known behavior to compare. Useful to apply to series of rational functions.
 Ratio Test

Let \(a_k \neq 0\) for each \(k\) and suppose
\[ \displaystyle \lim_{k \to \infty} \frac{\lvert a_{k+1} \rvert}{\lvert a_k \rvert} = r\text{.} \nonumber \]
 If \(r \lt 1\text{,}\) then the series \(\sum a_k\) converges absolutely.
 If \(r \gt 1\text{,}\) then the series \(\sum a_k\) diverges.
 If \(r=1\text{,}\) then the test is inconclusive.
This test is useful when a series involves factorials and powers.
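For instance (an illustration not in the text), applying the Ratio Test to \(\sum \frac{k}{2^k}\): the ratios \(\frac{a_{k+1}}{a_k} = \frac{k+1}{2k}\) approach \(r = \frac{1}{2} \lt 1\), so the series converges. In fact its sum is \(2\), which the sketch below checks numerically.

```python
def a(k):
    """Terms of the series sum k / 2^k, a natural Ratio Test example."""
    return k / 2**k

# The ratios a_{k+1}/a_k = (k+1)/(2k) approach 1/2 < 1.
ratios = [a(k + 1) / a(k) for k in range(1, 50)]
print(ratios[-1])              # close to 0.5

total = sum(a(k) for k in range(1, 200))
print(total)                   # the series sums to 2
```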
 Root Test

(see Exercise 2 in Section 8.3)
Let \(a_k \geq 0\) for each \(k\) and suppose
\[ \displaystyle \lim_{k \to \infty} \sqrt[k]{a_k} = r\text{.} \nonumber \]
 If \(r \lt 1\text{,}\) then the series \(\sum a_k\) converges.
 If \(r \gt 1\text{,}\) then the series \(\sum a_k\) diverges.
 If \(r=1\text{,}\) then the test is inconclusive.
In general, the Ratio Test can usually be used in place of the Root Test. However, the Root Test can be quick to use when \(a_k\) involves \(k\)th powers.
 Alt. Series Test

If \(\{a_k\}\) is a positive, decreasing sequence so that \(\displaystyle \lim_{k \to \infty} a_k = 0\text{,}\) then the alternating series \(\sum (-1)^{k+1} a_k\) converges.
This test applies only to alternating series — we assume that the terms \(a_n\) are all positive and that the sequence \(\{a_n\}\) is decreasing.
 Alt. Series Est.

Let \(S_n = \displaystyle \sum_{k=1}^n (-1)^{k+1} a_k\) be the \(n\)th partial sum of the alternating series \(\displaystyle \sum_{k=1}^{\infty} (-1)^{k+1} a_k\text{.}\) Assume \(a_n \gt 0\) for each positive integer \(n\text{,}\) the sequence \(a_n\) decreases to 0, and \(\displaystyle \lim_{n \to \infty} S_n = S\text{.}\) Then it follows that \(\lvert S - S_n \rvert \lt a_{n+1}\text{.}\)
This bound can be used to determine the accuracy of the partial sum \(S_n\) as an approximation of the sum of a convergent alternating series.
For (a)-(j), use appropriate tests to determine the convergence or divergence of the following series. Throughout, if a series is a convergent geometric series, find its sum.
 \(\displaystyle \displaystyle\sum_{k=3}^{\infty} \ \frac{2}{\sqrt{k2}}\)
 \(\displaystyle \displaystyle\sum_{k=1}^{\infty} \ \frac{k}{1+2k}\)
 \(\displaystyle \displaystyle\sum_{k=0}^{\infty} \ \frac{2k^2+1}{k^3+k+1}\)
 \(\displaystyle \displaystyle\sum_{k=0}^{\infty} \ \frac{100^k}{k!}\)
 \(\displaystyle \displaystyle\sum_{k=1}^{\infty} \ \frac{2^k}{5^k}\)
 \(\displaystyle \displaystyle\sum_{k=1}^{\infty} \ \frac{k^3-1}{k^5+1}\)
 \(\displaystyle \displaystyle\sum_{k=2}^{\infty} \ \frac{3^{k1}}{7^k}\)
 \(\displaystyle \displaystyle\sum_{k=2}^{\infty} \ \frac{1}{k^k}\)
 \(\displaystyle \displaystyle\sum_{k=1}^{\infty} \ \frac{(-1)^{k+1}}{\sqrt{k+1}}\)
 \(\displaystyle \displaystyle\sum_{k=2}^{\infty} \ \frac{1}{k \ln(k)}\)
 Determine a value of \(n\) so that the \(n\)th partial sum \(S_n\) of the alternating series \(\displaystyle\sum_{n=2}^{\infty} \frac{(-1)^n}{\ln(n)}\) approximates the sum to within 0.001.
Summary
 An alternating series is a series whose terms alternate in sign. It has the form
\[ \sum (-1)^k a_k \nonumber \]
where \(a_k\) is a positive real number for each \(k\text{.}\)
 The sequence of partial sums of a convergent alternating series oscillates around the sum of the series if the sequence of \(n\)th terms converges to 0. That is why the Alternating Series Test shows that the alternating series \(\sum_{k=1}^{\infty} (-1)^k a_k\) converges whenever the sequence \(\{a_n\}\) of \(n\)th terms decreases to 0.
 The difference between the \((n-1)\)st partial sum \(S_{n-1}\) and the \(n\)th partial sum \(S_n\) of a convergent alternating series \(\sum_{k=1}^{\infty} (-1)^k a_k\) is \(\lvert S_n - S_{n-1} \rvert = a_n\text{.}\) Since the partial sums oscillate around the sum \(S\) of the series, it follows that
\[ \lvert S - S_n \rvert \lt a_{n+1}\text{.} \nonumber \]
So the \(n\)th partial sum of a convergent alternating series \(\sum_{k=1}^{\infty} (-1)^k a_k\) approximates the actual sum of the series to within \(a_{n+1}\text{.}\)