
4.13: Absolutely Convergent Series. Power Series


    I. A series \(\sum f_{m}\) is said to be absolutely convergent on a set \(B\) iff the series \(\sum\left|f_{m}(x)\right|\) (briefly, \(\sum\left|f_{m}\right|\)) of the absolute values of \(f_{m}\) converges on \(B\) (pointwise or uniformly). Notation:

    \[
    f=\sum f_{m} \text { (absolutely) on } B.
    \]

    In general, \(\sum f_{m}\) may converge while \(\sum\left|f_{m}\right|\) does not (see Problem 12). In this case, the convergence of \(\sum f_{m}\) is said to be conditional. (It may be absolute for some \(x\) and conditional for others.) As we shall see, absolute convergence ensures the commutative law for series, and it implies ordinary convergence (i.e., that of \(\sum f_{m}\)) if the range space of the \(f_{m}\) is complete.
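    As a quick numerical sketch of the distinction (using the alternating harmonic series, a standard example of conditional convergence): the signed partial sums settle down, while the partial sums of the absolute values keep growing.

```python
import math

# Alternating harmonic series: sum of (-1)**(m+1)/m converges (to log 2),
# but the series of absolute values is the harmonic series, which diverges.
N = 100_000
signed = sum((-1) ** (m + 1) / m for m in range(1, N + 1))
absolute = sum(1.0 / m for m in range(1, N + 1))

print(signed)    # close to log(2) = 0.6931...
print(absolute)  # roughly log(N) + 0.577, and unbounded as N grows
```

    The signed sum is within \(1/N\) of \(\log 2\), while the absolute sum exceeds any fixed bound once \(N\) is large enough.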

    Note 1. Let

    \[
    \sigma_{m}=\sum_{k=1}^{m}\left|f_{k}\right|.
    \]

    Then

    \[
    \sigma_{m+1}=\sigma_{m}+\left|f_{m+1}\right| \geq \sigma_{m} \quad \text { on } B;
    \]

    i.e., the \(\sigma_{m}(x)\) form a monotone sequence for each \(x \in B .\) Hence by Theorem 3 of Chapter 3, §15,

    \[
    \lim _{m \rightarrow \infty} \sigma_{m}=\sum_{m=1}^{\infty}\left|f_{m}\right| \text { always exists in } E^{*};
    \]

    \(\sum\left|f_{m}\right|\) converges iff \(\sum_{m=1}^{\infty}\left|f_{m}\right|<+\infty\).
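    A small numerical sketch of Note 1, with an illustrative choice of terms \(f_{m}=(-1)^{m}/m^{2}\): the partial sums \(\sigma_{m}\) of the absolute values are nondecreasing, so they approach a limit in \(E^{*}\) (here a finite one, since \(\sum 1/m^{2}=\pi^{2}/6\)).

```python
import math

# Partial sums sigma_m of |f_m| for the illustrative terms f_m = (-1)**m / m**2.
# Each step adds |f_{m+1}| >= 0, so sigma_1 <= sigma_2 <= ... ; the limit
# always exists in E^* (here it is finite: sum of 1/m**2 equals pi**2/6).
sigmas = []
s = 0.0
for m in range(1, 10_001):
    s += abs((-1) ** m / m**2)
    sigmas.append(s)

monotone = all(a <= b for a, b in zip(sigmas, sigmas[1:]))
```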

    For the rest of this section we consider only complete range spaces.

    Theorem \(\PageIndex{1}\)

    Let the range space of the functions \(f_{m}\) (all defined on \(A\)) be \(E^{1}\), \(C\), or \(E^{n}\) (\(^{*}\)or another complete normed space). Then for \(B \subseteq A,\) we have the following:

    (i) If \(\sum\left|f_{m}\right|\) converges on \(B\) (pointwise or uniformly), so does \(\sum f_{m}\) itself. Moreover,

    \[
    \left|\sum_{m=1}^{\infty} f_{m}\right| \leq \sum_{m=1}^{\infty}\left|f_{m}\right| \quad \text { on } B.
    \]

    (ii) (Commutative law for absolute convergence.) If \(\sum\left|f_{m}\right|\) converges (pointwise or uniformly) on \(B,\) so does any series \(\sum\left|g_{m}\right|\) obtained by rearranging the \(f_{m}\) in any different order. Moreover,

    \[
    \sum_{m=1}^{\infty} f_{m}=\sum_{m=1}^{\infty} g_{m} \quad(\text { both exist on } B).
    \]

    Note 2. More precisely, a sequence \(\left\{g_{m}\right\}\) is called a rearrangement of \(\left\{f_{m}\right\}\) iff there is a one-to-one map \(u\) of \(N\) onto \(N\) such that

    \[
    (\forall m \in N) \quad g_{m}=f_{u(m)}.
    \]
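    A finite sketch of Note 2, with an illustrative absolutely convergent series, the geometric \(f_{m}=(-1/2)^{m}\): the bijection \(u\) below swaps each odd index with its successor, and the rearranged sum agrees with the original, as Theorem 1(ii) asserts.

```python
# Rearrangement g_m = f_{u(m)} for a bijection u; here u swaps 1<->2, 3<->4, ...
# The series with terms (-1/2)**m converges absolutely (geometric, ratio 1/2)
# and sums to -1/3; any rearrangement must have the same sum.
N = 50  # even, so the swaps pair up cleanly
f = [(-0.5) ** m for m in range(1, N + 1)]

def u(m):
    return m + 1 if m % 2 == 1 else m - 1  # one-to-one map of {1,...,N} onto itself

g = [f[u(m) - 1] for m in range(1, N + 1)]
sum_f, sum_g = sum(f), sum(g)
```

    For a finite partial sum the equality is of course exact (same terms, different order); the content of the theorem is that it persists for the infinite sums.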

    Proof

    (i) If \(\sum\left|f_{m}\right|\) converges uniformly on \(B,\) then by Theorem \(3^{\prime}\) of §12,

    \[
    \begin{array}{l}{(\forall \varepsilon>0)(\exists k)(\forall n>m>k)(\forall x \in B)} \\ {\qquad \varepsilon>\sum_{i=m}^{n}\left|f_{i}(x)\right| \geq\left|\sum_{i=m}^{n} f_{i}(x)\right| \text { (triangle law) }}\end{array}.
    \]

    However, this shows that \(\sum f_{n}\) satisfies Cauchy's criterion (6) of §12, so it converges uniformly on \(B\).

    Moreover, letting \(n \rightarrow \infty\) in the inequality

    \[
    \left|\sum_{m=1}^{n} f_{m}\right| \leq \sum_{m=1}^{n}\left|f_{m}\right|,
    \]

    we get

    \[
    \left|\sum_{m=1}^{\infty} f_{m}\right| \leq \sum_{m=1}^{\infty}\left|f_{m}\right|<+\infty \quad \text { on } B, \text { as claimed. }
    \]

    By Note \(1,\) this also proves the theorem for pointwise convergence.

    (ii) Again, if \(\sum\left|f_{m}\right|\) converges uniformly on \(B,\) the inequalities \((1)\) hold for all \(f_{i}\) except (possibly) for \(f_{1}, f_{2}, \ldots, f_{k}.\) Now when \(\sum f_{m}\) is rearranged, these \(k\) functions will be renumbered as certain \(g_{i}.\) Let \(q\) be the largest of their new subscripts \(i.\) Then all of them (and possibly some more functions) are among \(g_{1}, g_{2}, \ldots, g_{q}\) (so that \(q \geq k).\) Hence if we exclude \(g_{1}, \ldots, g_{q},\) the inequalities \((1)\) will certainly hold for the remaining \(g_{i}\) \((i>q).\) Thus

    \[
    (\forall \varepsilon>0)(\exists q)(\forall n>m>q)(\forall x \in B) \quad \varepsilon>\sum_{i=m}^{n}\left|g_{i}\right| \geq\left|\sum_{i=m}^{n} g_{i}\right|.
    \]

    By Cauchy's criterion, then, both \(\sum g_{i}\) and \(\sum\left|g_{i}\right|\) converge uniformly.

    Moreover, by construction, the two partial sums

    \[
    s_{k}=\sum_{i=1}^{k} f_{i} \text { and } s_{q}^{\prime}=\sum_{i=1}^{q} g_{i}
    \]

    can differ only in those terms whose original subscripts (before the rearrangement) were \(>k.\) By \((1),\) however, any finite sum of such terms is less than \(\varepsilon\) in absolute value. Thus \(\left|s_{q}^{\prime}-s_{k}\right|<\varepsilon\).

    This argument holds also if \(k\) in \((1)\) is replaced by a larger integer.
    (Then also \(q\) increases, since \(q \geq k\) as noted above.) Thus we may let \(k \rightarrow+\infty(\text { hence also } q \rightarrow+\infty)\) in the inequality \(\left|s_{q}^{\prime}-s_{k}\right|<\varepsilon,\) with \(\varepsilon\) fixed. Then

    \[
    s_{k} \rightarrow \sum_{m=1}^{\infty} f_{m} \text { and } s_{q}^{\prime} \rightarrow \sum_{i=1}^{\infty} g_{i},
    \]

    so

    \[
    \left|\sum_{i=1}^{\infty} g_{i}-\sum_{m=1}^{\infty} f_{m}\right| \leq \varepsilon.
    \]

    Now let \(\varepsilon \rightarrow 0\) to get

    \[
    \sum_{i=1}^{\infty} g_{i}=\sum_{m=1}^{\infty} f_{m};
    \]

    similarly for pointwise convergence. \(\square\)

    II. Next, we develop some simple tests for absolute convergence.

    Theorem \(\PageIndex{2}\)

    (comparison test). Suppose

    \[
    (\forall m) \quad\left|f_{m}\right| \leq\left|g_{m}\right| \text { on } B.
    \]

    Then

    (i) \(\sum_{m=1}^{\infty}\left|f_{m}\right| \leq \sum_{m=1}^{\infty}\left|g_{m}\right|\) on \(B\);

    (ii) \(\sum_{m=1}^{\infty}\left|f_{m}\right|=+\infty\) implies \(\sum_{m=1}^{\infty}\left|g_{m}\right|=+\infty\) on \(B ;\) and

    (iii) If \(\sum\left|g_{m}\right|\) converges (pointwise or uniformly \()\) on \(B,\) so does \(\sum\left|f_{m}\right|\).

    Proof

    Conclusion (i) follows by letting \(n \rightarrow \infty\) in

    \[
    \sum_{m=1}^{n}\left|f_{m}\right| \leq \sum_{m=1}^{n}\left|g_{m}\right|.
    \]

    In turn, (ii) is a direct consequence of \((\mathrm{i})\).

    Also, by (i),

    \[
    \sum_{m=1}^{\infty}\left|g_{m}\right|<+\infty \text { implies } \sum_{m=1}^{\infty}\left|f_{m}\right|<+\infty.
    \]

    This proves (iii) for the pointwise case (see Note 1). The uniform case follows exactly as in Theorem 1(i) on noting that

    \[
    \sum_{k=m}^{n}\left|f_{k}\right| \leq \sum_{k=m}^{n}\left|g_{k}\right|
    \]

    and that the functions \(\left|f_{k}\right|\) and \(\left|g_{k}\right|\) are real (so Theorem \(3^{\prime}\) in §12 does apply). \(\square\)

    Theorem \(\PageIndex{3}\) (Weierstrass "M-test")

    If \(\sum M_{n}\) is a convergent series of real constants \(M_{n} \geq 0\) and if

    \[(\forall n) \quad\left|f_{n}\right| \leq M_{n}\]

    on a set \(B,\) then \(\sum\left|f_{n}\right|\) converges uniformly on \(B.\) Moreover,

    \[\sum_{n=1}^{\infty}\left|f_{n}\right| \leq \sum_{n=1}^{\infty} M_{n} \quad \text { on } B.\]

    Proof

    Use Theorem 2 with \(\left|g_{n}\right|=M_{n},\) noting that \(\sum\left|g_{n}\right|\) converges uniformly since the \(\left|g_{n}\right|\) are constant (§12, Problem 7). \(\square\)

    Examples

    (a) Let

    \[f_{n}(x)=\left(\frac{1}{2} \sin x\right)^{n} \text { on } E^{1}.\]

    Then

    \[(\forall n)\left(\forall x \in E^{1}\right) \quad\left|f_{n}(x)\right| \leq 2^{-n},\]

    and \(\sum 2^{-n}\) converges (geometric series with ratio \(\frac{1}{2}\); see §12, Problem 18). Thus, setting \(M_{n}=2^{-n}\) in Theorem 3, we infer that the series \(\sum\left|\frac{1}{2} \sin x\right|^{n}\) converges uniformly on \(E^{1},\) as does \(\sum\left(\frac{1}{2} \sin x\right)^{n};\) moreover,

    \[\sum_{n=1}^{\infty}\left|f_{n}\right| \leq \sum_{n=1}^{\infty} 2^{-n}=1.\]
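    A numerical check of Example (a), sampling \(x\) across an interval: the bound \(\left|f_{n}(x)\right| \leq 2^{-n}\) holds at every sample point, and the partial sums of \(\sum\left|f_{n}\right|\) stay below \(1\).

```python
import math

# Weierstrass M-test data for f_n(x) = (sin(x)/2)**n with M_n = 2**(-n).
xs = [k * 0.1 for k in range(-100, 101)]  # sample points in E^1
N = 40  # number of terms kept; the tail beyond this is below 2**(-40)

def f(n, x):
    return (0.5 * math.sin(x)) ** n

bound_ok = all(abs(f(n, x)) <= 2.0 ** (-n) for n in range(1, N + 1) for x in xs)
abs_sums = [sum(abs(f(n, x)) for n in range(1, N + 1)) for x in xs]
```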

    Theorem \(\PageIndex{4}\) (necessary condition of convergence)

    If \(\sum f_{m}\) or \(\sum\left|f_{m}\right|\) converges on \(B\) (pointwise or uniformly), then \(\left|f_{m}\right| \rightarrow 0\) on \(B\) (in the same sense).

    Thus a series cannot converge unless its general term tends to \(0\) (respectively, \(\overline{0}\)).

    Proof

    If \(\sum f_{m}=f,\) say, then \(s_{m} \rightarrow f\) and also \(s_{m-1} \rightarrow f .\) Hence

    \[s_{m}-s_{m-1} \rightarrow f-f=\overline{0}.\]

    However, \(s_{m}-s_{m-1}=f_{m} .\) Thus \(f_{m} \rightarrow \overline{0},\) and \(\left|f_{m}\right| \rightarrow 0,\) as claimed.

    This holds for pointwise and uniform convergence alike (see Problem 14 in §12) . \(\quad \square\)

    Caution: The condition \(\left|f_{m}\right| \rightarrow 0\) is necessary but not sufficient. Indeed, there are divergent series with general term tending to \(0,\) as we show next.

    Examples (Continued)

    (b) \(\sum_{n=1}^{\infty} \frac{1}{n}=+\infty\) (the so-called harmonic series).

    Indeed, by Note 1,

    \[\sum_{n=1}^{\infty} \frac{1}{n} \quad \text { exists }\left(\text {in } E^{*}\right),\]

    so Theorem 4 of §12 applies. We group the series as follows:

    \[\begin{aligned} \sum \frac{1}{n} &=1+\frac{1}{2}+\left(\frac{1}{3}+\frac{1}{4}\right)+\left(\frac{1}{5}+\frac{1}{6}+\frac{1}{7}+\frac{1}{8}\right)+\left(\frac{1}{9}+\cdots+\frac{1}{16}\right)+\cdots \\ & \geq \frac{1}{2}+\frac{1}{2}+\left(\frac{1}{4}+\frac{1}{4}\right)+\left(\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}\right)+\left(\frac{1}{16}+\cdots+\frac{1}{16}\right)+\cdots. \end{aligned}\]

    Each bracketed expression now equals \(\frac{1}{2}.\) Thus

    \[\sum \frac{1}{n} \geq \sum g_{m}, \quad g_{m}=\frac{1}{2}.\]

    As \(g_{m}\) does not tend to \(0, \sum g_{m}\) diverges, i.e., \(\sum_{m=1}^{\infty} g_{m}\) is infinite, by Theorem 4. A fortiori, so is \(\sum_{n=1}^{\infty} \frac{1}{n}\).
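    The grouping argument yields the explicit lower bound \(\sum_{n=1}^{2^{k}} \frac{1}{n} \geq 1+\frac{k}{2}\), which a short computation confirms:

```python
# Partial sums of the harmonic series: grouping the terms in blocks of
# lengths 1, 1, 2, 4, 8, ... shows H(2**k) >= 1 + k/2, so the partial
# sums exceed any fixed bound, even though the general term 1/n tends to 0.
def H(n):
    return sum(1.0 / i for i in range(1, n + 1))

lower_bounds_hold = all(H(2 ** k) >= 1 + k / 2 for k in range(15))
```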

    Theorem \(\PageIndex{5}\) (root and ratio tests)

    A series of constants \(\sum a_{n}\left(\left|a_{n}\right| \neq 0\right)\) converges absolutely if

    \[\overline{\lim } \sqrt[n]{\left|a_{n}\right|}<1 \text { or } \overline{\lim }\left(\frac{\left|a_{n+1}\right|}{\left|a_{n}\right|}\right)<1.\]

    It diverges if

    \[\overline{\lim } \sqrt[n]{\left|a_{n}\right|}>1 \text { or } \underline{\lim } \left(\frac{\left|a_{n+1}\right|}{\left|a_{n}\right|}\right)>1.\]

    It may converge or diverge if

    \[\overline{\lim } \sqrt[n]{\left|a_{n}\right|}=1\]

    or if

    \[\underline{\lim } \left(\frac{\left|a_{n+1}\right|}{\left|a_{n}\right|}\right) \leq 1 \leq \overline{\lim }\left(\frac{\left|a_{n+1}\right|}{\left|a_{n}\right|}\right).\]

    (The \(a_{n}\) may be scalars or vectors.)

    Proof

    If \(\overline{\lim } \sqrt[n]{\left|a_{n}\right|}<1,\) choose \(r>0\) such that

    \[\overline{\lim } \sqrt[n]{\left|a_{n}\right|}<r<1.\]

    Then by Corollary 2 of Chapter 2, §13, \(\sqrt[n]{\left|a_{n}\right|}<r\) for all but finitely many \(n .\) Thus, dropping a finite number of terms (§12, Problem 17), we may assume that

    \[\left|a_{n}\right|<r^{n} \text { for all } n.\]

    As \(0<r<1,\) the geometric series \(\sum r^{n}\) converges. Hence so does \(\sum\left|a_{n}\right|\) by Theorem 2.

    In the case

    \[\overline{\lim }\left(\frac{\left|a_{n+1}\right|}{\left|a_{n}\right|}\right)<1,\]

    we similarly obtain \((\exists m)(\forall n \geq m)\left|a_{n+1}\right|<\left|a_{n}\right| r;\) hence by induction,

    \[(\forall n \geq m) \quad\left|a_{n}\right| \leq\left|a_{m}\right| r^{n-m}. \quad \text { (Verify!) }\]

    Thus \(\sum\left|a_{n}\right|\) converges, as before.

    If \(\overline{\lim } \sqrt[n]{\left|a_{n}\right|}>1,\) then by Corollary 2 of Chapter 2, §13, \(\sqrt[n]{\left|a_{n}\right|}>1,\) and hence \(\left|a_{n}\right|>1,\) for infinitely many \(n.\) Hence \(\left|a_{n}\right|\) cannot tend to \(0,\) and so \(\sum a_{n}\) diverges by Theorem 4.

    Similarly, if

    \[\underline{\lim } \left(\frac{\left|a_{n+1}\right|}{\left|a_{n}\right|}\right)>1,\]

    then \(\left|a_{n+1}\right|>\left|a_{n}\right|\) for all but finitely many \(n,\) so again \(\left|a_{n}\right|\) cannot tend to \(0\). \(\quad \square\)

    Note 3. We have

    \[\underline{\lim } \left(\frac{\left|a_{n+1}\right|}{\left|a_{n}\right|}\right) \leq \underline{\lim } \sqrt[n]{\left|a_{n}\right|} \leq \overline{\lim } \sqrt[n]{\left|a_{n}\right|} \leq \overline{\lim }\left(\frac{\left|a_{n+1}\right|}{\left|a_{n}\right|}\right).\]

    Thus

    \[\begin{array}{l}{\quad \overline{\lim }\left(\frac{\left|a_{n+1}\right|}{\left|a_{n}\right|}\right)<1 \text { implies } \overline{\lim } \sqrt[n]{\left|a_{n}\right|}<1 ; \text { and }} \\ {\qquad \underline{\lim } \left(\frac{\left|a_{n+1}\right|}{\left|a_{n}\right|}\right)>1 \text { implies } \overline{\lim } \sqrt[n]{\left|a_{n}\right|}>1.}\end{array}\]

    Hence whenever the ratio test indicates convergence or divergence, so certainly does the root test. On the other hand, there are cases where the root test yields a result while the ratio test does not. Thus the root test is stronger (but the ratio test is often easier to apply).

    Examples (continued)

    (c) Let \(a_{n}=2^{-k}\) if \(n=2 k-1\) (odd) and \(a_{n}=3^{-k}\) if \(n=2 k\) (even). Thus

    \[\sum a_{n}=\frac{1}{2^{1}}+\frac{1}{3^{1}}+\frac{1}{2^{2}}+\frac{1}{3^{2}}+\frac{1}{2^{3}}+\frac{1}{3^{3}}+\frac{1}{2^{4}}+\frac{1}{3^{4}}+\cdots.\]

    Here

    \[\underline{\lim } \left(\frac{a_{n+1}}{a_{n}}\right)=\lim _{k \rightarrow \infty} \frac{3^{-k}}{2^{-k}}=0 \text { and } \overline{\lim }\left(\frac{a_{n+1}}{a_{n}}\right)=\lim _{k \rightarrow \infty} \frac{2^{-k-1}}{3^{-k}}=+\infty,\]

    while

    \[\overline{\lim } \sqrt[n]{a_{n}}=\lim _{k \rightarrow \infty} \sqrt[2k-1]{2^{-k}}=\frac{1}{\sqrt{2}}<1. \quad(\text {Verify!})\]

    Thus the ratio test fails, but the root test proves convergence.
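    Computing the ratios and roots of Example (c) numerically shows the contrast: the ratios oscillate between values near \(0\) and very large values, while every \(n\)th root stays safely below \(1\).

```python
# a_n = 2**(-k) for n = 2k-1 (odd), a_n = 3**(-k) for n = 2k (even).
def a(n):
    k = (n + 1) // 2
    return 2.0 ** (-k) if n % 2 == 1 else 3.0 ** (-k)

ratios = [a(n + 1) / a(n) for n in range(1, 40)]  # oscillate: (2/3)**k vs (3/2)**k / 2
roots = [a(n) ** (1.0 / n) for n in range(1, 40)]  # cluster near 1/sqrt(2) and 1/sqrt(3)

# The series itself sums to 1 + 1/2 = 3/2 (two interleaved geometric series).
total = sum(a(n) for n in range(1, 200))
```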

    Note 4. The assumption \(\left|a_{n}\right| \neq 0\) is needed for the ratio test only.

    III. Power Series. As an application, we now consider so-called power series,

    \[\sum a_{n}(x-p)^{n},\]

    where \(x, p, a_{n} \in E^{1}\) (or \(C\)); the \(a_{n}\) may also be vectors.

    Theorem \(\PageIndex{6}\)

    For any power series \(\sum a_{n}(x-p)^{n},\) there is a unique \(r \in E^{*}\) \((0 \leq r \leq+\infty),\) called its convergence radius, such that the series converges absolutely for each \(x\) with \(|x-p|<r\) and does not converge (even conditionally) if \(|x-p|>r.\)

    Specifically,

    \[r=\frac{1}{d}, \text { where } d=\overline{\lim } \sqrt[n]{\left|a_{n}\right|} \quad \text { (with } r=+\infty \text { if } d=0).\]

    Proof

    Fix any \(x=x_{0}.\) By Theorem 5, the series \(\sum a_{n}\left(x_{0}-p\right)^{n}\) converges absolutely if \(\overline{\lim } \sqrt[n]{\left|a_{n}\right|}\left|x_{0}-p\right|<1,\) i.e., if

    \[\left|x_{0}-p\right|<r \quad\left(r=\frac{1}{\overline{\lim } \sqrt[n]{\left|a_{n}\right|}}=\frac{1}{d}\right),\]

    and diverges if \(\left|x_{0}-p\right|>r.\) (Here we assumed \(d \neq 0;\) but if \(d=0,\) the condition \(d\left|x_{0}-p\right|<1\) is trivial for any \(x_{0},\) so \(r=+\infty\) in this case.) Thus \(r\) is the required radius, and clearly there can be only one such \(r.\) (Why?) \(\square\)

    Note 5. If \(\lim _{n \rightarrow \infty} \frac{\left|a_{n+1}\right|}{\left|a_{n}\right|}\) exists, it equals \(\lim _{n \rightarrow \infty} \sqrt[n]{\left|a_{n}\right|},\) by Note 3 (for \(\overline{\lim}\) and \(\underline{\lim}\) coincide here). In this case, one can use the ratio test to find

    \[d=\lim _{n \rightarrow \infty} \frac{\left|a_{n+1}\right|}{\left|a_{n}\right|}\]

    and hence (if \(d \neq 0 )\)

    \[r=\frac{1}{d}=\lim _{n \rightarrow \infty} \frac{\left|a_{n}\right|}{\left|a_{n+1}\right|}.\]
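    A sketch of Note 5 with an illustrative choice of coefficients (not from the text), \(a_{n}=n \cdot 2^{n}\): here \(\frac{\left|a_{n}\right|}{\left|a_{n+1}\right|}=\frac{n}{2(n+1)} \rightarrow \frac{1}{2}\), so the convergence radius is \(r=\frac{1}{2}\).

```python
# Estimate the convergence radius r = lim |a_n| / |a_{n+1}| for the
# illustrative coefficients a_n = n * 2**n; the limit is 1/2.
def a(n):
    return n * 2.0 ** n

ratios = [a(n) / a(n + 1) for n in range(1, 100)]
r_estimate = ratios[-1]  # equals n / (2*(n+1)) at n = 99, already close to 1/2
```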

    Theorem \(\PageIndex{7}\)

    If a power series \(\sum a_{n}(x-p)^{n}\) converges absolutely for some \(x=x_{0} \neq p,\) then \(\sum\left|a_{n}(x-p)^{n}\right|\) converges uniformly on the closed globe \(\overline{G}_{p}(\delta),\) where \(\delta=\left|x_{0}-p\right|.\) So does \(\sum a_{n}(x-p)^{n}\) if the range space is complete (Theorem 1).

    Proof

    Suppose \(\sum\left|a_{n}\left(x_{0}-p\right)^{n}\right|\) converges. Let

    \[\delta=\left|x_{0}-p\right| \text { and } M_{n}=\left|a_{n}\right| \delta^{n};\]

    thus \(\sum M_{n}\) converges.

    Now if \(x \in \overline{G}_{p}(\delta),\) then \(|x-p| \leq \delta,\) so

    \[\left|a_{n}(x-p)^{n}\right| \leq\left|a_{n}\right| \delta^{n}=M_{n}.\]

    Hence by Theorem 3, \(\sum\left|a_{n}(x-p)^{n}\right|\) converges uniformly on \(\overline{G}_{p}(\delta). \square\)

    Examples (Continued)

    (d) Consider \(\sum \frac{x^{n}}{n!}.\) Here

    \[p=0 \text { and } a_{n}=\frac{1}{n !}, \text { so } \frac{\left|a_{n}\right|}{\left|a_{n+1}\right|}=n+1 \rightarrow+\infty.\]

    By Note 5, then, \(r=+\infty ;\) i.e., the series converges absolutely on all of \(E^{1} .\) Hence by Theorem 7, it converges uniformly on any \(\overline{G}_{0}(\delta),\) hence on any finite interval in \(E^{1}\). (The pointwise convergence is on all of \(E^{1}\).)
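    A numerical check of Example (d): the partial sums of \(\sum x^{n}/n!\) match \(e^{x}\) uniformly well on a bounded interval (here \([-3,3]\), sampled).

```python
import math

# Partial sums of the series with terms x**n / n!, compared with exp(x)
# on the interval [-3, 3]; with N = 30 terms the worst error is tiny,
# illustrating uniform convergence on any finite interval.
def partial_exp(x, N=30):
    return sum(x ** n / math.factorial(n) for n in range(N + 1))

xs = [k * 0.25 for k in range(-12, 13)]
max_err = max(abs(partial_exp(x) - math.exp(x)) for x in xs)
```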