Mathematics LibreTexts

4.1: Sequences of Real Numbers


    An infinite sequence (more briefly, a sequence) of real numbers is a real-valued function defined on a set of integers \(\{n : n\ge k\}\). We call the values of the function the terms of the sequence. We denote a sequence by listing its terms in order; thus, \[\begin{equation} \label{eq:4.1.1} \{s_n\}^\infty_k=\{s_k,s_{k+1}, \dots\}. \end{equation} \nonumber \] For example, \[\begin{aligned} \left\{\frac{1}{n^2+1}\right\}^\infty_0&=\left\{1,\frac{1}{2},\frac{1}{5}, \dots, \frac{1}{n^2+1}, \dots\right\},\\ \left\{(-1)^n\right\}^\infty_0&=\left\{1,-1,1, \dots,(-1)^n, \dots\right\},\\ \mbox{and}\quad \left\{\frac{1}{n-2}\right\}^\infty_3&=\left\{1,\frac{1}{2},\frac{1}{3}, \dots, \frac{1}{n-2}, \dots\right\}. \end{aligned} \nonumber \] The real number \(s_n\) is the \(n\)th term of the sequence. Usually we are interested only in the terms of a sequence and the order in which they appear, but not in the particular value of \(k\) in (4.1.1). Therefore, we regard the sequences \[ \left\{\frac{1}{n-2}\right\}^\infty_3\mbox{\quad and\quad}\left\{\frac{1}{n}\right\}^\infty_1 \nonumber \] as identical.

    We will usually write \(\{s_n\}\) rather than \(\{s_n\}^\infty_k\). In the absence of any indication to the contrary, we take \(k=0\) unless \(s_n\) is given by a rule that is invalid for some nonnegative integer, in which case \(k\) is understood to be the smallest positive integer such that \(s_n\) is defined for all \(n\ge k\). For example, if \[ s_n=\frac{1}{(n-1)(n-5)}, \nonumber \] then \(k=6\).
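The convention for choosing \(k\) can be checked mechanically. The following Python sketch (our illustration, not part of the text) locates the indices where the rule for \(s_n=1/((n-1)(n-5))\) fails and recovers \(k=6\):

```python
from fractions import Fraction

def s(n):
    """s_n = 1/((n-1)(n-5)); undefined (division by zero) at n = 1 and n = 5."""
    return Fraction(1, (n - 1) * (n - 5))

# The rule is invalid exactly where (n-1)(n-5) = 0, so the smallest k with
# s_n defined for all n >= k is one past the last bad index.
bad = [n for n in range(0, 20) if (n - 1) * (n - 5) == 0]
k = max(bad) + 1
print(k, s(k))   # 6 1/5
```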

    The interesting questions about a sequence \(\{s_n\}\) concern the behavior of \(s_n\) for large \(n\).

    As we saw in Section~2.1 when discussing limits of functions, the definition of \(\lim_{n\to\infty}s_n=s\) is not changed by replacing the condition \(|s_n-s|<\epsilon\) with \[ |s_n-s|<K\epsilon\mbox{\quad if\quad} n\ge N, \nonumber \] where \(K\) is a positive constant.

    Definition~ does not require that there be an integer \(N\) such that \(|s_n-s|<\epsilon\) holds for all \(\epsilon\); rather, it requires that for each positive \(\epsilon\) there be an integer \(N\) that satisfies the inequality for that particular \(\epsilon\). Usually, \(N\) depends on \(\epsilon\) and must be increased if \(\epsilon\) is decreased. The constant sequences (Example~) are essentially the only ones for which \(N\) does not depend on \(\epsilon\) (Exercise~).

    We say that the terms of a sequence \(\{s_n\}^\infty_k\) satisfy a given condition for all \(n\) if \(s_n\) satisfies the condition for all \(n\ge k\), or for large \(n\) if there is an integer \(N>k\) such that \(s_n\) satisfies the condition whenever \(n\ge N\). For example, the terms of \(\{1/n\}^\infty_1\) are positive for all \(n\), while those of \(\{1-7/n\}^\infty_1\) are positive for large \(n\) (take \(N=8\)).

    Suppose that \[ \lim_{n\to\infty}s_n=s\mbox{\quad and \quad} \lim_{n\to\infty}s_n=s'. \nonumber \] We must show that \(s=s'\). Let \(\epsilon>0\). From Definition~, there are integers \(N_1\) and \(N_2\) such that \[ |s_n-s|<\epsilon\mbox{\quad if\quad} n\ge N_1 \nonumber \] (because \(\lim_{n\to\infty} s_n=s\)), and \[ |s_n-s'|<\epsilon\mbox{\quad if\quad} n\ge N_2 \nonumber \]

    (because \(\lim_{n\to\infty}s_n=s'\)). These inequalities both hold if \(n\ge N=\max (N_1,N_2)\), which implies that \[\begin{aligned} |s-s'|&=|(s-s_N)+(s_N-s')|\\ &\le |s-s_N|+|s_N-s'|<\epsilon+\epsilon=2\epsilon. \end{aligned} \nonumber \] Since this inequality holds for every \(\epsilon>0\) and \(|s-s'|\) is independent of \(\epsilon\), we conclude that \(|s-s'|=0\); that is, \(s=s'\).

    We say that \[ \lim_{n\to\infty} s_n=\infty \nonumber \] if for any real number \(a\), \(s_n>a\) for large \(n\). Similarly, \[ \lim_{n\to\infty} s_n=-\infty \nonumber \] if for any real number \(a\), \(s_n<a\) for large \(n\). However, we do not regard \(\{s_n\}\) as convergent unless \(\lim_{n\to\infty}s_n\) is finite, as required by Definition~. To emphasize this distinction, we say that \(\{s_n\}\) diverges to \(\infty\ (-\infty)\) if \(\lim_{n\to\infty}s_n=\infty\ (-\infty)\).

    \begin{definition} A sequence \(\{s_n\}\) is bounded above if there is a real number \(b\) such that \[ s_n\le b\mbox{\quad for all $n$}, \nonumber \] bounded below if there is a real number \(a\) such that \[ s_n\ge a\mbox{\quad for all $n$}, \nonumber \] or bounded if there is a real number \(r\) such that \[ |s_n|\le r\mbox{\quad for all $n$}. \nonumber \] \end{definition}

    By taking \(\epsilon=1\) in the definition of the limit, we see that if \(\lim_{n\to\infty} s_n=s\), then there is an integer \(N\) such that \[ |s_n-s|<1\mbox{\quad if\quad} n\ge N. \nonumber \] Therefore, \[ |s_n|=|(s_n-s)+s|\le|s_n-s|+|s|<1+|s|\mbox{\quad if\quad} n\ge N, \nonumber \] and \[ |s_n|\le\max\{|s_0|,|s_1|, \dots,|s_{N-1}|, 1+|s|\} \nonumber \] for all \(n\), so \(\{s_n\}\) is bounded.

    Let \(\beta=\sup\{s_n\}\). If \(\beta<\infty\), Theorem~ implies that if \(\epsilon>0\) then \[ \beta-\epsilon<s_N\le\beta \nonumber \] for some integer \(N\). Since \(s_N\le s_n\le\beta\) if \(n\ge N\), it follows that \[ \beta-\epsilon<s_n\le\beta\mbox{\quad if\quad} n\ge N. \nonumber \] This implies that \(|s_n-\beta|<\epsilon\) if \(n\ge N\), so \(\lim_{n\to\infty}s_n=\beta\), by Definition~. If \(\beta=\infty\) and \(b\) is any real number, then \(s_N>b\) for some integer \(N\). Then \(s_n>b\) for \(n\ge N\), so \(\lim_{n\to\infty}s_n=\infty\).
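The argument above can be illustrated numerically. In this Python sketch (our illustration; the sequence \(s_n=1-1/n\) is our choice, not the text's) the supremum is \(\beta=1\), and we exhibit an \(N\) witnessing the definition of the limit for \(\epsilon=10^{-6}\):

```python
def s(n):
    # nondecreasing and bounded above by 1
    return 1.0 - 1.0 / n

beta = 1.0   # sup of the sequence
eps = 1e-6
# Find an N with beta - eps < s_N <= beta; monotonicity then gives
# |s_n - beta| < eps for all n >= N, as in the proof.
N = next(n for n in range(1, 10**8) if s(n) > beta - eps)
assert all(abs(s(n) - beta) < eps for n in range(N, N + 1000))
print(N)   # 1000001
```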

    We leave the proof to you (Exercise~).

    The next theorem enables us to apply the theory of limits developed in Section~2.1 to some sequences. We leave the proof to you (Exercise~).

    The next theorem enables us to investigate convergence of sequences by examining simpler sequences. It is analogous to Theorem~.

    We prove the product and quotient rules and leave the rest to you (Exercises~ and~). For the product rule, we write \[ s_nt_n-st=s_nt_n-st_n+st_n-st =(s_n-s)t_n+s(t_n-t); \nonumber \]

    hence, \[\begin{equation}\label{eq:4.1.10} |s_nt_n-st|\le |s_n-s|\,|t_n|+|s|\,|t_n-t|. \end{equation} \nonumber \] Since \(\{t_n\}\) converges, it is bounded (Theorem~). Therefore, there is a number \(R\) such that \(|t_n|\le R\) for all \(n\), and (4.1.10) implies that \[\begin{equation}\label{eq:4.1.11} |s_nt_n-st|\le R|s_n-s|+|s|\,|t_n-t|. \end{equation} \nonumber \] Since \(\lim_{n\to\infty}s_n=s\) and \(\lim_{n\to\infty}t_n=t\), if \(\epsilon>0\) there are integers \(N_1\) and \(N_2\) such that

    \[\begin{eqnarray} |s_n-s|&<&\epsilon\mbox{\quad if\quad} n\ge N_1, \label{eq:4.1.12}\\ |t_n-t|&<&\epsilon\mbox{\quad if\quad} n\ge N_2.\label{eq:4.1.13} \end{eqnarray} \nonumber \]

    If \(N=\max (N_1,N_2)\), then (4.1.12) and (4.1.13) both hold when \(n\ge N\), and (4.1.11) implies that \[ |s_nt_n-st|\le (R+|s|)\epsilon\mbox{\quad if\quad} n\ge N. \nonumber \] This proves that \(\lim_{n\to\infty}s_nt_n=st\).

    Now consider the quotient rule in the special case where \(s_n=1\) for all \(n\) and \(t\ne 0\); thus, we want to show that \[ \lim_{n\to\infty}\frac{1}{ t_n}=\frac{1}{ t}. \nonumber \]

    First, observe that since \(\lim_{n\to\infty} t_n=t\ne0\), there is an integer \(M\) such that \(|t_n|\ge |t|/2\) if \(n\ge M\). To see this, we apply Definition~ with \(\epsilon=|t|/2\); thus, there is an integer \(M\) such that \(|t_n-t|<|t|/2\) if \(n\ge M\). Therefore, \[ |t_n|=|t+(t_n-t)|\ge ||t|-|t_n-t||\ge\frac{|t|}{2}\mbox{\quad if \quad} n\ge M. \nonumber \] If \(\epsilon>0\), choose \(N_0\) so that \(|t_n-t|<\epsilon\) if \(n\ge N_0\), and let \(N=\max (N_0,M)\). Then \[ \left|\frac{1}{ t_n}-\frac{1}{ t}\right|=\frac{|t-t_n|}{ |t_n|\,|t|}\le\frac {2 \epsilon}{ |t|^2}\mbox{\quad if\quad} n\ge N; \nonumber \] hence, \(\lim_{n\to\infty} 1/t_n=1/t\). The general case of the quotient rule now follows from the product rule with \(\{t_n\}\) replaced by \(\{1/t_n\}\).

    These relations remain valid even if \(s\) and \(t\) are arbitrary extended reals, provided that their right sides are defined in the extended reals (Exercises~, , and ); the quotient rule is valid if \(s/t\) is defined in the extended reals and \(t\ne0\) (Exercise~).

    Requiring a sequence to converge may be unnecessarily restrictive in some situations. Often, useful results can be obtained from assumptions on the limit superior and limit inferior of a sequence, which we consider next.

    We will prove the assertion concerning \(\overline{s}\) and leave the rest to you (Exercise~). Since \(\{s_n\}\) is bounded above, there is a number \(\beta\) such that \(s_n<\beta\) for all \(n\). Since \(\{s_n\}\) does not diverge to \(-\infty\), there is a number \(\alpha\) such that \(s_n> \alpha\) for infinitely many \(n\). If we define \[ M_k=\sup\{s_k,s_{k+1}, \dots,s_{k+r}, \dots\}, \nonumber \]

    then \(\alpha\le M_k\le\beta\), so \(\{M_k\}\) is bounded. Since \(\{M_k\}\) is nonincreasing (why?), it converges, by Theorem~. Let \[\begin{equation} \label{eq:4.1.20} \overline{s}=\lim_{k\to\infty} M_k. \end{equation} \nonumber \] If \(\epsilon>0\), then \(M_k<\overline{s}+\epsilon\) for large \(k\), and since \(s_n\le M_k\) for \(n\ge k\), it follows that \(s_n<\overline{s}+\epsilon\) for large \(n\).

    If the second property of \(\overline{s}\) (that \(s_n>\overline{s}-\epsilon\) for infinitely many \(n\)) were false for some positive \(\epsilon\), there would be an integer \(K\) such that \[ s_n\le\overline{s}-\epsilon\mbox{\quad if\quad} n\ge K. \nonumber \] However, this implies that \[ M_k\le\overline{s}-\epsilon\mbox{\quad if\quad} k\ge K, \nonumber \] which contradicts (4.1.20). Therefore, \(\overline{s}\) has the stated properties.

    Now we must show that \(\overline{s}\) is the only real number with the stated properties. If \(t<\overline{s}\), the inequality \[ s_n<t+\frac{\overline{s}-t}{2}=\overline{s}-\frac{\overline{s}-t}{2} \nonumber \] cannot hold for all large \(n\), because this would contradict the second of the stated properties with \(\epsilon=(\overline{s}-t)/2\). If \(\overline{s}<t\), the inequality \[ s_n> t-\frac{t-\overline{s}}{2}=\overline{s}+\frac{t-\overline{s}}{ 2} \nonumber \] cannot hold for infinitely many \(n\), because this would contradict the first of the stated properties with \(\epsilon=(t-\overline{s})/2\). Therefore, \(\overline{s}\) is the only real number with the stated properties.
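The construction of \(\overline{s}\) via the nonincreasing sequence \(M_k\) can be observed numerically. In this Python sketch (our own example, not from the text) we take \(s_n=(-1)^n(1+1/n)\), for which \(\overline{s}=1\); the supremum of a tail is approximated by a maximum over a long finite stretch, which happens to be exact for this sequence because the supremum is attained at the first even index:

```python
def s(n):
    return (-1) ** n * (1 + 1.0 / n)

# M_k ~ sup{s_n : n >= k}, approximated over a finite horizon; exact here
# since the sup is attained at the first even index >= k.
def M(k, horizon=10000):
    return max(s(n) for n in range(k, k + horizon))

# M_k is nonincreasing and tends to limsup s_n = 1.
assert all(M(k) >= M(k + 1) for k in range(1, 50))
print(round(M(5000), 4))   # 1.0002, i.e. 1 + 1/5000
```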

    The existence and uniqueness of \(\overline{s}\) and \(\underline{s}\) follow from Theorem~ and Definition~. If \(\overline{s}\) and \(\underline{s}\) are both finite, then their defining properties imply that \[ \underline{s}-\epsilon<\overline{s}+\epsilon \nonumber \] for every \(\epsilon>0\), which implies that \(\underline{s}\le\overline{s}\). If \(\underline{s}=-\infty\) or \(\overline{s}=\infty\), then this inequality is obvious. If \(\underline{s}=\infty\) or \(\overline{s}=-\infty\), then it follows immediately from Definition~.

    If \(s=\pm\infty\), the equivalence of the two statements follows immediately from their definitions. If \(\lim_{n\to\infty}s_n=s\) (finite), then Definition~ implies that the defining properties of \(\overline{s}\) and \(\underline{s}\) hold with \(\overline{s}\) and \(\underline{s}\) replaced by \(s\). Hence, \(\overline{s}=\underline{s}=s\) follows from the uniqueness of \(\overline{s}\) and \(\underline{s}\). For the converse, suppose that \(\overline{s}=\underline{s}\) and let \(s\) denote their common value. Then their defining properties imply that \[ s-\epsilon<s_n<s+\epsilon \nonumber \] for large \(n\), and \(\lim_{n\to\infty}s_n=s\) follows from Definition~ and the uniqueness of \(\lim_{n\to\infty}s_n\) (Theorem~).

    To determine from Definition~ whether a sequence has a limit, it is necessary to guess what the limit is. (This is particularly difficult if the sequence diverges!) To use Theorem~ for this purpose requires finding \(\overline{s}\) and \(\underline{s}\). The following convergence criterion has neither of these defects.

    Suppose that \(\lim_{n\to\infty}s_n=s\) and \(\epsilon>0\). By Definition~, there is an integer \(N\) such that \[ |s_r-s|<\frac{\epsilon}{2}\mbox{\quad if\quad} r\ge N. \nonumber \] Therefore, \[ |s_n-s_m|=|(s_n-s)+(s-s_m)|\le |s_n-s|+|s-s_m|<\epsilon \mbox{\quad if\quad} n,m\ge N. \nonumber \] Thus, the stated condition is necessary for convergence of \(\{s_n\}\). To see that it is sufficient, we first observe that it implies that \(\{s_n\}\) is bounded (Exercise~), so \(\overline{s}\) and \(\underline{s}\) are finite (Theorem~). Now suppose that \(\epsilon>0\) and \(N\) is chosen so that \(|s_n-s_m|<\epsilon\) if \(n,m\ge N\). From the defining properties of \(\overline{s}\), \[\begin{equation}\label{eq:4.1.25} |s_n-\overline{s}|<\epsilon \end{equation} \nonumber \] for some integer \(n>N\) and, from those of \(\underline{s}\), \[\begin{equation}\label{eq:4.1.26} |s_m-\underline{s}|<\epsilon \end{equation} \nonumber \] for some integer \(m>N\). Since \[\begin{aligned} |\overline{s}-\underline{s}|&=|(\overline{s}-s_n)+ (s_n-s_m)+(s_m-\underline{s})|\\ &\le |\overline{s}-s_n|+|s_n-s_m|+|s_m-\underline{s}|, \end{aligned} \nonumber \] (4.1.25), (4.1.26), and the choice of \(N\) imply that \[ |\overline{s}-\underline{s}|<3\epsilon. \nonumber \] Since \(\epsilon\) is an arbitrary positive number, this implies that \(\overline{s}=\underline{s}\), so \(\{s_n\}\) converges, by Theorem~.
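For a concrete sequence the Cauchy condition can be witnessed explicitly. This Python sketch (our illustration; the sequence \(s_n=n/(n+1)\) is chosen for convenience) exhibits, for a given \(\epsilon\), an \(N\) such that \(|s_n-s_m|<\epsilon\) whenever \(n,m\ge N\):

```python
# For s_n = n/(n+1), |s_n - s_m| = |1/(m+1) - 1/(n+1)| < 1/(N+1)
# whenever n, m >= N, so any N with 1/(N+1) < eps works.
def s(n):
    return n / (n + 1)

eps = 1e-4
N = 10_000               # 1/(N+1) < eps
sample = range(N, N + 200)
assert all(abs(s(n) - s(m)) < eps for n in sample for m in sample)
print(N)   # 10000
```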

    To see that the equation \(x=f(x)\) cannot have more than one solution, suppose that \(x=f(x)\) and \(x'=f(x')\). From this and the mean value theorem (Theorem~), \[ x-x'=f'(c)(x-x') \nonumber \] for some \(c\) between \(x\) and \(x'\). This and the assumption that \(|f'(x)|\le r<1\) imply that \[ |x-x'|\le r|x-x'|. \nonumber \] Since \(r<1\), \(x=x'\).

    We will now show that the equation \(x=f(x)\) has a solution. With \(x_0\) arbitrary, define \[\begin{equation}\label{eq:4.1.29} x_n=f(x_{n-1}),\quad n\ge1. \end{equation} \nonumber \] We will show that \(\{x_n\}\) converges. From (4.1.29) and the mean value theorem, \[ x_{n+1}-x_n=f(x_n)-f(x_{n-1}) =f'(c_n)(x_n-x_{n-1}), \nonumber \] where \(c_n\) is between \(x_{n-1}\) and \(x_n\). This and the assumption that \(|f'(x)|\le r\) imply that \[\begin{equation}\label{eq:4.1.30} |x_{n+1}-x_n|\le r|x_n-x_{n-1}|\mbox{\quad if\quad} n\ge1. \end{equation} \nonumber \] The inequality \[\begin{equation}\label{eq:4.1.31} |x_{n+1}-x_n|\le r^n |x_1-x_0|\mbox{\quad if\quad} n\ge0 \end{equation} \nonumber \] follows by induction from (4.1.30). Now, if \(n>m\), \[\begin{aligned} |x_n-x_m|&=|(x_n-x_{n-1})+(x_{n-1}-x_{n-2})+\cdots+(x_{m+1}-x_m)| \\ &\le |x_n-x_{n-1}|+|x_{n-1}-x_{n-2}|+\cdots+|x_{m+1}-x_m|, \end{aligned} \nonumber \] and (4.1.31) yields \[\begin{equation}\label{eq:4.1.32} |x_n-x_m|\le|x_1-x_0|\,r^m(1+r+\cdots+r^{n-m-1}). \end{equation} \nonumber \] In Example~ we saw that the sequence \(\{s_k\}\) defined by \[ s_k=1+r+\cdots+r^k \nonumber \] converges to \(1/(1-r)\) if \(|r|<1\); moreover, since we have assumed here that \(0<r<1\), \(\{s_k\}\) is nondecreasing, and therefore \(s_k<1/(1-r)\) for all \(k\). Therefore, (4.1.32) yields \[ |x_n-x_m|<\frac{|x_1-x_0|}{1-r}r^m\mbox{\quad if\quad} n>m. \nonumber \] Now it follows that \[ |x_n-x_m|<\frac{|x_1-x_0|}{1-r}r^N\mbox{\quad if\quad} n,m>N, \nonumber \] and, since \(\lim_{N\to\infty} r^N=0\), \(\{x_n\}\) converges, by Theorem~. If \(\widehat x=\lim_{n\to\infty}x_n\), then (4.1.29) and the continuity of \(f\) imply that \(\widehat x=f(\widehat x)\).
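The iteration \(x_n=f(x_{n-1})\) in this proof is the classical method of successive approximations, and it is easy to run. The following Python sketch is our illustration (the choice \(f=\cos\), which satisfies \(|f'(x)|=|\sin x|\le\sin 1<1\) on \([0,1]\), is an assumption not made in the text):

```python
import math

# Successive approximations x_n = f(x_{n-1}) for a contraction f.
def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

xhat = fixed_point(math.cos, 0.5)
# By continuity of f, the limit satisfies xhat = f(xhat).
assert abs(xhat - math.cos(xhat)) < 1e-10
print(round(xhat, 6))   # 0.739085
```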

    In Sections~2.1 through 2.3 we used \(\epsilon\)–\(\delta\) definitions and arguments to develop the theory of limits, continuity, and differentiability; for example, \(f\) is continuous at \(x_0\) if for each \(\epsilon>0\) there is a \(\delta>0\) such that \(|f(x)-f(x_0)|<\epsilon\) when \(|x-x_0|<\delta\). The same theory can be developed by methods based on sequences. Although we will not carry this out in detail, we will develop it enough to give some examples. First, we need another definition about sequences.

    Note that \(\{s_n\}\) is a subsequence of itself, as can be seen by taking \(n_k=k\). All other subsequences of \(\{s_n\}\) are obtained by deleting terms from \(\{s_n\}\) and leaving those remaining in their original relative order.

    Since a subsequence \(\{s_{n_k}\}\) is again a sequence (with respect to \(k\)), we may ask whether \(\{s_{n_k}\}\) converges.

    The sequence in this example has subsequences that converge to different limits. The next theorem shows that if a sequence converges to a finite limit or diverges to \(\pm\infty\), then all its subsequences do also.

    We consider the case where \(s\) is finite and leave the rest to you (Exercise~). If \(\lim_{n\to\infty}s_n=s\) and \(\epsilon>0\), there is an integer \(N\) such that \[ |s_n-s|<\epsilon\mbox{\quad if\quad} n\ge N. \nonumber \] Since \(\{n_k\}\) is an increasing sequence, there is an integer \(K\) such that \(n_k\ge N\) if \(k\ge K\). Therefore, \[ |s_{n_k}-s|<\epsilon\mbox{\quad if\quad} k\ge K, \nonumber \] so \(\lim_{k\to\infty}s_{n_k}=s\).

    We consider the case where \(\{s_n\}\) is nondecreasing and leave the rest to you (Exercise~). Since \(\{s_{n_k}\}\) is also nondecreasing in this case, it suffices to show that \[\begin{equation}\label{eq:4.2.3} \sup\{s_{n_k}\}=\sup\{s_n\} \end{equation} \nonumber \] and then apply Theorem~. Since the set of terms of \(\{s_{n_k}\}\) is contained in the set of terms of \(\{s_n\}\), \[\begin{equation} \label{eq:4.2.4} \sup\{s_n\}\ge\sup\{s_{n_k}\}. \end{equation} \nonumber \] Since \(\{s_n\}\) is nondecreasing, there is for every \(n\) an integer \(n_k\) such that \(s_n\le s_{n_k}\). This implies that \[ \sup\{s_n\}\le\,\sup\{s_{n_k}\}. \nonumber \] This and (4.2.4) imply (4.2.3).

    In Section~1.3 we defined limit points in terms of neighborhoods: \(\overline{x}\) is a limit point of a set \(S\) if every neighborhood of \(\overline{x}\) contains points of \(S\) distinct from \(\overline{x}\). The next theorem shows that an equivalent definition can be stated in terms of sequences.

    For sufficiency, suppose that the stated condition holds. Then, for each \(\epsilon>0\), there is an integer \(N\) such that \(0<|x_n-\overline{x}|<\epsilon\) if \(n\ge N\). Therefore, every \(\epsilon\)-neighborhood of \(\overline{x}\) contains infinitely many points of \(S\). This means that \(\overline{x}\) is a limit point of \(S\).

    For necessity, let \(\overline{x}\) be a limit point of \(S\). Then, for every integer \(n\ge1\), the interval \((\overline{x}-1/n,\overline{x}+1/n)\) contains a point \(x_n\ (\ne\overline{x})\) in \(S\). Since \(|x_m-\overline{x}|\le1/n\) if \(m\ge n\), \(\lim_{n\to\infty}x_n= \overline{x}\).

    We will use the next theorem to show that continuity can be defined in terms of sequences.

    We prove the first statement and leave the others to you (Exercise~). Let \(S\) be the set of distinct numbers that occur as terms of \(\{x_n\}\). (For example, if \(\{x_n\}=\{(-1)^n\}\), \(S=\{1,-1\}\); if \(\{x_n\}=\{1,\frac{1}{2}, 1, \frac{1}{3}, \dots, 1, 1/n, \dots\}\), \(S=\{1,\frac{1}{2}, \dots, 1/n, \dots\}\).) If \(S\) contains only finitely many points, then some \(\overline{x}\) in \(S\) occurs infinitely often in \(\{x_n\}\); that is, \(\{x_n\}\) has a subsequence \(\{x_{n_k}\}\) such that \(x_{n_k}=\overline{x}\) for all \(k\). Then \(\lim_{k\to\infty} x_{n_k}=\overline{x}\), and we are finished in this case.

    If \(S\) is infinite, then, since \(S\) is bounded (by assumption), the Bolzano–Weierstrass theorem (Theorem~) implies that \(S\) has a limit point \(\overline{x}\). From Theorem~, there is a sequence of points \(\{y_j\}\) in \(S\), distinct from \(\overline{x}\), such that \[\begin{equation}\label{eq:4.2.5} \lim_{j\to\infty} y_j=\overline{x}. \end{equation} \nonumber \] Although each \(y_j\) occurs as a term of \(\{x_n\}\), \(\{y_j\}\) is not necessarily a subsequence of \(\{x_n\}\), because if we write \[ y_j=x_{n_j}, \nonumber \] there is no reason to expect that \(\{n_j\}\) is an increasing sequence as required in Definition~. However, it is always possible to pick a subsequence \(\{n_{j_k}\}\) of \(\{n_j\}\) that is increasing, and then the sequence \(\{y_{j_k}\}=\{x_{n_{j_k}}\}\) is a subsequence of both \(\{y_j\}\) and \(\{x_n\}\). Because of (4.2.5) and Theorem~, this subsequence converges to~\(\overline{x}\).

    We now show that continuity can be defined and studied in terms of sequences.

    Assume that \(a<\overline{x}<b\); only minor changes in the proof are needed if \(\overline{x}=a\) or \(\overline{x}=b\). First, suppose that \(f\) is continuous at \(\overline{x}\) and \(\{x_n\}\) is a sequence of points in \([a,b]\) such that \(\lim_{n\to\infty}x_n=\overline{x}\). If \(\epsilon>0\), there is a \(\delta> 0\) such that \[\begin{equation} \label{eq:4.2.8} |f(x)-f(\overline{x})|<\epsilon\mbox{\quad if\quad} |x-\overline{x}| <\delta. \end{equation} \nonumber \] Since \(\lim_{n\to\infty}x_n=\overline{x}\), there is an integer \(N\) such that \(|x_n-\overline{x}|<\delta\) if \(n\ge N\). This and (4.2.8) imply that \(|f(x_n)-f(\overline{x})|<\epsilon\) if \(n\ge N\). Therefore, \(\lim_{n\to\infty}f(x_n)=f(\overline{x})\), which shows that the stated condition is necessary.

    For sufficiency, suppose that \(f\) is discontinuous at \(\overline{x}\). Then there is an \(\epsilon_0>0\) such that, for each positive integer \(n\), there is a point \(x_n\) that satisfies the inequality \[ |x_n-\overline{x}|<\frac{1}{ n} \nonumber \]

    while \[ |f(x_n)-f(\overline{x})|\ge\epsilon_0. \nonumber \] The sequence \(\{x_n\}\) therefore converges to \(\overline{x}\), but \(\{f(x_n)\}\) does not converge to \(f(\overline{x})\). Hence, the stated condition cannot hold if \(f\) is discontinuous at \(\overline{x}\). This proves sufficiency.

    Armed with the theorems we have proved so far in this section, we could develop the theory of continuous functions by means of definitions and proofs based on sequences and subsequences. We give one example, a new proof of Theorem~, and leave others for exercises.

    The proof is by contradiction. If \(f\) is not bounded on \([a,b]\), there is for each positive integer \(n\) a point \(x_n\) in \([a,b]\) such that \(|f(x_n)|>n\). This implies that \[\begin{equation}\label{eq:4.2.9} \lim_{n\to\infty}|f(x_n)|=\infty. \end{equation} \nonumber \] Since \(\{x_n\}\) is bounded, \(\{x_n\}\) has a convergent subsequence \(\{x_{n_k}\}\) (Theorem~). If \[ \overline{x}=\lim_{k\to\infty} x_{n_k}, \nonumber \] then \(\overline{x}\) is a limit point of \([a,b]\), so \(\overline{x}\in [a,b]\). If \(f\) is continuous on \([a,b]\), then \[ \lim_{k\to\infty} f(x_{n_k})=f(\overline{x}) \nonumber \] by Theorem~, so \[ \lim_{k\to\infty} |f(x_{n_k})|=|f(\overline{x})| \nonumber \] (Exercise~), which contradicts (4.2.9). Therefore, \(f\) cannot be both continuous and unbounded on \([a,b]\).

    The theory of sequences developed in the last two sections can be combined with the familiar notion of a finite sum to produce the theory of infinite series. We begin the study of infinite series in this section.

    We will usually refer to infinite series more briefly as series.

    The series \(\sum_{n=0}^\infty r^n\) is called the geometric series. It occurs in many applications.

    An infinite series can be viewed as a generalization of a finite sum \[ A=\sum_{n=k}^N a_n=a_k+a_{k+1}+\cdots+a_N \nonumber \] by thinking of the finite sequence \(\{a_k,a_{k+1}, \dots,a_N\}\) as being extended to an infinite sequence \(\{a_n\}_k^\infty\) with \(a_n=0\) for \(n >N\). Then the partial sums of \(\sum_{n=k}^\infty a_n\) are \[ A_n=a_k+a_{k+1}+\cdots+a_n,\quad k\le n<N, \nonumber \] and \[ A_n=A,\quad n\ge N; \nonumber \] that is, the terms of \(\{A_n\}_k^\infty\) equal the finite sum \(A\) for \(n \ge N\). Therefore, \(\lim_{n\to\infty}A_n=A\).
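For the geometric series mentioned above, the partial sums can be computed exactly and compared with the closed form. A Python sketch (our illustration, using exact rational arithmetic, with \(r=1/2\)):

```python
from fractions import Fraction

# Partial sums A_n of the geometric series sum_{n>=0} r^n with r = 1/2;
# A_n = (1 - r^{n+1}) / (1 - r), which tends to 1/(1 - r) = 2.
r = Fraction(1, 2)

def A(n):
    return sum(r ** j for j in range(n + 1))

assert A(10) == (1 - r ** 11) / (1 - r)   # closed form, exact
print(float(A(50)))                        # already within 2^-50 of 2
```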

    The next two theorems can be proved by applying Theorems~ and to the partial sums of the series in question (Exercises~ and ).

    Dropping finitely many terms from a series does not alter convergence or divergence, although it does change the sum of a convergent series if the terms dropped have a nonzero sum. For example, suppose that we drop the first \(k\) terms of a series \(\sum_{n=0}^\infty a_n\), and consider the new series \(\sum_{n=k}^\infty a_n\). Denote the partial sums of the two series by \[\begin{aligned} A_n&=a_0+a_1+\cdots+a_n,\quad n\ge0,\\ \mbox{and}\quad A'_n&=a_k+a_{k+1}+\cdots+a_n,\quad n\ge k. \end{aligned} \nonumber \]

    Since \[ A_n=(a_0+a_1+\cdots+a_{k-1})+A'_n,\quad n\ge k, \nonumber \]
    it follows that \(A=\lim_{n\to\infty} A_n\) exists (in the extended reals) if and only if \(A'=\lim_{n\to\infty}A'_n\) does, and in this case \[ A=(a_0+a_1+\cdots+a_{k-1})+A'. \nonumber \] An important principle follows from this.

    We will soon give several conditions concerning convergence of a series \(\sum_{n=k}^\infty a_n\) with nonnegative terms. According to Lemma~, these results apply to series that have at most finitely many negative terms, as long as \(a_n\) is nonnegative and satisfies the conditions for \(n\) sufficiently large.

    When we are interested only in whether \(\sum_{n=k}^\infty a_n\) converges or diverges and not in its sum, we will simply say "\(\sum a_n\) converges" or "\(\sum a_n\) diverges." Lemma~ justifies this convention, subject to the understanding that \(\sum a_n\) stands for \(\sum_{n=k}^\infty a_n\), where \(k\) is an integer such that \(a_n\) is defined for \(n\ge k\). (For example, \[ \sum \frac{1}{(n-6)^2}\mbox{\quad stands for\quad} \sum_{n=k}^\infty\frac{1}{(n-6)^2}, \nonumber \] where \(k\ge7\).) We write \(\sum a_n=\infty\) \((-\infty)\) if \(\sum a_n\) diverges to \(\infty\) \((-\infty)\). Finally, let us agree that \[ \sum_{n=k}^\infty a_n \mbox{\quad and \quad} \sum_{n=k-j}^\infty a_{n+j} \nonumber \] (where we obtain the second expression by shifting the index in the first) both represent the same series.

    The Cauchy convergence criterion for sequences (Theorem~) yields a useful criterion for convergence of series.

    In terms of the partial sums \(\{A_n\}\) of \(\sum a_n\), \[ a_n+a_{n+1}+\cdots+a_m=A_m-A_{n-1}. \nonumber \] Therefore, the stated condition can be written as \[ |A_m-A_{n-1}|<\epsilon\mbox{\quad if\quad} m\ge n\ge N. \nonumber \] Since \(\sum a_n\) converges if and only if \(\{A_n\}\) converges, Theorem~ implies the conclusion.

    Intuitively, Theorem~ means that \(\sum a_n\) converges if and only if arbitrarily long sums \[ a_n+a_{n+1}+\cdots+a_m,\quad m\ge n, \nonumber \] can be made as small as we please by picking \(n\) large enough.

    Letting \(m=n\) in the Cauchy convergence criterion yields the following important corollary of Theorem~.

    It must be emphasized that Corollary~ gives a necessary condition for convergence; that is, \(\sum a_n\) cannot converge unless \(\lim_{n\to\infty} a_n=0\). The condition is not sufficient; \(\sum a_n\) may diverge even if \(\lim_{n\to\infty} a_n=0\). We will see examples below.

    We leave the proof of the following corollary of Theorem~ to you (Exercise~).

    The theory of series \(\sum a_n\) with terms that are nonnegative for sufficiently large \(n\) is simpler than the general theory, since such a series either converges to a finite limit or diverges to \(\infty\), as the next theorem shows.

    Since \(A_n=A_{n-1}+a_n\) and \(a_n\ge0\) \((n\ge k)\), the sequence \(\{A_n\}\) is nondecreasing, so the conclusion follows from Theorem~ and Definition~.

    If \(a_n\ge0\) for sufficiently large \(n\), we will write \(\sum a_n< \infty\) if \(\sum a_n\) converges. This convention is based on Theorem~, which says that such a series diverges only if \(\sum a_n=\infty\). The convention does not apply to series with infinitely many negative terms, because such series may diverge without diverging to \(\infty\); for example, the series \(\sum_{n=0}^\infty (-1)^n\) oscillates, since its partial sums are alternately \(1\) and \(0\).

    If \[ A_n=a_k+a_{k+1}+\cdots+a_n\mbox{\quad and\quad} B_n=b_k+ b_{k+1}+\cdots+b_n,\quad n\ge k, \nonumber \] then, since \(a_n\le b_n\) for \(n\ge k\), \[\begin{equation}\label{eq:4.3.6} A_n\le B_n. \end{equation} \nonumber \] Now we use Theorem~. If \(\sum b_n<\infty\), then \(\{B_n\}\) is bounded above and (4.3.6) implies that \(\{A_n\}\) is also; therefore, \(\sum a_n<\infty\). On the other hand, if \(\sum a_n=\infty\), then \(\{A_n\}\) is unbounded above and (4.3.6) implies that \(\{B_n\}\) is also; therefore, \(\sum b_n=\infty\).

    We leave it to you to verify the remaining implication.

    The comparison test is useful if we have a collection of series with nonnegative terms and known convergence properties. We will now use the comparison test to build such a collection.

    We first observe that \(\int_k^\infty f(x)\,dx<\infty\) holds if and only if \[\begin{equation}\label{eq:4.3.10} \sum_{n=k}^\infty \int^{n+1}_n f(x)\,dx<\infty \end{equation} \nonumber \] (Exercise~), so it is enough to show that \(\sum c_n<\infty\) if and only if (4.3.10) holds. Since \(c_n=f(n)\) and \(f\) is nonincreasing, \[ c_{n+1}=f(n+1)\le f(x)\le f(n)=c_n,\quad n\le x\le n+1,\quad n\ge k. \nonumber \] Therefore, \[ c_{n+1}=\int^{n+1}_n c_{n+1}\,dx\le\int^{n+1}_n f(x)\,dx\le \int^{n+1}_n c_n\,dx=c_n,\quad n\ge k \nonumber \] (Theorem~). From the first inequality and Theorem~ with \(a_n=c_{n+1}\) and \(b_n=\int^{n+1}_n f(x)\,dx\), (4.3.10) implies that \(\sum c_{n+1}<\infty\), which is equivalent to \(\sum c_n<\infty\). From the second inequality and Theorem~ with \(a_n=\int^{n+1}_n f(x)\,dx\) and \(b_n=c_n\), \(\sum c_n<\infty\) implies (4.3.10).
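The bracketing inequality \(c_{n+1}\le\int_n^{n+1}f(x)\,dx\le c_n\) at the heart of this proof is easy to check numerically. In this Python sketch (our illustration; the choice \(f(x)=1/x^2\), whose antiderivative \(-1/x\) we use in closed form, is ours) the same bracket also gives two-sided bounds on a tail sum:

```python
# Integral-test bracketing for the nonincreasing f(x) = 1/x^2:
#   f(n+1) <= integral_n^{n+1} f(x) dx <= f(n).
def f(x):
    return 1.0 / x ** 2

def integral(a, b):          # exact: integral of 1/x^2 from a to b
    return 1.0 / a - 1.0 / b

for n in range(1, 200):
    I = integral(n, n + 1)   # equals 1/(n(n+1)), between 1/(n+1)^2 and 1/n^2
    assert f(n + 1) <= I <= f(n)

# Summing the bracket bounds the tail sum_{n>k} 1/n^2 on both sides.
k = 10
tail = sum(f(n) for n in range(k + 1, 200000))
assert integral(k + 1, 200000) <= tail <= integral(k, 200000)
print(round(tail, 4))
```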

    This example provides an infinite family of series with known convergence properties that can be used as standards for the comparison test.

    Except for the series of Example~, the integral test is of limited practical value, since convergence or divergence of most of the series to which it can be applied can be determined by simpler tests that do not require integration. However, the method used to prove the integral test is often useful for estimating the rate of convergence or divergence of a series. This idea is developed in Exercises~ and .

    The next theorem is often applicable where the integral test is not. It does not require the kind of trickery that we used in Example~.

    If \(\limsup_{n\to\infty} a_n/b_n<\infty\), then \(\{a_n/b_n\}\) is bounded, so there is a constant \(M\) and an integer \(k\) such that \[ a_n\le Mb_n,\quad n\ge k. \nonumber \] Since \(\sum b_n<\infty\), Theorem~ implies that \(\sum (Mb_n)< \infty\). Now \(\sum a_n<\infty\), by the comparison test.

    If \(\liminf_{n\to\infty} a_n/b_n>0\), there is a constant \(m\) and an integer \(k\) such that \[ a_n\ge mb_n,\quad n\ge k. \nonumber \] Since \(\sum b_n=\infty\), Theorem~ implies that \(\sum (mb_n)= \infty\). Now \(\sum a_n=\infty\), by the comparison test.

    The following corollary of Theorem~ is often useful, although it does not apply to the series of Example~.

    It is sometimes possible to determine whether a series with positive terms converges by comparing the ratios of successive terms with the corresponding ratios of a series known to converge or diverge.

    Rewriting the assumed inequality \(a_{n+1}/a_n\le b_{n+1}/b_n\) as \[ \frac{a_{n+1}}{ b_{n+1}}\le \frac{a_n}{ b_n}, \nonumber \] we see that \(\{a_n/b_n\}\) is nonincreasing. Therefore, \(\limsup_{n \to\infty} a_n/b_n<\infty\), and Theorem~ implies that \(\sum a_n<\infty\) if \(\sum b_n<\infty\).

    To prove the second statement, suppose that \(\sum a_n=\infty\). Since \(\{a_n/b_n\}\) is nonincreasing, there is a number \(\rho\) such that \(b_n\ge \rho a_n\) for large \(n\). Since \(\sum (\rho a_n)=\infty\) if \(\sum a_n=\infty\), Theorem~ (with \(a_n\) replaced by \(\rho a_n\)) implies that \(\sum b_n=\infty\).

    We will use this theorem to obtain two other widely applicable tests: the ratio test and Raabe’s test.

    If \[ \limsup_{n\to\infty}\frac{a_{n+1}}{ a_n}<1, \nonumber \] there is a number \(r\) such that \(0<r<1\) and \[ \frac{a_{n+1}}{ a_n}<r \nonumber \] for \(n\) sufficiently large. This can be rewritten as \[ \frac{a_{n+1}}{ a_n}<\frac{r^{n+1}}{ r^n}. \nonumber \] Since \(\sum r^n<\infty\), Theorem~ with \(b_n=r^n\) implies that \(\sum a_n<\infty\).

    If \[ \liminf_{n\to\infty}\frac{a_{n+1}}{ a_n}>1, \nonumber \] there is a number \(r\) such that \(r>1\) and \[ \frac{a_{n+1}}{ a_n}>r \nonumber \] for \(n\) sufficiently large. This can be rewritten as \[ \frac{a_{n+1}}{ a_n}>\frac{r^{n+1}}{ r^n}. \nonumber \] Since \(\sum r^n=\infty\), Theorem~ with \(a_n\) replaced by \(r^n\) and \(b_n\) by \(a_n\) implies that \(\sum a_n=\infty\).

    To see that no conclusion can be drawn if holds, consider \[ \sum a_n=\sum\frac{1}{ n^p}. \nonumber \] This series converges if \(p>1\) and diverges if \(p\le1\); however, \[ \limsup_{n\to\infty}\frac{a_{n+1}}{ a_n}=\liminf_{n \to\infty} \frac{a_{n+1}}{ a_n}=1 \nonumber \] for every \(p\).
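As a numerical sanity check (a Python sketch of my own; the text contains no code), the ratio \(a_{n+1}/a_n\) cleanly detects geometric decay but tends to \(1\) for every \(p\)-series, whether convergent or divergent, which is exactly why the ratio test is inconclusive there.

```python
# Ratio a_{n+1}/a_n for three series: one geometric, two p-series.
def ratio(a, n):
    return a(n + 1) / a(n)

geo = lambda n: 0.5 ** n       # ratio is exactly 1/2 < 1: converges
p2  = lambda n: 1.0 / n ** 2   # convergent p-series, yet ratio -> 1
p1  = lambda n: 1.0 / n        # divergent harmonic series, ratio -> 1

r_geo = ratio(geo, 50)
r_p2 = ratio(p2, 10**6)
r_p1 = ratio(p1, 10**6)
```

Both \(p\)-series give a ratio indistinguishable from \(1\) for large \(n\), so the limit carries no information about convergence in that case.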

    The following corollary of the ratio test is the familiar ratio test from calculus.

    The ratio test does not imply that \(\sum a_n<\infty\) if merely \[\begin{equation}\label{eq:4.3.14} \frac{a_{n+1}}{ a_n}<1 \end{equation} \nonumber \] for large \(n\), since this could occur with \(\lim_{n\to\infty}a_{n+1}/a_n=1\), in which case the test is inconclusive. However, the next theorem shows that \(\sum a_n< \infty\) if is replaced by the stronger condition that \[ \frac{a_{n+1}}{ a_n}\le1-\frac{p}{ n} \nonumber \] for some \(p>1\) and large \(n\). It also shows that \(\sum a_n=\infty\) if \[ \frac{a_{n+1}}{ a_n}\ge1-\frac{q}{ n} \nonumber \] for some \(q<1\) and large \(n\).

    We need the inequality \[\begin{equation}\label{eq:4.3.15} \frac{1}{(1+x)^p}>1-px,\quad x>0,\ p>0. \end{equation} \nonumber \] This follows from Taylor’s theorem (Theorem~), which implies that \[ \frac{1}{(1+x)^p}=1-px+\frac{1}{2}\frac{p(p+1)}{(1+c)^{p+2}}x^2, \nonumber \] where \(0<c<x\). (Verify.) Since the last term is positive if \(p>0\), this implies .

    Now suppose that \(M<-p<-1\). Then there is an integer \(k\) such that \[ n\left(\frac{a_{n+1}}{ a_n}-1\right)<-p,\quad n\ge k, \nonumber \] so \[ \frac{a_{n+1}}{ a_n}<1-\frac{p}{ n},\quad n\ge k. \nonumber \] Hence, \[ \frac{a_{n+1}}{ a_n}<\frac{1}{(1+1/n)^p},\quad n\ge k, \nonumber \] as can be seen by letting \(x=1/n\) in . From this, \[ \frac{a_{n+1}}{ a_n}<\frac{1}{(n+1)^p}\bigg/\frac{1}{ n^p},\quad n\ge k. \nonumber \] Since \(\sum 1/n^p<\infty\) if \(p>1\), Theorem~ implies that \(\sum a_n<\infty\).

    Here we need the inequality \[\begin{equation}\label{eq:4.3.16} (1-x)^q<1-qx,\quad 0<x<1,\quad 0<q<1. \end{equation} \nonumber \] This also follows from Taylor’s theorem, which implies that \[ (1-x)^q=1-qx+q(q-1)(1-c)^{q-2}\frac{x^2}{2}, \nonumber \] where \(0<c<x\). Since the last term is negative if \(0<q<1\), this implies .

    Now suppose that \(-1<-q<m\). Then there is an integer \(k\) such that \[ n\left(\frac{a_{n+1}}{ a_n}-1\right)>-q,\quad n\ge k, \nonumber \] so \[ \frac{a_{n+1}}{ a_n}\ge1-\frac{q}{ n},\quad n\ge k. \nonumber \] If \(q\le0\), then \(\sum a_n=\infty\), by Corollary~. Hence, we may assume that \(0<q<1\), so the last inequality implies that \[ \frac{a_{n+1}}{ a_n}>\left(1-\frac{1}{ n}\right)^q,\quad n\ge k, \nonumber \] as can be seen by setting \(x=1/n\) in . Hence, \[ \frac{a_{n+1}}{ a_n}>\frac{1}{ n^q}\bigg/\frac{1}{(n-1)^q},\quad n\ge k. \nonumber \] Since \(\sum 1/n^q=\infty\) if \(q<1\), Theorem~ implies that \(\sum a_n=\infty\).
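The quantity \(n(a_{n+1}/a_n-1)\) that drives Raabe's test is easy to probe numerically. The following Python sketch (my own illustration, not from the text) evaluates it for \(a_n=1/n^p\), where it tends to \(-p\); the test therefore separates \(p=2\) (convergent) from \(p=1/2\) (divergent), even though the plain ratio test is inconclusive for both.

```python
# Raabe's quantity n(a_{n+1}/a_n - 1), which tends to -p for a_n = 1/n^p.
def raabe(a, n):
    return n * (a(n + 1) / a(n) - 1.0)

a2 = lambda n: n ** -2.0   # p = 2: Raabe limit -2 < -1, so convergence
ah = lambda n: n ** -0.5   # p = 1/2: Raabe limit -1/2 > -1, so divergence

r2 = raabe(a2, 10**6)
rh = raabe(ah, 10**6)
```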

    The next theorem, which will be useful when we study power series (Section~4.5), concludes our discussion of series with nonnegative terms.

    If \(\limsup_{n\to\infty}a^{1/n}_n<1\), there is an \(r\) such that \(0<r<1\) and \(a^{1/n}_n<r\) for large \(n\). Therefore, \(a_n<r^n\) for large \(n\). Since \(\sum r^n<\infty\), the comparison test implies that \(\sum a_n<\infty\).

    If \(\limsup_{n\to\infty} a^{1/n}_n>1\), then \(a^{1/n}_n>1\) for infinitely many values of \(n\), so \(\sum a_n=\infty\), by Corollary~.
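As a quick numerical illustration of Cauchy's root test (a Python sketch of my own; the example series \(a_n=n^2/2^n\) is my choice, not the text's), \(a_n^{1/n}=n^{2/n}/2\to 1/2<1\), so the series converges; its sum happens to be \(6\).

```python
# Root test applied to a_n = n^2 / 2^n: the n-th root approaches 1/2.
a = lambda n: n ** 2 / 2.0 ** n

root = a(200) ** (1.0 / 200)                 # close to 1/2 for large n
partial = sum(a(n) for n in range(1, 200))   # sum n^2 x^n = x(1+x)/(1-x)^3 = 6 at x = 1/2
```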

    We now drop the assumption that the terms of \(\sum a_n\) are nonnegative for large \(n\). In this case, \(\sum a_n\) may converge in two quite different ways. The first is defined as follows.

    Any test for convergence of a series with nonnegative terms can be used to test an arbitrary series \(\sum a_n\) for absolute convergence by applying it to \(\sum |a_n|\). We used the comparison test this way in Examples~ and .

    The proof of the next theorem is analogous to the proof of Theorem~. We leave it to you (Exercise~).

    For example, Theorem~ implies that \[ \sum\frac{\sin n\theta}{ n^p} \nonumber \] converges if \(p>1\), since it then converges absolutely (Example~).

    The converse of Theorem~ is false; a series may converge without converging absolutely. We say then that the series converges conditionally, or is conditionally convergent; thus, \(\sum (-1)^n/n^p\) converges conditionally if \(0<p\le1\).

    Except for Theorem~ and Corollary~, the convergence tests we have studied so far apply only to series whose terms have the same sign for large \(n\). The following theorem does not require this. It is analogous to Dirichlet’s test for improper integrals (Theorem~).

    The proof is similar to the proof of Dirichlet’s test for integrals. Define \[ B_n=b_k+b_{k+1}+\cdots+b_n,\quad n\ge k \nonumber \] and consider the partial sums of \(\sum_{n=k}^\infty a_nb_n\): \[\begin{equation}\label{eq:4.3.20} S_n=a_kb_k+a_{k+1}b_{k+1}+\cdots+a_nb_n,\quad n\ge k. \end{equation} \nonumber \] By substituting \[ b_k=B_k\mbox{\quad and\quad} b_n=B_n-B_{n-1},\quad n\ge k+1, \nonumber \] into , we obtain \[ S_n=a_kB_k+a_{k+1}(B_{k+1}-B_k)+\cdots+a_n(B_n-B_{n-1}), \nonumber \] which we rewrite as \[\begin{equation}\label{eq:4.3.21} \begin{array}{rcl} S_n\ar=(a_k-a_{k+1})B_k+(a_{k+1}-a_{k+2})B_{k+1}+\cdots\\ \ar{}+\,(a_{n-1}-a_n)B_{n-1}+a_nB_n. \end{array} \end{equation} \nonumber \]

    (The procedure that led from to is called summation by parts. It is analogous to integration by parts.) Now can be viewed as \[\begin{equation} \label{eq:4.3.22} S_n=T_{n-1}+a_nB_n, \end{equation} \nonumber \] where \[ T_{n-1}=(a_k-a_{k+1})B_k+(a_{k+1}-a_{k+2}) B_{k+1}+\cdots+(a_{n-1}-a_n)B_{n-1}; \nonumber \] that is, \(\{T_n\}\) is the sequence of partial sums of the series \[\begin{equation}\label{eq:4.3.23} \sum_{j=k}^\infty (a_j-a_{j+1})B_j. \end{equation} \nonumber \] Since \[ |(a_j-a_{j+1})B_j|\le M|a_j-a_{j+1}| \nonumber \] from , the comparison test and imply that the series converges absolutely. Theorem~ now implies that \(\{T_n\}\) converges. Let \(T=\lim_{n\to\infty}T_n\). Since \(\{B_n\}\) is bounded and \(\lim_{n\to \infty}a_n=0\), we infer from that \[ \lim_{n\to\infty} S_n=\lim_{n\to\infty}T_{n-1}+\lim_{n\to \infty}a_nB_n=T+0=T. \nonumber \] Therefore, \(\sum a_nb_n\) converges.

    \begin{example}\rm To apply Dirichlet’s test to \[ \sum_{n=2}^\infty\frac{\sin n\theta}{ n+(-1)^n},\quad \theta\ne k\pi \mbox{\quad ($k=$ integer)}, \nonumber \] we take \[ a_n=\frac{1}{ n+(-1)^n}\mbox{\quad and\quad} b_n=\sin n\theta. \nonumber \] Then \(\lim_{n\to\infty}a_n=0\), and \[ |a_{n+1}-a_n|<\frac{3}{ n(n-1)} \nonumber \] (verify), so \[ \sum|a_{n+1}-a_n|<\infty. \nonumber \] Now \[ B_n=\sin2\theta+\sin3\theta+\cdots+\sin n\theta. \nonumber \] To show that \(\{B_n\}\) is bounded, we use the trigonometric identity \[ \sin r\theta=\frac{\cos\left(r-\frac{1}{2}\right)\theta-\cos\left(r+\frac{1}{ 2}\right)\theta}{2\sin(\theta/2)},\quad\theta\ne2k\pi, \nonumber \]
    to write \[\begin{eqnarray*} B_n\ar=\frac{(\cos\frac{3}{2}\theta-\cos\frac{5}{2}\theta)+(\cos\frac{5}{2} \theta-\cos\frac{7}{2}\theta)+\cdots+\left(\cos\left(n-\frac{1}{2} \right)\theta-\cos(n+\frac{1}{2})\theta\right)}{2\sin(\theta/2)}\\[2\jot] \ar=\frac{\cos\frac{3}{2}\theta-\cos(n+\frac{1}{2})\theta}{2\sin (\theta/2)}, \end{eqnarray*} \nonumber \] which implies that \[ |B_n|\le\left|\frac{1}{\sin(\theta/2)}\right|,\quad n\ge 2. \nonumber \] Since \(\{a_n\}\) and \(\{b_n\}\) satisfy the hypotheses of Dirichlet’s theorem, \(\sum a_nb_n\) converges. \end{example}
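The bound \(|B_n|\le 1/|\sin(\theta/2)|\) obtained above can be verified numerically. The following Python sketch (my own check, not part of the text) accumulates \(B_n=\sin 2\theta+\cdots+\sin n\theta\) for \(\theta=1\) and confirms that the partial sums never exceed the bound.

```python
# Numerical check that the partial sums B_n = sin(2*theta) + ... + sin(n*theta)
# stay within the bound 1/|sin(theta/2)| derived via the telescoping identity.
import math

theta = 1.0  # any theta that is not an integer multiple of pi
bound = 1.0 / abs(math.sin(theta / 2))

B, worst = 0.0, 0.0
for n in range(2, 5001):
    B += math.sin(n * theta)
    worst = max(worst, abs(B))
```

The boundedness of \(\{B_n\}\) is exactly the hypothesis that Dirichlet's test needs; the partial sums oscillate but never grow.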

    Dirichlet’s test takes a simpler form if \(\{a_n\}\) is nonincreasing, as follows.

    If \(a_{n+1}\le a_n\), then \[ \sum_{n=k}^m |a_{n+1}-a_n|=\sum_{n=k}^m (a_n-a_{n+1})=a_k-a_{m+1}. \nonumber \] Since \(\lim_{m\to\infty} a_{m+1}=0\), it follows that \[ \sum_{n=k}^\infty |a_{n+1}-a_n|=a_k<\infty. \nonumber \] Therefore, the hypotheses of Dirichlet’s test are satisfied, so \(\sum a_nb_n\) converges.

    The alternating series test from calculus follows easily from Abel’s test.

    Let \(b_n=(-1)^n\); then \(\{|B_n|\}\) is a sequence of zeros and ones and therefore bounded. The conclusion now follows from Abel’s test.
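As a numerical companion to the alternating series test (a Python sketch of my own; the text does not single out this example here), the alternating harmonic series \(\sum (-1)^{n+1}/n\) converges conditionally to \(\ln 2\), with the error of each partial sum bounded by the first omitted term.

```python
# Partial sums of the alternating harmonic series approach ln 2.
import math

def partial(N):
    return sum((-1) ** (n + 1) / n for n in range(1, N + 1))

s = partial(10**4)
err = abs(s - math.log(2))   # bounded by the first omitted term, 1/(N+1)
```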

    The terms of a finite sum can be grouped by inserting parentheses arbitrarily. For example, \[ (1+7)+(6+5)+4=(1+7+6)+(5+4)=(1+7)+(6+5+4). \nonumber \] According to the next theorem, the same is true of an infinite series that converges or diverges to \(\pm\infty\).

    If \(T_r\) is the \(r\)th partial sum of \(\sum_{j=1}^\infty b_j\) and \(A_n\) is the \(n\)th partial sum of \(\sum_{s=k}^\infty a_s\), then \[\begin{eqnarray*} T_r\ar=b_1+b_2+\cdots+b_r\\ \ar=(a_1+\cdots+a_{n_1})+(a_{n_1+1}+\cdots+a_{n_2})+\cdots+ (a_{n_{r-1}+1}+\cdots+a_{n_r})\\ \ar=A_{n_r}. \end{eqnarray*} \nonumber \] Thus, \(\{T_r\}\) is a subsequence of \(\{A_n\}\), so \(\lim_{r\to\infty} T_r=\lim_{n\to\infty}A_n=A\) by Theorem~.

    A finite sum is not changed by rearranging its terms; thus, \[ 1+3+7=1+7+3=3+1+7=3+7+1=7+1+3=7+3+1. \nonumber \] This is not true of all infinite series. Let us say that \(\sum b_n\) is a rearrangement of \(\sum a_n\) if the two series have the same terms, written in possibly different orders. Since the partial sums of the two series may form entirely different sequences, there is no apparent reason to expect them to exhibit the same convergence properties, and in general they do not.

    We are interested in what happens if we rearrange the terms of a convergent series. We will see that every rearrangement of an absolutely convergent series has the same sum, but that conditionally convergent series fail, spectacularly, to have this property.

    Let \[ \overline{A}_n=|a_1|+|a_2|+\cdots+|a_n|\mbox{\quad and\quad} \overline{B}_n=|b_1|+|b_2|+\cdots+|b_n|. \nonumber \] For each \(n\ge1\), there is an integer \(k_n\) such that \(b_1\), \(b_2\), \dots, \(b_n\) are included among \(a_1\), \(a_2\), \dots, \(a_{k_n}\), so \(\overline{B}_n\le\overline{A}_{k_n}\). Since \(\{\overline{A}_n\}\) is bounded, so is \(\{\overline{B}_n\}\), and therefore \(\sum |b_n|<\infty\) (Theorem~).

    Now let \[\begin{eqnarray*} A_n\ar=a_1+a_2+\cdots+a_n,\quad B_n=b_1+b_2+\cdots+ b_n,\\ A\ar=\sum_{n=1}^\infty a_n,\mbox{\quad and\quad} B=\sum_{n=1}^\infty b_n. \end{eqnarray*} \nonumber \]

    We must show that \(A=B\). Suppose that \(\epsilon>0\). From Cauchy’s convergence criterion for series and the absolute convergence of \(\sum a_n\), there is an integer \(N\) such that \[ |a_{N+1}|+|a_{N+2}|+\cdots+|a_{N+k}|<\epsilon,\quad k\ge1. \nonumber \] Choose \(N_1\) so that \(a_1\), \(a_2\), \dots, \(a_N\) are included among \(b_1\), \(b_2\), \dots, \(b_{N_1}\). If \(n\ge N_1\), then \(A_n\) and \(B_n\) both include the terms \(a_1\), \(a_2\), \dots, \(a_N\), which cancel on subtraction; thus, \(|A_n-B_n|\) is dominated by the sum of the absolute values of finitely many terms from \(\sum a_n\) with subscripts greater than \(N\). Since every such sum is less than~\(\epsilon\),
    \[ |A_n-B_n|<\epsilon\mbox{\quad if\quad} n\ge N_1. \nonumber \] Therefore, \(\lim_{n\to\infty}(A_n-B_n)=0\) and \(A=B\).

    To investigate the consequences of rearranging a conditionally convergent series, we need the next theorem, which is itself important.

    If both series in converge, then \(\sum a_n\) converges absolutely, while if one converges and the other diverges, then \(\sum a_n\) diverges to \(\infty\) or \(-\infty\). Hence, both must diverge.

    The next theorem implies that a conditionally convergent series can be rearranged to produce a series that converges to any given number, diverges to \(\pm\infty\), or oscillates.

    We consider the case where \(\mu\) and \(\nu\) are finite and leave the other cases to you (Exercise~). We may ignore any zero terms that occur in \(\sum_{n=1}^\infty a_n\). For convenience, we denote the positive terms by \(P=\{\alpha_i\}_1^\infty\) and the negative terms by \(Q=\{-\beta_j\}_1^\infty\). We construct the sequence \[\begin{equation} \label{eq:4.3.26} \{b_n\}_1^\infty=\{\alpha_1, \dots,\alpha_{m_1},-\beta_1, \dots,-\beta_{n_1}, \alpha_{m_1+1}, \dots,\alpha_{m_2},-\beta_{n_1+1}, \dots,-\beta_{n_2}, \dots\}, \end{equation} \nonumber \]

    with segments chosen alternately from \(P\) and \(Q\). Let \(m_0=n_0=0\). If \(k\ge1\), let \(m_k\) and \(n_k\) be the smallest integers such that \(m_k>m_{k-1}\), \(n_k>n_{k-1}\), \[ \sum_{i=1}^{m_k}\alpha_i-\sum_{j=1}^{n_{k-1}}\beta_j\ge\nu, \mbox{\quad and \quad} \sum_{i=1}^{m_k}\alpha_i-\sum_{j=1}^{n_k}\beta_j\le\mu. \nonumber \] Theorem~ implies that this construction is possible: since \(\sum \alpha_i=\sum\beta_j=\infty\), we can choose \(m_k\) and \(n_k\) so that \[ \sum_{i=m_{k-1}}^{m_k}\alpha_i\mbox{\quad and\quad} \sum_{j=n_{k-1}}^{n_k}\beta_j \nonumber \] are as large as we please, no matter how large \(m_{k-1}\) and \(n_{k-1}\) are (Exercise~). Since \(m_k\) and \(n_k\) are the smallest integers with the specified properties, \[\begin{eqnarray} \nu\le B_{m_k+n_{k-1}}\ar<\nu+\alpha_{m_k},\quad k\ge2, \label{eq:4.3.27}\\ \arraytext{and}\nonumber\\ \mu-\beta_{n_k}\ar<B_{m_k+n_k}\le\mu,\quad k\ge2. \label{eq:4.3.28} \end{eqnarray} \nonumber \] From , \(b_n<0\) if \(m_k+n_{k-1}<n\le m_k+n_k\), so \[\begin{equation}\label{eq:4.3.29} B_{m_k+n_k}\le B_n\le B_{m_k+n_{k-1}},\quad m_k+n_{k-1}\le n\le m_k+n_k, \end{equation} \nonumber \] while \(b_n>0\) if \(m_k+n_k< n\le m_{k+1}+n_k\), so \[\begin{equation}\label{eq:4.3.30} B_{m_k+n_k}\le B_n\le B_{m_{k+1}+n_k},\quad m_k+n_k\le n\le m_{k+1}+n_k. \end{equation} \nonumber \] Because of and , and imply that \[\begin{eqnarray} \mu-\beta_{n_k}\ar<B_n<\nu+\alpha_{m_k},\quad m_k+n_{k-1}\le n\le m_k+n_k, \label{eq:4.3.31} \\ \arraytext{and}\nonumber\\ \mu-\beta_{n_k}\ar<B_n<\nu+\alpha_{m_{k+1}},\quad m_k+n_k\le n\le m_{k+1}+n_k. \label{eq:4.3.32} \end{eqnarray} \nonumber \]

    From the first inequality of , \(B_n\ge \nu\) for infinitely many values of \(n\). However, since \(\lim_{i\to\infty}\alpha_i=0\), the second inequalities in and imply that if \(\epsilon>0\) then \(B_n>\nu+ \epsilon\) for only finitely many values of \(n\). Therefore, \(\limsup_{n\to\infty} B_n=\nu\). From the second inequality in , \(B_n\le \mu\) for infinitely many values of \(n\). However, since \(\lim_{j\to\infty}\beta_j=0\), the first inequalities in and imply that if \(\epsilon>0\) then \(B_n<\mu-\epsilon\) for only finitely many values of \(n\). Therefore, \(\liminf_{n\to\infty} B_n=\mu\).
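The construction above can be imitated numerically. The following Python sketch (my own code, illustrating only the special case \(\mu=\nu\) of the theorem) rearranges the alternating harmonic series — whose positive and negative parts both diverge — so that its partial sums converge to an arbitrarily chosen target, here \(1.5\).

```python
# Riemann-style rearrangement of the conditionally convergent series
# 1 - 1/2 + 1/3 - ... : add positive terms 1, 1/3, 1/5, ... until the
# partial sum exceeds the target, then negative terms -1/2, -1/4, ...
# until it drops below, and repeat.
target = 1.5
pos = iter(range(1, 10**7, 2))   # denominators of positive terms
neg = iter(range(2, 10**7, 2))   # denominators of negative terms

s, terms = 0.0, 0
while terms < 10**5:
    if s <= target:
        s += 1.0 / next(pos)
    else:
        s -= 1.0 / next(neg)
    terms += 1
```

Because the unused terms shrink to zero, the overshoot at each crossing shrinks as well, and the rearranged partial sums home in on the target.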

    The product of two finite sums can be written as another finite sum: for example, \[\begin{eqnarray*} (a_0+a_1+a_2)(b_0+b_1+b_2)\ar=a_0b_0+a_0b_1+a_0b_2\\ \ar{}+a_1b_0+a_1b_1+a_1b_2\\ \ar{}+a_2b_0+a_2b_1+a_2b_2, \end{eqnarray*} \nonumber \]

    where the sum on the right contains each product \(a_ib_j\) \((i,j=0,1,2)\) exactly once. These products can be rearranged arbitrarily without changing their sum. The corresponding situation for series is more complicated.


    Given two series \[ \sum_{n=0}^\infty a_n\mbox{\quad and\quad}\sum_{n=0}^\infty b_n \nonumber \] (because of applications in Section~4.5, it is convenient here to start the summation index at zero), we can arrange all possible products \(a_ib_j\) \((i,j\ge0)\) in a two-dimensional array: \[\begin{equation}\label{eq:4.3.33} \begin{array}{ccccc} a_0b_0&a_0b_1&a_0b_2&a_0b_3&\cdots\\ a_1b_0&a_1b_1&a_1b_2&a_1b_3&\cdots\\ a_2b_0&a_2b_1&a_2b_2&a_2b_3&\cdots\\ a_3b_0&a_3b_1&a_3b_2&a_3b_3&\cdots\\ \vdots&\vdots&\vdots&\vdots\end{array} \end{equation} \nonumber \]
    where the subscript on \(a\) is constant in each row and the subscript on \(b\) is constant in each column. Any sensible definition of the product \[ \left(\sum_{n=0}^\infty a_n\right)\left(\sum_{n=0}^\infty b_n\right) \nonumber \] clearly must involve every product in this array exactly once; thus, we might define the product of the two series to be the series \(\sum_{n=0}^\infty p_n\), where \(\{p_n\}\) is a sequence obtained by ordering the products in according to some method that chooses every product exactly once. One way to do this is indicated by \[\begin{equation}\label{eq:4.3.34} \begin{array}{cccccccc} a_0b_0&\rightarrow&a_0b_1&{}&a_0b_2&\rightarrow&a_0b_3 &\cdots\\ {}&{}&\downarrow&{}&\uparrow&{}&\downarrow\\ a_1b_0&\leftarrow&a_1b_1&{}&a_1b_2&{}&a_1b_3&\cdots\\ \downarrow&{}&{}&{}&\uparrow&{}&\downarrow\\ a_2b_0&\rightarrow&a_2b_1&\rightarrow&a_2b_2&{}&a_2b_3&\cdots \\ {}&{}&{}&{}&{}&{}&\downarrow\\ a_3b_0&\leftarrow&a_3b_1&\leftarrow&a_3b_2&\leftarrow&a_3b_3& \cdots\\ \downarrow\\ \vdots&{}&\vdots&{}&\vdots&{}&\vdots\\ \end{array} \end{equation} \nonumber \]
    and another by \[\begin{equation}\label{eq:4.3.35} \begin{array}{cccccccccc} a_0b_0&\rightarrow&a_0b_1&{}&a_0b_2&\rightarrow&{} a_0b_3&{}&a_0b_4&\cdots\\ {}&\swarrow&{}&\nearrow&{}&\swarrow&{}&\nearrow\\ a_1b_0&{}&a_1b_1&{}&a_1b_2&{}&a_1b_3&{}&\cdots\\ \downarrow&\nearrow&{}&\swarrow&{}&\nearrow\\ a_2b_0&{}&a_2b_1&{}&a_2b_2&{}&a_2b_3&{}&\cdots\\ {}&\swarrow&{}&\nearrow\\ a_3b_0&{}&a_3b_1&{}&a_3b_2&{}&a_3b_3& {}&\cdots\\ \downarrow&\nearrow\\ a_4b_0&{}&\vdots&{}&\vdots&{}&\vdots\\ \end{array} \end{equation} \nonumber \] There are infinitely many others, and to each corresponds a series that we might consider to be the product of the given series. This raises a question: If \[ \sum_{n=0}^\infty a_n=A\mbox{\quad and\quad}\sum_{n=0}^\infty b_n= B \nonumber \] where \(A\) and \(B\) are finite, does every product series \(\sum_{n=0}^\infty p_n\) constructed by ordering the products in converge to \(AB\)?

    The next theorem tells us when the answer is yes.

    First, let \(\{p_n\}\) be the sequence obtained by arranging the products \(\{a_ib_j\}\) according to the scheme indicated in , and define \[ \begin{array}{ll} A_n=a_0+a_1+\cdots+a_n,& \overline{A}_n=|a_0|+|a_1|+\cdots+|a_n|,\\[2\jot] B_n=b_0+b_1+\cdots+b_n,& \overline{B}_n=|b_0|+|b_1|+\cdots+|b_n|,\\[2\jot] P_n\hskip.1em=p_0+p_1+\cdots+p_n,&\overline{P}_n\hskip.1em=|p_0|+|p_1|+\cdots+|p_n|. \end{array} \nonumber \] From , we see that \[ P_0=A_0B_0,\quad P_3=A_1B_1,\quad P_8=A_2B_2, \nonumber \] and, in general, \[\begin{equation}\label{eq:4.3.36} P_{(m+1)^2-1}=A_mB_m. \end{equation} \nonumber \]

    Similarly, \[\begin{equation}\label{eq:4.3.37} \overline{P}_{(m+1)^2-1}=\overline{A}_m\overline{B}_m. \end{equation} \nonumber \] If \(\sum |a_n|<\infty\) and \(\sum |b_n|<\infty\), then \(\{\overline{A}_m\overline{B}_m\}\) is bounded and, since \(\overline{P}_m\le\overline{P}_{(m+1)^2-1}\), implies that \(\{\overline{P}_m\}\) is bounded. Therefore, \(\sum |p_n| <\infty\), so \(\sum p_n\) converges. Now \[ \begin{array}{rcll} \dst{\sum ^\infty_{n=0}p_n}\ar=\dst{\lim_{n\to\infty}P_n}&\mbox{(by definition)}\\[2\jot] \ar=\dst{\lim_{m\to\infty} P_{(m+1)^2-1}}&\mbox{(by Theorem~\ref{thmtype:4.2.2})}\\[2\jot] \ar=\dst{\lim_{m\to\infty} A_mB_m}&\mbox{(from \eqref{eq:4.3.36})}\\[2\jot] \ar=\dst{\left(\lim_{m\to\infty} A_m\right)\left(\lim_{m\to\infty}B_m\right)} &\mbox{(by Theorem~\ref{thmtype:4.1.8})}\\[2\jot] \ar=AB. \end{array} \nonumber \] Since any other ordering of the products in produces a rearrangement \(\sum_{n=0}^\infty q_n\) of the absolutely convergent series \(\sum_{n=0}^\infty p_n\), Theorem~ implies that \(\sum |q_n|<\infty\) for every such ordering and that \(\sum_{n=0}^\infty q_n=AB\). This shows that the stated condition is sufficient.

    For necessity, again let \(\sum_{n=0}^\infty p_n\) be obtained from the ordering indicated in , and suppose that \(\sum_{n=0}^\infty p_n\) and all its rearrangements converge to \(AB\). Then \(\sum p_n\) must converge absolutely, by Theorem~. Therefore, \(\{\overline{P}_{m^2-1}\}\) is bounded, and implies that \(\{\overline{A}_m\}\) and \(\{\overline{B}_m\}\) are bounded. (Here we need the assumption that neither \(\sum a_n\) nor \(\sum b_n\) consists entirely of zeros. Why?) Therefore, \(\sum |a_n|<\infty\) and \(\sum |b_n|<\infty\).

    The following definition of the product of two series is due to Cauchy. We will see the importance of this definition in Section~4.5.


    Henceforth, \(\left(\sum_{n=0}^\infty a_n\right)\left(\sum_{n=0}^\infty b_n\right)\) should be interpreted as the Cauchy product. Notice that \[ \left(\sum_{n=0}^\infty a_n\right)\left(\sum_{n=0}^\infty b_n\right)= \left(\sum_{n=0}^\infty b_n\right)\left(\sum_{n=0}^\infty a_n\right), \nonumber \] and that the Cauchy product of two series is defined even if one or both diverge. In the case where both converge, it is natural to inquire about the relationship between the product of their sums and the sum of the Cauchy product. Theorem~ yields a partial answer to this question, as follows.

    Let \(C_n\) be the \(n\)th partial sum of the Cauchy product; that is, \[ C_n=c_0+c_1+\cdots+c_n \nonumber \] (see ). Let \(\sum_{n=0}^\infty p_n\) be the series obtained by ordering the products \(\{a_ib_j\}\) according to the scheme indicated in , and define \(P_n\) to be its \(n\)th partial sum; thus, \[ P_n=p_0+p_1+\cdots+p_n. \nonumber \] Inspection of shows that \(c_n\) is the sum of the \(n+1\) terms connected by the diagonal arrows. Therefore, \(C_n=P_{m_n}\), where \[ m_n=1+2+\cdots+(n+1)-1=\frac{n(n+3)}{2}. \nonumber \] From Theorem~, \(\lim_{n\to\infty} P_{m_n}=AB\), so \(\lim_{n\to\infty} C_n=AB\). To see that \(\sum |c_n|<\infty\), we observe that \[ \sum_{r=0}^n |c_r|\le\sum_{s=0}^{m_n} |p_s| \nonumber \] and recall that \(\sum |p_s|<\infty\), from Theorem~.
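The Cauchy product is easy to compute directly. The following Python sketch (my own example; the choice of two geometric series is an assumption, not the text's) forms \(c_n=\sum_{k=0}^n a_kb_{n-k}\) for \(a_n=b_n=(1/2)^n\), where each factor sums to \(2\), so the product series should sum to \(4\).

```python
# Cauchy product of sum (1/2)^n with itself: c_n = (n+1)/2^n, sum = 2*2 = 4.
a = [0.5 ** n for n in range(60)]
c = [sum(a[k] * a[n - k] for k in range(n + 1)) for n in range(60)]
C = sum(c)   # truncation of the product series; tail is negligible
```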

    The Cauchy product of two series may converge under conditions weaker than those of Theorem~. If one series converges absolutely and the other converges conditionally, the Cauchy product of the two series converges to the product of the two sums (Exercise~). If two series and their Cauchy product all converge, then the sum of the Cauchy product equals the product of the sums of the two series (Exercise~). However, the next example shows that the Cauchy product of two conditionally convergent series may diverge.

    Until now we have considered sequences and series of constants. Now we turn our attention to sequences and series of real-valued functions defined on subsets of the reals. Throughout this section, ``subset'' means ``nonempty subset.''

    If \(F_k\), \(F_{k+1}\), \dots, \(F_n\), \dots\ are real-valued functions defined on a subset \(D\) of the reals, we say that \(\{F_n\}\) is an infinite sequence (or simply a sequence) of functions on \(D\). If the sequence of values \(\{F_n(x)\}\) converges for each \(x\) in some subset \(S\) of \(D\), then \(\{F_n\}\) defines a limit function on \(S\). The formal definition is as follows.

    \begin{example}

    If \(x\) is irrational, then \(x\not\in S_n\) for any \(n\), so \(F_n(x)=0\), \(n\ge 1\). If \(x\) is rational, then \(x\in S_n\) and \(F_n(x)=1\) for all sufficiently large \(n\). Therefore, \[ \lim_{n\to\infty} F_n(x)=F(x)=\left\{\casespace\begin{array}{ll} 1&\mbox{if $x$ is rational},\\ 0&\mbox{if $x$ is irrational}.\end{array}\right. \nonumber \] \end{example}


    The pointwise limit of a sequence of functions may differ radically from the functions in the sequence. In Example~, each \(F_n\) is continuous on \((-\infty,1]\), but \(F\) is not. In Example~, the graph of each \(F_n\) has two triangular spikes with heights that tend to \(\infty\) as \(n\to\infty\), while the graph of \(F\) (the \(x\)-axis) has none. In Example~, each \(F_n\) is integrable, while \(F\) is nonintegrable on every finite interval (Exercise~). There is nothing in Definition~ to preclude these apparent anomalies; although the definition implies that for each \(x_0\) in \(S\), \(F_n(x_0)\) approximates \(F(x_0)\) if \(n\) is sufficiently large, it does not imply that any particular \(F_n\) approximates \(F\) well over all of \(S\). To formulate a definition that does, it is convenient to introduce the notation \[ \|g\|_S=\sup_{x\in S}|g(x)| \nonumber \] and to state the following lemma. We leave the proof to you (Exercise~).


    If \(S=[a,b]\) and \(F\) is the function with graph shown in Figure~, then implies that the graph of \[ y=F_n(x),\quad a\le x\le b, \nonumber \] lies in the shaded band \[ F(x)-\epsilon<y<F(x)+\epsilon,\quad a\le x\le b, \nonumber \] if \(n\ge N\).

    From Definition~, if \(\{F_n\}\) converges uniformly on \(S\), then \(\{F_n\}\) converges uniformly on any subset of \(S\) (Exercise~).
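The distinction between pointwise and uniform convergence can be seen numerically. The following Python sketch (my own illustration; the classic example \(F_n(x)=x^n\) is my choice here) approximates \(\|F_n-F\|_S\) on a grid: on \([0,1)\) the sup norm stays near \(1\), so the convergence to \(0\) is not uniform, while on the subset \([0,0.9]\) the norm collapses.

```python
# Approximate sup norm of F_n(x) = x^n (limit function is 0 on [0, 1)).
def sup_norm(n, xs):
    return max(abs(x ** n) for x in xs)

grid_all = [i / 1000.0 for i in range(1000)]   # grid on [0, 0.999]
grid_sub = [i / 1000.0 for i in range(901)]    # grid on [0, 0.9]

big = sup_norm(50, grid_all)    # stays close to 1: not uniform on [0, 1)
small = sup_norm(50, grid_sub)  # essentially zero: uniform on [0, 0.9]
```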


    The next theorem provides alternative definitions of pointwise and uniform convergence. It follows immediately from Definitions~ and .

    \begin{theorem} Let \(\{F_n\}\) be defined on \(S.\) Then \begin{alist} % (a) \(\{F_n\}\) converges pointwise to \(F\) on \(S\) if and only if there is, for each \(\epsilon>0\) and \(x\in S\), an integer \(N\) \((\)which may depend on \(x\) as well as \(\epsilon)\) such that \[ |F_n(x)-F(x)|<\epsilon\mbox{\quad if\quad}\ n\ge N. \nonumber \]

    % (b) \(\{F_n\}\) converges uniformly to \(F\) on \(S\) if and only if there is for each \(\epsilon>0\) an integer \(N\) \((\)which depends only on \(\epsilon\) and not on any particular \(x\) in \(S)\) such that \[ |F_n(x)-F(x)|<\epsilon\mbox{\quad for all $x$ in $S$ if $n\ge N$}. \nonumber \] \end{alist} \end{theorem}


    The next theorem follows immediately from Theorem~ and Example~.

    The next theorem enables us to test a sequence for uniform convergence without guessing what the limit function might be. It is analogous to Cauchy’s convergence criterion for sequences of constants (Theorem~).

    For necessity, suppose that \(\{F_n\}\) converges uniformly to \(F\) on \(S\). Then, if \(\epsilon>0\), there is an integer \(N\) such that \[ \|F_k-F\|_S<\frac{\epsilon}{2}\mbox{\quad if\quad} k\ge N. \nonumber \] Therefore, \[\begin{eqnarray*} \|F_n-F_m\|_S\ar=\|(F_n-F)+(F-F_m)\|_S\\ \ar\le \|F_n-F\|_S+\|F-F_m\|_S \mbox{\quad (Lemma~\ref{thmtype:4.4.2})\quad}\\ &<&\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon\mbox{\quad if\quad} m, n\ge N. \end{eqnarray*} \nonumber \]

    For sufficiency, we first observe that implies that \[ |F_n(x)-F_m(x)|<\epsilon\mbox{\quad if\quad} n, m\ge N, \nonumber \] for any fixed \(x\) in \(S\). Therefore, Cauchy’s convergence criterion for sequences of constants (Theorem~) implies that \(\{F_n(x)\}\) converges for each \(x\) in \(S\); that is, \(\{F_n\}\) converges pointwise to a limit function \(F\) on \(S\). To see that the convergence is uniform, we write \[\begin{eqnarray*} |F_m(x)-F(x) |\ar=|[F_m(x)-F_n(x)]+[F_n(x)-F(x)]|\\ \ar\le |F_m(x)-F_n(x)|+| F_n(x)-F(x)|\\ \ar\le \|F_m-F_n\|_S+|F_n(x)-F(x)|. \end{eqnarray*} \nonumber \] This and imply that \[\begin{equation} \label{eq:4.4.3} |F_m(x)-F(x)|<\epsilon+|F_n(x)-F(x)|\quad\mbox {if}\quad n, m\ge N. \end{equation} \nonumber \] Since \(\lim_{n\to\infty}F_n(x)=F(x)\), \[ |F_n(x)-F(x)|<\epsilon \nonumber \] for some \(n\ge N\), so implies that \[ |F_m(x)-F(x)|<2\epsilon\mbox{\quad if\quad} m\ge N. \nonumber \] But this inequality holds for all \(x\) in \(S\), so \[ \|F_m-F\|_S\le2\epsilon\mbox{\quad if\quad} m\ge N. \nonumber \] Since \(\epsilon\) is an arbitrary positive number, this implies that \(\{F_n\}\) converges uniformly to \(F\) on~\(S\).

    The next example is similar to Example~.

    We now study properties of the functions of a uniformly convergent sequence that are inherited by the limit function. We first consider continuity.

    Suppose that each \(F_n\) is continuous at \(x_0\). If \(x\in S\) and \(n\ge1\), then \[\begin{equation} \label{eq:4.4.8} \begin{array}{rcl} |F(x)-F(x_0)|\ar\le |F(x)-F_n(x)|+|F_n(x)-F_n(x_0)|+|F_n(x_0)-F(x_0)| \\ \ar\le |F_n(x)-F_n(x_0)|+2\|F_n-F\|_S. \end{array} \end{equation} \nonumber \] Suppose that \(\epsilon>0\). Since \(\{F_n\}\) converges uniformly to \(F\) on \(S\), we can choose \(n\) so that \(\|F_n-F\|_S<\epsilon\). For this fixed \(n\), implies that \[\begin{equation} \label{eq:4.4.9} |F(x)-F(x_0)|<|F_n(x)-F_n(x_0)|+2\epsilon,\quad x\in S. \end{equation} \nonumber \] Since \(F_n\) is continuous at \(x_0\), there is a \(\delta>0\) such that \[ |F_n(x)-F_n(x_0)|<\epsilon\mbox{\quad if\quad} |x-x_0|<\delta, \nonumber \] so, from , \[ |F(x)-F(x_0)|<3\epsilon,\mbox{\quad if\quad} |x-x_0|<\delta. \nonumber \] Therefore, \(F\) is continuous at \(x_0\). Similar arguments apply to the assertions on continuity from the right and left.

    Now we consider the question of integrability of the uniform limit of integrable functions.

    Since \[\begin{eqnarray*} \left|\int_a^b F_n(x)\,dx-\int_a^b F(x)\,dx\right|\ar\le \int_a^b |F_n(x)-F(x)|\,dx\\ \ar\le (b-a)\|F_n-F\|_S \end{eqnarray*} \nonumber \] and \(\lim_{n\to\infty}\|F_n-F\|_S=0\), the conclusion follows.

    In particular, this theorem implies that holds if each \(F_n\) is continuous on \([a,b]\), because then \(F\) is continuous (Corollary~) and therefore integrable on \([a,b]\).

    The hypotheses of Theorem~ are stronger than necessary. We state the next theorem so that you will be better informed on this subject. We omit the proof, which is inaccessible if you skipped Section~3.5, and quite involved in any case.

    Part of this theorem shows that it is not necessary to assume in Theorem~ that \(F\) is integrable on \([a,b]\), since this follows from the uniform convergence. Part is known as the bounded convergence theorem. Neither of the assumptions of can be omitted. Thus, in Example~, where \(\{\|F_n\|_{[0,1]}\}\) is unbounded while \(F\) is integrable on \([0,1]\), \[ \int^1_0 F_n(x)\,dx=1,\quad n\ge1,\mbox{\quad but\quad} \int^1_0 F(x)\,dx=0. \nonumber \] In Example~, where \(\|F_n\|_{[a,b]}=1\) for every finite interval \([a,b]\), \(F_n\) is integrable for all \(n\ge1\), and \(F\) is nonintegrable on every interval (Exercise~).

    After Theorems~ and , it may seem reasonable to expect that if a sequence \(\{F_n\}\) of differentiable functions converges uniformly to \(F\) on \(S\), then \(F'=\lim_{n\to\infty}F'_n\) on \(S\). The next example shows that this is not true in general.

    Since \(F'_n\) is continuous on \([a,b]\), we can write \[\begin{equation} \label{eq:4.4.13} F_n(x)=F_n(x_0)+\int^x_{x_0} F'_n(t)\,dt,\quad a\le x\le b \end{equation} \nonumber \] (Theorem~). Now let \[\begin{eqnarray} L\ar=\lim_{n\to\infty}F_n(x_0)\nonumber\\ \arraytext{and}\nonumber\\ G(x)\ar=\lim_{n\to\infty} F'_n(x).\label{eq:4.4.14} \end{eqnarray} \nonumber \] Since \(F'_n\) is continuous and \(\{F'_n\}\) converges uniformly to \(G\) on \([a,b]\), \(G\) is continuous on \([a,b]\) (Corollary~); therefore, and Theorem~ (with \(F\) and \(F_n\) replaced by \(G\) and \(F_n'\)) imply that \(\{F_n\}\) converges pointwise on \([a,b]\) to the limit function \[\begin{equation} \label{eq:4.4.15} F(x)=L+\int^x_{x_0} G(t)\,dt. \end{equation} \nonumber \] The convergence is actually uniform on \([a,b]\), since subtracting from yields \[\begin{eqnarray*} |F(x)-F_n(x)|\ar\le |L-F_n(x_0)|+\left|\int_{x_0}^x|G(t)-F_n'(t)|\,dt\right|\\ \ar\le |L-F_n(x_0)|+|x-x_0|\,\|G-F_n'\|_{[a,b]}, \end{eqnarray*} \nonumber \] so \[ \|F-F_n\|_{[a,b]}\le|L-F_n(x_0)|+(b-a)\|G-F'_n\|_{[a,b]}, \nonumber \] where the right side approaches zero as \(n\to\infty\).

    Since \(G\) is continuous on \([a,b]\), , , Definition~, and Theorem~ imply and .

    In Section~4.3 we defined the sum of an infinite series of constants as the limit of the sequence of partial sums. The same definition can be applied to series of functions, as follows.

    As for series of constants, the convergence, pointwise or uniform, of a series of functions is not changed by altering or omitting finitely many terms. This justifies adopting the convention that we used for series of constants: when we are interested only in whether a series of functions converges, and not in its sum, we will omit the limits on the summation sign and write simply \(\sum f_n\).

    Theorem~ is easily converted to a theorem on uniform convergence of series, as follows.

    Apply Theorem~ to the partial sums of \(\sum f_n\), observing that \[ f_n+f_{n+1}+\cdots+f_m=F_m-F_{n-1}. \nonumber \]

    Setting \(m=n\) in yields the following necessary, but not sufficient, condition for uniform convergence of series. It is analogous to Corollary~.


    Theorem~ leads immediately to the following important test for uniform convergence of series.



    From Cauchy’s convergence criterion for series of constants, there is for each \(\epsilon>0\) an integer \(N\) such that \[ M_n+M_{n+1}+\cdots+M_m<\epsilon\mbox{\quad if\quad} m\ge n\ge N, \nonumber \] which, because of , implies that \[ \|f_n\|_S+\|f_{n+1}\|_S+\cdots+\|f_m\|_S<\epsilon\mbox{\quad if\quad} m\ge n\ge N. \nonumber \] Lemma~ and Theorem~ imply that \(\sum f_n\) converges uniformly on \(S\).

    Weierstrass’s test is very important, but applicable only to series that actually exhibit a stronger kind of convergence than we have considered so far. We say that \(\sum f_n\) \emph{converges absolutely} on \(S\) if \(\sum |f_n|\) converges pointwise on \(S\), and \emph{converges absolutely uniformly} on \(S\) if \(\sum |f_n|\) converges uniformly on \(S\). We leave it to you (Exercise~) to verify that our proof of Weierstrass’s test actually shows that \(\sum f_n\) converges absolutely uniformly on \(S\). We also leave it to you to show that if a series converges absolutely uniformly on \(S\), then it converges uniformly on \(S\) (Exercise~).
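As a numeric sanity check (our own illustration, not part of the text), one can compare the tail sup-norm of the series \(\sum \sin nx/n^2\) against the corresponding tail of \(\sum M_n\) with \(M_n=1/n^2\); since \(|\sin nx/n^2|\le M_n\) pointwise, the dominated tail bound of Weierstrass's test must hold:

```python
import math

# Illustrative check of Weierstrass's test (names and parameters are ours):
# for sum sin(nx)/n^2 take M_n = 1/n^2.  Since |sin(nx)/n^2| <= M_n and
# sum M_n converges, each tail sup-norm is dominated by the tail of sum M_n.

def tail_sup(N, terms=1000, samples=100):
    """Max over sample points in [0, 2*pi] of |sum_{n=N}^{N+terms-1} sin(nx)/n^2|."""
    worst = 0.0
    for i in range(samples + 1):
        x = 2 * math.pi * i / samples
        s = sum(math.sin(n * x) / n**2 for n in range(N, N + terms))
        worst = max(worst, abs(s))
    return worst

def m_tail(N, terms=1000):
    """Tail sum of the dominating constants M_n = 1/n^2."""
    return sum(1.0 / n**2 for n in range(N, N + terms))

for N in (2, 10, 100):
    assert tail_sup(N) <= m_tail(N) + 1e-12
```

Both tails shrink as \(N\) grows, which is exactly the uniform Cauchy behavior the test guarantees.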

    The next theorem applies to series that converge uniformly, but perhaps not absolutely uniformly, on a set \(S\).

    The proof is similar to the proof of Theorem~. Let \[ G_n=g_k+g_{k+1}+\cdots+g_n, \nonumber \] and consider the partial sums of \(\sum_{n=k}^\infty f_ng_n\): \[\begin{equation} \label{eq:4.4.20} H_n=f_kg_k+f_{k+1}g_{k+1}+\cdots+f_ng_n. \end{equation} \nonumber \] By substituting \[ g_k=G_k\mbox{\quad and\quad} g_n=G_n-G_{n-1},\quad n\ge k+1, \nonumber \] into , we obtain \[ H_n=f_kG_k+f_{k+1}(G_{k+1}-G_k)+\cdots+f_n(G_n-G_{n-1}), \nonumber \] which we rewrite as \[ H_n=(f_k-f_{k+1}) G_k+(f_{k+1}-f_{k+2})G_{k+1}+\cdots+(f_{n-1}-f_n)G_{n-1}+f_nG_n, \nonumber \] or \[\begin{equation} \label{eq:4.4.21} H_n=J_{n-1}+f_nG_n, \end{equation} \nonumber \] where \[\begin{equation} \label{eq:4.4.22} J_{n-1}=(f_k-f_{k+1})G_k+(f_{k+1}-f_{k+2}) G_{k+1}+\cdots+(f_{n-1}-f_n)G_{n-1}. \end{equation} \nonumber \] That is, \(\{J_n\}\) is the sequence of partial sums of the series \[\begin{equation} \label{eq:4.4.23} \sum_{j=k}^\infty (f_j-f_{j+1})G_j. \end{equation} \nonumber \]

    From and the definition of \(G_j\), \[ \left|\sum^m_{j=n}[f_j(x)-f_{j+1}(x)]G_j(x)\right|\le M \sum^m_{j=n}|f_j(x)-f_{j+1}(x)|,\quad x\in S, \nonumber \]

    so \[ \left\|\sum^m_{j=n} (f_j-f_{j+1})G_j\right\|_S\le M\left\|\sum^m_{j=n} |f_j-f_{j+1}|\right\|_S. \nonumber \] Now suppose that \(\epsilon>0\). Since \(\sum (f_j-f_{j+1})\) converges absolutely uniformly on \(S\), Theorem~ implies that there is an integer \(N\) such that the right side of the last inequality is less than \(\epsilon\) if \(m\ge n\ge N\). The same is then true of the left side, so Theorem~ implies that converges uniformly on~\(S\).

    We have now shown that \(\{J_n\}\) as defined in converges uniformly to a limit function \(J\) on \(S\). Returning to , we see that \[ H_n-J=J_{n-1}-J+f_nG_n. \nonumber \] Hence, from Lemma~ and , \[\begin{eqnarray*} \|H_n-J\|_S\ar\le \|J_{n-1}-J\|_S+\|f_n\|_S\|G_n\|_S\\ \ar\le \|J_{n-1}-J\|_S+M\|f_n\|_S. \end{eqnarray*} \nonumber \] Since \(\{J_{n-1}-J\}\) and \(\{f_n\}\) converge uniformly to zero on \(S\), it now follows that \(\lim_{n\to\infty}\|H_n-J\|_S=0\). Therefore, \(\{H_n\}\) converges uniformly on~\(S\).

    The proof is similar to that of Corollary~. We leave it to you (Exercise~).

    \begin{example}\rm Consider the series \[ \sum_{n=1}^\infty \frac{\sin nx}{ n} \nonumber \] with \(f_n=1/n\) (constant), \(g_n(x)=\sin nx\), and \[ G_n(x)=\sin x+\sin2x+\cdots+\sin nx. \nonumber \] We saw in Example~ that \[ |G_n(x)|\le \frac{1}{ |\sin(x/2)|},\quad n\ge1,\quad x\ne2k\pi \mbox{\quad ($k=$ integer)}. \nonumber \]

    Therefore, \(\{\|G_n\|_S\}\) is bounded, and the series converges uniformly on any set \(S\) on which \(\sin(x/2)\) is bounded away from zero. For example, if \(0<\delta<\pi\), then \[ \left|\sin \frac{x}{2}\right|\ge\sin \frac{\delta}{2} \nonumber \] if \(x\) is at least \(\delta\) away from any multiple of \(2\pi\); hence, the series converges uniformly on \[ S=\bigcup^\infty_{k=-\infty}[2k\pi+\delta, 2(k+1)\pi-\delta]. \nonumber \] Since \[ \sum\left|\frac{\sin nx}{ n}\right|=\infty,\quad x\ne k\pi \nonumber \] (Exercise~), this result cannot be obtained from Weierstrass’s test. \end{example}
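The bound on the partial sums \(G_n\) used in this example can be checked numerically; the following sketch (our own, with illustrative grid and tolerance choices) evaluates \(G_n(x)=\sin x+\cdots+\sin nx\) directly and compares it with \(1/|\sin(x/2)|\) away from multiples of \(2\pi\):

```python
import math

# Numeric check (ours) of the bound |G_n(x)| <= 1/|sin(x/2)|, where
# G_n(x) = sin x + sin 2x + ... + sin nx, on a grid avoiding 0 and 2*pi.

def G(n, x):
    return sum(math.sin(k * x) for k in range(1, n + 1))

delta = 0.05
for n in (1, 5, 50, 500):
    for i in range(200):
        x = delta + (2 * math.pi - 2 * delta) * i / 199  # stay delta away from 2k*pi
        assert abs(G(n, x)) <= 1.0 / abs(math.sin(x / 2)) + 1e-9
```

The bound is uniform in \(n\), which is what makes Dirichlet's test applicable even though the series does not converge absolutely.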

    We can obtain results on the continuity, differentiability, and integrability of infinite series by applying Theorems~, , and to their partial sums. We will state the theorems and give some examples, leaving the proofs to you.

    Theorem~ implies the following theorem (Exercise~).

    The next theorem gives conditions that permit the interchange of summation and integration of infinite series. It follows from Theorem~ (Exercise~). We leave it to you to formulate an analog of Theorem~ for series (Exercise~).

    We say in this case that \(\sum_{n=k}^\infty f_n\) can be integrated term by term over \([a,b]\).

    The next theorem gives conditions that permit the interchange of summation and differentiation of infinite series. It follows from Theorem~ (Exercise~).

    We say in this case that \(\sum_{n=k}^\infty f_n\) can be differentiated term by term on \([a,b]\). To apply Theorem~, we first verify that \(\sum_{n=k}^\infty f_n(x_0)\) converges for some \(x_0\) in \([a,b]\) and then differentiate \(\sum_{n=k}^\infty f_n\) term by term. If the resulting series converges uniformly, then term by term differentiation was legitimate.
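A concrete instance (our example, not the text's): for the geometric series \(\sum x^n=1/(1-x)\), the term-by-term derivative \(\sum nx^{n-1}\) also converges on \(|x|<1\), and its sum agrees with \(f'(x)=1/(1-x)^2\), as the theorem predicts:

```python
# Numeric sketch (ours): term-by-term differentiation of the geometric series
# sum x^n = 1/(1-x) gives sum n*x^(n-1), which should equal 1/(1-x)^2 on |x| < 1.

def d_series(x, N=2000):
    """Partial sum of the term-by-term differentiated geometric series."""
    return sum(n * x**(n - 1) for n in range(1, N))

for x in (-0.5, 0.0, 0.3, 0.9):
    assert abs(d_series(x) - 1.0 / (1.0 - x)**2) < 1e-6
```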

    We now consider a class of series sufficiently general to be interesting, but sufficiently specialized to be easily understood.

    The following theorem summarizes the convergence properties of power series.

    In any case, the series converges to \(a_0\) if \(x=x_0\). If \[\begin{equation}\label{eq:4.5.3} \sum |a_n|r^n<\infty \end{equation} \nonumber \] for some \(r>0\), then \(\sum a_n (x-x_0)^n\) converges absolutely uniformly in \([x_0-r, x_0+r]\), by Weierstrass’s test (Theorem~) and Exercise~. From Cauchy’s root test (Theorem~), holds if \[ \limsup_{n\to\infty} (|a_n|r^n)^{1/n}<1, \nonumber \] which is equivalent to \[ r\,\limsup_{n\to\infty} |a_n|^{1/n}<1 \nonumber \] (Exercise~). From , this can be rewritten as \(r<R\), which proves the assertions concerning convergence.

    If \(0\le R<\infty\) and \(|x-x_0|>R\), then \[ \frac{1}{ R}>\frac{1}{ |x-x_0|}, \nonumber \] so implies that \[ |a_n|^{1/n}\ge\frac{1}{ |x-x_0|}\mbox{\quad and therefore\quad} |a_n(x-x_0)^n|\ge1 \nonumber \] for infinitely many values of \(n\). Therefore, \(\sum a_n(x-x_0)^n\) diverges (Corollary~) if \(|x-x_0|>R\). In particular, the series diverges for all \(x\ne x_0\) if \(R=0\).

    To prove the assertions concerning the possibilities at \(x=x_0+R\) and \(x=x_0-R\) requires examples, which follow. (Also, see Exercise~.)

    The number \(R\) defined by is the \emph{radius of convergence} of \(\sum a_n(x-x_0)^n\). If \(R>0\), the open interval \((x_0-R, x_0+R)\), or \((-\infty,\infty)\) if \(R=\infty\), is the \emph{interval of convergence} of the series. Theorem~ says that a power series with a nonzero radius of convergence converges absolutely uniformly in every compact subset of its interval of convergence and diverges at every point in the exterior of this interval. On this last we can make a stronger statement: Not only does \(\sum a_n(x-x_0)^n\) diverge if \(|x-x_0|>R\), but the sequence \(\{a_n(x-x_0)^n\}\) is unbounded in this case (Exercise~).

    The next theorem provides an expression for \(R\) that, if applicable, is usually easier to use than .

    From Theorem~, it suffices to show that if \[\begin{equation}\label{eq:4.5.4} L=\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right| \end{equation} \nonumber \] exists in the extended reals, then \[\begin{equation}\label{eq:4.5.5} L=\limsup_{n\to\infty}|a_n|^{1/n}. \end{equation} \nonumber \] We will show that this is so if \(0<L<\infty\) and leave the cases where \(L=0\) or \(L=\infty\) to you (Exercise~).

    If holds with \(0<L<\infty\) and \(0<\epsilon<L\), there is an integer \(N\) such that \[ L-\epsilon<\left|\frac{a_{m+1}}{ a_m}\right|<L+\epsilon\mbox{\quad if \quad} m\ge N, \nonumber \] so \[ |a_m|(L-\epsilon)<|a_{m+1}|<|a_m|(L+\epsilon)\mbox{\quad if\quad} m \ge N. \nonumber \] By induction, \[ |a_N|(L-\epsilon)^{n-N}<|a_n|<|a_N| (L+\epsilon)^{n-N}\mbox{\quad if \quad} n> N. \nonumber \] Therefore, if \[ K_1=|a_N|(L-\epsilon)^{-N}\mbox{\quad and\quad} K_2=|a_N|(L+ \epsilon)^{-N}, \nonumber \] then \[\begin{equation}\label{eq:4.5.6} K^{1/n}_1(L-\epsilon)<|a_n|^{1/n}<K^{1/n}_2(L+\epsilon). \end{equation} \nonumber \] Since \(\lim_{n\to\infty} K^{1/n}=1\) if \(K\) is any positive number, implies that \[ L-\epsilon\le\liminf_{n\to\infty} |a_n|^{1/n}\le \limsup_{n\to\infty} |a_n|^{1/n}\le L+\epsilon. \nonumber \] Since \(\epsilon\) is an arbitrary positive number, it follows that \[ \lim_{n\to\infty} |a_n|^{1/n}=L, \nonumber \] which implies .
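The ratio formula \(R=\lim|a_n/a_{n+1}|\) is easy to illustrate numerically. In this sketch (our own example), \(a_n=2^n/n\), for which the ratio is \((n+1)/(2n)\to 1/2\), so \(R=1/2\):

```python
# Illustrative ratio-test computation (our example) of the radius of convergence:
# for a_n = 2^n / n, |a_n / a_{n+1}| = (n+1)/(2n), which tends to 1/2.

def a(k):
    return 2.0**k / k

def ratio(n):
    """The ratio |a_n / a_{n+1}| approximating R."""
    return abs(a(n) / a(n + 1))

assert abs(ratio(1000) - 0.5) < 1e-3   # ratio(1000) = 1001/2000 = 0.5005
```

This agrees with the root formula, since \(|a_n|^{1/n}=(2^n/n)^{1/n}\to 2=1/R\).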

    We now study the properties of functions defined by power series. Henceforth, we consider only power series with nonzero radii of convergence.

    First, the series in and are the same, since the latter is obtained by shifting the index of summation in the former. Since \[\begin{eqnarray*} \limsup_{n\to\infty} ((n+1)|a_n|)^{1/n}\ar= \limsup_{n\to\infty} (n+1)^{1/n}|a_n|^{1/n}\\ \ar=\left(\lim_{n\to\infty} (n+1)^{1/n}\right)\left(\limsup_{n\to \infty} |a_n|^{1/n}\right)\mbox{\quad (Exercise~\ref{exer:4.1.30}\part{a})}\\ \ar=\left[\lim_{n\to\infty}\exp\left(\frac{\log(n+1)}{ n}\right) \right]\left(\limsup_{n\to\infty} |a_n|^{1/n}\right)=\frac{e^0}{ R}=\frac{1}{ R}, \end{eqnarray*} \nonumber \] the radius of convergence of the power series in is \(R\) (Theorem~). Therefore, the power series in converges uniformly in every interval \([x_0-r, x_0+r]\) such that \(0<r<R\), and Theorem~ now implies for all \(x\) in \((x_0-R, x_0+R)\).

    Theorem~ can be strengthened as follows.

    The proof is by induction. The assertion is true for \(k=1\), by Theorem~. Suppose that it is true for some \(k\ge1\). By shifting the index of summation, we can rewrite as \[ f^{(k)}(x)=\sum^\infty_{n=0} (n+k)(n+k-1)\cdots (n+1)a_{n+k}(x-x_0)^n, \quad |x-x_0|<R. \nonumber \]

    Defining \[\begin{equation}\label{eq:4.5.12} b_n=(n+k)(n+k-1)\cdots (n+1)a_{n+k}, \end{equation} \nonumber \] we rewrite this as \[ f^{(k)}(x)=\sum^\infty_{n=0} b_n(x-x_0)^n,\quad |x-x_0|<R. \nonumber \] By Theorem~, we can differentiate this series term by term to obtain \[ f^{(k+1)}(x)=\sum^\infty_{n=1} nb_n(x-x_0)^{n-1},\quad |x-x_0|<R. \nonumber \] Substituting from for \(b_n\) yields \[ f^{(k+1)}(x)=\sum^\infty_{n=1}(n+k)(n+k-1)\cdots(n+1) na_{n+k}(x-x_0)^{n-1},\quad |x-x_0|<R. \nonumber \] Shifting the summation index yields \[ f^{(k+1)}(x)=\sum^\infty_{n=k+1} n(n-1)\cdots (n-k)a_n(x-x_0)^{n-k-1}, \quad |x-x_0|<R, \nonumber \] which is with \(k\) replaced by \(k+1\). This completes the induction.
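The repeated-differentiation formula can be tested concretely. In this sketch (our own example, with \(a_n=1\) and \(x_0=0\), i.e., the geometric series \(f(x)=1/(1-x)\)), the differentiated series \(\sum_{n\ge k} n(n-1)\cdots(n-k+1)x^{n-k}\) is compared with the closed form \(f^{(k)}(x)=k!/(1-x)^{k+1}\):

```python
import math

# Numeric sketch (ours): for f(x) = sum x^n = 1/(1-x), the k-times
# differentiated series should equal k!/(1-x)^(k+1) on |x| < 1.
# math.perm(n, k) = n(n-1)...(n-k+1) is the falling factorial.

def fk_series(k, x, N=3000):
    return sum(math.perm(n, k) * x**(n - k) for n in range(k, N))

for k in (1, 2, 3):
    for x in (-0.4, 0.5):
        exact = math.factorial(k) / (1.0 - x)**(k + 1)
        assert abs(fk_series(k, x) - exact) < 1e-8
```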

    Theorem~ has two important corollaries.

    Setting \(x=x_0\) in yields \[ f^{(k)} (x_0)=k! a_k. \nonumber \]

    Let \[ f(x)=\sum^\infty_{n=0} a_n(x-x_0)^n\mbox{\quad and\quad} g(x)= \sum^\infty_{n=0} b_n(x-x_0)^n. \nonumber \] From Corollary~, \[\begin{equation}\label{eq:4.5.15} a_n=\frac{f^{(n)}(x_0)}{ n!}\mbox{\quad and\quad} b_n= \frac{g^{(n)}(x_0)}{ n!}. \end{equation} \nonumber \]

    From , \(f=g\) in \((x_0-r,x_0+r)\). Therefore, \[ f^{(n)} (x_0)=g^{(n)}(x_0),\quad n\ge0. \nonumber \] This and imply .

    Theorems~ and imply the following theorem. We leave the proof to you (Exercise~).

    Example~ presents an application of this theorem.

    So far we have asked for what values of \(x\) a given power series converges, and what are the properties of its sum. Now we ask a related question: What properties guarantee that a given function \(f\) can be represented as the sum of a convergent power series in \(x-x_0\)? A partial answer to this question is provided by what we already know: Theorem~ tells us that \(f\) must have derivatives of all orders in some neighborhood of \(x_0\), and Corollary~ tells us that the only power series in \(x-x_0\) that can possibly converge to \(f\) in such a neighborhood is \[\begin{equation}\label{eq:4.5.16} \sum^\infty_{n=0}\frac{f^{(n)}(x_0)}{ n!} (x-x_0)^n. \end{equation} \nonumber \] This is called the \emph{Taylor series of \(f\) about \(x_0\)} (also, the \emph{Maclaurin series} of \(f\), if \(x_0=0\)). The \(m\)th partial sum of is the Taylor polynomial \[ T_m(x)=\sum^m_{n=0}\frac{f^{(n)}(x_0)}{ n!} (x-x_0)^n, \nonumber \] defined in Section~2.5.

    The Taylor series of an infinitely differentiable function \(f\) may converge to a sum different from \(f\). For example, the function \[ f(x)=\left\{\casespace\begin{array}{ll} e^{-1/x^2},&x\ne0,\\ 0,&x=0,\end{array}\right. \nonumber \] is infinitely differentiable on \((-\infty,\infty)\) and \(f^{(n)}(0)=0\) for \(n\ge0\) (Exercise~), so its Maclaurin series is identically zero.
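A quick numeric illustration of this phenomenon (our own, assuming the stated fact that all derivatives of \(f\) vanish at 0): the function is strictly positive off the origin, so the zero Maclaurin series converges everywhere but represents \(f\) only at \(x=0\):

```python
import math

# Illustration (ours): f(x) = exp(-1/x^2) for x != 0, f(0) = 0.
# Every Maclaurin polynomial of f is identically zero (all f^(n)(0) = 0),
# yet f is positive away from 0, so the series does not converge to f there.

def f(x):
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

for x in (0.25, 0.5, 1.0, 2.0):
    assert f(x) > 0.0            # f is not the zero function
assert f(0.0) == 0.0
# since every T_n is the zero polynomial, the error at x is |f(x)| itself:
assert abs(f(1.0)) == math.exp(-1.0)
```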

    The answer to our question is provided by Taylor’s theorem (Theorem~), which says that if \(f\) is infinitely differentiable on \((a,b)\) and \(x\) and \(x_0\) are in \((a,b)\), then, for every integer \(n\ge0\), \[\begin{equation}\label{eq:4.5.17} f(x)-T_n(x)=\frac{f^{(n+1)}(c_n)}{(n+1)!} (x-x_0)^{n+1}, \end{equation} \nonumber \] where \(c_n\) is between \(x\) and \(x_0\). Therefore, \[ f(x)=\sum^\infty_{n=0}\frac{f^{(n)}(x_0)}{ n!} (x-x_0)^n \nonumber \] for an \(x\) in \((a,b)\) if and only if \[ \lim_{n\to\infty}\frac{f^{(n+1)}(c_n)}{(n+1)!} (x-x_0)^{n+1}=0. \nonumber \] It is not always easy to check this condition, because the sequence \(\{c_n\}\) is usually not precisely known, or even uniquely defined; however, the next theorem is sufficiently general to be useful.

    From , \[ \|f-T_n\|_{I_r}\le\frac{r^{n+1}}{(n+1)!}\|f^{(n+1)}\|_{I_r}\le \frac{r^{n+1}}{(n+1)!}\|f^{(n+1)}\|_I, \nonumber \] so implies the conclusion.
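This bound is easy to see in action. In the sketch below (our own example), \(f=\exp\) on \(I_r=[-r,r]\), where \(\|f^{(n+1)}\|_{I_r}\le e^r\), so the uniform Taylor error is dominated by \(r^{n+1}e^r/(n+1)!\), which tends to zero:

```python
import math

# Numeric sketch (ours): for f = exp on [-r, r], ||f^(n+1)|| <= e^r, so
# ||f - T_n|| <= r^(n+1) * e^r / (n+1)!, forcing uniform convergence.

def bound(r, n):
    return r**(n + 1) / math.factorial(n + 1) * math.exp(r)

def taylor_err(r, n, samples=100):
    """Sampled sup-norm of exp - T_n over [-r, r]."""
    worst = 0.0
    for i in range(samples + 1):
        x = -r + 2.0 * r * i / samples
        t = sum(x**k / math.factorial(k) for k in range(n + 1))
        worst = max(worst, abs(math.exp(x) - t))
    return worst

for n in (5, 10, 20):
    assert taylor_err(2.0, n) <= bound(2.0, n) + 1e-12
assert bound(2.0, 40) < 1e-20   # the bound collapses rapidly with n
```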

    We cannot prove in this way that the binomial series converges to \((1+x)^q\) on \((-1,0)\). This requires a form of the remainder in Taylor’s theorem that we have not considered, or a different kind of proof altogether (Exercise~). The complete result is that \[\begin{equation}\label{eq:4.5.21} (1+x)^q=\sum^\infty_{n=0}\binom{q}{n} x^n,\quad-1<x<1, \end{equation} \nonumber \] for all \(q\), and, as we said earlier, the identity holds for all \(x\) if \(q\) is a nonnegative integer.

    We now consider addition and multiplication of power series, and division of one by another.

    We leave the proof of the next theorem to you (Exercise~).

    Suppose that \(R_1\le R_2\). Since the series and converge absolutely to \(f(x)\) and \(g(x)\) if \(|x-x_0|<R_1\), their Cauchy product converges to \(f(x)g(x)\) if \(|x-x_0|<R_1\), by Theorem~. The \(n\)th term of this product is \[ \sum^n_{r=0} a_r(x-x_0)^r b_{n-r}(x-x_0)^{n-r}=\left(\sum^n_{r=0} a_rb_{n-r}\right) (x-x_0)^n=c_n(x-x_0)^n. \nonumber \]
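The Cauchy-product coefficients \(c_n=\sum_{r=0}^n a_rb_{n-r}\) can be computed directly. In this sketch (our own example), squaring the geometric series (\(a_n=b_n=1\)) gives \(c_n=n+1\), consistent with \((1/(1-x))^2=\sum(n+1)x^n\):

```python
# Sketch (ours) of the Cauchy-product coefficient formula c_n = sum a_r b_{n-r}:
# squaring the geometric series (a_n = b_n = 1) should give c_n = n + 1.

def cauchy_coeffs(a, b, N):
    return [sum(a[r] * b[n - r] for r in range(n + 1)) for n in range(N)]

ones = [1] * 40
c = cauchy_coeffs(ones, ones, 40)
assert c == [n + 1 for n in range(40)]

# the product series should sum to 1/(1-x)^2 inside the interval of convergence
x = 0.5
partial = sum(c[n] * x**n for n in range(40))
assert abs(partial - 1.0 / (1.0 - x)**2) < 1e-8
```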

    The quotient \[\begin{equation}\label{eq:4.5.25} f(x)=\frac{h(x)}{ g(x)} \end{equation} \nonumber \] of two power series \[\begin{eqnarray*} h(x)\ar=\sum^\infty_{n=0} c_n(x-x_0)^n,\quad |x-x_0|<R_1,\\ \noalign{\hbox{and}} g(x)\ar=\sum^\infty_{n=0} b_n(x-x_0)^n,\quad |x-x_0|<R_2, \end{eqnarray*} \nonumber \] can be represented as a power series \[\begin{equation}\label{eq:4.5.26} f(x)=\sum^\infty_{n=0} a_n(x-x_0)^n \end{equation} \nonumber \] with a positive radius of convergence, provided that \[ b_0=g(x_0)\ne0. \nonumber \] This is surely plausible. Since \(g(x_0)\ne0\) and \(g\) is continuous near \(x_0\), the denominator of differs from zero on an interval about \(x_0\). Therefore, \(f\) has derivatives of all orders on this interval, because \(g\) and \(h\) do. However, the proof that the Taylor series of \(f\) about \(x_0\) converges to \(f\) near \(x_0\) requires the use of the theory of functions of a complex variable. Therefore, we omit it. However, it is straightforward to compute the coefficients in if we accept the validity of the expansion. Since \[ f(x)g(x)=h(x), \nonumber \]

    Theorem~ implies that \[ \sum^n_{r=0} a_rb_{n-r}=c_n,\quad n\ge0. \nonumber \] Solving these equations successively yields \[\begin{eqnarray*} a_0\ar=\frac{c_0}{ b_0},\\ a_n\ar=\frac{1}{ b_0}\left(c_n-\sum^{n-1}_{r=0} b_{n-r}a_r\right),\quad n\ge1. \end{eqnarray*} \nonumber \]

    It is not worthwhile to memorize these formulas. Rather, it is usually better to view the procedure as follows: Multiply the series \(f\) (with unknown coefficients) and \(g\) according to the procedure of Theorem~, equate the resulting coefficients with those of \(h\), and solve the resulting equations successively for \(a_0\), \(a_1\), \(a_2\), \dots.
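The successive-solution procedure is mechanical enough to sketch in code. Below (our implementation, with illustrative names), the equations \(\sum_{r=0}^n a_rb_{n-r}=c_n\) are solved in order for the quotient coefficients, using \(\tan x=\sin x/\cos x\) about \(x_0=0\) as the example:

```python
from fractions import Fraction
from math import factorial

# Sketch (ours) of the recursive division scheme: given coefficients c_n of h
# and b_n of g with b_0 != 0, solve sum_{r<=n} a_r b_{n-r} = c_n successively.

def divide(c, b, N):
    a = []
    for n in range(N):
        s = sum(b[n - r] * a[r] for r in range(n))
        a.append((c[n] - s) / b[0])
    return a

N = 8
# Maclaurin coefficients of sin (odd powers) and cos (even powers), exactly:
sin_c = [Fraction((-1)**(k // 2), factorial(k)) if k % 2 else Fraction(0)
         for k in range(N)]
cos_c = [Fraction((-1)**(k // 2), factorial(k)) if k % 2 == 0 else Fraction(0)
         for k in range(N)]
tan_c = divide(sin_c, cos_c, N)
# known expansion: tan x = x + x^3/3 + 2 x^5/15 + ...
assert tan_c[1] == 1 and tan_c[3] == Fraction(1, 3) and tan_c[5] == Fraction(2, 15)
```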

    From Theorem~, we know that a function \(f\) defined by a convergent power series \[\begin{equation}\label{eq:4.5.29} f(x)=\sum^\infty_{n=0} a_n(x-x_0)^n,\quad |x-x_0|<R, \end{equation} \nonumber \] is continuous in the open interval \((x_0-R,x_0+R)\). The next theorem concerns the behavior of \(f\) as \(x\) approaches an endpoint of the interval of convergence.

    We consider a simpler problem first. Let \[\begin{eqnarray*} g(y)\ar=\sum^\infty_{n=0} b_ny^n\\ \arraytext{and}\\ \sum^\infty_{n=0} b_n\ar=s\mbox{\quad (finite)}. \end{eqnarray*} \nonumber \] We will show that \[\begin{equation}\label{eq:4.5.30} \lim_{y\to 1-} g(y)=s. \end{equation} \nonumber \]

    From Example~, \[\begin{equation}\label{eq:4.5.31} g(y)=(1-y)\sum^\infty_{n=0} s_ny^n, \end{equation} \nonumber \] where \[ s_n=b_0+b_1+\cdots+b_n. \nonumber \] Since \[\begin{equation} \label{eq:4.5.32} \frac{1}{1-y}=\sum^\infty_{n=0} y^n\mbox{\quad and therefore \quad} 1=(1-y)\sum_{n=0}^\infty y^n,\quad|y|<1, \end{equation} \nonumber \] we can multiply through by \(s\) and write \[ s=(1-y)\sum^\infty_{n=0} sy^n,\quad |y|<1. \nonumber \] Subtracting this from yields \[ g(y)-s=(1-y)\sum^\infty_{n=0} (s_n-s)y^n,\quad |y|<1. \nonumber \] If \(\epsilon>0\), choose \(N\) so that \[ |s_n-s|<\epsilon\mbox{\quad if\quad} n\ge N+1. \nonumber \] Then, if \(0<y<1\), \[\begin{eqnarray*} |g(y)-s|\ar\le (1-y)\sum^N_{n=0} |s_n-s| y^n+(1-y)\sum^\infty_{n=N+1} |s_n-s|y^n\\ \ar<(1-y)\sum^N_{n=0} |s_n-s|y^n+(1-y)\epsilon y^{N+1} \sum^\infty_{n=0}y^n\\ \ar<(1-y)\sum^N_{n=0} |s_n-s|+\epsilon, \end{eqnarray*} \nonumber \] because of the second equality in . Therefore, \[ |g(y)-s|<2\epsilon \nonumber \] if \[ (1-y)\sum^N_{n=0} |s_n-s|<\epsilon. \nonumber \] This proves .

    To obtain from this, let \(b_n=a_nR^n\) and \(g(y)=f(x_0+Ry)\); to obtain , let \(b_n=(-1)^na_nR^n\) and \(g(y)=f(x_0-Ry)\).
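The limit just proved can be watched numerically. In this sketch (our own example), \(b_n=(-1)^n/(n+1)\), so \(\sum b_n=\ln 2\), and \(g(y)=\sum b_ny^n\) should approach \(\ln 2\) as \(y\to 1{-}\):

```python
import math

# Numeric illustration (ours) of the Abel-type limit: with b_n = (-1)^n/(n+1),
# sum b_n = ln 2, and g(y) = sum b_n y^n should tend to ln 2 as y -> 1-.

def g(y, N=20000):
    total, p = 0.0, 1.0          # p holds y^n
    for n in range(N):
        term = p / (n + 1)
        total += term if n % 2 == 0 else -term
        p *= y
    return total

prev_gap = float("inf")
for y in (0.9, 0.99, 0.999):
    gap = abs(g(y) - math.log(2.0))
    assert gap < prev_gap        # the gap shrinks as y increases toward 1
    prev_gap = gap
assert prev_gap < 1e-3
```

Note that the convergence of \(\sum b_n\) itself is only conditional here, so this behavior is genuinely a consequence of the theorem rather than of absolute convergence.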


    This page titled 4.1: Sequences of Real Numbers is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by William F. Trench via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.