# 8.5: Taylor Polynomials and Taylor Series


Learning Objectives

In this section, we strive to understand the ideas generated by the following important questions:

- What is a Taylor polynomial? For what purposes are Taylor polynomials used?
- What is a Taylor series?
- How are Taylor polynomials and Taylor series different? How are they related?
- How do we determine the accuracy when we use a Taylor polynomial to approximate a function?

In our work to date in Chapter 8, essentially every sum we have considered has been a sum of numbers. In particular, each infinite series that we have discussed has been a series of real numbers, such as

\[1 + \dfrac{1}{2} + \dfrac{1}{4} + \ldots + \dfrac{1}{2^k} + \ldots = \sum_{k=0}^{\infty} \dfrac{1}{2^k} . \label{8.18}\]

In the remainder of this chapter, we will expand our notion of series to include series that involve a variable, say \(x\). For instance, if in the geometric series in Equation \(\ref{8.18}\) we replace the ratio \(r = \frac{1}{2}\) with the variable \(x\), then we have the infinite (still geometric) series

\[1 + x + x^2 + \ldots + x^k + \ldots = \sum_{k=0}^{\infty} x^k . \label{8.19} \]

Here we see something very interesting: since a geometric series converges whenever its ratio \(r\) satisfies \(|r| < 1\), and the sum of a convergent geometric series is \(\frac{a}{1−r}\), we can say that for \(|x| < 1,\)

\[ 1 + x + x^2 + \ldots + x^k + \ldots = \dfrac{1}{1 − x}. \label{8.20}\]

Note well what Equation \(\ref{8.20}\) states: the non-polynomial function \(\frac{1}{1−x}\) on the right is equal to the infinite polynomial expression on the left. Moreover, it appears natural to truncate the infinite sum on the left (whose terms get very small as \(k\) gets large) and say, for example, that

\[1 + x + x^2 + x^3 \approx \dfrac{1}{1 − x} \]

for small values of \(x\). This shows one way that a polynomial function can be used to approximate a non-polynomial function; such approximations are one of the main themes in this section and the next.
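As a quick numerical sanity check, here is a Python sketch (the helper name `geometric_truncation` is ours, not from the text) comparing the cubic truncation with \(\frac{1}{1-x}\) for a few small values of \(x\):

```python
# Compare the truncated geometric series 1 + x + x^2 + x^3 with 1/(1 - x).
# The approximation is excellent for small x and degrades as x grows.

def geometric_truncation(x, degree=3):
    """Partial sum 1 + x + ... + x^degree of the geometric series."""
    return sum(x ** k for k in range(degree + 1))

for x in (0.1, 0.25, 0.5):
    exact = 1 / (1 - x)
    approx = geometric_truncation(x)
    print(f"x = {x}: approx = {approx:.6f}, exact = {exact:.6f}")
```

At \(x = 0.1\) the two values agree to about four decimal places, while at \(x = 0.5\) the truncation is noticeably off, matching the "for small values of \(x\)" caveat above.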

A polynomial function can be used to approximate a non-polynomial function.

In Preview Activity \(\PageIndex{1}\), we begin our explorations of approximating non-polynomial functions with polynomials, from which we will also develop ideas regarding infinite series that involve a variable, \(x\).

Preview Activity \(\PageIndex{1}\)

Preview Activity 8.5.3 showed how we can approximate the number \(e\) using linear, quadratic, and other polynomial functions; we then used similar ideas in Preview Activity 8.4 to approximate \(\ln(2)\). In this activity, we review and extend the process to find the “best” quadratic approximation to the exponential function \(e^x\) around the origin. Let \(f(x) = e^x\) throughout this activity.

- Find a formula for \(P_1(x)\), the linearization of \(f (x)\) at \(x = 0\). (We label this linearization \(P_1\) because it is a first degree polynomial approximation.) Recall that \(P_1(x)\) is a good approximation to \(f (x)\) for values of \(x\) close to 0. Plot \(f\) and \(P_1\) near \(x = 0\) to illustrate this fact.
- Since \(f (x) = e^x\) is not linear, the linear approximation eventually is not a very good one. To obtain better approximations, we want to develop a different approximation that “bends” to make it more closely fit the graph of f near \(x = 0\). To do so, we add a quadratic term to \(P_1(x)\). In other words, we let

\[P_2(x) = P_1(x) + c_2 x^2 \]

for some real number \(c_2\). We need to determine the value of \(c_2\) that makes the graph of \(P_2(x)\) best fit the graph of \(f (x)\) near \(x = 0\).

Remember that \(P_1(x)\) was a good linear approximation to \(f(x)\) near 0; this is because \(P_1(0) = f(0)\) and \(P'_1(0) = f'(0)\). It is therefore reasonable to seek a value of \(c_2\) so that

\[P_2(0) = f (0)\, \]

\[P'_2 (0) = f'(0),\ \text{and} \]

\[P''_2 (0) = f''(0). \]

Remember, we are letting \(P_2(x) = P_1(x) + c_2 x^2.\)

- Calculate \(P_2(0)\) to show that \(P_2(0) = f (0)\).
- Calculate \(P'_2 (0)\) to show that \(P'_2 (0) = f'(0)\).
- Calculate \(P''_2 (x)\). Then find a value for \(c_2\) so that \(P''_2 (0) = f''(0)\).
- Explain why the condition \(P''_2 (0) = f''(0)\) will put an appropriate “bend" in the graph of \(P_2\) to make \(P_2\) fit the graph of \(f\) around \(x = 0\).

## Taylor Polynomials

Preview Activity \(\PageIndex{1}\) illustrates the first steps in the process of approximating complicated functions with polynomials. Using this process we can approximate trigonometric, exponential, logarithmic, and other nonpolynomial functions as closely as we like (for certain values of \(x\)) with polynomials. This is extraordinarily useful in that it allows us to calculate values of these functions to whatever precision we like using only the operations of addition, subtraction, multiplication, and division, which are operations that can be easily programmed in a computer.

We next extend the approach in Preview Activity \(\PageIndex{1}\) to arbitrary functions at arbitrary points. Let \(f\) be a function that has as many derivatives at a point \(x = a\) as we need. Since first learning it in Section 1.8, we have regularly used the linear approximation \(P_1(x)\) to \(f\) at \(x = a\), which in one sense is the best linear approximation to \(f\) near \(a\). Recall that \(P_1(x)\) is the tangent line to \(f\) at \((a, f(a))\) and is given by the formula

\[P_1(x) = f (a) + f'(a)(x − a). \]

If we proceed as in Preview Activity \(\PageIndex{1}\), we then want to find the best quadratic approximation

\[P_2(x) = P_1(x) + c_2(x − a)^2 \]

so that \(P_2(x)\) more closely models \(f (x)\) near \(x = a\). Consider the following calculations of the values and derivatives of \(P_2(x)\):

\[\begin{align}P_2(x) & = P_1(x) + c_2(x − a)^2 \\ P'_2 (x) & = P'_1(x) + 2c_2(x − a) \\ P''_2 (x) & = 2c_2 \end{align} \]

and then evaluated at \(x=a\)

\[\begin{align} P_2(a) & = P_1(a) = f (a) \\ P'_2 (a) & = P'_1 (a) = f'(a) \\ P''_2 (a) & = 2c_2. \end{align} \]

To make \(P_2(x)\) fit \(f (x)\) better than \(P_1(x)\), we want \(P_2(x)\) and \(f (x)\) to have the same concavity at \(x = a\). That is, we want to have

\[ P''_2 (a) = f''(a). \]

This implies that

\[ 2c_2 = f''(a) \]

and thus

\[ c_2 = \dfrac{f''(a)}{2}. \]

Therefore, the quadratic approximation \(P_2(x)\) to \(f\) centered at \(x = a\) is

\[P_2(x) = f (a) + f '(a)(x − a) + \dfrac{f ''(a)}{2!} (x − a)^2. \]
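To illustrate the formula just derived, here is a brief Python sketch (the function name `p2_cos` is ours) that builds \(P_2\) for \(f(x) = \cos(x)\) centered at \(a = 0\), where \(f(0) = 1\), \(f'(0) = 0\), and \(f''(0) = −1\):

```python
import math

# Quadratic Taylor approximation P2(x) = f(a) + f'(a)(x - a) + (f''(a)/2)(x - a)^2
# applied to f(x) = cos(x) at a = 0, giving P2(x) = 1 - x^2/2.

def p2_cos(x):
    return 1.0 + 0.0 * x + (-1.0 / 2.0) * x ** 2

# Near 0 the quadratic hugs cos(x) closely.
for x in (0.1, 0.5):
    print(x, p2_cos(x), math.cos(x))
```

The "bend" supplied by the \(x^2\) term is exactly what matches the concavity of the cosine graph at the center.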

This approach extends naturally to polynomials of higher degree. In this situation, we define polynomials

\[ \begin{align} P_3(x) & = P_2(x) + c_3(x − a)^3 \\ P_4(x) & = P_3(x) + c_4(x − a)^4 \\ P_5(x) & = P_4(x) + c_5(x − a)^5 \end{align} \]

and so on, with the general one being

\[P_n(x) = P_{n−1}(x) + c_n(x − a)^n. \]

The defining property of these polynomials is that for each \(n\), \(P_n(x)\) must have its value and its first \(n\) derivatives agree with those of \(f\) at \(x = a\). In other words, we require that

\[ P^{(k)}_n (a) = f^{(k)}(a) \]

for all \( k\) from 0 to \( n\).

To see the conditions under which this happens, suppose

\[P_n(x) = c_0 + c_1(x − a) + c_2(x − a)^2 + \ldots + c_n(x − a)^n . \]

Then

\[ \begin{align} P^{(0)}_n (a) & = c_0 \\ P^{(1)}_n (a) & = c_1 \\ P^{(2)}_n (a) &= 2c_2 \\ P^{(3)}_n (a) &= (2)(3)c_3 \\ P^{(4)}_n (a) &= (2)(3)(4)c_4 \\ P^{(5)}_n (a) & = (2)(3)(4)(5)c_5 \end {align} \]

and, in general,

\[P^{(k)}_n (a) = (2)(3)(4) \ldots (k − 1)(k)c_k = k!c_k . \]

So having

\[P^{(k)}_n (a) = f^{(k)} (a) \]

means that

\[k!c_k = f^{(k)} (a) \]

and therefore

\[c_k = \dfrac{f^{(k)}(a)}{k!} \]

for each value of \(k\). With this expression for \(c_k\), we have found the formula for the degree \(n\) polynomial approximation of \(f\) that we seek.
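To see the coefficient formula \(c_k = \frac{f^{(k)}(a)}{k!}\) in action, here is a small Python sketch (the helper name `kth_derivative_at` is ours) for \(f(x) = \frac{1}{1−x}\) centered at \(a = 0\), using the closed form \(f^{(k)}(x) = \frac{k!}{(1-x)^{k+1}}\) (a short induction confirms it); every coefficient works out to 1:

```python
import math

# Taylor coefficients c_k = f^{(k)}(0)/k! for f(x) = 1/(1 - x), whose
# kth derivative is k!/(1 - x)^(k+1). Every coefficient at 0 equals 1,
# recovering the geometric series 1 + x + x^2 + ... .

def kth_derivative_at(a, k):
    """k-th derivative of 1/(1 - x), evaluated at x = a."""
    return math.factorial(k) / (1 - a) ** (k + 1)

coeffs = [kth_derivative_at(0.0, k) / math.factorial(k) for k in range(5)]
print(coeffs)  # [1.0, 1.0, 1.0, 1.0, 1.0]
```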

Taylor Polynomials

The \(n\)th order Taylor polynomial of \(f\) centered at \(x = a\) is given by

\[ \begin{align} P_n(x) & = f (a) + f' (a)(x − a) + \dfrac{f''(a)}{2!} (x − a)^2 + \ldots + \dfrac{f^{(n)}(a)}{n!} (x − a)^n \\ & = \sum_{k=0}^n \dfrac{f^{(k)} (a)}{ k! } (x − a)^k. \label{Taylor} \end {align} \]

This degree \(n\) polynomial approximates \(f(x)\) near \(x = a\) and has the property that \(P^{(k)}_n (a) = f^{(k)} (a)\) for \(k = 0, 1, \ldots, n\).

Example \(\PageIndex{1}\)

Determine the third order Taylor polynomial for \(f (x) =e^x\), as well as the general \( n\)th order Taylor polynomial for \(f\) centered at \(x = 0\).

**Solution**

We know that \(f'(x) = e^x\) and so \(f''(x) = e^x\) and \(f'''(x) = e^x\). Thus,

\[f (0) = f'(0) = f''(0) = f'''(0) = 1. \]

So the third order Taylor polynomial of \(f (x) = e^ x\) centered at \(x = 0\) is (Equation \ref{Taylor})

\[ \begin{align} P_3(x) & = f (0) + f' (0)(x − 0) + \dfrac{f ''(0) }{2!} (x − 0)^2 + \dfrac{f '''(0)}{3!} (x − 0)^ 3 \\ & = 1 + x + \dfrac{x^2}{2} + \dfrac{x^3}{6} \end{align} \]

In general, for the exponential function \(f\) we have \(f^{(k)} (x) = e^ x\) for every positive integer \(k\). Thus, the \(k\)th term in the \(n\)th order Taylor polynomial for \(f (x)\) centered at \(x = 0\) is

\[ \dfrac{f^{(k)} (0)}{ k!} (x − 0)^k = \dfrac{1}{ k!} x^ k . \]

Therefore, the \(n\)th order Taylor polynomial for \(f (x) = e^x\) centered at \(x = 0\) is

\[P_n(x) = 1 + x + \dfrac{x^2}{ 2!} + \ldots + \dfrac{1}{n!} x^n = \sum_{k=0}^n \dfrac{ x^k}{k!}. \]
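The formula for \(P_n\) translates directly into a few lines of code. A minimal Python sketch (the function name `taylor_exp` is ours):

```python
import math

# nth order Maclaurin polynomial for e^x: P_n(x) = sum_{k=0}^{n} x^k / k!.

def taylor_exp(x, n):
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

# Increasing n rapidly improves the approximation of e = e^1.
for n in (1, 3, 5, 10):
    print(n, taylor_exp(1.0, n))
```

Already at \(n = 10\) the value agrees with \(e \approx 2.718281828\) to about seven decimal places.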

Activity \(\PageIndex{2}\): Other Functions

We have just seen that the \(n\)th order Taylor polynomial centered at \(a = 0\) for the exponential function \(e^x \) is

\[\sum _{k=0}^n \dfrac{x^k}{k!}. \]

In this activity, we determine small order Taylor polynomials for several other familiar functions, and look for general patterns that will help us find the Taylor series expansions a bit later.

- Let \(f (x) = \frac{1}{1−x}\).
- Calculate the first four derivatives of \(f (x)\) at \(x = 0\). Then find the fourth order Taylor polynomial \(P_4(x)\) for \(\frac{1}{1−x}\) centered at 0.
- Based on your results from part (i), determine a general formula for \(f^{(k)} (0)\).

- Let \(f (x) = \cos(x)\).
- Calculate the first four derivatives of \(f (x)\) at \(x = 0\). Then find the fourth order Taylor polynomial \(P_4(x)\) for \(\cos(x)\) centered at 0.
- Based on your results from part (i), find a general formula for \(f^{(k)} (0)\). (Think about how \(k\) being even or odd affects the value of the \( k\)th derivative.)

- Let \(f (x) = sin(x)\).
- Calculate the first four derivatives of \(f (x)\) at \(x = 0\). Then find the fourth order Taylor polynomial \(P_4(x)\) for \(\sin(x)\) centered at 0.
- Based on your results from part (i), find a general formula for \(f^{(k)} (0)\). (Think about how \(k\) being even or odd affects the value of the \(k\)th derivative.)

It is possible that an \(n\)th order Taylor polynomial is not a polynomial of degree \(n\); that is, the order of the approximation can be different from the degree of the polynomial. For example, in Activity \(\PageIndex{2}\) we found that the second order Taylor polynomial \(P_2(x)\) centered at 0 for \(\sin(x)\) is \(P_2(x) = x\). In this case, the second order Taylor polynomial is a degree 1 polynomial.

## Taylor Series

In Activity \(\PageIndex{2}\) we saw that the fourth order Taylor polynomial \(P_4(x)\) for \(\sin(x)\) centered at 0 is

\[P_4(x) = x − \dfrac{x^3}{3!} \]

The pattern we found for the derivatives \(f^{(k)}(0)\) describes the higher-order Taylor polynomials, e.g.,

\[ \begin{align} P_5(x) &= x − \dfrac{x^3}{3!} + \dfrac{x^5}{5!} \\ P_7(x) &= x − \dfrac{ x^3}{3!} + \dfrac{x^5}{5!} − \dfrac{x^7}{7!} \\ P_9(x) &= x − \dfrac{x^3}{3!} + \dfrac{x^5}{5!} − \dfrac{x^7}{7!} + \dfrac{x^9}{9!} \end{align} \]

and so on. It is instructive to consider the graphical behavior of these functions; Figure \(\PageIndex{1}\) shows the graphs of a few of the Taylor polynomials centered at 0 for the sine function.

**Figure \(\PageIndex{1}\):** The order 1, 5, 7, and 9 Taylor polynomials centered at \(x = 0\) for \(f (x) = \sin(x)\).

Notice that \(P_1(x)\) is close to the sine function only for values of \(x\) near 0, but as we increase the order, the Taylor polynomials provide a better fit to the graph of the sine function over larger intervals. This illustrates the general behavior of Taylor polynomials: for any sufficiently *well-behaved* function, the sequence \(\{P_n(x)\}\) of Taylor polynomials converges to the function \(f\) on larger and larger intervals (though those intervals may not necessarily increase without bound). If the Taylor polynomials ultimately converge to \(f\) on its entire domain, we write

\[f(x) = \sum_{k=0}^{\infty} \dfrac{f^{(k)}(a)}{k!} (x − a)^k .\]
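The behavior in Figure \(\PageIndex{1}\) can also be measured rather than graphed. The Python sketch below (the helper name `taylor_sin` is ours) computes the worst-case error of several Maclaurin polynomials for \(\sin(x)\) on \([-4, 4]\); the maximum error drops sharply as the order increases:

```python
import math

def taylor_sin(x, order):
    """Maclaurin polynomial for sin(x); only odd powers contribute."""
    return sum((-1) ** (k // 2) * x ** k / math.factorial(k)
               for k in range(order + 1) if k % 2 == 1)

# Worst error over a grid of sample points in [-4, 4] for orders 1, 5, 9.
for order in (1, 5, 9):
    worst = max(abs(taylor_sin(t / 10, order) - math.sin(t / 10))
                for t in range(-40, 41))
    print(order, worst)
```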

Definition: Taylor and Maclaurin Series

Let \(f\) be a function all of whose derivatives exist at \(x = a\). The *Taylor series* for \(f\) centered at \(x = a\) is the series \(T_f (x)\) defined by

\[T_f (x) = \sum_{k=0}^{\infty} \dfrac{f^{ (k)} (a)}{ k!} (x − a)^k . \label{TaylorDef} \]

In the special case where \(a = 0\) in Equation \ref{TaylorDef}, the Taylor series is also called the *Maclaurin series* for \(f\). From Example \(\PageIndex{1}\) we know the \(n\)th order Taylor polynomial centered at 0 for the exponential function \(e^x\); thus, the Maclaurin series for \(e^x\) is

\[\sum_{k=0}^{\infty} \dfrac{x^k}{k!} . \label{MacDef}\]

Activity \(\PageIndex{3}\)

In Activity \(\PageIndex{2}\) we determined small order Taylor polynomials for a few familiar functions, and also found general patterns in the derivatives evaluated at 0. Use that information to write the Taylor series centered at 0 for the following functions.

- \(f (x) = \frac{1}{1−x}\)
- \(f (x) = \cos(x)\) (You will need to carefully consider how to indicate that many of the coefficients are 0. Think about a general way to represent an even integer.)
- \(f (x) = \sin(x)\) (You will need to carefully consider how to indicate that many of the coefficients are 0. Think about a general way to represent an odd integer.)

The next activity further considers the important issue of the x-values for which the Taylor series of a function converges to the function itself.

Activity \(\PageIndex{4}\)

- Plot the graphs of several of the Taylor polynomials centered at 0 (of order at least 5) for \(e^x\) and convince yourself that these Taylor polynomials converge to \(e^x\) for every value of \(x\).
- Draw the graphs of several of the Taylor polynomials centered at 0 (of order at least 6) for \(\cos(x)\) and convince yourself that these Taylor polynomials converge to \(\cos(x)\) for every value of \(x\). Write the Taylor series centered at 0 for \(\cos(x)\).
- Draw the graphs of several of the Taylor polynomials centered at 0 for \(\frac{1}{1−x}\). Based on your graphs, for what values of \(x\) do these Taylor polynomials appear to converge to \(\frac{1}{1−x}\)? How is this situation different from what we observe with \(e^x\) and \(\cos(x)\)? In addition, write the Taylor series centered at 0 for \(\frac{1}{1−x}\).

The Maclaurin series for \(e^x\), \(\sin(x)\), \(\cos(x)\), and \(\frac{1}{1−x}\) will be used frequently, so we should be certain to know and recognize them well.

## The Interval of Convergence of a Taylor Series

Earlier in this section (in Figure \(\PageIndex{1}\) and Activity \(\PageIndex{4}\)) we observed that the Taylor polynomials centered at 0 for \(e^x\), \(\cos(x)\), and \(\sin(x)\) converged to these functions for all values of \(x\) in their domain, but that the Taylor polynomials centered at 0 for \(\frac{1}{1−x}\) converged to \(\frac{1}{1−x}\) for only some values of \(x\). In fact, the Taylor polynomials centered at 0 for \(\frac{1}{1−x}\) converge to \(\frac{1}{1−x}\) on the interval \((−1, 1)\) and diverge for all other values of \(x\). So the Taylor series for a function \(f(x)\) does not need to converge for all values of \(x\) in the domain of \(f\). Our observations to date suggest two natural questions: can we determine the values of \(x\) for which a given Taylor series converges? Moreover, given the Taylor series for a function \(f\), does it actually converge to \(f(x)\) for those values of \(x\) for which the Taylor series converges?

Example \(\PageIndex{2}\): The Ratio Test

Graphical evidence suggests that the Taylor series centered at 0 for \(e^x\) converges for all values of \(x\). To verify this, use the Ratio Test to determine all values of \(x\) for which the Taylor series

\[\sum_{k=0}^{\infty} \dfrac{x^k}{ k!} \tag{8.21}\label{8.21}\]

converges absolutely.

**Solution**

In previous work, we used the Ratio Test on series of numbers that did not involve a variable; recall, too, that the Ratio Test only applies to series of nonnegative terms. In this example, we have to address the presence of the variable \(x\). Because we are interested in absolute convergence, we apply the Ratio Test to the series

\[\sum_{k=0}^{\infty} \left| \dfrac{x^k}{k!} \right| = \sum_{k=0}^{\infty} \dfrac{|x|^k}{k!}. \]

Now, observe that

\[\lim_{k \rightarrow \infty} \dfrac{a_{k+1}}{a_k}= \lim_{k \rightarrow \infty} \dfrac{\dfrac{|x|^{k+1}}{(k+1)!}}{\dfrac{|x|^k}{k!}} \]

\[= \lim_{k \rightarrow \infty} \dfrac{|x|^{k+1} \, k!}{|x|^{k} \, (k+1)!} \]

\[= \lim_{k \rightarrow \infty} \dfrac{|x|}{k+1} \]

\[=0 \]

for any value of \(x\). So the Taylor series (Equation \(\ref{8.21}\)) converges absolutely for every value of \(x\), and thus converges for every value of \(x\).
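Numerically, the ratio computed above is simply \(|x|/(k+1)\), which eventually falls below 1 and heads to 0 no matter how large \(x\) is; a short Python illustration (variable names ours):

```python
# For fixed x, consecutive terms of sum x^k/k! differ by the factor |x|/(k+1),
# the quantity whose limit the Ratio Test examines. Even for x = 10 the
# ratios eventually shrink toward 0.

x = 10.0
ratios = [x / (k + 1) for k in range(30)]
print(ratios[0], ratios[9], ratios[29])  # 10.0, then 1.0, then 1/3
```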

One key question remains: while the Taylor series for \(e^x\) converges for all \(x\), what we have done does not tell us that this Taylor series actually converges to \(e^x\) for each \(x\). We’ll return to this question when we consider the error in a Taylor approximation near the end of this section.

We can apply the main idea from Example \(\PageIndex{2}\) in general. To determine the values of \(x\) for which a Taylor series

\[\sum_{k=0}^{\infty} c_k (x − a)^k \]

centered at \(x = a\) will converge, we apply the Ratio Test with \(a_k = |c_k (x − a)^k |\) and recall that the series to which the Ratio Test is applied converges if \(\lim_{k \rightarrow \infty} \dfrac{a_{k+1}}{a_k} < 1\).

Observe that

\[\dfrac{a_{k+1}}{a_k} = |x − a| \dfrac{|c_{k+1}|}{|c_k |} , \]

so when we apply the Ratio Test, we get that

\[\lim_{k \rightarrow \infty}\dfrac{a_{k+1}}{a_k} = \lim_{k \rightarrow \infty}|x-a|\dfrac{|c_{k+1}|}{|c_k|}. \]

Note further that \(c_k = \dfrac{f^{(k)}(a)}{k!}\), and say that

\[\lim_{k \rightarrow \infty} \dfrac{|c_{k+1}|}{|c_k|} = L. \]

Thus, we have found that

\[\lim_{k \rightarrow \infty} \dfrac{a_{k+1}}{a_k} = | x - a | L. \]

There are three important possibilities for \(L\): it can be 0, a finite positive value, or infinite. Based on this value of \(L\), we can therefore determine for which values of \(x\) the original Taylor series converges.

- If \(L = 0\), then the Taylor series converges on \((−\infty, \infty)\).
- If \(L\) is infinite, then the Taylor series converges only at \(x = a\).
- If \(L\) is finite and nonzero, then the Taylor series converges **absolutely** for all \(x\) that satisfy

\[|x − a| \cdot L < 1 . \]

In other words, the series converges absolutely for all \(x\) such that

\[|x − a| < \dfrac{1}{L} , \]

which is also the interval

\[ \left( a − \dfrac{1}{L} , a + \dfrac{1}{L} \right). \]

Because the Ratio Test is inconclusive when \(|x − a| \cdot L = 1\), the endpoints \(a ± \dfrac{1}{L}\) have to be checked separately. It is important to notice that the set of \(x\) values at which a Taylor series converges is always an interval centered at \(x = a\). For this reason, the set on which a Taylor series converges is called the *interval of convergence*. Half the length of the interval of convergence is called the *radius of convergence*. If the interval of convergence of a Taylor series is infinite, then we say that the radius of convergence is infinite.
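For a concrete instance of these ideas, take the geometric series \(\sum x^k\) (the Taylor series of \(\frac{1}{1−x}\) centered at 0): every \(c_k = 1\), so \(L = 1\) and the radius of convergence is \(1/L = 1\). A short Python sketch (the helper name `partial_sum` is ours) shows the partial sums settling down inside \((−1, 1)\) and blowing up outside:

```python
# Partial sums of sum x^k, the Taylor series of 1/(1 - x) at 0. Here the
# ratio |c_{k+1}|/|c_k| is 1, so L = 1 and the radius of convergence is 1.

def partial_sum(x, n):
    return sum(x ** k for k in range(n + 1))

inside = partial_sum(0.5, 50)    # near 1/(1 - 0.5) = 2
outside = partial_sum(1.5, 50)   # grows without bound
print(inside, outside)
```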

Activity \(\PageIndex{5}\): Using the Ratio Test

- Use the Ratio Test to explicitly determine the interval of convergence of the Taylor series for \(f (x) = \frac{1}{1−x}\) centered at \(x = 0\).
- Use the Ratio Test to explicitly determine the interval of convergence of the Taylor series for \(f (x) = \cos(x)\) centered at \(x = 0\).
- Use the Ratio Test to explicitly determine the interval of convergence of the Taylor series for \(f (x) = \sin(x)\) centered at \(x = 0\).

The Ratio Test tells us how we can determine the set of \(x\) values for which a Taylor series converges absolutely. However, just because a Taylor series for a function \(f\) converges, we cannot be certain that the Taylor series actually converges to \(f (x)\) on its interval of convergence. To show why and where a Taylor series does in fact converge to the function \(f\), we next consider the error that is present in Taylor polynomials.

## Error Approximations for Taylor Polynomials

We now know how to find Taylor polynomials for functions such as \(\sin(x)\), as well as how to determine the interval of convergence of the corresponding Taylor series. We next develop an error bound that will tell us how well an \(n\)th order Taylor polynomial \(P_n(x)\) approximates its generating function \(f(x)\). This error bound will also allow us to determine whether a Taylor series on its interval of convergence actually equals the function \(f\) from which the Taylor series is derived. Finally, we will be able to use the error bound to determine the order of the Taylor polynomial \(P_n(x)\) for a function \(f\) that we need to ensure that \(P_n(x)\) approximates \(f(x)\) to any desired degree of accuracy.

In all of this, we need to compare \(P_n(x)\) to \(f(x)\). For this argument, we assume throughout that we center our approximations at 0 (a similar argument holds for approximations centered at a). We define the exact error, \(E_n(x)\), that results from approximating \(f (x)\) with \(P_n(x)\) by

\[E_n(x) = f (x) − P_n(x). \]

We are particularly interested in \(|E_n(x)|\), the distance between \(P_n\) and \(f\). Note that since

\[P^{(k)} _n (0) = f^{(k)} (0) \]

for \(0 ≤ k ≤ n\), we know that

\[E^{(k)}_n (0) = 0 \]

for \(0 ≤ k ≤ n\). Furthermore, since \(P_n(x)\) is a polynomial of degree less than or equal to \(n\), we know that

\[P^{(n+1)}_n (x) = 0. \]

Thus, since

\[E^{(n+1)}_n (x) = f^{(n+1)} (x) − P^{(n+1)}_n (x), \]

it follows that

\[E^{(n+1)}_n (x) = f^{(n+1)} (x) \]

for all \(x\).

Suppose that we want to approximate \(f(x)\) at a number \(c\) close to 0 using \(P_n(c)\). If we assume \(|f^{(n+1)}(t)|\) is bounded by some number \(M\) on \([0, c]\), so that \(|f^{(n+1)}(t)| ≤ M\) for all \(0 ≤ t ≤ c\), then we can say that

\[ | E^{(n+1)}_n (t) | =| f^{(n+1)} (t)| ≤ M \]

for all \(t\) between 0 and \(c\). Equivalently,

\[− M ≤ E^{(n+1)}_n (t) ≤ M \tag{8.22}\label{8.22}\]

on \( [0, c]\). Next, we integrate the three terms in the inequality \(\ref{8.22}\) from \(t = 0\) to \(t = x\), and thus find that

\[ \int^x_0 −M \, dt ≤ \int^x_0 E^{(n+1)}_n (t) \, dt ≤ \int^x_0 M \, dt \]

for every value of \(x\) in \([0, c]\). Since \(E^{(n)}_n (0) = 0\), the First FTC tells us that

\[−M x ≤ E^{(n)}_n (x) ≤ M x \]

for every \(x\) in \([0, c]\). Integrating the most recent inequality, we obtain

\[\int^x_0 −Mt \, dt ≤ \int^x_0 E^{(n)}_n (t) \, dt ≤ \int^x_0 Mt \, dt \]

and thus

\[−M \dfrac{x^2}{2} ≤ E^{(n−1)}_n (x) ≤ M \dfrac{x^2}{2} \]

for all \(x\) in \([0, c]\). Continuing to integrate in this way, after a total of \(n + 1\) integrations we arrive at

\[−M \dfrac{x^{n+1}}{(n + 1)!} ≤ E_n(x) ≤ M \dfrac{x^{ n+1}}{(n + 1)!} \]

for all \(x\) in \([0, c]\). This enables us to conclude that

\[ |E_n(x)| ≤ M \dfrac{|x|^{n+1}}{(n + 1)!} \]

for all \(x\) in \([0, c]\), which shows an important bound on the approximation’s error, \(E_n\). Our work above was based on the approximation centered at \(a = 0\); the argument may be generalized to hold for any value of a, which results in the following theorem.

### The Lagrange Error Bound for \(P_n(x)\)

Let \(f\) be a continuous function with \(n + 1\) continuous derivatives. Suppose that \(M\) is a positive real number such that \(|f^{(n+1)}(x)| ≤ M\) on the interval \([a, c]\). If \(P_n(x)\) is the \(n\)th order Taylor polynomial for \(f(x)\) centered at \(x = a\), then

\[|P_n(c) − f (c)| ≤ M \dfrac{|c − a|^{n+1}}{ (n + 1)!}. \]

This error bound may now be used to tell us important information about Taylor polynomials and Taylor series, as we see in the following examples and activities.

Exercise \(\PageIndex{6}\):

Determine how well the 10th order Taylor polynomial \(P_{10}(x)\) for \(\sin(x)\), centered at 0, approximates \(\sin(2)\).

**Solution**

To answer this question we use \(f(x) = \sin(x)\), \(c = 2\), \(a = 0\), and \(n = 10\) in the Lagrange error bound formula. To use the bound, we also need to find an appropriate value for \(M\). Note that the derivatives of \(f(x) = \sin(x)\) are all equal to \(\pm \sin(x)\) or \(\pm \cos(x)\). Thus,

\(|f^{(n+1)}(x)| ≤ 1 \)

for any \(n\) and \(x\). Therefore, we can choose \(M\) to be 1. Then

\(|P_{10}(2) − f(2)| ≤ (1) \dfrac{|2 − 0|^{11}}{(11)!} = \dfrac{2^{11}}{(11)!} \approx 0.00005130671797.\)

So \(P_{10}(2)\) approximates \(\sin(2)\) to within at most 0.00005130671797. A computer algebra system tells us that

\(P_{10}(2) \approx 0.9093474427 \) and \(\sin(2) \approx 0.9092974268\)

with an actual difference of about 0.0000500159.
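These numbers are easy to reproduce. A Python sketch (the helper name `taylor_sin` is ours) recomputes both the Lagrange bound and the actual error:

```python
import math

def taylor_sin(x, order):
    """Maclaurin polynomial for sin(x); only odd powers contribute."""
    return sum((-1) ** (k // 2) * x ** k / math.factorial(k)
               for k in range(order + 1) if k % 2 == 1)

# Lagrange bound with M = 1, a = 0, c = 2, n = 10, versus the true error.
bound = 2 ** 11 / math.factorial(11)
actual = abs(taylor_sin(2.0, 10) - math.sin(2.0))
print(f"bound  ≈ {bound:.10f}")    # 0.0000513067
print(f"actual ≈ {actual:.10f}")   # 0.0000500159
assert actual <= bound
```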

Activity \(\PageIndex{7}\):

Let \(P_n(x)\) be the \(n\)th order Taylor polynomial for \(\sin(x)\) centered at \(x = 0\). Determine how large we need to choose \(n\) so that \(P_n(2)\) approximates \(\sin(2)\) to 20 decimal places.

Example \(\PageIndex{3}\):

Show that the Taylor series for \(\sin(x)\) actually converges to \(\sin(x)\) for all \(x\).

**Solution**

Recall from the previous example that since \(f (x) = \sin(x)\), we know

\(|f^{(n+1)} (x) |≤ 1\)

for any \(n\) and \(x\). This allows us to choose \(M = 1\) in the Lagrange error bound formula. Thus,

\[|P_n(x) − \sin(x)| ≤ \dfrac{|x| ^{n+1}}{ (n + 1)!} \tag{8.23} \label{8.23}\]

for every \(x\). We showed in earlier work that the Taylor series \(\sum_{k=0}^\infty \dfrac{x^k}{k!}\) converges for every value of \(x\). Since the terms of any convergent series must approach zero, it follows that

\(\lim_{n \rightarrow \infty}\dfrac{x^{n+1}}{(n+1)!} =0 \)

for every value of \(x\). Thus, taking the limit as \(n \rightarrow \infty \) in the inequality (\(\ref{8.23}\)), it follows that

\( \lim_{n \rightarrow \infty} | P_n (x) - \sin (x) |=0. \)

As a result, we can now write

\(\sin(x) = \sum_{n=0}^\infty \dfrac{(-1)^nx^{2n+1}}{(2n+1)!} \)

for every real number \(x\).

Activity \(\PageIndex{8}\):

- Show that the Taylor series centered at 0 for \(\cos(x)\) converges to \( \cos(x)\) for every real number \(x\).
- Next we consider the Taylor series for \(e^x\).
- Show that the Taylor series centered at 0 for \(e^x\) converges to \(e^x\) for every nonnegative value of \(x\).
- Show that the Taylor series centered at 0 for \(e^x\) converges to \(e^x\) for every negative value of \(x\).
- Explain why the Taylor series centered at 0 for \(e^x\) converges to \(e^x\) for every real number \(x\). Recall that we earlier showed that the Taylor series centered at 0 for \(e^x\) converges for all \(x\), and we have now completed the argument that the Taylor series for \(e^x\) actually converges to \(e^x\) for all \(x\).

- Let \(P_n(x)\) be the \(n\)th order Taylor polynomial for \(e^x\) centered at 0. Find a value of \(n\) so that \(P_n(5)\) approximates \(e^5\) correct to 8 decimal places.

## Summary

In this section, we encountered the following important ideas:

- We can use Taylor polynomials to approximate complicated functions. This allows us to approximate values of complicated functions using only addition, subtraction, multiplication, and division of real numbers. The \(n\)th order Taylor polynomial centered at \(x = a\) of a function \(f\) is

\(P_n(x) = f(a) + f'(a)(x − a) + \dfrac{f''(a)}{2!} (x − a)^2 + \ldots + \dfrac{f^{(n)} (a)}{n!} (x − a)^n = \sum_{k=0}^n \dfrac{f^{(k)} (a)}{k!} (x − a)^k .\)

- The Taylor series centered at \(x = a\) for a function \(f\) is

\(\sum_{k=0}^{\infty} \dfrac{f^{(k)} (a)}{k!} (x − a)^k \).

- The \(n\)th order Taylor polynomial centered at \(a\) for \(f\) is the \(n\)th partial sum of its Taylor series centered at \(a\). So the \(n\)th order Taylor polynomial for a function \(f\) is an approximation to \(f\) on the interval where the Taylor series converges; for the values of \(x\) for which the Taylor series converges to \(f\) we write

\(f(x) = \sum_{k=0}^{\infty} \dfrac{f^{(k)} (a)}{k!} (x − a)^k \).

- The Lagrange Error Bound shows us how to determine the accuracy in using a Taylor polynomial to approximate a function. More specifically, if \(P_n(x)\) is the \(n\)th order Taylor polynomial for \(f\) centered at \(x = a\) and if \(M\) is an upper bound for \(|f^{(n+1)}(x)|\) on the interval \([a, c]\), then

\(|P_n(c) − f (c)| ≤ M \dfrac{|c − a| ^{n+1}}{ (n + 1)!} \).

## Contributors and Attributions

Matt Boelkins (Grand Valley State University), David Austin (Grand Valley State University), Steve Schlicker (Grand Valley State University)