3.4: Approximating Functions Near a Specified Point — Taylor Polynomials
Suppose that you are interested in the values of some function \(f(x)\) for \(x\) near some fixed point \(a\text{.}\) When the function is a polynomial or a rational function we can use some arithmetic (and maybe some hard work) to write down the answer. For example:
\begin{align*} f(x) &= \frac{x^2-3}{x^2-2x+4}\\ f(1/5) &= \frac{ \frac{1}{25}-3}{\frac{1}{25}-\frac{2}{5}+4 } = \frac{\frac{1-75}{25} }{\frac{1-10+100}{25}}\\ &= -\frac{74}{91} \end{align*}
Tedious, but we can do it. On the other hand if you are asked to compute \(\sin(1/10)\) then what can we do? We know that a calculator can work it out
\begin{align*} \sin(1/10) &= 0.09983341\dots \end{align*}
but how does the calculator do this? How did people compute this before calculators? A hint comes from the following sketch of \(\sin(x)\) for \(x\) around \(0\text{.}\)
The above figure shows that the curves \(y=x\) and \(y=\sin x\) are almost the same when \(x\) is close to \(0\text{.}\) Hence if we want the value of \(\sin(1/10)\) we could just use this approximation \(y=x\) to get
\begin{gather*} \sin(1/10) \approx 1/10. \end{gather*}
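This back-of-envelope check is easy to reproduce; a minimal Python sketch (not part of the original text) comparing the approximation against the true value:

```python
import math

# Compare the small-angle approximation sin(x) ≈ x against the true value.
x = 0.1
approx = x                  # height of the line y = x at x = 0.1
exact = math.sin(x)         # 0.09983341...
error = abs(exact - approx)
print(approx, exact, error)
```

The error is under \(2\times 10^{-4}\), consistent with how closely the two curves hug each other in the sketch.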
Of course, in this case we simply observed that one function was a good approximation of the other. We need to know how to find such approximations more systematically.
More precisely, say we are given a function \(f(x)\) that we wish to approximate close to some point \(x=a\text{,}\) and we need to find another function \(F(x)\) that
 is simple and easy to compute
 is a good approximation to \(f(x)\) for \(x\) values close to \(a\text{.}\)
Further, we would like to understand how good our approximation actually is. Namely we need to be able to estimate the error \(f(x)-F(x)\text{.}\)
There are many different ways to approximate a function and we will discuss one family of approximations: Taylor polynomials. This is an infinite family of ever improving approximations, and our starting point is the very simplest.
Zeroth Approximation — the Constant Approximation
The simplest functions are those that are constants. And our zeroth approximation will be by a constant function. That is, the approximating function will have the form \(F(x)=A\text{,}\) for some constant \(A\text{.}\) Notice that this function is a polynomial of degree zero.
To ensure that \(F(x)\) is a good approximation for \(x\) close to \(a\text{,}\) we choose \(A\) so that \(f(x)\) and \(F(x)\) take exactly the same value when \(x=a\text{.}\)
\begin{gather*} F(x)=A\qquad\text{so}\qquad F(a)=A=f(a)\implies A=f(a) \end{gather*}
Our first, and crudest, approximation rule is
\begin{gather*} f(x)\approx f(a) \end{gather*}
An important point to note is that we need to know \(f(a)\) — if we cannot compute that easily then we are not going to be able to proceed. We will often have to choose \(a\) (the point around which we are approximating \(f(x)\)) with some care to ensure that we can compute \(f(a)\text{.}\)
Here is a figure showing the graphs of a typical \(f(x)\) and approximating function \(F(x)\text{.}\)
At \(x=a\text{,}\) \(f(x)\) and \(F(x)\) take the same value. For \(x\) very near \(a\text{,}\) the values of \(f(x)\) and \(F(x)\) remain close together. But the quality of the approximation deteriorates fairly quickly as \(x\) moves away from \(a\text{.}\) Clearly we could do better with a straight line that follows the slope of the curve. That is our next approximation.
But before then, an example:
Use the constant approximation to estimate \(e^{0.1}\text{.}\)
Solution First set \(f(x) = e^x\text{.}\)
 Now we first need to pick a point \(x=a\) to approximate the function. This point needs to be close to \(0.1\) and we need to be able to evaluate \(f(a)\) easily. The obvious choice is \(a=0\text{.}\)
 Then our constant approximation is just
\begin{align*} F(x) &= f(0) = e^0 = 1\\ F(0.1) &= 1 \end{align*}
Note that \(e^{0.1} = 1.105170918\dots\text{,}\) so even this approximation isn't too bad.
First Approximation — the Linear Approximation
Our first approximation improves on our zeroth approximation by allowing the approximating function to be a linear function of \(x\) rather than just a constant function. That is, we allow \(F(x)\) to be of the form \(A+Bx\text{,}\) for some constants \(A\) and \(B\text{.}\)
To ensure that \(F(x)\) is a good approximation for \(x\) close to \(a\text{,}\) we still require that \(f(x)\) and \(F(x)\) have the same value at \(x=a\) (that was our zeroth approximation). Our additional requirement is that their tangent lines at \(x=a\) have the same slope — that the derivatives of \(f(x)\) and \(F(x)\) are the same at \(x=a\text{.}\) Hence
\begin{align*} F(x)&=A+Bx & &\implies & F(a)=A+Ba&=f(a)\\ F'(x)&=B & &\implies & F'(a)=\phantom{A+a}B&=f'(a) \end{align*}
So we must have \(B=f'(a)\text{.}\) Substituting this into \(A+Ba=f(a)\) we get \(A=f(a)-af'(a)\text{.}\) So we can write
\begin{align*} F(x) &= A+Bx = \overbrace{f(a) - af'(a)}^A+ f'(a) \cdot x\\ &= f(a) + f'(a) \cdot(x-a) \end{align*}
We write it in this form because we can now clearly see that our first approximation is just an extension of our zeroth approximation. This first approximation is also often called the linear approximation of \(f(x)\) about \(x=a\text{.}\)
\begin{gather*} f(x) \approx f(a)+f'(a)(x-a) \end{gather*}
We should again stress that in order to form this approximation we need to know \(f(a)\) and \(f'(a)\) — if we cannot compute them easily then we are not going to be able to proceed.
Recall, from Theorem 2.3.4, that \(y=f(a)+f'(a)(xa)\) is exactly the equation of the tangent line to the curve \(y=f(x)\) at \(a\text{.}\) Here is a figure showing the graphs of a typical \(f(x)\) and the approximating function \(F(x)\text{.}\)
Observe that the graph of \(f(a)+f'(a)(xa)\) remains close to the graph of \(f(x)\) for a much larger range of \(x\) than did the graph of our constant approximation, \(f(a)\text{.}\) One can also see that we can improve this approximation if we can use a function that curves down rather than being perfectly straight. That is our next approximation.
But before then, back to our example:
Use the linear approximation to estimate \(e^{0.1}\text{.}\)
Solution First set \(f(x) = e^x\) and \(a=0\) as before.
 To form the linear approximation we need \(f(a)\) and \(f'(a)\text{:}\)
\begin{align*} f(x) &= e^x & f(0) & = 1\\ f'(x) &= e^x & f'(0) & = 1 \end{align*}
 Then our linear approximation is
\begin{align*} F(x) &= f(0) + x f'(0) = 1 + x\\ F(0.1) &= 1.1 \end{align*}
Recall that \(e^{0.1} = 1.105170918\dots\text{,}\) so the linear approximation is almost correct to 3 digits.
It is worth doing another simple example here.
Use a linear approximation to estimate \(\sqrt{4.1}\text{.}\)
Solution First set \(f(x)=\sqrt{x}\text{.}\) Hence \(f'(x) = \frac{1}{2\sqrt{x}}\text{.}\) Then we are trying to approximate \(f(4.1)\text{.}\) Now we need to choose a sensible \(a\) value.
 We need to choose \(a\) so that \(f(a)\) and \(f'(a)\) are easy to compute.
 We could try \(a=4.1\) — but then we need to compute \(f(4.1)\) and \(f'(4.1)\) — which is our original problem and more!
 We could try \(a=0\) — but then \(f(0)=0\) and \(f'(0)\) does not exist.
 Setting \(a=1\) gives us \(f(1)=1\) and \(f'(1)=\frac{1}{2}\text{.}\) This would work, but we can get a better approximation by choosing \(a\) closer to \(4.1\text{.}\)
 Indeed we can set \(a\) to be the square of any rational number and we'll get a result that is easy to compute.
 Setting \(a=4\) gives \(f(4)=2\) and \(f'(4) = \frac{1}{4}\text{.}\) This seems good enough.
 Substitute this into equation 3.4.3 to get
\begin{align*} f(4.1) &\approx f(4) + f'(4) \cdot(4.1-4)\\ &= 2 + \frac{0.1}{4} = 2 + 0.025 = 2.025 \end{align*}
Notice that the true value is \(\sqrt{4.1} = 2.024845673\dots\text{.}\)
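As a sanity check, the linear approximation about \(a=4\) can be evaluated mechanically; a small Python sketch (the function name is ours, not the text's):

```python
import math

# Linear approximation f(x) ≈ f(a) + f'(a)(x - a) for f(x) = sqrt(x), a = 4.
def sqrt_linear(x, a=4.0):
    fa = math.sqrt(a)                 # f(4) = 2
    dfa = 1.0 / (2.0 * math.sqrt(a))  # f'(4) = 1/4
    return fa + dfa * (x - a)

approx = sqrt_linear(4.1)   # 2.025, as computed above
exact = math.sqrt(4.1)      # 2.024845673...
```

The approximation and true value agree to within about \(2\times 10^{-4}\).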
Second Approximation — the Quadratic Approximation
We next develop a still better approximation by now allowing the approximating function to be a quadratic function of \(x\text{.}\) That is, we allow \(F(x)\) to be of the form \(A+Bx+Cx^2\text{,}\) for some constants \(A\text{,}\) \(B\) and \(C\text{.}\) To ensure that \(F(x)\) is a good approximation for \(x\) close to \(a\text{,}\) we choose \(A\text{,}\) \(B\) and \(C\) so that
 \(f(a)=F(a)\) (just as in our zeroth approximation),
 \(f'(a)=F'(a)\) (just as in our first approximation), and
 \(f''(a)=F''(a)\) — this is a new condition.
These conditions give us the following equations
\begin{align*} F(x)&=A+Bx+Cx^2 & &\implies & F(a)=A+Ba+\phantom{2}Ca^2&=f(a)\\ F'(x)&=B+2Cx & &\implies & F'(a)=\phantom{A+a}B+2Ca&=f'(a)\\ F''(x)&=2C & &\implies & F''(a)=\phantom{A+aB+a}2C&=f''(a) \end{align*}
Solve these for \(C\) first, then \(B\) and finally \(A\text{.}\)
\begin{align*} C &=\frac{1}{2} f''(a) & \text{substitute}\\ B &= f'(a) - 2Ca = f'(a)-af''(a) & \text{substitute again}\\ A &= f(a)-Ba-Ca^2 = f(a)-a[f'(a)-af''(a)]-\frac{1}{2} f''(a)a^2 \end{align*}
Then put things back together to build up \(F(x)\text{:}\)
\begin{align*} F(x)&=f(a)-f'(a)a+\frac{1}{2} f''(a)a^2 & &\text{(this line is $A$)}\\ &\phantom{=f(a)\hskip3pt}+f'(a)\,x\hskip3pt- f''(a)ax & & \text{(this line is $Bx$)}\\ &\phantom{=f(a)-f'(a)a\hskip3.5pt}+\frac{1}{2} f''(a)x^2 & &\text{(this line is $Cx^2$)}\\ &=f(a)+f'(a)(x-a)+\frac{1}{2} f''(a)(x-a)^2 \end{align*}
Oof! We again write it in this form because we can now clearly see that our second approximation is just an extension of our first approximation.
Our second approximation is called the quadratic approximation:
\begin{gather*} f(x)\approx f(a)+f'(a)(x-a)+\frac{1}{2} f''(a)(x-a)^2 \end{gather*}
Here is a figure showing the graphs of a typical \(f(x)\) and approximating function \(F(x)\text{.}\)
This new approximation looks better than both the zeroth and first approximations.
There is actually an easier way to derive this approximation, which we show you now. Let us rewrite
\(F(x)\) so that it is easy to evaluate it and its derivatives at \(x=a\text{:}\)
\begin{align*} F(x) &= \alpha + \beta\cdot (x-a) + \gamma \cdot(x-a)^2 \end{align*}
Then
\begin{align*} F(x) &= \alpha + \beta\cdot (x-a) + \gamma \cdot(x-a)^2 & F(a) &= \alpha = f(a)\\ F'(x) &= \beta + 2\gamma \cdot(x-a) & F'(a)&=\beta = f'(a)\\ F''(x) &= 2\gamma & F''(a) &= 2\gamma = f''(a) \end{align*}
And from these we can clearly read off the values of \(\alpha,\beta\) and \(\gamma\) and so recover our function \(F(x)\text{.}\) Additionally if we write things this way, then it is quite clear how to extend this to a cubic approximation and a quartic approximation and so on.
Return to our example:
Use the quadratic approximation to estimate \(e^{0.1}\text{.}\)
Solution Set \(f(x) = e^x\) and \(a=0\) as before.
 To form the quadratic approximation we need \(f(a), f'(a)\) and \(f''(a)\text{:}\)
\begin{align*} f(x) &= e^x & f(0) & = 1\\ f'(x) &= e^x & f'(0) & = 1\\ f''(x) &= e^x & f''(0) & = 1 \end{align*}
 Then our quadratic approximation is
\begin{align*} F(x) &= f(0) + x f'(0) + \frac{1}{2} x^2 f''(0) = 1 + x + \frac{x^2}{2}\\ F(0.1) &= 1.105 \end{align*}
Recall that \(e^{0.1} = 1.105170918\dots\text{,}\) so the quadratic approximation is quite accurate with very little effort.
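The three estimates of \(e^{0.1}\) obtained so far can be compared side by side; a minimal Python sketch:

```python
import math

# Constant, linear and quadratic approximations of e^x about a = 0,
# evaluated at x = 0.1 (the running example).
x = 0.1
T0 = 1.0                     # constant approximation
T1 = 1.0 + x                 # linear approximation
T2 = 1.0 + x + x**2 / 2.0    # quadratic approximation
exact = math.exp(x)          # 1.105170918...
errors = [abs(exact - T) for T in (T0, T1, T2)]
```

Each successive approximation shrinks the error, and the quadratic one is already within \(2\times 10^{-4}\).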
Before we go on, let us first introduce (or revise) some notation that will make our discussion easier.
Whirlwind Tour of Summation Notation
In the remainder of this section we will frequently need to write sums involving a large number of terms. Writing out the summands explicitly can become quite impractical — for example, say we need the sum of the first 11 squares:
\begin{gather*} 1 + 2^2 + 3^2 + 4^2+ 5^2 + 6^2 + 7^2 + 8^2 + 9^2 + 10^2 + 11^2 \end{gather*}
This becomes tedious. Where the pattern is clear, we will often skip the middle few terms and instead write
\begin{gather*} 1 + 2^2 + \cdots + 11^2. \end{gather*}
A far more precise way to write this is using \(\Sigma\) (capital-sigma) notation. For example, we can write the above sum as
\begin{gather*} \sum_{k=1}^{11} k^2 \end{gather*}
This is read as
The sum from \(k\) equals 1 to 11 of \(k^2\text{.}\)
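Sigma notation translates directly into code; for instance, in Python:

```python
# The sum from k = 1 to 11 of k squared; note that range's end is exclusive.
total = sum(k**2 for k in range(1, 12))
print(total)  # 506
```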
More generally
Let \(m\leq n\) be integers and let \(f(x)\) be a function defined on the integers. Then we write
\begin{gather*} \sum_{k=m}^n f(k) \end{gather*}
to mean the sum of \(f(k)\) for \(k\) from \(m\) to \(n\text{:}\)
\begin{gather*} f(m) + f(m+1) + f(m+2) + \cdots + f(n-1) + f(n). \end{gather*}
Similarly we write
\begin{gather*} \sum_{i=m}^n a_i \end{gather*}
to mean
\begin{gather*} a_m+a_{m+1}+a_{m+2}+\cdots+a_{n-1}+a_n \end{gather*}
for some set of coefficients \(\{ a_m, \ldots, a_n \}\text{.}\)
Consider the example
\begin{gather*} \sum_{k=3}^7 \frac{1}{k^2}=\frac{1}{3^2}+\frac{1}{4^2}+\frac{1}{5^2}+ \frac{1}{6^2}+\frac{1}{7^2} \end{gather*}
It is important to note that the right hand side of this expression evaluates to a number; it does not contain “\(k\)”. The summation index \(k\) is just a “dummy” variable and it does not have to be called \(k\text{.}\) For example
\begin{gather*} \sum_{k=3}^7 \frac{1}{k^2} =\sum_{i=3}^7 \frac{1}{i^2} =\sum_{j=3}^7 \frac{1}{j^2} =\sum_{\ell=3}^7 \frac{1}{\ell^2} \end{gather*}
Also the summation index has no meaning outside the sum. For example
\begin{gather*} k\sum_{k=3}^7 \frac{1}{k^2} \end{gather*}
has no mathematical meaning; it is gibberish.
Still Better Approximations — Taylor Polynomials
We can use the same strategy to generate still better approximations by polynomials of any degree we like. As was the case with the approximations above, we determine the coefficients of the polynomial by requiring that, at the point \(x=a\text{,}\) the approximation and its first \(n\) derivatives agree with those of the original function.
Rather than simply moving to a cubic polynomial, let us try to write things in a more general way. We will consider approximating the function \(f(x)\) using a polynomial, \(T_n(x)\text{,}\) of degree \(n\) — where \(n\) is a nonnegative integer. As we discussed above, the algebra is easier if we write
\begin{align*} T_n(x) &= c_0 + c_1(x-a) + c_2 (x-a)^2 + \cdots + c_n (x-a)^n\\ &= \sum_{k=0}^n c_k (x-a)^k & \text{using } \Sigma \text{ notation} \end{align*}
The above form makes it very easy to evaluate this polynomial and its derivatives at \(x=a\text{.}\) Before we proceed, we remind the reader of some notation (see Notation 2.2.8):
 Let \(f(x)\) be a function and \(k\) be a positive integer. We can denote its \(k^\mathrm{th}\) derivative with respect to \(x\) by
\begin{align*} \frac{\mathrm{d} ^{k}f}{\mathrm{d} x^{k}} && \left( \dfrac{d}{dx}\right)^k f(x) && f^{(k)}(x) \end{align*}
Additionally we will need
Let \(n\) be a positive integer; then \(n\)-factorial, denoted \(n!\text{,}\) is the product
\begin{align*} n! &= n \times (n-1) \times \cdots \times 3 \times 2 \times 1 \end{align*}
Further, we use the convention that
\begin{align*} 0! &= 1 \end{align*}
The first few factorials are
\begin{align*} 1! &=1 & 2! &=2 & 3! &=6\\ 4! &=24 & 5! &=120 & 6! &=720 \end{align*}
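These values are easy to verify; Python's standard library provides `math.factorial`, which also honours the convention \(0!=1\):

```python
import math

# Check the convention 0! = 1 and the table of small factorials above.
zero_fact = math.factorial(0)                     # 1, by convention
small = [math.factorial(n) for n in range(1, 7)]  # 1!, 2!, ..., 6!
print(zero_fact, small)
```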
Now consider \(T_n(x)\) and its derivatives:
\begin{alignat*}{4} T_n(x) &=& c_0 &+ c_1(x-a) & + c_2 (x-a)^2 & + c_3(x-a)^3 &+ \cdots+ & c_n (x-a)^n\\ T_n'(x) &=& &c_1 & + 2 c_2 (x-a) & + 3c_3(x-a)^2 &+ \cdots +& n c_n (x-a)^{n-1}\\ T_n''(x) &=& & & 2 c_2 & + 6c_3(x-a) &+ \cdots +& n(n-1) c_n (x-a)^{n-2}\\ T_n'''(x) &=& & & & 6c_3 &+ \cdots + & n(n-1)(n-2) c_n (x-a)^{n-3}\\ & \vdots\\ T_n^{(n)}(x) &=& & & & & & n! \cdot c_n \end{alignat*}
Now notice that when we substitute \(x=a\) into the above expressions only the constant terms survive and we get
\begin{align*} T_n(a) &= c_0\\ T_n'(a) &= c_1\\ T_n''(a) &= 2\cdot c_2\\ T_n'''(a) &= 6 \cdot c_3\\ &\vdots\\ T_n^{(n)}(a) &= n! \cdot c_n \end{align*}
So now if we want to set the coefficients of \(T_n(x)\) so that it agrees with \(f(x)\) at \(x=a\) then we need
\begin{align*} T_n(a) &= c_0 = f(a) & c_0 &= f(a) = \frac{1}{0!} f(a)\\ \end{align*}
We also want the first \(n\) derivatives of \(T_n(x)\) to agree with the derivatives of \(f(x)\) at \(x=a\text{,}\) so
\begin{align*} T_n'(a) &= c_1 = f'(a) & c_1 &= f'(a) = \frac{1}{1!} f'(a)\\ T_n''(a) &= 2\cdot c_2 = f''(a) & c_2 &= \frac{1}{2} f''(a) = \frac{1}{2!}f''(a)\\ T_n'''(a) &= 6\cdot c_3 = f'''(a) & c_3 &= \frac{1}{6} f'''(a) = \frac{1}{3!} f'''(a)\\ \end{align*}
More generally, making the \(k^\mathrm{th}\) derivatives agree at \(x=a\) requires :
\begin{align*} T_n^{(k)}(a) &= k!\cdot c_k = f^{(k)}(a) & c_k &= \frac{1}{k!} f^{(k)}(a)\\ \end{align*}
And finally the \(n^\mathrm{th}\) derivative:
\begin{align*} T_n^{(n)}(a) &= n!\cdot c_n = f^{(n)}(a) & c_n &= \frac{1}{n!} f^{(n)}(a) \end{align*}
Putting this all together we have
\begin{align*} f(x) \approx T_n(x) &= f(a) + f'(a) (x-a) + \frac{1}{2} f''(a) \cdot(x-a)^2 + \cdots \\ &\hskip2in+ \frac{1}{n!}f^{(n)}(a) \cdot (x-a)^n\\ &= \sum_{k=0}^n \frac{1}{k!} f^{(k)}(a) \cdot (x-a)^k \end{align*}
Let us formalise this definition.
Let \(a\) be a constant and let \(n\) be a nonnegative integer. The \(n^\mathrm{th}\) degree Taylor polynomial for \(f(x)\) about \(x=a\) is
\begin{align*} T_n(x) &= \sum_{k=0}^n \frac{1}{k!} f^{(k)}(a) \cdot (x-a)^k. \end{align*}
The special case \(a=0\) is called a Maclaurin polynomial.
Before we proceed with some examples, a couple of remarks are in order.
 While we can compute a Taylor polynomial about any \(a\)value (providing the derivatives exist), in order to be a useful approximation, we must be able to compute \(f(a),f'(a),\cdots,f^{(n)}(a)\) easily. This means we must choose the point \(a\) with care. Indeed for many functions the choice \(a=0\) is very natural — hence the prominence of Maclaurin polynomials.
 If we have computed the approximation \(T_n(x)\text{,}\) then we can readily extend this to the next Taylor polynomial \(T_{n+1}(x)\) since
\begin{align*} T_{n+1}(x) &= T_n(x) + \frac{1}{(n+1)!} f^{(n+1)}(a) \cdot (x-a)^{n+1} \end{align*}
This is very useful if we discover that \(T_n(x)\) is an insufficient approximation, because then we can produce \(T_{n+1}(x)\) without having to start again from scratch.
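The definition can be turned into a short routine once the derivative values \(f(a), f'(a),\dots,f^{(n)}(a)\) are known; here is a hedged Python sketch (the function and argument names are ours, not the text's):

```python
import math

# T_n(x) = sum_{k=0}^{n} f^{(k)}(a)/k! * (x - a)^k,
# where derivs[k] is assumed to hold the value f^{(k)}(a).
def taylor_poly(derivs, a, x):
    return sum(d / math.factorial(k) * (x - a)**k
               for k, d in enumerate(derivs))

# For f(x) = e^x about a = 0 every derivative equals 1, so T_3(0.1) is:
t3 = taylor_poly([1.0, 1.0, 1.0, 1.0], 0.0, 0.1)
```

Appending one more entry to `derivs` produces \(T_{n+1}\), mirroring the remark above about extending \(T_n\) without starting from scratch.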
Some Examples
Let us return to our running example of \(e^x\text{:}\)
The constant, linear and quadratic approximations we used above were the first few Maclaurin polynomial approximations of \(e^x\text{.}\) That is
\begin{align*} T_0 (x) & = 1 & T_1(x) &= 1+x & T_2(x) &= 1+x+\frac{x^2}{2} \end{align*}
Since \(\dfrac{d}{dx} e^x = e^x\text{,}\) the Maclaurin polynomials are very easy to compute. Indeed this invariance under differentiation means that
\begin{align*} f^{(n)}(x) &= e^x & n=0,1,2,\dots && \text{so}\\ f^{(n)}(0) &= 1 \end{align*}
Substituting this into equation 3.4.10 we get
\begin{align*} T_n(x) &= \sum_{k=0}^n \frac{1}{k!} x^k \end{align*}
Thus we can write down the seventh Maclaurin polynomial very easily:
\begin{align*} T_7(x) &= 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \frac{x^5}{120} + \frac{x^6}{720} + \frac{x^7}{5040} \end{align*}
The following figure contains sketches of the graphs of \(e^x\) and its Taylor polynomials \(T_n(x)\) for \(n=0,1,2,3,4\text{.}\)
Also notice that if we use \(T_7(1)\) to approximate the value of \(e^1\) we obtain:
\begin{align*} e^1 \approx T_7(1) &= 1 + 1 + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} + \frac{1}{120} + \frac{1}{720} + \frac{1}{5040}\\ &= \frac{685}{252} = 2.718253968\dots \end{align*}
The true value of \(e\) is \(2.718281828\dots\text{,}\) so the approximation has an error of about \(3\times10^{-5}\text{.}\)
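The arithmetic above can be checked exactly using rational arithmetic; a Python sketch:

```python
import math
from fractions import Fraction

# T_7(1) = sum_{k=0}^{7} 1/k!, evaluated exactly as a fraction.
T7 = sum(Fraction(1, math.factorial(k)) for k in range(8))
error = abs(math.e - float(T7))   # roughly 3e-5, as stated above
print(T7, error)
```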
Under the assumption that the accuracy of the approximation improves with \(n\) (an assumption we examine in Subsection 3.4.9 below) we can see that the approximation of \(e\) above can be improved by adding more and more terms. Indeed this is how the expression for \(e\) in equation 2.7.4 in Section 2.7 comes about.
Now that we have examined Maclaurin polynomials for \(e^x\) we should take a look at \(\log x\text{.}\) Notice that we cannot compute a Maclaurin polynomial for \(\log x\) since it is not defined at \(x=0\text{.}\)
Compute the \(5^\mathrm{th}\) Taylor polynomial for \(\log x\) about \(x=1\text{.}\)
Solution We have been told \(a=1\) and fifth degree, so we should start by writing down the function and its first five derivatives:
\begin{align*} f(x) &= \log x & f(1) &= \log 1 = 0\\ f'(x) &= \frac{1}{x} & f'(1) &= 1\\ f''(x) &= -\frac{1}{x^2} & f''(1) &= -1\\ f'''(x) &= \frac{2}{x^3} & f'''(1) &= 2\\ f^{(4)}(x) &= -\frac{6}{x^4} & f^{(4)}(1) &= -6\\ f^{(5)}(x) &= \frac{24}{x^5} & f^{(5)}(1) &= 24 \end{align*}
Substituting this into equation 3.4.10 gives
\begin{align*} T_5(x)&= 0 + 1\cdot (x-1) + \frac{1}{2} \cdot (-1) \cdot (x-1)^2 + \frac{1}{6} \cdot 2 \cdot (x-1)^3\\ &\hskip0.5in+ \frac{1}{24} \cdot (-6) \cdot (x-1)^4 + \frac{1}{120} \cdot 24 \cdot (x-1)^5\\ &= (x-1) - \frac{1}{2}(x-1)^2 + \frac{1}{3}(x-1)^3 - \frac{1}{4}(x-1)^4 + \frac{1}{5}(x-1)^5 \end{align*}
Again, it is not too hard to generalise the above work to find the Taylor polynomial of degree \(n\text{:}\) With a little work one can show that
\begin{align*} T_n(x) &= \sum_{k=1}^n \frac{(-1)^{k+1}}{k} (x-1)^k. \end{align*}
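This general formula is easy to evaluate numerically; a short Python sketch comparing \(T_5\) with the true logarithm near \(x=1\):

```python
import math

# T_n(x) = sum_{k=1}^{n} (-1)^(k+1)/k * (x-1)^k for log x about a = 1.
def log_taylor(x, n):
    return sum((-1)**(k + 1) / k * (x - 1)**k for k in range(1, n + 1))

approx = log_taylor(1.1, 5)
exact = math.log(1.1)        # 0.0953101...
```

At \(x=1.1\) the fifth degree polynomial already agrees with \(\log x\) to better than six decimal places.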
For cosine:
Find the 4th degree Maclaurin polynomial for \(\cos x\text{.}\)
Solution We have \(a=0\) and we need to find the first 4 derivatives of \(\cos x\text{.}\)
\begin{align*} f(x) &= \cos x & f(0) &= 1\\ f'(x) &= -\sin x & f'(0) &= 0\\ f''(x) &= -\cos x & f''(0) &= -1\\ f'''(x) &= \sin x & f'''(0) &= 0\\ f^{(4)}(x) &= \cos x & f^{(4)}(0) &= 1 \end{align*}
Substituting this into equation 3.4.10 gives
\begin{align*} T_4(x)&= 1 + 0 \cdot x + \frac{1}{2} \cdot (-1) \cdot x^2 + \frac{1}{6} \cdot 0 \cdot x^3 + \frac{1}{24} \cdot 1 \cdot x^4\\ &= 1 - \frac{x^2}{2} + \frac{x^4}{24} \end{align*}
Notice that since the \(4^\mathrm{th}\) derivative of \(\cos x\) is \(\cos x\) again, we also have that the fifth derivative is the same as the first derivative, and the sixth derivative is the same as the second derivative and so on. Hence the next four derivatives are
\begin{align*} f^{(4)}(x) &= \cos x & f^{(4)}(0) &= 1\\ f^{(5)}(x) &= -\sin x & f^{(5)}(0) &= 0\\ f^{(6)}(x) &= -\cos x & f^{(6)}(0) &= -1\\ f^{(7)}(x) &= \sin x & f^{(7)}(0) &= 0\\ f^{(8)}(x) &= \cos x & f^{(8)}(0) &= 1 \end{align*}
Using this we can find the \(8^\mathrm{th}\) degree Maclaurin polynomial:
\begin{align*} T_8(x) &= 1 - \frac{x^2}{2} + \frac{x^4}{24} - \frac{x^6}{6!} + \frac{x^8}{8!} \end{align*}
Continuing this process gives us the \(2n^\mathrm{th}\) Maclaurin polynomial
\begin{align*} T_{2n}(x) &= \sum_{k=0}^n \frac{(-1)^k}{(2k)!} \cdot x^{2k} \end{align*}
The above formula only works when \(x\) is measured in radians, because all of our derivative formulae for trig functions were developed under the assumption that angles are measured in radians.
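The \(2n^\mathrm{th}\) Maclaurin polynomial for cosine is equally direct to code (remembering to work in radians); a Python sketch:

```python
import math

# T_{2n}(x) = sum_{k=0}^{n} (-1)^k x^(2k) / (2k)! for cos x, x in radians.
def cos_taylor(x, n):
    return sum((-1)**k * x**(2 * k) / math.factorial(2 * k)
               for k in range(n + 1))

approx = cos_taylor(1.0, 4)   # T_8 evaluated at 1 radian
exact = math.cos(1.0)         # 0.5403023...
```

Even at a full radian the degree 8 polynomial matches cosine to better than six decimal places.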
Below we plot \(\cos x\) against its first few Maclaurin polynomial approximations:
The above work is quite easily recycled to get the Maclaurin polynomial for sine:
Find the 5th degree Maclaurin polynomial for \(\sin x\text{.}\)
Solution We could simply work as before and compute the first five derivatives of \(\sin x\text{.}\) But instead set \(g(x) = \sin x\) and notice that \(g(x) = -f'(x)\text{,}\) where \(f(x) =\cos x\text{.}\) Then we have
\begin{align*} g(0) &= -f'(0) = 0\\ g'(0) &= -f''(0) = 1\\ g''(0) &= -f'''(0) = 0\\ g'''(0) &= -f^{(4)}(0) = -1\\ g^{(4)}(0) &= -f^{(5)}(0) = 0\\ g^{(5)}(0) &= -f^{(6)}(0) = 1 \end{align*}
Hence the required Maclaurin polynomial is
\begin{align*} T_5(x) &= x - \frac{x^3}{3!} + \frac{x^5}{5!} \end{align*}
Just as we extended to the \(2n^\mathrm{th}\) Maclaurin polynomial for cosine, we can also extend our work to compute the \((2n+1)^\mathrm{th}\) Maclaurin polynomial for sine:
\begin{align*} T_{2n+1}(x) &= \sum_{k=0}^n \frac{(-1)^k}{(2k+1)!} \cdot x^{2k+1} \end{align*}
The above formula only works when \(x\) is measured in radians, because all of our derivative formulae for trig functions were developed under the assumption that angles are measured in radians.
Below we plot \(\sin x\) against its first few Maclaurin polynomial approximations.
To get an idea of how good these Taylor polynomials are at approximating \(\sin\) and \(\cos\text{,}\) let's concentrate on \(\sin x\) and consider \(x\)'s whose magnitude \(|x|\le 1\text{.}\) There are tricks that you can employ to evaluate sine and cosine at values of \(x\) outside this range.
If \(|x|\le 1\) radians then the magnitudes of the successive terms in the Taylor polynomials for \(\sin x\) are bounded by
\begin{alignat*}{3} |x|&\le 1 & \tfrac{1}{3!}|x|^3&\le\tfrac{1}{6} & \tfrac{1}{5!}|x|^5&\le\tfrac{1}{120}\approx 0.0083\\ \tfrac{1}{7!}|x|^7&\le\tfrac{1}{7!}\approx 0.0002\quad & \tfrac{1}{9!}|x|^9&\le\tfrac{1}{9!}\approx 0.000003\quad & \tfrac{1}{11!}|x|^{11}&\le\tfrac{1}{11!}\approx 0.000000025 \end{alignat*}
From these inequalities, and the graphs on the previous pages, it certainly looks like, for \(x\) not too large, even relatively low degree Taylor polynomials give very good approximations. In Section 3.4.9 we'll see how to get rigorous error bounds on our Taylor polynomial approximations.
Estimating Change and \(\Delta x\text{,}\) \(\Delta y\) Notation
Suppose that we have two variables \(x\) and \(y\) that are related by \(y=f(x)\text{,}\) for some function \(f\text{.}\) One of the most important applications of calculus is to help us understand what happens to \(y\) when we make a small change in \(x\text{.}\)
Let \(x,y\) be variables related by a function \(f\text{.}\) That is \(y = f(x)\text{.}\) Then we denote a small change in the variable \(x\) by \(\Delta x\) (read as “delta \(x\)”). The corresponding small change in the variable \(y\) is denoted \(\Delta y\) (read as “delta \(y\)”).
\begin{align*} \Delta y &= f(x+\Delta x)  f(x) \end{align*}
In many situations we do not need to compute \(\Delta y\) exactly and are instead happy with an approximation. Consider the following example.
Let \(x\) be the number of cars manufactured per week in some factory and let \(y\) be the cost of manufacturing those \(x\) cars. Given that the factory currently produces \(a\) cars per week, we would like to estimate the increase in cost if we make a small change in the number of cars produced.
Solution We are told that \(a\) is the number of cars currently produced per week; the cost of production is then \(f(a)\text{.}\)
 Say the number of cars produced is changed from \(a\) to \(a+\Delta x\) (where \(\Delta x\) is some small number).
 As \(x\) undergoes this change, the costs change from \(y=f(a)\) to \(f(a+\Delta x)\text{.}\) Hence
\begin{align*} \Delta y &= f(a+\Delta x)  f(a) \end{align*}
 We can estimate this change using a linear approximation. Substituting \(x=a+\Delta x\) into the equation 3.4.3 yields the approximation
\begin{gather*} f(a+\Delta x)\approx f(a)+f'(a)(a+\Delta x-a) \end{gather*}
and consequently the approximation
\begin{gather*} \Delta y=f(a+\Delta x)-f(a)\approx f(a)+f'(a)\Delta x-f(a) \end{gather*}
simplifies to the following neat estimate of \(\Delta y\text{:}\)
\begin{gather*} \Delta y\approx f'(a)\Delta x \end{gather*}
 In the automobile manufacturing example, when the production level is \(a\) cars per week, increasing the production level by \(\Delta x\) will cost approximately \(f'(a)\Delta x\text{.}\) The additional cost per additional car, \(f'(a)\text{,}\) is called the “marginal cost” of a car.
 If we instead use the quadratic approximation (given by equation 3.4.6) then we estimate
\begin{gather*} f(a+\Delta x)\approx f(a)+f'(a)\Delta x+\frac{1}{2} f''(a)\Delta x^2 \end{gather*}
and so
\begin{align*} \Delta y&=f(a+\Delta x)-f(a) \approx f(a)+f'(a)\Delta x +\frac{1}{2} f''(a)\Delta x^2-f(a) \end{align*}
which simplifies to
\begin{align*} \Delta y &\approx f'(a)\Delta x+\frac{1}{2} f''(a)\Delta x^2 \end{align*}
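To see the two estimates side by side, here is a hedged numerical sketch using a made-up cost function \(f(x)=0.01x^2+50x\) (this function is ours, purely for illustration, not from the text):

```python
# Hypothetical cost function: f(x) = 0.01 x^2 + 50 x (illustration only).
def f(x):
    return 0.01 * x**2 + 50.0 * x

a, dx = 100.0, 5.0                 # current production and its change
exact_dy = f(a + dx) - f(a)        # true change in cost
df_a = 0.02 * a + 50.0             # f'(a), the marginal cost
d2f_a = 0.02                       # f''(a)
linear_dy = df_a * dx              # Δy ≈ f'(a) Δx
quad_dy = df_a * dx + 0.5 * d2f_a * dx**2
```

Since this particular \(f\) is itself quadratic, the quadratic estimate recovers \(\Delta y\) exactly, while the linear estimate is off by \(\frac{1}{2}f''(a)\Delta x^2\).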
Further Examples
In this subsection we give further examples of computation and use of Taylor approximations.
Estimate \(\tan 46^\circ\text{,}\) using the constant, linear and quadratic approximations (equations 3.4.1, 3.4.3 and 3.4.6).
Solution Note that we need to be careful to translate angles measured in degrees to radians.
 Set \(f(x)=\tan x\text{,}\) \(x=46\tfrac{\pi}{180}\) radians and \(a=45\tfrac{\pi}{180}=\tfrac{\pi}{4}\) radians. This is a good choice for \(a\) because
 \(a=45^\circ\) is close to \(x=46^\circ\text{.}\) As noted above, it is generally the case that the closer \(x\) is to \(a\text{,}\) the better various approximations will be.
 We know the values of all trig functions at \(45^\circ\text{.}\)
 Now we need to compute \(f\) and its first two derivatives at \(x=a\text{.}\) It is a good time to recall the special \(1:1:\sqrt{2}\) triangle
So
\begin{align*} f(x) &= \tan x & f(\pi/4) &= 1\\ f'(x) &= \sec^2 x = \frac{1}{\cos^2 x} & f'(\pi/4) &= \frac{1}{(1/\sqrt{2})^2} = 2\\ f''(x) &= \frac{2\sin x}{\cos^3 x} & f''(\pi/4) &= \frac{2(1/\sqrt{2})}{(1/\sqrt{2})^3} = 4 \end{align*}
 As \(x-a=46\tfrac{\pi}{180}-45\tfrac{\pi}{180}=\tfrac{\pi}{180}\) radians, the three approximations are
\begin{alignat*}{2} f(x)&\approx f(a) & &=1\\ f(x)&\approx f(a)+f'(a)(x-a) & &=1+2\cdot\tfrac{\pi}{180} =1.034907\\ f(x)&\approx f(a)+f'(a)(x-a)+\frac{1}{2} f''(a)(x-a)^2\quad & &=1+2\cdot\tfrac{\pi}{180}+\frac{1}{2}\cdot 4\big(\tfrac{\pi}{180}\big)^2 =1.035516 \end{alignat*}
For comparison purposes, \(\tan 46^\circ\) really is \(1.035530\) to 6 decimal places.
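The three estimates in this example can be reproduced directly; a Python sketch:

```python
import math

# tan(46°) via approximations about a = 45° = π/4; x - a = π/180 radians.
dx = math.pi / 180
fa, dfa, d2fa = 1.0, 2.0, 4.0   # tan, sec^2 and 2 sin/cos^3 at π/4
constant = fa
linear = fa + dfa * dx
quadratic = fa + dfa * dx + 0.5 * d2fa * dx**2
exact = math.tan(46 * math.pi / 180)   # 1.035530...
```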
All of our derivative formulae for trig functions were developed under the assumption that angles are measured in radians. Those derivatives appeared in the approximation formulae that we used in Example 3.4.22, so we were obliged to express \(xa\) in radians.
Suppose that you are ten meters from a vertical pole. You were contracted to measure the height of the pole. You can't take it down or climb it. So you measure the angle subtended by the top of the pole. You measure \(\theta=30^\circ\text{,}\) which gives
\begin{gather*} h=10\tan 30^\circ=\tfrac{10}{\sqrt{3}}\approx 5.77\text{m}\qquad\qquad \end{gather*}
This is just standard trigonometry — if we know the angle exactly then we know the height exactly.
However, in the “real world” angles are hard to measure with such precision. If the contract requires your measurement of the height of the pole to be accurate to within \(10\) cm, how accurate does your measurement of the angle \(\theta\) need to be?
Solution For simplicity, we are going to assume that the pole is perfectly straight and perfectly vertical and that your distance from the pole was exactly 10 m.
 Write \(\theta=\theta_0+\Delta\theta\) where \(\theta\) is the exact angle, \(\theta_0\) is the measured angle and \(\Delta \theta\) is the error.
 Similarly write \(h=h_0+\Delta h\text{,}\) where \(h\) is the exact height and \(h_0=\tfrac{10}{\sqrt{3}}\) is the computed height. Their difference, \(\Delta h\text{,}\) is the error.
 Then
\begin{align*} h_0&=10\tan\theta_0 & h_0+\Delta h&=10\tan(\theta_0+\Delta\theta)\\ \Delta h &= 10\tan(\theta_0+\Delta\theta) - 10\tan\theta_0 \end{align*}
We could attempt to solve this equation for \(\Delta\theta\) in terms of \(\Delta h\) — but it is far simpler to approximate \(\Delta h\) using the linear approximation in equation 3.4.20.  To use equation 3.4.20, replace \(y\) with \(h\text{,}\) \(x\) with \(\theta\) and \(a\) with \(\theta_0\text{.}\) Our function \(f(\theta) = 10 \tan\theta\) and \(\theta_0 = 30^\circ = \pi/6\) radians. Then
\begin{align*} \Delta y &\approx f'(a) \Delta x & \text{ becomes }&& \Delta h &\approx f'(\theta_0) \Delta \theta \end{align*}
Since \(f(\theta)=10 \tan \theta\text{,}\) \(f'(\theta) = 10\sec^2\theta\) and\begin{gather*} f'(\theta_0) = 10\sec^2(\pi/6) = 10 \cdot \left(\frac{2}{\sqrt{3}} \right)^2 = \frac{40}{3} \end{gather*}
 Putting things together gives
\begin{align*} \Delta h &\approx f'(\theta_0) \Delta \theta & \text{ becomes }&& \Delta h & \approx \frac{40}{3} \Delta \theta \end{align*}
We can then solve this equation for \(\Delta\theta\) in terms of \(\Delta h\text{:}\)\begin{align*} \Delta \theta & \approx \frac{3}{40} \Delta h \end{align*}
 We are told that we must have \(|\Delta h| \lt 0.1\text{,}\) so we must have
\begin{align*} |\Delta \theta| &\leq \frac{3}{400} \end{align*}
This is measured in radians, so converting back to degrees\begin{align*} \frac{3}{400} \cdot \frac{180}{\pi} &= 0.43^\circ \end{align*}
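The arithmetic above is compact enough to verify directly. Here is a short Python check (an illustration only), using the same \(f(\theta)=10\tan\theta\) and \(\theta_0=30^\circ\text{:}\)

```python
import math

# Linear approximation: Delta h ~ f'(theta_0) * Delta theta, with
# f(theta) = 10*tan(theta) and theta_0 = 30 degrees = pi/6 radians.
fprime = 10 / math.cos(math.pi / 6) ** 2   # 10*sec^2(pi/6) = 40/3
dtheta_max = 0.1 / fprime                  # radians, so that |Delta h| <= 0.1 m
dtheta_deg = math.degrees(dtheta_max)

print(fprime)      # 13.333...
print(dtheta_deg)  # ~ 0.43 degrees
```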
Suppose that you measure, approximately, some quantity. Suppose that the exact value of that quantity is \(Q_0\) and that your measurement yielded \(Q_0+\Delta Q\text{.}\) Then \(\Delta Q\) is called the absolute error of the measurement and \(100\frac{\Delta Q}{Q_0}\) is called the percentage error of the measurement. As an example, if the exact value is \(4\) and the measured value is \(5\text{,}\) then the absolute error is \(5-4=1\) and the percentage error is \(100\frac{5-4}{4}=25\text{.}\) That is, the error, \(1\text{,}\) was \(25\%\) of the exact value, \(4\text{.}\)
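These definitions translate directly into code. The two helper functions below are a hypothetical illustration of the definitions, not something taken from the text:

```python
def absolute_error(exact, measured):
    """Absolute error Delta Q of a measurement."""
    return measured - exact

def percentage_error(exact, measured):
    """Percentage error 100 * Delta Q / Q_0."""
    return 100 * (measured - exact) / exact

# The example from the text: exact value 4, measured value 5.
print(absolute_error(4, 5))    # 1
print(percentage_error(4, 5))  # 25.0
```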
Suppose that the radius of a sphere has been measured with a percentage error of at most \(\varepsilon\)%. Find the corresponding approximate percentage errors in the surface area and volume of the sphere.
Solution We need to be careful in this problem to convert between absolute and percentage errors correctly.
 Suppose that the exact radius is \(r_0\) and that the measured radius is \(r_0+\Delta r\text{.}\)
 Then the absolute error in the measurement is \(\Delta r\) and, by definition, the percentage error is \(100\tfrac{\Delta r}{r_0}\text{.}\) We are told that \(100\tfrac{\Delta r}{r_0}\le\varepsilon\text{.}\)
 The surface area ^{16} of a sphere of radius \(r\) is \(A(r)=4\pi r^2\text{.}\) The error in the surface area computed with the measured radius is
\begin{align*} \Delta A &=A(r_0+\Delta r)-A(r_0)\approx A'(r_0)\Delta r\\ &= 8\pi r_0 \Delta r \end{align*}
where we have made use of the linear approximation, equation 3.4.20.  The corresponding percentage error is then
\begin{gather*} 100\frac{\Delta A}{A(r_0)} \approx 100\frac{A'(r_0)\Delta r}{A(r_0)} = 100\frac{8\pi r_0\Delta r}{4\pi r_0^2} = 2\times 100\frac{\Delta r}{r_0} \le 2\varepsilon \end{gather*}
 The volume of a sphere ^{17} of radius \(r\) is \(V(r)=\frac{4}{3}\pi r^3\text{.}\) The error in the volume computed with the measured radius is
\begin{align*}\Delta V &=V(r_0+\Delta r)-V(r_0)\approx V'(r_0)\Delta r\\ &= 4\pi r_0^2 \Delta r \end{align*}
where we have again made use of the linear approximation, equation 3.4.20.  The corresponding percentage error is
\begin{gather*} 100\frac{\Delta V}{V(r_0)} \approx 100\frac{V'(r_0)\Delta r}{V(r_0)} = 100\frac{4\pi r_0^2\Delta r}{4\pi r_0^3/3} = 3\times 100\frac{\Delta r}{r_0} \le 3\varepsilon \end{gather*}
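We can test these approximate rules numerically. In the Python sketch below, the radius \(r_0=2\) and the percentage error \(\varepsilon=1\) are arbitrary illustrative values:

```python
import math

r0 = 2.0        # hypothetical exact radius (m)
eps = 1.0       # hypothetical percentage error in the radius measurement
dr = r0 * eps / 100

def area(r):
    return 4 * math.pi * r**2

def volume(r):
    return (4 / 3) * math.pi * r**3

pct_err_area = 100 * (area(r0 + dr) - area(r0)) / area(r0)
pct_err_vol = 100 * (volume(r0 + dr) - volume(r0)) / volume(r0)

print(pct_err_area)  # ~ 2*eps, as the linear approximation predicts
print(pct_err_vol)   # ~ 3*eps
```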
We have just computed an approximation to \(\Delta V\text{.}\) This problem is actually sufficiently simple that we can compute \(\Delta V\) exactly:
\begin{align*} \Delta V &= V(r_0 + \Delta r) - V(r_0) = \tfrac{4}{3} \pi (r_0 + \Delta r)^3 - \tfrac{4}{3} \pi r_0^3 \end{align*}
 Applying \((a+b)^3=a^3+3a^2b+3ab^2+b^3\) with \(a=r_0\) and \(b=\Delta r\text{,}\) gives
\begin{align*} V(r_0+\Delta r)-V(r_0)&=\tfrac{4}{3}\pi \left[r_0^3+3r_0^2\Delta r+3r_0\,(\Delta r)^2+(\Delta r)^3\right] - \tfrac{4}{3}\pi r_0^3\\ &=\tfrac{4}{3}\pi[3r_0^2\Delta r+3r_0\,(\Delta r)^2+(\Delta r)^3] \end{align*}
 Thus the difference between the exact error and the linear approximation to the error is obtained by retaining only the last two terms in the square brackets. This has magnitude
\begin{gather*} \tfrac{4}{3}\pi\big|3r_0\,(\Delta r)^2+(\Delta r)^3\big| =\tfrac{4}{3}\pi\big|3r_0+\Delta r\big|(\Delta r)^2 \end{gather*}
or in percentage terms\begin{align*} 100\cdot \dfrac{1}{\tfrac{4}{3}\pi r_0^3} \cdot \tfrac{4}{3}\pi \big|3r_0\,(\Delta r)^2+(\Delta r)^3\big| &=100\left|3\frac{(\Delta r)^2}{r_0^2}+\frac{(\Delta r)^3}{r_0^3}\right|\\ &=\left(100 \frac{3|\Delta r|}{r_0}\right) \cdot \left(\frac{|\Delta r|}{r_0}\right) \left|1 +\frac{\Delta r}{3r_0}\right|\\ & \le 3\varepsilon \left(\frac{\varepsilon}{100}\right)\cdot \left(1+\frac{\varepsilon}{300}\right) \end{align*}
Since \(\varepsilon\) is small, we can assume that \(1 + \frac{\varepsilon}{300} \approx 1\text{.}\) Hence the difference between the exact error and the linear approximation of the error is roughly a factor of \(\tfrac{\varepsilon}{100}\) smaller than the linear approximation \(3\varepsilon\text{.}\)  As an aside, notice that if we argue that \(\Delta r\) is very small and so we can ignore terms involving \((\Delta r)^2\) and \((\Delta r)^3\) as being really really small, then we obtain
\begin{align*} V(r_0+\Delta r)V(r_0) &=\tfrac{4}{3}\pi[3r_0^2\Delta r \underbrace{+3r_0\,(\Delta r)^2+(\Delta r)^3}_\text{really really small}]\\ &\approx \tfrac{4}{3}\pi \cdot 3r_0^2\Delta r = 4 \pi r_0^2 \Delta r \end{align*}
which is precisely the result of our linear approximation above.
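To see how small the neglected terms really are, the following sketch compares the exact \(\Delta V\) with the linear approximation for a hypothetical radius \(r_0=1\) and error \(\Delta r=0.01\text{:}\)

```python
import math

r0, dr = 1.0, 0.01   # hypothetical radius and small measurement error

exact_dV = (4 / 3) * math.pi * ((r0 + dr)**3 - r0**3)
linear_dV = 4 * math.pi * r0**2 * dr

# The discrepancy is exactly the neglected (Delta r)^2 and (Delta r)^3 terms.
discrepancy = exact_dV - linear_dV
print(discrepancy)
```

For these values the discrepancy is about \(1\%\) of the linear approximation, in line with the \(\varepsilon/100\) factor found above.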
To compute the height \(h\) of a lamp post, the length \(s\) of the shadow of a two meter pole is measured. The pole is 6 m from the lamp post. If the length of the shadow was measured to be 4 m, with an error of at most one cm, find the height of the lamp post and estimate the percentage error in the height.
Solution We should first draw a picture ^{18}
 By similar triangles we see that
\begin{align*} \frac{2}{s} &= \frac{h}{6+s} \end{align*}
from which we can isolate \(h\) as a function of \(s\text{:}\)\begin{align*} h &= \frac{2(6+s)}{s} = \frac{12}{s} + 2 \end{align*}
 The length of the shadow was measured to be \(s_0=4\) m. The corresponding height of the lamp post is
\begin{align*} h_0 &= \frac{12}{4} + 2 = 5\text{ m} \end{align*}
 If the error in the measurement of the length of the shadow was \(\Delta s\text{,}\) then the exact shadow length was \(s=s_0+\Delta s\) and the exact lamp post height is \(h=f(s_0+\Delta s)\text{,}\) where \(f(s)=\tfrac{12}{s}+2\text{.}\) The error in the computed lamp post height is
\begin{gather*} \Delta h=h-h_0=f(s_0+\Delta s)-f(s_0) \end{gather*}
 We can then make a linear approximation of this error using equation 3.4.20:
\begin{align*} \Delta h &\approx f'(s_0)\Delta s =-\frac{12}{s_0^2}\Delta s =-\frac{12}{4^2}\Delta s \end{align*}
 We are told that \(|\Delta s|\le\frac{1}{100}\) m. Consequently, approximately,
\begin{gather*} |\Delta h|\le \frac{12}{4^2}\frac{1}{100}=\frac{3}{400} \end{gather*}
The percentage error is then approximately\begin{align*} 100\frac{|\Delta h|}{h_0} & \le 100\frac{3}{400\times 5}=0.15\% \end{align*}
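As a quick check of the arithmetic, here is the computation in Python (the function \(h(s)=12/s+2\) is exactly the one derived above):

```python
# Lamp post height from shadow length: h(s) = 12/s + 2.
s0 = 4.0          # measured shadow length (m)
ds = 0.01         # at most 1 cm of measurement error

h0 = 12 / s0 + 2                 # computed height: 5 m
dh = abs(-12 / s0**2 * ds)       # linear approximation of |Delta h|
pct = 100 * dh / h0              # approximate percentage error

print(h0)   # 5.0
print(pct)  # 0.15
```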
The Error in the Taylor Polynomial Approximations
Any time you make an approximation, it is desirable to have some idea of the size of the error you introduced. That is, we would like to know the difference \(R(x)\) between the original function \(f(x)\) and our approximation \(F(x)\text{:}\)
\begin{align*} R(x) &= f(x)-F(x). \end{align*}
Of course if we know \(R(x)\) exactly, then we could recover \(f(x) = F(x)+R(x)\) — so this is an unrealistic hope. In practice we would simply like to bound \(R(x)\text{:}\)
\begin{align*} |R(x)| &= |f(x)-F(x)| \leq M \end{align*}
where (hopefully) \(M\) is some small number. It is worth stressing that we do not need the tightest possible value of \(M\text{,}\) we just need a relatively easily computed \(M\) that isn't too far off the true value of \(|f(x)-F(x)|\text{.}\)
We will now develop a formula for the error introduced by the constant approximation, equation 3.4.1 (developed back in Section 3.4.1)
\begin{align*} f(x)&\approx f(a) = T_0(x) & \text{$0^\mathrm{th}$ Taylor polynomial} \end{align*}
The resulting formula can be used to get an upper bound on the size of the error \(R(x)\text{.}\)
The main ingredient we will need is the Mean-Value Theorem (Theorem 2.13.5) — so we suggest you quickly revise it. Consider the following obvious statement:
\begin{align*} f(x) &= f(x) & \text{now some sneaky manipulations}\\ & = f(a) + (f(x)-f(a))\\ &= \underbrace{f(a)}_{=T_0(x)} + (f(x)-f(a)) \cdot \underbrace{\frac{x-a}{x-a}}_{=1}\\ &= T_0(x) + \underbrace{\frac{f(x)-f(a)}{x-a}}_\text{looks familiar} \cdot (x-a) \end{align*}
Indeed, this equation is important in the discussion that follows, so we'll highlight it
\begin{align*} f(x) &= T_0(x) + \left[ \frac{f(x)-f(a)}{x-a} \right](x-a) \end{align*}
The coefficient \(\dfrac{f(x)-f(a)}{x-a}\) of \((x-a)\) is the average slope of \(f(t)\) as \(t\) moves from \(t=a\) to \(t=x\text{.}\) We can picture this as the slope of the secant joining the points \((a,f(a))\) and \((x,f(x))\) in the sketch below.
As \(t\) moves from \(a\) to \(x\text{,}\) the instantaneous slope \(f'(t)\) keeps changing. Sometimes \(f'(t)\) might be larger than the average slope \(\tfrac{f(x)-f(a)}{x-a}\text{,}\) and sometimes \(f'(t)\) might be smaller. However, by the Mean-Value Theorem (Theorem 2.13.5), there must be some number \(c\text{,}\) strictly between \(a\) and \(x\text{,}\) for which \(f'(c)=\dfrac{f(x)-f(a)}{x-a}\) exactly.
Substituting this into formula 3.4.28 gives
\begin{align*} f(x) &=T_0(x) +f'(c)(x-a) & \text{for some $c$ strictly between $a$ and $x$} \end{align*}
Notice that this expression as it stands is not quite what we want. Let us massage this around a little more into a more useful form
\begin{align*} f(x) - T_0(x) &= f'(c) \cdot (x-a) & \text{for some $c$ strictly between $a$ and $x$} \end{align*}
Notice that the MVT doesn't tell us the value of \(c\text{,}\) however we do know that it lies strictly between \(x\) and \(a\text{.}\) So if we can get a good bound on \(f'(c)\) on this interval then we can get a good bound on the error.
Let us return to Example 3.4.2, and we'll try to bound the error in our approximation of \(e^{0.1}\text{.}\)
 Recall that \(f(x) = e^x\text{,}\) \(a=0\) and \(T_0(x) = e^0 = 1\text{.}\)
 Then by equation 3.4.30
\begin{align*} e^{0.1} - T_0(0.1) &= f'(c) \cdot (0.1 - 0) & \text{with $0 \lt c \lt 0.1$} \end{align*}
 Now \(f'(c) = e^c\text{,}\) so we need to bound \(e^c\) on \((0,0.1)\text{.}\) Since \(e^c\) is an increasing function, we know that
\begin{align*} e^0 & \lt f'(c) \lt e^{0.1} & \text{ when $0 \lt c \lt 0.1$} \end{align*}
So one is tempted to write that\begin{align*} e^{0.1} - T_0(0.1) &= R(0.1) = f'(c) \cdot (0.1 - 0)\\ & \lt e^{0.1} \cdot 0.1 \end{align*}
And while this is true, it is rather circular. We have just bounded the error in our approximation of \(e^{0.1}\) by \(\frac{1}{10}e^{0.1}\) — if we actually knew \(e^{0.1}\) then we wouldn't need to estimate it!  While we don't know \(e^{0.1}\) exactly, we do know ^{19} that \(1 = e^0 \lt e^{0.1} \lt e^1 \lt 3\text{.}\) This gives us
\begin{gather*} R(0.1) \lt 3 \times 0.1 = 0.3 \end{gather*}
That is — the error in our approximation of \(e^{0.1}\) is no greater than \(0.3\text{.}\) Recall that we don't need the error exactly, we just need a good idea of how large it actually is.  In fact the real error here is
\begin{align*} e^{0.1} - T_0(0.1) &=e^{0.1} - 1 = 0.1051709\dots \end{align*}
so we have overestimated the error by a factor of 3.
But we can actually go a little further here — we can bound the error above and below. If we do not take absolute values, then since
\begin{align*} e^{0.1} - T_0(0.1) &= f'(c) \cdot 0.1 & \text{ and } 1 \lt f'(c) \lt 3 \end{align*}
we can write
\begin{align*} 1\times 0.1 \leq ( e^{0.1} - T_0(0.1) ) & \leq 3\times 0.1 \end{align*}
so
\begin{align*} T_0(0.1) + 0.1 &\leq e^{0.1} \leq T_0(0.1)+0.3\\ 1.1 &\leq e^{0.1} \leq 1.3 \end{align*}
So while the upper bound is weak, the lower bound is quite tight.
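A quick numerical confirmation of these bounds, using the calculator value of \(e^{0.1}\) (which is, of course, exactly what the text is trying to avoid; the snippet is only a consistency check):

```python
import math

T0 = 1.0             # constant approximation of e^0.1 about a = 0
x = math.exp(0.1)    # the "true" value, for checking only

print(x)             # 1.10517..., which indeed lies in [1.1, 1.3]
print(x - T0)        # the true error, ~ 0.1051709
```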
There are formulae similar to equation 3.4.29, that can be used to bound the error in our other approximations; all are based on generalisations of the MVT. The next one — for linear approximations — is
\begin{align*} f(x) & =\underbrace{f(a)+f'(a)(x-a)}_{=T_1(x)}+\frac{1}{2} f''(c)(x-a)^2 & \text{for some } c \text{ strictly between } a \text{ and } x \end{align*}
which we can rewrite in terms of \(T_1(x)\text{:}\)
\begin{align*} f(x)-T_1(x) &= \frac{1}{2} f''(c)(x-a)^2 & \text{for some } c \text{ strictly between } a \text{ and } x \end{align*}
It implies that the error that we make when we approximate \(f(x)\) by \(T_1(x) = f(a)+f'(a)\,(x-a)\) is exactly \(\frac{1}{2} f''(c)\,(x-a)^2\) for some \(c\) strictly between \(a\) and \(x\text{.}\)
More generally
\begin{align*} f(x)=& \underbrace{f(a)\!+\!f'(a)\cdot(x\!-\!a)\!+\cdots+\!\frac{1}{n!}f^{(n)}(a)\cdot(x\!-\!a)^n}_{= T_n(x)} \!+\!\frac{1}{(n\!+\!1)!}f^{(n+1)}(c)\cdot (x\!-\!a)^{n+1} \end{align*}
for some \(c\) strictly between \(a\) and \(x\text{.}\) Again, rewriting this in terms of \(T_n(x)\) gives
\begin{align*} f(x) - T_n(x) &= \frac{1}{(n+1)!}f^{(n+1)}(c)\cdot (x-a)^{n+1} \quad \text{for some $c$ strictly between $a$ and $x$} \end{align*}
That is, the error introduced when \(f(x)\) is approximated by its Taylor polynomial of degree \(n\text{,}\) is precisely the last term of the Taylor polynomial of degree \(n+1\text{,}\) but with the derivative evaluated at some point between \(a\) and \(x\text{,}\) rather than exactly at \(a\text{.}\) These error formulae are proven in the optional Section 3.4.10 later in this chapter.
Approximate \(\sin 46^\circ\) using Taylor polynomials about \(a=45^\circ\text{,}\) and estimate the resulting error.
Solution
 Start by defining \(f(x) = \sin x\) and
\begin{align*} a&=45^\circ=45\tfrac{\pi}{180} {\rm radians}& x&=46^\circ=46\tfrac{\pi}{180} {\rm radians}\\ x-a&=\tfrac{\pi}{180} {\rm radians} \end{align*}
 The first few derivatives of \(f\) at \(a\) are
\begin{align*} f(x)&=\sin x &f(a)&=\frac{1}{\sqrt{2}}\\ f'(x)&=\cos x & f'(a)&=\frac{1}{\sqrt{2}}\\ f''(x)&=-\sin x & f''(a)&=-\frac{1}{\sqrt{2}}\\ f^{(3)}(x)&=-\cos x & f^{(3)}(a)&=-\frac{1}{\sqrt{2}} \end{align*}
 The constant, linear and quadratic Taylor approximations for \(\sin(x)\) about \(\frac{\pi}{4}\) are
\begin{alignat*}{2} T_0(x) &= f(a) &&= \frac{1}{\sqrt{2}}\\ T_1(x) &= T_0(x) + f'(a) \cdot(x\!-\!a) &&= \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{2}}\left(x\! -\! \frac{\pi}{4} \right)\\ T_2(x) &= T_1(x)\! +\! \frac{1}{2} f''(a) \cdot(x\!-\!a)^2 &&=\! \frac{1}{\sqrt{2}} \!+\! \frac{1}{\sqrt{2}}\left(x\!-\! \frac{\pi}{4} \right) \!-\! \frac{1}{2\sqrt{2}}\left(x\! -\! \frac{\pi}{4} \right)^2 \end{alignat*}
 So the approximations for \(\sin 46^\circ\) are
\begin{align*} \sin46^\circ &\approx T_0\left(\frac{46\pi}{180}\right) = \frac{1}{\sqrt{2}}\\ &=0.70710678\\ \sin46^\circ &\approx T_1\left(\frac{46\pi}{180}\right) = \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{2}} \left(\frac{\pi}{180}\right)\\ &=0.71944812\\ \sin46^\circ&\approx T_2\left(\frac{46\pi}{180}\right) = \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{2}} \left(\frac{\pi}{180}\right) - \frac{1}{2\sqrt{2}}\left(\frac{\pi}{180}\right)^2\\ &=0.71934042 \end{align*}
 The errors in those approximations are (respectively)
\begin{alignat*}{3} &{\rm error\ in\ 0.70710678}& &=f'(c)(x-a)& &=\cos c \cdot \left(\frac{\pi}{180}\right)\\ &{\rm error\ in\ 0.71944812}& &=\frac{1}{2} f''(c)(x-a)^2& &=-\frac{1}{2} \cdot \sin c\cdot \left(\frac{\pi}{180}\right)^2\\ &{\rm error\ in\ 0.71934042}& &=\frac{1}{3!}f^{(3)}(c)(x-a)^3& &=-\frac{1}{3!}\cdot \cos c \cdot \left(\frac{\pi}{180}\right)^3 \end{alignat*}
In each of these three cases \(c\) must lie somewhere between \(45^\circ\) and \(46^\circ\text{.}\)  Rather than carefully estimating \(\sin c\) and \(\cos c\) for \(c\) in that range, we make use of a simpler (but somewhat less tight) bound. No matter what \(c\) is, we know that \(|\sin c|\le 1\) and \(|\cos c|\le 1\text{.}\) Hence
\begin{alignat*}{3} &\big|{\rm error\ in\ 0.70710678}\big|& &\le \left(\frac{\pi}{180}\right)& & \lt 0.018\\ &\big|{\rm error\ in\ 0.71944812}\big|& &\le\frac{1}{2} \left(\frac{\pi}{180}\right)^2& & \lt 0.00015\\ &\big|{\rm error\ in\ 0.71934042}\big|& &\le \frac{1}{3!} \left(\frac{\pi}{180}\right)^3& & \lt 0.0000009 \end{alignat*}
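The three error bounds can be confirmed numerically. The Python snippet below (a sanity check only) recomputes the approximations and the true errors:

```python
import math

a = math.radians(45)
x = math.radians(46)
dx = x - a
s = 1 / math.sqrt(2)          # f(a) = f'(a) = 1/sqrt(2)

T0 = s                        # constant approximation
T1 = s + s * dx               # linear approximation
T2 = s + s * dx - s / 2 * dx**2  # quadratic approximation (f''(a) = -1/sqrt(2))
exact = math.sin(x)

err0, err1, err2 = abs(exact - T0), abs(exact - T1), abs(exact - T2)
print(err0, err1, err2)
```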
In Example 3.4.31 above we used the fact that \(e \lt 3\) without actually proving it. Let's do so now.
 Consider the linear approximation of \(e^x\) about \(a=0\text{.}\)
\begin{align*} T_1(x) &= f(0) + f'(0)\cdot x = 1 + x \end{align*}
So at \(x=1\) we have\begin{align*} e &\approx T_1(1) = 2 \end{align*}
 The error in this approximation is
\begin{align*} e^x - T_1(x) &= \frac{1}{2} f''(c) \cdot x^2 = \frac{e^c}{2} \cdot x^2 \end{align*}
So at \(x=1\) we have\begin{align*} e - T_1(1) &= \frac{e^c}{2} \end{align*}
where \(0 \lt c \lt 1\text{.}\)  Now since \(e^x\) is an increasing ^{20} function, it follows that \(e^c \lt e\text{.}\) Hence
\begin{align*} e - T_1(1) &= \frac{e^c}{2} \lt \frac{e}{2} \end{align*}
Moving the \(\frac{e}{2}\) to the left hand side and the \(T_1(1)\) to the right hand side gives\begin{gather*} \frac{e}{2} \lt T_1(1) = 2 \end{gather*}
So \(e \lt 4\text{.}\)  This isn't as tight as we would like — so now do the same with the quadratic approximation with \(a=0\text{:}\)
\begin{align*} e^x & \approx T_2(x) = 1 + x + \frac{x^2}{2}\\ \end{align*}
So when \(x=1\) we have
\begin{align*} e & \approx T_2(1) = 1 + 1 + \frac{1}{2} = \frac{5}{2} \end{align*}  The error in this approximation is
\begin{align*} e^x - T_2(x) &= \frac{1}{3!} f'''(c) \cdot x^3 = \frac{e^c}{6} \cdot x^3 \end{align*}
So at \(x=1\) we have\begin{align*} e - T_2(1) &= \frac{e^c}{6} \end{align*}
where \(0 \lt c \lt 1\text{.}\)  Again since \(e^x\) is an increasing function we have \(e^c \lt e\text{.}\) Hence
\begin{align*} e - T_2(1) &= \frac{e^c}{6} \lt \frac{e}{6} \end{align*}
That is\begin{gather*} \frac{5e}{6} \lt T_2(1) = \frac{5}{2} \end{gather*}
So \(e \lt 3\) as required.
We wrote down the general \(n^\mathrm{th}\) degree Maclaurin polynomial approximation of \(e^x\) in Example 3.4.12 above.
 Recall that
\begin{align*} T_n(x) &= \sum_{k=0}^n \frac{1}{k!} x^k \end{align*}
 The error in this approximation is (by equation 3.4.33)
\begin{align*} e^x - T_n(x) &= \frac{1}{(n+1)!} e^c \cdot x^{n+1} \end{align*}
where \(c\) is some number between \(0\) and \(x\text{.}\)  So setting \(x=1\) in this gives
\begin{align*} e - T_n(1) &= \frac{1}{(n+1)!} e^c \end{align*}
where \(0 \lt c \lt 1\text{.}\)  Since \(e^x\) is an increasing function we know that \(1 = e^0 \lt e^c \lt e^1 \lt 3\text{,}\) so the above expression becomes
\begin{align*} \frac{1}{(n+1)!} \leq e - T_n(1) &= \frac{1}{(n+1)!} e^c \leq \frac{3}{(n+1)!} \end{align*}
 So when \(n=9\) we have
\begin{align*} \frac{1}{10!} \leq e - \left(1 + 1 + \frac{1}{2} +\cdots + \frac{1}{9!} \right) &\leq \frac{3}{10!} \end{align*}
 Now \(1/10! \lt 3/10! \lt 10^{-6}\text{,}\) so the approximation of \(e\) by
\begin{gather*} e \approx 1 + 1 + \frac{1}{2} +\cdots + \frac{1}{9!} = \frac{98641}{36288} = 2.718281\dots \end{gather*}
is correct to 6 decimal places.  More generally we know that using \(T_n(1)\) to approximate \(e\) will have an error of at most \(\frac{3}{(n+1)!}\) — so it converges very quickly.
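We can verify both the fraction and the error bound with exact rational arithmetic:

```python
import math
from fractions import Fraction

# T_9(1) = sum of 1/k! for k = 0, ..., 9, kept as an exact fraction.
T9 = sum(Fraction(1, math.factorial(k)) for k in range(10))

print(T9)          # 98641/36288
approx = float(T9)
print(approx)      # 2.7182815...
```

The true error \(e - T_9(1)\) does indeed land between \(1/10!\) and \(3/10!\text{,}\) as the bound predicts.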
Recall ^{21} that in Example 3.4.24 (measuring the height of the pole), we used the linear approximation
\begin{align*} f(\theta_0+\Delta\theta)&\approx f(\theta_0)+f'(\theta_0)\Delta\theta \end{align*}
with \(f(\theta)=10\tan\theta\) and \(\theta_0=30\dfrac{\pi}{180}\) to get
\begin{align*} \Delta h &=f(\theta_0+\Delta\theta)f(\theta_0)\approx f'(\theta_0)\Delta\theta \quad \text{which implies that} \quad \Delta\theta \approx \frac{\Delta h}{f'(\theta_0)} \end{align*}
 While this procedure is fairly reliable, it did involve an approximation, so you could not guarantee to your client's lawyer with \(100\%\) certainty that an accuracy of 10 cm was achieved.
 On the other hand, if we use the exact formula 3.4.29, with the replacements \(x\rightarrow \theta_0+\Delta\theta\) and \(a\rightarrow\theta_0\)
\begin{align*} f(\theta_0+\Delta\theta)&=f(\theta_0)+f'(c)\Delta\theta & \text{for some $c$ between $\theta_0$ and $\theta_0+\Delta\theta$} \end{align*}
in place of the approximate formula 3.4.3, this legality is taken care of:\begin{align*} \Delta h &=f(\theta_0\!+\!\Delta\theta)-f(\theta_0) =f'(c)\Delta\theta \quad \text{for some $c$ between $\theta_0$ and $\theta_0+\Delta\theta$} \end{align*}
We can clean this up a little more since in our example \(f'(\theta) = 10\sec^2\theta\text{.}\) Thus for some \(c\) between \(\theta_0\) and \(\theta_0 + \Delta\theta\text{:}\)\begin{gather*} \Delta h = 10 \sec^2(c) \Delta \theta \end{gather*}
 Of course we do not know exactly what \(c\) is. But suppose that we know that the angle was somewhere between \(25^\circ\) and \(35^\circ\text{.}\) In other words suppose that, even though we don't know precisely what our measurement error was, it was certainly no more than \(5^\circ\text{.}\)
 Now on the range \(25^\circ \lt c \lt 35^\circ\text{,}\) \(\sec(c)\) is an increasing and positive function. Hence on this range
\begin{gather*} 1.217\dots = \sec^2 25^\circ \leq \sec^2 c \leq \sec^2 35^\circ = 1.490\dots \lt 1.491 \end{gather*}
So\begin{align*} 12.17 \cdot |\Delta \theta| &\leq |\Delta h| = 10 \sec^2(c) \cdot |\Delta \theta| \leq 14.91 \cdot |\Delta \theta| \end{align*}
 Since we require \(|\Delta h| \lt 0.1\text{,}\) we need \(14.91 |\Delta \theta| \lt 0.1\text{,}\) that is
\begin{gather*} |\Delta \theta| \lt \frac{0.1}{14.91} = 0.0067\dots \end{gather*}
So we must measure angles with an error of no more than \(0.0067\) radians — which is\begin{gather*} \frac{180}{\pi} \cdot 0.0067 = 0.38^\circ. \end{gather*}
Hence a measurement error of \(0.38^\circ\) or less is acceptable.
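The bounds on \(\sec^2 c\) and the resulting angle tolerance are easy to reproduce numerically:

```python
import math

def sec2(deg):
    """sec^2 of an angle given in degrees."""
    return 1 / math.cos(math.radians(deg)) ** 2

lo, hi = 10 * sec2(25), 10 * sec2(35)   # bounds on f'(c) = 10*sec^2(c)
dtheta_max = 0.1 / hi                   # radians, worst case

print(lo, hi)                    # ~ 12.17 and ~ 14.91
print(math.degrees(dtheta_max))  # ~ 0.38 degrees
```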
(Optional) — Derivation of the Error Formulae
In this section we will derive the formula for the error that we gave in equation 3.4.33 — namely
\begin{align*} R_n(x) = f(x) - T_n(x) &= \frac{1}{(n+1)!}f^{(n+1)}(c)\cdot (x-a)^{n+1} \end{align*}
for some \(c\) strictly between \(a\) and \(x\text{,}\) and where \(T_n(x)\) is the \(n^\mathrm{th}\) degree Taylor polynomial approximation of \(f(x)\) about \(x=a\text{:}\)
\begin{align*} T_n(x) &= \sum_{k=0}^n \frac{1}{k!} f^{(k)}(a)\cdot(x-a)^k. \end{align*}
Recall that we have already proved a special case of this formula for the constant approximation using the Mean-Value Theorem (Theorem 2.13.5). To prove the general case we need the following generalisation ^{22} of that theorem:
Let the functions \(F(x)\) and \(G(x)\) both be defined and continuous on \(a\le x\le b\) and both be differentiable on \(a \lt x \lt b\text{.}\) Furthermore, suppose that \(G'(x)\ne 0\) for all \(a \lt x \lt b\text{.}\) Then, there is a number \(c\) obeying \(a \lt c \lt b\) such that
\begin{gather*} \frac{F(b)-F(a)}{G(b)-G(a)}=\frac{F'(c)}{G'(c)} \end{gather*}
Notice that setting \(G(x) = x\) recovers the original Mean-Value Theorem. It turns out that this theorem is not too difficult to prove from the MVT using some sneaky algebraic manipulations:

 First we construct a new function \(h(x)\) as a linear combination of \(F(x)\) and \(G(x)\) so that \(h(a)=h(b)=0\text{.}\) Some experimentation yields
\begin{gather*} h(x)=\big[F(b)-F(a)\big]\cdot \big[G(x)-G(a)\big] -\big[G(b)-G(a)\big] \cdot \big[F(x)-F(a)\big] \end{gather*}
 Since \(h(a)=h(b)=0\text{,}\) the Mean-Value theorem (actually Rolle's theorem) tells us that there is a number \(c\) obeying \(a \lt c \lt b\) such that \(h'(c)=0\text{:}\)\begin{align*} h'(x) &= \big[F(b)-F(a)\big] \cdot G'(x) - \big[G(b)-G(a)\big] \cdot F'(x) & \text{ so}\\ 0 &= \big[F(b)-F(a)\big] \cdot G'(c) - \big[G(b)-G(a)\big] \cdot F'(c) \end{align*}
Now move the \(G'(c)\) terms to one side and the \(F'(c)\) terms to the other:
\begin{align*} \big[F(b)-F(a)\big] \cdot G'(c) &= \big[G(b)-G(a)\big] \cdot F'(c). \end{align*}
 Since we have \(G'(x) \neq 0\text{,}\) we know that \(G'(c) \neq 0\text{.}\) Further the Mean-Value theorem ensures ^{23} that \(G(a) \neq G(b)\text{.}\) Hence we can move terms about to get
\begin{align*} \big[F(b)-F(a)\big] &= \big[G(b)-G(a)\big] \cdot \frac{F'(c)}{G'(c)}\\ \frac{F(b)-F(a)}{G(b)-G(a)} &= \frac{F'(c)}{G'(c)} \end{align*}
as required.
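A concrete numerical instance may help make the theorem feel less abstract. Below we pick the hypothetical functions \(F(x)=x^3\) and \(G(x)=x^2\) on \([1,2]\) and solve for the number \(c\) explicitly:

```python
# Generalised (Cauchy) MVT check for F(x) = x^3, G(x) = x^2 on [a, b] = [1, 2].
a, b = 1.0, 2.0

def F(x): return x**3
def G(x): return x**2
def Fp(x): return 3 * x**2   # F'(x)
def Gp(x): return 2 * x      # G'(x)

ratio = (F(b) - F(a)) / (G(b) - G(a))   # (8 - 1)/(4 - 1) = 7/3
# Here F'(c)/G'(c) = 3c/2, so solving 3c/2 = ratio gives c explicitly:
c = 2 * ratio / 3
print(c)  # ~ 1.5556, which lies strictly between a and b
```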
Armed with the above theorem we can now move on to the proof of the Taylor remainder formula.

We begin by proving the remainder formula for \(n=1\text{.}\) That is
\begin{align*} f(x) - T_1(x) &= \frac{1}{2}f''(c) \cdot(x-a)^2 \end{align*}
 Start by setting
\begin{align*} F(x) &= f(x)-T_1(x) & G(x) &= (x-a)^2 \end{align*}
Notice that, since \(T_1(a)=f(a)\) and \(T'_1(x) = f'(a)\text{,}\)\begin{align*} F(a) &= 0 & G(a)&=0\\ F'(x) &= f'(x)-f'(a) & G'(x) &= 2(x-a) \end{align*}
 Now apply the generalised MVT with \(b=x\text{:}\) there exists a point \(q\) between \(a\) and \(x\) such that
\begin{align*} \frac{F(x)-F(a)}{G(x)-G(a)} &= \frac{F'(q)}{G'(q)}\\ \frac{F(x)-0}{G(x) - 0} &= \frac{f'(q)-f'(a)}{2(q-a)}\\ 2 \cdot \frac{F(x)}{G(x)} &= \frac{f'(q)-f'(a)}{q-a} \end{align*}
 Consider the right-hand side of the above equation and set \(g(x) = f'(x)\text{.}\) Then we have the term \(\frac{g(q)-g(a)}{q-a}\) — this is exactly the form needed to apply the MVT. So now apply the standard MVT to the right-hand side of the above equation — there is some \(c\) between \(q\) and \(a\) so that
\begin{align*} \frac{f'(q)-f'(a)}{q-a} &= \frac{g(q)-g(a)}{q-a} = g'(c) = f''(c) \end{align*}
Notice that here we have assumed that \(f''(x)\) exists.
 Putting this together we have that
\begin{align*} 2 \cdot \frac{F(x)}{G(x)} &= \frac{f'(q)-f'(a)}{q-a} = f''(c)\\ 2 \frac{f(x)-T_1(x)}{(x-a)^2} &= f''(c)\\ f(x) - T_1(x) &= \frac{1}{2!} f''(c) \cdot (x-a)^2 \end{align*}
as required.
Oof! We have now proved the case \(n=1\) (and we did \(n=0\) earlier).
To proceed — assume we have proved our result for \(n=1,2,\cdots, k\text{.}\) We realise that we haven't done this yet, but bear with us. Using that assumption we will prove the result is true for \(n=k+1\text{.}\) Once we have done that, then
 we have proved the result is true for \(n=1\text{,}\) and
 we have shown if the result is true for \(n=k\) then it is true for \(n=k+1\)
Hence it must be true for all \(n \geq 1\text{.}\) This style of proof is called mathematical induction. You can think of the process as something like climbing a ladder:
 prove that you can get onto the ladder (the result is true for \(n=1\)), and
 if I can stand on the current rung, then I can step up to the next rung (if the result is true for \(n=k\) then it is also true for \(n=k+1\))
Hence I can climb as high as I like.
 Let \(k \gt 0\) and assume we have proved
\begin{align*} f(x) - T_k(x) &= \frac{1}{(k+1)!} f^{(k+1)}(c) \cdot (x-a)^{k+1} \end{align*}
for some \(c\) between \(a\) and \(x\text{.}\)
 Now set
\begin{align*} F(x) &= f(x) - T_{k+1}(x) & G(x) &= (x-a)^{k+1}\\ \end{align*}
and notice that, since \(T_{k+1}(a)=f(a)\text{,}\)
\begin{align*} F(a) &= f(a)-T_{k+1}(a)=0 & G(a) &= 0 & G'(x) &= (k+1)(x-a)^k \end{align*} and apply the generalised MVT with \(b=x\text{:}\) hence there exists a \(q\) between \(a\) and \(x\) so that
\begin{align*} \frac{F(x)-F(a)}{G(x)-G(a)} &= \frac{F'(q)}{G'(q)} &\text{which becomes}\\ \frac{F(x)}{(x-a)^{k+1}} &= \frac{F'(q)}{(k+1)(q-a)^k} & \text{rearrange}\\ F(x) &= \frac{(x-a)^{k+1}}{(k+1)(q-a)^k} \cdot F'(q) \end{align*}
 We now examine \(F'(q)\text{.}\) First carefully differentiate \(F(x)\text{:}\)
\begin{align*} F'(x) &= \dfrac{d}{dx} \bigg[f(x) - \bigg( f(a) + f'(a)(x-a) + \frac{1}{2} f''(a)(x-a)^2 + \cdots \\ &\hskip2.5in+ \frac{1}{k!}f^{(k)}(a)(x-a)^k \bigg) \bigg]\\ &= f'(x) - \bigg( f'(a) + \frac{2}{2} f''(a)(x-a) + \frac{3}{3!} f'''(a)(x-a)^2 + \cdots \\ &\hskip2.5in+ \frac{k}{k!}f^{(k)}(a) (x-a)^{k-1} \bigg)\\ &= f'(x) - \bigg( f'(a) + f''(a)(x-a) + \frac{1}{2} f'''(a)(x-a)^2 +\cdots \\ &\hskip2.5in+ \frac{1}{(k-1)!}f^{(k)}(a)(x-a)^{k-1} \bigg) \end{align*}
Now notice that if we set \(f'(x) = g(x)\) then this becomes
\begin{align*} F'(x) &= g(x) - \bigg( g(a) + g'(a)(x-a) + \frac{1}{2} g''(a)(x-a)^2 + \cdots \\ &\hskip2.5in+ \frac{1}{(k-1)!}g^{(k-1)}(a)(x-a)^{k-1} \bigg) \end{align*}
So \(F'(x)\) is then exactly the remainder formula but for a degree \(k-1\) approximation to the function \(g(x) = f'(x)\text{.}\)
 Hence the function \(F'(q)\) is the remainder when we approximate \(f'(q)\) with a degree \(k-1\) Taylor polynomial. The remainder formula, equation 3.4.33, then tells us that there is a number \(c\) between \(a\) and \(q\) so that
\begin{align*} F'(q) &= g(q) - \bigg( g(a) + g'(a)(q-a) + \frac{1}{2} g''(a)(q-a)^2 + \cdots \\ &\hskip2.5in + \frac{1}{(k-1)!}g^{(k-1)}(a)(q-a)^{k-1} \bigg)\\ &= \frac{1}{k!} g^{(k)}(c) (q-a)^k = \frac{1}{k!} f^{(k+1)}(c)(q-a)^k \end{align*}
Notice that here we have assumed that \(f^{(k+1)}(x)\) exists.
 Now substitute this back into our equation above
\begin{align*} F(x) &= \frac{(x-a)^{k+1}}{(k+1)(q-a)^k} \cdot F'(q)\\ &= \frac{(x-a)^{k+1}}{(k+1)(q-a)^k} \cdot \frac{1}{k!} f^{(k+1)}(c)(q-a)^k\\ &= \frac{1}{(k+1)k!} \cdot f^{(k+1)}(c) \cdot \frac{(x-a)^{k+1}(q-a)^k}{(q-a)^k}\\ &= \frac{1}{(k+1)!} \cdot f^{(k+1)}(c) \cdot(x-a)^{k+1} \end{align*}
as required.
So we now know that
 if, for some \(k\text{,}\) the remainder formula (with \(n=k\)) is true for all \(k\) times differentiable functions,
 then the remainder formula is true (with \(n=k+1\)) for all \(k+1\) times differentiable functions.
Repeatedly applying this for \(k=1,2,3,4,\cdots\) (and recalling that we have shown the remainder formula is true when \(n=0,1\)) gives equation 3.4.33 for all \(n=0,1,2,\cdots\text{.}\)
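The remainder formula can also be checked numerically for a concrete function. Below we take \(f(x)=e^x\text{,}\) \(a=0\) and \(x=0.5\) (an arbitrary illustrative choice), solve \(R=\frac{e^c}{2}x^2\) for \(c\text{,}\) and confirm that \(c\) lands strictly between \(a\) and \(x\text{:}\)

```python
import math

# Check f(x) - T_1(x) = (1/2) f''(c) x^2 for f(x) = e^x about a = 0.
x = 0.5
T1 = 1 + x                       # linear Maclaurin polynomial of e^x
R = math.exp(x) - T1             # the actual remainder
c = math.log(2 * R / x**2)       # solve R = (e^c / 2) * x^2 for c
print(c)                         # lies strictly between 0 and 0.5
```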
Exercises
Exercises for § 3.4.1
Stage 1
The graph below shows three curves. The black curve is \(y=f(x)\text{,}\) the red curve is \(y=g(x)=1+2\sin(1+x)\text{,}\) and the blue curve is \(y=h(x)=0.7\text{.}\) If you want to estimate \(f(0)\text{,}\) what might cause you to use \(g(0)\text{?}\) What might cause you to use \(h(0)\text{?}\)
Stage 2
In this and following sections, we will ask you to approximate the value of several constants, such as \(\log(0.93)\text{.}\) A valid question to consider is why we would ask for approximations of these constants that take lots of time, and are less accurate than what you get from a calculator.
One answer to this question is historical: people were approximating logarithms before they had calculators, and these are some of the ways they did that. Pretend you're on a desert island without any of your usual devices and that you want to make a number of quick and dirty approximate evaluations.
Another reason to make these approximations is technical: how does the calculator get such a good approximation of \(\log(0.93)\text{?}\) The techniques you will learn later on in this chapter give very accurate formulas for approximating functions like \(\log x\) and \(\sin x\text{,}\) which are sometimes used in calculators.
A third reason to make simple approximations of expressions that a calculator could evaluate is to provide a reality check. If you have a ballpark guess for your answer, and your calculator gives you something wildly different, you know to doublecheck that you typed everything in correctly.
For now, questions like Question 3.4.11.2 through Question 3.4.11.4 are simply for you to practice the fundamental ideas we're learning.
Use a constant approximation to estimate the value of \(\log(x)\) when \(x=0.93\text{.}\) Sketch the curve \(y=f(x)\) and your constant approximation.
(Remember we use \(\log x\) to mean the natural logarithm of \(x\text{,}\) \(\log_e x\text{.}\))
Use a constant approximation to estimate \(\arcsin(0.1)\text{.}\)
Use a constant approximation to estimate \(\sqrt{3}\tan(1)\text{.}\)
Stage 3
Use a constant approximation to estimate the value of \(10.1^3\text{.}\) Your estimation should be something you can calculate in your head.
Exercises for § 3.4.2
Stage 1
Suppose \(f(x)\) is a function, and we calculated its linear approximation near \(x=5\) to be \(f(x) \approx 3x-9\text{.}\)
 What is \(f(5)\text{?}\)
 What is \(f'(5)\text{?}\)
 What is \(f(0)\text{?}\)
The curve \(y=f(x)\) is shown below. Sketch the linear approximation of \(f(x)\) about \(x=2\text{.}\)
What is the linear approximation of the function \(f(x)=2x+5\) about \(x=a\text{?}\)
Stage 2
Use a linear approximation to estimate \(\log(x)\) when \(x=0.93\text{.}\) Sketch the curve \(y=f(x)\) and your linear approximation.
Use a linear approximation to estimate \(\sqrt{5}\text{.}\)
Use a linear approximation to estimate \(\sqrt[5]{30}\text{.}\)
Stage 3
Use a linear approximation to estimate \(10.1^3\text{,}\) then compare your estimation with the actual value.
Imagine \(f(x)\) is some function, and you want to estimate \(f(b)\text{.}\) To do this, you choose a value \(a\) and take an approximation (linear or constant) of \(f(x)\) about \(a\text{.}\) Give an example of a function \(f(x)\text{,}\) and values \(a\) and \(b\text{,}\) where the constant approximation gives a more accurate estimation of \(f(b)\) than the linear approximation.
The function
\[ L(x)=\frac{1}{4}x+\frac{4\pi-\sqrt{27}}{12} \nonumber \]
is the linear approximation of \(f(x)=\arctan x\) about what point \(x=a\text{?}\)
Exercises for § 3.4.3
Stage 1
The quadratic approximation of a function \(f(x)\) about \(x=3\) is
\[ f(x) \approx -x^2+6x \nonumber \]
What are the values of \(f(3)\text{,}\) \(f'(3)\text{,}\) \(f''(3)\text{,}\) and \(f'''(3)\text{?}\)
Give a quadratic approximation of \(f(x)=2x+5\) about \(x=a\text{.}\)
Stage 2
Use a quadratic approximation to estimate \(\log(0.93)\text{.}\)
Use a quadratic approximation to estimate \(\cos\left(\dfrac{1}{15}\right)\text{.}\)
Calculate the quadratic approximation of \(f(x)=e^{2x}\) about \(x=0\text{.}\)
Use a quadratic approximation to estimate \(5^{\tfrac{4}{3}}\text{.}\)
Evaluate the expressions below.
 \(\displaystyle\sum_{n=5}^{30} 1\)
 \(\displaystyle\sum_{n=1}^{3} \left[ 2(n+3)-n^2 \right]\)
 \(\displaystyle\sum_{n=1}^{10} \left[\frac{1}{n}-\frac{1}{n+1}\right]\)
 \(\displaystyle\sum_{n=1}^{4}\frac{5\cdot 2^n}{4^{n+1}} \)
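Sums written in sigma notation translate directly into code, which is a handy way to check hand computations. A sketch (using a hypothetical sum that is not one of the parts above):

```python
# Sigma notation translates directly into a sum over a range.
# Example (hypothetical, not from the list above): sum_{n=1}^{4} n^3.
total = sum(n**3 for n in range(1, 5))  # range(1, 5) yields n = 1, 2, 3, 4
print(total)  # 1 + 8 + 27 + 64 = 100
```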
Write the following in sigma notation:
 \(1+2+3+4+5\)
 \(2+4+6+8\)
 \(3+5+7+9+11\)
 \(9+16+25+36+49\)
 \(9+4+16+5+25+6+36+7+49+8\)
 \(8+15+24+35+48\)
 \(3-6+9-12+15-18\)
Stage 3
Use a quadratic approximation of \(f(x)=2\arcsin x\) about \(x=0\) to approximate \(f(1)\text{.}\) What number are you approximating?
Use a quadratic approximation of \(e^x\) to estimate \(e\) as a decimal.
Group the expressions below into collections of equivalent expressions.
 \(\displaystyle\sum_{n=1}^{10} 2n\)
 \(\displaystyle\sum_{n=1}^{10} 2^n\)
 \(\displaystyle\sum_{n=1}^{10} n^2\)
 \(2\displaystyle\sum_{n=1}^{10} n\)
 \(2\displaystyle\sum_{n=2}^{11} (n-1)\)
 \(\displaystyle\sum_{n=5}^{14} (n-4)^2\)
 \(\dfrac{1}{4}\displaystyle\sum_{n=1}^{10}\left( \frac{4^{n+1}}{2^n}\right)\)
Exercises for § 3.4.4
Stage 1
The 3rd degree Taylor polynomial for a function \(f(x)\) about \(x=1\) is
\[ T_3(x)=x^3-5x^2+9x \nonumber \]
What is \(f''(1)\text{?}\)
The \(n\)th degree Taylor polynomial for \(f(x)\) about \(x=5\) is
\[ T_n(x)=\sum_{k=0}^{n} \frac{2k+1}{3k-9}(x-5)^k \nonumber \]
What is \(f^{(10)}(5)\text{?}\)
Stage 3
The 4th-degree Maclaurin polynomial for \(f(x)\) is
\[ T_4(x)=x^4-x^3+x^2-x+1 \nonumber \]
What is the third-degree Maclaurin polynomial for \(f(x)\text{?}\)
The 4th degree Taylor polynomial for \(f(x)\) about \(x=1\) is
\[ T_4(x)=x^4+x^3-9 \nonumber \]
What is the third degree Taylor polynomial for \(f(x)\) about \(x=1\text{?}\)
For any even number \(n\text{,}\) suppose the \(n\)th degree Taylor polynomial for \(f(x)\) about \(x=5\) is
\[ \sum_{k=0}^{n/2} \frac{2k+1}{3k-9}(x-5)^{2k} \nonumber \]
What is \(f^{(10)}(5)\text{?}\)
The third-degree Taylor polynomial for \(f(x)=x^3\left[2\log x - \dfrac{11}{3}\right]\) about \(x=a\) is
\[ T_3(x)=-\frac{2}{3}\sqrt{e^3}+3ex-6\sqrt{e}x^2+x^3 \nonumber \]
What is \(a\text{?}\)
Exercises for § 3.4.5
Stage 1
Give the 16th degree Maclaurin polynomial for \(f(x)=\sin x+ \cos x\text{.}\)
Give the 100th degree Taylor polynomial for \(s(t)=4.9t^2-t+10\) about \(t=5\text{.}\)
Write the \(n\)th-degree Taylor polynomial for \(f(x)=2^x\) about \(x=1\) in sigma notation.
Find the 6th degree Taylor polynomial of \(f(x)=x^2\log x+2x^2+5\) about \(x=1\text{,}\) remembering that \(\log x\) is the natural logarithm of \(x\text{,}\) \(\log_ex\text{.}\)
Give the \(n\)th degree Maclaurin polynomial for \(\dfrac{1}{1-x}\) in sigma notation.
Stage 3
Calculate the \(3\)rd-degree Taylor polynomial for \(f(x)=x^x\) about \(x=1\text{.}\)
Use a 5th-degree Maclaurin polynomial for \(6\arctan x\) to approximate \(\pi\text{.}\)
Write the \(100\)th-degree Taylor polynomial for \(f(x)=x(\log x - 1)\) about \(x=1\) in sigma notation.
Write the \((2n)\)th-degree Taylor polynomial for \(f(x)=\sin x\) about \(x=\dfrac{\pi}{4}\) in sigma notation.
Estimate the sum below
\[ 1+\frac{1}{2}+\frac{1}{3!}+\frac{1}{4!}+\cdots +\frac{1}{157!} \nonumber \]
by interpreting it as a Maclaurin polynomial.
Estimate the sum below
\[ \sum_{k=0}^{100}\frac{(-1)^k}{(2k)!}\left(\frac{5\pi}{4}\right)^{2k} \nonumber \]
by interpreting it as a Maclaurin polynomial.
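The last two exercises turn on recognizing a finite sum as a Maclaurin polynomial evaluated at a point. As an illustration of the idea in code (evaluated at \(x=1\) so as not to give the exercises away), a sketch:

```python
import math

def cos_maclaurin(x, N):
    # Partial sum sum_{k=0}^{N} (-1)^k x^(2k) / (2k)!,
    # i.e. the (2N)th-degree Maclaurin polynomial for cos, evaluated at x.
    return sum((-1)**k * x**(2 * k) / math.factorial(2 * k) for k in range(N + 1))

x = 1.0
approx = cos_maclaurin(x, 5)
print(approx, math.cos(x))  # the two values agree to several decimal places
```

With \(N=5\) (a degree-10 polynomial) the partial sum already matches \(\cos(1)\) to about eight decimal places.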
Exercises for § 3.4.6
Stage 1
In the picture below, label the following:
\[ f(x) \qquad f\left(x+\Delta x\right) \qquad \Delta x \qquad \Delta y \nonumber \]
At this point in the book, every homework problem takes you about 5 minutes. Use the terms you learned in this section to answer the question: if you spend 15 minutes more, how many more homework problems will you finish?
Stage 2
Let \(f(x)=\arctan x\text{.}\)
 Use a linear approximation to estimate \(f(5.1)-f(5)\text{.}\)
 Use a quadratic approximation to estimate \(f(5.1)-f(5)\text{.}\)
When diving off a cliff from \(x\) metres above the water, your speed as you hit the water is given by
\[ s(x)=\sqrt{19.6x}\;\frac{\mathrm{m}}{\mathrm{sec}} \nonumber \]
Your last dive was from a height of 4 metres.
 Use a linear approximation of \(\Delta y\) to estimate how much faster you will be falling when you hit the water if you jump from a height of 5 metres.
 A diver makes three jumps: the first is from \(x\) metres, the second from \(x+\Delta x\) metres, and the third from \(x+2\Delta x\) metres, for some fixed positive values of \(x\) and \(\Delta x\text{.}\) Which is bigger: the increase in terminal speed from the first to the second jump, or the increase in terminal speed from the second to the third jump?
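The quantity being estimated in these parts is \(\Delta y = s(x+\Delta x)-s(x)\approx s'(x)\,\Delta x\text{.}\) A numerical sketch of that comparison, at heights chosen to differ from those in the exercise:

```python
import math

def s(x):
    # Speed (m/s) hitting the water after a dive from x metres, per the formula above.
    return math.sqrt(19.6 * x)

x, dx = 9.0, 1.0                            # heights not used in the exercise
ds_dx = 19.6 / (2 * math.sqrt(19.6 * x))    # s'(x), by the chain rule
delta_y_exact = s(x + dx) - s(x)            # actual change in speed
delta_y_linear = ds_dx * dx                 # linear estimate of the change
print(delta_y_linear, delta_y_exact)        # the two values are close
```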
Exercises for § 3.4.7
Stage 1
Let \(f(x)=7x^23x+4\text{.}\) Suppose we measure \(x\) to be \(x_0 = 2\) but that the real value of \(x\) is \(x_0+\Delta x\text{.}\) Suppose further that the error in our measurement is \(\Delta x = 1\text{.}\) Let \(\Delta y\) be the change in \(f(x)\) corresponding to a change of \(\Delta x \) in \(x_0\text{.}\) That is, \(\Delta y = f\left(x_0+\Delta x\right)f(x_0)\text{.}\)
True or false: \(\Delta y = f'(2)(1)=25\)
Suppose the exact amount you are supposed to tip is $5.83, but you approximate and tip $6. What is the absolute error in your tip? What is the percent error in your tip?
Suppose \(f(x)=3x^25\text{.}\) If you measure \(x\) to be \(10\text{,}\) but its actual value is \(11\text{,}\) estimate the resulting error in \(f(x)\) using the linear approximation, and then the quadratic approximation.
Stage 2
A circular pen is being built on a farm. The pen must contain \(A_0\) square metres, with an error of no more than 2%. Estimate the largest percentage error allowable on the radius.
A circle with radius 3 has a sector cut out of it. It's a smallish sector, no more than a quarter of the circle. You want to find out the area of the sector.
 Suppose the angle of the sector is \(\theta\text{.}\) What is the area of the sector?
 Unfortunately, you don't have a protractor, only a ruler. So, you measure the chord made by the sector (marked \(d\) in the diagram above). What is \(\theta\) in terms of \(d\text{?}\)
 Suppose you measured \(d=0.7\text{,}\) but actually \(d=0.68\text{.}\) Estimate the absolute error in your calculation of the area removed.
A conical tank, standing on its pointy end, has height 2 metres and radius 0.5 metres. Estimate the change in volume of the water in the tank associated with a change in the height of the water from 50 cm to 45 cm.
Stage 3
A sample begins with precisely 1 \(\mu\)g of a radioactive isotope, and after 3 years is measured to have 0.9 \(\mu\)g remaining. If this measurement is correct to within 0.05 \(\mu\)g, estimate the corresponding accuracy of the half-life calculated using it.
Exercises for § 3.4.8
Stage 1
Suppose \(f(x)\) is a function that we approximated by \(F(x)\text{.}\) Further, suppose \(f(10)=3\text{,}\) while our approximation was \(F(10)=5\text{.}\) Let \(R(x)=f(x)-F(x)\text{.}\)
 True or false: \(|R(10)| \leq 7\)
 True or false: \(|R(10)| \leq 8\)
 True or false: \(|R(10)| \leq 9\)
 True or false: \(|R(10)| \leq 100\)
Let \(f(x)=e^x\text{,}\) and let \(T_3(x)\) be the thirddegree Maclaurin polynomial for \(f(x)\text{,}\)
\[ T_3(x)=1+x+\frac{1}{2}x^2+\frac{1}{3!}x^3 \nonumber \]
Use Equation 3.4.33 to give a reasonable bound on the error \(\left|f(2)-T_3(2)\right|\text{.}\) Then, find the error \(\left|f(2)-T_3(2)\right|\) using a calculator.
Let \(f(x)= 5x^3-24x^2+ex-\pi^4\text{,}\) and let \(T_5(x)\) be the fifth-degree Taylor polynomial for \(f(x)\) about \(x=1\text{.}\) Give the best bound you can on the error \(\left|f(37)-T_5(37)\right|\text{.}\)
You and your friend both want to approximate \(\sin(33)\text{.}\) Your friend uses the first-degree Maclaurin polynomial for \(f(x)=\sin x\text{,}\) while you use the zeroth-degree (constant) Maclaurin polynomial for \(f(x)=\sin x\text{.}\) Who has a better approximation, you or your friend?
Stage 2
Suppose a function \(f(x)\) has sixth derivative
\[ f^{(6)}(x)=\dfrac{6!(2x-5)}{x+3}. \nonumber \]
Let \(T_5(x)\) be the 5th-degree Taylor polynomial for \(f(x)\) about \(x=11\text{.}\)
Give a bound for the error \(\left|f(11.5)-T_5(11.5)\right|\text{.}\)
Let \(f(x)= \tan x\text{,}\) and let \(T_2(x)\) be the seconddegree Taylor polynomial for \(f(x)\) about \(x=0\text{.}\) Give a reasonable bound on the error \(f(0.1)T(0.1)\) using Equation 3.4.33.
Let \(f(x)=\log (1x)\text{,}\) and let \(T_5(x)\) be the fifthdegree Maclaurin polynomial for \(f(x)\text{.}\) Use Equation 3.4.33 to give a bound on the error \(f\left(\frac{1}{4}\right)T_5\left(\frac{1}{4}\right)\text{.}\)
(Remember \(\log x=\log_ex\text{,}\) the natural logarithm of \(x\text{.}\))
Let \(f(x)=\sqrt[5]{x}\text{,}\) and let \(T_3(x)\) be the thirddegree Taylor polynomial for \(f(x)\) about \(x=32\text{.}\) Give a bound on the error \(f(30)T_3(30)\text{.}\)
Let
\[ f(x)= \sin\left(\dfrac{1}{x}\right), \nonumber \]
and let \(T_1(x)\) be the first-degree Taylor polynomial for \(f(x)\) about \(x=\dfrac{1}{\pi}\text{.}\) Give a bound on the error \(\left|f(0.01)-T_1(0.01)\right|\text{,}\) using Equation 3.4.33. You may leave your answer in terms of \(\pi\text{.}\)
Then, give a reasonable bound on the error \(\left|f(0.01)-T_1(0.01)\right|\text{.}\)
Let \(f(x)=\arcsin x\text{,}\) and let \(T_2(x)\) be the seconddegree Maclaurin polynomial for \(f(x)\text{.}\) Give a reasonable bound on the error \(\leftf\left(\frac{1}{2}\right)T_2\left(\frac{1}{2}\right)\right\) using Equation 3.4.33. What is the exact value of the error \(\leftf\left(\frac{1}{2}\right)T_2\left(\frac{1}{2}\right)\right\text{?}\)
Stage 3
Let \(f(x)=\log(x)\text{,}\) and let \(T_n(x)\) be the \(n\)thdegree Taylor polynomial for \(f(x)\) about \(x=1\text{.}\) You use \(T_n(1.1)\) to estimate \(\log (1.1)\text{.}\) If your estimation needs to have an error of no more than \(10^{4}\text{,}\) what is an acceptable value of \(n\) to use?
Give an estimation of \(\sqrt[7]{2200}\) using a Taylor polynomial. Your estimation should have an error of less than 0.001.
Use Equation 3.4.33 to show that
\[ \frac{4241}{5040}\leq\sin(1) \leq\frac{4243}{5040} \nonumber \]
In this question, we use the remainder of a Maclaurin polynomial to approximate \(e\text{.}\)
 Write out the 4th degree Maclaurin polynomial \(T_4(x)\) of the function \(e^x\text{.}\)
 Compute \(T_4(1)\text{.}\)
 Use your answer from 3.4.11.14.b to conclude \(\dfrac{326}{120} \lt e \lt \dfrac{325}{119}\text{.}\)
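Error bounds of the sort used throughout these exercises are easy to sanity-check numerically. A sketch (for \(f(x)=\sin x\) and its third-degree Maclaurin polynomial, a case similar to but not identical with the exercises above, using the fact that every derivative of \(\sin\) is bounded by 1):

```python
import math

def T3(x):
    # Third-degree Maclaurin polynomial of sin: T3(x) = x - x^3/3!
    return x - x**3 / 6

x = 0.5
actual_error = abs(math.sin(x) - T3(x))
# Equation-3.4.33-style bound: |sin(x) - T3(x)| <= M/4! * |x|^4 with M = 1,
# since the fourth derivative of sin is sin itself, and |sin(c)| <= 1 for all c.
bound = abs(x)**4 / math.factorial(4)
print(actual_error, bound)  # the actual error is much smaller than the bound
```

Here the actual error is far below the bound, because the \(x^4\) coefficient of \(\sin\)'s Maclaurin series happens to vanish; the bound is still correct, just not tight.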
Further problems for § 3.4
Stage 1
Consider a function \(f(x)\) whose thirddegree Maclaurin polynomial is \(4 + 3x^2 + \frac{1}{2}x^3\text{.}\) What is \(f'(0)\text{?}\) What is \(f''(0)\text{?}\)
Consider a function \(h(x)\) whose third-degree Maclaurin polynomial is \(1+4x-\dfrac{1}{3}x^2 + \dfrac{2}{3}x^3\text{.}\) What is \(h^{(3)}(0)\text{?}\)
The third-degree Taylor polynomial of \(h(x)\) about \(x=2\) is \(3 + \dfrac{1}{2}(x-2) + 2(x-2)^3\text{.}\)
What is \(h'(2)\text{?}\) What is \(h''(2)\text{?}\)
Stage 2
The function \(f(x)\) has the property that \(f(3)=2,\ f'(3)=4\) and \(f''(3)=-10\text{.}\)
 Use the linear approximation to \(f(x)\) centred at \(x=3\) to approximate \(f(2.98)\text{.}\)
 Use the quadratic approximation to \(f(x)\) centred at \(x=3\) to approximate \(f(2.98)\text{.}\)
Use the tangent line to the graph of \(y = x^{1/3}\) at \(x = 8\) to find an approximate value for \(10^{1/3}\text{.}\) Is the approximation too large or too small?
Estimate \(\sqrt{2}\) using a linear approximation.
Estimate \(\sqrt[3]{26}\) using a linear approximation.
Estimate \((10.1)^5\) using a linear approximation.
Estimate \(\sin\left(\dfrac{101\pi}{100}\right)\) using a linear approximation. (Leave your answer in terms of \(\pi\text{.}\))
Use a linear approximation to estimate \(\arctan(1.1)\text{,}\) using \(\arctan 1 = \dfrac{\pi}{4}\text{.}\)
Use a linear approximation to estimate \((2.001)^3\text{.}\) Write your answer in the form \(n/1000\) where \(n\) is an integer.
Using a suitable linear approximation, estimate \((8.06)^{2/3}\text{.}\) Give your answer as a fraction in which both the numerator and denominator are integers.
Find the third-order Taylor polynomial for \(f(x)=(1  3x)^{1/3}\) around \(x = 0\text{.}\)
Consider a function \(f(x)\) which has \(f^{(3)}(x)=\dfrac{x}{22-x^2}\text{.}\) Show that when we approximate \(f(2)\) using its second degree Taylor polynomial at \(a=1\text{,}\) the absolute value of the error is less than \(\frac{1}{50}=0.02\text{.}\)
Consider a function \(f(x)\) which has \(f^{(4)}(x)=\dfrac{\cos(x^2)}{3-x}\text{.}\) Show that when we approximate \(f(0.5)\) using its third-degree Maclaurin polynomial, the absolute value of the error is less than \(\frac{1}{500}=0.002\text{.}\)
Consider a function \(f(x)\) which has \(f^{(3)}(x)=\dfrac{e^{-x}}{8+x^2}\text{.}\) Show that when we approximate \(f(1)\) using its second degree Maclaurin polynomial, the absolute value of the error is less than \(1/40\text{.}\)
 By using an appropriate linear approximation for \(f(x)=x^{1/3}\text{,}\) estimate \(5^{2/3}\text{.}\)
 Improve your answer in 3.4.11.17.a by making a quadratic approximation.
 Obtain an error estimate for your answer in 3.4.11.17.a (not just by comparing with your calculator's answer for \(5^{2/3}\)).
Stage 3
The 4th degree Maclaurin polynomial for \(f(x)\) is
\[ T_4(x)=5x^2-9 \nonumber \]
What is the third degree Maclaurin polynomial for \(f(x)\text{?}\)
The equation \(y^4+xy=x^2-1\) defines \(y\) implicitly as a function of \(x\) near the point \(x=2,\ y=1\text{.}\)
 Use the tangent line approximation at the given point to estimate the value of \(y\) when \(x=2.1\text{.}\)
 Use the quadratic approximation at the given point to estimate the value of \(y\) when \(x=2.1\text{.}\)
 Make a sketch showing how the curve relates to the tangent line at the given point.
The equation \(x^4+y+xy^4=1\) defines \(y\) implicitly as a function of \(x\) near the point \(x=-1, y=1\text{.}\)
 Use the tangent line approximation at the given point to estimate the value of \(y\) when \(x=-0.9\text{.}\)
 Use the quadratic approximation at the given point to get another estimate of \(y\) when \(x=-0.9\text{.}\)
 Make a sketch showing how the curve relates to the tangent line at the given point.
Given that \(\log 10\approx 2.30259\text{,}\) estimate \(\log 10.3\) using a suitable tangent line approximation. Give an upper and lower bound for the error in your approximation by using a suitable error estimate.
Consider \(f(x)=e^{e^x}\text{.}\)
 Give the linear approximation for \(f\) near \(x=0\) (call this \(L(x)\)).
 Give the quadratic approximation for \(f\) near \(x=0\) (call this \(Q(x)\)).
 Prove that \(L(x) \lt Q(x) \lt f(x)\) for all \(x \gt 0\text{.}\)
 Find an interval of length at most \(0.01\) that is guaranteed to contain the number \(e^{0.1}\text{.}\)