# 1.6: Continuity


We have seen that computing the limits of some functions — polynomials and rational functions — is very easy because

\begin{align*} \lim_{x \to a} f(x) &= f(a). \end{align*}

That is, the limit as \(x\) approaches \(a\) is just \(f(a)\text{.}\) Roughly speaking, the reason we can compute the limit this way is that these functions do not have any abrupt jumps near \(a\text{.}\)

Many other functions have this property, \(\sin(x)\) for example. A function with this property is called “continuous” and there is a precise mathematical definition for it. If you do not recall interval notation, then now is a good time to take a quick look back at Definition 0.3.5.

A function \(f(x)\) is continuous at \(a\) if

\begin{align*} \lim_{x \to a} f(x) &= f(a). \end{align*}

If a function is not continuous at \(a\) then it is said to be discontinuous at \(a\text{.}\)

When we write that \(f\) is continuous without specifying a point, then typically this means that \(f\) is continuous at \(a\) for all \(a \in \mathbb{R}\text{.}\)

When we write that \(f(x)\) is continuous on the open interval \((a,b)\) then the function is continuous at every point \(c\) satisfying \(a \lt c \lt b\text{.}\)

So if a function is continuous at \(x=a\) we immediately know that

- \(f(a)\) exists
- \(\displaystyle \lim_{x \to a^-} f(x)\) exists and is equal to \(f(a)\text{,}\) and
- \(\displaystyle \lim_{x \to a^+} f(x)\) exists and is equal to \(f(a)\text{.}\)

## Quick Aside — One-sided Continuity

Notice in the above definition of continuity on an interval \((a,b)\) we have carefully avoided saying anything about whether or not the function is continuous at the endpoints of the interval — i.e. is \(f(x)\) continuous at \(x=a\) or \(x=b\text{.}\) This is because talking of continuity at the endpoints of an interval can be a little delicate.

In many situations we will be given a function \(f(x)\) defined on a closed interval \([a,b]\text{.}\) For example, we might have:

\begin{align*} f(x) &= \frac{x+1}{x+2} & \text{for } x \in [0,1]. \end{align*}

For any \(0 \leq x \leq 1\) we know the value of \(f(x)\text{.}\) However for \(x \lt 0\) or \(x \gt 1\) we know nothing about the function — indeed it has not been defined.

So now, consider what it means for \(f(x)\) to be continuous at \(x=0\text{.}\) We need to have

\begin{align*} \lim_{x\to 0} f(x) &= f(0), \end{align*}

and this in turn requires that the one-sided limits satisfy

\begin{align*} \lim_{x\to 0^+} f(x) &= f(0) & \text{and}&& \lim_{x\to 0^-} f(x) &= f(0) \end{align*}

Now the first of these one-sided limits involves examining the behaviour of \(f(x)\) for \(x \gt 0\text{.}\) Since this involves looking at points for which \(f(x)\) is defined, this is something we can do. On the other hand the second one-sided limit requires us to understand the behaviour of \(f(x)\) for \(x \lt 0\text{.}\) This we cannot do because the function hasn't been defined for \(x \lt 0\text{.}\)

One way around this problem is to generalise the idea of continuity to one-sided continuity, just as we generalised limits to get one-sided limits.

A function \(f(x)\) is continuous from the right at \(a\) if

\begin{align*} \lim_{x\to a^+} f(x) &= f(a). \end{align*}

Similarly a function \(f(x)\) is continuous from the left at \(a\) if

\begin{align*} \lim_{x\to a^-} f(x) &= f(a). \end{align*}

Using the definition of one-sided continuity we can now define what it means for a function to be continuous on a closed interval.

A function \(f(x)\) is continuous on the closed interval \([a,b]\) when

- \(f(x)\) is continuous on \((a,b)\text{,}\)
- \(f(x)\) is continuous from the right at \(a\text{,}\) and
- \(f(x)\) is continuous from the left at \(b\text{.}\)

Note that the last two conditions are equivalent to

\begin{align*} \lim_{x\to a^+} f(x) &= f(a) & \text{ and }&& \lim_{x\to b^-} f(x) &= f(b). \end{align*}

## Back to the Main Text

We already know from our work above that polynomials are continuous, and that rational functions are continuous at all points in their domains — i.e. where their denominators are non-zero. As we did for limits, we will see that continuity interacts “nicely” with arithmetic. This will allow us to construct complicated continuous functions from simpler continuous building blocks (like polynomials).

But first, a few examples…

Consider the functions drawn below

These are

\begin{align*} f(x) &= \begin{cases} x&x \lt 1 \\ x+2 & x\geq 1 \end{cases}\\ g(x) &= \begin{cases} 1/x^2& x\neq0 \\ 0 & x=0\end{cases}\\ h(x) &= \begin{cases}\frac{x^3-x^2}{x-1} & x\neq 1 \\ 0 & x=1 \end{cases} \end{align*}

Determine where they are continuous and discontinuous:

- When \(x \lt 1\text{,}\) \(f(x)\) is a straight line (and so a polynomial) and so it is continuous at every point \(x \lt 1\text{.}\) Similarly when \(x \gt 1\) the function is a straight line and so it is continuous at every point \(x \gt 1\text{.}\) The only point which might be a discontinuity is at \(x=1\text{.}\) The one-sided limits there are different: \(\lim_{x \to 1^-} f(x) = 1\) while \(\lim_{x \to 1^+} f(x) = 3\text{.}\) Hence the limit at \(x=1\) does not exist and so the function is discontinuous at \(x=1\text{.}\)
But note that \(f(x)\) is continuous from one side — which?

- The middle case is much like the previous one. When \(x \neq 0\text{,}\) \(g(x)\) is a rational function and so is continuous everywhere on its domain (which is all reals except \(x=0\)). Thus the only point where \(g(x)\) might be discontinuous is at \(x=0\text{.}\) We see that neither of the one-sided limits exist at \(x=0\text{,}\) so the limit does not exist at \(x=0\text{.}\) Hence the function is discontinuous at \(x=0\text{.}\)
- We have seen the function \(h(x)\) before. By the same reasoning as above, we know it is continuous except at \(x=1\) which we must check separately.
By definition of \(h(x)\text{,}\) \(h(1) = 0\text{.}\) We must compare this to the limit as \(x \to 1\text{.}\) We did this before.

\begin{align*} \frac{x^3-x^2}{x-1} &= \frac{x^2(x-1)}{x-1} = x^2 \end{align*}

So \(\lim_{x \to 1} \frac{x^3-x^2}{x-1} = \lim_{x \to 1} x^2 = 1\neq h(1)\text{.}\) Hence \(h\) is discontinuous at \(x=1\text{.}\)
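These three discontinuities can also be probed numerically. The following sketch (a quick check in Python, not part of the text's argument) evaluates each function slightly to the left and right of the suspect point:

```python
# Numerical probe of the three piecewise functions from the example above.
def f(x):
    return x if x < 1 else x + 2

def g(x):
    return 1 / x**2 if x != 0 else 0.0

def h(x):
    return (x**3 - x**2) / (x - 1) if x != 1 else 0.0

eps = 1e-6  # small step used to approximate each one-sided limit

# f jumps at x = 1: the one-sided limits differ (1 vs 3).
print(f(1 - eps), f(1 + eps))

# g blows up at x = 0: values grow without bound on both sides.
print(g(eps), g(-eps))

# h has limit 1 as x -> 1, but h(1) = 0: a removable discontinuity.
print(h(1 - eps), h(1 + eps), h(1))
```

Of course a numerical probe is only suggestive; the limit computations in the text are what actually establish each discontinuity.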

This example illustrates different sorts of discontinuities:

- The function \(f(x)\) has a “jump discontinuity” because the function “jumps” from one finite value on the left to another value on the right.
- The second function, \(g(x)\text{,}\) has an “infinite discontinuity” since \(\lim_{x\to 0} g(x) =+\infty\text{.}\)
- The third function, \(h(x)\text{,}\) has a “removable discontinuity” because we could make the function continuous at that point by redefining the function at that point. i.e. setting \(h(1)=1\text{.}\) That is
\begin{align*} \text{new function }h(x) &= \begin{cases} \frac{x^3-x^2}{x-1} & x\neq 1\\ 1 & x=1 \end{cases} \end{align*}

Showing a function is continuous can be a pain, but just as the limit laws help us compute complicated limits in terms of simpler limits, we can use them to show that complicated functions are continuous by breaking them into simpler pieces.

Let \(a,c \in \mathbb{R}\) and let \(f(x)\) and \(g(x)\) be functions that are continuous at \(a\text{.}\) Then the following functions are also continuous at \(x=a\text{:}\)

- \(f(x) + g(x)\) and \(f(x) - g(x)\text{,}\)
- \(c f(x)\) and \(f(x) g(x)\text{,}\) and
- \(\frac{f(x)}{g(x)}\) provided \(g(a) \neq 0\text{.}\)

Above we stated that polynomials and rational functions are continuous (being careful about domains of rational functions — we must avoid the denominators being zero) without making it a formal statement. This is easily fixed…

Let \(c \in \mathbb{R}\text{.}\) The functions

\begin{align*} f(x) &= x & g(x) &= c \end{align*}

are continuous everywhere on the real line.

This isn't quite the result we wanted (that's a couple of lines below) but it is a small result that we can combine with the arithmetic of limits to get the result we want. Such small helpful results are called “lemmas” and they will arise more as we go along.

Now since we can obtain any polynomial and any rational function by carefully adding, subtracting, multiplying and dividing the functions \(f(x)=x\) and \(g(x)=c\text{,}\) the above lemma combines with the “arithmetic of continuity” theorem to give us the result we want:

Every polynomial is continuous everywhere. Similarly every rational function is continuous except where its denominator is zero (i.e. it is continuous on all of its domain).

With some more work this result can be extended to wider families of functions:

The following functions are continuous everywhere in their domains

- polynomials, rational functions
- roots and powers
- trig functions and their inverses
- exponential and the logarithm

We haven't encountered inverse trigonometric functions, nor exponential functions or logarithms, but we will see them in the next chapter. For the moment, just file the information away.

Using a combination of the above results you can show that many complicated functions are continuous except at a few points (usually where a denominator is equal to zero).

Where is the function \(f(x) = \frac{\sin(x)}{2+\cos(x)}\) continuous?

We just break things down into pieces and then put them back together keeping track of where things might go wrong.

- The function is a ratio of two pieces — so check if the numerator is continuous, the denominator is continuous, and if the denominator might be zero.
- The numerator is \(\sin(x)\) which is “continuous on its domain” according to one of the above theorems. Its domain is all real numbers, so it is continuous everywhere. No problems here.
- The denominator is the sum of \(2\) and \(\cos(x)\text{.}\) Since \(2\) is a constant it is continuous everywhere. Similarly (we just checked things for the previous point) we know that \(\cos(x)\) is continuous everywhere. Hence the denominator is continuous.
- So we just need to check if the denominator is zero. One of the facts that we should know is that

\begin{gather*} -1 \leq \cos(x) \leq 1 \end{gather*}

and so by adding 2 we get

\begin{gather*} 1 \leq 2+\cos(x) \leq 3 \end{gather*}

- So the numerator is continuous, the denominator is continuous and nowhere zero, so the function is continuous everywhere.

If the function were changed to \(\displaystyle \frac{\sin(x)}{x^2-5x+6}\) much of the same reasoning can be used. Being a little terse we could answer with:

- Numerator and denominator are continuous.
- Since \(x^2-5x+6=(x-2)(x-3)\) the denominator is zero when \(x=2,3\text{.}\)
- So the function is continuous everywhere except possibly at \(x=2,3\text{.}\) In order to verify that the function really is discontinuous at those points, it suffices to verify that the numerator is non-zero at \(x=2,3\text{.}\) Indeed we know that \(\sin(x)\) is zero only when \(x = n\pi\) (for any integer \(n\)). Hence \(\sin(2),\sin(3) \neq 0\text{.}\) Thus the numerator is non-zero, while the denominator is zero and hence \(x=2,3\) really are points of discontinuity.
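A quick numerical check (a Python sketch, not something the written solution needs) confirms that the denominator vanishes at \(x=2,3\) while the numerator does not:

```python
import math

def denom(x):
    return x**2 - 5*x + 6   # = (x - 2)(x - 3)

# The denominator is zero exactly at x = 2 and x = 3 ...
print(denom(2), denom(3))    # -> 0 0
# ... while sin(2) and sin(3) are non-zero, so both points are
# genuine discontinuities of sin(x) / (x^2 - 5x + 6).
print(math.sin(2), math.sin(3))
```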

Note that this example raises a subtle point about checking continuity when numerator and denominator are *simultaneously* zero. There are quite a few possible outcomes in this case and we need more sophisticated tools to adequately analyse the behaviour of functions near such points. We will return to this question later in the text after we have developed Taylor expansions (see Section 3.4).

So we know what happens when we add, subtract, multiply and divide, but what about when we compose functions? Well, limits and compositions work nicely when things are continuous.

If \(f\) is continuous at \(b\) and \(\displaystyle \lim_{x \to a} g(x) = b\) then \(\displaystyle \lim_{x\to a} f(g(x)) = f(b)\text{.}\) I.e.

\begin{align*} \lim_{x \to a} f\left( g(x) \right) &= f\left( \lim_{x \to a} g(x) \right) \end{align*}

Hence if \(g\) is continuous at \(a\) and \(f\) is continuous at \(g(a)\) then the composite function \((f \circ g)(x) = f(g(x))\) is continuous at \(a\text{.}\)

So when we compose two continuous functions we get a new continuous function.

We can put this to use

Where are the following functions continuous?

\begin{align*} f(x) &= \sin\left( x^2 +\cos(x) \right)\\ g(x) &= \sqrt{\sin(x)} \end{align*}

Our first step should be to break the functions down into pieces and study them. When we put them back together we should be careful of dividing by zero, or falling outside the domain.

- The function \(f(x)\) is the composition of \(\sin(x)\) with \(x^2+\cos(x)\text{.}\)
- These pieces, \(\sin(x), x^2, \cos(x)\) are continuous everywhere.
- So the sum \(x^2+\cos(x)\) is continuous everywhere
- And hence the composition of \(\sin(x)\) and \(x^2+\cos(x)\) is continuous everywhere.

The second function is a little trickier.

- The function \(g(x)\) is the composition of \(\sqrt{x}\) with \(\sin(x)\text{.}\)
- \(\sqrt{x}\) is continuous on its domain \(x \geq 0\text{.}\)
- \(\sin(x)\) is continuous everywhere, but it is negative in many places.
- In order for \(g(x)\) to be defined and continuous we must restrict \(x\) so that \(\sin(x) \geq 0\text{.}\)
- Recall the graph of \(\sin(x)\text{:}\)
Hence \(\sin(x)\geq 0\) when \(x\in[0,\pi]\) or \(x\in [2\pi,3\pi]\) or \(x\in[-2\pi,-\pi]\) or…. To be more precise, \(\sin(x)\) is non-negative when \(x \in [2n\pi,(2n+1)\pi]\) for any integer \(n\text{.}\)

- Hence \(g(x)\) is continuous when \(x \in [2n\pi,(2n+1)\pi]\) for any integer \(n\text{.}\)
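The domain restriction can be checked numerically. Here is a small Python sketch (the explicit guard against negative \(\sin(x)\) is our own addition, just to make the domain visible):

```python
import math

def g(x):
    """sqrt(sin(x)), defined only where sin(x) >= 0."""
    s = math.sin(x)
    if s < 0:
        raise ValueError("sin(x) < 0: outside the domain of g")
    return math.sqrt(s)

# Defined on [0, pi], [2*pi, 3*pi], ... where sin(x) >= 0:
print(g(math.pi / 2))       # sin = 1 here, so g = 1
print(g(2.5 * math.pi))     # same point of the sine wave, one period later
# But on (pi, 2*pi) the function is undefined:
try:
    g(1.5 * math.pi)
except ValueError as err:
    print(err)
```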

Continuous functions are very nice (mathematically speaking). Functions from the “real world” tend to be continuous (though not always). The key aspect that makes them nice is the fact that they don't jump about.

The absence of such jumps leads to the following theorem which, while it can be quite confusing on first glance, actually says something very natural — obvious even. It says, roughly speaking, that, as you draw the graph \(y=f(x)\) starting at \(x=a\) and ending at \(x=b\text{,}\) \(y\) changes continuously from \(y=f(a)\) to \(y=f(b)\text{,}\) with no jumps, and consequently \(y\) must take every value between \(f(a)\) and \(f(b)\) at least once. We'll start by just giving the precise statement and then we'll explain it in detail.

Let \(a \lt b\) and let \(f\) be a function that is continuous at all points \(a\leq x \leq b\text{.}\) If \(Y\) is any number between \(f(a)\) and \(f(b)\) then there exists some number \(c \in [a,b]\) so that \(f(c) = Y\text{.}\)

Like the \(\epsilon-\delta\) definition of limits, we should break this theorem down into pieces. Before we do that, keep the following pictures in mind.

Now the break-down

- *Let \(a \lt b\) and let \(f\) be a function that is continuous at all points \(a\leq x \leq b\text{.}\)* — This is setting the scene. We have \(a,b\) with \(a \lt b\) (we can safely assume these to be real numbers). Our function must be continuous at all points between \(a\) and \(b\text{.}\)
- *If \(Y\) is any number between \(f(a)\) and \(f(b)\)* — Now we need another number \(Y\) and the only restriction on it is that it lies between \(f(a)\) and \(f(b)\text{.}\) That is, if \(f(a)\leq f(b)\) then \(f(a) \leq Y \leq f(b)\text{.}\) Or if \(f(a) \geq f(b)\) then \(f(a) \geq Y \geq f(b)\text{.}\) So notice that \(Y\) could be equal to \(f(a)\) or \(f(b)\) — if we wanted to avoid that possibility, then we would normally explicitly say \(Y \neq f(a), f(b)\) or we would write that \(Y\) is *strictly* between \(f(a)\) and \(f(b)\text{.}\)
- *there exists some number \(c \in [a,b]\) so that \(f(c) = Y\)* — so if we satisfy all of the above conditions, then there has to be some real number \(c\) lying between \(a\) and \(b\) so that when we evaluate \(f(c)\) it is \(Y\text{.}\)

So that breaks the theorem down statement by statement, but what does it actually mean?

- Draw any continuous function you like between \(a\) and \(b\) — it must be continuous.
- The function takes the value \(f(a)\) at \(x=a\) and \(f(b)\) at \(x=b\) — see the left-hand figure above.
- Now we can pick any \(Y\) that lies between \(f(a)\) and \(f(b)\) — see the middle figure above. The IVT tells us that there must be some \(x\)-value that when plugged into the function gives us \(Y\text{.}\) That is, there is some \(c\) between \(a\) and \(b\) so that \(f(c) = Y\text{.}\) We can also interpret this graphically; the IVT tells us that the horizontal straight line \(y=Y\) must intersect the graph \(y=f(x)\) at some point \((c,Y)\) with \(a\le c\le b\text{.}\)
- Notice that the IVT does not tell us how many such \(c\)-values there are, just that there is at least one of them. See the right-hand figure above. For that particular choice of \(Y\) there are three different \(c\)-values so that \(f(c_1) = f(c_2) = f(c_3) = Y\text{.}\)

This theorem says that if \(f(x)\) is a continuous function on all of the interval \(a \leq x \leq b\) then as \(x\) moves from \(a\) to \(b\text{,}\) \(f(x)\) takes every value between \(f(a)\) and \(f(b)\) at least once. To put this slightly differently, if \(f\) were to avoid a value between \(f(a)\) and \(f(b)\) then \(f\) cannot be continuous on \([a,b]\text{.}\)

It is not hard to convince yourself that the continuity of \(f\) is crucial to the IVT. Without it one can quickly construct examples of functions that contradict the theorem. See the figure below for a few non-continuous examples:

In the left-hand example we see that a discontinuous function can “jump” over the \(Y\)-value we have chosen, so there is no \(x\)-value that makes \(f(x)=Y\text{.}\) The right-hand example demonstrates why we need to be careful with the ends of the interval. In particular, a function must be continuous over the whole interval \([a,b]\) *including* the end-points of the interval. If we only required the function to be continuous on \((a,b)\) (so strictly between \(a\) and \(b\)) then the function could “jump” over the \(Y\)-value at \(a\) or \(b\text{.}\)

If you are still confused then here is a “real-world” example.

You are climbing the Grouse-grind with a friend — call him Bob. Bob was eager and started at 9am. Bob, while very eager, is also very clumsy; he sprained his ankle somewhere along the path, stopped moving at 9:21am, and is just sitting enjoying the view. You get there late and start climbing at 10am and being quite fit you get to the top at 11am. The IVT implies that at some time between 10am and 11am you meet up with Bob.

You can translate this situation into the form of the IVT as follows. Let \(t\) be time and let \(a = \) 10am and \(b=\) 11am. Let \(g(t)\) be your distance along the trail. Hence \(g(a) = 0\) and \(g(b) = 2.9km\text{.}\) Since you are a mortal, your position along the trail is a continuous function — no helicopters or teleportation or… We have no idea where Bob is sitting, except that he is somewhere between \(g(a)\) and \(g(b)\text{,}\) call this point \(Y\text{.}\) The IVT guarantees that there is some time \(c\) between \(a\) and \(b\) (so between 10am and 11am) with \(g(c) = Y\) (and your position will be the same as Bob's).

Aside from finding Bob sitting by the side of the trail, one of the most important applications of the IVT is determining where a function is zero. For quadratics we know (or should know) that

\begin{align*} ax^2+bx+c &= 0 & \text{ when } x &= \frac{-b \pm \sqrt{b^2-4ac}}{2a} \end{align*}

While the Babylonians could (mostly, but not quite) do the above, the corresponding formula for solving a cubic is uglier and that for a quartic is uglier still. One of the most famous results in mathematics demonstrates that no such formula exists for quintics or higher degree polynomials.
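For concreteness, the quadratic formula above is easy to turn into code. Here is a minimal Python sketch (the function name is ours; it uses complex square roots so that a negative discriminant is handled as well):

```python
import cmath  # complex sqrt, so negative discriminants work too

def quadratic_roots(a, b, c):
    """Both roots of a*x^2 + b*x + c = 0, assuming a != 0."""
    disc = cmath.sqrt(b**2 - 4*a*c)
    return (-b + disc) / (2*a), (-b - disc) / (2*a)

# x^2 - 5x + 6 = (x - 2)(x - 3) has roots 3 and 2.
print(quadratic_roots(1, -5, 6))
```

No such closed-form helper is possible in general for degree five and up, which is exactly why the numerical methods discussed next matter.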

So even for polynomials we cannot, in general, write down explicit formulae for their zeros and have to make do with numerical approximations — i.e. write down the root as a decimal expansion to whatever precision we desire. For more complicated functions we have no choice — there is no reason that the zeros should be expressible as nice neat little formulas. At the same time, finding the zeros of a function:

\begin{align*} f(x) &= 0 \end{align*}

or solving equations of the form

\begin{align*} g(x) &= h(x) \end{align*}

can be a crucial step in many mathematical proofs and applications.

For this reason there is a considerable body of mathematics which focuses just on finding the zeros of functions. The IVT provides a very simple way to “locate” the zeros of a function. In particular, if we know that a continuous function \(f(x)\) is negative at a point \(x=a\) and positive at another point \(x=b\text{,}\) then there must (by the IVT) be a point \(x=c\) between \(a\) and \(b\) where \(f(c)=0\text{.}\)

Consider the leftmost of the above figures. It depicts a continuous function that is negative at \(x=a\) and positive at \(x=b\text{.}\) So choose \(Y=0\) and apply the IVT — there must be some \(c\) with \(a \leq c \leq b\) so that \(f(c) = Y = 0\text{.}\) While this doesn't tell us \(c\) exactly, it does give us bounds on the possible positions of at least one zero — there must be at least one \(c\) obeying \(a \le c \le b\text{.}\)

See the middle figure. To get better bounds we could test a point half-way between \(a\) and \(b\text{.}\) So set \(a' = \frac{a+b}{2}\text{.}\) In this example we see that \(f(a')\) is negative. Applying the IVT again tells us there is some \(c\) between \(a'\) and \(b\) so that \(f(c) = 0\text{.}\) Again — we don't have \(c\) exactly, but we have halved the range of values it could take.

Look at the rightmost figure and do it again — test the point half-way between \(a'\) and \(b\text{.}\) In this example we see that \(f(b')\) is positive. Applying the IVT tells us that there is some \(c\) between \(a'\) and \(b'\) so that \(f(c) = 0\text{.}\) This new range is a quarter of the length of the original. If we keep doing this process the range will halve each time until we know that the zero is inside some tiny range of possible values. This process is called the bisection method.

Consider the following zero-finding example

Show that the function \(f(x) = x-1+\sin(\pi x/2)\) has a zero in \(0 \leq x \leq 1\text{.}\)

This question has been set up nicely to lead us towards using the IVT; we are already given a nice interval on which to look. In general we might have to test a few points and experiment a bit with a calculator before we can start narrowing down a range.

Let us start by testing the endpoints of the interval we are given

\begin{align*} f(0) &= 0 - 1 + \sin(0) = -1 \lt 0\\ f(1) &= 1-1+\sin(\pi/2) = 1 \gt 0 \end{align*}

So we know a point where \(f\) is positive and one where it is negative. So by the IVT there is a point in between where it is zero.

*BUT* in order to apply the IVT we have to show that the function is continuous, and we cannot simply write

it is continuous

We need to explain to the reader *why* it is continuous. That is — we have to prove it.

So to write up our answer we can put something like the following — keeping in mind we need to tell the reader what we are doing so they can follow along easily.

- We will use the IVT to prove that there is a zero in \([0,1]\text{.}\)
- First we must show that the function is continuous.
- Since \(x-1\) is a polynomial it is continuous everywhere.
- The function \(\sin(\pi x/2)\) is a trigonometric function and is also continuous everywhere.
- The sum of two continuous functions is also continuous, so \(f(x)\) is continuous everywhere.

- Let \(a=0, b=1\text{,}\) then
\begin{align*} f(0) &= 0 - 1 + \sin(0) = -1 \lt 0\\ f(1) &= 1-1+\sin(\pi/2) = 1 \gt 0 \end{align*}

- The function is negative at \(x=0\) and positive at \(x=1\text{.}\) Since the function is continuous we know there is a point \(c \in [0,1]\) so that \(f(c) = 0\text{.}\)

Notice that though we have not used full sentences in our explanation here, we are still using words. Your mathematics, unless it is very straightforward computation, should contain words as well as symbols.

The zero of this function is actually located at about \(x=0.4053883559\text{.}\)

The bisection method is really just the idea that we can keep repeating the above reasoning (with a calculator handy). Each iteration will tell us the location of the zero more precisely. The following example illustrates this.

Use the bisection method to find a zero of

\begin{align*} f(x) &= x-1+\sin(\pi x/2) \end{align*}

that lies between \(0\) and \(1\text{.}\)

So we start with the two points we worked out above:

- \(a=0, b=1\) and
\begin{align*} f(0) &= -1\\ f(1) &= 1 \end{align*}

- Test the point in the middle \(x = \frac{0+1}{2} = 0.5\)
\begin{align*} f(0.5) &= 0.2071067813 \gt 0 \end{align*}

- So our new interval will be \([0,0.5]\) since the function is negative at \(x=0\) and positive at \(x=0.5\)

Repeat

- \(a=0, b=0.5\) where \(f(0) \lt 0\) and \(f(0.5) \gt 0\text{.}\)
- Test the point in the middle \(x = \frac{0+0.5}{2} = 0.25\)
\begin{align*} f(0.25) &= -0.3673165675 \lt 0 \end{align*}

- So our new interval will be \([0.25,0.5]\) since the function is negative at \(x=0.25\) and positive at \(x=0.5\)

Repeat

- \(a=0.25, b=0.5\) where \(f(0.25) \lt 0\) and \(f(0.5) \gt 0\text{.}\)
- Test the point in the middle \(x = \frac{0.25+0.5}{2} = 0.375\) \begin{align*} f(0.375) &= -0.0694297669 \lt 0 \end{align*}

- So our new interval will be \([0.375,0.5]\) since the function is negative at \(x=0.375\) and positive at \(x=0.5\)

Below is an illustration of what we have observed so far together with a plot of the actual function.

And one final iteration:

- \(a=0.375, b=0.5\) where \(f(0.375) \lt 0\) and \(f(0.5) \gt 0\text{.}\)
- Test the point in the middle \(x = \frac{0.375+0.5}{2} = 0.4375\)
\begin{align*} f(0.4375) &= 0.0718932843 \gt 0 \end{align*}

- So our new interval will be \([0.375,0.4375]\) since the function is negative at \(x=0.375\) and positive at \(x=0.4375\)

So without much work we know the location of a zero inside a range of length \(0.0625 = 2^{-4}\text{.}\) Each iteration will halve the length of the range and we keep going until we reach the precision we need, though it is much easier to program a computer to do it.
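The iterations above are mechanical enough to hand to a computer. A short Python sketch of the bisection method (assuming, as in the example, that \(f(a) \lt 0 \lt f(b)\)) is:

```python
import math

def f(x):
    return x - 1 + math.sin(math.pi * x / 2)

def bisect(f, a, b, steps):
    """Bisection method: assumes f is continuous with f(a) < 0 < f(b).
    Each step halves the interval known to contain a zero."""
    for _ in range(steps):
        mid = (a + b) / 2
        if f(mid) < 0:
            a = mid   # zero lies in [mid, b]
        else:
            b = mid   # zero lies in [a, mid]
    return a, b

# Four steps reproduce the interval found by hand above.
print(bisect(f, 0, 1, 4))    # -> (0.375, 0.4375)

# Fifty steps pin the zero down to machine precision.
a, b = bisect(f, 0, 1, 50)
print((a + b) / 2)           # ≈ 0.4053883559
```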

## Exercises

### Stage 1

Give an example of a function (you can write a formula, or sketch a graph) that has infinitely many infinite discontinuities.

When I was born, I was less than one meter tall. Now, I am more than one meter tall. What is the conclusion of the Intermediate Value Theorem about my height?

Give an example (by sketch or formula) of a function \(f(x)\text{,}\) defined on the interval \([0,2]\text{,}\) with \(f(0)=0\text{,}\) \(f(2)=2\text{,}\) and \(f(x)\) never equal to 1. Why does this not contradict the Intermediate Value Theorem?

Is the following a valid statement?

Suppose \(f\) is a continuous function over \([10,20]\text{,}\) \(f(10)=13\text{,}\) and \(f(20)=-13\text{.}\) Then \(f\) has a zero between \(x=10\) and \(x=20\text{.}\)

Is the following a valid statement?

Suppose \(f\) is a continuous function over \([10,20]\text{,}\) \(f(10)=13\text{,}\) and \(f(20)=-13\text{.}\) Then \(f(15)=0\text{.}\)

Is the following a valid statement?

Suppose \(f\) is a function over \([10,20]\text{,}\) \(f(10)=13\text{,}\) and \(f(20)=-13\text{,}\) and \(f\) takes on every value between \(-13\) and \(13\text{.}\) Then \(f\) is continuous.

Suppose \(f(t)\) is continuous at \(t=5\text{.}\) True or false: \(t=5\) is in the domain of \(f(t)\text{.}\)

Suppose \(\displaystyle\lim_{t \rightarrow 5}f(t)=17\text{,}\) and suppose \(f(t)\) is continuous at \(t=5\text{.}\) True or false: \(f(5)=17\text{.}\)

Suppose \(\displaystyle\lim_{t \rightarrow 5}f(t)=17\text{.}\) True or false: \(f(5)=17\text{.}\)

Suppose \(f(x)\) and \(g(x)\) are continuous at \(x=0\text{,}\) and let \(h(x)=\dfrac{xf(x)}{g^2(x)+1}\text{.}\) What is \(\displaystyle\lim_{x \to 0^+} h(x)\text{?}\)

### Stage 2

Find a constant \(k\) so that the function

\[ a(x)=\left\{\begin{array}{ll} x\sin\left(\frac{1}{x}\right)&\mbox{when } x \neq 0\\ k&\mbox{when }x=0 \end{array}\right. \nonumber \]

is continuous at \(x=0\text{.}\)

Use the Intermediate Value Theorem to show that the function \(f(x)=x^3+x^2+x+1\) takes on the value 12345 at least once in its domain.

Describe all points for which the function is continuous: \(f(x)=\dfrac{1}{x^2-1}\text{.}\)

Describe all points for which this function is continuous: \(f(x)=\dfrac{1}{\sqrt{x^2-1}}\text{.}\)

Describe all points for which this function is continuous: \(\dfrac{1}{\sqrt{1+\cos(x)}}\text{.}\)

Describe all points for which this function is continuous: \(f(x)=\dfrac{1}{\sin x}\text{.}\)

Find all values of \(c\) such that the following function is continuous at \(x=c\text{:}\)

\[ f(x)=\left\{\begin{array}{ccc} 8-cx & \text{if} & x\le c\\ x^2 & \text{if} & x \gt c \end{array}\right. \nonumber \]

Use the definition of continuity to justify your answer.

Find all values of \(c\) such that the following function is continuous everywhere:

\begin{align*} f(x) &= \begin{cases} x^2+c & x\geq 0\\ \cos cx & x \lt 0 \end{cases} \end{align*}

Use the definition of continuity to justify your answer.

Find all values of \(c\) such that the following function is continuous:

\[ f(x) = \begin{cases} x^2-4 & \text{if } x \lt c\\ 3x & \text{if } x \ge c\,. \end{cases} \nonumber \]

Use the definition of continuity to justify your answer.

Find all values of \(c\) such that the following function is continuous:

\[ f(x)=\left\{\begin{array}{ccc} 6-cx & \text{if} & x\le 2c\\ x^2 & \text{if} & x \gt 2c \end{array}\right. \nonumber \]

Use the definition of continuity to justify your answer.

### Stage 3

Show that there exists at least one real number \(x\) satisfying \(\sin x = x-1\text{.}\)

Show that there exists at least one real number \(c\) such that \(3^c=c^2\text{.}\)

Show that there exists at least one real number \(c\) such that \(2\tan(c)=c+1\text{.}\)

Show that there exists at least one real number c such that \(\sqrt{\cos(\pi c)} = \sin(2 \pi c) + \frac{1}{2}\text{.}\)

Show that there exists at least one real number \(c\) such that \(\dfrac{1}{(\cos\pi c)^2} = c+\dfrac{3}{2}\text{.}\)

Use the intermediate value theorem to find an interval of length one containing a root of \(f(x)=x^7-15x^6+9x^2-18x+15\text{.}\)

Use the intermediate value theorem to give a decimal approximation of \(\sqrt[3]{7}\) that is correct to at least two decimal places. You may use a calculator, but only to add, subtract, multiply, and divide.

Suppose \(f(x)\) and \(g(x)\) are functions that are continuous over the interval \([a,b]\text{,}\) with \(f(a) \leq g(a)\) and \(g(b)\leq f(b)\text{.}\) Show that there exists some \(c \in [a,b]\) with \(f(c)=g(c)\text{.}\)