# 6.1: An Analytic Definition of Continuity

## Learning Objectives

- Explain continuity

Before the invention of calculus, the notion of continuity was treated intuitively if it was treated at all. At first pass, it seems a very simple idea based solidly in our experience of the real world. Standing on the bank we see a river flow past us continuously, not by tiny jerks. Even when the flow might seem at first to be discontinuous, as when it drops precipitously over a cliff, a closer examination shows that it really is not. As the water approaches the cliff it speeds up. When it finally goes over it accelerates very quickly but no matter how fast it goes it moves continuously, moving from here to there by occupying every point in between. This is continuous motion. It never disappears over there and instantaneously reappears over here. That would be discontinuous motion.

Similarly, a thrown stone flies continuously (and smoothly) from release point to landing point, passing through each point in its path.

But wait. If the stone passes through discrete points it must be doing so by teeny tiny little jerks, mustn’t it? Otherwise how would it get from one point to the next? Is it possible that motion in the real world, much like motion in a movie, is really composed of tiny jerks from one point to the next but that these tiny jerks are simply too small and too fast for our senses to detect?

If so, then the real world is more like the rational number line (\(\mathbb{Q}\)) from Chapter 1 than the real number line (\(\mathbb{R}\)). In that case, motion really consists of jumping discretely over the “missing” points (like \(\sqrt{2}\)) as we move from here to there. That may seem like a bizarre idea to you – it does to us as well – but the idea of continuous motion is equally bizarre. It’s just a little harder to see why.

The real world will be what it is regardless of what we believe it to be, but fortunately in mathematics we are not constrained to live in it. So we won’t even try. We will simply postulate that no such jerkiness exists; that all motion is continuous.

However we **are** constrained to live with the logical consequences of our assumptions, once they are made. These will lead us into some very deep waters indeed.

The intuitive treatment of continuity was maintained throughout the 1700’s as it was not generally perceived that a truly rigorous definition was necessary. Consider the following definition given by Euler in 1748.

A continuous curve is one such that its nature can be expressed by a single function of \(x\). If a curve is of such a nature that for its various parts ... different functions of \(x\) are required for its expression, ..., then we call such a curve discontinuous.

However, the complexities associated with Fourier series and the types of functions that they represented caused mathematicians in the early 1800’s to rethink their notions of continuity. As we saw in Part II, the graph of the function defined by the Fourier series

\[\frac{4}{\pi }\sum_{k=0}^{\infty } \frac{(-1)^k}{(2k+1)} \cos ((2k+1)\pi x)\]

looked like this:

**Figure \(\PageIndex{1}\):** Graph of function defined by the Fourier series.

This function went against Euler’s notion of what a continuous function should be. Here, an infinite sum of continuous cosine curves provided a single expression which resulted in a “*discontinuous*” curve. But as we’ve seen this didn’t happen with power series and an intuitive notion of continuity is inadequate to explain the difference. Even more perplexing is the following situation. Intuitively, one would think that a continuous curve should have a tangent line at at least one point. It may have a number of jagged points to it, but it should be “*smooth*” somewhere. An example of this would be \(f(x) = x^{2/3}\). Its graph is given by

**Figure \(\PageIndex{2}\):** Graph of \(f(x) = x^{2/3}\).

This function is not differentiable at the origin but it is differentiable everywhere else. One could certainly come up with examples of functions which fail to be differentiable at any number of points but, intuitively, it would be reasonable to expect that a continuous function should be differentiable somewhere. We might conjecture the following:

If \(f\) is continuous on an interval \(I\), then there is some \(a ∈ I\) such that \(f'(a)\) exists.

**Figure \(\PageIndex{3}\):** Karl Weierstrass.

Surprisingly, in 1872, Karl Weierstrass showed that the above conjecture is **FALSE**. He did this by displaying the counterexample:

\[f(x) = \sum_{n=0}^{\infty }b^n\cos (a^n\pi x)\]

Weierstrass showed that if \(a\) is an odd integer, \(b ∈ (0,1)\), and \(ab > 1 + \frac{3}{2}\pi\), then \(f\) is continuous everywhere, but is nowhere differentiable. Such a function is somewhat “fractal” in nature, and it is clear that a definition of continuity relying on intuition is inadequate to study it.

- Given \(f(x) = \sum_{n=0}^{\infty }\left ( \frac{1}{2} \right )^n\cos (a^n\pi x)\), what is the smallest value of \(a\) for which \(f\) satisfies Weierstrass’ criterion to be continuous and nowhere differentiable?
- Let \(f(x,N) = \sum_{n=0}^{N}\left ( \frac{1}{2} \right )^n\cos (13^n\pi x)\) and use a computer algebra system to plot \(f(x,N)\) for \(N = 0,1,2,3,4,10\) and \(x ∈ [0,1]\).
- Plot \(f(x,10)\) for \(x ∈ [0,c]\), where \(c = 0.1,0.01,0.001,0.0001,0.00001\). Based upon what you see in parts b and c, why would we describe the function as being somewhat “*fractal*” in nature?

Just as it was important to define convergence with a rigorous definition without appealing to intuition or geometric representations, it is imperative that we define continuity in a rigorous fashion not relying on graphs.

The first appearance of a definition of continuity which did not rely on geometry or intuition was given in 1817 by Bernhard Bolzano in a paper published in the Proceedings of the Prague Scientific Society entitled

*Rein analytischer Beweis des Lehrsatzes, dass zwischen je zwey Werthen, die ein entgegengesetztes Resultat gewähren, wenigstens eine reelle Wurzel der Gleichung liege (Purely Analytic Proof of the Theorem that Between Any Two Values that Yield Results of Opposite Sign There Will be at Least One Real Root of the Equation).*

**Figure \(\PageIndex{4}\):** Bernhard Bolzano

From the title it should be clear that in this paper Bolzano is proving the Intermediate Value Theorem. To do this he needs a completely analytic definition of continuity. The substance of Bolzano’s idea is that if \(f\) is continuous at a point \(a\) then \(f(x)\) should be “*close to*” \(f(a)\) whenever \(x\) is “*close enough to*” \(a\). More precisely, Bolzano said that \(f\) is continuous at \(a\) provided \(|f(x) - f(a)|\) can be made smaller than any given quantity provided we make \(|x - a|\) sufficiently small.

The language Bolzano uses is very similar to the language Leibniz used when he postulated the existence of infinitesimally small numbers. Leibniz said that infinitesimals are “*smaller than any given quantity but not zero.*” Bolzano says that “*\(|f(x) - f(a)|\) can be made smaller than any given quantity provided we make \(|x - a|\) sufficiently small.*” But Bolzano stops short of saying that \(|x - a|\) is infinitesimally small. Given \(a\), we can choose \(x\) so that \(|x - a|\) is smaller than any real number we could name, say \(b\), provided we name \(b\) first, but for any given choice of \(x\), both \(|x - a|\) and \(b\) are still real numbers. Possibly very small real numbers to be sure, but real numbers nonetheless. Infinitesimals have no place in Bolzano’s construction.

Bolzano’s paper was not well known when Cauchy proposed a similar definition in his Cours d’analyse [1] of 1821 so it is usually Cauchy who is credited with this definition, but even Cauchy’s definition is not quite tight enough for modern standards. It was Karl Weierstrass in 1859 who finally gave the modern definition.

We say that a function \(f\) is continuous at \(a\) provided that for any \(ε > 0\), there exists a \(δ > 0\) such that if \(|x - a| < δ\) then \(|f(x) - f(a)| < ε\).

Notice that the definition of continuity of a function is done point-by-point. A function can certainly be continuous at some points while discontinuous at others. When we say that \(f\) is continuous on an interval, then we mean that it is continuous at every point of that interval and, in theory, we would need to use the above definition to check continuity at each individual point.

Our definition fits the bill in that it does not rely on either intuition or graphs, but it is this very non-intuitiveness that makes it hard to grasp. It usually takes some time to become comfortable with this definition, let alone use it to prove theorems such as the Extreme Value Theorem and Intermediate Value Theorem. So let’s go slowly to develop a feel for it.

This definition spells out a completely black and white procedure: you give me a positive number \(ε\), and I must be able to find a positive number \(δ\) which satisfies a certain property. If I can always do that then the function is continuous at the point of interest.

This definition also makes very precise what we mean when we say that \(f(x)\) should be “*close to*” \(f(a)\) whenever \(x\) is “*close enough to*” \(a\). For example, intuitively we know that \(f(x) = x^2\) should be continuous at \(x = 2\). This means that we should be able to get \(x^2\) to within, say, \(ε = 0.1\) of \(4\) provided we make \(x\) close enough to \(2\). Specifically, we want \(3.9 < x^2 < 4.1\). This happens exactly when \(\sqrt{3.9} < x < \sqrt{4.1}\). Using the fact that \(\sqrt{3.9} < 1.98\) and \(2.02 < \sqrt{4.1}\), we can see that if we get \(x\) to within \(δ = 0.02\) of \(2\), then \(\sqrt{3.9} < 1.98 < x < 2.02 < \sqrt{4.1}\) and so \(x^2\) will be within \(0.1\) of \(4\). This is very straightforward. What makes this situation more difficult is that we must be able to do this for any \(ε > 0\).
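This arithmetic is easy to check numerically. The following sketch (plain Python; the sampling grid is our own choice) confirms that \(δ = 0.02\) really does force \(x^2\) to within \(ε = 0.1\) of \(4\):

```python
import math

a, eps, delta = 2.0, 0.1, 0.02

# the interval allowed by 3.9 < x^2 < 4.1 contains (a - delta, a + delta)
assert math.sqrt(3.9) < a - delta and a + delta < math.sqrt(4.1)

# sample the delta-neighborhood of a and confirm each point lands within eps of a^2
for k in range(-1000, 1001):
    x = a + delta * (k / 1000.0)
    assert abs(x * x - a * a) < eps
```

Of course this only checks one particular \(ε\); the definition demands a \(δ\) for every positive \(ε\).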

Notice the similarity between this definition and the definition of convergence of a sequence. Both definitions have the challenge of an \(ε > 0\). In the definition of \(\lim_{n \to \infty }s_n = s\), we had to get \(s_n\) to within \(ε\) of \(s\) by making \(n\) large enough. For sequences, the challenge lies in making \(|s_n - s|\) sufficiently small. More precisely, given \(ε > 0\) we need to decide how large \(n\) should be to guarantee that \(|s_n - s| < ε\).

In our definition of continuity, we still need to make something small (namely \(|f(x) - f(a)| < ε\)), only this time, we need to determine how close \(x\) must be to \(a\) to ensure this will happen, instead of determining how large \(n\) must be.

What makes \(f\) continuous at \(a\) is the arbitrary nature of \(ε\) (as long as it is positive). As \(ε\) becomes smaller, this forces \(f(x)\) to be closer to \(f(a)\). That we can always find a positive distance \(δ\) that works is what we mean when we say that we can make \(f(x)\) as close to \(f(a)\) as we wish, provided we get \(x\) close enough to \(a\). The sequence of pictures below illustrates that the phrase “*for any \(ε > 0\), there exists a \(δ > 0\) such that if \(|x - a| < δ\) then \(|f(x) - f(a)| < ε\)*” can be replaced by the equivalent formulation “*for any \(ε > 0\), there exists a \(δ > 0\) such that if \(a - δ < x < a + δ\) then \(f(a) - ε < f(x) < f(a) + ε\)*.” This could also be replaced by the phrase “*for any \(ε > 0\), there exists a \(δ > 0\) such that if \(x ∈ (a - δ, a + δ)\) then \(f(x) ∈ (f(a) - ε, f(a) + ε)\).*” All of these equivalent formulations convey the idea that we can get \(f(x)\) to within \(ε\) of \(f(a)\), provided we make \(x\) within \(δ\) of \(a\), and we will use whichever formulation suits our needs in a particular application.

**Figure \(\PageIndex{5}\):** Function \(f\) is continuous at \(a\).

The precision of the definition is what allows us to examine continuity without relying on pictures or vague notions such as “*nearness*” or “*getting closer to*.” We will now consider some examples to illustrate this precision.

Use the definition of continuity to show that \(f(x) = x\) is continuous at any point \(a\).

If we were to draw the graph of this line, then you would likely say that this is obvious. The point behind the definition is that we can back up your intuition in a rigorous manner.

**Proof:**

Let \(ε > 0\). Let \(δ = ε\). If \(|x - a| < δ\), then

\[|f(x) - f(a)| = |x - a| < ε\]

Thus by the definition, \(f\) is continuous at \(a\).

Use the definition of continuity to show that if \(m\) and \(b\) are fixed (but unspecified) real numbers then the function \(f(x) = mx + b\) is continuous at every real number \(a\).

Use the definition of continuity to show that \(f(x) = x^2\) is continuous at \(a = 0\).

**Proof:**

Let \(ε > 0\). Let \(\delta = \sqrt{\varepsilon }\). If \(|x - 0| < δ\), then \(\left | x \right | < \sqrt{\varepsilon }\). Thus

\[\left | x^2 - 0^2 \right | = \left | x \right |^2 < (\sqrt{\varepsilon })^2 = \varepsilon\]

Thus by the definition, \(f\) is continuous at \(0\).

Notice that in these proofs, the challenge of an \(ε > 0\) was first given. This is because the choice of \(δ\) must depend upon \(ε\). Also notice that there was no explanation for our choice of \(δ\). We just supplied it and showed that it worked. As long as \(δ > 0\), then this is all that is required. In point of fact, the \(δ\) we chose in each example was not the only choice that worked; any smaller \(δ\) would work as well.
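None of this is needed for the proof, but the choice \(δ = \sqrt{ε}\) can be sanity-checked numerically. A short sketch (plain Python; the particular \(ε\) values and grid are our own):

```python
import math

def delta_for(eps):
    # the delta chosen in the proof that x^2 is continuous at 0
    return math.sqrt(eps)

for eps in (1.0, 0.1, 1e-4, 1e-8):
    d = delta_for(eps)
    # sample points strictly inside (-delta, delta); each must satisfy |x^2 - 0| < eps
    for k in range(-999, 1000):
        x = d * (k / 1000.0)
        assert abs(x * x - 0.0) < eps
```

Any smaller positive \(δ\) passes the same check, in line with the remark above that the \(δ\) we chose is not the only one that works.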

- Given a particular \(ε > 0\) in the definition of continuity, show that if a particular \(δ_0 > 0\) satisfies the definition, then any \(δ\) with \(0 < δ < δ_0\) will also work for this \(ε\).
- Show that if a \(δ\) can be found to satisfy the conditions of the definition of continuity for a particular \(ε_0 > 0\), then this \(δ\) will also work for any \(ε\) with \(0 < ε_0 < ε\).

It wasn’t explicitly stated in the definition but when we say “*if \(|x - a| < δ\) then \(|f(x) - f(a)| < ε\),*” we should be restricting ourselves to \(x\) values which are in the domain of the function \(f\), otherwise \(f(x)\) doesn’t make sense. We didn’t put it in the definition because that definition was complicated enough without this technicality. Also in the above examples, the functions were defined everywhere so this was a moot point. We will continue with the convention that when we say “*if \(|x - a| < δ\) then \(|f(x) - f(a)| < ε\),*” we will be restricting ourselves to \(x\) values which are in the domain of the function \(f\). This will allow us to examine continuity of functions not defined for all \(x\) without restating this restriction each time.

Use the definition of continuity to show that

\[f(x) = \begin{cases} \sqrt{x} & \text{ if } x \geq 0 \\ -\sqrt{-x} & \text{ if } x < 0 \end{cases}\]

is continuous at \(a = 0\).

Use the definition of continuity to show that \(f(x) = \sqrt{x}\) is continuous at \(a = 0\). How is this problem different from problem \(\PageIndex{4}\)? How is it similar?

Sometimes the \(δ\) that will work for a particular \(ε\) is fairly obvious to see, especially after you’ve gained some experience. This is the case in the above examples (at least after looking back at the proofs). However, the task of finding a \(δ\) to work is usually not so obvious and requires some scrapwork. This scrapwork is vital toward producing a \(δ\), but again is not part of the polished proof. This can be seen in the following example.

Use the definition of continuity to prove that \(f(x) = \sqrt{x}\) is continuous at \(a = 1\).

**Scrapwork:**

As before, the scrapwork for these problems often consists of simply working backwards. Specifically, given an \(ε > 0\), we need to find a \(δ > 0\) so that \(\left |\sqrt{x} - \sqrt{1} \right | < \varepsilon\), whenever \(|x - 1| < δ\). We work backwards from what we want, keeping an eye on the fact that we can control the size of \(|x - 1|\).

\[\left |\sqrt{x} - \sqrt{1} \right | = \left | \frac{(\sqrt{x}-1)(\sqrt{x}+1)}{\sqrt{x}+1} \right | = \frac{\left | x-1 \right |}{\sqrt{x}+1} \leq \left | x-1 \right |\]

This seems to suggest that we should make \(δ = ε\). We’re now ready for the formal proof.

**End of Scrapwork**

**Proof:**

Let \(ε > 0\). Let \(δ = ε\). If \(|x - 1| < δ\), then \(|x - 1| < ε\), and so

\[\left |\sqrt{x} - \sqrt{1} \right | = \left | \frac{(\sqrt{x}-1)(\sqrt{x}+1)}{\sqrt{x}+1} \right | = \frac{\left | x-1 \right |}{\sqrt{x}+1} \leq \left | x-1 \right | < \varepsilon\]

Thus by definition, \(f(x) = \sqrt{x}\) is continuous at \(1\).

Bear in mind that someone reading the formal proof will not have seen the scrapwork, so the choice of \(δ\) might seem rather mysterious. However, you are in no way bound to motivate this choice of \(δ\) and usually you should not, unless it is necessary for the formal proof. All you have to do is find this \(δ\) and show that it works. Furthermore, to a trained reader, your ideas will come through when you demonstrate that your choice of \(δ\) works.

Now reverse this last statement. As a trained reader, when you read the proof of a theorem it is your responsibility to find the scrapwork, to see how the proof works and understand it fully.

**Figure \(\PageIndex{6}\):** Paul Halmos.

As the renowned mathematical expositor Paul Halmos (1916-2006) said,

*Don’t just read it; fight it! Ask your own questions, look for your own examples, discover your own proofs. Is the hypothesis necessary? Is the converse true? What happens in the classical special case? What about the degenerate cases? Where does the proof use the hypothesis?*

This is the way to learn mathematics. It is really the only way.

Use the definition of continuity to show that \(f(x) = \sqrt{x}\) is continuous at any positive real number \(a\).

- Use a unit circle to show that for \(0 \leq \theta < \frac{\pi }{2}\), \(\sin \theta \leq \theta\) and \((1 - \cos \theta) \leq \theta\) and conclude \(|\sin \theta| \leq |\theta|\) and \(|1 - \cos \theta| \leq |\theta|\) for \(-\frac{\pi }{2} < \theta < \frac{\pi }{2}\).
- Use the definition of continuity to prove that \(f(x) = \sin x\) is continuous at any point \(a\).

**Hint for (b):** \(\sin x = \sin(x - a + a)\)

- Use the definition of continuity to show that \(f(x) = e^x\) is continuous at \(a = 0\).
- Show that \(f(x) = e^x\) is continuous at any point \(a\).

**Hint for (b):** Rewrite \(e^x - e^a\) as \(e^{a+(x-a)} - e^a\) and use what you proved in part (a).

In the above problems, we used the definition of continuity to verify our intuition about the continuity of familiar functions. The advantage of this analytic definition is that it can be applied when the function is not so intuitive. Consider, for example, the function given at the end of the last chapter.

\[f(x) = \begin{cases} x\sin \left (\frac{1}{x} \right ) & \text{ if } x \neq 0 \\ 0 & \text{ if } x= 0 \end{cases}\]

Near zero, the graph of \(f(x)\) looks like this:

**Figure \(\PageIndex{7}\):** The graph of \(f(x)\).

As we mentioned in the previous chapter, since \(\sin \left (\frac{1}{x} \right )\) oscillates infinitely often as \(x\) nears zero, this graph must be viewed with a certain amount of suspicion. However, our completely analytic definition of continuity shows that this function is, in fact, continuous at \(0\).

Use the definition of continuity to show that \(f(x) = \begin{cases} x\sin \left (\frac{1}{x} \right ) & \text{ if } x \neq 0 \\ 0 & \text{ if } x= 0 \end{cases}\) is continuous at \(0\).
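A numerical experiment is no substitute for the proof, but it can guide the choice of \(δ\). The estimate \(|x\sin(1/x)| \leq |x|\) suggests one candidate worth trying, namely \(δ = ε\); the sketch below (plain Python, with our own sample values) checks that candidate:

```python
import math

def f(x):
    # the function from the exercise: x*sin(1/x) away from 0, and 0 at 0
    return x * math.sin(1.0 / x) if x != 0 else 0.0

for eps in (0.5, 0.01, 1e-5):
    delta = eps  # candidate suggested by |x sin(1/x)| <= |x|
    # sample points strictly inside (-delta, delta) and compare f(x) to f(0) = 0
    for k in range(-999, 1000):
        x = delta * (k / 1000.0)
        assert abs(f(x) - f(0.0)) < eps
```

Turning this observation into an ε-δ proof is exactly the content of the exercise.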

Even more perplexing is the function defined by

\[D(x) = \begin{cases} x & \text{ if } x \text{ is rational} \\ 0 & \text{ if } x \text{ is irrational} \end{cases}\]

To the naked eye, the graph of this function looks like the lines \(y = 0\) and \(y = x\). Of course, such a graph would not be the graph of a function. Actually, both of these lines have holes in them. Wherever there is a point on one line there is a “*hole*” on the other. Each of these holes is the width of a single point (that is, its “*width*” is zero!) so they are invisible to the naked eye (or even magnified under the most powerful microscope available). This idea is illustrated in the following graph.

**Figure \(\PageIndex{8}\):** Graph of the function \(D(x)\) as defined above.

Can such a function so “*full of holes*” actually be continuous anywhere? It turns out that we can use our definition to show that this function is, in fact, continuous at \(0\) and at no other point.

- Use the definition of continuity to show that the function \(D(x) = \begin{cases} x & \text{ if } x \text{ is rational} \\ 0 & \text{ if } x \text{ is irrational} \end{cases}\) is continuous at \(0\).
- Let \(a \neq 0\). Use the definition of continuity to show that \(D\) is not continuous at \(a\).

**Hint for (b):** You might want to break this up into two cases, where \(a\) is rational or where \(a\) is irrational. Show that no choice of \(δ > 0\) will work for \(ε = |a|\). Note that Theorem 1.1.2 of Chapter 1 will probably help here.

## Contributor

Eugene Boman (Pennsylvania State University) and Robert Rogers (SUNY Fredonia)