# 6.3: The Definition of the Limit of a Function



Skills to Develop

- Explain the limit of a function

Since these days the limit concept is generally regarded as the starting point for calculus, you might think it is a little strange that we’ve chosen to talk about continuity first. But historically, the formal definition of a limit came after the formal definition of continuity. In some ways, the limit concept was part of a unification of all the ideas of calculus that were studied previously and, subsequently, it became the basis for all ideas in calculus. For this reason it is logical to make it the first topic covered in a calculus course.

To be sure, limits were always lurking in the background. In his attempts to justify his calculations, Newton used what he called his doctrine of “*Ultimate Ratios.*” Specifically the ratio

\[\dfrac{(x+h)^2 - x^2}{h} = \dfrac{2xh+h^2}{h} = 2x + h\]

becomes the ultimate ratio \(2x\) at the last instant of time before \(h\) - an “*evanescent quantity*” - vanishes. Similarly Leibniz’s “*infinitely small*” differentials \(dx\) and \(dy\) can be seen as an attempt to get “*arbitrarily close*” to \(x\) and \(y\), respectively. This is the idea at the heart of calculus: to get arbitrarily close to, say, \(x\) without actually reaching it.
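Newton’s “ultimate ratio” can be watched numerically. The following Python sketch is not part of the original text; the choice \(x = 3\) and the sequence of \(h\)-values are arbitrary.

```python
# Newton's "ultimate ratio": the difference quotient
# ((x + h)^2 - x^2) / h = 2x + h approaches 2x as h shrinks.
def difference_quotient(x, h):
    return ((x + h) ** 2 - x ** 2) / h

x = 3.0  # arbitrary sample point; here 2x = 6
for h in [1.0, 0.1, 0.01, 0.001]:
    print(h, difference_quotient(x, h))
# The quotient is undefined at h = 0 itself (0/0); the "evanescent
# quantity" h gets arbitrarily small without ever vanishing.
```

The printed values march toward \(6\) but the quotient at \(h = 0\) simply does not exist, which is exactly the difficulty the limit concept was invented to resolve.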

As we saw in Chapter 3, Lagrange tried to avoid the entire issue of “*arbitrary closeness,*” both in the limit and differential forms when, in 1797, he attempted to found calculus on infinite series.

Although Lagrange’s efforts failed, they set the stage for Cauchy to provide a definition of the derivative which, in turn, relied on his precise formulation of a limit. Consider the following example: determining the slope of the tangent line (derivative) of \(f(x) = \sin x\) at \(x = 0\). We consider the graph of the difference quotient \(D(x) = \dfrac{\sin x}{x}\).

**Figure \(\PageIndex{1}\):** Slope of the tangent line (derivative) of \(f(x) = \sin x\).

From the graph, it appears that \(D(0) = 1\) but we must be careful. \(D(0)\) doesn’t even exist! Somehow we must convey the idea that \(D(x)\) will approach \(1\) as \(x\) approaches \(0\), even though the function is not defined at \(0\). Cauchy’s idea was that the limit of \(D(x)\) would equal \(1\) because we can make \(D(x)\) differ from \(1\) by as little as we wish.
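Cauchy’s idea, that \(D(x)\) can be made to differ from \(1\) by as little as we wish, can be previewed numerically. A minimal Python sketch (the sample points are an arbitrary choice):

```python
import math

# D(x) = sin(x)/x is undefined at x = 0, yet its values can be made
# to differ from 1 by as little as we wish by taking x close to 0.
def D(x):
    return math.sin(x) / x

for x in [0.5, 0.1, 0.01, 0.001]:
    print(x, D(x), D(-x))  # approaches 1 from either side

# D(0.0) itself raises ZeroDivisionError: the limit is about nearby
# values of D, not a value at 0.
```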

Karl Weierstrass made these ideas precise in his lectures on analysis at the University of Berlin (1859-60) and provided us with our modern formulation.

Definition

We say \(\displaystyle \lim_{x \to a} f(x) = L\) provided that for each \(ε > 0\), there exists \(δ > 0\) such that if \(0 < |x - a| < δ\) then \(|f(x) - L| < ε\).

Before we delve into this, notice that it is very similar to the definition of the continuity of \(f(x)\) at \(x = a\). In fact we can readily see that \(f\) is continuous at \(x = a\) if and only if \( \displaystyle \lim_{x \to a} f(x) = f(a)\).

There are two differences between this definition and the definition of continuity and they are related. The first is that we replace the value \(f(a)\) with \(L\). This is because the function may not be defined at \(a\). In a sense the limiting value \(L\) is the value \(f\) would have *if it were defined and continuous at \(a\)*. The second is that we have replaced

\[|x - a| < δ\]

with

\[0 < |x - a| < δ\]

Again, since \(f\) needn’t be defined at \(a\), we will not even consider what happens when \(x = a\). This is the only purpose for this change.

As with the definition of the limit of a sequence, this definition does not determine what \(L\) is; it only verifies that your guess for the value of the limit is correct.

Finally, a few comments on the differences and similarities between this limit and the limit of a sequence are in order, if for no other reason than because we use the same notation (\( \displaystyle \lim\)) for both.

When we were working with sequences in Chapter 4 and wrote things like \( \displaystyle \lim_{n \to \infty } a_n\) we were thinking of \(n\) as an integer that got bigger and bigger. To put that more mathematically, the limit parameter \(n\) was taken from the set of positive integers, or \(n ∈ \mathbb{N}\).

For both continuity and the limit of a function we write things like \( \displaystyle \lim_{x \to a} f(x)\) and think of \(x\) as a variable that gets arbitrarily close to the number \(a\). Again, to be more mathematical in our language we would say that the limit parameter \(x\) is taken from the ... Well, actually, this is interesting, isn’t it? Do we need to take \(x\) from \(\mathbb{Q}\) or from \(\mathbb{R}\)? The requirement in both cases is simply that we be able to choose \(x\) arbitrarily close to \(a\). From Theorem 1.1.2 of Chapter 1 we see that this is possible whether \(x\) is rational or not, so it seems either will work. This leads to the paradoxical-sounding conclusion that we do not need a continuum (\(\mathbb{R}\)) to have continuity. This seems strange.

Before we look at the above example, let’s look at some algebraic examples to see the definition in use.

Example \(\PageIndex{1}\):

Consider the function \(D(x) = \dfrac{x^2 -1}{x-1}, x\neq 1\). You probably recognize this as the difference quotient used to compute the derivative of \(f(x) = x^2\) at \(x = 1\), so we strongly suspect that \( \displaystyle \lim_{x \to 1}\dfrac{x^2 -1}{x-1} = 2\). Just as when we were dealing with limits of sequences, we should be able to use the definition to verify this. And as before, we will start with some scrapwork.

**Scrapwork:**

Let \(ε > 0\). We wish to find a \(δ > 0\) such that if \(0 < |x - 1| < δ\) then \(\left |\dfrac{x^2 -1}{x-1} - 2 \right | < \varepsilon\). With this in mind, we perform the following calculations:

\[ \begin{align*} \left |\dfrac{x^2 -1}{x-1} - 2 \right | &= \left | (x+1) - 2 \right | \\[4pt] &= \left | x - 1 \right | \end{align*}\]

This calculation suggests taking \(δ = ε\). Now we have a handle on a \(δ\) that will work in the definition and we’ll give the formal proof that

\[\lim_{x \to 1}\dfrac{x^2 -1}{x-1} = 2 \nonumber\]

**End of Scrapwork**.

**Proof:**

Let \(ε > 0\) and let \(δ = ε\). If \(0 < |x - 1| < δ\), then

\[ \begin{align*} \left |\dfrac{x^2 -1}{x-1} - 2 \right | &= \left | (x+1) - 2 \right | \\[4pt] &= \left | x - 1 \right | < \delta = \varepsilon \end{align*}\]

As in our previous work with sequences and continuity, notice that the scrapwork is not part of the formal proof (though it was necessary to determine an appropriate \(δ\)). Also, notice that \(0 < |x - 1|\) was not really used except to ensure that \(x \neq 1\).
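As a sanity check (not a proof, since only finitely many points are sampled), we can probe the choice \(δ = ε\) numerically. A Python sketch, with an arbitrary \(ε\) and sampling grid:

```python
# Finite spot-check of the proof's choice delta = epsilon for
# lim_{x -> 1} (x^2 - 1)/(x - 1) = 2.  Not a proof: it samples
# only finitely many points.
def D(x):
    return (x ** 2 - 1) / (x - 1)

epsilon = 0.01      # arbitrary tolerance
delta = epsilon     # the delta found in the scrapwork

# points with 0 < |x - 1| < delta (x = 1 itself is excluded)
samples = [1 + delta * t for t in (-0.9, -0.5, -0.1, 0.1, 0.5, 0.9)]
assert all(0 < abs(x - 1) < delta for x in samples)
assert all(abs(D(x) - 2) < epsilon for x in samples)
print("all sampled points satisfy |D(x) - 2| <", epsilon)
```

Note that the code, like the proof, never evaluates \(D\) at \(x = 1\), where it is undefined.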

Exercise \(\PageIndex{1}\)

Use the definition of a limit to verify that

\[ \displaystyle \lim_{x \to a}\dfrac{x^2 - a^2}{x-a} = 2a.\]

Exercise \(\PageIndex{2}\)

Use the definition of a limit to verify each of the following limits.

- \(\displaystyle \lim_{x \to 1}\dfrac{x^3 - 1}{x-1} = 3\)
- \(\displaystyle \lim_{x \to 1} \dfrac{\sqrt{x}-1}{x-1} = \dfrac{1}{2}\)

**Hint a**-
\[\begin{align*} \left |\dfrac{x^3 - 1}{x-1} - 3 \right | &= \left | x^2 + x + 1 - 3 \right |\\ &\leq \left | x^2 - 1 \right | + \left | x - 1 \right |\\ &= \left | (x-1+1)^2 - 1 \right | + \left | x - 1 \right |\\ &= \left | (x-1)^2 +2(x-1) \right | + \left | x - 1 \right |\\ &\leq \left | x - 1 \right |^2 + 3\left | x - 1 \right | \end{align*}\]

**Hint b**-
\[\begin{align*} \left | \dfrac{\sqrt{x}-1}{x-1} - \dfrac{1}{2} \right | &= \left |\dfrac{1}{\sqrt{x}+1} - \dfrac{1}{2} \right |\\ &= \left | \dfrac{2-(\sqrt{x}+1)}{2(\sqrt{x}+1)} \right |\\ &= \left | \dfrac{1-x}{2(1+\sqrt{x})^2} \right |\\ &\leq \dfrac{1}{2}\left | x-1 \right | \end{align*}\]

Let’s go back to the original problem: to show that \( \displaystyle \lim_{x \to 0} \dfrac{\sin x}{x} = 1\).

While rigorous, our definition of continuity is quite cumbersome. We really need to develop some tools we can use to show continuity rigorously without having to refer directly to the definition. We have already seen in Theorem 6.2.1 one way to do this. Here is another. The key is the observation we made after the definition of a limit:

\[f \text{ is continuous at } x = a \text{ if and only if }\lim_{x \to a} f(x) = f(a)\]

Read another way, we could say that \( \displaystyle \lim_{x \to a} f(x) = L\) provided that if we redefine \(f(a) = L\) (or define \(f(a) = L\) in the case where \(f(a)\) is not defined) then \(f\) becomes continuous at \(a\). This allows us to use all of the machinery we proved about continuous functions and limits of sequences.

For example, the following corollary to Theorem 6.2.1 comes virtually for free once we’ve made the observation above.

Corollary \(\PageIndex{1}\)

\( \displaystyle \lim_{x \to a} f(x) = L\) *if and only if* \(f\) satisfies the following property:

\[\forall \text{ sequences }(x_n), x_n\neq a, \text{ if }\lim_{n \to \infty } x_n = a \text{ then } \lim_{n \to \infty } f(x_n) = L\]
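The sequential characterization lends itself to numerical illustration. In the Python sketch below, the particular sequence \(x_n = 1 + (-1)^n/n\) and the difference quotient from the earlier example are my choices for illustration:

```python
# Sequential criterion in action: for f(x) = (x^2 - 1)/(x - 1)
# and a = 1, any sequence x_n -> 1 with x_n != 1 should give
# f(x_n) -> 2.
def f(x):
    return (x ** 2 - 1) / (x - 1)

# a sequence approaching 1 from alternating sides, never equal to 1
xs = [1 + (-1) ** n / n for n in range(1, 2000)]
assert all(x != 1 for x in xs)

values = [f(x) for x in xs]
print(values[-1])  # close to L = 2
```

The corollary says this must happen for *every* such sequence, which is what makes it a genuine characterization rather than a spot check.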

Armed with this, we can prove the following familiar limit theorems from calculus.

Theorem \(\PageIndex{1}\)

Suppose \( \displaystyle \lim_{x \to a} f(x) = L\) and \( \displaystyle \lim_{x \to a} g(x) = M\), then

- \(\displaystyle \lim_{x \to a} (f(x) + g(x)) = L + M\)
- \(\displaystyle \lim_{x \to a} (f(x) \cdot g(x)) = L \cdot M\)
- \(\displaystyle \lim_{x \to a} \left ( \dfrac{f(x)}{g(x)} \right ) = \dfrac{L}{M}\) provided \(M \neq 0\) and \(g(x) \neq 0\), for \(x\) sufficiently close to \(a\) (but not equal to \(a\)).

We will prove part (a) to give you a feel for this and let you prove parts (b) and (c).

Proof

Let (\(x_n\)) be a sequence such that \(x_n \neq a\) and \(\displaystyle \lim_{n \to \infty }x_n = a\). Since \(\displaystyle \lim_{x \to a} f(x) = L\) and \(\displaystyle \lim_{x \to a} g(x) = M\) we see that \(\displaystyle \lim_{n \to \infty } f(x_n) = L\) and \( \displaystyle \lim_{n \to \infty } g(x_n) = M\). By Theorem 4.2.1 of Chapter 4, we have \(\displaystyle \lim_{n \to \infty } \left ( f(x_n) + g(x_n) \right ) = L + M\). Since (\(x_n\)) was an arbitrary sequence with \(x_n \neq a\) and \( \displaystyle \lim_{n \to \infty }x_n = a\) we have

\[\lim_{x \to a} \left ( f(x) + g(x) \right ) = L + M.\nonumber \]

Exercise \(\PageIndex{3}\)

Prove parts (b) and (c) of Theorem \(\PageIndex{1}\).

More in line with our current needs, we have a reformulation of the Squeeze Theorem.

Theorem \(\PageIndex{2}\): Squeeze Theorem for functions

Suppose \(f(x) ≤ g(x) ≤ h(x)\), for \(x\) sufficiently close to \(a\) (but not equal to \(a\)). If

\[\lim_{x \to a} f(x) = L = \lim_{x \to a} h(x)\]

then also

\[\lim_{x \to a} g(x) = L.\]
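A classic application is \(g(x) = x^2 \sin (1/x)\), which oscillates wildly near \(0\) but is trapped between \(f(x) = -x^2\) and \(h(x) = x^2\), so the Squeeze Theorem forces \( \displaystyle \lim_{x \to 0} g(x) = 0\). A brief numerical sketch in Python (the sample points are arbitrary):

```python
import math

# g(x) = x^2 * sin(1/x) oscillates infinitely often near 0, but it
# is trapped between f(x) = -x^2 and h(x) = x^2, both of which have
# limit 0 at 0.  The Squeeze Theorem then gives lim_{x -> 0} g(x) = 0.
def g(x):
    return x ** 2 * math.sin(1 / x)

for x in [0.1, 0.01, 0.001]:
    assert -x ** 2 <= g(x) <= x ** 2  # the squeeze holds
    print(x, g(x))  # values shrink toward 0
```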

Exercise \(\PageIndex{4}\)

Prove Theorem \(\PageIndex{2}\).

**Hint**-
Use the Squeeze Theorem for sequences (Theorem 4.2.4) from Chapter 4.

Returning to \( \displaystyle \lim_{x \to 0} \dfrac{\sin x}{x}\) we’ll see that the Squeeze Theorem is just what we need. First notice that since \(D(x) = \dfrac{\sin x}{x}\) is an even function, we only need to focus on \(x > 0\) in our inequalities. Consider the unit circle.

**Figure \(\PageIndex{2}\):** The unit circle.

Exercise \(\PageIndex{5}\)

Use the fact that \(\text{area}(∆OAC) < \text{area}(sector OAC) < \text{area}(∆OAB)\) to show that if \(0 < x < \dfrac{π}{2}\), then \(\cos x < \dfrac{\sin x}{x} < 1\). Use the fact that all of these functions are even to extend the inequality for \(-\dfrac{π}{2} < x < 0\) and use the Squeeze Theorem to show \( \displaystyle \lim_{x \to 0} \dfrac{\sin x}{x} = 1\).
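Before proving the inequality, it can be reassuring to check it numerically. A Python sketch (the grid is an arbitrary choice, and a finite check is no substitute for the proof the exercise asks for):

```python
import math

# Check cos(x) < sin(x)/x < 1 on a grid in (0, pi/2), and watch
# sin(x)/x approach 1 near 0.  A finite check only -- the exercise
# asks for a proof via areas and the Squeeze Theorem.
for k in range(1, 100):
    x = k * (math.pi / 2) / 100  # 99 points in (0, pi/2)
    assert math.cos(x) < math.sin(x) / x < 1

print(math.sin(1e-6) / 1e-6)  # very close to 1
```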

## Contributor

Eugene Boman (Pennsylvania State University) and Robert Rogers (SUNY Fredonia)