
# 4.3: Operations on Limits. Rational Functions


I. A function $$f : A \rightarrow T$$ is said to be real if its range $$D_{f}^{\prime}$$ lies in $$E^{1},$$ complex if $$D_{f}^{\prime} \subseteq C,$$ vector valued if $$D_{f}^{\prime}$$ is a subset of $$E^{n},$$ and scalar valued if $$D_{f}^{\prime}$$ lies in the scalar field of $$E^{n}.$$ (In the latter two cases, we use the same terminology if $$E^{n}$$ is replaced by some other (fixed) normed space under consideration.) The domain $$A$$ may be arbitrary.

For such functions one can define various operations whenever they are defined for elements of their ranges, to which the function values $$f(x)$$ belong. Thus as in Chapter 3, §9, we define the functions $$f \pm g, f g,$$ and $$f / g$$ "pointwise," setting

$(f \pm g)(x)=f(x) \pm g(x), \quad(f g)(x)=f(x) g(x), \text{ and } \left(\frac{f}{g}\right)(x)=\frac{f(x)}{g(x)}$

whenever the right side expressions are defined. We also define $$|f| : A \rightarrow E^{1}$$ by

$(\forall x \in A) \quad|f|(x)=|f(x)|.$

In particular, $$f \pm g$$ is defined if $$f$$ and $$g$$ are both vector valued or both scalar valued, and $$f g$$ is defined if $$f$$ is vector valued while $$g$$ is scalar valued; similarly for $$f / g .$$ (However, the domain of $$f / g$$ consists of those $$x \in A$$ only for which $$g(x) \neq 0 . )$$
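As a concrete illustration (not part of the text), the pointwise operations can be sketched in Python; the helper names `ptwise_add`, `ptwise_mul`, and `ptwise_div` are ours, and the domain restriction on $$f / g$$ is modeled by raising an error where $$g(x)=0$$:

```python
# Pointwise operations (f ± g)(x) = f(x) ± g(x), (fg)(x) = f(x)g(x),
# (f/g)(x) = f(x)/g(x); illustrative sketch, names are ours.

def ptwise_add(f, g):
    return lambda x: f(x) + g(x)

def ptwise_mul(f, g):
    return lambda x: f(x) * g(x)

def ptwise_div(f, g):
    # The domain of f/g consists only of those x with g(x) != 0,
    # mirroring the restriction stated in the text.
    def quotient(x):
        gx = g(x)
        if gx == 0:
            raise ValueError("x is not in the domain of f/g: g(x) = 0")
        return f(x) / gx
    return quotient

f = lambda x: x * x    # f(x) = x^2
g = lambda x: x - 1    # g(x) = x - 1

print(ptwise_add(f, g)(3))  # 9 + 2 = 11
print(ptwise_mul(f, g)(3))  # 9 * 2 = 18
print(ptwise_div(f, g)(3))  # 9 / 2 = 4.5
```

At $$x = 1$$ the quotient is undefined, since $$g(1)=0$$; the sketch signals this instead of returning a value.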

In the theorems below, all limits are at some (arbitrary, but fixed) point $$p$$ of the domain space $$(S, \rho) .$$ For brevity, we often omit $$" x \rightarrow p."$$

Theorem $$\PageIndex{1}$$

For any functions $$f, g, h : A \rightarrow E^{1}(C), A \subseteq(S, \rho),$$ we have the following:

1. (i) If $$f, g, h$$ are continuous at $$p(p \in A),$$ so are $$f \pm g$$ and $$fh$$. So also is $$f / h,$$ provided $$h(p) \neq 0;$$ similarly for relative continuity over $$B \subseteq A$$.
2. (ii) If $$f(x) \rightarrow q, g(x) \rightarrow r,$$ and $$h(x) \rightarrow a$$ (all as $$x \rightarrow p$$ over $$B \subseteq A$$), then
1. $$f(x) \pm g(x) \rightarrow q \pm r$$
2. $$f(x) h(x) \rightarrow q a ;$$ and
3. $$\frac{f(x)}{h(x)} \rightarrow \frac{q}{a},$$ provided $$a \neq 0$$

All this holds also if $$f$$ and $$g$$ are vector valued and $$h$$ is scalar valued.

For a simple proof, one can use Theorem 1 of Chapter 3, §15. (An independent proof is sketched in Problems 1-7 below.)

We can also use the sequential criterion (Theorem 1 in §2). To prove (ii), take any sequence

$\left\{x_{m}\right\} \subseteq B-\{p\}, \quad x_{m} \rightarrow p.$

Then by the assumptions in (ii),

$f\left(x_{m}\right) \rightarrow q, g\left(x_{m}\right) \rightarrow r, \text{ and } h\left(x_{m}\right) \rightarrow a.$

Thus by Theorem 1 of Chapter 3, §15,

$f\left(x_{m}\right) \pm g\left(x_{m}\right) \rightarrow q \pm r, \quad f\left(x_{m}\right) h\left(x_{m}\right) \rightarrow q a, \text{ and } \frac{f\left(x_{m}\right)}{h\left(x_{m}\right)} \rightarrow \frac{q}{a}.$

As this holds for any sequence $$\left\{x_{m}\right\} \subseteq B-\{p\}$$ with $$x_{m} \rightarrow p,$$ our assertion (ii) follows by the sequential criterion; similarly for (i).
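A quick numerical illustration (a check of Theorem 1(ii), not a proof): with $$f(x)=x^{2},$$ $$g(x)=3x,$$ $$h(x)=x+1,$$ and $$p=2,$$ we have $$q=4, r=6, a=3,$$ and the sequence $$x_{m}=2+1/m$$ drives $$f \pm g, fh,$$ and $$f/h$$ toward $$q \pm r, qa,$$ and $$q/a$$:

```python
# Sequential check of limit arithmetic: x_m = p + 1/m -> p = 2.
p, q, r, a = 2.0, 4.0, 6.0, 3.0
f = lambda x: x ** 2   # f(x) -> q = 4
g = lambda x: 3 * x    # g(x) -> r = 6
h = lambda x: x + 1    # h(x) -> a = 3

for m in (10, 1000, 100000):
    x = p + 1.0 / m
    print(abs((f(x) + g(x)) - (q + r)),  # -> 0
          abs(f(x) * h(x) - q * a),      # -> 0
          abs(f(x) / h(x) - q / a))      # -> 0
```

The printed differences shrink toward $$0$$ as $$m$$ grows, as (ii) predicts.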

Note 1. By induction, the theorem also holds for sums and products of any finite number of functions (whenever such products are defined).

Note 2. Part (ii) does not apply to infinite limits $$q, r, a ;$$ but it does apply to limits at $$p=\pm \infty$$ (take $$E^{*}$$ with a suitable metric for the space $$S )$$.

Note 3. The assumption $$h(x) \rightarrow a \neq 0(\text { as } x \rightarrow p \text { over } B)$$ implies that $$h(x) \neq 0$$ for $$x$$ in $$B \cap G_{\neg p}(\delta)$$ for some $$\delta>0 ;$$ see Problem 5 below. Thus the quotient function $$f / h$$ is defined on $$B \cap G_{\neg p}(\delta)$$ at least.

II. If the range space of $$f$$ is $$E^{n}\left(^{*} \text{ or } C^{n}\right),$$ then each function value $$f(x)$$ is a vector in that space; thus $$f(x)$$ has $$n$$ real (*respectively, complex) components, denoted

$f_{k}(x), \quad k=1,2, \ldots, n.$

Here we may treat $$f_{k}$$ as a mapping of $$A=D_{f}$$ into $$E^{1}(* \text { or } C) ;$$ it carries each point $$x \in A$$ into $$f_{k}(x),$$ the $$k$$ th component of $$f(x) .$$ In this manner, each function

$f : A \rightarrow E^{n}\left(^{*} C^{n}\right)$

uniquely determines $$n$$ scalar-valued maps

$f_{k} : A \rightarrow E^{1}(C)$

called the components of $$f .$$ Notation: $$f=\left(f_{1}, \ldots, f_{n}\right)$$.

Conversely, given $$n$$ arbitrary functions

$f_{k} : A \rightarrow E^{1}(C), \quad k=1,2, \ldots, n,$

one can define $$f : A \rightarrow E^{n}\left(^{*} C^{n}\right)$$ by setting

$f(x)=\left(f_{1}(x), f_{2}(x), \ldots, f_{n}(x)\right).$

Then obviously $$f=\left(f_{1}, f_{2}, \ldots, f_{n}\right) .$$ Thus the $$f_{k}$$ in turn determine $$f$$ uniquely. To define a function $$f : A \rightarrow E^{n}\left(^{*} C^{n}\right)$$ means to give its $$n$$ components $$f_{k}$$. Note that

$f(x)=\left(f_{1}(x), \ldots, f_{n}(x)\right)=\sum_{k=1}^{n} \overline{e}_{k} f_{k}(x), \quad \text{ i.e., } f=\sum_{k=1}^{n} \overline{e}_{k} f_{k}$

where the $$\overline{e}_{k}$$ are the $$n$$ basic unit vectors; see Chapter 3, §1-3, Theorem 2. Our next theorem shows that the limits and continuity of $$f$$ reduce to those of the $$f_{k} .$$
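The decomposition $$f=\sum_{k=1}^{n} \overline{e}_{k} f_{k}$$ can be sketched numerically (an illustration of ours, not from the text), here in $$E^{3}$$ with plain tuples as vectors:

```python
# f : E^1 -> E^3 with components f1(x) = x, f2(x) = x^2, f3(x) = x^3.
def f(x):
    return (x, x ** 2, x ** 3)

def component(f, k):
    # f_k carries each x to the k-th component of f(x) (0-based here).
    return lambda x: f(x)[k]

# The basic unit vectors e_1, e_2, e_3.
e = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

def recombine(x):
    # Sum e_k * f_k(x), computed coordinate by coordinate.
    comps = [component(f, k)(x) for k in range(3)]
    return tuple(sum(e[k][i] * comps[k] for k in range(3)) for i in range(3))

print(f(2.0))          # (2.0, 4.0, 8.0)
print(recombine(2.0))  # the same vector, rebuilt from its components
```

The components $$f_{k}$$ determine $$f$$ uniquely: recombining them via the unit vectors reproduces $$f(x)$$ exactly.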

Theorem $$\PageIndex{2}$$

(componentwise continuity and limits). For any function $$f : A \rightarrow E^{n}\left(* C^{n}\right),$$ with $$A \subseteq(S, \rho)$$ and with $$f=\left(f_{1}, \ldots, f_{n}\right),$$ we have that

(i) $$f$$ is continuous at $$p(p \in A)$$ iff all its components $$f_{k}$$ are, and

(ii) $$f(x) \rightarrow \overline{q}$$ as $$x \rightarrow p(p \in S)$$ iff

$f_{k}(x) \rightarrow q_{k} \text{ as } x \rightarrow p \quad(k=1,2, \ldots, n),$

i.e., iff each $$f_{k}$$ has, as its limit at $$p,$$ the corresponding component of $$\overline{q} .$$

Similar results hold for relative continuity and limits over a path $$B \subseteq A$$.

We prove (ii). If $$f(x) \rightarrow \overline{q}$$ as $$x \rightarrow p$$ then, by definition,

$(\forall \varepsilon>0)(\exists \delta>0)\left(\forall x \in A \cap G_{\neg p}(\delta)\right) \quad \varepsilon>|f(x)-\overline{q}|=\sqrt{\sum_{k=1}^{n}\left|f_{k}(x)-q_{k}\right|^{2}};$

in turn, the right-hand side of the inequality given above is no less than each

$\left|f_{k}(x)-q_{k}\right|, \quad k=1,2, \ldots, n.$

Thus

$(\forall \varepsilon>0)(\exists \delta>0)\left(\forall x \in A \cap G_{\neg p}(\delta)\right) \quad\left|f_{k}(x)-q_{k}\right|<\varepsilon;$

i.e., $$f_{k}(x) \rightarrow q_{k}, k=1, \ldots, n.$$

Conversely, if each $$f_{k}(x) \rightarrow q_{k},$$ then Theorem 1 (ii) yields

$\sum_{k=1}^{n} \overline{e}_{k} f_{k}(x) \rightarrow \sum_{k=1}^{n} \overline{e}_{k} q_{k}.$

By formula $$(1),$$ then, $$f(x) \rightarrow \overline{q}$$ (for $$\sum_{k=1}^{n} \overline{e}_{k} q_{k}=\overline{q} ) .$$ Thus (ii) is proved; similarly for (i) and for relative limits and continuity.

Note 4. Again, Theorem 2 holds also for $$p=\pm \infty$$ (but not for infinite $$q )$$.

Note 5. A complex function $$f : A \rightarrow C$$ may be treated as $$f : A \rightarrow E^{2}$$. Thus it has two real components: $$f=\left(f_{1}, f_{2}\right) .$$ Traditionally, $$f_{1}$$ and $$f_{2}$$ are called the real and imaginary parts of $$f,$$ also denoted by $$f_{\text { re }}$$ and $$f_{\text { im }},$$ so

$f=f_{\mathrm{re}}+i \cdot f_{\mathrm{im}}.$

By Theorem $$2, f$$ is continuous at $$p$$ iff $$f_{\text { re }}$$ and $$f_{\text { im }}$$ are.

Example $$\PageIndex{1}$$

The complex exponential is the function $$f : E^{1} \rightarrow C$$ defined by

$f(x)=\cos x+i \cdot \sin x, \text{ also written } f(x)=e^{x i}.$

As we shall see later, the sine and the cosine functions are continuous. Hence so is $$f$$ by Theorem $$2 .$$
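One can check the identity $$f(x)=\cos x+i \sin x=e^{x i}$$ numerically with Python's standard library (this uses the built-in complex exponential `cmath.exp`, not a construction from the text):

```python
import cmath
import math

# Compare cos x + i sin x against exp(ix) at several points.
for x in (0.0, 1.0, math.pi / 3, math.pi):
    lhs = complex(math.cos(x), math.sin(x))
    rhs = cmath.exp(1j * x)
    print(abs(lhs - rhs))  # ~0, up to floating-point rounding
```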

III. Next, consider functions whose domain is a set in $$E^{n}\left(^{*} \text{ or } C^{n}\right).$$ We call them functions of $$n$$ real (*or complex) variables, treating $$\overline{x}=\left(x_{1}, \ldots, x_{n}\right)$$ as a variable $$n$$-tuple. The range space may be arbitrary.

In particular, a monomial in $$n$$ variables is a map on $$E^{n}\left(^{*} \text { or } C^{n}\right)$$ given by a formula of the form

$f(\overline{x})=a x_{1}^{m_{1}} x_{2}^{m_{2}} \cdots x_{n}^{m_{n}}=a \cdot \prod_{k=1}^{n} x_{k}^{m_{k}},$

where the $$m_{k}$$ are fixed integers $$\geq 0$$ and $$a \in E^{1}\left(^{*} \text{ or } a \in C\right).^{2}$$ If $$a \neq 0,$$ the sum $$m=\sum_{k=1}^{n} m_{k}$$ is called the degree of the monomial. Thus

$f(x, y, z)=3 x^{2} y z^{3}=3 x^{2} y^{1} z^{3}$

defines a monomial of degree $$6,$$ in three real (or complex) variables $$x, y, z$$. (We often write $$x, y, z$$ for $$x_{1}, x_{2}, x_{3} . )$$

A polynomial is any sum of a finite number of monomials; its degree is, by definition, that of its leading term, i.e., the one of highest degree. (There may be several such terms, of equal degree.) For example,

$f(x, y, z)=3 x^{2} y z^{3}-2 x y^{7}$

defines a polynomial of degree 8 in $$x, y, z .$$ Polynomials of degree 1 are sometimes called linear.

A rational function is the quotient $$f / g$$ of two polynomials $$f$$ and $$g$$ on $$E^{n}$$ $$\left(^{*} \mathrm{or} C^{n}\right)$$. Its domain consists of those points at which $$g$$ does not vanish. For example,

$h(x, y)=\frac{x^{2}-3 x y}{x y-1}$

defines a rational function on points $$(x, y),$$ with $$x y \neq 1 .$$ Polynomials and monomials are rational functions with denominator $$1 .$$
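As an illustration (ours, not from the text), the rational function $$h(x, y)$$ above can be sketched with its domain restriction made explicit: the function is simply undefined where $$x y=1$$.

```python
# h(x, y) = (x^2 - 3xy) / (xy - 1), defined only where xy != 1.
def h(x, y):
    denom = x * y - 1
    if denom == 0:
        raise ValueError("(x, y) is outside the domain: xy = 1")
    return (x ** 2 - 3 * x * y) / denom

print(h(2.0, 1.0))  # (4 - 6) / (2 - 1) = -2.0
```

At a point such as $$(1,1),$$ where the denominator vanishes, the sketch raises an error rather than returning a value.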

Theorem $$\PageIndex{3}$$

Any rational function (in particular, every polynomial) in one or several variables is continuous on all of its domain.

Proof

Consider first a monomial of the form

$f(\overline{x})=x_{k} \quad(k \text{ fixed });$

it is called the $$k$$ th projection map because it "projects" each $$\overline{x} \in E^{n}\left(^{*} C^{n}\right)$$ onto its $$k$$ th component $$x_{k}$$.

Given any $$\varepsilon>0$$ and $$\overline{p},$$ choose $$\delta=\varepsilon .$$ Then

$\left(\forall \overline{x} \in G_{\overline{p}}(\delta)\right) \quad|f(\overline{x})-f(\overline{p})|=\left|x_{k}-p_{k}\right| \leq \sqrt{\sum_{i=1}^{n}\left|x_{i}-p_{i}\right|^{2}}=\rho(\overline{x}, \overline{p})<\varepsilon.$

Hence by definition, $$f$$ is continuous at each $$\overline{p} .$$ Thus the theorem holds for projection maps.

However, any other monomial, given by

$f(\overline{x})=a x_{1}^{m_{1}} x_{2}^{m_{2}} \cdots x_{n}^{m_{n}},$

is the product of finitely many (namely, $$m=m_{1}+m_{2}+\cdots+m_{n}$$) projection maps, multiplied by a constant $$a$$. Thus by Theorem 1 (and Note 1), it is continuous. So also is any finite sum of monomials (i.e., any polynomial), and hence so is the quotient $$f / g$$ of two polynomials (i.e., any rational function) wherever it is defined, i.e., wherever the denominator does not vanish. $$\square$$

IV. For functions on $$E^{n}\left(^{*} \text{ or } C^{n}\right),$$ we often consider relative limits over a line of the form

$\overline{x}=\overline{p}+t \overline{e}_{k} \text{ (parallel to the } k \text{th axis, through } \overline{p});$

see Chapter 3, §§4-6, Definition $$1 .$$ If $$f$$ is relatively continuous at $$\overline{p}$$ over that line, we say that $$f$$ is continuous at $$\overline{p}$$ in the $$k$$ th variable $$x_{k}$$ (because the other components of $$\overline{x}$$ remain constant, namely, equal to those of $$\overline{p},$$ as $$\overline{x}$$ runs over that line). As opposed to this, we say that $$f$$ is continuous at $$\overline{p}$$ in all $$n$$ variables jointly if it is continuous at $$\overline{p}$$ in the ordinary (not relative) sense. Similarly, we speak of limits in one variable, or in all of them jointly.

Since ordinary continuity implies relative continuity over any path, joint continuity in all $$n$$ variables always implies continuity in each variable separately, but the converse fails (see Problems 9 and 10 below); similarly for limits at $$\overline{p}$$.
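A standard counterexample (offered here as an illustration of ours; the text's own examples are in Problems 9 and 10) is the function $$f(x, y)=x y /\left(x^{2}+y^{2}\right)$$ for $$(x, y) \neq(0,0),$$ with $$f(0,0)=0.$$ It is continuous at $$\overline{0}$$ in each variable separately, yet not jointly:

```python
# f is 0 along each coordinate axis, but equals 1/2 along y = x,
# however close (x, y) comes to the origin.
def f(x, y):
    if x == 0 and y == 0:
        return 0.0
    return x * y / (x ** 2 + y ** 2)

# Along each axis through the origin, f is identically 0:
print(f(0.001, 0.0), f(0.0, 0.001))  # 0.0 0.0
# But along the line y = x, f stays at 1/2 near (0, 0):
print(f(0.001, 0.001))               # 0.5
```

Thus the relative limits over the two axes are both $$0$$, while the relative limit over the line $$y=x$$ is $$\frac{1}{2},$$ so no joint limit at $$\overline{0}$$ exists.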