5.1: Function Spaces

Earlier we studied finite dimensional vector spaces. Given a set of basis vectors, $$\left\{\mathbf{a}_{k}\right\}_{k=1}^{n}$$, in a vector space $$V$$, we showed that we can expand any vector $$\mathbf{v} \in V$$ in terms of this basis, $$\mathbf{v}=\sum_{k=1}^{n} v_{k} \mathbf{a}_{k}$$. We then spent some time looking at the simple case of extracting the components $$v_{k}$$ of the vector. The keys to doing this simply were having a scalar product and an orthogonal basis set. These are also the key ingredients that we will need in the infinite dimensional case. In fact, we had already done this when we studied Fourier series.

Recall when we found Fourier trigonometric series representations of functions, we started with a function (vector) that we wanted to expand in a set of trigonometric functions (basis) and we sought the Fourier coefficients (components). In this section we will extend our notions from finite dimensional spaces to infinite dimensional spaces and we will develop the needed background in which to think about more general Fourier series expansions. This conceptual framework is very important in other areas in mathematics (such as ordinary and partial differential equations) and physics (such as quantum mechanics and electrodynamics).

We will consider various infinite dimensional function spaces. Functions in these spaces differ in their properties. For example, we could consider the space of continuous functions on $$[0,1]$$, the space of continuously differentiable functions, or the set of functions integrable from $$a$$ to $$b$$. As you will see, there are many types of function spaces. In order to view these spaces as vector spaces, we will need to be able to add functions and multiply them by scalars in such a way that they satisfy the definition of a vector space given in Chapter 3.

We will also need a scalar product defined on this space of functions. There are several types of scalar products, or inner products, that we can define. An inner product $$\langle\cdot, \cdot\rangle$$ on a real vector space $$V$$ is a mapping from $$V \times V$$ into $$R$$ such that for $$u, v, w \in V$$ and $$\alpha \in R$$ one has

1. $$\langle v, v\rangle \geq 0$$ and $$\langle v, v\rangle=0$$ iff $$v=0$$.
2. $$\langle v, w\rangle=\langle w, v\rangle$$.
3. $$\langle\alpha v, w\rangle=\alpha\langle v, w\rangle$$.
4. $$\langle u+v, w\rangle=\langle u, w\rangle+\langle v, w\rangle$$.

A real vector space equipped with the above inner product is called a real inner product space. For complex inner product spaces the above properties hold with the second property replaced with $$\langle v, w\rangle=\overline{\langle w, v\rangle}$$.
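Before moving on to function spaces, these axioms can be sanity-checked for the familiar dot product on $$R^3$$. A quick Python sketch (the variable names are mine, chosen only for illustration):

```python
import numpy as np

# A concrete check: the standard dot product on R^3 satisfies all four axioms.
rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 3))
alpha = 2.5

pos = np.dot(v, v) >= 0                                          # property 1: positivity
sym = np.isclose(np.dot(v, w), np.dot(w, v))                     # property 2: symmetry
hom = np.isclose(np.dot(alpha * v, w), alpha * np.dot(v, w))     # property 3: homogeneity
lin = np.isclose(np.dot(u + v, w), np.dot(u, w) + np.dot(v, w))  # property 4: additivity
print(pos, sym, hom, lin)  # all True
```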

For the time being, we will only deal with real valued functions and, thus, we will need an inner product appropriate for such spaces. One such definition is the following. Let $$f(x)$$ and $$g(x)$$ be functions defined on $$[a, b]$$ and introduce the weight function $$\sigma(x)>0$$. Then, we define the inner product, if the integral exists, as $\langle f, g\rangle=\int_{a}^{b} f(x) g(x) \sigma(x) d x .\label{eq:1}$ Spaces in which $$\langle f, f\rangle<\infty$$ under this inner product are called the space of square integrable functions on $$(a, b)$$ under weight $$\sigma$$ and denoted as $$L_{\sigma}^{2}(a, b)$$. In what follows, we will assume for simplicity that $$\sigma(x)=1$$. This is possible to do by using a change of variables.
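This weighted inner product is easy to approximate numerically. A minimal Python sketch using a trapezoidal-rule quadrature (the helper name `inner` is my own, not from the text):

```python
import numpy as np

def inner(f, g, a, b, sigma=lambda x: 1.0, n=10_001):
    """Approximate <f, g> = integral of f(x) g(x) sigma(x) on [a, b] by the trapezoidal rule."""
    x = np.linspace(a, b, n)
    y = f(x) * g(x) * sigma(x)
    return float(np.sum(0.5 * (y[:-1] + y[1:])) * (x[1] - x[0]))

# With sigma = 1, <sin x, sin x> on [-pi, pi] should equal pi
print(inner(np.sin, np.sin, -np.pi, np.pi))  # ≈ 3.14159
```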

Now that we have function spaces equipped with an inner product, we seek a basis for the space. For an $$n$$-dimensional space we need $$n$$ basis vectors. For an infinite dimensional space, how many will we need? How do we know when we have enough? We will provide some answers to these questions later.

Let’s assume that we have a basis of functions $$\left\{\phi_{n}(x)\right\}_{n=1}^{\infty}$$. Given a function $$f(x)$$, how can we go about finding the components of $$f$$ in this basis? In other words, let $f(x)=\sum_{n=1}^{\infty} c_{n} \phi_{n}(x) .\nonumber$ How do we find the $$c_{n}$$ ’s? Does this remind you of Fourier series expansions? Does it remind you of the problem we had earlier for finite dimensional spaces? [You may want to review the discussion at the end of Section ?? as you read the next derivation.]

Formally, we take the inner product of $$f$$ with each $$\phi_{j}$$ and use the properties of the inner product to find \begin{align} \left\langle\phi_{j}, f\right\rangle &=\left\langle\phi_{j}, \sum_{n=1}^{\infty} c_{n} \phi_{n}\right\rangle\nonumber \\ &=\sum_{n=1}^{\infty} c_{n}\left\langle\phi_{j}, \phi_{n}\right\rangle .\label{eq:2} \end{align} If the basis is an orthogonal basis, then we have $\left\langle\phi_{j}, \phi_{n}\right\rangle=N_{j} \delta_{j n},\label{eq:3}$ where $$\delta_{j n}$$ is the Kronecker delta. Recall from Chapter 3 that the Kronecker delta is defined as $\delta_{i j}= \begin{cases}0, & i \neq j \\ 1, & i=j .\end{cases}\label{eq:4}$

Note

For the generalized Fourier series expansion $$f(x)=\sum_{n=1}^{\infty} c_{n} \phi_{n}(x)$$, we have determined the generalized Fourier coefficients to be $$c_{j}=\left\langle\phi_{j}, f\right\rangle /\left\langle\phi_{j}, \phi_{j}\right\rangle$$.

Continuing with the derivation, we have \begin{align} \left\langle\phi_{j}, f\right\rangle &=\sum_{n=1}^{\infty} c_{n}\left\langle\phi_{j}, \phi_{n}\right\rangle\nonumber \\ &=\sum_{n=1}^{\infty} c_{n} N_{j} \delta_{j n}\label{eq:5} \end{align} Expanding the sum, we see that the Kronecker delta picks out one nonzero term: \begin{align} \left\langle\phi_{j}, f\right\rangle &=c_{1} N_{j} \delta_{j 1}+c_{2} N_{j} \delta_{j 2}+\ldots+c_{j} N_{j} \delta_{j j}+\ldots\nonumber \\ &=c_{j} N_{j}\label{eq:6} \end{align} So, the expansion coefficients are $c_{j}=\frac{\left\langle\phi_{j}, f\right\rangle}{N_{j}}=\frac{\left\langle\phi_{j}, f\right\rangle}{\left\langle\phi_{j}, \phi_{j}\right\rangle} \quad j=1,2, \ldots\nonumber$

We summarize this important result:

Generalized Basis Expansion

Let $$f(x)$$ be represented by an expansion over a basis of orthogonal functions, $$\left\{\phi_{n}(x)\right\}_{n=1}^{\infty}$$, $f(x)=\sum_{n=1}^{\infty} c_{n} \phi_{n}(x) .\nonumber$ Then, the expansion coefficients are formally determined as $c_{n}=\frac{\left\langle\phi_{n}, f\right\rangle}{\left\langle\phi_{n}, \phi_{n}\right\rangle} .\nonumber$ This will be referred to as the general Fourier series expansion and the $$c_{n}$$’s are called the Fourier coefficients. Technically, equality only holds when the infinite series converges to the given function on the interval of interest.
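The boxed formula can be tried out numerically. The sketch below (the helper name `coeff` is my own) computes the coefficients of $$f(x)=x$$ over the sine basis on $$[-\pi, \pi]$$, for which a short integration by parts gives the exact values $$c_{n}=2(-1)^{n+1} / n$$:

```python
import numpy as np

def coeff(f, phi, a, b, n=20_001):
    """c = <phi, f> / <phi, phi>, with both integrals done by the trapezoidal rule."""
    x = np.linspace(a, b, n)
    w = np.full(n, x[1] - x[0])
    w[0] *= 0.5
    w[-1] *= 0.5                        # trapezoid weights
    return float(np.sum(phi(x) * f(x) * w) / np.sum(phi(x) ** 2 * w))

f = lambda x: x                         # the function to expand
c = [coeff(f, lambda x, k=k: np.sin(k * x), -np.pi, np.pi) for k in range(1, 6)]
print(c)  # ≈ [2, -1, 2/3, -1/2, 2/5]
```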

Example $$\PageIndex{1}$$

Find the coefficients of the Fourier sine series expansion of $$f(x)$$, given by $f(x)=\sum_{n=1}^{\infty} b_{n} \sin n x, \quad x \in[-\pi, \pi] .\nonumber$

Solution

In the last chapter we established that the set of functions $$\phi_{n}(x)=\sin n x$$ for $$n=1,2, \ldots$$ is orthogonal on the interval $$[-\pi, \pi]$$. Recall that, using trigonometric identities, we have for $$n, m=1,2, \ldots$$ $\left\langle\phi_{n}, \phi_{m}\right\rangle=\int_{-\pi}^{\pi} \sin n x \sin m x d x=\pi \delta_{n m} .\label{eq:7}$ So, the $$\phi_{n}$$’s form an orthogonal set of functions on $$[-\pi, \pi]$$ with $$N_{n}=\left\langle\phi_{n}, \phi_{n}\right\rangle=\pi$$.
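This orthogonality relation, including the normalization $$N_{n}=\pi$$ on the diagonal, is easy to spot-check numerically. A brief sketch using the trapezoidal rule (the helper name is mine):

```python
import numpy as np

def inner_sin(n, m, pts=10_001):
    """<sin nx, sin mx> on [-pi, pi], approximated by the trapezoidal rule."""
    x = np.linspace(-np.pi, np.pi, pts)
    y = np.sin(n * x) * np.sin(m * x)
    return float(np.sum(0.5 * (y[:-1] + y[1:])) * (x[1] - x[0]))

# Gram matrix of the first three sine basis functions
gram = np.array([[inner_sin(n, m) for m in range(1, 4)] for n in range(1, 4)])
print(np.round(gram, 6))  # ≈ pi on the diagonal, 0 off the diagonal
```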

We determine the expansion coefficients using $b_{n}=\frac{\left\langle\phi_{n}, f\right\rangle}{N_{n}}=\frac{\left\langle\phi_{n}, f\right\rangle}{\left\langle\phi_{n}, \phi_{n}\right\rangle}=\frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin n x d x .\nonumber$ Does this result look familiar?

Just as with vectors in three dimensions, we can normalize these basis functions to arrive at an orthonormal basis. This is simply done by dividing by the length of the vector. Recall that the length of a vector is obtained as $$v=\sqrt{\mathbf{v} \cdot \mathbf{v}}$$. In the same way, we define the norm of a function by $\|f\|=\sqrt{\langle f, f\rangle} .\nonumber$ Note that there are many types of norms, but this induced norm will be sufficient for our purposes.$$^{1}$$

For this example, the norms of the basis functions are $$\left\|\phi_{n}\right\|=\sqrt{\pi}$$. Defining $$\psi_{n}(x)=\frac{1}{\sqrt{\pi}} \phi_{n}(x)$$, we obtain an orthonormal basis of functions on $$[-\pi, \pi]$$.

We can also use the normalized basis to determine the expansion coefficients. Writing $$f(x)=\sum_{n=1}^{\infty} \tilde{b}_{n} \psi_{n}(x)$$, we have $$N_{n}=\left\langle\psi_{n}, \psi_{n}\right\rangle=1$$ and thus $\tilde{b}_{n}=\left\langle\psi_{n}, f\right\rangle=\frac{1}{\sqrt{\pi}} \int_{-\pi}^{\pi} f(x) \sin n x d x .\nonumber$ These coefficients differ from the earlier $$b_{n}$$’s by a factor of $$\sqrt{\pi}$$, reflecting the rescaling of the basis functions.
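A quick numerical check that the $$\psi_{n}$$’s really are orthonormal (a sketch only; the trapezoidal-rule helper and names are my own):

```python
import numpy as np

def inner(f, g, a=-np.pi, b=np.pi, n=10_001):
    """Trapezoidal-rule approximation of <f, g> on [a, b]."""
    x = np.linspace(a, b, n)
    y = f(x) * g(x)
    return float(np.sum(0.5 * (y[:-1] + y[1:])) * (x[1] - x[0]))

# Normalized sine basis psi_k(x) = sin(kx) / sqrt(pi)
psi = lambda k: (lambda x: np.sin(k * x) / np.sqrt(np.pi))

print(inner(psi(2), psi(2)))  # ≈ 1   (unit norm)
print(inner(psi(1), psi(2)))  # ≈ 0   (orthogonality)
```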

Footnotes

[1] The norm defined here is the natural, or induced, norm on the inner product space. Norms are a generalization of the concept of the length of a vector. Denoting by $$||\mathbf{v}||$$ the norm of $$\mathbf{v}$$, it must satisfy the following properties:

1. $$||\mathbf{v}||\geq 0$$. $$||\mathbf{v}||=0$$ if and only if $$\mathbf{v}=\mathbf{0}$$.
2. $$||\alpha\mathbf{v}||=|\alpha|\:||\mathbf{v}||$$.
3. $$||\mathbf{u}+\mathbf{v}||\leq ||\mathbf{u}||+||\mathbf{v}||$$.

Examples of common norms are

1. Euclidean norm: $||\mathbf{v}||=\sqrt{v_1^2+\cdots +v_n^2}.\nonumber$
2. Taxicab norm: $||\mathbf{v}||=|v_1|+\cdots +|v_n|.\nonumber$
3. $$L^p$$ norm: $||f||=\left(\int |f(x)|^p dx\right)^{\frac{1}{p}}.\nonumber$
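These three norms can be computed directly. A small sketch (function names are mine), checking for instance that the $$L^2$$ norm of $$\sin x$$ on $$[-\pi, \pi]$$ is $$\sqrt{\pi}$$:

```python
import numpy as np

v = np.array([3.0, 4.0])
euclidean = np.sqrt(np.sum(v ** 2))   # Euclidean norm of (3, 4) = 5
taxicab = np.sum(np.abs(v))           # taxicab norm of (3, 4) = 7

def lp_norm(f, a, b, p=2, n=10_001):
    """(integral of |f(x)|^p on [a, b])^(1/p), approximated by the trapezoidal rule."""
    x = np.linspace(a, b, n)
    y = np.abs(f(x)) ** p
    return float((np.sum(0.5 * (y[:-1] + y[1:])) * (x[1] - x[0])) ** (1.0 / p))

print(euclidean, taxicab)             # 5.0 7.0
print(lp_norm(np.sin, -np.pi, np.pi)) # ≈ sqrt(pi) ≈ 1.77245
```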

This page titled 5.1: Function Spaces is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Russell Herman via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.