
4.3: Connections to Linear Algebra


    We have already seen in earlier chapters that ideas from linear algebra crop up in our studies of differential equations. Namely, we solved eigenvalue problems associated with our systems of differential equations in order to determine the local behavior of dynamical systems near fixed points. In our study of boundary value problems we will find more connections with the theory of vector spaces. However, we will find that our problems lie in the realm of infinite dimensional vector spaces. In this section we will begin to see these connections.

    4.3.1 Eigenfunction Expansions for PDEs

    In the last section we sought solutions of the heat equation. Let’s formally write the heat equation in the form

    \[\dfrac{1}{k} u_{t}=L[u] \label{4.13} \]

    where

    \[L=\dfrac{\partial^{2}}{\partial x^{2}}. \nonumber \]

    \(L\) is another example of a linear differential operator. [See Section 1.1.2.] It is a differential operator because it involves derivative operators. We sometimes define \(D_{x}=\dfrac{\partial}{\partial x}\), so that \(L=D_{x}^{2}\). It is linear, because for functions \(f(x)\) and \(g(x)\) and constants \(\alpha, \beta\) we have

    \[L[\alpha f+\beta g]=\alpha L[f]+\beta L[g] \nonumber \]

    When solving the heat equation, using the method of separation of variables, we found an infinite number of product solutions \(u_{n}(x, t)=T_{n}(t) X_{n}(x)\). We did this by solving the boundary value problem

    \[L[X]=\lambda X, \quad X(0)=0=X(L) \label{4.14} \]

Here we see that an operator acts on an unknown function and spits out an unknown constant times that unknown function. Where have we done this before? This is the same form as \(A \mathbf{v}=\lambda \mathbf{v}\). So, we see that Equation (4.14) is really an eigenvalue problem for the operator \(L\) and given boundary conditions. When we solved the heat equation in the last section, we found the eigenvalues

    \[\lambda_{n}=-\left(\dfrac{n \pi}{L}\right)^{2} \nonumber \]

and the eigenfunctions

    \[X_{n}(x)=\sin \dfrac{n \pi x}{L}. \nonumber \]
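
    As a quick numerical check of this eigenvalue problem (an illustration, not part of the derivation), one can discretize \(L=D_{x}^{2}\) with the boundary conditions \(X(0)=0=X(L)\) by finite differences and compare the matrix eigenvalues with \(-(n \pi / L)^{2}\). A minimal sketch in Python, assuming NumPy is available; the grid size and interval length are illustrative choices:

    ```python
    import numpy as np

    # Discretize L = d^2/dx^2 on (0, L) with X(0) = X(L) = 0 using
    # second-order central differences on N interior grid points.
    Lx = 1.0                       # interval length (illustrative choice)
    N = 200                        # number of interior grid points
    h = Lx / (N + 1)               # grid spacing

    A = (np.diag(-2.0 * np.ones(N)) +
         np.diag(np.ones(N - 1), 1) +
         np.diag(np.ones(N - 1), -1)) / h**2

    # eigvalsh returns eigenvalues in ascending order; the ones of smallest
    # magnitude (closest to zero) approximate lambda_n = -(n*pi/L)^2.
    evals = np.linalg.eigvalsh(A)[::-1]

    for n in range(1, 4):
        print(n, evals[n - 1], -(n * np.pi / Lx) ** 2)
    ```

    With a few hundred grid points, the first few computed eigenvalues agree closely with \(-(n \pi / L)^{2}\), and the corresponding eigenvectors sample the sine eigenfunctions below.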

    We used these to construct the general solution that is essentially a linear combination over the eigenfunctions,

    \[u(x, t)=\sum_{n=1}^{\infty} T_{n}(t) X_{n}(x). \nonumber \]

    Note that these eigenfunctions live in an infinite dimensional function space.
    We would like to generalize this method to problems in which \(L\) comes from an assortment of linear differential operators. So, we consider the more general partial differential equation

    \[u_{t}=L[u], \quad a \leq x \leq b, \quad t>0, \nonumber \]

    satisfying the boundary conditions

    \[B[u](a, t)=0, \quad B[u](b, t)=0, \quad t>0, \nonumber \]

    and initial condition

    \[u(x, 0)=f(x), \quad a \leq x \leq b. \nonumber \]

    The form of the allowed boundary conditions \(B[u]\) will be taken up later. Also, we will later see specific examples and properties of linear differential operators that will allow for this procedure to work.

    We assume product solutions of the form \(u_{n}(x, t)=b_{n}(t) \phi_{n}(x)\), where the \(\phi_{n}\)'s are the eigenfunctions of the operator \(L\),

    \[L \phi_{n}=\lambda_{n} \phi_{n}, \quad n=1,2, \ldots \label{4.15} \]

    satisfying the boundary conditions

    \[B\left[\phi_{n}\right](a)=0, \quad B\left[\phi_{n}\right](b)=0 \label{4.16} \]

    Inserting the general solution

    \[u(x, t)=\sum_{n=1}^{\infty} b_{n}(t) \phi_{n}(x) \nonumber \]

    into the partial differential equation, we have

    \[\begin{aligned}
    u_{t} &=L[u] \\
    \dfrac{\partial}{\partial t} \sum_{n=1}^{\infty} b_{n}(t) \phi_{n}(x) &=L\left[\sum_{n=1}^{\infty} b_{n}(t) \phi_{n}(x)\right]
    \end{aligned} \label{4.17} \]

    On the left we differentiate term by term and on the right side we use the linearity of \(L\):

    \[\sum_{n=1}^{\infty} \dfrac{d b_{n}(t)}{d t} \phi_{n}(x)=\sum_{n=1}^{\infty} b_{n}(t) L\left[\phi_{n}(x)\right] \label{4.18} \]

    Now, we make use of the result of applying \(L\) to the eigenfunction \(\phi_{n}\):

    \[\sum_{n=1}^{\infty} \dfrac{d b_{n}(t)}{d t} \phi_{n}(x)=\sum_{n=1}^{\infty} b_{n}(t) \lambda_{n} \phi_{n}(x) \label{4.19} \]

    Comparing both sides, or using the linear independence of the eigenfunctions, we see that

    \[\dfrac{d b_{n}(t)}{d t}=\lambda_{n} b_{n}(t) \nonumber \]

    whose solution is

    \[b_{n}(t)=b_{n}(0) e^{\lambda_{n} t} \nonumber \]

    So, the general solution becomes

    \[u(x, t)=\sum_{n=1}^{\infty} b_{n}(0) e^{\lambda_{n} t} \phi_{n}(x). \nonumber \]

This solution satisfies, at least formally, the partial differential equation and the boundary conditions.

    Finally, we need to determine the \(b_{n}(0)\)'s, which are so far arbitrary. We use the initial condition \(u(x, 0)=f(x)\) to find that

    \[f(x)=\sum_{n=1}^{\infty} b_{n}(0) \phi_{n}(x). \nonumber \]

    So, given \(f(x)\), we are left with the problem of extracting the coefficients \(b_{n}(0)\) in an expansion of \(f\) in the eigenfunctions \(\phi_{n}\). We will see that this is related to Fourier series expansions, which we will take up in the next chapter.
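
    As an illustration of the full procedure (not part of the text), the following Python sketch expands a sample initial condition \(f(x)=x(L-x)\) in the sine eigenfunctions and evolves each mode by \(e^{\lambda_{n} t}\). The coefficients \(b_{n}(0)\) are extracted with the formula derived in Section 4.3.3 below, and \(k=1\) is assumed so that \(u_{t}=u_{x x}\); the helper names are illustrative.

    ```python
    import numpy as np
    from scipy.integrate import quad

    Lx = 1.0                                  # interval length (illustrative)
    f = lambda x: x * (Lx - x)                # sample initial condition

    def phi(n, x):                            # eigenfunctions sin(n*pi*x/L)
        return np.sin(n * np.pi * x / Lx)

    def b0(n):                                # b_n(0) = <phi_n, f> / <phi_n, phi_n>
        num, _ = quad(lambda x: phi(n, x) * f(x), 0, Lx, limit=200)
        den, _ = quad(lambda x: phi(n, x) ** 2, 0, Lx, limit=200)
        return num / den

    def u(x, t, N=50):                        # truncated sum of b_n(0) e^{lambda_n t} phi_n(x)
        return sum(b0(n) * np.exp(-(n * np.pi / Lx) ** 2 * t) * phi(n, x)
                   for n in range(1, N + 1))

    x = np.linspace(0, Lx, 5)
    print(u(x, 0.0))     # should approximately reproduce f(x)
    print(u(x, 0.1))     # decayed profile at a later time
    ```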

    4.3.2 Eigenfunction Expansions for Nonhomogeneous ODEs

Partial differential equations are not the only type of problem to which the method of eigenfunction expansions, as seen in the last section, applies. We can apply this method to nonhomogeneous two point boundary value problems for ordinary differential equations, assuming that we can solve the associated eigenvalue problem.

    Let’s begin with the nonhomogeneous boundary value problem:

    \[\begin{gathered}
    L[u]=f(x), \quad a \leq x \leq b \\
    B[u](a)=0, \quad B[u](b)=0
    \end{gathered} \label{4.20} \]

    We first solve the eigenvalue problem,

    \[\begin{gathered}
    L[\phi]=\lambda \phi, \quad a \leq x \leq b \\
    B[\phi](a)=0, \quad B[\phi](b)=0
    \end{gathered} \label{4.21} \]

    and obtain a family of eigenfunctions, \(\left\{\phi_{n}(x)\right\}_{n=1}^{\infty}\). Then we assume that \(u(x)\) can be represented as a linear combination of these eigenfunctions:

    \[u(x)=\sum_{n=1}^{\infty} b_{n} \phi_{n}(x) \nonumber \]

    Inserting this into the differential equation, we have

    \[\begin{aligned}
    f(x) &=L[u] \\
    &=L\left[\sum_{n=1}^{\infty} b_{n} \phi_{n}(x)\right] \\
    &=\sum_{n=1}^{\infty} b_{n} L\left[\phi_{n}(x)\right] \\
    &=\sum_{n=1}^{\infty} \lambda_{n} b_{n} \phi_{n}(x) \\
    &\equiv \sum_{n=1}^{\infty} c_{n} \phi_{n}(x)
    \end{aligned} \label{4.22} \]

    Therefore, we have to find the expansion coefficients \(c_{n}=\lambda_{n} b_{n}\) of the given \(f(x)\) in a series expansion over the eigenfunctions. This is similar to what we had found for the heat equation problem and its generalization in the last section.
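
    A small numerical sketch (an illustration with assumed data, not from the text) makes this concrete: take \(L=\dfrac{d^{2}}{d x^{2}}\) on \([0, \pi]\) with \(u(0)=u(\pi)=0\), so that \(\phi_{n}(x)=\sin n x\) and \(\lambda_{n}=-n^{2}\), and solve \(L[u]=f(x)\) for the sample choice \(f(x)=x\) by setting \(b_{n}=c_{n} / \lambda_{n}\).

    ```python
    import numpy as np
    from scipy.integrate import quad

    f = lambda x: x                               # sample right-hand side (illustrative)

    def c(n):                                     # c_n = <phi_n, f> / <phi_n, phi_n>
        num, _ = quad(lambda x: np.sin(n * x) * f(x), 0, np.pi, limit=200)
        den, _ = quad(lambda x: np.sin(n * x) ** 2, 0, np.pi, limit=200)
        return num / den

    def u_series(x, N=50):                        # u = sum b_n phi_n, with b_n = c_n / lambda_n
        return sum((c(n) / (-n ** 2)) * np.sin(n * x) for n in range(1, N + 1))

    # Direct solution of u'' = x, u(0) = u(pi) = 0, for comparison
    u_exact = lambda x: x * (x ** 2 - np.pi ** 2) / 6

    x = np.linspace(0, np.pi, 5)
    print(np.max(np.abs(u_series(x) - u_exact(x))))   # small truncation error
    ```

    Note that this division by \(\lambda_{n}\) requires all eigenvalues to be nonzero, which holds in this example.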

There are a lot of questions and details that have been glossed over in our formal derivations. Can we always find such eigenfunctions for a given operator? Do the infinite series expansions converge? Can we differentiate our expansions term by term? Can one find expansions that converge to given functions like \(f(x)\) above? We will begin to explore these questions in the case that the eigenfunctions are simple trigonometric functions like the \(\phi_{n}(x)=\sin \dfrac{n \pi x}{L}\) in the solution of the heat equation.

    4.3.3 Linear Vector Spaces

    Much of the discussion and terminology that we will use comes from the theory of vector spaces. Until now you may only have dealt with finite dimensional vector spaces in your classes. Even then, you might only be comfortable with two and three dimensions. We will review a little of what we know about finite dimensional spaces so that we can deal with the more general function spaces, which is where our eigenfunctions live.

    The notion of a vector space is a generalization of our three dimensional vector spaces. In three dimensions, we have things called vectors, which are arrows of a specific length and pointing in a given direction. To each vector, we can associate a point in a three dimensional Cartesian system. We just attach the tail of the vector \(\mathbf{v}\) to the origin and the head lands at \((x, y, z)\). We then use unit vectors \(\mathbf{i}, \mathbf{j}\) and \(\mathbf{k}\) along the coordinate axes to write

    \[\mathbf{v}=x \mathbf{i}+y \mathbf{j}+z \mathbf{k} \nonumber \]

Having defined vectors, we then learned how to add vectors and multiply vectors by numbers, or scalars. Under these operations, we expected to get back new vectors. Then we learned that there were two types of multiplication of vectors. We could multiply them to get either a scalar or a vector. This led to the dot and cross products, respectively. The dot product was useful for determining the length of a vector, the angle between two vectors, or whether the vectors were orthogonal.

    These notions were later generalized to spaces of more than three dimensions in your linear algebra class. The properties outlined roughly above need to be preserved. So, we have to start with a space of vectors and the operations between them. We also need a set of scalars, which generally come from some field. However, in our applications the field will either be the set of real numbers or the set of complex numbers.

    Definition 4.1. 

    A vector space \(V\) over a field \(F\) is a set that is closed under addition and scalar multiplication and satisfies the following conditions: For any \(u, v, w \in V\) and \(a, b \in F\)

    1. \(u+v=v+u\).
    2. \((u+v)+w=u+(v+w)\).
3. There exists a \(0\) such that \(0+v=v\).
    4. There exists a \(-v\) such that \(v+(-v)=0\).
    5. \(a(b v)=(a b) v\).
    6. \((a+b) v=a v+b v\).
7. \(a(u+v)=a u+a v\).
    8. \(1(v)=v\).

    Now, for an \(n\)-dimensional vector space, we have the idea that any vector in the space can be represented as the sum over \(n\) linearly independent vectors. Recall that a linearly independent set of vectors \(\left\{\mathbf{v}_{j}\right\}_{j=1}^{n}\) satisfies

    \[\sum_{j=1}^{n} c_{j} \mathbf{v}_{j}=\mathbf{0} \quad \Leftrightarrow \quad c_{j}=0. \nonumber \]
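
    For instance (a small illustrative check, not from the text), linear independence can be tested numerically by placing the vectors as the columns of a matrix and comparing its rank with the number of vectors:

    ```python
    import numpy as np

    # Columns are candidate vectors v_1, v_2, v_3 (illustrative values).
    V = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 0.0]])

    # The set is linearly independent iff the only solution of V c = 0 is c = 0,
    # i.e. iff the rank of V equals the number of columns.
    print(np.linalg.matrix_rank(V) == V.shape[1])   # False here, since v_3 = v_1 + v_2
    ```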

    This leads to the idea of a basis set. The standard basis in an \(n\)-dimensional vector space is a generalization of the standard basis in three dimensions \((\mathbf{i}, \mathbf{j}\) and \(\mathbf{k})\). We define

    \[\mathbf{e}_{k}=(0, \ldots, 0, \underbrace{1}_{k \text { th space }}, 0, \ldots, 0), \quad k=1, \ldots, n. \label{4.23} \]

    Then, we can expand any \(\mathbf{v} \in V\) as

    \[\mathbf{v}=\sum_{k=1}^{n} v_{k} \mathbf{e}_{k} \label{4.24} \]

    where the \(v_{k}\)'s are called the components of the vector in this basis and one can write \(\mathbf{v}\) as an \(n\)-tuple \(\left(v_{1}, v_{2}, \ldots, v_{n}\right)\).

    The only other thing we will need at this point is to generalize the dot product, or scalar product. Recall that there are two forms for the dot product in three dimensions. First, one has that

    \[\mathbf{u} \cdot \mathbf{v}=u v \cos \theta, \label{4.25} \]

where \(u\) and \(v\) denote the lengths of the vectors. The other form is the component form:

    \[\mathbf{u} \cdot \mathbf{v}=u_{1} v_{1}+u_{2} v_{2}+u_{3} v_{3}=\sum_{k=1}^{3} u_{k} v_{k} \label{4.26} \]

Of course, this form is easier to generalize. So, we define the scalar product between two \(n\)-dimensional vectors as

    \[<\mathbf{u}, \mathbf{v}>=\sum_{k=1}^{n} u_{k} v_{k} \label{4.27} \]

    Actually, there are a number of notations that are used in other texts. One can write the scalar product as \((\mathbf{u}, \mathbf{v})\) or even use the Dirac notation \(<\mathbf{u} \mid \mathbf{v}>\) for applications in quantum mechanics.

While it does not always make sense to talk about angles between general vectors in higher dimensional vector spaces, there is one concept that is useful. It is that of orthogonality, which in three dimensions is another way of saying that vectors are perpendicular to each other. So, we also say that vectors \(\mathbf{u}\) and \(\mathbf{v}\) are orthogonal if and only if \(<\mathbf{u}, \mathbf{v}>=0\). If \(\left\{\mathbf{a}_{k}\right\}_{k=1}^{n}\) is a set of basis vectors such that

    \[<\mathbf{a}_{j}, \mathbf{a}_{k}>=0, \quad k \neq j, \nonumber \]

then it is called an orthogonal basis. If in addition each basis vector is a unit vector, then one has an orthonormal basis.

Let \(\left\{\mathbf{a}_{k}\right\}_{k=1}^{n}\) be a set of basis vectors for the vector space \(V\). We know that any vector \(\mathbf{v}\) can be represented in terms of this basis, \(\mathbf{v}=\sum_{k=1}^{n} v_{k} \mathbf{a}_{k}\). If we know the basis and the vector, can we find the components? The answer is yes. We can use the scalar product of \(\mathbf{v}\) with each basis element \(\mathbf{a}_{j}\). So, we have for \(j=1, \ldots, n\)

    \[\begin{aligned}
    <\mathbf{a}_{j}, \mathbf{v}>&=<\mathbf{a}_{j}, \sum_{k=1}^{n} v_{k} \mathbf{a}_{k}>\\
    &=\sum_{k=1}^{n} v_{k}<\mathbf{a}_{j}, \mathbf{a}_{k}>
    \end{aligned} \label{4.28} \]

    Since we know the basis elements, we can easily compute the numbers

    \[A_{j k} \equiv<\mathbf{a}_{j}, \mathbf{a}_{k}> \nonumber \]

    and

    \[b_{j} \equiv<\mathbf{a}_{j}, \mathbf{v}> \nonumber \]

    Therefore, the system (4.28) for the \(v_{k}\)'s is a linear algebraic system, which takes the form \(A \mathbf{v}=\mathbf{b}\). However, if the basis is orthogonal, then the matrix \(A\) is diagonal and the system is easily solvable. We have that

    \[<\mathbf{a}_{j}, \mathbf{v}>=v_{j}<\mathbf{a}_{j}, \mathbf{a}_{j}>, \label{4.29} \]

    or

    \[v_{j}=\dfrac{<\mathbf{a}_{j}, \mathbf{v}>}{<\mathbf{a}_{j}, \mathbf{a}_{j}>} \label{4.30} \]

    In fact, if the basis is orthonormal, \(A\) is the identity matrix and the solution is simpler:

    \[v_{j}=<\mathbf{a}_{j}, \mathbf{v}> \label{4.31} \]
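
    The following short sketch (illustrative values, not from the text) carries out this component extraction for an orthogonal, but not orthonormal, basis of \(R^{3}\) using Equation (4.30):

    ```python
    import numpy as np

    # An orthogonal (not orthonormal) basis of R^3 and a target vector (illustrative)
    a1 = np.array([1.0,  1.0, 0.0])
    a2 = np.array([1.0, -1.0, 0.0])
    a3 = np.array([0.0,  0.0, 2.0])
    basis = [a1, a2, a3]
    v = np.array([3.0, 1.0, 4.0])

    # Components via v_j = <a_j, v> / <a_j, a_j>, Equation (4.30)
    comps = [np.dot(a, v) / np.dot(a, a) for a in basis]
    print(comps)                                        # [2.0, 1.0, 2.0]

    # Reconstruct v from its components to confirm the expansion
    print(sum(c * a for c, a in zip(comps, basis)))     # [3. 1. 4.]
    ```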

    We spent some time looking at this simple case of extracting the components of a vector in a finite dimensional space. The keys to doing this simply were to have a scalar product and an orthogonal basis set. These are the key ingredients that we will need in the infinite dimensional case. Recall that when we solved the heat equation, we had a function (vector) that we wanted to expand in a set of eigenfunctions (basis) and we needed to find the expansion coefficients (components). As you can see, we need to extend the concepts for finite dimensional spaces to their analogs in infinite dimensional spaces. Linear algebra will provide some of the backdrop for what is to follow: The study of many boundary value problems amounts to the solution of eigenvalue problems over infinite dimensional vector spaces (complete inner product spaces, the space of square integrable functions, or Hilbert spaces).

We will consider the space of functions of a certain type. They could be the space of continuous functions on \([0,1]\), the space of continuously differentiable functions, or the set of functions integrable from \(a\) to \(b\). Later, we will specify the types of functions needed. We will further need to be able to add functions and multiply them by scalars. So, we can easily obtain a vector space of functions.

    We will also need a scalar product defined on this space of functions. There are several types of scalar products, or inner products, that we can define. For a real vector space, we define

    Definition 4.2.

    An inner product \(<,>\) on a real vector space \(V\) is a mapping from \(V \times V\) into \(R\) such that for \(u, v, w \in V\) and \(\alpha \in R\) one has

    1. \(<u+v, w>=<u, w>+<v, w>\).
    2. \(<\alpha v, w>=\alpha<v, w>\).
    3. \(<v, w>=<w, v>\).
    4. \(<v, v>\geq 0\) and \(<v, v>=0\) iff \(v=0\).

A real vector space equipped with the above inner product leads to a real inner product space. A more general definition, with the third item replaced by \(\langle v, w\rangle=\overline{\langle w, v\rangle}\), is needed for complex inner product spaces.

    For the time being, we are dealing just with real valued functions. We need an inner product appropriate for such spaces. One such definition is the following. Let \(f(x)\) and \(g(x)\) be functions defined on \([a, b]\). Then, we define the inner product, if the integral exists, as

    \[<f, g>=\int_{a}^{b} f(x) g(x) d x. \label{4.32} \]
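
    This inner product is straightforward to compute numerically. A minimal sketch, assuming SciPy is available; the sample functions are illustrative:

    ```python
    import numpy as np
    from scipy.integrate import quad

    # Inner product <f, g> = integral from a to b of f(x) g(x) dx, Equation (4.32)
    def inner(f, g, a, b):
        value, _ = quad(lambda x: f(x) * g(x), a, b)
        return value

    # Illustrative check: sin x and cos x are orthogonal on [-pi, pi]
    print(inner(np.sin, np.cos, -np.pi, np.pi))   # approximately 0
    print(inner(np.sin, np.sin, -np.pi, np.pi))   # approximately pi
    ```

    The second value anticipates the norm computation \(\left\|\sin n x\right\|^{2}=\pi\) carried out at the end of this section.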

So far, we have function spaces equipped with an inner product. Can we find a basis for the space? For an \(n\)-dimensional space we need \(n\) basis vectors.

    For an infinite dimensional space, how many will we need? How do we know when we have enough? We will think about those things later.

    Let's assume that we have a basis of functions \(\left\{\phi_{n}(x)\right\}_{n=1}^{\infty}\). Given a function \(f(x)\), how can we go about finding the components of \(f\) in this basis? In other words, let

    \[f(x)=\sum_{n=1}^{\infty} c_{n} \phi_{n}(x) \nonumber \]

    How do we find the \(c_{n}\)'s? Does this remind you of the problem we had earlier? Formally, we take the inner product of \(f\) with each \(\phi_{j}\), to find

    \[\begin{aligned}
    <\phi_{j}, f>&=<\phi_{j}, \sum_{n=1}^{\infty} c_{n} \phi_{n}>\\
    &=\sum_{n=1}^{\infty} c_{n}<\phi_{j}, \phi_{n}>
    \end{aligned} \label{4.33} \]

    If our basis is an orthogonal basis, then we have

    \[<\phi_{j}, \phi_{n}>=N_{j} \delta_{j n} \label{4.34} \]

    where \(\delta_{i j}\) is the Kronecker delta defined as

    \[\delta_{i j}=\left\{\begin{array}{l}
    0, i \neq j \\
    1, i=j
    \end{array}\right. \label{4.35} \]

    Thus, we have

    \[\begin{aligned}
    <\phi_{j}, f>&=\sum_{n=1}^{\infty} c_{n}<\phi_{j}, \phi_{n}>\\
    &=\sum_{n=1}^{\infty} c_{n} N_{j} \delta_{j n} \\
    &=c_{1} N_{j} \delta_{j 1}+c_{2} N_{j} \delta_{j 2}+\ldots+c_{j} N_{j} \delta_{j j}+\ldots \\
    &=c_{j} N_{j}
    \end{aligned} \label{4.36} \]

    So, the expansion coefficient is

    \[c_{j}=\dfrac{<\phi_{j}, f>}{N_{j}}=\dfrac{<\phi_{j}, f>}{<\phi_{j}, \phi_{j}>} \nonumber \]

    We summarize this important result:

    Generalized Basis Expansion

    Let \(f(x)\) be represented by an expansion over a basis of orthogonal functions, \(\left\{\phi_{n}(x)\right\}_{n=1}^{\infty}\)

    \[f(x)=\sum_{n=1}^{\infty} c_{n} \phi_{n}(x). \nonumber \]

    Then, the expansion coefficients are formally determined as

    \[c_{n}=\dfrac{<\phi_{n}, f>}{<\phi_{n}, \phi_{n}>}. \nonumber \]
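
    As a concrete illustration (not part of the text), the sketch below applies this formula to the sample function \(f(x)=x\) on \([-\pi, \pi]\) with the basis \(\phi_{n}(x)=\sin n x\), which is shown just below to be orthogonal on that interval; the known closed form \(c_{n}=2(-1)^{n+1} / n\) is included for comparison.

    ```python
    import numpy as np
    from scipy.integrate import quad

    f = lambda x: x                               # sample function (illustrative)
    phi = lambda n, x: np.sin(n * x)              # orthogonal basis on [-pi, pi]

    def coeff(n):                                 # c_n = <phi_n, f> / <phi_n, phi_n>
        num, _ = quad(lambda x: phi(n, x) * f(x), -np.pi, np.pi, limit=200)
        den, _ = quad(lambda x: phi(n, x) ** 2, -np.pi, np.pi, limit=200)
        return num / den

    for n in range(1, 5):
        print(n, coeff(n), 2 * (-1) ** (n + 1) / n)   # numerical vs. closed form

    # Partial-sum reconstruction at a sample point (convergence is slow, O(1/N))
    x0 = 1.0
    print(sum(coeff(n) * phi(n, x0) for n in range(1, 51)), f(x0))
    ```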

    In our preparation for later sections, let's determine if the set of functions \(\phi_{n}(x)=\sin n x\) for \(n=1,2, \ldots\) is orthogonal on the interval \([-\pi, \pi]\). We need to show that \(<\phi_{n}, \phi_{m}>=0\) for \(n \neq m\). Thus, we have for \(n \neq m\)

    \[\begin{aligned}
    <\phi_{n}, \phi_{m}>&=\int_{-\pi}^{\pi} \sin n x \sin m x d x \\
    &=\dfrac{1}{2} \int_{-\pi}^{\pi}[\cos (n-m) x-\cos (n+m) x] d x \\
    &=\dfrac{1}{2}\left[\dfrac{\sin (n-m) x}{n-m}-\dfrac{\sin (n+m) x}{n+m}\right]_{-\pi}^{\pi}=0
    \end{aligned} \label{4.37} \]

Here we have made use of a trigonometric identity for the product of two sines. We recall how this identity is derived from the addition formulae for cosines:

\[\begin{aligned}
    &\cos (A+B)=\cos A \cos B-\sin A \sin B \\
    &\cos (A-B)=\cos A \cos B+\sin A \sin B
    \end{aligned} \nonumber \]

    Adding, or subtracting, these equations gives

\[\begin{aligned}
    &2 \cos A \cos B=\cos (A+B)+\cos (A-B), \\
    &2 \sin A \sin B=\cos (A-B)-\cos (A+B) .
    \end{aligned} \nonumber \]

So, we have determined that the set \(\phi_{n}(x)=\sin n x\) for \(n=1,2, \ldots\) is an orthogonal set of functions on the interval \([-\pi, \pi]\). Just as with vectors in three dimensions, we can normalize our basis functions to arrive at an orthonormal basis, \(<\phi_{n}, \phi_{m}>=\delta_{n m}, m, n=1,2, \ldots\) This is simply done by dividing by the length of the vector. Recall that the length of a vector was obtained as \(v=\sqrt{\mathbf{v} \cdot \mathbf{v}}\). In the same way, we define the norm of our functions by

    \[\|f\|=\sqrt{<f, f>}. \nonumber \]

    Note, there are many types of norms, but this will be sufficient for us.

For the above basis of sine functions, we want to first compute the norm of each function. Then we would like to find a new basis from this one such that each basis function has unit length, and therefore forms an orthonormal basis. We first compute

    \[\begin{aligned}
    \left\|\phi_{n}\right\|^{2} &=\int_{-\pi}^{\pi} \sin ^{2} n x d x \\
    &=\dfrac{1}{2} \int_{-\pi}^{\pi}[1-\cos 2 n x] d x \\
    &=\dfrac{1}{2}\left[x-\dfrac{\sin 2 n x}{2 n}\right]_{-\pi}^{\pi}=\pi
    \end{aligned} \label{4.38} \]

    We have found for our example that

    \[<\phi_{n}, \phi_{m}>=\pi \delta_{n m} \label{4.39} \]

    and that \(\left\|\phi_{n}\right\|=\sqrt{\pi}\). Defining \(\psi_{n}(x)=\dfrac{1}{\sqrt{\pi}} \phi_{n}(x)\), we have normalized the \(\phi_{n}\)'s and have obtained an orthonormal basis of functions on \([-\pi, \pi]\).
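
    A quick numerical confirmation (an illustration, assuming SciPy is available) that the rescaled functions \(\psi_{n}(x)=\dfrac{1}{\sqrt{\pi}} \sin n x\) are indeed orthonormal on \([-\pi, \pi]\):

    ```python
    import numpy as np
    from scipy.integrate import quad

    psi = lambda n, x: np.sin(n * x) / np.sqrt(np.pi)   # normalized basis functions

    # <psi_n, psi_m> should equal the Kronecker delta delta_nm.
    for n in range(1, 4):
        for m in range(1, 4):
            val, _ = quad(lambda x: psi(n, x) * psi(m, x), -np.pi, np.pi)
            print(n, m, round(val, 10))
    ```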

    Expansions of functions in trigonometric bases occur often and originally resulted from the study of partial differential equations. They have been named Fourier series and will be the topic of the next chapter.


    This page titled 4.3: Connections to Linear Algebra is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Russell Herman via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
