
7.1: Classical Orthogonal Polynomials


We begin by noting that the functions \(\left\{1, x, x^{2}, \ldots\right\}\) form a linearly independent set. In fact, by the Stone-Weierstrass Approximation Theorem, this set is a basis of \(L_{\sigma}^{2}(a, b)\), the space of square integrable functions over the interval \([a, b]\) relative to the weight \(\sigma(x)\). We are familiar with expanding functions over this basis, since such expansions are just power series representations of the functions,

    \[f(x) \sim \sum_{n=0}^{\infty} c_{n} x^{n} . \nonumber \]

However, this basis is not an orthogonal set of basis functions. One can easily see this by integrating the product of two even, or two odd, basis functions with \(\sigma(x)=1\) and \((a, b)=(-1,1)\). For example,

    \[\left\langle 1, x^{2}\right\rangle=\int_{-1}^{1} x^{0} x^{2} d x=\dfrac{2}{3} . \nonumber \]
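Such inner products are easy to evaluate symbolically; the following SymPy sketch (the helper name `inner` is my own, not from the text) confirms the computation above:

```python
import sympy as sp

x = sp.symbols('x')

# Inner product <f, g> on (-1, 1) with weight sigma(x) = 1
def inner(f, g):
    return sp.integrate(f * g, (x, -1, 1))

# <1, x^2> = 2/3, so 1 and x^2 are not orthogonal
print(inner(1, x**2))   # 2/3
# <1, x> = 0: the integral of an odd function over a symmetric interval
print(inner(1, x))      # 0
```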

Since orthogonal bases have proven useful in determining the coefficients for expansions of given functions, we might ask if it is possible to obtain an orthogonal basis involving these powers of \(x\). Of course, finite combinations of these basis elements are just polynomials!

So, we ask: “Given a set of linearly independent basis vectors, can one find an orthogonal basis of the given space?” The answer is yes. We recall from introductory linear algebra, which mostly covers finite dimensional vector spaces, that there is a method for carrying this out, called the Gram-Schmidt Orthogonalization Process. We will recall this process for finite dimensional vectors and then generalize it to function spaces.

Figure 7.1. The basis \(a_1, a_2\), and \(a_3\) of \(R^3\) considered in the text.

    Let’s assume that we have three vectors that span \(R^3\), given by \(a_1, a_2,\) and \(a_3\) and shown in Figure 7.1. We seek an orthogonal basis \(e_1, e_2\), and \(e_3\), beginning one vector at a time.

    First we take one of the original basis vectors, say \(a_1\), and define

    \[e_1 = a_1. \nonumber \]

    Of course, we might want to normalize our new basis vectors, so we would denote such a normalized vector with a “hat”:

\[\hat{\mathbf{e}}_{1}=\dfrac{\mathbf{e}_{1}}{e_{1}}, \nonumber \]

where \(e_{1}=\sqrt{\mathbf{e}_{1} \cdot \mathbf{e}_{1}}\) is the length of \(\mathbf{e}_{1}\).

Next, we want to determine an \(\mathbf{e}_{2}\) that is orthogonal to \(\mathbf{e}_{1}\). We take another element of the original basis, \(\mathbf{a}_{2}\); Figure 7.2 shows the orientation of the vectors. The desired orthogonal vector is \(\mathbf{e}_{2}\), and \(\mathbf{a}_{2}\) can be written as the sum of \(\mathbf{e}_{2}\) and the projection of \(\mathbf{a}_{2}\) onto \(\mathbf{e}_{1}\). Denoting this projection by \(\mathbf{p r}_{1} \mathbf{a}_{2}\), we then have

    \[\mathbf{e}_{2}=\mathbf{a}_{2}-\mathbf{p r}_{1} \mathbf{a}_{2} . \label{7.1} \]

We recall the formula for the projection of one vector onto another from vector calculus:

    \[\mathbf{p r}_{1} \mathbf{a}_{2}=\dfrac{\mathbf{a}_{2} \cdot \mathbf{e}_{1}}{e_{1}^{2}} \mathbf{e}_{1} . \label{7.2} \]

Figure 7.2. A plot of the vectors \(e_1, a_2\), and \(e_2\) needed to find the projection of \(a_2\) on \(e_1\).

    Note that this is easily proven by writing the projection as a vector of length \(a_{2} \cos \theta\) in direction \(\hat{\mathbf{e}}_{1}\), where \(\theta\) is the angle between \(\mathbf{e}_{1}\) and \(\mathbf{a}_{2}\). Using the definition of the dot product, \(\mathbf{a} \cdot \mathbf{b}=a b \cos \theta\), the projection formula follows.

    Combining Equations (7.1)-(7.2), we find that

    \[\mathbf{e}_{2}=\mathbf{a}_{2}-\dfrac{\mathbf{a}_{2} \cdot \mathbf{e}_{1}}{e_{1}^{2}} \mathbf{e}_{1} . \label{7.3} \]

    It is a simple matter to verify that \(\mathbf{e}_{2}\) is orthogonal to \(\mathbf{e}_{1}\):

    \[\begin{aligned}
    \mathbf{e}_{2} \cdot \mathbf{e}_{1} &=\mathbf{a}_{2} \cdot \mathbf{e}_{1}-\dfrac{\mathbf{a}_{2} \cdot \mathbf{e}_{1}}{e_{1}^{2}} \mathbf{e}_{1} \cdot \mathbf{e}_{1} \\
    &=\mathbf{a}_{2} \cdot \mathbf{e}_{1}-\mathbf{a}_{2} \cdot \mathbf{e}_{1}=0
    \end{aligned} \label{7.4} \]
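Equations (7.2)-(7.4) are easy to illustrate numerically; in this NumPy sketch the specific vectors are chosen arbitrarily for illustration:

```python
import numpy as np

e1 = np.array([2.0, 0.0, 0.0])
a2 = np.array([1.0, 3.0, 0.0])

# Projection of a2 on e1, Eq. (7.2): (a2 . e1 / |e1|^2) e1
pr1_a2 = (a2 @ e1) / (e1 @ e1) * e1

# Eq. (7.3): subtracting the projection leaves a vector orthogonal to e1
e2 = a2 - pr1_a2
print(e2)        # [0. 3. 0.]
print(e2 @ e1)   # 0.0, confirming Eq. (7.4)
```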

    Now, we seek a third vector \(\mathbf{e}_{3}\) that is orthogonal to both \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\). Pictorially, we can write the given vector \(\mathbf{a}_{3}\) as a combination of vector projections along \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\) and the new vector. This is shown in Figure 7.3. Then we have,

    \[\mathbf{e}_{3}=\mathbf{a}_{3}-\dfrac{\mathbf{a}_{3} \cdot \mathbf{e}_{1}}{e_{1}^{2}} \mathbf{e}_{1}-\dfrac{\mathbf{a}_{3} \cdot \mathbf{e}_{2}}{e_{2}^{2}} \mathbf{e}_{2} . \label{7.5} \]

    Again, it is a simple matter to compute the scalar products with \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\) to verify orthogonality.

    We can easily generalize the procedure to the \(N\)-dimensional case.

Gram-Schmidt Orthogonalization in \(N\) Dimensions

    Let \(\mathbf{a}_{n}, n=1, \ldots, N\) be a set of linearly independent vectors in \(\mathbf{R}^{N}\). Then, an orthogonal basis can be found by setting \(\mathbf{e}_{1}=\mathbf{a}_{1}\) and for \(n>1\),

    \[\mathbf{e}_{n}=\mathbf{a}_{n}-\sum_{j=1}^{n-1} \dfrac{\mathbf{a}_{n} \cdot \mathbf{e}_{j}}{e_{j}^{2}} \mathbf{e}_{j} \label{7.6} \]
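The recursion in Equation (7.6) translates directly into code. The following NumPy sketch (the function name and test vectors are my own choices) orthogonalizes a set of vectors in \(\mathbf{R}^{N}\):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize linearly independent vectors via Eq. (7.6):
    e_n = a_n - sum_j (a_n . e_j / |e_j|^2) e_j."""
    basis = []
    for a in vectors:
        e = np.array(a, dtype=float)
        for ej in basis:
            # Subtract the projection of a onto each earlier e_j
            e -= (np.dot(a, ej) / np.dot(ej, ej)) * ej
        basis.append(e)
    return basis

# Three linearly independent vectors in R^3, chosen for illustration
a1, a2, a3 = [1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]
e1, e2, e3 = gram_schmidt([a1, a2, a3])

# All pairwise dot products should vanish (up to rounding)
print(np.dot(e1, e2), np.dot(e1, e3), np.dot(e2, e3))
```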

Figure 7.3. A plot of the vectors and their projections for determining \(e_3\).

    Now, we can generalize this idea to (real) function spaces.

    Gram-Schmidt Orthogonalization for Function Spaces

    Let \(f_{n}(x), n \in N_{0}=\{0,1,2, \ldots\}\), be a linearly independent sequence of continuous functions defined for \(x \in[a, b]\). Then, an orthogonal basis of functions, \(\phi_{n}(x), n \in N_{0}\) can be found and is given by

    \[\phi_{0}(x)=f_{0}(x) \nonumber \]

    and

\[\phi_{n}(x)=f_{n}(x)-\sum_{j=0}^{n-1} \dfrac{\left\langle f_{n}, \phi_{j}\right\rangle}{\left\|\phi_{j}\right\|^{2}} \phi_{j}(x), \quad n=1,2, \ldots \label{7.7} \]

    Here we are using inner products relative to weight \(\sigma(x)\),

\[\langle f, g\rangle=\int_{a}^{b} f(x) g(x) \sigma(x) d x . \label{7.8} \]

    Note the similarity between the orthogonal basis in (7.7) and the expression for the finite dimensional case in Equation (7.6).

    Example 7.1. Apply the Gram-Schmidt Orthogonalization process to the set \(f_n(x) = x^n, n \in N_0\), when \(x \in (-1, 1)\) and \(\sigma(x) = 1\).

    First, we have \(\phi_{0}(x)=f_{0}(x)=1\). Note that

\[\int_{-1}^{1} \phi_{0}^{2}(x) d x=2 . \nonumber \]

    We could use this result to fix the normalization of our new basis, but we will hold off on doing that for now.

    Now, we compute the second basis element:

\[\begin{aligned}
\phi_{1}(x) &=f_{1}(x)-\dfrac{\left\langle f_{1}, \phi_{0}\right\rangle}{\left\|\phi_{0}\right\|^{2}} \phi_{0}(x) \\
&=x-\dfrac{\langle x, 1\rangle}{\|1\|^{2}} 1=x,
\end{aligned} \label{7.9} \]

since \(\langle x, 1\rangle\) is the integral of an odd function over a symmetric interval.

    For \(\phi_{2}(x)\), we have

\[\begin{aligned}
\phi_{2}(x) &=f_{2}(x)-\dfrac{\left\langle f_{2}, \phi_{0}\right\rangle}{\left\|\phi_{0}\right\|^{2}} \phi_{0}(x)-\dfrac{\left\langle f_{2}, \phi_{1}\right\rangle}{\left\|\phi_{1}\right\|^{2}} \phi_{1}(x) \\
&=x^{2}-\dfrac{\left\langle x^{2}, 1\right\rangle}{\|1\|^{2}} 1-\dfrac{\left\langle x^{2}, x\right\rangle}{\|x\|^{2}} x \\
&=x^{2}-\dfrac{\int_{-1}^{1} x^{2} d x}{\int_{-1}^{1} d x} \\
&=x^{2}-\dfrac{1}{3},
\end{aligned} \label{7.10} \]

where the last projection term drops out because \(\left\langle x^{2}, x\right\rangle\) is again the integral of an odd function over a symmetric interval.
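The recursion (7.7) is easy to automate. The following SymPy sketch (the function name is my own) reproduces the computations of this example:

```python
import sympy as sp

x = sp.symbols('x')

def gram_schmidt_functions(fs, a, b, sigma=1):
    """Orthogonalize the functions fs on [a, b] via Eq. (7.7),
    using the weighted inner product of Eq. (7.8)."""
    def inner(f, g):
        return sp.integrate(f * g * sigma, (x, a, b))
    phis = []
    for f in fs:
        phi = sp.sympify(f)
        for pj in phis:
            # Subtract the projection of f onto each earlier phi_j
            phi -= inner(f, pj) / inner(pj, pj) * pj
        phis.append(sp.expand(phi))
    return phis

# Monomials on (-1, 1) with sigma(x) = 1, as in Example 7.1
print(gram_schmidt_functions([1, x, x**2, x**3], -1, 1))
# [1, x, x**2 - 1/3, x**3 - 3*x/5]
```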

So far, we have the orthogonal set \(\left\{1, x, x^{2}-\dfrac{1}{3}\right\}\). If one chooses to normalize these by forcing \(\phi_{n}(1)=1\), then one obtains the classical Legendre polynomials, \(P_{n}(x)=\phi_{n}(x) / \phi_{n}(1)\). Thus,

    \[P_{2}(x)=\dfrac{1}{2}\left(3 x^{2}-1\right) . \nonumber \]

Note that this normalization is different from requiring a unit norm. In fact, we see that \(P_{2}(x)\) does not have unit norm,

    \[\left\|P_{2}\right\|^{2}=\int_{-1}^{1} P_{2}^{2}(x) d x=\dfrac{2}{5} . \nonumber \]
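These last two results can be checked against SymPy's built-in Legendre polynomials (a verification sketch, not part of the text):

```python
import sympy as sp

x = sp.symbols('x')

phi2 = x**2 - sp.Rational(1, 3)

# Rescale so that P_2(1) = 1: divide by phi_2(1) = 2/3
P2 = sp.expand(phi2 / phi2.subs(x, 1))
print(P2)                                # 3*x**2/2 - 1/2
print(sp.legendre(2, x))                 # matches SymPy's P_2

# ||P_2||^2 = 2/5: forcing P_n(1) = 1 is not a unit-norm convention
print(sp.integrate(P2**2, (x, -1, 1)))   # 2/5
```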

The set of Legendre polynomials is just one set of classical orthogonal polynomials that can be obtained in this way. Many of them originally appeared as solutions of important boundary value problems in physics. They all have similar properties, and we will elaborate on some of these for the Legendre functions in the next section. Other orthogonal polynomials in this group are shown in Table 7.1.

    For reference, we also note the differential equations satisfied by these functions.


    This page titled 7.1: Classical Orthogonal Polynomials is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Russell Herman via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
