6.4: Finding Orthogonal Bases


    The last section demonstrated the value of working with orthogonal, and especially orthonormal, sets. If we have an orthogonal basis \(\mathbf w_1,\mathbf w_2,\ldots,\mathbf w_n\) for a subspace \(W\text{,}\) the Projection Formula 6.3.15 tells us that the orthogonal projection of a vector \(\mathbf b\) onto \(W\) is

    \begin{equation*} \bhat = \frac{\mathbf b\cdot\mathbf w_1}{\mathbf w_1\cdot\mathbf w_1}~\mathbf w_1 + \frac{\mathbf b\cdot\mathbf w_2}{\mathbf w_2\cdot\mathbf w_2}~\mathbf w_2 + \ldots + \frac{\mathbf b\cdot\mathbf w_n}{\mathbf w_n\cdot\mathbf w_n}~\mathbf w_n\text{.} \end{equation*}

    An orthonormal basis \(\mathbf u_1,\mathbf u_2,\ldots,\mathbf u_n\) is even more convenient: after forming the matrix \(Q=\begin{bmatrix} \mathbf u_1 & \mathbf u_2 & \ldots & \mathbf u_n \end{bmatrix}\text{,}\) we have \(\bhat = QQ^T\mathbf b\text{.}\)
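
    For instance, here is a minimal Sage sketch of both computations. The vectors below are a hypothetical orthonormal pair in \(\mathbb R^3\) chosen only for illustration; they are not drawn from a particular example.

        u1 = vector([2/3, -1/3, 2/3])   # an orthonormal pair: u1.u2 = 0, |u1| = |u2| = 1
        u2 = vector([-1/3, 2/3, 2/3])
        b  = vector([3, 4, -2])

        # Projection Formula 6.3.15; the denominators are 1 since the basis is orthonormal
        bhat = (b*u1)*u1 + (b*u2)*u2

        # the same projection using the matrix Q whose columns are u1 and u2
        Q = matrix([u1, u2]).T
        print(bhat == Q*Q.T*b)          # True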

    In the examples we've seen so far, however, orthogonal bases were given to us. What we need now is a way to form orthogonal bases. In this section, we'll explore an algorithm that begins with a basis for a subspace and creates an orthogonal basis. Once we have an orthogonal basis, we can scale each of the vectors appropriately to produce an orthonormal basis.

    Preview Activity 6.4.1.

    Suppose we have a basis for \(\mathbb R^2\) consisting of the vectors

    \begin{equation*} \mathbf v_1=\twovec11,\hspace{24pt} \mathbf v_2=\twovec02 \end{equation*}

    as shown in Figure 6.4.1. Notice that this basis is not orthogonal.

    Figure 6.4.1. A basis for \(\mathbb R^2\text{.}\)
    1. Find the vector \(\vhat_2\) that is the orthogonal projection of \(\mathbf v_2\) onto the line defined by \(\mathbf v_1\text{.}\)
    2. Explain why \(\mathbf v_2 - \vhat_2\) is orthogonal to \(\mathbf v_1\text{.}\)
    3. Define the new vectors \(\mathbf w_1=\mathbf v_1\) and \(\mathbf w_2=\mathbf v_2-\vhat_2\) and sketch them in Figure 6.4.2. Explain why \(\mathbf w_1\) and \(\mathbf w_2\) define an orthogonal basis for \(\mathbb R^2\text{.}\)
    Figure 6.4.2. Sketch the new basis \(\mathbf w_1\) and \(\mathbf w_2\text{.}\)
    4. Write the vector \(\mathbf b=\twovec8{-10}\) as a linear combination of \(\mathbf w_1\) and \(\mathbf w_2\text{.}\)
    5. Scale the vectors \(\mathbf w_1\) and \(\mathbf w_2\) to produce an orthonormal basis \(\mathbf u_1\) and \(\mathbf u_2\) for \(\mathbb R^2\text{.}\)
    Subsection 6.4.1 Gram-Schmidt orthogonalization

    The preview activity illustrates the main idea behind an algorithm, known as Gram-Schmidt orthogonalization, that begins with a basis for some subspace of \(\mathbb R^m\) and produces an orthogonal or orthonormal basis. The algorithm relies on our construction of the orthogonal projection. Remember that we formed the orthogonal projection \(\bhat\) of \(\mathbf b\) onto a subspace \(W\) by requiring that \(\mathbf b-\bhat\) is orthogonal to \(W\) as shown in Figure 6.4.3.

    Figure 6.4.3. If \(\bhat\) is the orthogonal projection of \(\mathbf b\) onto \(W\text{,}\) then \(\mathbf b-\bhat\) is orthogonal to \(W\text{.}\)

    This observation guides our construction of an orthogonal basis because it allows us to create a vector that is orthogonal to a given subspace. Let's see how the Gram-Schmidt algorithm works.

    Activity 6.4.2.

    Suppose that \(W\) is a three-dimensional subspace of \(\mathbb R^4\) with basis:

    \begin{equation*} \mathbf v_1 = \fourvec1111,\hspace{24pt} \mathbf v_2 = \fourvec1322,\hspace{24pt} \mathbf v_3 = \fourvec1{-3}{-3}{-3}\text{.} \end{equation*}

    We can see that this basis is not orthogonal by noting that \(\mathbf v_1\cdot\mathbf v_2 = 8\text{.}\) Our goal is to create an orthogonal basis \(\mathbf w_1\text{,}\) \(\mathbf w_2\text{,}\) and \(\mathbf w_3\) for \(W\text{.}\)

    To begin, we declare that \(\mathbf w_1=\mathbf v_1\text{,}\) and we call \(W_1\) the line defined by \(\mathbf w_1\text{.}\)

    1. Find the vector \(\vhat_2\) that is the orthogonal projection of \(\mathbf v_2\) onto \(W_1\text{,}\) the line defined by \(\mathbf w_1\text{.}\)
    2. Form the vector \(\mathbf w_2 = \mathbf v_2-\vhat_2\) and verify that it is orthogonal to \(\mathbf w_1\text{.}\)
    3. Explain why \(\laspan{\mathbf w_1,\mathbf w_2} = \laspan{\mathbf v_1,\mathbf v_2}\) by showing that any linear combination of \(\mathbf v_1\) and \(\mathbf v_2\) can be written as a linear combination of \(\mathbf w_1\) and \(\mathbf w_2\) and vice versa.

    4. The vectors \(\mathbf w_1\) and \(\mathbf w_2\) are an orthogonal basis for a two-dimensional subspace \(W_2\) of \(\mathbb R^4\text{.}\) Find the vector \(\vhat_3\) that is the orthogonal projection of \(\mathbf v_3\) onto \(W_2\text{.}\)
    5. Verify that \(\mathbf w_3 = \mathbf v_3-\vhat_3\) is orthogonal to both \(\mathbf w_1\) and \(\mathbf w_2\text{.}\)
    6. Explain why \(\mathbf w_1\text{,}\) \(\mathbf w_2\text{,}\) and \(\mathbf w_3\) form an orthogonal basis for \(W\text{.}\)
    7. Now find an orthonormal basis for \(W\text{.}\)

    As this activity illustrates, Gram-Schmidt orthogonalization begins with a basis \(\mathbf v_1,\mathbf v_2,\ldots,\mathbf v_n\) for a subspace \(W\) of \(\mathbb R^m\) and creates an orthogonal basis for \(W\text{.}\) Let's work through a second example.

    Example 6.4.4.

    Let's start with the basis

    \begin{equation*} \mathbf v_1=\threevec{2}{-1}2,\hspace{24pt} \mathbf v_2=\threevec{-3}{3}0,\hspace{24pt} \mathbf v_3=\threevec{-2}71\text{,} \end{equation*}

    which is a basis for \(\mathbb R^3\text{.}\)

    To get started, we'll simply set \(\mathbf w_1=\mathbf v_1=\threevec{2}{-1}2\text{.}\) We construct \(\mathbf w_2\) from \(\mathbf v_2\) by subtracting its orthogonal projection onto \(W_1\text{,}\) the line defined by \(\mathbf w_1\text{.}\) This gives

    \begin{equation*} \mathbf w_2 = \mathbf v_2 - \frac{\mathbf v_2\cdot\mathbf w_1}{\mathbf w_1\cdot\mathbf w_1}\mathbf w_1 = \mathbf v_2 + \mathbf w_1 = \threevec{-1}22\text{.} \end{equation*}

    Notice that we found \(\mathbf v_2 = -\mathbf w_1 + \mathbf w_2\text{.}\) Therefore, we can rewrite any linear combination of \(\mathbf v_1\) and \(\mathbf v_2\) as

    \begin{equation*} c_1\mathbf v_1 + c_2\mathbf v_2 = c_1\mathbf w_1 + c_2(-\mathbf w_1+\mathbf w_2) = (c_1-c_2)\mathbf w_1 + c_2\mathbf w_2\text{,} \end{equation*}

    a linear combination of \(\mathbf w_1\) and \(\mathbf w_2\text{.}\) This tells us that

    \begin{equation*} W_2 = \laspan{\mathbf w_1,\mathbf w_2} = \laspan{\mathbf v_1,\mathbf v_2}\text{.} \end{equation*}

    In other words, \(\mathbf w_1\) and \(\mathbf w_2\) form a basis for the same 2-dimensional subspace as \(\mathbf v_1\) and \(\mathbf v_2\text{.}\)

    Finally, we form \(\mathbf w_3\) from \(\mathbf v_3\) by subtracting its orthogonal projection onto \(W_2\text{:}\)

    \begin{equation*} \mathbf w_3 = \mathbf v_3 - \frac{\mathbf v_3\cdot\mathbf w_1}{\mathbf w_1\cdot\mathbf w_1}\mathbf w_1 - \frac{\mathbf v_3\cdot\mathbf w_2}{\mathbf w_2\cdot\mathbf w_2}\mathbf w_2 = \mathbf v_3 + \mathbf w_1 - 2\mathbf w_2 = \threevec22{-1}\text{.} \end{equation*}

    We can now check that

    \begin{equation*} \mathbf w_1=\threevec2{-1}2,\hspace{24pt} \mathbf w_2=\threevec{-1}22,\hspace{24pt} \mathbf w_3=\threevec22{-1} \end{equation*}

    is an orthogonal set. Furthermore, we find that, as before, \(\laspan{\mathbf w_1,\mathbf w_2,\mathbf w_3} = \laspan{\mathbf v_1,\mathbf v_2,\mathbf v_3}\) so that we have found a new orthogonal basis for \(\mathbb R^3\text{.}\)

    To create an orthonormal basis, we form unit vectors parallel to each of the vectors in the orthogonal basis:

    \begin{equation*} \mathbf u_1 = \threevec{2/3}{-1/3}{2/3},\hspace{24pt} \mathbf u_2 = \threevec{-1/3}{2/3}{2/3},\hspace{24pt} \mathbf u_3 = \threevec{2/3}{2/3}{-1/3}\text{.} \end{equation*}

    More generally, if we have a basis \(\mathbf v_1,\mathbf v_2,\ldots,\mathbf v_n\) for a subspace \(W\) of \(\mathbb R^m\text{,}\) the Gram-Schmidt algorithm creates an orthogonal basis for \(W\) in the following way:

    \begin{align*} \mathbf w_1 & = \mathbf v_1\\ \mathbf w_2 & = \mathbf v_2 - \frac{\mathbf v_2\cdot\mathbf w_1}{\mathbf w_1\cdot\mathbf w_1}\mathbf w_1\\ \mathbf w_3 & = \mathbf v_3 - \frac{\mathbf v_3\cdot\mathbf w_1}{\mathbf w_1\cdot\mathbf w_1}\mathbf w_1 - \frac{\mathbf v_3\cdot\mathbf w_2}{\mathbf w_2\cdot\mathbf w_2}\mathbf w_2\\ & \vdots\\ \mathbf w_n & = \mathbf v_n - \frac{\mathbf v_n\cdot\mathbf w_1}{\mathbf w_1\cdot\mathbf w_1}\mathbf w_1 - \frac{\mathbf v_n\cdot\mathbf w_2}{\mathbf w_2\cdot\mathbf w_2}\mathbf w_2 - \ldots - \frac{\mathbf v_n\cdot\mathbf w_{n-1}} {\mathbf w_{n-1}\cdot\mathbf w_{n-1}}\mathbf w_{n-1} \text{.} \end{align*}

    From here, we may form an orthonormal basis by constructing a unit vector parallel to each vector in the orthogonal basis: \(\mathbf u_j = 1/\len{\mathbf w_j}~\mathbf w_j\text{.}\)
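
    This recipe translates directly into a short Sage function. The sketch below is our own illustration (the name gram_schmidt is our choice; it is not one of the commands loaded in the activities):

        def gram_schmidt(vs):
            # vs: a list of linearly independent Sage vectors
            # returns an orthogonal basis for span(vs), following the formulas above
            ws = []
            for v in vs:
                w = v
                for u in ws:
                    w = w - ((v*u)/(u*u))*u   # subtract the projection of v onto u
                ws.append(w)
            return ws

        # the basis from Example 6.4.4
        v1 = vector([2, -1, 2]); v2 = vector([-3, 3, 0]); v3 = vector([-2, 7, 1])
        ws = gram_schmidt([v1, v2, v3])       # [(2, -1, 2), (-1, 2, 2), (2, 2, -1)]
        us = [w / w.norm() for w in ws]       # rescale to an orthonormal basis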

    Activity 6.4.3.

    Sage can automate these computations for us. Before we begin, however, it will be helpful to understand how we can combine things using a list in Python. For instance, if the vectors v1, v2, and v3 form a basis for a subspace, we can bundle them together using square brackets: [v1, v2, v3]. Furthermore, we could assign this to a variable, such as basis = [v1, v2, v3].

    Evaluating the following cell will load in some special commands.

    • There is a command to apply the projection formula: projection(b, basis) returns the orthogonal projection of b onto the subspace spanned by basis, which is a list of vectors.
    • The command unit(w) returns a unit vector parallel to w.
    • Given a collection of vectors, say, v1 and v2, we can form the matrix whose columns are v1 and v2 using matrix([v1, v2]).T. When given a list of vectors, Sage constructs a matrix whose rows are the given vectors. For this reason, we need to apply the transpose.
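
    The cell itself is not reproduced here, but the first two commands might be defined along the following lines. This is a sketch consistent with the descriptions above, not necessarily the code the cell actually loads; note that projection, as written here, assumes its list of vectors is orthogonal.

        def projection(b, basis):
            # orthogonal projection of b onto span(basis),
            # assuming basis is a list of mutually orthogonal Sage vectors
            bhat = 0 * basis[0]                 # zero vector of the right size
            for w in basis:
                bhat = bhat + ((b*w)/(w*w))*w   # Projection Formula 6.3.15
            return bhat

        def unit(w):
            # a unit vector parallel to w
            return w / w.norm()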

    Let's now consider \(W\text{,}\) the subspace of \(\mathbb R^5\) having basis

    \begin{equation*} \mathbf v_1 = \fivevec{14}{-6}{8}2{-6},\hspace{24pt} \mathbf v_2 = \fivevec{5}{-3}{4}3{-7},\hspace{24pt} \mathbf v_3 = \fivevec{2}30{-2}1. \end{equation*}
    1. Apply the Gram-Schmidt algorithm to find an orthogonal basis \(\mathbf w_1\text{,}\) \(\mathbf w_2\text{,}\) and \(\mathbf w_3\) for \(W\text{.}\)
    2. Find \(\bhat\text{,}\) the orthogonal projection of \(\mathbf b = \fivevec{-5}{11}0{-1}5\) onto \(W\text{.}\)

    3. Explain why we know that \(\bhat\) is a linear combination of the original vectors \(\mathbf v_1\text{,}\) \(\mathbf v_2\text{,}\) and \(\mathbf v_3\) and then find weights so that
      \begin{equation*} \bhat = c_1\mathbf v_1 + c_2\mathbf v_2 + c_3\mathbf v_3. \end{equation*}
    4. Find an orthonormal basis \(\mathbf u_1\text{,}\) \(\mathbf u_2\text{,}\) and \(\mathbf u_3\) for \(W\) and form the matrix \(Q\) whose columns are these vectors.
    5. Find the product \(Q^TQ\) and explain the result.
    6. Find the matrix \(P\) that projects vectors orthogonally onto \(W\) and verify that \(P\mathbf b\) gives \(\bhat\text{,}\) the orthogonal projection that you found earlier.

    Subsection 6.4.2 \(QR\) factorizations

    Now that we've seen how the Gram-Schmidt algorithm forms an orthonormal basis for a given subspace, we will explore how the algorithm leads to an important matrix factorization known as the \(QR\) factorization.

    Activity 6.4.4.

    Suppose that \(A\) is the \(4\times3\) matrix whose columns are

    \begin{equation*} \mathbf v_1 = \fourvec1111,\hspace{24pt} \mathbf v_2 = \fourvec1322,\hspace{24pt} \mathbf v_3 = \fourvec1{-3}{-3}{-3}\text{.} \end{equation*}

    These vectors form a basis for \(W\text{,}\) the subspace of \(\mathbb R^4\) that we encountered in Activity 6.4.2. Since these vectors are the columns of \(A\text{,}\) we have \(\col(A) = W\text{.}\)

    1. When we implemented Gram-Schmidt, we first found an orthogonal basis \(\mathbf w_1\text{,}\) \(\mathbf w_2\text{,}\) and \(\mathbf w_3\) using
      \begin{equation*} \begin{aligned} \mathbf w_1 & = \mathbf v_1 \\ \mathbf w_2 & = \mathbf v_2 - \frac{\mathbf v_2\cdot\mathbf w_1}{\mathbf w_1\cdot\mathbf w_1}\mathbf w_1 \\ \mathbf w_3 & = \mathbf v_3 - \frac{\mathbf v_3\cdot\mathbf w_1}{\mathbf w_1\cdot\mathbf w_1}\mathbf w_1 - \frac{\mathbf v_3\cdot\mathbf w_2}{\mathbf w_2\cdot\mathbf w_2}\mathbf w_2\text{.} \\ \end{aligned} \end{equation*}

      Use these expressions to write \(\mathbf v_1\text{,}\) \(\mathbf v_2\text{,}\) and \(\mathbf v_3\) as linear combinations of \(\mathbf w_1\text{,}\) \(\mathbf w_2\text{,}\) and \(\mathbf w_3\text{.}\)

    2. We next normalized the orthogonal basis \(\mathbf w_1\text{,}\) \(\mathbf w_2\text{,}\) and \(\mathbf w_3\) to obtain an orthonormal basis \(\mathbf u_1\text{,}\) \(\mathbf u_2\text{,}\) and \(\mathbf u_3\text{.}\)

      Write the vectors \(\mathbf w_i\) as scalar multiples of \(\mathbf u_i\text{.}\) Then use these expressions to write \(\mathbf v_1\text{,}\) \(\mathbf v_2\text{,}\) and \(\mathbf v_3\) as linear combinations of \(\mathbf u_1\text{,}\) \(\mathbf u_2\text{,}\) and \(\mathbf u_3\text{.}\)

    3. Suppose that \(Q = \left[ \begin{array}{ccc} \mathbf u_1 & \mathbf u_2 & \mathbf u_3 \end{array} \right]\text{.}\) Use the result of the previous part to find a vector \(\rvec_1\) so that \(Q\rvec_1 = \mathbf v_1\text{.}\)

    4. Then find vectors \(\rvec_2\) and \(\rvec_3\) such that \(Q\rvec_2 = \mathbf v_2\) and \(Q\rvec_3 = \mathbf v_3\text{.}\)

    5. Construct the matrix \(R = \left[ \begin{array}{ccc} \rvec_1 & \rvec_2 & \rvec_3 \end{array} \right]\text{.}\) Remembering that \(A = \left[ \begin{array}{ccc} \mathbf v_1 & \mathbf v_2 & \mathbf v_3 \end{array} \right]\text{,}\) explain why \(A = QR\text{.}\)

    6. What is special about the shape of \(R\text{?}\)
    7. Suppose that \(A\) is a \(10\times 6\) matrix whose columns are linearly independent. This means that the columns of \(A\) form a basis for \(W=\col(A)\text{,}\) a 6-dimensional subspace of \(\mathbb R^{10}\text{.}\) Suppose that we apply Gram-Schmidt orthogonalization to create an orthonormal basis whose vectors form the columns of \(Q\) and that we write \(A=QR\text{.}\) What are the dimensions of \(Q\) and what are the dimensions of \(R\text{?}\)

    When the columns of a matrix \(A\) are linearly independent, they form a basis for \(\col(A)\) so that we can perform the Gram-Schmidt algorithm. The previous activity shows how this leads to a factorization of \(A\) as the product of a matrix \(Q\) whose columns are an orthonormal basis for \(\col(A)\) and an upper triangular matrix \(R\text{.}\)

    Proposition 6.4.5. \(QR\) factorization. If \(A\) is an \(m\times n\) matrix whose columns are linearly independent, we may write \(A=QR\) where \(Q\) is an \(m\times n\) matrix whose columns form an orthonormal basis for \(\col(A)\) and \(R\) is an \(n\times n\) upper triangular matrix.
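
    In computational terms, the factorization can be assembled directly from the Gram-Schmidt output. The helper below is our own sketch, built on the gram_schmidt function sketched earlier; it is not the QR(A) command used in the next activity, though it produces the same kind of factorization. It uses the fact that \(Q^TQ=I\) forces \(R=Q^TA\text{.}\)

        def qr_via_gram_schmidt(A):
            # assumes the columns of A are linearly independent
            ws = gram_schmidt(A.columns())    # orthogonal basis for col(A)
            us = [w / w.norm() for w in ws]   # orthonormal basis
            Q = matrix(us).T                  # columns u1, ..., un
            R = Q.T * A                       # upper triangular since u_i . v_j = 0 for i > j
            return Q, R
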
    Example 6.4.6.

    We'll consider the matrix \(A=\begin{bmatrix} 2 & -3 & -2 \\ -1 & 3 & 7 \\ 2 & 0 & 1 \\ \end{bmatrix}\) whose columns, which we'll denote \(\mathbf v_1\text{,}\) \(\mathbf v_2\text{,}\) and \(\mathbf v_3\text{,}\) are the basis of \(\mathbb R^3\) that we considered in Example 6.4.4. There we found an orthogonal basis \(\mathbf w_1\text{,}\) \(\mathbf w_2\text{,}\) and \(\mathbf w_3\) that satisfied

    \begin{align*} \mathbf v_1 & {}={} \mathbf w_1\\ \mathbf v_2 & {}={} -\mathbf w_1 + \mathbf w_2\\ \mathbf v_3 & {}={} -\mathbf w_1 + 2\mathbf w_2 + \mathbf w _3\text{.} \end{align*}

    In terms of the resulting orthonormal basis \(\mathbf u_1\text{,}\) \(\mathbf u_2\text{,}\) and \(\mathbf u_3\text{,}\) we had

    \begin{equation*} \mathbf w_1 = 3 \mathbf u_1,\hspace{24pt} \mathbf w_2 = 3 \mathbf u_2,\hspace{24pt} \mathbf w_3 = 3 \mathbf u_3 \end{equation*}

    so that

    \begin{align*} \mathbf v_1 & {}={} 3\mathbf u_1\\ \mathbf v_2 & {}={} -3\mathbf u_1 + 3\mathbf u_2\\ \mathbf v_3 & {}={} -3\mathbf u_1 + 6\mathbf u_2 + 3\mathbf u _3\text{.} \end{align*}

    Therefore, if \(Q=\begin{bmatrix} \mathbf u_1 & \mathbf u_2 & \mathbf u_3 \end{bmatrix}\text{,}\) we have the \(QR\) factorization

    \begin{equation*} A = Q\begin{bmatrix} 3 & -3 & -3 \\ 0 & 3 & 6 \\ 0 & 0 & 3 \\ \end{bmatrix} =QR\text{.} \end{equation*}
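
    As a quick check, we can verify this factorization in Sage using the orthonormal vectors found in Example 6.4.4:

        u1 = vector([2/3, -1/3, 2/3])
        u2 = vector([-1/3, 2/3, 2/3])
        u3 = vector([2/3, 2/3, -1/3])
        Q = matrix([u1, u2, u3]).T
        R = matrix([[3, -3, -3], [0, 3, 6], [0, 0, 3]])
        A = matrix([[2, -3, -2], [-1, 3, 7], [2, 0, 1]])
        print(A == Q*R)                        # True
        print(Q.T*Q == identity_matrix(3))     # True: the columns of Q are orthonormal
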
    Activity 6.4.5.

    As before, we would like to use Sage to automate the process of finding and using the \(QR\) factorization of a matrix \(A\text{.}\) Evaluating the following cell provides a command QR(A) that returns the factorization, which may be stored using, for example, Q, R = QR(A).

    Suppose that \(A\) is the following matrix whose columns are linearly independent.

    \begin{equation*} A = \begin{bmatrix} 1 & 0 & -3 \\ 0 & 2 & -1 \\ 1 & 0 & 1 \\ 1 & 3 & 5 \\ \end{bmatrix}. \end{equation*}
    1. If \(A=QR\text{,}\) what are the dimensions of \(Q\) and \(R\text{?}\) What is special about the form of \(R\text{?}\)
    2. Find the \(QR\) factorization using Q, R = QR(A) and verify that \(R\) has the predicted shape and that \(A=QR\text{.}\)

    3. Find the matrix \(P\) that orthogonally projects vectors onto \(\col(A)\text{.}\)
    4. Find \(\bhat\text{,}\) the orthogonal projection of \(\mathbf b=\fourvec4{-17}{-14}{22}\) onto \(\col(A)\text{.}\)
    5. Explain why the equation \(A\mathbf x=\bhat\) must be consistent and then find \(\mathbf x\text{.}\)

    In fact, Sage provides its own version of the \(QR\) factorization that is a bit different from the way we've developed the factorization here. For this reason, we have provided our own version of the factorization.

    Subsection 6.4.3 Summary

    This section explored the Gram-Schmidt orthogonalization algorithm and how it leads to the matrix factorization \(A=QR\) when the columns of \(A\) are linearly independent.

    • Beginning with a basis \(\mathbf v_1, \mathbf v_2,\ldots,\mathbf v_n\) for a subspace \(W\) of \(\mathbb R^m\text{,}\) the vectors

      \begin{align*} \mathbf w_1 & = \mathbf v_1\\ \mathbf w_2 & = \mathbf v_2 - \frac{\mathbf v_2\cdot\mathbf w_1}{\mathbf w_1\cdot\mathbf w_1}\mathbf w_1\\ \mathbf w_3 & = \mathbf v_3 - \frac{\mathbf v_3\cdot\mathbf w_1}{\mathbf w_1\cdot\mathbf w_1}\mathbf w_1 - \frac{\mathbf v_3\cdot\mathbf w_2}{\mathbf w_2\cdot\mathbf w_2}\mathbf w_2\\ & \vdots\\ \mathbf w_n & = \mathbf v_n - \frac{\mathbf v_n\cdot\mathbf w_1}{\mathbf w_1\cdot\mathbf w_1}\mathbf w_1 - \frac{\mathbf v_n\cdot\mathbf w_2}{\mathbf w_2\cdot\mathbf w_2}\mathbf w_2 - \ldots - \frac{\mathbf v_n\cdot\mathbf w_{n-1}} {\mathbf w_{n-1}\cdot\mathbf w_{n-1}}\mathbf w_{n-1} \end{align*}

      form an orthogonal basis for \(W\text{.}\)

    • We may scale each vector \(\mathbf w_i\) appropriately to obtain an orthonormal basis \(\mathbf u_1,\mathbf u_2,\ldots,\mathbf u_n\text{.}\)
    • Expressing the Gram-Schmidt algorithm in matrix form shows that, if the columns of \(A\) are linearly independent, then we can write \(A=QR\text{,}\) where the columns of \(Q\) form an orthonormal basis for \(\col(A)\) and \(R\) is upper triangular.

    Exercises 6.4.4 Exercises

    1.

    Suppose that a subspace \(W\) of \(\mathbb R^3\) has a basis formed by

    \begin{equation*} \mathbf v_1=\threevec111, \hspace{24pt} \mathbf v_2=\threevec1{-2}{-2}. \end{equation*}
    1. Find an orthogonal basis for \(W\text{.}\)
    2. Find an orthonormal basis for \(W\text{.}\)
    3. Find the matrix \(P\) that projects vectors orthogonally onto \(W\text{.}\)
    4. Find the orthogonal projection of \(\threevec34{-2}\) onto \(W\text{.}\)
    2.

    Find the \(QR\) factorization of \(A=\begin{bmatrix} 4 & 7 \\ -2 & 4 \\ 4 & 4 \end{bmatrix} \text{.}\)

    3.

    Consider the basis of \(\mathbb R^3\) given by the vectors

    \begin{equation*} \mathbf v_1=\threevec2{-2}2,\hspace{24pt} \mathbf v_2=\threevec{-1}{-3}1,\hspace{24pt} \mathbf v_3=\threevec{2}0{-5}. \end{equation*}

    1. Apply the Gram-Schmidt orthogonalization algorithm to find an orthonormal basis \(\mathbf u_1\text{,}\) \(\mathbf u_2\text{,}\) \(\mathbf u_3\) for \(\mathbb R^3\text{.}\)
    2. If \(A\) is the \(3\times3\) matrix whose columns are \(\mathbf v_1\text{,}\) \(\mathbf v_2\text{,}\) and \(\mathbf v_3\text{,}\) find the \(QR\) factorization of \(A\text{.}\)
    3. Suppose that we want to solve the equation \(A\mathbf x=\mathbf b = \threevec{-9}17\text{,}\) which we can rewrite as \(QR\mathbf x = \mathbf b\text{.}\)
      1. If we set \(\yvec=R\mathbf x\text{,}\) explain why the equation \(Q\yvec=\mathbf b\) is computationally easy to solve.
      2. Explain why the equation \(R\mathbf x=\yvec\) is computationally easy to solve.
      3. Find the solution \(\mathbf x\text{.}\)
    4.

    Consider the vectors

    \begin{equation*} \mathbf v_1=\fivevec1{-1}{-1}11,\hspace{24pt} \mathbf v_2=\fivevec2{1}{4}{-4}2,\hspace{24pt} \mathbf v_3=\fivevec5{-4}{-3}71 \end{equation*}

    and the subspace \(W\) of \(\mathbb R^5\) that they span.

    1. Find an orthonormal basis for \(W\text{.}\)
    2. Find the \(5\times5\) matrix that projects vectors orthogonally onto \(W\text{.}\)
    3. Find \(\bhat\text{,}\) the orthogonal projection of \(\mathbf b=\fivevec{-8}3{-12}8{-4}\) onto \(W\text{.}\)
    4. Express \(\bhat\) as a linear combination of \(\mathbf v_1\text{,}\) \(\mathbf v_2\text{,}\) and \(\mathbf v_3\text{.}\)
    5.

    Consider the set of vectors

    \begin{equation*} \mathbf v_1=\threevec211,\hspace{24pt} \mathbf v_2=\threevec122,\hspace{24pt} \mathbf v_3=\threevec300. \end{equation*}
    1. What happens when we apply the Gram-Schmidt orthogonalization algorithm?
    2. Why does the algorithm fail to produce an orthogonal basis for \(\mathbb R^3\text{?}\)
    6.

    Suppose that \(A\) is a matrix with linearly independent columns and having the factorization \(A=QR\text{.}\) Determine whether the following statements are true or false and explain your thinking.

    1. It follows that \(R=Q^TA\text{.}\)
    2. The matrix \(R\) is invertible.
    3. The product \(Q^TQ\) projects vectors orthogonally onto \(\col(A)\text{.}\)
    4. The columns of \(Q\) are an orthogonal basis for \(\col(A)\text{.}\)
    5. The orthogonal complement \(\col(A)^\perp = \nul(Q^T)\text{.}\)

    7.

    Suppose we have the \(QR\) factorization \(A=QR\text{,}\) where \(A\) is a \(7\times 4\) matrix.

    1. What are the dimensions of the product \(QQ^T\text{?}\) Explain the significance of this product.
    2. What are the dimensions of the product \(Q^TQ\text{?}\) Explain the significance of this product.
    3. What are the dimensions of the matrix \(R\text{?}\)
    4. If \(R\) is a diagonal matrix, what can you say about the columns of \(A\text{?}\)
    8.

    Suppose we have the \(QR\) factorization \(A=QR\) where the columns of \(A\) are \(\avec_1,\avec_2,\ldots,\avec_n\) and the columns of \(R\) are \(\rvec_1,\rvec_2,\ldots,\rvec_n\text{.}\)

    1. How can the matrix product \(A^TA\) be expressed in terms of dot products?
    2. How can the matrix product \(R^TR\) be expressed in terms of dot products?
    3. Explain why \(A^TA=R^TR\text{.}\)
    4. Explain why the dot product \(\avec_i\cdot\avec_j = \rvec_i\cdot\rvec_j\text{.}\)


    This page titled 6.4: Finding Orthogonal Bases is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by David Austin via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
