
8.1E: Orthogonal Complements and Projections Exercises


    Exercises for 8.1

    In each case, use the Gram-Schmidt algorithm to convert the given basis \(B\) of \(V\) into an orthogonal basis.

    1. \(V = \mathbb{R}^2\), \(B = \{(1, -1), (2, 1)\}\)
    2. \(V = \mathbb{R}^2\), \(B = \{(2, 1), (1, 2)\}\)
    3. \(V = \mathbb{R}^3\), \(B = \{(1, -1, 1), (1, 0, 1), (1, 1, 2)\}\)
    4. \(V = \mathbb{R}^3\), \(B = \{(0, 1, 1), (1, 1, 1), (1, -2, 2)\}\)
    Answers to cases 2 and 4:

    2. \(\{(2,1),\frac{3}{5}(-1,2)\}\)
    4. \(\{(0,1,1),(1,0,0),(0,-2,2)\}\)
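
    As a check on the answer to case 2 (a sketch of one way the computation can run, using the standard dot product): the Gram-Schmidt algorithm keeps \(\mathbf{f}_{1} = (2, 1)\) and subtracts from \((1, 2)\) its projection on \(\mathbf{f}_{1}\):

    \[\mathbf{f}_{2} = (1, 2) - \frac{(1, 2)\bullet (2, 1)}{\|(2, 1)\|^2}\,(2, 1) = (1, 2) - \frac{4}{5}(2, 1) = \frac{3}{5}(-1, 2) \nonumber \]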

    In each case, write \(\mathbf{x}\) as the sum of a vector in \(U\) and a vector in \(U^\perp\).

    1. \(\mathbf{x} = (1, 5, 7)\), \(U = span \;\{(1, -2, 3), (-1, 1, 1)\}\)
    2. \(\mathbf{x} = (2, 1, 6)\), \(U = span \;\{(3, -1, 2), (2, 0, -3)\}\)
    3. \(\mathbf{x} = (3, 1, 5, 9)\),
      \(U = span \;\{(1, 0, 1, 1), (0, 1, -1, 1), (-2, 0, 1, 1)\}\)

    4. \(\mathbf{x} = (2, 0, 1, 6)\),
      \(U = span \;\{(1, 1, 1, 1), (1, 1, -1, -1), (1, -1, 1, -1)\}\)

    5. \(\mathbf{x} = (a, b, c, d)\),
      \(U = span \;\{(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)\}\)

    6. \(\mathbf{x} = (a, b, c, d)\),
      \(U = span \;\{(1, -1, 2, 0), (-1, 1, 1, 1)\}\)

    Answers to cases 2, 4, and 6:

    2. \(\mathbf{x} = \frac{1}{182}(271,-221,1030) + \frac{1}{182}(93,403,62)\)
    4. \(\mathbf{x} = \frac{1}{4}(1, 7, 11, 17) + \frac{1}{4}(7, -7, -7, 7)\)
    6. \(\mathbf{x} = \frac{1}{12}(5a - 5b + c - 3d, -5a + 5b - c + 3d, a - b + 11c + 3d, -3a + 3b + 3c + 3d) + \frac{1}{12}(7a + 5b - c + 3d, 5a + 7b + c - 3d, -a + b + c - 3d, 3a - 3b - 3c + 9d)\)
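
    For instance, in case 4 the three spanning vectors \(\mathbf{v}_{1}, \mathbf{v}_{2}, \mathbf{v}_{3}\) are already mutually orthogonal, each of squared length \(4\), so the projection can be written down directly (one possible route; each coefficient is \(\mathbf{x}\bullet \mathbf{v}_{i} / \|\mathbf{v}_{i}\|^2\)):

    \[\proj{U}{\mathbf{x}} = \frac{9}{4}(1, 1, 1, 1) - \frac{5}{4}(1, 1, -1, -1) - \frac{3}{4}(1, -1, 1, -1) = \frac{1}{4}(1, 7, 11, 17) \nonumber \]

    and then \(\mathbf{x} - \proj{U}{\mathbf{x}} = (2, 0, 1, 6) - \frac{1}{4}(1, 7, 11, 17) = \frac{1}{4}(7, -7, -7, 7)\) lies in \(U^\perp\).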

    Let \(\mathbf{x} = (1, -2, 1, 6)\) in \(\mathbb{R}^4\), and let \(U = span \;\{(2, 1, 3, -4), (1, 2, 0, 1)\}\).

    1. Compute \(\proj{U}{\mathbf{x}}\).
    2. Show that \(\{(1, 0, 2, -3), (4, 7, 1, 2)\}\) is another orthogonal basis of \(U\).
    3. Use the basis in part (b) to compute \(\proj{U}{\mathbf{x}}\).
    Answers to parts 1 and 3:

    1. \(\frac{1}{10}(-9,3,-21,33) = \frac{3}{10}(-3,1,-7,11)\)
    3. \(\frac{1}{70}(-63,21,-147,231) = \frac{3}{10}(-3,1,-7,11)\)
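
    To illustrate part 1 (a sketch of the computation): the two given spanning vectors are already orthogonal, with \(\mathbf{x}\bullet (2, 1, 3, -4) = -21\), \(\|(2, 1, 3, -4)\|^2 = 30\), \(\mathbf{x}\bullet (1, 2, 0, 1) = 3\), and \(\|(1, 2, 0, 1)\|^2 = 6\), so

    \[\proj{U}{\mathbf{x}} = -\frac{21}{30}(2, 1, 3, -4) + \frac{3}{6}(1, 2, 0, 1) = \frac{1}{10}(-9, 3, -21, 33) \nonumber \]

    Part 3 repeats the computation with the basis from part 2 and, as the answers show, produces the same vector: \(\proj{U}{\mathbf{x}}\) does not depend on which orthogonal basis of \(U\) is used.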

    In each case, use the Gram-Schmidt algorithm to find an orthogonal basis of the subspace \(U\), and find the vector in \(U\) closest to \(\mathbf{x}\).

    1. \(U = span \;\{(1, 1, 1), (0, 1, 1)\}\), \(\mathbf{x} = (-1, 2, 1)\)
    2. \(U = span \;\{(1, -1, 0), (-1, 0, 1)\}\), \(\mathbf{x} = (2, 1, 0)\)
    3. \(U = span \;\{(1, 0, 1, 0), (1, 1, 1, 0), (1, 1, 0, 0)\}\), \(\mathbf{x} = (2, 0, -1, 3)\)
    4. \(U = span \;\{(1, -1, 0, 1), (1, 1, 0, 0), (1, 1, 0, 1)\}\), \(\mathbf{x} = (2, 0, 3, 1)\)
    Answers to cases 2 and 4:

    2. \(\{(1, -1, 0), \frac{1}{2}(-1, -1, 2)\}\); \(\proj{U}{\mathbf{x}} = (1, 0, -1)\)
    4. \(\{(1, -1, 0, 1), (1, 1, 0, 0), \frac{1}{3}(-1, 1, 0, 2)\}\); \(\proj{U}{\mathbf{x}} = (2, 0, 0, 1)\)
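
    As a sample computation for case 2: with the orthogonal basis \(\{\mathbf{f}_{1}, \mathbf{f}_{2}\}\) found above, \(\mathbf{x}\bullet \mathbf{f}_{1} / \|\mathbf{f}_{1}\|^2 = \frac{1}{2}\) and \(\mathbf{x}\bullet \mathbf{f}_{2} / \|\mathbf{f}_{2}\|^2 = \frac{-3/2}{3/2} = -1\), so the vector in \(U\) closest to \(\mathbf{x}\) is

    \[\proj{U}{\mathbf{x}} = \frac{1}{2}(1, -1, 0) - \frac{1}{2}(-1, -1, 2) = (1, 0, -1) \nonumber \]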

    Let \(U = span \;\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{k}\}\), \(\mathbf{v}_{i}\) in \(\mathbb{R}^n\), and let \(A\) be the \(k \times n\) matrix with the \(\mathbf{v}_{i}\) as rows.

    1. Show that \(U^\perp = \{\mathbf{x} \mid \mathbf{x} \mbox{ in } \mathbb{R}^n, A\mathbf{x}^{T} = \mathbf{0}\}\).
    2. Use part (a) to find \(U^\perp\) if
      \(U = span \;\{(1, -1, 2, 1), (1, 0, -1, 1)\}\).

    Answer to part 2:

    2. \(U^\perp = span \;\{(1, 3, 1, 0), (-1, 0, 0, 1)\}\)
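
    For part 2, one way the system from part 1 can be solved: the conditions \(A\mathbf{x}^{T} = \mathbf{0}\) read

    \[x_{1} - x_{2} + 2x_{3} + x_{4} = 0, \qquad x_{1} - x_{3} + x_{4} = 0 \nonumber \]

    Subtracting the second equation from the first gives \(x_{2} = 3x_{3}\), and then \(x_{1} = x_{3} - x_{4}\). Taking \(x_{3} = s\) and \(x_{4} = t\) yields \(\mathbf{x} = s(1, 3, 1, 0) + t(-1, 0, 0, 1)\), which is the stated answer.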

    [ex:8_1_6]

    1. Prove part 1 of Lemma [lem:023783].
    2. Prove part 2 of Lemma [lem:023783].

    [ex:8_1_7] Let \(U\) be a subspace of \(\mathbb{R}^n\). If \(\mathbf{x}\) in \(\mathbb{R}^n\) can be written in any way at all as \(\mathbf{x} = \mathbf{p} + \mathbf{q}\) with \(\mathbf{p}\) in \(U\) and \(\mathbf{q}\) in \(U^\perp\), show that necessarily \(\mathbf{p} = \proj{U}{\mathbf{x}}\).

    Let \(U\) be a subspace of \(\mathbb{R}^n\) and let \(\mathbf{x}\) be a vector in \(\mathbb{R}^n\). Using Exercise [ex:8_1_7], or otherwise, show that \(\mathbf{x}\) is in \(U\) if and only if \(\mathbf{x} = \proj{U}{\mathbf{x}}\).

    Write \(\mathbf{p} = \proj{U}{\mathbf{x}}\). Then \(\mathbf{p}\) is in \(U\) by definition. If \(\mathbf{x}\) is in \(U\), then \(\mathbf{x} - \mathbf{p}\) is in \(U\). But \(\mathbf{x} - \mathbf{p}\) is also in \(U^\perp\) by Theorem [thm:023885], so \(\mathbf{x} - \mathbf{p}\) is in \(U \cap U^\perp = \{\mathbf{0}\}\). Thus \(\mathbf{x} = \mathbf{p}\). Conversely, if \(\mathbf{x} = \mathbf{p}\), then \(\mathbf{x}\) is in \(U\) because \(\mathbf{p}\) is.

    Let \(U\) be a subspace of \(\mathbb{R}^n\).

    1. Show that \(U^\perp = \mathbb{R}^n\) if and only if \(U = \{\mathbf{0}\}\).
    2. Show that \(U^\perp = \{\mathbf{0}\}\) if and only if \(U = \mathbb{R}^n\).

    If \(U\) is a subspace of \(\mathbb{R}^n\), show that \(\proj{U}{\mathbf{x}} = \mathbf{x}\) for all \(\mathbf{x}\) in \(U\).

    Let \(\{\mathbf{f}_{1}, \mathbf{f}_{2}, \dots , \mathbf{f}_{m}\}\) be an orthonormal basis of \(U\). If \(\mathbf{x}\) is in \(U\) the expansion theorem gives \(\mathbf{x} = (\mathbf{x}\bullet \mathbf{f}_{1})\mathbf{f}_{1} + (\mathbf{x}\bullet \mathbf{f}_{2})\mathbf{f}_{2} + \dots + (\mathbf{x}\bullet \mathbf{f}_{m})\mathbf{f}_{m} = \proj{U}{\mathbf{x}}\).

    If \(U\) is a subspace of \(\mathbb{R}^n\), show that \(\mathbf{x} = \proj{U}{\mathbf{x}} + \proj{U^\perp}{\mathbf{x}}\) for all \(\mathbf{x}\) in \(\mathbb{R}^n\).

    If \(\{\mathbf{f}_{1}, \dots, \mathbf{f}_{n}\}\) is an orthogonal basis of \(\mathbb{R}^n\) and \(U = span \;\{\mathbf{f}_{1}, \dots, \mathbf{f}_{m}\}\), show that
    \(U^\perp = span \;\{\mathbf{f}_{m + 1}, \dots, \mathbf{f}_{n}\}\).

    [ex:8_1_13] If \(U\) is a subspace of \(\mathbb{R}^n\), show that \(U^{\perp \perp} = U\). [Hint: Show that \(U \subseteq U^{\perp \perp}\), then use Theorem [thm:023953] (3) twice.]

    If \(U\) is a subspace of \(\mathbb{R}^n\), show how to find an \(n \times n\) matrix \(A\) such that \(U = \{\mathbf{x} \mid A\mathbf{x} = \mathbf{0}\}\). [Hint: Exercise [ex:8_1_13].]

    Let \(\{\mathbf{y}_{1}, \mathbf{y}_{2}, \dots, \mathbf{y}_{m}\}\) be a basis of \(U^\perp\), and let \(A\) be the \(n \times n\) matrix with rows \(\mathbf{y}^T_1, \mathbf{y}^T_2, \dots, \mathbf{y}^T_m, 0, \dots, 0\). Then \(A\mathbf{x} = \mathbf{0}\) if and only if \(\mathbf{y}_{i}\bullet \mathbf{x} = 0\) for each \(i = 1, 2, \dots, m\); if and only if \(\mathbf{x}\) is in \(U^{\perp \perp} = U\).
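
    A small concrete instance of this construction (an illustrative example, not from the text): if \(U = span \;\{(1, 0, 0)\}\) in \(\mathbb{R}^3\), then \(\{(0, 1, 0), (0, 0, 1)\}\) is a basis of \(U^\perp\), giving

    \[A = \left[ \begin{array}{ccc} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array} \right] \nonumber \]

    and indeed \(A\mathbf{x} = \mathbf{0}\) holds exactly when \(x_{2} = x_{3} = 0\), that is, when \(\mathbf{x}\) is in \(U\).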

    Write \(\mathbb{R}^n\) as rows. If \(A\) is an \(n \times n\) matrix, write its null space as \(\func{null }A = \{\mathbf{x} \mbox{ in } \mathbb{R}^n \mid A\mathbf{x}^{T} = \mathbf{0}\}\). Show that:

    \(\func{null }A = (\func{row }A)^\perp\); \(\func{null }A^{T} = (\func{col }A)^\perp\).

    If \(U\) and \(W\) are subspaces, show that \((U + W)^\perp = U^\perp \cap W^\perp\). [See Exercise [ex:5_1_22].]

    [ex:8_1_17] Think of \(\mathbb{R}^n\) as consisting of rows.

    1. Let \(E\) be an \(n \times n\) matrix, and let
      \(U = \{\mathbf{x} E \mid \mathbf{x} \mbox{ in } \mathbb{R}^n\}\). Show that the following are equivalent.

      1. \(E^{2} = E = E^{T}\) (\(E\) is a projection matrix).
      2. \((\mathbf{x} - \mathbf{x}E)\bullet (\mathbf{y}E) = 0\) for all \(\mathbf{x}\) and \(\mathbf{y}\) in \(\mathbb{R}^n\).
      3. \(\proj{U}{\mathbf{x}} = \mathbf{x}E\) for all \(\mathbf{x}\) in \(\mathbb{R}^n\).

      [Hint: For (ii) implies (iii): Write \(\mathbf{x} = \mathbf{x}E + (\mathbf{x} - \mathbf{x}E)\) and use the uniqueness argument preceding the definition of \(\proj{U}{\mathbf{x}}\). For (iii) implies (ii): \(\mathbf{x} - \mathbf{x}E\) is in \(U^\perp\) for all \(\mathbf{x}\) in \(\mathbb{R}^n\).]
    2. If \(E\) is a projection matrix, show that \(I - E\) is also a projection matrix.
    3. If \(EF = 0 = FE\) and \(E\) and \(F\) are projection matrices, show that \(E + F\) is also a projection matrix.
    4. If \(A\) is \(m \times n\) and \(AA^{T}\) is invertible, show that \(E = A^{T}(AA^{T})^{-1}A\) is a projection matrix.
    Answer to part 4:

    4. \(E^2 = A^T(AA^T)^{-1}AA^T(AA^T)^{-1}A = A^T(AA^T)^{-1}A = E\)
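
    The displayed answer checks idempotence; the symmetry condition \(E^{T} = E\) follows similarly, using the fact that \(AA^{T}\) (and hence its inverse) is symmetric:

    \[E^{T} = \left(A^{T}(AA^{T})^{-1}A\right)^{T} = A^{T}\left((AA^{T})^{-1}\right)^{T}A = A^{T}(AA^{T})^{-1}A = E \nonumber \]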

    Let \(A\) be an \(n \times n\) matrix of rank \(r\). Show that there is an invertible \(n \times n\) matrix \(U\) such that \(UA\) is a row-echelon matrix with the property that the first \(r\) rows are orthogonal. [Hint: Let \(R\) be the row-echelon form of \(A\), and use the Gram-Schmidt process on the nonzero rows of \(R\) from the bottom up. Use Lemma [cor:004537].]

    Let \(A\) be an \((n - 1) \times n\) matrix with rows \(\mathbf{x}_{1}, \mathbf{x}_{2}, \dots, \mathbf{x}_{n-1}\) and let \(A_{i}\) denote the
    \((n - 1) \times (n - 1)\) matrix obtained from \(A\) by deleting column \(i\). Define the vector \(\mathbf{y}\) in \(\mathbb{R}^n\) by

    \[\mathbf{y} = \left[ \def\arraycolsep{1.5pt} \begin{array}{ccccc} \det A_{1} & -\det A_{2} & \det A_{3} & \cdots & (-1)^{n+1} \det A_{n} \end{array}\right] \nonumber \]

    Show that:

    1. \(\mathbf{x}_{i}\bullet \mathbf{y} = 0\) for all \(i = 1, 2, \dots , n - 1\). [Hint: Write \(B_{i} = \left[ \begin{array}{c} \mathbf{x}_{i} \\ A \end{array} \right]\) and show that \(\det B_{i} = 0\).]
    2. \(\mathbf{y} \neq \mathbf{0}\) if and only if \(\{\mathbf{x}_{1}, \mathbf{x}_{2}, \dots , \mathbf{x}_{n-1}\}\) is linearly independent. [Hint: If some \(\det A_{i} \neq 0\), the rows of \(A_{i}\) are linearly independent. Conversely, if the \(\mathbf{x}_{i}\) are independent, consider \(A = UR\) where \(R\) is in reduced row-echelon form.]
    3. If \(\{\mathbf{x}_{1}, \mathbf{x}_{2}, \dots , \mathbf{x}_{n-1}\}\) is linearly independent, use Theorem [thm:023885](3) to show that all solutions to the system of \(n - 1\) homogeneous equations

      \[A\mathbf{x}^T = \mathbf{0} \nonumber \]

      are given by \(t\mathbf{y}\), \(t\) a parameter.
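
    A familiar special case for orientation (not part of the exercise): when \(n = 3\), the vector \(\mathbf{y}\) is exactly the cross product of the two rows of \(A\),

    \[\mathbf{y} = \left[ \begin{array}{ccc} \det A_{1} & -\det A_{2} & \det A_{3} \end{array} \right] = \mathbf{x}_{1} \times \mathbf{x}_{2} \nonumber \]

    so part 1 generalizes the fact that \(\mathbf{x}_{1} \times \mathbf{x}_{2}\) is orthogonal to both \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\).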

