
Orthogonal Complements



    Definition of the Orthogonal Complement

    Geometrically, two lines can be perpendicular in \( \mathbb{R}^2 \), and a line and a plane can be perpendicular to each other in \( \mathbb{R}^3 \).  We now generalize this idea and ask: given a subspace of a vector space, what is the set of vectors that are orthogonal to every vector in the subspace?

    Definition

    Let V be an inner product space (for example, \( \mathbb{R}^n \) with the dot product) and let W be a subspace of V.  Then the orthogonal complement of W in V, written \( W^\perp \), is the set of vectors u in V such that u is orthogonal to every vector in W.

     

    Example

    Let \( V = \mathbb{R}^2 \) and let W be the subspace spanned by (1,2).  Then \( W^\perp \) is the set of vectors (a,b) with

            \( (a,b) \cdot c(1,2) = 0 \)

    for every scalar c, or

            \( ac + 2bc = 0, \qquad \text{that is,} \qquad a + 2b = 0 \)

    This is a 1-dimensional vector space spanned by

            \( (-2,1) \)
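As a quick numerical check of this example (a sketch using NumPy; the vectors are the ones from the example above):

```python
import numpy as np

# W is spanned by (1, 2); the example claims W-perp is spanned by (-2, 1).
w = np.array([1, 2])
u = np.array([-2, 1])

# u must be orthogonal to every multiple c*(1,2), which reduces to u . w = 0.
assert np.dot(u, w) == 0

# Any (a, b) with a + 2b = 0 is a multiple of (-2, 1): e.g. b = 3 gives a = -6.
v = np.array([-6, 3])
assert np.dot(v, w) == 0
assert np.allclose(v, 3 * u)
```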


    In the example above the orthogonal complement was a subspace.  This will always be the case.

     

    Theorem

    Let W be a subspace of a vector space V.  Then the orthogonal complement of W is also a subspace of V.  Furthermore, the intersection of W and its orthogonal complement is just the zero vector.

     

    Proof

    Let \( \mathbf{u}_1 \) and \( \mathbf{u}_2 \) be vectors in the orthogonal complement of W and let c be a constant.  Then

    1.  If \( \mathbf{w} \) is in W, then

            \( (\mathbf{u}_1 + \mathbf{u}_2) \cdot \mathbf{w} = \mathbf{u}_1 \cdot \mathbf{w} + \mathbf{u}_2 \cdot \mathbf{w} = 0 + 0 = 0 \)

    2.  If \( \mathbf{w} \) is in W, then

            \( (c\mathbf{u}_1) \cdot \mathbf{w} = c(\mathbf{u}_1 \cdot \mathbf{w}) = c(0) = 0 \)

    so the orthogonal complement is closed under addition and scalar multiplication, and is therefore a subspace.

    Now we prove that the intersection is zero.  If \( \mathbf{v} \) is in the intersection, then \( \mathbf{v} \) is in W and also in the orthogonal complement of W, so \( \mathbf{v} \) is orthogonal to itself.  Hence

            \( \mathbf{v} \cdot \mathbf{v} = 0 \)

    This implies that

            \( \mathbf{v} = \mathbf{0} \)


    The next theorem states that if \( \mathbf{w}_1, \ldots, \mathbf{w}_r \) is a basis for W and \( \mathbf{u}_1, \ldots, \mathbf{u}_k \) is a basis for \( W^\perp \), then

            \( \{\mathbf{w}_1, \ldots, \mathbf{w}_r, \mathbf{u}_1, \ldots, \mathbf{u}_k\} \)

    is a basis for \( \mathbb{R}^n \).  In symbols, we write

     

    Theorem

    \( \mathbb{R}^n = W \oplus W^\perp \)

     

    We leave it up to you to look up the proof of this statement.  What this means is that every vector \( \mathbf{v} \) in \( \mathbb{R}^n \) can be written uniquely in the form

            \( \mathbf{v} = \mathbf{w} + \mathbf{u} \)

    with \( \mathbf{w} \) in W and \( \mathbf{u} \) in \( W^\perp \).
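The decomposition can be illustrated numerically.  The following sketch uses a hypothetical one-dimensional subspace W of \( \mathbb{R}^3 \) spanned by a vector of my own choosing, and splits a vector v into its W-component and its \( W^\perp \)-component:

```python
import numpy as np

# Hypothetical example: W is spanned by w1 (an assumption, not from the text).
w1 = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, 0.0, 3.0])

# Component of v in W (projection onto w1), and the leftover in W-perp.
w = (v @ w1) / (w1 @ w1) * w1
u = v - w

# u is orthogonal to W, and the two pieces recombine to give v.
assert abs(u @ w1) < 1e-12
assert np.allclose(w + u, v)
```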


    A corollary of this theorem is the following

     

    Corollary

           \( (W^\perp )^\perp = W \)  

    Proof

    First, if a vector is in W, then it is orthogonal to every vector in the orthogonal complement of W; hence \( W \subseteq (W^\perp)^\perp \).  Conversely, suppose a vector \( \mathbf{v} \) is orthogonal to every vector in the orthogonal complement of W.  By the theorem above we can write

            \( \mathbf{v} = \mathbf{w} + \mathbf{u} \)

    with \( \mathbf{w} \) in W and \( \mathbf{u} \) in the orthogonal complement of W.  Since \( \mathbf{u} \) is in the orthogonal complement of W, we have

            \( 0 = \mathbf{v} \cdot \mathbf{u} = (\mathbf{w} + \mathbf{u}) \cdot \mathbf{u} = \mathbf{w} \cdot \mathbf{u} + \mathbf{u} \cdot \mathbf{u} = \mathbf{u} \cdot \mathbf{u} \)

    Hence \( \mathbf{u} = \mathbf{0} \) and \( \mathbf{v} = \mathbf{w} \), so \( \mathbf{v} \) is in W.


    Matrices and Complements

    If we think of matrix multiplication as a collection of dot products, then if

            \( A\mathbf{x} = \mathbf{0} \)

    then \( \mathbf{x} \) is orthogonal to each of the rows of A.  Similarly, if

            \( A^T\mathbf{y} = \mathbf{0} \)

    then \( \mathbf{y} \) is orthogonal to each of the columns of A.  More precisely, we have

     

    Theorem

    1.  The null space of A is the orthogonal complement of the row space of A.

    2.  The null space of \( A^T \) is the orthogonal complement of the column space of A.

     

    Example

    Find a basis for the orthogonal complement of the space spanned by (1,0,1,0,2), (0,1,1,1,0) and (1,1,1,1,1).

     

    Solution

    We find the null space of the matrix

     \( A = \begin{pmatrix} 1 & 0 & 1 & 0 & 2 \\ 0 & 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 & 1 \end{pmatrix} \)

    We find the rref of A.

     \( rref(A) = \begin{pmatrix} 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 & -1 \\ 0 & 0 & 1 & 0 & 1 \end{pmatrix} \)        

    Reading off the solutions in terms of the free variables \( x_4 \) and \( x_5 \), we get the basis

            {(0,-1,0,1,0), (-1,1,-1,0,1)}
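As a numerical check of this solution (a sketch using NumPy, with the matrix and basis vectors from the example):

```python
import numpy as np

A = np.array([[1, 0, 1, 0, 2],
              [0, 1, 1, 1, 0],
              [1, 1, 1, 1, 1]])

# The two basis vectors found for the null space (= orthogonal complement
# of the row space, by the theorem above).
n1 = np.array([0, -1, 0, 1, 0])
n2 = np.array([-1, 1, -1, 0, 1])

# Each lies in the null space of A ...
assert np.all(A @ n1 == 0)
assert np.all(A @ n2 == 0)

# ... and hence is orthogonal to every row of A, in particular to the
# spanning vectors (1,0,1,0,2), (0,1,1,1,0), and (1,1,1,1,1).
for row in A:
    assert row @ n1 == 0 and row @ n2 == 0
```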

     


    Projections

            Given a vector \( \mathbf{v} \) and a subspace W with orthogonal basis \( \mathbf{w}_1, \ldots, \mathbf{w}_n \), we are often interested in finding the vector in W that is closest to \( \mathbf{v} \).  This closest vector is

            \( \text{proj}_W \mathbf{v} = \frac{\mathbf{v} \cdot \mathbf{w}_1}{\mathbf{w}_1 \cdot \mathbf{w}_1}\,\mathbf{w}_1 + \frac{\mathbf{v} \cdot \mathbf{w}_2}{\mathbf{w}_2 \cdot \mathbf{w}_2}\,\mathbf{w}_2 + \cdots + \frac{\mathbf{v} \cdot \mathbf{w}_n}{\mathbf{w}_n \cdot \mathbf{w}_n}\,\mathbf{w}_n \)

    We will use this formula when we talk about inner product spaces and Fourier coefficients.  Notice that if the basis is orthonormal then the denominators are all equal to one.
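The projection formula can be sketched numerically.  The following example uses a hypothetical orthogonal basis of a plane W in \( \mathbb{R}^3 \) (the basis and the vector v are my own choices, not from the text):

```python
import numpy as np

# Hypothetical orthogonal basis of a plane W in R^3: note w1 . w2 = 0.
w1 = np.array([1.0, 1.0, 0.0])
w2 = np.array([1.0, -1.0, 0.0])
v = np.array([3.0, 1.0, 5.0])

assert w1 @ w2 == 0  # the formula requires an orthogonal basis

# proj_W v = (v.w1 / w1.w1) w1 + (v.w2 / w2.w2) w2
p = (v @ w1) / (w1 @ w1) * w1 + (v @ w2) / (w2 @ w2) * w2

# The residual v - p is orthogonal to W, which is what makes p the
# closest vector in W to v.
r = v - p
assert abs(r @ w1) < 1e-12 and abs(r @ w2) < 1e-12
```

Here W is the xy-plane, so the projection simply drops the z-coordinate of v.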

     




     

     

    Orthogonal Complements is shared under a CC BY license and was authored, remixed, and/or curated by LibreTexts.
