6.2: Orthogonal Complements
- Understand the basic properties of orthogonal complements.
- Learn to compute the orthogonal complement of a subspace.
- Recipes: shortcuts for computing the orthogonal complements of common subspaces.
- Picture: orthogonal complements in \(\mathbb{R}^2\) and \(\mathbb{R}^3\).
- Theorem: row rank equals column rank.
- Vocabulary words: orthogonal complement, row space.
It will be important to compute the set of all vectors that are orthogonal to a given set of vectors. It turns out that a vector is orthogonal to a set of vectors if and only if it is orthogonal to the span of those vectors, which is a subspace, so we restrict ourselves to the case of subspaces.
Definition of the Orthogonal Complement
Taking the orthogonal complement is an operation that is performed on subspaces.
Let \(W\) be a subspace of \(\mathbb{R}^n\). Its orthogonal complement is the subspace
\[ W^\perp = \bigl\{\, v \text{ in } \mathbb{R}^n \;:\; v \cdot w = 0 \text{ for all } w \text{ in } W \,\bigr\}. \]
The symbol \(W^\perp\) is sometimes read "\(W\) perp."
This is the set of all vectors \(v\) in \(\mathbb{R}^n\) that are orthogonal to all of the vectors in \(W\).
We now have two similar-looking pieces of notation: \(A^T\) denotes the transpose of a matrix \(A\), while \(W^\perp\) denotes the orthogonal complement of a subspace \(W\).
Try not to confuse the two.
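As a quick computational aside (not part of the text), the remark at the start of this section says that membership in \(W^\perp\) only needs to be checked against a spanning set of \(W\). A minimal Python sketch, with hypothetical vectors chosen purely for illustration:

```python
# Sketch: test whether x lies in the orthogonal complement of W = Span{v1, v2}
# by checking x . v_i = 0 for each spanning vector (hypothetical vectors).
import numpy as np

v1 = np.array([1.0, 1.0, -1.0])
v2 = np.array([0.0, 2.0,  1.0])
x  = np.array([3.0, -1.0, 2.0])   # candidate vector

in_W_perp = all(np.isclose(np.dot(v, x), 0.0) for v in (v1, v2))
print(in_W_perp)   # True exactly when x is orthogonal to every vector in W
```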
Pictures of orthogonal complements
Figure: the orthogonal complement of a line in \(\mathbb{R}^2\).
Figure: the orthogonal complement of a line in \(\mathbb{R}^3\).
Figure: the orthogonal complement of a plane in \(\mathbb{R}^3\).
We see in the above pictures that \((W^\perp)^\perp = W\).
The orthogonal complement of \(\mathbb{R}^n\) is \(\{0\}\), since the zero vector is the only vector that is orthogonal to all of the vectors in \(\mathbb{R}^n\).
For the same reason, we have \(\{0\}^\perp = \mathbb{R}^n\).
Computing Orthogonal Complements
Since any subspace is a span, the following proposition gives a recipe for computing the orthogonal complement of any subspace. However, below we will give several shortcuts for computing the orthogonal complements of other common kinds of subspaces–in particular, null spaces. To compute the orthogonal complement of a general subspace, usually it is best to rewrite the subspace as the column space or null space of a matrix, as in Note 2.6.3 in Section 2.6.
Let \(v_1, v_2, \ldots, v_m\) be vectors in \(\mathbb{R}^n\), and let \(W = \operatorname{Span}\{v_1, v_2, \ldots, v_m\}\). Then
\[ W^\perp = \bigl\{\text{all vectors orthogonal to each } v_1, v_2, \ldots, v_m\bigr\} = \operatorname{Nul}\begin{pmatrix} v_1^T \\ v_2^T \\ \vdots \\ v_m^T \end{pmatrix}. \]
- Proof
To justify the first equality, we need to show that a vector \(x\) is perpendicular to all of the vectors in \(W\) if and only if it is perpendicular only to \(v_1, v_2, \ldots, v_m\). Since the \(v_i\) are contained in \(W\), we really only have to show that if \(x \cdot v_1 = x \cdot v_2 = \cdots = x \cdot v_m = 0\), then \(x\) is perpendicular to every vector \(v\) in \(W\). Indeed, any vector in \(W\) has the form \(v = c_1 v_1 + c_2 v_2 + \cdots + c_m v_m\) for suitable scalars \(c_1, c_2, \ldots, c_m\), so
\[ x \cdot v = c_1 (x \cdot v_1) + c_2 (x \cdot v_2) + \cdots + c_m (x \cdot v_m) = 0. \]
Therefore, \(x\) is in \(W^\perp\).

To prove the second equality, we let
\[ A = \begin{pmatrix} v_1^T \\ v_2^T \\ \vdots \\ v_m^T \end{pmatrix}. \]
By the row-column rule for matrix multiplication, Definition 2.3.3 in Section 2.3, for any vector \(x\) in \(\mathbb{R}^n\) we have
\[ Ax = \begin{pmatrix} v_1 \cdot x \\ v_2 \cdot x \\ \vdots \\ v_m \cdot x \end{pmatrix}. \]
Therefore, \(x\) is in \(\operatorname{Nul}(A)\) if and only if \(x\) is perpendicular to each vector \(v_1, v_2, \ldots, v_m\).
Since column spaces are the same as spans, we can rephrase the proposition as follows. Let \(v_1, v_2, \ldots, v_m\) be vectors in \(\mathbb{R}^n\), and let \(A\) be the matrix with columns \(v_1, v_2, \ldots, v_m\). Then
\[ \operatorname{Col}(A)^\perp = \bigl\{\text{all vectors orthogonal to each } v_1, v_2, \ldots, v_m\bigr\} = \operatorname{Nul}(A^T). \]
Again, it is important to be able to go easily back and forth between spans and column spaces. If you are handed a span, you can apply the proposition once you have rewritten your span as a column space.
By the proposition, computing the orthogonal complement of a span means solving a system of linear equations. For example, if \(W = \operatorname{Span}\{v_1, v_2\}\), then \(W^\perp\) is the set of all vectors \(x\) satisfying
\[ v_1 \cdot x = 0 \qquad \text{and} \qquad v_2 \cdot x = 0. \]
This is the solution set of the system of equations obtained by expanding these two dot products.
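Here is one way the proposition's recipe might be carried out with exact arithmetic in SymPy. This is only a sketch: the spanning vectors are hypothetical, and `nullspace` stands in for the row reduction one would do by hand.

```python
# Sketch of the proposition's recipe: W-perp is the null space of the
# matrix whose rows are the (transposed) spanning vectors of W.
import sympy as sp

v1 = sp.Matrix([1, 7, 2])    # hypothetical spanning vectors of W
v2 = sp.Matrix([-2, 3, 1])

A = sp.Matrix.vstack(v1.T, v2.T)   # rows are v1^T and v2^T
W_perp_basis = A.nullspace()       # basis of W-perp = Nul(A)

# Check: each basis vector of W-perp is orthogonal to v1 and v2.
for w in W_perp_basis:
    assert v1.dot(w) == 0 and v2.dot(w) == 0
print(W_perp_basis)
```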
Compute
Solution
According to Proposition
The free variable is
Scaling by a factor of
We can check our work:
Find all vectors orthogonal to
Solution
According to Proposition
This matrix is in reduced row echelon form. The parametric form for the solution set is
Therefore, the answer is the plane

Compute
Solution
According to Proposition
The parametric vector form of the solution is
Therefore, the answer is the line
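For floating-point data, a numerical routine can play the role of the row reduction used in the examples above. A minimal sketch using SciPy, with a hypothetical matrix; `null_space` returns an orthonormal basis of the null space, hence of \(W^\perp\):

```python
# Sketch: numerically compute a basis of W-perp when W is spanned by the rows of A.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 1.0, -1.0],     # hypothetical spanning vectors of W, as rows
              [0.0, 2.0,  1.0]])

W_perp = null_space(A)              # columns form an orthonormal basis of W-perp
print(W_perp.shape)                 # (3, 1): a line in R^3, since dim W = 2

# Each column is orthogonal to every row of A (up to rounding).
print(np.allclose(A @ W_perp, 0.0))
```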

In order to find shortcuts for computing orthogonal complements, we need the following basic facts. Looking back at the above examples, all of these facts should be believable.
Let \(W\) be a subspace of \(\mathbb{R}^n\). Then:
1. \(W^\perp\) is also a subspace of \(\mathbb{R}^n\).
2. \((W^\perp)^\perp = W\).
3. \(\dim W + \dim W^\perp = n\).
- Proof
For the first assertion, we verify the three defining properties of subspaces, Definition 2.6.2 in Section 2.6.
- The zero vector is in \(W^\perp\) because the zero vector is orthogonal to every vector in \(\mathbb{R}^n\).
- Let \(u, v\) be in \(W^\perp\), so \(u \cdot x = 0\) and \(v \cdot x = 0\) for every vector \(x\) in \(W\). We must verify that \((u + v) \cdot x = 0\) for every \(x\) in \(W\). Indeed, we have \((u + v) \cdot x = u \cdot x + v \cdot x = 0 + 0 = 0\).
- Let \(u\) be in \(W^\perp\), so \(u \cdot x = 0\) for every \(x\) in \(W\), and let \(c\) be a scalar. We must verify that \((cu) \cdot x = 0\) for every \(x\) in \(W\). Indeed, we have \((cu) \cdot x = c(u \cdot x) = c \cdot 0 = 0\).

Next we prove the third assertion. Let \(v_1, v_2, \ldots, v_m\) be a basis for \(W\), so \(m = \dim W\), and let \(w_1, w_2, \ldots, w_k\) be a basis for \(W^\perp\), so \(k = \dim W^\perp\). We need to show \(k + m = n\). First we claim that \(\{v_1, \ldots, v_m, w_1, \ldots, w_k\}\) is linearly independent. Suppose that \(c_1 v_1 + \cdots + c_m v_m + d_1 w_1 + \cdots + d_k w_k = 0\). Let \(v = c_1 v_1 + \cdots + c_m v_m\) and \(w = d_1 w_1 + \cdots + d_k w_k\), so \(v\) is in \(W\), \(w\) is in \(W^\perp\), and \(v + w = 0\). Then \(v = -w\) is in both \(W\) and \(W^\perp\), which implies \(v\) is perpendicular to itself. In particular, \(v \cdot v = 0\), so \(v = 0\), and hence \(w = 0\). Therefore, all coefficients are equal to zero, because \(v_1, \ldots, v_m\) and \(w_1, \ldots, w_k\) are linearly independent.

It follows from the previous paragraph that \(k + m \leq n\). Suppose that \(k + m < n\). Then the matrix
\[ A = \begin{pmatrix} v_1^T \\ \vdots \\ v_m^T \\ w_1^T \\ \vdots \\ w_k^T \end{pmatrix} \]
has more columns than rows (it is "wide"), so its null space is nonzero by Note 3.2.1 in Section 3.2. Let \(u\) be a nonzero vector in \(\operatorname{Nul}(A)\). Then
\[ 0 = Au = \begin{pmatrix} v_1 \cdot u \\ \vdots \\ v_m \cdot u \\ w_1 \cdot u \\ \vdots \\ w_k \cdot u \end{pmatrix} \]
by the row-column rule for matrix multiplication, Definition 2.3.3 in Section 2.3. Since \(v_i \cdot u = 0\) for all \(i\), it follows from the Proposition above that \(u\) is in \(W^\perp\), and similarly, \(u\) is in \((W^\perp)^\perp\). As above, this implies \(u\) is orthogonal to itself, which contradicts our assumption that \(u\) is nonzero. Therefore, \(k + m = n\), as desired.

Finally, we prove the second assertion. Clearly \(W\) is contained in \((W^\perp)^\perp\): this says that everything in \(W\) is perpendicular to the set of all vectors perpendicular to everything in \(W\). Let \(m = \dim W\). By 3, we have \(\dim W^\perp = n - m\), so \(\dim (W^\perp)^\perp = n - (n - m) = m\). The only \(m\)-dimensional subspace of \((W^\perp)^\perp\) is all of \((W^\perp)^\perp\), so \((W^\perp)^\perp = W\).
See the subsection Pictures of orthogonal complements above for pictures of the second property. As for the third: for example, if \(W\) is a plane in \(\mathbb{R}^3\), then the orthogonal complement of the plane is a perpendicular line, and indeed \(2 + 1 = 3\).
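The second and third facts can also be spot-checked numerically. A minimal SymPy sketch, with a hypothetical spanning matrix, that computes \(W^\perp\) as a null space and then computes \((W^\perp)^\perp\) the same way:

```python
# Sketch: check dim W + dim W-perp = n and (W-perp)-perp = W for one example.
import sympy as sp

B = sp.Matrix([[1, 2, 0, 1],        # hypothetical: rows span W inside R^4
               [0, 1, 1, 3]])
n = B.cols
dim_W = B.rank()

W_perp = B.nullspace()                              # basis of W-perp
assert dim_W + len(W_perp) == n                     # fact 3

C = sp.Matrix.vstack(*[w.T for w in W_perp])        # rows span W-perp
W_perp_perp = C.nullspace()                         # basis of (W-perp)-perp

# (W-perp)-perp should equal W: same dimension, and adjoining its basis
# vectors to the rows of B does not increase the rank.
assert len(W_perp_perp) == dim_W
assert sp.Matrix.vstack(B, *[u.T for u in W_perp_perp]).rank() == dim_W
```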
The row space of a matrix \(A\) is the span of the rows of \(A\). It is denoted \(\operatorname{Row}(A)\).
If \(A\) is an \(m \times n\) matrix, then its rows are vectors with \(n\) entries, so \(\operatorname{Row}(A)\) is a subspace of \(\mathbb{R}^n\). Equivalently, since the rows of \(A\) are the columns of \(A^T\), the row space of \(A\) is the column space of \(A^T\): \(\operatorname{Row}(A) = \operatorname{Col}(A^T)\).
We showed in the above Proposition that if \(A\) has rows \(v_1^T, v_2^T, \ldots, v_m^T\), then
\[ \operatorname{Row}(A)^\perp = \operatorname{Span}\{v_1, v_2, \ldots, v_m\}^\perp = \operatorname{Nul}(A). \]
Taking orthogonal complements of both sides and using the second fact gives
\[ \operatorname{Row}(A) = \operatorname{Nul}(A)^\perp. \]
Replacing \(A\) by \(A^T\) and remembering that \(\operatorname{Row}(A^T) = \operatorname{Col}(A)\) gives
\[ \operatorname{Col}(A)^\perp = \operatorname{Nul}(A^T) \qquad \text{and} \qquad \operatorname{Col}(A) = \operatorname{Nul}(A^T)^\perp. \]
To summarize:
For any vectors \(v_1, v_2, \ldots, v_m\), we have
\[ \operatorname{Span}\{v_1, v_2, \ldots, v_m\}^\perp = \operatorname{Nul}\begin{pmatrix} v_1^T \\ v_2^T \\ \vdots \\ v_m^T \end{pmatrix}. \]
For any matrix \(A\), we have
\[ \operatorname{Row}(A)^\perp = \operatorname{Nul}(A), \quad \operatorname{Nul}(A)^\perp = \operatorname{Row}(A), \quad \operatorname{Col}(A)^\perp = \operatorname{Nul}(A^T), \quad \operatorname{Nul}(A^T)^\perp = \operatorname{Col}(A). \]
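These identities can be sanity-checked numerically as well. A brief sketch with a hypothetical (rank-deficient, so that both null spaces are nonzero) matrix; orthogonality of every null space vector to every row, respectively every column, is exactly what the first and third identities assert:

```python
# Sketch: Nul(A) is orthogonal to Row(A), and Nul(A^T) is orthogonal to Col(A).
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],      # hypothetical rank-1 matrix
              [2.0, 4.0, 6.0]])

N  = null_space(A)      # columns: basis of Nul(A)
NT = null_space(A.T)    # columns: basis of Nul(A^T)

print(np.allclose(A @ N, 0.0))      # rows of A are orthogonal to Nul(A)
print(np.allclose(A.T @ NT, 0.0))   # columns of A are orthogonal to Nul(A^T)
```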
As mentioned in the beginning of this subsection, in order to compute the orthogonal complement of a general subspace, usually it is best to rewrite the subspace as the column space or null space of a matrix.
Compute the orthogonal complement of the subspace
Solution
Rewriting, we see that
No row reduction was needed!
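In the same spirit, with a generic equation in place of the specific one used above: if a subspace is presented as the solution set of a single homogeneous equation, it is already a null space, so its orthogonal complement is the corresponding row space,
\[ W = \{\, x \text{ in } \mathbb{R}^n : a \cdot x = 0 \,\} = \operatorname{Nul}\begin{pmatrix} a^T \end{pmatrix}
\qquad\Longrightarrow\qquad
W^\perp = \operatorname{Row}\begin{pmatrix} a^T \end{pmatrix} = \operatorname{Span}\{a\}. \]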
Find the orthogonal complement of the
Solution
The
so
These vectors are necessarily linearly dependent (why?).
Row Rank and Column Rank
Suppose that \(A\) is an \(m \times n\) matrix. We call the dimension of \(\operatorname{Col}(A)\) the column rank of \(A\) (this is the same as the rank of \(A\)), and we call the dimension of \(\operatorname{Row}(A)\) the row rank of \(A\).
Let \(A\) be a matrix. Then the row rank of \(A\) is equal to the column rank of \(A\).
- Proof
By Theorem 2.9.1 in Section 2.9, we have
\[ \dim \operatorname{Col}(A) + \dim \operatorname{Nul}(A) = n. \]
On the other hand, the third fact says that
\[ \dim \operatorname{Nul}(A)^\perp + \dim \operatorname{Nul}(A) = n, \]
which implies \(\dim \operatorname{Col}(A) = \dim \operatorname{Nul}(A)^\perp\). Since \(\operatorname{Nul}(A)^\perp = \operatorname{Row}(A)\), we have
\[ \dim \operatorname{Col}(A) = \dim \operatorname{Row}(A), \]
as desired.
In particular, by Corollary 2.7.1 in Section 2.7, both the row rank and the column rank are equal to the number of pivots of \(A\).
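As a quick numerical illustration of the theorem, with a hypothetical matrix: the column rank of \(A\) agrees with the row rank of \(A\), which is the column rank of \(A^T\).

```python
# Sketch: row rank equals column rank, checked via matrix_rank on A and A^T.
import numpy as np

A = np.array([[1.0, 2.0, 3.0, 4.0],   # hypothetical matrix
              [2.0, 4.0, 6.0, 8.0],
              [1.0, 0.0, 1.0, 0.0]])

column_rank = np.linalg.matrix_rank(A)     # dim Col(A)
row_rank    = np.linalg.matrix_rank(A.T)   # dim Col(A^T) = dim Row(A)
print(column_rank, row_rank)               # the two numbers agree
```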






