
9.4: Subspaces and Basis


    Outcomes

    1. Utilize the subspace test to determine if a set is a subspace of a given vector space.
    2. Extend a linearly independent set and shrink a spanning set to a basis of a given vector space.

    In this section we revisit the concept of subspaces, introduced earlier for \(\mathbb{R}^n\), in the setting of abstract vector spaces.

    Consider the definition of a subspace.

    Definition \(\PageIndex{1}\): Subspace

    Let \(V\) be a vector space. A subset \(W\subseteq V\) is said to be a subspace of \(V\) if \(a\vec{x}+b\vec{y} \in W\) whenever \(a,b\in \mathbb{R}\) and \(\vec{x},\vec{y}\in W.\)

    The span of a set of vectors as described in Definition 9.2.3 is an example of a subspace. The following fundamental result says that subspaces are subsets of a vector space which are themselves vector spaces.

    Theorem \(\PageIndex{1}\): Subspaces are Vector Spaces

    Let \(W\) be a nonempty collection of vectors in a vector space \(V\). Then \(W\) is a subspace if and only if \(W\) satisfies the vector space axioms, using the same operations as those defined on \(V\).

    Proof

    Suppose first that \(W\) is a subspace. All the algebraic laws hold on \(W\) because it is a subset of \(V\) and they hold on \(V\). Thus \(\vec{u}+\vec{v}=\vec{v}+\vec{u}\) holds, along with the other axioms. Does \(W\) contain \(\vec{0}?\) Yes: \(W\) is nonempty, so it contains some vector \(\vec{u}\), and hence it contains \(0\vec{u}=\vec{0}\). See Theorem 9.1.1.

    Are the operations of \(V\) defined on \(W?\) That is, when you add vectors of \(W\) do you get a vector in \(W?\) When you multiply a vector in \(W\) by a scalar, do you get a vector in \(W?\) Yes. This is contained in the definition. Does every vector in \(W\) have an additive inverse? Yes by Theorem 9.1.1 because \(-\vec{v}=\left( -1\right) \vec{v}\) which is given to be in \(W\) provided \(\vec{v}\in W\).

    Conversely, suppose \(W\) satisfies the vector space axioms using the operations of \(V\). Then in particular \(W\) is closed under addition and scalar multiplication, hence under linear combinations, so it is a subspace.

    Consider the following useful Corollary.

    Corollary \(\PageIndex{1}\): Span is a Subspace

    Let \(V\) be a vector space with \(W \subseteq V\). If \(W = \mathrm{span} \left\{ \vec{v}_1, \cdots, \vec{v}_n \right\}\) then \(W\) is a subspace of \(V\).

    When determining spanning sets the following theorem proves useful.

    Theorem \(\PageIndex{2}\): Spanning Set

    Let \(W \subseteq V\) for a vector space \(V\) and suppose \(W = \mathrm{span} \left\{ \vec{v}_1, \vec{v}_2, \cdots, \vec{v}_n \right\}\).

    Let \(U \subseteq V\) be a subspace such that \(\vec{v}_1, \vec{v}_2, \cdots, \vec{v}_n \in U\). Then it follows that \(W \subseteq U\).

    In other words, this theorem claims that any subspace that contains a set of vectors must also contain the span of these vectors.

    The following example will show that two spans, described differently, can in fact be equal.

    Example \(\PageIndex{1}\): Equal Span

    Let \(p(x), q(x)\) be polynomials and suppose \(U = \mathrm{span}\left\{ 2p(x) - q(x), p(x) + 3q(x)\right\}\) and \(W = \mathrm{span}\left\{ p(x), q(x) \right\}\). Show that \(U = W\).

    Solution

    We will use Theorem \(\PageIndex{2}\) to show that \(U \subseteq W\) and \(W \subseteq U\). It will then follow that \(U=W\).

    1. \(U \subseteq W\) Notice that \(2p(x) - q(x)\) and \(p(x) + 3q(x)\) are both in \(W=\mathrm{span}\left\{p(x),\,q(x)\right\}\). Then by Theorem \(\PageIndex{2}\), \(W\) must contain the span of these polynomials and so \(U \subseteq W\).
    2. \(W \subseteq U\) Notice that \[\begin{aligned}p(x)&=\frac{3}{7}(2p(x)-q(x))+\frac{1}{7}(p(x)+3q(x)) \\ q(x)&=-\frac{1}{7}(2p(x)-q(x))+\frac{2}{7}(p(x)+3q(x))\end{aligned}\] Hence \(p(x),\ q(x)\) are in \(\mathrm{span}\left\{2p(x)-q(x),\ p(x)+3q(x)\right\}\). By Theorem \(\PageIndex{2}\), \(U\) must contain the span of these polynomials and so \(W \subseteq U\). (A computational check follows this list.)
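
    The coefficients used in part 2 can be found, and checked, mechanically: expressing \(p(x)\) and \(q(x)\) in terms of \(2p(x)-q(x)\) and \(p(x)+3q(x)\) amounts to solving a \(2\times 2\) linear system. Below is a minimal computational sketch, assuming the SymPy library is available; \(p\) and \(q\) stay symbolic since only the coefficients matter.

    ```python
    from sympy import Matrix

    # Columns of M hold the coefficients of 2p - q and p + 3q with
    # respect to the pair (p, q).
    M = Matrix([[2, 1],
                [-1, 3]])

    # Solving M * (a, b)^T = (1, 0)^T expresses p in terms of the two
    # spanning polynomials; the right side (0, 1)^T does the same for q.
    for rhs, name in [(Matrix([1, 0]), "p"), (Matrix([0, 1]), "q")]:
        a, b = M.solve(rhs)
        print(f"{name} = {a} * (2p - q) + {b} * (p + 3q)")
    # Prints: p = 3/7 * (2p - q) + 1/7 * (p + 3q)
    #         q = -1/7 * (2p - q) + 2/7 * (p + 3q)
    ```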

    To prove that a set is a vector space, one must verify each of the axioms given in Definitions 9.1.2 and 9.1.3. This is a cumbersome task, so a shorter procedure is used to verify a subspace.

    Procedure \(\PageIndex{1}\): Subspace Test

    Suppose \(W\) is a subset of a vector space \(V\). To determine if \(W\) is a subspace of \(V\), it is sufficient to determine if the following three conditions hold, using the operations of \(V\):

    1. The additive identity \(\vec{0}\) of \(V\) is contained in \(W\).
    2. For any vectors \(\vec{w}_1, \vec{w}_2\) in \(W\), \(\vec{w}_1 + \vec{w}_2\) is also in \(W\).
    3. For any vector \(\vec{w}_1\) in \(W\) and scalar \(a\), the product \(a\vec{w}_1\) is also in \(W\).

    Therefore it suffices to prove these three steps to show that a set is a subspace.
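
    When the vectors have concrete coordinate representations, the three conditions can be spot-checked numerically before attempting a proof. The following is a minimal sketch, assuming NumPy, using the plane \(x+y+z=0\) in \(\mathbb{R}^3\) as an illustrative candidate; the helper names in_W and sample_W are ours, not part of any library. Random checks can refute the subspace property, but passing them is only evidence, not a proof.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Candidate subspace of R^3 (coordinates relative to the standard
    # basis): W = { v : v_1 + v_2 + v_3 = 0 }.
    def in_W(v, tol=1e-9):
        return abs(v.sum()) < tol

    def sample_W():
        v = rng.normal(size=3)
        return v - v.mean()            # force the coordinate sum to zero

    # Condition 1: the zero vector belongs to W.
    assert in_W(np.zeros(3))

    # Conditions 2 and 3: closure under addition and scalar multiplication.
    for _ in range(1000):
        w1, w2, a = sample_W(), sample_W(), rng.normal()
        assert in_W(w1 + w2)
        assert in_W(a * w1)

    print("all random subspace checks passed")
    ```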

    Consider the following example.

    Example \(\PageIndex{2}\): Improper Subspaces

    Let \(V\) be an arbitrary vector space. Then \(V\) is a subspace of itself. Similarly, the set \(\left\{ \vec{0} \right\}\) containing only the zero vector is also a subspace.

    Solution

    Using the subspace test in Procedure \(\PageIndex{1}\) we can show that \(V\) and \(\left\{ \vec{0} \right\}\) are subspaces of \(V\).

    Since \(V\) satisfies the vector space axioms it also satisfies the three steps of the subspace test. Therefore \(V\) is a subspace.

    Let’s consider the set \(\left\{ \vec{0} \right\}\).

    1. The vector \(\vec{0}\) is clearly contained in \(\left\{ \vec{0} \right\}\), so the first condition is satisfied.
    2. Let \(\vec{w}_1, \vec{w}_2\) be in \(\left\{ \vec{0} \right\}\). Then \(\vec{w}_1 = \vec{0}\) and \(\vec{w}_2 = \vec{0}\) and so \[\vec{w}_1 + \vec{w}_2 = \vec{0} + \vec{0} = \vec{0}\nonumber \] It follows that the sum is contained in \(\left\{ \vec{0} \right\}\) and the second condition is satisfied.
    3. Let \(\vec{w}_1\) be in \(\left\{ \vec{0} \right\}\) and let \(a\) be an arbitrary scalar. Then \[a\vec{w}_1 = a\vec{0} = \vec{0}\nonumber \] Hence the product is contained in \(\left\{ \vec{0} \right\}\) and the third condition is satisfied.

    It follows that \(\left\{ \vec{0} \right\}\) is a subspace of \(V\).

    The two subspaces described above are called improper subspaces. Any subspace of a vector space \(V\) which is not equal to \(V\) or \(\left\{ \vec{0} \right\}\) is called a proper subspace.

    Consider another example.

    Example \(\PageIndex{3}\): Subspace of Polynomials

    Let \(\mathbb{P}_2\) be the vector space of polynomials of degree two or less. Let \(W \subseteq \mathbb{P}_2\) be all polynomials of degree two or less which have \(1\) as a root. Show that \(W\) is a subspace of \(\mathbb{P}_2\).

    Solution

    First, express \(W\) as follows: \[W = \left\{ p(x) = ax^2 + bx + c,\ a,b,c \in \mathbb{R} \mid p(1) = 0 \right\}\nonumber \]

    We need to show that \(W\) satisfies the three conditions of Procedure \(\PageIndex{1}\).

    1. The zero polynomial of \(\mathbb{P}_2\) is given by \(0(x) = 0x^2 + 0x + 0 = 0\). Clearly \(0(1) = 0\) so \(0(x)\) is contained in \(W\).
    2. Let \(p(x), q(x)\) be polynomials in \(W\). It follows that \(p(1) = 0\) and \(q(1) = 0\). Now consider \(p(x) + q(x)\). Let \(r(x)\) represent this sum. \[\begin{aligned} r(1) &= p(1) + q(1) \\ &= 0 + 0 \\ &= 0\end{aligned}\] Therefore the sum is also in \(W\) and the second condition is satisfied.
    3. Let \(p(x)\) be a polynomial in \(W\) and let \(a\) be a scalar. It follows that \(p(1) = 0\). Consider the product \(ap(x)\). Evaluating at \(x = 1\), \[\begin{aligned} (ap)(1) &= a\,p(1) = a(0) \\ &= 0\end{aligned}\] Therefore the product is in \(W\) and the third condition is satisfied.

    It follows that \(W\) is a subspace of \(\mathbb{P}_2\).
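
    The two closure computations can be illustrated concretely. The sketch below, assuming SymPy, picks two arbitrary quadratics with \(1\) as a root and verifies that their sum and a scalar multiple again vanish at \(x=1\); the particular polynomials and scalar are arbitrary choices.

    ```python
    from sympy import Rational, expand, symbols

    x = symbols("x")

    # Two concrete elements of W: quadratics having 1 as a root.
    p = expand((x - 1) * (2 * x + 3))    # p(1) = 0 by construction
    q = expand((x - 1) * (x - 5))        # q(1) = 0 by construction
    a = Rational(7, 2)                   # an arbitrary scalar

    print((p + q).subs(x, 1))            # 0: the sum stays in W
    print((a * p).subs(x, 1))            # 0: the scalar multiple stays in W
    ```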

    Recall the definition of basis, considered now in the context of vector spaces.

    Definition \(\PageIndex{2}\): Basis

    Let \(V\) be a vector space. Then \(\{\vec{v}_{1},\cdots ,\vec{v}_{n}\}\) is called a basis for \(V\) if the following conditions hold.

    1. \(\mathrm{span}\left\{ \vec{v}_{1},\cdots ,\vec{v}_{n}\right\} = V\)
    2. \(\{\vec{v}_{1},\cdots ,\vec{v}_{n}\}\) is linearly independent

    Consider the following example.

    Example \(\PageIndex{4}\): Polynomials of Degree Two

    Let \(\mathbb{P}_2\) be the set of polynomials of degree no more than 2. We can write \(\mathbb{P}_2=\mathrm{span}\left\{ x^{2}, x, 1\right\} .\) Is \(\left\{ x^{2}, x, 1\right\}\) a basis for \(\mathbb{P}_2\)?

    Solution

    It can be verified that \(\mathbb{P}_2\) is a vector space defined under the usual addition and scalar multiplication of polynomials.

    Now, since \(\mathbb{P}_2=\mathrm{span}\left\{ x^{2},x, 1\right\}\), the set \(\left\{ x^{2}, x, 1\right\}\) is a basis if it is linearly independent. Suppose then that \[ax^{2}+bx+c=0x^2 + 0x + 0\nonumber \] where \(a,b,c\) are real numbers. It is clear that this can only occur if \(a=b=c=0\). Hence the set is linearly independent and forms a basis of \(\mathbb{P}_2\).
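
    The independence check reduces to a statement about coefficient vectors: \(ax^{2}+bx+c\) is the zero polynomial exactly when all three coefficients vanish. A short sketch of the corresponding matrix computation, assuming SymPy:

    ```python
    from sympy import Matrix

    # Each column lists the (constant, x, x^2) coefficients of one of
    # the polynomials x^2, x, 1.
    M = Matrix([[0, 0, 1],
                [0, 1, 0],
                [1, 0, 0]])

    # ax^2 + bx + c = 0 corresponds to M * (a, b, c)^T = 0, so the set
    # is linearly independent exactly when the null space is trivial.
    print(M.nullspace())   # []  (only the zero solution)
    print(M.rank())        # 3
    ```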

    The next theorem is an essential result in linear algebra and is called the exchange theorem.

    Theorem \(\PageIndex{3}\): Exchange Theorem

    Let \(\left\{ \vec{x}_{1},\cdots ,\vec{x}_{r}\right\}\) be a linearly independent set of vectors such that each \(\vec{x}_{i}\) is contained in \(\mathrm{span}\left\{ \vec{y}_{1},\cdots ,\vec{y}_{s}\right\} .\) Then \(r\leq s.\)

    Proof

    The proof will proceed as follows. First, we set up the necessary steps for the proof. Next, we will assume that \(r > s\) and show that this leads to a contradiction, thus requiring that \(r \leq s\).

    Define \(\mathrm{span}\left\{ \vec{y}_{1},\cdots ,\vec{y}_{s}\right\} = V.\) Since each \(\vec{x}_i\) is in \(\mathrm{span}\left\{ \vec{y}_{1},\cdots ,\vec{y}_{s}\right\}\), it follows there exist scalars \(c_{1},\cdots ,c_{s}\) such that \[\vec{x}_{1}=\sum_{i=1}^{s}c_{i}\vec{y}_{i} \label{lincomb}\] Note that not all of these scalars \(c_i\) can equal zero. Suppose that all the \(c_i=0\). Then it would follow that \(\vec{x}_{1}=\vec{0}\) and so \(\left\{ \vec{x}_{1},\cdots ,\vec{x}_{r}\right\}\) would not be linearly independent. Indeed, if \(\vec{x}_{1}=\vec{0}\), \(1\vec{x}_{1}+\sum_{i=2}^{r}0\vec{x}_{i}=\vec{x}_{1}=\vec{0}\) and so there would exist a nontrivial linear combination of the vectors \(\left\{ \vec{x}_{1},\cdots ,\vec{x}_{r}\right\}\) which equals zero. Therefore at least one \(c_i\) is nonzero.

    Say \(c_{k}\neq 0.\) Then solve \(\eqref{lincomb}\) for \(\vec{y}_{k}\) and obtain \[\vec{y}_{k}\in \mathrm{span}\left\{ \vec{x}_{1},\overset{\text{s-1 vectors here}}{\overbrace{\vec{y}_{1},\cdots ,\vec{y}_{k-1},\vec{y} _{k+1},\cdots ,\vec{y}_{s}}}\right\} .\nonumber \] Define \(\left\{ \vec{z}_{1},\cdots ,\vec{z}_{s-1}\right\}\) to be \[\left\{ \vec{z}_{1},\cdots ,\vec{z}_{s-1}\right\} = \left\{ \vec{y}_{1},\cdots ,\vec{y}_{k-1},\vec{y}_{k+1},\cdots ,\vec{y} _{s}\right\}\nonumber \] Now we can write \[\vec{y}_{k}\in \mathrm{span}\left\{ \vec{x}_{1}, \vec{z}_{1},\cdots, \vec{z}_{s-1}\right\}\nonumber \] Therefore, \(\mathrm{span}\left\{ \vec{x}_{1},\vec{z}_{1},\cdots ,\vec{z }_{s-1}\right\}=V\). To see this, suppose \(\vec{v}\in V\). Then there exist constants \(c_{1},\cdots ,c_{s}\) such that \[\vec{v}=\sum_{i=1}^{s-1}c_{i}\vec{z}_{i}+c_{s}\vec{y}_{k}.\nonumber \] Replace this \(\vec{y}_{k}\) with a linear combination of the vectors \(\left\{ \vec{x}_{1},\vec{z}_{1},\cdots ,\vec{z}_{s-1}\right\}\) to obtain \(\vec{v}\in \mathrm{span}\left\{ \vec{x}_{1},\vec{z} _{1},\cdots ,\vec{z}_{s-1}\right\} .\) The vector \(\vec{y}_{k},\) in the list \(\left\{ \vec{y}_{1},\cdots ,\vec{y}_{s}\right\} ,\) has now been replaced with the vector \(\vec{x}_{1}\) and the resulting modified list of vectors has the same span as the original list of vectors, \(\left\{ \vec{y }_{1},\cdots ,\vec{y}_{s}\right\} .\)

    We are now ready to move on to the proof. Suppose that \(r>s\) and that \[\mathrm{span}\left\{ \vec{x}_{1},\cdots ,\vec{x}_{l},\vec{z}_{1},\cdots ,\vec{z}_{p}\right\} =V\nonumber \] where the process established above has continued. In other words, the vectors \(\vec{z}_{1},\cdots ,\vec{z}_{p}\) are each taken from the set \(\left\{ \vec{y}_{1},\cdots ,\vec{y}_{s}\right\}\) and \(l+p=s.\) This was done for \(l=1\) above. Then since \(r>s,\) it follows that \(l\leq s<r\) and so \(l+1\leq r.\) Therefore, \(\vec{x}_{l+1}\) is a vector not in the list \(\left\{ \vec{x}_{1},\cdots ,\vec{x}_{l}\right\}\), and since \[\mathrm{span}\left\{ \vec{x}_{1},\cdots ,\vec{x}_{l},\vec{z}_{1},\cdots ,\vec{z}_{p}\right\} =V\nonumber \] there exist scalars \(c_{i}\) and \(d_{j}\) such that \[\vec{x}_{l+1}=\sum_{i=1}^{l}c_{i}\vec{x}_{i}+\sum_{j=1}^{p}d_{j}\vec{z}_{j}. \label{lincomb2}\] Not all the \(d_{j}\) can equal zero, because if they did, \(\vec{x}_{l+1}\) would equal a linear combination of \(\vec{x}_{1},\cdots ,\vec{x}_{l}\), making \(\left\{ \vec{x}_{1},\cdots ,\vec{x}_{r}\right\}\) a linearly dependent set. Therefore, \(\eqref{lincomb2}\) can be solved for one of the \(\vec{z}_{j},\) say \(\vec{z}_{k},\) in terms of \(\vec{x}_{l+1}\) and the other \(\vec{z}_{j}\), and just as in the above argument, replace that \(\vec{z}_{k}\) with \(\vec{x}_{l+1}\) to obtain \[\mathrm{span}\left\{ \vec{x}_{1},\cdots ,\vec{x}_{l},\vec{x}_{l+1}, \overset{\text{p-1 vectors here}}{\overbrace{\vec{z}_{1},\cdots ,\vec{z}_{k-1},\vec{z}_{k+1},\cdots ,\vec{z}_{p}}}\right\} =V\nonumber \] Continue this way, eventually obtaining \[\mathrm{span}\left\{ \vec{x}_{1},\cdots ,\vec{x}_{s}\right\} =V.\nonumber \] But then \(\vec{x}_{r}\in \mathrm{span}\left\{ \vec{x}_{1},\cdots ,\vec{x}_{s}\right\}\), contrary to the assumption that \(\left\{ \vec{x}_{1},\cdots ,\vec{x}_{r}\right\}\) is linearly independent. Therefore, \(r\leq s\) as claimed.

    The following corollary follows from the exchange theorem.

    Corollary \(\PageIndex{2}\): Two Bases of the Same Length

    Let \(B_1\), \(B_2\) be two bases of a vector space \(V\). Suppose \(B_1\) contains \(m\) vectors and \(B_2\) contains \(n\) vectors. Then \(m = n\).

    Proof

    By Theorem \(\PageIndex{3}\), \(m\leq n\) and \(n\leq m\). Therefore \(m=n\).

    This corollary is very important so we provide another proof independent of the exchange theorem above.

    Proof

    Write \(B_{1}=\left\{ \vec{u}_{1},\cdots ,\vec{u}_{m}\right\}\) and \(B_{2}=\left\{ \vec{v}_{1},\cdots ,\vec{v}_{n}\right\}\), and suppose \(n > m.\) Then since the vectors \(\left\{ \vec{u}_{1},\cdots ,\vec{u}_{m}\right\}\) span \(V,\) there exist scalars \(c_{ij}\) such that \[\sum_{i=1}^{m}c_{ij}\vec{u}_{i}=\vec{v}_{j}.\nonumber \] Therefore, \[\sum_{j=1}^{n}d_{j}\vec{v}_{j}=\vec{0} \text{ if and only if }\sum_{j=1}^{n}\sum_{i=1}^{m}c_{ij}d_{j}\vec{u}_{i}= \vec{0}\nonumber \] if and only if \[\sum_{i=1}^{m}\left( \sum_{j=1}^{n}c_{ij}d_{j}\right) \vec{u}_{i}=\vec{0}\nonumber \] Now since \(\{\vec{u}_{1},\cdots ,\vec{u}_{m}\}\) is independent, this happens if and only if \[\sum_{j=1}^{n}c_{ij}d_{j}=0,\;i=1,2,\cdots ,m.\nonumber \] However, this is a system of \(m\) equations in the \(n\) variables \(d_{1},\cdots ,d_{n}\) and \(m<n.\) Therefore, there exists a solution to this system of equations in which not all the \(d_{j}\) are equal to zero. Recall why this is so. The augmented matrix for the system is of the form \(\left [ \begin{array}{c|c} C & \vec{0} \end{array} \right ]\) where \(C\) is a matrix which has more columns than rows. Therefore, there are free variables and hence nonzero solutions to the system of equations. This yields a nontrivial linear combination \(\sum_{j=1}^{n}d_{j}\vec{v}_{j}=\vec{0}\), which contradicts the linear independence of \(\left\{ \vec{v}_{1},\cdots ,\vec{v}_{n}\right\}\). Similarly it cannot happen that \(m > n\).
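
    The pivotal fact in this second proof is that a homogeneous system with more unknowns than equations always has a nonzero solution. A quick illustration, assuming SymPy, with an arbitrary \(2\times 3\) coefficient matrix \(C\):

    ```python
    from sympy import Matrix

    # A homogeneous system C d = 0 with m = 2 equations and n = 3
    # unknowns; the particular entries are arbitrary.
    C = Matrix([[1, 2, 3],
                [4, 5, 6]])

    null = C.nullspace()        # basis of the solution space
    print(null[0].T)            # Matrix([[1, -2, 1]]): a nonzero solution
    print((C * null[0]).T)      # Matrix([[0, 0]]): it really solves C d = 0
    ```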

    Given the result of the previous corollary, the following definition follows.

    Definition \(\PageIndex{3}\): Dimension

    A vector space \(V\) is of dimension \(n\) if it has a basis consisting of \(n\) vectors.

    Notice that the dimension is well defined by Corollary \(\PageIndex{2}\). It is assumed here that \(n<\infty\) and therefore such a vector space is said to be finite dimensional.

    Example \(\PageIndex{5}\): Dimension of a Vector Space

    Let \(\mathbb{P}_2\) be the set of all polynomials of degree at most \(2\). Find the dimension of \(\mathbb{P}_2\).

    Solution

    If we can find a basis of \(\mathbb{P}_2\) then the number of vectors in the basis will give the dimension. Recall from Example \(\PageIndex{4}\) that a basis of \(\mathbb{P}_2\) is given by \[S = \left\{ x^2, x, 1 \right\}\nonumber \] There are three polynomials in \(S\) and hence the dimension of \(\mathbb{P}_2\) is three.

    It is important to note that a basis for a vector space is not unique. A vector space can have many bases. Consider the following example.

    Example \(\PageIndex{6}\): A Different Basis for Polynomials of Degree Two

    Let \(\mathbb{P}_2\) be the polynomials of degree no more than 2. Is \(\left\{ x^{2}+x+1,2x+1,3x^{2}+1\right\}\) a basis for \(\mathbb{P}_2\)?

    Solution

    Suppose these vectors are linearly independent but do not form a spanning set for \(\mathbb{P}_2\). Then by Lemma 9.3.2, we could find a fourth polynomial in \(\mathbb{P}_2\) to create a new linearly independent set containing four polynomials. However this would imply that we could find a basis of \(\mathbb{P}_2\) of more than three polynomials. This contradicts the result of Example \(\PageIndex{5}\) in which we determined the dimension of \(\mathbb{P}_2\) is three. Therefore if these vectors are linearly independent they must also form a spanning set and thus a basis for \(\mathbb{P}_2\).

    Suppose then that \[\begin{aligned} a\left( x^{2}+x+1\right) +b\left( 2x+1\right) +c\left( 3x^{2}+1\right) &= 0\\ \left( a+3c\right) x^{2}+\left( a+2b\right) x+\left( a+b+c\right) &= 0 \end{aligned}\] We know that \(\left\{ x^2, x, 1 \right\}\) is linearly independent, and so it follows that \[\begin{aligned} a+3c &= 0 \\ a+2b &= 0 \\ a+b+c &= 0\end{aligned}\] and there is only one solution to this system of equations, \(a=b=c=0\). Therefore, these are linearly independent and form a basis for \(\mathbb{P}_2\).
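
    This computation can be double-checked by machine. In coordinates relative to \(\left\{1, x, x^{2}\right\}\), the three polynomials become the columns of a \(3\times 3\) matrix, and a nonzero determinant is equivalent to linear independence. A minimal sketch, assuming SymPy:

    ```python
    from sympy import Matrix

    # Columns hold the (constant, x, x^2) coefficients of
    # x^2 + x + 1,  2x + 1,  3x^2 + 1.
    M = Matrix([[1, 1, 1],
                [1, 2, 0],
                [1, 0, 3]])

    # det(M) != 0 means M(a, b, c)^T = 0 forces a = b = c = 0: the three
    # polynomials are independent, and since dim(P_2) = 3 they are a basis.
    print(M.det())   # 1
    ```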

    Consider the following theorem.

    Theorem \(\PageIndex{4}\): Every Subspace has a Basis

    Let \(W\) be a nonzero subspace of a finite dimensional vector space \(V\). Suppose \(V\) has dimension \(n\). Then \(W\) has a basis with no more than \(n\) vectors.

    Proof

    Let \(\vec{w}_{1}\in W\) where \(\vec{w}_{1}\neq \vec{0}.\) If \(\mathrm{span}\left\{ \vec{w}_{1}\right\} =W,\) then \(\left\{ \vec{w}_{1}\right\}\) is a basis for \(W\) and we are done. Otherwise, there exists \(\vec{w}_{2}\in W\) which is not in \(\mathrm{span}\left\{ \vec{w}_{1}\right\} .\) By Lemma 9.3.2, \(\left\{ \vec{w}_{1},\vec{w}_{2}\right\}\) is a linearly independent set of vectors. If \(\mathrm{span}\left\{ \vec{w}_{1},\vec{w}_{2}\right\} =W\), then \(\left\{ \vec{w}_{1},\vec{w}_{2}\right\}\) is a basis for \(W\) and we are done. If \(\mathrm{span}\left\{ \vec{w}_{1},\vec{w}_{2}\right\} \neq W,\) then there exists \(\vec{w}_{3}\in W\) with \(\vec{w}_{3}\notin \mathrm{span}\left\{ \vec{w}_{1},\vec{w}_{2}\right\}\), and \(\left\{ \vec{w}_{1},\vec{w}_{2},\vec{w}_{3}\right\}\) is a larger linearly independent set of vectors. Continuing this way, the process must stop before \(n+1\) steps, because otherwise it would produce \(n+1\) linearly independent vectors inside the span of a set of \(n\) vectors (a basis of \(V\)), contrary to the exchange theorem, Theorem \(\PageIndex{3}\).

    If in fact the basis of \(W\) so obtained has \(n\) vectors, then it follows that \(W=V\), as the next theorem shows.

    Theorem \(\PageIndex{5}\): Subspace of Same Dimension

    Let \(V\) be a vector space of dimension \(n\) and let \(W\) be a subspace. Then \(W=V\) if and only if the dimension of \(W\) is also \(n\).

    Proof

    First suppose \(W=V.\) Then obviously the dimension of \(W=n.\)

    Now suppose that the dimension of \(W\) is \(n\). Let a basis for \(W\) be \(\left\{ \vec{w}_{1},\cdots ,\vec{w}_{n}\right\}\). If \(W\) is not equal to \(V\), then let \(\vec{v}\) be a vector of \(V\) which is not contained in \(W.\) Thus \(\vec{v}\) is not in \(\mathrm{span}\left\{ \vec{w}_{1},\cdots ,\vec{w}_{n}\right\}\) and by Lemma 9.3.2, \(\left\{ \vec{w}_{1},\cdots ,\vec{w}_{n},\vec{v}\right\}\) is linearly independent, which contradicts Theorem \(\PageIndex{3}\): it would be an independent set of \(n+1\) vectors even though each of these vectors is in a spanning set of \(n\) vectors, a basis of \(V\).

    Consider the following example.

    Example \(\PageIndex{7}\): Basis of a Subspace

    Let \(U=\left\{ A\in\mathbb{M}_{22} ~\left|~ A\left [\begin{array}{rr} 1 & 0 \\ 1 & -1 \end{array}\right ]\right. = \left [\begin{array}{rr} 1 & 1 \\ 0 & -1 \end{array}\right ] A \right\}\). Then \(U\) is a subspace of \(\mathbb{M}_{22}\). Find a basis of \(U\), and hence \(\dim(U)\).

    Solution

    Let \(A=\left [\begin{array}{rr} a & b \\ c & d \end{array}\right ] \in\mathbb{M}_{22}\). Then \[A\left [\begin{array}{rr} 1 & 0 \\ 1 & -1 \end{array}\right ] = \left [\begin{array}{rr} a & b \\ c & d \end{array}\right ] \left [\begin{array}{rr} 1 & 0 \\ 1 & -1 \end{array}\right ] =\left [\begin{array}{rr} a+b & -b \\ c+d & -d \end{array}\right ]\nonumber \] and \[\left [\begin{array}{rr} 1 & 1 \\ 0 & -1 \end{array}\right ] A = \left [\begin{array}{rr} 1 & 1 \\ 0 & -1 \end{array}\right ] \left [\begin{array}{rr} a & b \\ c & d \end{array}\right ] =\left [\begin{array}{cc} a+c & b+d \\ -c & -d \end{array}\right ].\nonumber \] If \(A\in U\), then \(\left [\begin{array}{cc} a+b & -b \\ c+d & -d \end{array}\right ]= \left [\begin{array}{cc} a+c & b+d \\ -c & -d \end{array}\right ]\).

    Equating entries leads to a system of four equations in the four variables \(a,b,c\) and \(d\).

    \[\begin{array}{ccc} a+b & = & a + c \\ -b & = & b + d \\ c + d & = & -c \\ -d & = & -d \end{array}\nonumber\]

    or

    \[\begin{array}{rcc} b - c & = & 0 \\ -2b - d & = & 0 \\ 2c + d & = & 0 \end{array}.\nonumber \]

    The solution to this system is \(a=s\), \(b=-\frac{1}{2}t\), \(c=-\frac{1}{2}t\), \(d=t\) for any \(s,t\in\mathbb{R}\), and thus \[A=\left [\begin{array}{cc} s & -\frac{t}{2} \\ -\frac{t}{2} & t \end{array}\right ] = s\left [\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right ] + t\left [\begin{array}{rr} 0 & -\frac{1}{2} \\ -\frac{1}{2} & 1 \end{array}\right ] .\nonumber \] Let \[B=\left\{ \left [\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right ], \left [\begin{array}{rr} 0 & -\frac{1}{2} \\ -\frac{1}{2} & 1 \end{array}\right ]\right\}.\nonumber \] Then \(\mathrm{span}(B)=U\), and it is routine to verify that \(B\) is an independent subset of \(\mathbb{M}_{22}\). Therefore \(B\) is a basis of \(U\), and \(\dim(U)=2\).
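
    The same basis can be produced symbolically: the condition defining \(U\) is, entrywise, a homogeneous linear system in \(a,b,c,d\), and a basis of its solution space converts back into a basis of \(U\). A sketch of this computation, assuming SymPy:

    ```python
    from sympy import Matrix, linear_eq_to_matrix, symbols

    a, b, c, d = symbols("a b c d")
    A = Matrix([[a, b], [c, d]])
    P = Matrix([[1, 0], [1, -1]])
    Q = Matrix([[1, 1], [0, -1]])

    # The defining condition of U, A P = Q A, read entrywise, is a
    # homogeneous linear system in (a, b, c, d).
    eqs = list(A * P - Q * A)
    C, _ = linear_eq_to_matrix(eqs, [a, b, c, d])

    # Each null space vector, reshaped to 2 x 2 row by row, is one of
    # the basis matrices of U.
    for v in C.nullspace():
        print(Matrix(2, 2, list(v)))
    # Prints Matrix([[1, 0], [0, 0]]) and Matrix([[0, -1/2], [-1/2, 1]])
    ```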

    The following theorem claims that a spanning set of a vector space \(V\) can be shrunk down to a basis of \(V\). Similarly, a linearly independent set within \(V\) can be enlarged to create a basis of \(V\).

    Theorem \(\PageIndex{6}\): Basis of \(V\)

    If \(V=\mathrm{span}\left\{ \vec{u}_{1},\cdots ,\vec{u}_{n}\right\}\) is a vector space, then some subset of \(\{\vec{u}_{1},\cdots ,\vec{u}_{n}\}\) is a basis for \(V.\) Also, if \(\{\vec{u}_{1},\cdots ,\vec{u}_{k}\}\subseteq V\) is linearly independent and the vector space is finite dimensional, then the set \(\{ \vec{u}_{1},\cdots ,\vec{u}_{k}\}\) can be enlarged to obtain a basis of \(V.\)

    Proof

    Let \[S=\{E\subseteq \{\vec{u}_{1},\cdots ,\vec{u}_{n}\}\text{ such that } \mathrm{span}\left\{ E\right\} =V\}.\nonumber \] For \(E\in S,\) let \(\left\vert E\right\vert\) denote the number of elements of \(E.\) Let \[m= \min \{\left\vert E\right\vert \text{ such that }E\in S\}.\nonumber \] Thus there exist vectors \[\{\vec{v}_{1},\cdots ,\vec{v}_{m}\}\subseteq \{\vec{u}_{1},\cdots , \vec{u}_{n}\}\nonumber \] such that \[\mathrm{span}\left\{ \vec{v}_{1},\cdots ,\vec{v}_{m}\right\} =V\nonumber \] and \(m\) is as small as possible for this to happen. If this set is linearly independent, it follows it is a basis for \(V\) and the theorem is proved. On the other hand, if the set is not linearly independent, then there exist scalars, \(c_{1},\cdots ,c_{m}\) such that \[\vec{0}=\sum_{i=1}^{m}c_{i}\vec{v}_{i}\nonumber \] and not all the \(c_{i}\) are equal to zero. Suppose \(c_{k}\neq 0.\) Then solve for the vector \(\vec{v}_{k}\) in terms of the other vectors. Consequently, \[V=\mathrm{span}\left\{ \vec{v}_{1},\cdots ,\vec{v}_{k-1},\vec{v} _{k+1},\cdots ,\vec{v}_{m}\right\}\nonumber \] contradicting the definition of \(m\). This proves the first part of the theorem.

    To obtain the second part, begin with \(\{\vec{u}_{1},\cdots ,\vec{u} _{k}\}\) and suppose a basis for \(V\) is \[\left\{ \vec{v}_{1},\cdots ,\vec{v}_{n}\right\}\nonumber \] If \[\mathrm{span}\left\{ \vec{u}_{1},\cdots ,\vec{u}_{k}\right\} =V,\nonumber \] then \(k=n\). If not, there exists a vector \[\vec{u}_{k+1}\notin \mathrm{span}\left\{ \vec{u}_{1},\cdots ,\vec{u} _{k}\right\}\nonumber \] Then from Lemma 9.3.2, \(\{\vec{u}_{1},\cdots ,\vec{u}_{k}, \vec{u}_{k+1}\}\) is also linearly independent. Continue adding vectors in this way until \(n\) linearly independent vectors have been obtained. Then \[\mathrm{span}\left\{ \vec{u}_{1},\cdots ,\vec{u}_{n}\right\} =V\nonumber \] because if it did not do so, there would exist \(\vec{u}_{n+1}\) as just described and \(\left\{ \vec{u}_{1},\cdots ,\vec{u}_{n+1}\right\}\) would be a linearly independent set of vectors having \(n+1\) elements. This contradicts the fact that \(\left\{ \vec{v}_{1},\cdots ,\vec{v}_{n}\right\}\) is a basis. In turn this would contradict Theorem \(\PageIndex{3}\). Therefore, this list is a basis.

    Recall Example 9.3.4 in which we added a matrix to a linearly independent set to create a larger linearly independent set. By Theorem \(\PageIndex{6}\) we can extend a linearly independent set to a basis.

    Example \(\PageIndex{8}\): Adding to a Linearly Independent Set

    Let \(S \subseteq M_{22}\) be a linearly independent set given by \[S = \left\{ \left [ \begin{array}{rr} 1 & 0 \\ 0 & 0 \end{array} \right ], \left [ \begin{array}{rr} 0 & 1 \\ 0 & 0 \end{array} \right ] \right\}\nonumber \] Enlarge \(S\) to a basis of \(M_{22}\).

    Solution

    Recall from the solution of Example 9.3.4 that the set \(R \subseteq M_{22}\) given by \[R = \left\{ \left [ \begin{array}{rr} 1 & 0 \\ 0 & 0 \end{array} \right ], \left [ \begin{array}{rr} 0 & 1 \\ 0 & 0 \end{array} \right ], \left [ \begin{array}{rr} 0 & 0 \\ 1 & 0 \end{array} \right ] \right\}\nonumber \] is also linearly independent. However this set is still not a basis for \(M_{22}\) as it is not a spanning set. In particular, \(\left [ \begin{array}{rr} 0 & 0 \\ 0 & 1 \end{array} \right ]\) is not in \(\mathrm{span} R\). Therefore, this matrix can be added to the set by Lemma 9.3.2 to obtain a new linearly independent set given by \[T = \left\{ \left [ \begin{array}{rr} 1 & 0 \\ 0 & 0 \end{array} \right ], \left [ \begin{array}{rr} 0 & 1 \\ 0 & 0 \end{array} \right ], \left [ \begin{array}{rr} 0 & 0 \\ 1 & 0 \end{array} \right ], \left [ \begin{array}{rr} 0 & 0 \\ 0 & 1 \end{array} \right ] \right\}\nonumber \]

    This set is linearly independent and now spans \(M_{22}\). Hence \(T\) is a basis.
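
    This enlargement can be mechanized: flatten each \(2\times 2\) matrix into a vector in \(\mathbb{R}^{4}\) and append a candidate matrix whenever it increases the rank of the resulting column matrix. A sketch, assuming SymPy; the helper flat is ours, not a library function:

    ```python
    from sympy import Matrix

    def flat(M):
        # Identify a 2 x 2 matrix with a column vector in R^4 (row-major).
        return Matrix(4, 1, list(M))

    # The given linearly independent set S.
    basis = [Matrix([[1, 0], [0, 0]]),
             Matrix([[0, 1], [0, 0]])]

    # Candidates to append: the four standard basis matrices of M_22.
    candidates = [Matrix(2, 2, [int(i == k) for i in range(4)])
                  for k in range(4)]

    for cand in candidates:
        cols = [flat(M) for M in basis] + [flat(cand)]
        if Matrix.hstack(*cols).rank() == len(cols):  # still independent?
            basis.append(cand)

    print(len(basis))   # 4 = dim(M_22), so the enlarged set is a basis
    ```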

    Next we consider the case where you have a spanning set and you want a subset which is a basis. The above discussion involved adding vectors to a set. The next theorem involves removing vectors.

    Theorem \(\PageIndex{7}\): Basis from a Spanning Set

    Let \(V\) be a vector space and let \(W\) be a subspace. Also suppose that \(W=\mathrm{span}\left\{ \vec{w}_{1},\cdots ,\vec{w} _{m}\right\}\). Then there exists a subset of \(\left\{ \vec{w}_{1},\cdots , \vec{w}_{m}\right\}\) which is a basis for \(W\).

    Proof

    Let \(S\) denote the set of positive integers such that for \(k\in S,\) there exists a subset of \(\left\{ \vec{w}_{1},\cdots ,\vec{w} _{m}\right\}\) consisting of exactly \(k\) vectors which is a spanning set for \(W\). Thus \(m\in S\). Pick the smallest positive integer in \(S\). Call it \(k\). Then there exists \(\left\{ \vec{u}_{1},\cdots ,\vec{u}_{k}\right\} \subseteq \left\{ \vec{w}_{1},\cdots ,\vec{w}_{m}\right\}\) such that \(span \left\{ \vec{u}_{1},\cdots ,\vec{u}_{k}\right\} =W.\) If \[\sum_{i=1}^{k}c_{i}\vec{w}_{i}=\vec{0}\nonumber \] and not all of the \(c_{i}=0,\) then you could pick \(c_{j}\neq 0\), divide by it and solve for \(\vec{u}_{j}\) in terms of the others. \[\vec{w}_{j}=\sum_{i\neq j}\left( -\frac{c_{i}}{c_{j}}\right) \vec{w}_{i}\nonumber \] Then you could delete \(\vec{w}_{j}\) from the list and have the same span. In any linear combination involving \(\vec{w}_{j}\), the linear combination would equal one in which \(\vec{w}_{j}\) is replaced with the above sum, showing that it could have been obtained as a linear combination of \(\vec{w}_{i}\) for \(i\neq j\). Thus \(k-1\in S\) contrary to the choice of \(k\). Hence each \(c_{i}=0\) and so \(\left\{ \vec{u}_{1},\cdots ,\vec{u} _{k}\right\}\) is a basis for \(W\) consisting of vectors of \(\left\{ \vec{w} _{1},\cdots ,\vec{w}_{m}\right\}\).

    Consider the following example of this concept.

    Example \(\PageIndex{9}\): Basis from a Spanning Set

    Let \(V\) be the vector space of polynomials of degree no more than 3, denoted earlier as \(\mathbb{P}_{3}\). Consider the following vectors in \(V\). \[\begin{aligned} &2x^{2}+x+1,x^{3}+4x^{2}+2x+2,2x^{3}+2x^{2}+2x+1, \\ &x^{3}+4x^{2}-3x+2,x^{3}+3x^{2}+2x+1\end{aligned}\] Then \(V\) has dimension 4 (a basis is \(\left\{ 1,x,x^{2},x^{3}\right\}\)), so these five vectors are certainly not linearly independent. Determine a linearly independent subset of these which has the same span. Determine whether this subset is a basis for \(V\).

    Solution

    Consider an isomorphism which maps \(\mathbb{R}^{4}\) to \(V\) in the obvious way: the vector with entries \(a_{0},a_{1},a_{2},a_{3}\) corresponds to the polynomial \(a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}\). Thus \[\left [ \begin{array}{c} 1 \\ 1 \\ 2 \\ 0 \end{array} \right ]\nonumber \] corresponds to \(2x^{2}+x+1\) through the use of this isomorphism. Then corresponding to the above vectors in \(V\) we would have the following vectors in \(\mathbb{R}^{4}.\) \[\left [ \begin{array}{c} 1 \\ 1 \\ 2 \\ 0 \end{array} \right ] ,\left [ \begin{array}{c} 2 \\ 2 \\ 4 \\ 1 \end{array} \right ] ,\left [ \begin{array}{c} 1 \\ 2 \\ 2 \\ 2 \end{array} \right ] ,\left [ \begin{array}{r} 2 \\ -3 \\ 4 \\ 1 \end{array} \right ] ,\left [ \begin{array}{c} 1 \\ 2 \\ 3 \\ 1 \end{array} \right ]\nonumber \] Now if we obtain a subset of these which has the same span but which is linearly independent, then the corresponding vectors from \(V\) will also be linearly independent. If there are four in the list, then the resulting vectors from \(V\) must be a basis for \(V\). The reduced row-echelon form for the matrix which has the above vectors as columns is \[\left [ \begin{array}{rrrrr} 1 & 0 & 0 & -15 & 0 \\ 0 & 1 & 0 & 11 & 0 \\ 0 & 0 & 1 & -5 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{array} \right ]\nonumber \] Therefore, a basis for \(V\) consists of the vectors \[\begin{aligned} &2x^{2}+x+1,x^{3}+4x^{2}+2x+2,2x^{3}+2x^{2}+2x+1, \\ &x^{3}+3x^{2}+2x+1.\end{aligned}\] Note how this is a subset of the original set of vectors. If there had been only three pivot columns in this matrix, then we would not have had a basis for \(V\), but we would at least have obtained a linearly independent subset of the original set of vectors in this way.

    Note also that, since all linear relations are preserved by an isomorphism, \[\begin{aligned} &-15\left( 2x^{2}+x+1\right) +11\left( x^{3}+4x^{2}+2x+2\right) +\left( -5\right) \left( 2x^{3}+2x^{2}+2x+1\right) \\ =&x^{3}+4x^{2}-3x+2\end{aligned}\]
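
    The row reduction in this example is easy to reproduce. A minimal sketch, assuming SymPy, that recovers the pivot columns and the dependency on the fourth vector:

    ```python
    from sympy import Matrix

    # Columns: (constant, x, x^2, x^3) coordinates of the five
    # polynomials, via the isomorphism used in the solution.
    M = Matrix([[1, 2, 1,  2, 1],
                [1, 2, 2, -3, 2],
                [2, 4, 2,  4, 3],
                [0, 1, 2,  1, 1]])

    R, pivots = M.rref()
    print(pivots)       # (0, 1, 2, 4): polynomials 1, 2, 3 and 5 give a basis
    print(R.col(3).T)   # Matrix([[-15, 11, -5, 0]]): column 4's dependency
    ```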

    Consider the following example.

    Example \(\PageIndex{10}\): Shrinking a Spanning Set

    Consider the set \(S \subseteq \mathbb{P}_2\) given by \[S = \left\{ 1, x, x^2, x^2 + 1 \right\}\nonumber \] Show that \(S\) spans \(\mathbb{P}_2\), then remove vectors from \(S\) until the resulting set is a basis.

    Solution

    First we need to show that \(S\) spans \(\mathbb{P}_2\). Let \(ax^2 + bx + c\) be an arbitrary polynomial in \(\mathbb{P}_2\). Write \[ax^2 + bx + c = r(1) + s(x) + t(x^2) + u (x^2 + 1)\nonumber \] Then, \[\begin{aligned} ax^2 +bx + c &= r(1) + s(x) + t(x^2) + u (x^2 + 1) \\ &= (t+u) x^2 + sx + (r+u) \end{aligned}\]

    It follows that \[\begin{aligned} a &= t + u \\ b &= s \\ c &= r + u \end{aligned}\]

    Clearly a solution exists for all \(a,b,c\) (for instance, take \(u = 0\), \(t = a\), \(s = b\), \(r = c\)), and so \(S\) is a spanning set for \(\mathbb{P}_2\). By Theorem \(\PageIndex{6}\), some subset of \(S\) is a basis for \(\mathbb{P}_2\).

    Recall that a basis must be both a spanning set and a linearly independent set. Therefore we must remove a vector from \(S\) keeping this in mind. Suppose we remove \(x\) from \(S\). The resulting set would be \(\left\{ 1, x^2, x^2 + 1 \right\}\). This set is clearly linearly dependent (and also does not span \(\mathbb{P}_2\)) and so is not a basis.

    Suppose we remove \(x^2 + 1\) from \(S\). The resulting set is \(\left\{ 1, x, x^2 \right\}\) which is both linearly independent and spans \(\mathbb{P}_2\). Hence this is a basis for \(\mathbb{P}_2\). Note that removing any one of \(1, x^2\), or \(x^2 + 1\) will result in a basis.
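
    The choice of which vectors to keep can also be automated: the pivot columns of the coefficient matrix of \(S\) identify a subset that is a basis. A sketch, assuming SymPy:

    ```python
    from sympy import Matrix

    # Columns: (constant, x, x^2) coordinates of 1, x, x^2, x^2 + 1.
    M = Matrix([[1, 0, 0, 1],
                [0, 1, 0, 0],
                [0, 0, 1, 1]])

    _, pivots = M.rref()
    print(pivots)   # (0, 1, 2): keeping 1, x, x^2 yields a basis
    ```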

    Now the following is a fundamental result about subspaces.

    Theorem \(\PageIndex{8}\): Basis of a Vector Space

    Let \(V\) be a finite dimensional vector space and let \(W\) be a non-zero subspace. Then \(W\) has a basis. That is, there exists a linearly independent set of vectors \(\left\{ \vec{w}_{1},\cdots ,\vec{w}_{r}\right\}\) such that \[\mathrm{span}\left\{ \vec{w}_{1},\cdots ,\vec{w}_{r}\right\} =W\nonumber \] Also, if \(\left\{ \vec{w}_{1},\cdots ,\vec{w}_{s}\right\}\) is a linearly independent set of vectors in \(W\), then \(W\) has a basis of the form \(\left\{ \vec{w}_{1},\cdots ,\vec{w}_{s},\cdots ,\vec{w}_{r}\right\}\) for \(r\geq s\).

    Proof

    Let the dimension of \(V\) be \(n\). Pick \(\vec{w}_{1}\in W\) where \(\vec{w}_{1}\neq \vec{0}.\) Suppose \(\vec{w}_{1},\cdots ,\vec{w}_{s}\) have been chosen such that \(\left\{ \vec{w}_{1},\cdots ,\vec{w}_{s}\right\}\) is linearly independent. If \(\mathrm{span}\left\{ \vec{w}_{1},\cdots ,\vec{w}_{s}\right\} =W,\) stop: you have the desired basis. Otherwise, there exists \(\vec{w}_{s+1}\in W\) with \(\vec{w}_{s+1}\notin \mathrm{span}\left\{ \vec{w}_{1},\cdots ,\vec{w}_{s}\right\}\), and \(\left\{ \vec{w}_{1},\cdots ,\vec{w}_{s},\vec{w}_{s+1}\right\}\) is linearly independent. Continue this way until the process stops. It must stop, since otherwise you could obtain a linearly independent set of vectors having more than \(n\) vectors, which is impossible.

    The last claim is proved by following the above procedure starting with \(\left\{ \vec{w}_{1},\cdots ,\vec{w}_{s}\right\}\) as above.

    This also proves the following corollary. Let \(V\) play the role of \(W\) in the above theorem and begin with a basis for \(W\), enlarging it to form a basis for \(V\) as discussed above.

    Corollary \(\PageIndex{3}\): Basis Extension

    Let \(W\) be any non-zero subspace of a vector space \(V\). Then every basis of \(W\) can be extended to a basis for \(V\).

    Consider the following example.

    Example \(\PageIndex{11}\): Basis Extension

    Let \(V=\mathbb{R}^{4}\) and let \[W=\mathrm{span}\left\{ \left [ \begin{array}{c} 1 \\ 0 \\ 1 \\ 1 \end{array} \right ] ,\left [ \begin{array}{c} 0 \\ 1 \\ 0 \\ 1 \end{array} \right ] \right\}\nonumber \] Extend this basis of \(W\) to a basis of \(V\).

    Solution

    An easy way to do this is to take the reduced row-echelon form of the matrix \[\left [ \begin{array}{cccccc} 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 \end{array} \right ] \label{vectorspaceeq1}\] Note how the given vectors were placed as the first two and then the matrix was extended in such a way that it is clear that the span of the columns of this matrix yield all of \(\mathbb{R}^{4}\). Now determine the pivot columns. The reduced row-echelon form is \[\left [ \begin{array}{rrrrrr} 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & -1 & 1 \\ 0 & 0 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & 1 & -1 \end{array} \right ] \label{vectorspaceeq2}\] These are \[\left [ \begin{array}{c} 1 \\ 0 \\ 1 \\ 1 \end{array} \right ] ,\left [ \begin{array}{c} 0 \\ 1 \\ 0 \\ 1 \end{array} \right ] ,\left [ \begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array} \right ] ,\left [ \begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array} \right ]\nonumber \] and now this is an extension of the given basis for \(W\) to a basis for \(\mathbb{R}^{4}\).

    Why does this work? The columns of \(\eqref{vectorspaceeq1}\) obviously span \(\mathbb{R}^{4}\), and the span of the pivot columns, here the first four, is the same as the span of all six columns. Hence the four pivot columns form a basis of \(\mathbb{R}^{4}\) which contains the given basis of \(W\).
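
    The same computation is straightforward to reproduce. A minimal sketch, assuming SymPy, which appends the standard basis and reads off the pivot columns:

    ```python
    from sympy import Matrix, eye

    # The given basis of W, as columns of a 4 x 2 matrix.
    W = Matrix([[1, 0],
                [0, 1],
                [1, 0],
                [1, 1]])

    # Append the standard basis of R^4 so the columns certainly span R^4,
    # then keep the pivot columns.
    M = Matrix.hstack(W, eye(4))
    _, pivots = M.rref()
    print(pivots)              # (0, 1, 2, 3)
    print(M[:, list(pivots)])  # the extended basis: w1, w2, e1, e2
    ```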


    This page titled 9.4: Subspaces and Basis is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Ken Kuttler (Lyryx) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
