Up to this point, we have had no guarantee that an arbitrary vector space has a basis—and hence no guarantee that one can speak at all of the dimension of \(V\). However, Theorem \(\PageIndex{1}\) will show that any space that is spanned by a finite set of vectors has a (finite) basis: The proof requires the following basic lemma, of interest in itself, that gives a way to enlarge a given independent set of vectors.
Lemma \(\PageIndex{1}\): Independent Lemma
Let \(\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{k}\}\) be an independent set of vectors in a vector space \(V\). If \(\mathbf{u} \in V\) but \(\mathbf{u} \notin span \;\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{k}\}\), then \(\{\mathbf{u}, \mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{k}\}\) is also independent.
Proof
Let \(t\mathbf{u} + t_{1}\mathbf{v}_{1} + t_{2}\mathbf{v}_{2} + \dots + t_{k}\mathbf{v}_{k} = \mathbf{0}\); we must show that all the coefficients are zero. First, \(t = 0\): otherwise \(\mathbf{u} = - \frac{t_1}{t}\mathbf{v}_1 - \frac{t_2}{t}\mathbf{v}_2 - \dots - \frac{t_k}{t}\mathbf{v}_k\) would lie in \(span \;\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{k}\}\), contrary to our assumption. But then \(t_{1}\mathbf{v}_{1} + t_{2}\mathbf{v}_{2} + \dots + t_{k}\mathbf{v}_{k} = \mathbf{0}\), so the rest of the \(t_{i}\) are zero by the independence of \(\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{k}\}\). This is what we wanted.
\(\square\)
Note that the converse of Lemma \(\PageIndex{1}\) is also true: if \(\{\mathbf{u}, \mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{k}\}\) is independent, then \(\mathbf{u}\) is not in \(span \;\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{k}\}\).
As an illustration, suppose that \(\{\mathbf{v}_{1}, \mathbf{v}_{2}\}\) is independent in \(\mathbb{R}^3\). Then \(\mathbf{v}_{1}\) and \(\mathbf{v}_{2}\) are not parallel, so \(span \;\{\mathbf{v}_{1}, \mathbf{v}_{2}\}\) is a plane through the origin (shaded in the diagram). By Lemma \(\PageIndex{1}\), \(\mathbf{u}\) is not in this plane if and only if \(\{\mathbf{u}, \mathbf{v}_{1}, \mathbf{v}_{2}\}\) is independent.
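The hypothesis of Lemma \(\PageIndex{1}\) can be checked numerically by comparing ranks: \(\mathbf{u}\) lies in \(span \;\{\mathbf{v}_{1}, \mathbf{v}_{2}\}\) exactly when adjoining it does not increase the rank. A minimal sketch with NumPy, using hypothetical vectors in \(\mathbb{R}^3\):

```python
import numpy as np

# Hypothetical illustration of Lemma 1 in R^3: v1, v2 independent,
# u chosen off the plane they span (its third coordinate is nonzero).
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
u = np.array([1.0, 2.0, 3.0])

# u lies in span{v1, v2} exactly when adjoining it does not raise the rank.
rank_vs = np.linalg.matrix_rank(np.column_stack([v1, v2]))
rank_all = np.linalg.matrix_rank(np.column_stack([v1, v2, u]))

in_span = (rank_all == rank_vs)     # False: u is outside the plane
independent = (rank_all == 3)       # True: {u, v1, v2} is independent
```

Note that `matrix_rank` uses a floating-point tolerance, so this is a numerical test rather than an exact one.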

Definition: Finite Dimensional and Infinite Dimensional Vector Spaces
A vector space \(V\) is called finite dimensional if it is spanned by a finite set of vectors. Otherwise, \(V\) is called infinite dimensional.
Thus the zero vector space \(\{\mathbf{0}\}\) is finite dimensional because \(\{\mathbf{0}\}\) is a spanning set.
Lemma \(\PageIndex{2}\)
Let \(V\) be a finite dimensional vector space. If \(U\) is any subspace of \(V\), then any independent subset of \(U\) can be enlarged to a finite basis of \(U\).
Proof
Suppose that \(I\) is an independent subset of \(U\). If \(span \; I = U\) then \(I\) is already a basis of \(U\). If \(span \; I \neq U\), choose \(\mathbf{u}_{1} \in U\) such that \(\mathbf{u}_{1} \notin span \; I\). Hence the set \(I \cup \{\mathbf{u}_{1}\}\) is independent by Lemma \(\PageIndex{1}\). If \(span \;(I \cup \{\mathbf{u}_{1}\}) = U\) we are done; otherwise choose \(\mathbf{u}_{2} \in U\) such that \(\mathbf{u}_{2} \notin span \;(I \cup \{\mathbf{u}_{1}\})\). Hence \(I \cup \{\mathbf{u}_{1}, \mathbf{u}_{2}\}\) is independent, and the process continues. We claim that a basis of \(U\) will be reached eventually. Indeed, if no basis of \(U\) is ever reached, the process creates arbitrarily large independent sets in \(V\). But this is impossible by the fundamental theorem because \(V\) is finite dimensional and so is spanned by a finite set of vectors.
\(\square\)
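The proof of Lemma \(\PageIndex{2}\) is really an algorithm: keep adjoining vectors that lie outside the current span. A sketch of that greedy procedure, using a hypothetical helper and assuming the input list `independent` really is independent:

```python
import numpy as np

def extend_to_basis(independent, spanning):
    """Enlarge an independent list of vectors to a basis of the space
    they span together with `spanning`, mirroring the proof of Lemma 2:
    adjoin any vector outside the current span (Lemma 1 then keeps the
    enlarged set independent)."""
    basis = [np.asarray(v, float) for v in independent]
    for u in spanning:
        candidate = basis + [np.asarray(u, float)]
        # Full column rank means u was outside span(basis): keep it.
        if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
            basis = candidate
    return basis

# Hypothetical run: enlarge {(1,1,0)} to a basis of R^3 using the
# standard basis as the spanning set.
B = extend_to_basis([[1, 1, 0]], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
```

The process terminates for the same reason as in the proof: the candidate sets are independent, so their size can never exceed the dimension of the ambient space.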
Theorem \(\PageIndex{1}\)
Let \(V\) be a finite dimensional vector space spanned by \(m\) vectors.
1. \(V\) has a finite basis, and \(dim \; V \leq m\).
2. Every independent set of vectors in \(V\) can be enlarged to a basis of \(V\) by adding vectors from any fixed basis of \(V\).
3. If \(U\) is a subspace of \(V\), then
    1. \(U\) is finite dimensional and \(dim \; U \leq dim \; V\).
    2. If \(dim \; U = dim \; V\), then \(U = V\).
Proof
1. If \(V = \{\mathbf{0}\}\), then \(V\) has an empty basis and \(dim \; V = 0 \leq m\). Otherwise, let \(\mathbf{v} \neq \mathbf{0}\) be a vector in \(V\). Then \(\{\mathbf{v}\}\) is independent, so (1) follows from Lemma \(\PageIndex{2}\) with \(U = V\).
2. We refine the proof of Lemma \(\PageIndex{2}\). Fix a basis \(B\) of \(V\) and let \(I\) be an independent subset of \(V\). If \(span \; I = V\), then \(I\) is already a basis of \(V\). If \(span \; I \neq V\), then \(B\) is not contained in \(span \; I\) (because \(B\) spans \(V\)), so choose \(\mathbf{b}_{1} \in B\) such that \(\mathbf{b}_{1} \notin span \; I\). Then the set \(I \cup \{\mathbf{b}_{1}\}\) is independent by Lemma \(\PageIndex{1}\). If \(span \;(I \cup \{\mathbf{b}_{1}\}) = V\), we are done; otherwise a similar argument shows that \(I \cup \{\mathbf{b}_{1}, \mathbf{b}_{2}\}\) is independent for some \(\mathbf{b}_{2} \in B\). Continue this process. As in the proof of Lemma \(\PageIndex{2}\), a basis of \(V\) will be reached eventually.
3. For the statements about subspaces:
    1. This is clear if \(U = \{\mathbf{0}\}\). Otherwise, let \(\mathbf{u} \neq \mathbf{0}\) in \(U\). Then \(\{\mathbf{u}\}\) can be enlarged to a finite basis \(B\) of \(U\) by Lemma \(\PageIndex{2}\), proving that \(U\) is finite dimensional. But \(B\) is independent in \(V\), so \(dim \; U \leq dim \; V\) by the fundamental theorem.
    2. This is clear if \(U = \{\mathbf{0}\}\) because \(V\) has a basis; otherwise, it follows from (2).

\(\square\)
Theorem \(\PageIndex{1}\) shows that a vector space \(V\) is finite dimensional if and only if it has a finite basis (possibly empty), and that every subspace of a finite dimensional space is again finite dimensional.
Example \(\PageIndex{1}\)
Enlarge the independent set \(D = \left\{ \left[ \begin{array}{rr} 1 & 1 \\ 1 & 0 \end{array} \right], \left[ \begin{array}{rr} 0 & 1 \\ 1 & 1 \end{array} \right], \left[ \begin{array}{rr} 1 & 0 \\ 1 & 1 \end{array} \right] \right\}\) to a basis of \(\mathbf{M}_{22}\).
Solution
The standard basis of \(\mathbf{M}_{22}\) is \(\left\{ \left[ \begin{array}{rr} 1 & 0 \\ 0 & 0 \end{array} \right], \left[ \begin{array}{rr} 0 & 1 \\ 0 & 0 \end{array} \right], \left[ \begin{array}{rr} 0 & 0 \\ 1 & 0 \end{array} \right], \left[ \begin{array}{rr} 0 & 0 \\ 0 & 1 \end{array} \right] \right\}\), so including one of these matrices in \(D\) will produce a basis by Theorem \(\PageIndex{1}\). In fact, including any one of them produces an independent set (verify), and hence a basis. Of course these are not the only possibilities; for example, including \(\left[ \begin{array}{rr} 1 & 1 \\ 0 & 1 \end{array} \right]\) works as well.
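The "(verify)" step can be sketched by flattening each \(2 \times 2\) matrix to a vector in \(\mathbb{R}^4\), since independence in \(\mathbf{M}_{22}\) is independence of these vectors. A hypothetical check with NumPy, using the first standard basis matrix:

```python
import numpy as np

# The three matrices of D, plus the standard basis matrix E11.
D = [np.array([[1, 1], [1, 0]]),
     np.array([[0, 1], [1, 1]]),
     np.array([[1, 0], [1, 1]])]
E11 = np.array([[1, 0], [0, 0]])

# Flatten each matrix to a vector in R^4; rank 4 means the enlarged
# set is independent, hence a basis of M22.
M = np.column_stack([A.flatten() for A in D + [E11]])
rank = np.linalg.matrix_rank(M)   # 4
```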
Example \(\PageIndex{2}\)
Find a basis of \(\mathbf{P}_{3}\) containing the independent set \(\{1 + x, 1 + x^{2}\}\).
Solution
The standard basis of \(\mathbf{P}_{3}\) is \(\{1, x, x^{2}, x^{3}\}\), so including two of these vectors will do. If we use \(1\) and \(x^{3}\), the result is \(\{1, 1 + x, 1 + x^{2}, x^{3}\}\). This is independent because the polynomials have distinct degrees (Example 6.3.4), and so is a basis by Theorem \(\PageIndex{1}\). Of course, including \(\{1, x\}\) or \(\{1, x^{2}\}\) would not work!
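Writing each polynomial as its coefficient vector \((a_0, a_1, a_2, a_3)\) turns both claims of the solution into rank computations; a hypothetical NumPy check:

```python
import numpy as np

# Coefficient vectors (constant, x, x^2, x^3) for {1, 1+x, 1+x^2, x^3}.
good = np.column_stack([[1, 0, 0, 0],    # 1
                        [1, 1, 0, 0],    # 1 + x
                        [1, 0, 1, 0],    # 1 + x^2
                        [0, 0, 0, 1]])   # x^3
rank_good = np.linalg.matrix_rank(good)  # 4: a basis of P3

# Including {1, x} instead fails: x = (1 + x) - 1 is a dependence.
bad = np.column_stack([[1, 0, 0, 0],     # 1
                       [0, 1, 0, 0],     # x
                       [1, 1, 0, 0],     # 1 + x
                       [1, 0, 1, 0]])    # 1 + x^2
rank_bad = np.linalg.matrix_rank(bad)    # 3: dependent
```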
Example \(\PageIndex{3}\)
Show that the space \(\mathbf{P}\) of all polynomials is infinite dimensional.
Solution
For each \(n \geq 1\), \(\mathbf{P}\) has a subspace \(\mathbf{P}_{n}\) of dimension \(n + 1\). Suppose \(\mathbf{P}\) is finite dimensional, say \(dim \; \mathbf{P} = m\). Then \(dim \; \mathbf{P}_{n} \leq dim \; \mathbf{P}\) by Theorem \(\PageIndex{1}\), that is, \(n + 1 \leq m\). This is impossible since \(n\) is arbitrary, so \(\mathbf{P}\) must be infinite dimensional.
The next example illustrates how (2) of Theorem \(\PageIndex{1}\) can be used.
Example \(\PageIndex{4}\)
If \(\mathbf{c}_{1}, \mathbf{c}_{2}, \dots, \mathbf{c}_{k}\) are independent columns in \(\mathbb{R}^n\), show that they are the first \(k\) columns in some invertible \(n \times n\) matrix.
Solution
By Theorem \(\PageIndex{1}\), expand \(\{\mathbf{c}_{1}, \mathbf{c}_{2}, \dots, \mathbf{c}_{k}\}\) to a basis \(\{\mathbf{c}_{1}, \mathbf{c}_{2}, \dots, \mathbf{c}_{k}, \mathbf{c}_{k+1}, \dots, \mathbf{c}_{n}\}\) of \(\mathbb{R}^n\). Then the matrix \(A = \left[ \begin{array}{ccccccc} \mathbf{c}_{1} & \mathbf{c}_{2} & \dots & \mathbf{c}_{k} & \mathbf{c}_{k+1} & \dots & \mathbf{c}_{n} \end{array} \right]\) with this basis as its columns is an \(n \times n\) matrix and it is invertible by Theorem 5.2.3.
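One way to carry out this expansion in practice is to adjoin standard basis vectors of \(\mathbb{R}^n\) that stay outside the span of the columns kept so far, as in part (2) of Theorem \(\PageIndex{1}\). A sketch with a hypothetical helper:

```python
import numpy as np

def complete_to_invertible(cols):
    """Append standard basis vectors of R^n that lie outside the span
    of the columns kept so far, until n independent columns remain."""
    cols = [np.asarray(c, float) for c in cols]
    n = cols[0].size
    for e in np.eye(n):
        if len(cols) == n:
            break
        # Keep e only if it enlarges the independent set (Lemma 1).
        if np.linalg.matrix_rank(np.column_stack(cols + [e])) == len(cols) + 1:
            cols.append(e)
    return np.column_stack(cols)

# Hypothetical run: two independent columns in R^4 become the first
# two columns of an invertible 4 x 4 matrix.
A = complete_to_invertible([[1, 1, 0, 0], [0, 1, 1, 0]])
invertible = abs(np.linalg.det(A)) > 1e-9
```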
Theorem \(\PageIndex{2}\)
Let \(U\) and \(W\) be subspaces of the finite dimensional space \(V\).
- If \(U \subseteq W\), then \(dim \; U \leq dim \; W\).
- If \(U \subseteq W\) and \(dim \; U = dim \; W\), then \(U = W\).
Proof
Since \(W\) is finite dimensional, (1) follows by taking \(V = W\) in part (3) of Theorem \(\PageIndex{1}\). Now assume \(dim \; U = dim \; W = n\), and let \(B\) be a basis of \(U\). Then \(B\) is an independent set in \(W\). If \(U \neq W\), then \(span \; B \neq W\), so \(B\) can be extended to an independent set of \(n + 1\) vectors in \(W\) by Lemma \(\PageIndex{1}\). This contradicts the fundamental theorem (Theorem 6.3.2) because \(W\) is spanned by \(dim \; W = n\) vectors. Hence \(U = W\), proving (2).
\(\square\)
Theorem \(\PageIndex{2}\) is very useful. This was illustrated in Example 5.2.13 for \(\mathbb{R}^2\) and \(\mathbb{R}^3\); here is another example.
Example \(\PageIndex{5}\)
If \(a\) is a number, let \(W\) denote the subspace of all polynomials in \(\mathbf{P}_{n}\) that have \(a\) as a root:
\[W = \{p(x) \mid p(x) \in \mathbf{P}_n \mbox{ and } p(a) = 0 \} \nonumber \]
Show that \(\{(x - a), (x - a)^{2}, \dots, (x - a)^{n}\}\) is a basis of \(W\).
Solution
Observe first that \((x - a), (x - a)^2, \dots, (x - a)^n\) are members of \(W\), and that they are independent because they have distinct degrees (Example 6.3.4). Write
\[U = span \;\{(x - a), (x - a)^2, \dots, (x - a)^n \} \nonumber \]
Then we have \(U \subseteq W \subseteq \mathbf{P}_{n}\), \(dim \; U = n\), and \(dim \; \mathbf{P}_{n} = n + 1\). Hence \(n \leq dim \; W \leq n + 1\) by Theorem \(\PageIndex{2}\). Since \(dim \; W\) is an integer, we must have \(dim \; W = n\) or \(dim \; W = n + 1\). But then \(W = U\) or \(W = \mathbf{P}_{n}\), again by Theorem \(\PageIndex{2}\). Because \(W \neq \mathbf{P}_{n}\) (the constant polynomial \(1\) does not vanish at \(a\)), it follows that \(W = U\), as required.
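For a concrete instance, take \(a = 2\) and \(n = 3\); coefficient vectors make both the independence claim and membership in \(U\) checkable. A hypothetical NumPy sketch:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Hypothetical instance: a = 2, n = 3; coefficients are listed in the
# order (constant, x, x^2, x^3).
a, n = 2.0, 3

cols = []
for k in range(1, n + 1):
    c = P.polypow([-a, 1.0], k)                  # coefficients of (x - a)^k
    cols.append(np.pad(c, (0, n + 1 - len(c))))  # pad up to degree n
B = np.column_stack(cols)                        # rank n: independent

# p(x) = (x - 2)(x^2 + 1) vanishes at 2, so it should lie in span(B).
p = P.polymul([-a, 1.0], [1.0, 0.0, 1.0])
coeffs = np.linalg.lstsq(B, p, rcond=None)[0]
in_span = np.allclose(B @ coeffs, p)             # True
```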
A set of vectors is called dependent if it is not independent, that is if some nontrivial linear combination vanishes. The next result is a convenient test for dependence.
Lemma \(\PageIndex{3}\): Dependent Lemma
A set \(D = \{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{k}\}\) of vectors in a vector space V is dependent if and only if some vector in \(D\) is a linear combination of the others.
Proof
Let \(\mathbf{v}_{2}\) (say) be a linear combination of the rest: \(\mathbf{v}_{2} = s_{1}\mathbf{v}_{1} + s_{3}\mathbf{v}_{3} + \dots + s_{k}\mathbf{v}_{k}\). Then
\[s_{1}\mathbf{v}_{1} + (-1)\mathbf{v}_{2} + s_{3}\mathbf{v}_{3} + \dots + s_{k}\mathbf{v}_{k} = \mathbf{0} \nonumber \]
is a nontrivial linear combination that vanishes, so \(D\) is dependent. Conversely, if \(D\) is dependent, let \(t_{1}\mathbf{v}_{1} + t_{2}\mathbf{v}_{2} + \dots + t_{k}\mathbf{v}_{k} = \mathbf{0}\) where some coefficient is nonzero. If (say) \(t_{2} \neq 0\), then \(\mathbf{v}_2 = - \frac{t_1}{t_2}\mathbf{v}_1 - \frac{t_3}{t_2}\mathbf{v}_3 - \dots - \frac{t_k}{t_2}\mathbf{v}_k\) is a linear combination of the others.
\(\square\)
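Lemma \(\PageIndex{3}\) suggests a computational test: a set of \(k\) column vectors is dependent exactly when its rank is below \(k\), and solving a linear system recovers one vector as a combination of the others. A hypothetical sketch:

```python
import numpy as np

# Hypothetical dependent set in R^3: v3 = v1 + 2*v2 by construction.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2
V = np.column_stack([v1, v2, v3])

# Rank below the number of vectors detects dependence (Lemma 3).
dependent = np.linalg.matrix_rank(V) < 3

# Recover the combination by solving [v1 v2] s = v3.
s = np.linalg.lstsq(V[:, :2], v3, rcond=None)[0]
recovered = np.allclose(V[:, :2] @ s, v3)   # s is approximately (1, 2)
```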
Lemma \(\PageIndex{1}\) gives a way to enlarge independent sets to a basis; by contrast, the next result shows that spanning sets can be cut down to a basis.
Theorem \(\PageIndex{3}\)
Let \(V\) be a finite dimensional vector space. Any spanning set for \(V\) can be cut down (by deleting vectors) to a basis of \(V\).
Proof
Since \(V\) is finite dimensional, it has a finite spanning set \(S\). Among all spanning sets contained in \(S\), choose \(S_{0}\) containing the smallest number of vectors. It suffices to show that \(S_{0}\) is independent (then \(S_{0}\) is a basis, proving the theorem). Suppose, on the contrary, that \(S_{0}\) is not independent. Then, by Lemma \(\PageIndex{3}\), some vector \(\mathbf{u} \in S_{0}\) is a linear combination of the set \(S_{1} = S_{0} \setminus \{\mathbf{u}\}\) of vectors in \(S_{0}\) other than \(\mathbf{u}\). It follows that \(span \; S_{0} = span \; S_{1}\), that is, \(V = span \; S_{1}\). But \(S_{1}\) has fewer elements than \(S_{0}\) so this contradicts the choice of \(S_{0}\). Hence \(S_{0}\) is independent after all.
\(\square\)
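In practice the minimal-spanning-set argument can be replaced by a single greedy pass that discards each vector already in the span of those kept; a hypothetical sketch:

```python
import numpy as np

def cut_down_to_basis(spanning):
    """Keep a vector only if it is independent of the vectors kept so
    far; the survivors span the same space and are independent, so
    they form a basis of the span."""
    basis = []
    for v in spanning:
        candidate = basis + [np.asarray(v, float)]
        if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
            basis = candidate
    return basis

# Hypothetical redundant spanning set of a plane in R^3: the second
# and fourth vectors are combinations of the others.
B = cut_down_to_basis([[1, 0, 1], [2, 0, 2], [0, 1, 0], [1, 1, 1]])
```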
Note that, with Theorem \(\PageIndex{1}\), the preceding result completes the promised proof of Theorem 5.2.6 for the case \(V = \mathbb{R}^n\).
Example \(\PageIndex{6}\)
Find a basis of \(\mathbf{P}_{3}\) in the spanning set \(S = \{1, x + x^{2}, 2x - 3x^{2}, 1 + 3x - 2x^{2}, x^{3}\}\).
Solution
Since \(dim \; \mathbf{P}_{3} = 4\), we must eliminate one polynomial from \(S\). It cannot be \(x^{3}\) because the span of the rest of \(S\) is contained in \(\mathbf{P}_{2}\). But eliminating \(1 + 3x - 2x^{2}\) does leave a basis (verify). Note that \(1 + 3x - 2x^{2}\) is the sum of the first three polynomials in \(S\).
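Both claims of the solution, the dependence relation and the "(verify)" step, reduce to arithmetic on coefficient vectors; a hypothetical check:

```python
import numpy as np

# Coefficient vectors (constant, x, x^2, x^3) for the members of S.
one = [1, 0, 0, 0]
p1 = [0, 1, 1, 0]     # x + x^2
p2 = [0, 2, -3, 0]    # 2x - 3x^2
p3 = [1, 3, -2, 0]    # 1 + 3x - 2x^2
x3 = [0, 0, 0, 1]     # x^3

# p3 is the sum of the first three polynomials, so dropping it is safe.
is_sum = np.allclose(np.add.reduce([one, p1, p2]), p3)

# The remaining four polynomials have rank 4: a basis of P3.
rank = np.linalg.matrix_rank(np.column_stack([one, p1, p2, x3]))
```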
Theorems \(\PageIndex{1}\) and \(\PageIndex{3}\) have other useful consequences.
Theorem \(\PageIndex{4}\)
Let \(V\) be a vector space with \(dim \; V = n\), and suppose \(S\) is a set of exactly \(n\) vectors in \(V\). Then \(S\) is independent if and only if \(S\) spans \(V\).
Proof
Assume first that \(S\) is independent. By Theorem \(\PageIndex{1}\), \(S\) is contained in a basis \(B\) of \(V\). Hence \(|S| = n = |B|\) so, since \(S \subseteq B\), it follows that \(S = B\). In particular \(S\) spans \(V\).
Conversely, assume that \(S\) spans \(V\), so \(S\) contains a basis \(B\) by Theorem \(\PageIndex{3}\). Again \(|S| = n = |B|\) so, since \(S \supseteq B\), it follows that \(S = B\). Hence \(S\) is independent.
\(\square\)
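For \(V = \mathbb{R}^n\), Theorem \(\PageIndex{4}\) says that for the \(n\) columns of a square matrix, independence and spanning coincide, and both amount to a nonzero determinant; a hypothetical check:

```python
import numpy as np

# Hypothetical: 3 vectors in R^3 as the columns of M. By Theorem 4,
# they are independent if and only if they span R^3, and both
# conditions amount to M having full rank.
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

full_rank = np.linalg.matrix_rank(M) == 3   # independent AND spanning
det_check = abs(np.linalg.det(M)) > 1e-9    # equivalent invertibility test
```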
One of independence or spanning is often easier to establish than the other when showing that a set of vectors is a basis. For example if \(V = \mathbb{R}^n\) it is easy to check whether a subset \(S\) of \(\mathbb{R}^n\) is orthogonal (hence independent) but checking spanning can be tedious. Here are three more examples.
Example \(\PageIndex{7}\)
Consider the set \(S = \{p_{0}(x), p_{1}(x), \dots, p_{n}(x)\}\) of polynomials in \(\mathbf{P}_{n}\). If \(\deg p_{k}(x) = k\) for each \(k\), show that \(S\) is a basis of \(\mathbf{P}_{n}\).
Solution
The set \(S\) is independent because the degrees are distinct (Example 6.3.4). Hence \(S\) is a basis of \(\mathbf{P}_{n}\) by Theorem \(\PageIndex{4}\) because \(dim \; \mathbf{P}_{n} = n + 1\).
Example \(\PageIndex{8}\)
Let \(V\) denote the space of all symmetric \(2 \times 2\) matrices. Find a basis of \(V\) consisting of invertible matrices.
Solution
We know that \(dim \; V = 3\) (Example 6.3.11), so what is needed is a set of three invertible, symmetric matrices that (using Theorem \(\PageIndex{4}\)) is either independent or spans \(V\). The set \(\left\{ \left[ \begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array} \right], \left[ \begin{array}{rr} 1 & 0 \\ 0 & -1 \end{array} \right], \left[ \begin{array}{rr} 0 & 1 \\ 1 & 0 \end{array} \right] \right\}\) is independent (verify) and so is a basis of the required type.
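The "(verify)" step, together with invertibility, can be checked by treating each symmetric matrix as the triple \((a, b, d)\) of its free entries; a hypothetical sketch:

```python
import numpy as np

# The three matrices proposed in the solution.
S = [np.array([[1.0, 0.0], [0.0, 1.0]]),
     np.array([[1.0, 0.0], [0.0, -1.0]]),
     np.array([[0.0, 1.0], [1.0, 0.0]])]

all_invertible = all(abs(np.linalg.det(A)) > 1e-9 for A in S)

# A symmetric 2x2 matrix is determined by its entries (a, b, d);
# rank 3 of these triples confirms independence, hence a basis.
V = np.column_stack([[A[0, 0], A[0, 1], A[1, 1]] for A in S])
independent = np.linalg.matrix_rank(V) == 3
```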
Example \(\PageIndex{9}\)
Let \(A\) be any \(n \times n\) matrix. Show that there exist \(n^{2} + 1\) scalars \(a_{0}, a_{1}, a_{2}, \dots, a_{n^{2}}\) not all zero, such that
\[a_0I + a_1A +a_2A^2 + \dots + a_{n^2}A^{n^2} = 0 \nonumber \]
where \(I\) denotes the \(n \times n\) identity matrix.
Solution
The space \(\mathbf{M}_{nn}\) of all \(n \times n\) matrices has dimension \(n^{2}\) by Example 6.4.4. Hence the \(n^{2} + 1\) matrices \(I, A, A^{2}, \dots, A^{n^{2}}\) cannot be independent by Theorem \(\PageIndex{1}\), so a nontrivial linear combination vanishes. This is the desired conclusion.
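The argument is constructive in practice: flattening the powers of \(A\) into columns and extracting a null-space vector produces the scalars explicitly. A hypothetical sketch for \(n = 2\):

```python
import numpy as np

# Hypothetical 2x2 matrix; the same recipe works for any n.
n = 2
A = np.array([[1.0, 2.0], [3.0, 4.0]])

# Flatten I, A, A^2, ..., A^{n^2} into the columns of M: that gives
# n^2 + 1 = 5 columns in the n^2 = 4 dimensional space M22, so the
# columns must be dependent.
powers = [np.linalg.matrix_power(A, k).flatten() for k in range(n**2 + 1)]
M = np.column_stack(powers)

# A null-space vector of M supplies the scalars a_0, ..., a_{n^2}: the
# right-singular vector for the smallest singular value works.
a = np.linalg.svd(M)[2][-1]
annihilates = np.allclose(M @ a, 0.0)   # sum_k a_k A^k = 0
nontrivial = np.linalg.norm(a) > 0.5    # SVD rows are unit vectors
```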
The result in Example \(\PageIndex{9}\) can be written as \(f(A) = 0\) where \(f(x) = a_{0} + a_{1}x + a_{2}x^{2} + \dots + a_{n^{2}}x^{n^{2}}\). In other words, \(A\) satisfies a nonzero polynomial \(f(x)\) of degree at most \(n^{2}\). In fact we know that \(A\) satisfies a nonzero polynomial of degree \(n\) (this is the Cayley-Hamilton theorem; see Theorem 8.7.10), but the brevity of the solution in Example \(\PageIndex{9}\) is an indication of the power of these methods.
If \(U\) and \(W\) are subspaces of a vector space \(V\), there are two related subspaces that are of interest, their sum \(U + W\) and their intersection \(U \cap W\), defined by
\[\begin{aligned} U + W &= \{\mathbf{u} + \mathbf{w} \mid \mathbf{u} \in U \mbox{ and } \mathbf{w} \in W \} \\ U \cap W &= \{\mathbf{v} \in V \mid \mathbf{v} \in U \mbox{ and } \mathbf{v} \in W \}\end{aligned} \nonumber \]
It is routine to verify that these are indeed subspaces of \(V\), that \(U \cap W\) is contained in both \(U\) and \(W\), and that \(U + W\) contains both \(U\) and \(W\). We conclude this section with a useful fact about the dimensions of these spaces. The proof is a good illustration of how the theorems in this section are used.
Theorem \(\PageIndex{5}\)
Suppose that \(U\) and \(W\) are finite dimensional subspaces of a vector space \(V\). Then \(U + W\) is finite dimensional and
\[dim \;(U + W) = dim \; U + dim \; W - dim \;(U \cap W). \nonumber \]
Proof
Since \(U \cap W \subseteq U\), it has a finite basis, say \(\{\mathbf{x}_{1}, \dots, \mathbf{x}_{d}\}\). Extend it to a basis \(\{\mathbf{x}_{1}, \dots, \mathbf{x}_{d}, \mathbf{u}_{1}, \dots, \mathbf{u}_{m}\}\) of \(U\) by Theorem \(\PageIndex{1}\). Similarly extend \(\{\mathbf{x}_{1}, \dots, \mathbf{x}_{d}\}\) to a basis \(\{\mathbf{x}_{1}, \dots, \mathbf{x}_{d}, \mathbf{w}_{1}, \dots, \mathbf{w}_{p}\}\) of \(W\). Then
\[U + W = span \;\{\mathbf{x}_1, \dots, \mathbf{x}_d, \mathbf{u}_1, \dots, \mathbf{u}_m, \mathbf{w}_1, \dots, \mathbf{w}_p \} \nonumber \]
as the reader can verify, so \(U + W\) is finite dimensional. For the rest, it suffices to show that
\(\{\mathbf{x}_{1}, \dots, \mathbf{x}_{d}, \mathbf{u}_{1}, \dots, \mathbf{u}_{m}, \mathbf{w}_{1}, \dots, \mathbf{w}_{p}\}\) is independent (verify). Suppose that
\[\label{eq:thm6_4_5proof} r_1\mathbf{x}_1 + \dots + r_d\mathbf{x}_d + s_1\mathbf{u}_1 + \dots + s_m\mathbf{u}_m + t_1\mathbf{w}_1 + \dots + t_p\mathbf{w}_p = \mathbf{0} \]
where the \(r_{i}\), \(s_{j}\), and \(t_{k}\) are scalars. Then
\[r_1\mathbf{x}_1 + \dots + r_d\mathbf{x}_d + s_1\mathbf{u}_1 + \dots + s_m\mathbf{u}_m = -(t_1\mathbf{w}_1 + \dots + t_p\mathbf{w}_p) \nonumber \]
is in \(U\) (left side) and also in \(W\) (right side), and so is in \(U \cap W\). Hence \((t_{1}\mathbf{w}_{1} + \dots + t_{p}\mathbf{w}_{p})\) is a linear combination of \(\{\mathbf{x}_{1}, \dots, \mathbf{x}_{d}\}\), so \(t_{1} = \dots = t_{p} = 0\), because \(\{\mathbf{x}_{1}, \dots, \mathbf{x}_{d}, \mathbf{w}_{1}, \dots, \mathbf{w}_{p}\}\) is independent. Similarly, \(s_{1} = \dots = s_{m} = 0\), so (\ref{eq:thm6_4_5proof}) becomes \(r_{1}\mathbf{x}_{1} + \dots + r_{d}\mathbf{x}_{d} = \mathbf{0}\). It follows that \(r_{1} = \dots = r_{d} = 0\), as required.
\(\square\)
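The dimension formula can be checked numerically: with basis columns for \(U\) and \(W\), \(dim \;(U + W)\) is a rank, and \(dim \;(U \cap W)\) can be computed independently from the null space of \([U \mid -W]\). A hypothetical sketch:

```python
import numpy as np

# Hypothetical subspaces of R^4, each given by basis columns:
# U = span{e1, e2}, W = span{e2, e3}.
U = np.array([[1, 0], [0, 1], [0, 0], [0, 0]], float)
W = np.array([[0, 0], [1, 0], [0, 1], [0, 0]], float)

dim_U = np.linalg.matrix_rank(U)
dim_W = np.linalg.matrix_rank(W)
dim_sum = np.linalg.matrix_rank(np.hstack([U, W]))   # dim(U + W)

# dim(U ∩ W): each null vector of [U | -W] encodes a common element
# U x = W y; since the columns are bases, the count is exact.
dim_int = U.shape[1] + W.shape[1] - np.linalg.matrix_rank(np.hstack([U, -W]))

formula_holds = dim_sum == dim_U + dim_W - dim_int   # 3 = 2 + 2 - 1
```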
Theorem \(\PageIndex{5}\) is particularly interesting if \(U \cap W = \{\mathbf{0}\}\). Then there are no vectors \(\mathbf{x}_{i}\) in the above proof, and the argument shows that if \(\{\mathbf{u}_{1}, \dots, \mathbf{u}_{m}\}\) and \(\{\mathbf{w}_{1}, \dots, \mathbf{w}_{p}\}\) are bases of \(U\) and \(W\) respectively, then \(\{\mathbf{u}_{1}, \dots, \mathbf{u}_{m}, \mathbf{w}_{1}, \dots, \mathbf{w}_{p}\}\) is a basis of \(U + W\). In this case \(U + W\) is said to be a direct sum (written \(U \oplus W\)); we return to this in Chapter 9.