5.11.1.4: Finite Dimensional Spaces

    Up to this point, we have had no guarantee that an arbitrary vector space has a basis, and hence no guarantee that one can speak at all of the dimension of \(V\). However, Theorem [thm:019430] will show that any space that is spanned by a finite set of vectors has a (finite) basis. The proof requires the following basic lemma, of interest in itself, which gives a way to enlarge a given independent set of vectors.

    Lemma [lem:019357] (Independent Lemma). Let \(\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{k}\}\) be an independent set of vectors in a vector space \(V\). If \(\mathbf{u} \in V\) but \(\mathbf{u} \notin span \;\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{k}\}\), then \(\{\mathbf{u}, \mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{k}\}\) is also independent.

    Let \(t\mathbf{u} + t_{1}\mathbf{v}_{1} + t_{2}\mathbf{v}_{2} + \dots + t_{k}\mathbf{v}_{k} = \mathbf{0}\); we must show that all the coefficients are zero. If \(t \neq 0\), then \(\mathbf{u} = - \frac{t_1}{t}\mathbf{v}_1 - \frac{t_2}{t}\mathbf{v}_2 - \dots - \frac{t_k}{t}\mathbf{v}_k\) is in \(span \;\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{k}\}\), contrary to our assumption. Hence \(t = 0\). But then \(t_{1}\mathbf{v}_{1} + t_{2}\mathbf{v}_{2} + \dots + t_{k}\mathbf{v}_{k} = \mathbf{0}\), so the rest of the \(t_{i}\) are zero by the independence of \(\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{k}\}\). This is what we wanted.

    Note that the converse of Lemma [lem:019357] is also true: if \(\{\mathbf{u}, \mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{k}\}\) is independent, then \(\mathbf{u}\) is not in \(span \;\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{k}\}\).

    As an illustration, suppose that \(\{\mathbf{v}_{1}, \mathbf{v}_{2}\}\) is independent in \(\mathbb{R}^3\). Then \(\mathbf{v}_{1}\) and \(\mathbf{v}_{2}\) are not parallel, so \(span \;\{\mathbf{v}_{1}, \mathbf{v}_{2}\}\) is a plane through the origin. By Lemma [lem:019357], \(\mathbf{u}\) is not in this plane if and only if \(\{\mathbf{u}, \mathbf{v}_{1}, \mathbf{v}_{2}\}\) is independent.
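    This geometric test is easy to carry out numerically: a set of vectors in \(\mathbb{R}^n\) is independent exactly when the matrix having them as columns has full column rank. Below is a minimal numpy sketch; the specific vectors are our own illustrative choices, not from the text.

```python
import numpy as np

# Illustrative check of the Independent Lemma in R^3.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])        # {v1, v2} is independent
u_in = 2 * v1 - 3 * v2                # lies in span{v1, v2}
u_out = np.array([0.0, 0.0, 1.0])     # not in span{v1, v2}

def is_independent(*vectors):
    """A set is independent iff the matrix with those columns has full column rank."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

print(is_independent(v1, v2))          # True
print(is_independent(u_in, v1, v2))    # False: u_in is in the plane span{v1, v2}
print(is_independent(u_out, v1, v2))   # True, as the lemma predicts
```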

    Definition [def:019411] (Finite Dimensional and Infinite Dimensional Vector Spaces). A vector space \(V\) is called finite dimensional if it is spanned by a finite set of vectors. Otherwise, \(V\) is called infinite dimensional.

    Thus the zero vector space \(\{\mathbf{0}\}\) is finite dimensional because \(\{\mathbf{0}\}\) is a spanning set.

    Lemma [lem:019415]. Let \(V\) be a finite dimensional vector space. If \(U\) is any subspace of \(V\), then any independent subset of \(U\) can be enlarged to a finite basis of \(U\).

    Suppose that \(I\) is an independent subset of \(U\). If \(span \; I = U\) then \(I\) is already a basis of \(U\). If \(span \; I \neq U\), choose \(\mathbf{u}_{1} \in U\) such that \(\mathbf{u}_{1} \notin span \; I\). Then the set \(I \cup \{\mathbf{u}_{1}\}\) is independent by Lemma [lem:019357]. If \(span \;(I \cup \{\mathbf{u}_{1}\}) = U\) we are done; otherwise choose \(\mathbf{u}_{2} \in U\) such that \(\mathbf{u}_{2} \notin span \;(I \cup \{\mathbf{u}_{1}\})\). Then \(I \cup \{\mathbf{u}_{1}, \mathbf{u}_{2}\}\) is independent, and the process continues. We claim that a basis of \(U\) will be reached eventually. Indeed, if no basis of \(U\) is ever reached, the process creates arbitrarily large independent sets in \(V\). But this is impossible by the fundamental theorem because \(V\) is finite dimensional and so is spanned by a finite set of vectors.

    Theorem [thm:019430]. Let \(V\) be a finite dimensional vector space spanned by \(m\) vectors.

    1. \(V\) has a finite basis, and \(dim \; V \leq m\).
    2. Every independent set of vectors in \(V\) can be enlarged to a basis of \(V\) by adding vectors from any fixed basis of \(V\).
    3. If \(U\) is a subspace of \(V\), then
      1. \(U\) is finite dimensional and \(dim \; U \leq dim \; V\).
      2. If \(dim \; U = dim \; V\) then \(U=V\).
    Proof.

    1. If \(V = \{\mathbf{0}\}\), then \(V\) has an empty basis and \(dim \; V = 0 \leq m\). Otherwise, let \(\mathbf{v} \neq \mathbf{0}\) be a vector in \(V\). Then \(\{\mathbf{v}\}\) is independent, so (1) follows from Lemma [lem:019415] with \(U = V\).
    2. We refine the proof of Lemma [lem:019415]. Fix a basis \(B\) of \(V\) and let \(I\) be an independent subset of \(V\). If \(span \; I = V\) then \(I\) is already a basis of \(V\). If \(span \; I \neq V\), then \(B\) is not contained in \(span \; I\) (because \(B\) spans \(V\)), so we may choose \(\mathbf{b}_{1} \in B\) such that \(\mathbf{b}_{1} \notin span \; I\). Then the set \(I \cup \{\mathbf{b}_{1}\}\) is independent by Lemma [lem:019357]. If \(span \;(I \cup \{\mathbf{b}_{1}\}) = V\) we are done; otherwise a similar argument shows that \(I \cup \{\mathbf{b}_{1}, \mathbf{b}_{2}\}\) is independent for some \(\mathbf{b}_{2} \in B\). Continue this process. As in the proof of Lemma [lem:019415], a basis of \(V\) will be reached eventually. (A computational sketch of this process follows the proof.)
    3. Let \(U\) be a subspace of \(V\).
      1. This is clear if \(U = \{\mathbf{0}\}\). Otherwise, let \(\mathbf{u} \neq \mathbf{0}\) in \(U\). Then \(\{\mathbf{u}\}\) can be enlarged to a finite basis \(B\) of \(U\) by Lemma [lem:019415], proving that \(U\) is finite dimensional. But \(B\) is independent in \(V\), so \(dim \; U \leq dim \; V\) by the fundamental theorem.
      2. This is clear if \(U = \{\mathbf{0}\}\) because \(V\) has a basis; otherwise, it follows from (2).
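    The enlargement process in (2) is a greedy algorithm: walk once through the fixed basis \(B\) and keep any vector that is not already in the span of what has been collected so far. Here is a minimal numpy sketch of this process for subspaces of \(\mathbb{R}^n\); the function name and the sample vectors are our own illustrative choices.

```python
import numpy as np

def extend_to_basis(indep, fixed_basis):
    """Enlarge an independent list of vectors in R^n to a basis of R^n by
    adding vectors from a fixed basis, as in Theorem [thm:019430](2).
    Both arguments are lists of 1-D numpy arrays; `indep` is assumed independent."""
    current = list(indep)
    for b in fixed_basis:
        trial = np.column_stack(current + [b])
        # b lies outside span(current) exactly when appending it raises the rank
        if np.linalg.matrix_rank(trial) == len(current) + 1:
            current.append(b)
    return current  # independent and spanning, hence a basis

# Example: extend one vector in R^3 using the standard basis.
e = [np.eye(3)[:, i] for i in range(3)]
basis = extend_to_basis([np.array([1.0, 1.0, 0.0])], e)
print(np.column_stack(basis))  # an invertible 3x3 matrix
```

    Every vector of the fixed basis that is skipped already lies in the span of the collected vectors, so the final list spans \(\mathbb{R}^n\) and is independent by construction.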

    Theorem [thm:019430] shows that a vector space \(V\) is finite dimensional if and only if it has a finite basis (possibly empty), and that every subspace of a finite dimensional space is again finite dimensional.

    Example [exa:019464]. Enlarge the independent set \(D = \left\{ \left[ \begin{array}{rr} 1 & 1 \\ 1 & 0 \end{array} \right], \left[ \begin{array}{rr} 0 & 1 \\ 1 & 1 \end{array} \right], \left[ \begin{array}{rr} 1 & 0 \\ 1 & 1 \end{array} \right] \right\}\) to a basis of \(\mathbf{M}_{22}\).

    The standard basis of \(\mathbf{M}_{22}\) is \(\left\{ \left[ \begin{array}{rr} 1 & 0 \\ 0 & 0 \end{array} \right], \left[ \begin{array}{rr} 0 & 1 \\ 0 & 0 \end{array} \right], \left[ \begin{array}{rr} 0 & 0 \\ 1 & 0 \end{array} \right], \left[ \begin{array}{rr} 0 & 0 \\ 0 & 1 \end{array} \right] \right\}\), so including one of these in \(D\) will produce a basis by Theorem [thm:019430]. In fact including any of these matrices in \(D\) produces an independent set (verify), and hence a basis by Theorem [thm:019633]. Of course these vectors are not the only possibilities, for example, including \(\left[ \begin{array}{rr} 1 & 1 \\ 0 & 1 \end{array} \right]\) works as well.
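    The independence claim marked "(verify)" can be checked mechanically: flatten each \(2 \times 2\) matrix to a vector in \(\mathbb{R}^4\), so independence becomes a rank computation. A short numpy check (our own verification, using the first standard basis matrix):

```python
import numpy as np

# Flatten each 2x2 matrix to a vector in R^4; independence becomes a rank check.
D = [np.array([[1, 1], [1, 0]]),
     np.array([[0, 1], [1, 1]]),
     np.array([[1, 0], [1, 1]])]
E11 = np.array([[1, 0], [0, 0]])   # first standard basis matrix of M22

M = np.column_stack([X.flatten() for X in D + [E11]])
print(np.linalg.matrix_rank(M))    # 4, so D together with E11 is a basis of M22
```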

    Example [exa:019475]. Find a basis of \(\mathbf{P}_{3}\) containing the independent set \(\{1 + x, 1 + x^{2}\}\).

    The standard basis of \(\mathbf{P}_{3}\) is \(\{1, x, x^{2}, x^{3}\}\), so including two of these vectors will do. If we use \(1\) and \(x^{3}\), the result is \(\{1, 1 + x, 1 + x^{2}, x^{3}\}\). This is independent because the polynomials have distinct degrees (Example [exa:018606]), and so is a basis by Theorem [thm:019430]. Of course, including \(\{1, x\}\) or \(\{1, x^{2}\}\) would not work!
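    Independence in \(\mathbf{P}_{3}\) can likewise be checked by ranks, recording each polynomial by its coefficient vector with respect to \(\{1, x, x^{2}, x^{3}\}\). A short numpy check of the accepted choice and one rejected choice (our own verification):

```python
import numpy as np

# Coefficient vectors with respect to {1, x, x^2, x^3}.
good = [[1, 0, 0, 0],   # 1
        [1, 1, 0, 0],   # 1 + x
        [1, 0, 1, 0],   # 1 + x^2
        [0, 0, 0, 1]]   # x^3
bad  = [[1, 1, 0, 0],   # 1 + x
        [1, 0, 1, 0],   # 1 + x^2
        [1, 0, 0, 0],   # 1
        [0, 1, 0, 0]]   # x
print(np.linalg.matrix_rank(np.array(good).T))  # 4: a basis of P3
print(np.linalg.matrix_rank(np.array(bad).T))   # 3: dependent, since (1+x) = 1 + x
```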

    Example [exa:019490]. Show that the space \(\mathbf{P}\) of all polynomials is infinite dimensional.

    For each \(n \geq 1\), \(\mathbf{P}\) has a subspace \(\mathbf{P}_{n}\) of dimension \(n + 1\). Suppose \(\mathbf{P}\) is finite dimensional, say \(dim \;\mathbf{P} = m\). Then \(dim \;\mathbf{P}_{n} \leq dim \;\mathbf{P}\) by Theorem [thm:019430], that is, \(n + 1 \leq m\). This is impossible since \(n\) is arbitrary, so \(\mathbf{P}\) must be infinite dimensional.

    The next example illustrates how (2) of Theorem [thm:019430] can be used.

    Example [exa:019499]. If \(\mathbf{c}_{1}, \mathbf{c}_{2}, \dots, \mathbf{c}_{k}\) are independent columns in \(\mathbb{R}^n\), show that they are the first \(k\) columns in some invertible \(n \times n\) matrix.

    By Theorem [thm:019430], expand \(\{\mathbf{c}_{1}, \mathbf{c}_{2}, \dots, \mathbf{c}_{k}\}\) to a basis \(\{\mathbf{c}_{1}, \mathbf{c}_{2}, \dots, \mathbf{c}_{k}, \mathbf{c}_{k+1}, \dots, \mathbf{c}_{n}\}\) of \(\mathbb{R}^n\). Then the matrix \(A = \left[ \begin{array}{ccccccc} \mathbf{c}_{1} & \mathbf{c}_{2} & \dots & \mathbf{c}_{k} & \mathbf{c}_{k+1} & \dots & \mathbf{c}_{n} \end{array} \right]\) with this basis as its columns is an \(n \times n\) matrix and it is invertible by Theorem [thm:014205].
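    The expansion step can be carried out with the same greedy rank test used earlier: append standard basis vectors one at a time, keeping only those that raise the rank. A minimal numpy sketch (the function name and sample column are our own); it assumes the given columns are independent.

```python
import numpy as np

def complete_to_invertible(cols):
    """Given independent columns c1,...,ck in R^n, return an invertible n x n
    matrix whose first k columns are c1,...,ck, appending standard basis vectors."""
    n = len(cols[0])
    current = list(cols)
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        # keep e_i only if it is not already in the span of the current columns
        if np.linalg.matrix_rank(np.column_stack(current + [e])) == len(current) + 1:
            current.append(e)
    return np.column_stack(current)

A = complete_to_invertible([np.array([1.0, 2.0, 3.0])])
print(A)
print(np.linalg.det(A))  # nonzero, so A is invertible
```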

    Theorem [thm:019525]. Let \(U\) and \(W\) be subspaces of the finite dimensional space \(V\).

    1. If \(U \subseteq W\), then \(dim \; U \leq dim \; W\).
    2. If \(U \subseteq W\) and \(dim \; U = dim \; W\), then \(U = W\).

    Since \(W\) is finite dimensional, (1) follows by taking \(V = W\) in part (3) of Theorem [thm:019430]. Now assume \(dim \; U = dim \; W = n\), and let \(B\) be a basis of \(U\). Then \(B\) is an independent set in \(W\). If \(U \neq W\), then \(span \; B \neq W\), so \(B\) can be extended to an independent set of \(n + 1\) vectors in \(W\) by Lemma [lem:019357]. This contradicts the fundamental theorem (Theorem [thm:018746]) because \(W\) is spanned by \(dim \; W = n\) vectors. Hence \(U = W\), proving (2).

    Theorem [thm:019525] is very useful. This was illustrated in Example [exa:014418] for \(\mathbb{R}^2\) and \(\mathbb{R}^3\); here is another example.

    Example [exa:019539]. If \(a\) is a number, let \(W\) denote the subspace of all polynomials in \(\mathbf{P}_{n}\) that have \(a\) as a root:

    \[W = \{p(x) \mid p(x) \in \mathbf{P}_n \mbox{ and } p(a) = 0 \} \nonumber \]

    Show that \(\{(x - a), (x - a)^{2}, \dots, (x - a)^{n}\}\) is a basis of \(W\).

    Observe first that \((x - a), (x - a)^2, \dots, (x - a)^n\) are members of \(W\), and that they are independent because they have distinct degrees (Example [exa:018606]). Write

    \[U = span \;\{(x - a), (x - a)^2, \dots, (x - a)^n \} \nonumber \]

    Then we have \(U \subseteq W \subseteq \mathbf{P}_{n}\), \(dim \; U = n\), and \(dim \;\mathbf{P}_{n} = n + 1\). Hence \(n \leq dim \; W \leq n + 1\) by Theorem [thm:019525]. Since \(dim \; W\) is an integer, we must have \(dim \; W = n\) or \(dim \; W = n + 1\). But then \(W = U\) or \(W = \mathbf{P}_{n}\), again by Theorem [thm:019525]. Because \(W \neq \mathbf{P}_{n}\) (the constant polynomial \(1\) does not have \(a\) as a root), it follows that \(W = U\), as required.
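    For a concrete check (with sympy, taking \(n = 3\) and \(a = 2\), our own illustrative choices): expanding a polynomial with \(p(a) = 0\) in powers of \(x - a\) exhibits it as a combination of \((x - a), (x - a)^{2}, (x - a)^{3}\), with no constant term.

```python
import sympy as sp

x = sp.symbols('x')
a = 2
p = (x - a) * (5 * x**2 + x - 7)    # a sample cubic with p(a) = 0

# Coefficients of p expanded in powers of (x - a): substitute x -> x + a.
coeffs = sp.Poly(p.subs(x, x + a), x).all_coeffs()   # highest power first
print(coeffs)   # constant term is 0, so p is a combination of (x-a)^k, k >= 1

# Reassemble p from the basis {(x-a), (x-a)^2, (x-a)^3} of W.
recon = sum(c * (x - a)**k for k, c in enumerate(reversed(coeffs)))
print(sp.expand(recon - p))   # 0
```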

    A set of vectors is called dependent if it is not independent, that is, if some nontrivial linear combination vanishes. The next result is a convenient test for dependence.

    Lemma [lem:019559] (Dependent Lemma). A set \(D = \{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{k}\}\) of vectors in a vector space \(V\) is dependent if and only if some vector in \(D\) is a linear combination of the others.

    Let \(\mathbf{v}_{2}\) (say) be a linear combination of the rest: \(\mathbf{v}_{2} = s_{1}\mathbf{v}_{1} + s_{3}\mathbf{v}_{3} + \dots + s_{k}\mathbf{v}_{k}\). Then

    \[s_{1}\mathbf{v}_{1} + (-1)\mathbf{v}_{2} + s_{3}\mathbf{v}_{3} + \dots + s_{k}\mathbf{v}_{k} = \mathbf{0} \nonumber \]

    is a nontrivial linear combination that vanishes, so \(D\) is dependent. Conversely, if \(D\) is dependent, let \(t_{1}\mathbf{v}_{1} + t_{2}\mathbf{v}_{2} + \dots + t_{k}\mathbf{v}_{k} = \mathbf{0}\) where some coefficient is nonzero. If (say) \(t_{2} \neq 0\), then \(\mathbf{v}_2 = - \frac{t_1}{t_2}\mathbf{v}_1 - \frac{t_3}{t_2}\mathbf{v}_3 - \dots - \frac{t_k}{t_2}\mathbf{v}_k\) is a linear combination of the others.

    Lemma [lem:019357] gives a way to enlarge independent sets to a basis; by contrast, Lemma [lem:019559] shows that spanning sets can be cut down to a basis.

    Theorem [thm:019593]. Let \(V\) be a finite dimensional vector space. Any spanning set for \(V\) can be cut down (by deleting vectors) to a basis of \(V\).

    Since \(V\) is finite dimensional, it has a finite spanning set \(S\). Among all spanning sets contained in \(S\), choose \(S_{0}\) containing the smallest number of vectors. It suffices to show that \(S_{0}\) is independent (then \(S_{0}\) is a basis, proving the theorem). Suppose, on the contrary, that \(S_{0}\) is not independent. Then, by Lemma [lem:019559], some vector \(\mathbf{u} \in S_{0}\) is a linear combination of the set \(S_{1} = S_{0} \setminus \{\mathbf{u}\}\) of vectors in \(S_{0}\) other than \(\mathbf{u}\). It follows that \(span \; S_{0} = span \; S_{1}\), that is, \(V = span \; S_{1}\). But \(S_{1}\) has fewer elements than \(S_{0}\) so this contradicts the choice of \(S_{0}\). Hence \(S_{0}\) is independent after all.

    Note that, with Theorem [thm:019430], Theorem [thm:019593] completes the promised proof of Theorem [thm:014407] for the case \(V = \mathbb{R}^n\).

    Example [exa:019616]. Find a basis of \(\mathbf{P}_{3}\) in the spanning set \(S = \{1, x + x^{2}, 2x - 3x^{2}, 1 + 3x - 2x^{2}, x^{3}\}\).

    Since \(dim \;\mathbf{P}_{3} = 4\), we must eliminate one polynomial from \(S\). It cannot be \(x^{3}\) because the span of the rest of \(S\) is contained in \(\mathbf{P}_{2}\). But eliminating \(1 + 3x - 2x^{2}\) does leave a basis (verify). Note that \(1 + 3x - 2x^{2}\) is the sum of the first three polynomials in \(S\).
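    A greedy variant of the cut-down process in Theorem [thm:019593] keeps a vector only if it is not already in the span of the vectors kept so far; this also settles the "(verify)" above. A numpy sketch, with each polynomial recorded by its coefficient vector (our own verification):

```python
import numpy as np

# Coefficient vectors (1, x, x^2, x^3) for the spanning set S of Example [exa:019616].
S = {"1":          [1, 0, 0, 0],
     "x+x^2":      [0, 1, 1, 0],
     "2x-3x^2":    [0, 2, -3, 0],
     "1+3x-2x^2":  [1, 3, -2, 0],
     "x^3":        [0, 0, 0, 1]}

# Greedy cut-down: keep a vector only if it is outside the span of those kept.
kept = []
for name, v in S.items():
    if np.linalg.matrix_rank(np.column_stack(kept + [v])) == len(kept) + 1:
        kept.append(v)
        print("keep", name)
# Keeps 1, x+x^2, 2x-3x^2, x^3 and drops 1+3x-2x^2: a basis of P3 inside S.
```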

    Theorems [thm:019430] and [thm:019593] have other useful consequences.

    Theorem [thm:019633]. Let \(V\) be a vector space with \(dim \; V = n\), and suppose \(S\) is a set of exactly \(n\) vectors in \(V\). Then \(S\) is independent if and only if \(S\) spans \(V\).

    Assume first that \(S\) is independent. By Theorem [thm:019430], \(S\) is contained in a basis \(B\) of \(V\). Hence \(|S| = n = |B|\) so, since \(S \subseteq B\), it follows that \(S = B\). In particular \(S\) spans \(V\).

    Conversely, assume that \(S\) spans \(V\), so \(S\) contains a basis \(B\) by Theorem [thm:019593]. Again \(|S| = n = |B|\) so, since \(S \supseteq B\), it follows that \(S = B\). Hence \(S\) is independent.

    When showing that a set of vectors is a basis, one of independence and spanning is often easier to establish than the other. For example, if \(V = \mathbb{R}^n\) it is easy to check whether a subset \(S\) of \(\mathbb{R}^n\) is orthogonal (hence independent), but checking spanning can be tedious. Here are three more examples.
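    In coordinates, Theorem [thm:019633] has a convenient computational form: for exactly \(n\) vectors in an \(n\)-dimensional space, independence and spanning both reduce to invertibility of the square matrix whose columns are the coordinate vectors. A minimal numpy sketch (the function name and sample vectors are our own):

```python
import numpy as np

def is_basis(vectors):
    """For exactly dim V vectors, nonzero determinant of the coordinate
    matrix certifies independence AND spanning at once (Theorem [thm:019633])."""
    M = np.column_stack(vectors)
    assert M.shape[0] == M.shape[1], "need exactly dim V vectors"
    return abs(np.linalg.det(M)) > 1e-12

print(is_basis([np.array([1.0, 0.0, 1.0]),
                np.array([0.0, 1.0, 1.0]),
                np.array([1.0, 1.0, 0.0])]))  # True
```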

    Example [exa:019643]. Consider the set \(S = \{p_{0}(x), p_{1}(x), \dots, p_{n}(x)\}\) of polynomials in \(\mathbf{P}_{n}\). If \(\text{deg}\; p_{k}(x) = k\) for each \(k\), show that \(S\) is a basis of \(\mathbf{P}_{n}\).

    The set \(S\) is independent because the degrees are distinct (Example [exa:018606]). Hence \(S\) is a basis of \(\mathbf{P}_{n}\) by Theorem [thm:019633] because \(dim \;\mathbf{P}_{n} = n + 1\).

    Example [exa:019657]. Let \(V\) denote the space of all symmetric \(2 \times 2\) matrices. Find a basis of \(V\) consisting of invertible matrices.

    We know that \(dim \; V = 3\) (Example [exa:018930]), so what is needed is a set of three invertible, symmetric matrices that (using Theorem [thm:019633]) is either independent or spans \(V\). The set \(\left\{ \left[ \begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array} \right], \left[ \begin{array}{rr} 1 & 0 \\ 0 & -1 \end{array} \right], \left[ \begin{array}{rr} 0 & 1 \\ 1 & 0 \end{array} \right] \right\}\) is independent (verify) and so is a basis of the required type.

    Example [exa:019664]. Let \(A\) be any \(n \times n\) matrix. Show that there exist \(n^{2} + 1\) scalars \(a_{0}, a_{1}, a_{2}, \dots, a_{n^{2}}\), not all zero, such that

    \[a_0I + a_1A +a_2A^2 + \dots + a_{n^2}A^{n^2} = 0 \nonumber \]

    where \(I\) denotes the \(n \times n\) identity matrix.

    The space \(\mathbf{M}_{nn}\) of all \(n \times n\) matrices has dimension \(n^{2}\) by Example [exa:018880]. Hence the \(n^{2} + 1\) matrices \(I, A, A^{2}, \dots, A^{n^{2}}\) cannot be independent by Theorem [thm:019633], so a nontrivial linear combination vanishes. This is the desired conclusion.

    The result in Example [exa:019664] can be written as \(f(A) = 0\) where \(f(x) = a_{0} + a_{1}x + a_{2}x^{2} + \dots + a_{n^{2}}x^{n^{2}}\). In other words, \(A\) satisfies a nonzero polynomial \(f(x)\) of degree at most \(n^{2}\). In fact we know that \(A\) satisfies a nonzero polynomial of degree \(n\) (this is the Cayley-Hamilton theorem; see Theorem [thm:025927]), but the brevity of the solution in Example [exa:019664] is an indication of the power of these methods.
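    The vanishing combination can be found explicitly by linear algebra: flatten \(I, A, A^{2}, \dots, A^{n^{2}}\) into the columns of an \(n^{2} \times (n^{2} + 1)\) matrix and take a null-space vector, which must exist since \(n^{2} + 1 > n^{2}\). A sympy sketch following this argument for a sample \(2 \times 2\) matrix of our own choosing:

```python
import sympy as sp

A = sp.Matrix([[1, 2], [3, 4]])
n = A.shape[0]
powers = [A**k for k in range(n**2 + 1)]                 # I, A, ..., A^{n^2}
# Each power becomes one column of an n^2 x (n^2 + 1) matrix.
M = sp.Matrix.hstack(*[P.reshape(n**2, 1) for P in powers])

coeffs = M.nullspace()[0]          # a nontrivial vanishing combination exists
f_of_A = sum((c * P for c, P in zip(coeffs, powers)), sp.zeros(n, n))
print(coeffs.T)
print(f_of_A)                      # the zero matrix
```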

    If \(U\) and \(W\) are subspaces of a vector space \(V\), there are two related subspaces that are of interest, their sum \(U + W\) and their intersection \(U \cap W\), defined by

    \[\begin{aligned} U + W &= \{\mathbf{u} + \mathbf{w} \mid \mathbf{u} \in U \mbox{ and } \mathbf{w} \in W \} \\ U \cap W &= \{\mathbf{v} \in V \mid \mathbf{v} \in U \mbox{ and } \mathbf{v} \in W \}\end{aligned} \nonumber \]

    It is routine to verify that these are indeed subspaces of \(V\), that \(U \cap W\) is contained in both \(U\) and \(W\), and that \(U + W\) contains both \(U\) and \(W\). We conclude this section with a useful fact about the dimensions of these spaces. The proof is a good illustration of how the theorems in this section are used.

    Theorem [thm:019692]. Suppose that \(U\) and \(W\) are finite dimensional subspaces of a vector space \(V\). Then \(U + W\) is finite dimensional and

    \[dim \;(U + W) = dim \; U + dim \; W - dim \;(U \cap W). \nonumber \]

    Since \(U \cap W \subseteq U\), it has a finite basis, say \(\{\mathbf{x}_{1}, \dots, \mathbf{x}_{d}\}\). Extend it to a basis \(\{\mathbf{x}_{1}, \dots, \mathbf{x}_{d}, \mathbf{u}_{1}, \dots, \mathbf{u}_{m}\}\) of \(U\) by Theorem [thm:019430]. Similarly extend \(\{\mathbf{x}_{1}, \dots, \mathbf{x}_{d}\}\) to a basis \(\{\mathbf{x}_{1}, \dots, \mathbf{x}_{d}, \mathbf{w}_{1}, \dots, \mathbf{w}_{p}\}\) of \(W\). Then

    \[U + W = span \;\{\mathbf{x}_1, \dots, \mathbf{x}_d, \mathbf{u}_1, \dots, \mathbf{u}_m, \mathbf{w}_1, \dots, \mathbf{w}_p \} \nonumber \]

    as the reader can verify, so \(U + W\) is finite dimensional. For the rest, it suffices to show that
    \(\{\mathbf{x}_{1}, \dots, \mathbf{x}_{d}, \mathbf{u}_{1}, \dots, \mathbf{u}_{m}, \mathbf{w}_{1}, \dots, \mathbf{w}_{p}\}\) is independent, for then it is a basis of \(U + W\) and the dimension formula follows by counting: \(dim \;(U + W) = d + m + p = (d + m) + (d + p) - d\). Suppose that

    \[\label{eq:thm6_4_5proof} r_1\mathbf{x}_1 + \dots + r_d\mathbf{x}_d + s_1\mathbf{u}_1 + \dots + s_m\mathbf{u}_m + t_1\mathbf{w}_1 + \dots + t_p\mathbf{w}_p = \mathbf{0} \]

    where the \(r_{i}\), \(s_{j}\), and \(t_{k}\) are scalars. Then

    \[r_1\mathbf{x}_1 + \dots + r_d\mathbf{x}_d + s_1\mathbf{u}_1 + \dots + s_m\mathbf{u}_m = -(t_1\mathbf{w}_1 + \dots + t_p\mathbf{w}_p) \nonumber \]

    is in \(U\) (left side) and also in \(W\) (right side), and so is in \(U \cap W\). Hence \((t_{1}\mathbf{w}_{1} + \dots + t_{p}\mathbf{w}_{p})\) is a linear combination of \(\{\mathbf{x}_{1}, \dots, \mathbf{x}_{d}\}\), so \(t_{1} = \dots = t_{p} = 0\), because \(\{\mathbf{x}_{1}, \dots, \mathbf{x}_{d}, \mathbf{w}_{1}, \dots, \mathbf{w}_{p}\}\) is independent. Similarly, \(s_{1} = \dots = s_{m} = 0\), so ([eq:thm6_4_5proof]) becomes \(r_{1}\mathbf{x}_{1} + \dots + r_{d}\mathbf{x}_{d} = \mathbf{0}\). It follows that \(r_{1} = \dots = r_{d} = 0\), as required.

    Theorem [thm:019692] is particularly interesting if \(U \cap W = \{\mathbf{0}\}\). Then there are no vectors \(\mathbf{x}_{i}\) in the above proof, and the argument shows that if \(\{\mathbf{u}_{1}, \dots, \mathbf{u}_{m}\}\) and \(\{\mathbf{w}_{1}, \dots, \mathbf{w}_{p}\}\) are bases of \(U\) and \(W\) respectively, then \(\{\mathbf{u}_{1}, \dots, \mathbf{u}_{m}, \mathbf{w}_{1}, \dots, \mathbf{w}_{p}\}\) is a basis of \(U + W\). In this case \(U + W\) is said to be a direct sum (written \(U \oplus W\)); we return to this in Chapter [chap:9].
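    The dimension formula is easy to test numerically. In the sketch below (our own, using numpy and random subspaces of \(\mathbb{R}^6\)), a pair \((\mathbf{x}, \mathbf{y})\) with \(U\mathbf{x} = W\mathbf{y}\) corresponds to a vector \(U\mathbf{x}\) lying in both subspaces, the same observation used in the proof, so \(dim \;(U \cap W)\) equals the nullity of \([U \mid -W]\) when the columns of \(U\) and \(W\) are independent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random subspaces of R^6 given by independent columns (full rank with prob. 1).
U = rng.standard_normal((6, 3))   # dim U = 3
W = rng.standard_normal((6, 4))   # dim W = 4

dim_U = np.linalg.matrix_rank(U)
dim_W = np.linalg.matrix_rank(W)
dim_sum = np.linalg.matrix_rank(np.hstack([U, W]))       # dim(U + W)
# dim(U ∩ W) = nullity of [U | -W]: solutions (x, y) of Ux = Wy give U ∩ W.
nullity = U.shape[1] + W.shape[1] - np.linalg.matrix_rank(np.hstack([U, -W]))
print(dim_sum, dim_U + dim_W - nullity)   # equal, as Theorem [thm:019692] asserts
```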


    This page titled 5.11.1.4: Finite Dimensional Spaces is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by W. Keith Nicholson (Lyryx Learning Inc.) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.