
5.11.1.2: Subspaces and Spanning Sets

    Chapter [chap:5] is essentially about the subspaces of \(\mathbb{R}^n\). We now extend this notion.

    Definition 018059 (Subspaces of a Vector Space). If \(V\) is a vector space, a nonempty subset \(U \subseteq V\) is called a subspace of \(V\) if \(U\) is itself a vector space using the addition and scalar multiplication of \(V\).

    Subspaces of \(\mathbb{R}^n\) (as defined in Section [sec:5_1]) are subspaces in the present sense by Example [exa:017680]. Moreover, the defining properties for a subspace of \(\mathbb{R}^n\) actually characterize subspaces in general.

    Theorem 018065 (Subspace Test). A subset \(U\) of a vector space \(V\) is a subspace of \(V\) if and only if it satisfies the following three conditions:

    1. \(\mathbf{0}\) lies in \(U\) where \(\mathbf{0}\) is the zero vector of \(V\).
    2. If \(\mathbf{u}_{1}\) and \(\mathbf{u}_{2}\) are in \(U\), then \(\mathbf{u}_{1} + \mathbf{u}_{2}\) is also in \(U\).
    3. If \(\mathbf{u}\) is in \(U\), then \(a\mathbf{u}\) is also in \(U\) for each scalar \(a\).

    Proof. If \(U\) is a subspace of \(V\), then (2) and (3) hold by axioms A1 and S1 respectively, applied to the vector space \(U\). Since \(U\) is nonempty (it is a vector space), choose \(\mathbf{u}\) in \(U\). Then (1) holds because \(\mathbf{0} = 0\mathbf{u}\) is in \(U\) by (3) and Theorem [thm:017797].

    Conversely, if (1), (2), and (3) hold, then axioms A1 and S1 hold because of (2) and (3), and axioms A2, A3, S2, S3, S4, and S5 hold in \(U\) because they hold in \(V\). Axiom A4 holds because the zero vector \(\mathbf{0}\) of \(V\) is actually in \(U\) by (1), and so serves as the zero of \(U\). Finally, given \(\mathbf{u}\) in \(U\), then its negative \(-\mathbf{u}\) in \(V\) is again in \(U\) by (3) because \(-\mathbf{u} = (-1)\mathbf{u}\) (again using Theorem [thm:017797]). Hence \(-\mathbf{u}\) serves as the negative of \(\mathbf{u}\) in \(U\).

    Note that the proof of Theorem [thm:018065] shows that if \(U\) is a subspace of \(V\), then \(U\) and \(V\) share the same zero vector, and that the negative of a vector in the space \(U\) is the same as its negative in \(V\).

    Example 018086. If \(V\) is any vector space, show that \(\{\mathbf{0}\}\) and \(V\) are subspaces of \(V\).

    \(U = V\) clearly satisfies the conditions of the subspace test. As to \(U = \{\mathbf{0}\}\), it satisfies the conditions because \(\mathbf{0} + \mathbf{0} = \mathbf{0}\) and \(a\mathbf{0} = \mathbf{0}\) for all \(a\) in \(\mathbb{R}\).

    The vector space \(\{\mathbf{0}\}\) is called the zero subspace of \(V\).

    Example 018093. Let \(\mathbf{v}\) be a vector in a vector space \(V\). Show that the set

    \[\mathbb{R}\mathbf{v} = \{a\mathbf{v} \mid a \mbox{ in } \mathbb{R} \} \nonumber \]

    of all scalar multiples of \(\mathbf{v}\) is a subspace of \(V\).

    Because \(\mathbf{0} = 0\mathbf{v}\), it is clear that \(\mathbf{0}\) lies in \(\mathbb{R}\mathbf{v}\). Given two vectors \(a\mathbf{v}\) and \(a_{1}\mathbf{v}\) in \(\mathbb{R}\mathbf{v}\), their sum \(a\mathbf{v} + a_{1}\mathbf{v} = (a + a_{1})\mathbf{v}\) is also a scalar multiple of \(\mathbf{v}\) and so lies in \(\mathbb{R}\mathbf{v}\). Hence \(\mathbb{R}\mathbf{v}\) is closed under addition. Finally, given \(a\mathbf{v}\), \(r(a\mathbf{v}) = (ra)\mathbf{v}\) lies in \(\mathbb{R}\mathbf{v}\) for all \(r \in \mathbb{R}\), so \(\mathbb{R}\mathbf{v}\) is closed under scalar multiplication. Hence the subspace test applies.

    In particular, given \(\mathbf{d} \neq \mathbf{0}\) in \(\mathbb{R}^{3}\), \(\mathbb{R}\mathbf{d}\) is the line through the origin with direction vector \(\mathbf{d}\).

    The space \(\mathbb{R}\mathbf{v}\) in Example [exa:018093] is described by giving the form of each vector in \(\mathbb{R}\mathbf{v}\). The next example describes a subset \(U\) of the space \(\mathbf{M}_{nn}\) by giving a condition that each matrix of \(U\) must satisfy.

    Example 018107. Let \(A\) be a fixed matrix in \(\mathbf{M}_{nn}\). Show that \(U = \{X \mbox{ in } \mathbf{M}_{nn} \mid AX = XA\}\) is a subspace of \(\mathbf{M}_{nn}\).

    If \(0\) is the \(n \times n\) zero matrix, then \(A0 = 0A\), so \(0\) satisfies the condition for membership in \(U\). Next suppose that \(X\) and \(X_{1}\) lie in \(U\) so that \(AX = XA\) and \(AX_{1} = X_{1}A\). Then

    \[\begin{aligned} A(X + X_1) &= AX + AX_1 = XA + X_1A = (X + X_1)A \\ A(aX) &= a(AX) = a(XA) = (aX)A\end{aligned} \nonumber \]

    for all \(a\) in \(\mathbb{R}\), so both \(X + X_{1}\) and \(aX\) lie in \(U\). Hence \(U\) is a subspace of \(\mathbf{M}_{nn}\).
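
    As an aside, this computation is easy to sanity-check numerically. The following Python sketch (an illustration, not part of the original text) picks a sample matrix \(A\) together with two matrices that commute with it, and verifies that their sum and a scalar multiple again lie in \(U\); the particular choices of \(A\), \(X\), and \(X_1\) are assumptions made for the demonstration.

    ```python
    import numpy as np

    # Illustrative choices (assumptions for this sketch): a sample matrix A and
    # two matrices known to commute with A, namely the identity and A itself.
    A = np.array([[1.0, 2.0],
                  [0.0, 3.0]])
    X = np.eye(2)
    X1 = A.copy()
    a = 5.0

    # If X and X1 lie in U = {X : AX = XA}, then so do X + X1 and a*X.
    for M in (X + X1, a * X):
        assert np.allclose(A @ M, M @ A)
    print("X + X1 and a*X both commute with A")
    ```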

    Suppose \(p(x)\) is a polynomial and \(a\) is a number. Then the number \(p(a)\) obtained by replacing \(x\) by \(a\) in the expression for \(p(x)\) is called the evaluation of \(p(x)\) at \(a\). For example, if \(p(x) = 5 - 6x + 2x^{2}\), then the evaluation of \(p(x)\) at \(a = 2\) is \(p(2) = 5 - 12 + 8 = 1\). If \(p(a) = 0\), the number \(a\) is called a root of \(p(x)\).

    Example 018124. Consider the set \(U\) of all polynomials in \(\mathbf{P}\) that have \(3\) as a root:

    \[U = \{p(x) \mbox{ in } \mathbf{P} \mid p(3) = 0 \} \nonumber \]

    Show that \(U\) is a subspace of \(\mathbf{P}\).

    Clearly, the zero polynomial lies in \(U\). Now let \(p(x)\) and \(q(x)\) lie in \(U\) so \(p(3) = 0\) and \(q(3) = 0\). We have \((p + q)(x) = p(x) + q(x)\) for all \(x\), so \((p + q)(3) = p(3) + q(3) = 0 + 0 = 0\), and \(U\) is closed under addition. The verification that \(U\) is closed under scalar multiplication is similar.
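
    To make the closure argument concrete, the short SymPy sketch below (an illustration only; the sample polynomials are assumptions) confirms that the sum of two polynomials with \(3\) as a root, and a scalar multiple of one of them, again vanish at \(x = 3\).

    ```python
    import sympy as sp

    x = sp.symbols('x')

    # Sample polynomials with 3 as a root (illustrative assumptions).
    p = (x - 3) * (x + 1)
    q = (x - 3) * (x**2 + 2)
    a = sp.Integer(7)

    # Closure under addition and scalar multiplication: 3 remains a root.
    assert (p + q).subs(x, 3) == 0
    assert (a * p).subs(x, 3) == 0
    print("p + q and a*p both vanish at x = 3")
    ```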

    Recall that the space \(\mathbf{P}_{n}\) consists of all polynomials of the form

    \[a_0 + a_1x + a_2x^2 + \dots + a_nx^n \nonumber \]

    where \(a_{0}, a_{1}, a_{2}, \dots, a_{n}\) are real numbers, and so is closed under the addition and scalar multiplication in \(\mathbf{P}\). Moreover, the zero polynomial is included in \(\mathbf{P}_{n}\). Thus the subspace test gives Example [exa:018140].

    Example 018140. \(\mathbf{P}_{n}\) is a subspace of \(\mathbf{P}\) for each \(n \geq 0\).

    The next example involves the notion of the derivative \(f^\prime\) of a function \(f\). (If the reader is not familiar with calculus, this example may be omitted.) A function \(f\) defined on the interval \([a, b]\) is called differentiable if the derivative \(f^\prime(r)\) exists at every \(r\) in \([a, b]\).

    Example 018145. Show that the subset \(\mathbf{D}[a, b]\) of all differentiable functions on \([a, b]\) is a subspace of the vector space \(\mathbf{F}[a, b]\) of all functions on \([a, b]\).

    The derivative of any constant function is the constant function \(0\); in particular, \(0\) itself is differentiable and so lies in \(\mathbf{D}[a, b]\). If \(f\) and \(g\) both lie in \(\mathbf{D}[a, b]\) (so that \(f^\prime\) and \(g^\prime\) exist), then it is a theorem of calculus that \(f + g\) and \(rf\) are both differentiable for any \(r \in \mathbb{R}\). In fact, \((f + g)^\prime = f^\prime + g^\prime\) and \((rf)^\prime = rf^\prime\), so both lie in \(\mathbf{D}[a, b]\). This shows that \(\mathbf{D}[a, b]\) is a subspace of \(\mathbf{F}[a, b]\).

    Linear Combinations and Spanning Sets

    Definition 018153 (Linear Combinations and Spanning). Let \(\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{n}\}\) be a set of vectors in a vector space \(V\). As in \(\mathbb{R}^n\), a vector \(\mathbf{v}\) is called a linear combination of the vectors \(\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{n}\) if it can be expressed in the form

    \[\mathbf{v} = a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \dots + a_n\mathbf{v}_n \nonumber \]

    where \(a_{1}, a_{2}, \dots, a_{n}\) are scalars, called the coefficients of \(\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{n}\). The set of all linear combinations of these vectors is called their span, and is denoted by

    \[span \;\{\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_n\} = \{ a_1\mathbf{v}_1 + a_2\mathbf{v}_2 + \dots + a_n\mathbf{v}_n \mid a_i \mbox{ in } \mathbb{R} \} \nonumber \]

    If it happens that \(V = span \;\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{n}\}\), these vectors are called a spanning set for \(V\). For example, the span of two vectors \(\mathbf{v}\) and \(\mathbf{w}\) is the set

    \[span \;\{\mathbf{v}, \mathbf{w} \} = \{s\mathbf{v} + t\mathbf{w} \mid s \mbox{ and } t \mbox{ in } \mathbb{R} \} \nonumber \]

    of all sums of scalar multiples of these vectors.

    Example 018176. Consider the vectors \(p_{1} = 1 + x + 4x^{2}\) and \(p_{2} = 1 + 5x + x^{2}\) in \(\mathbf{P}_{2}\). Determine whether \(p_{1}\) and \(p_{2}\) lie in \(span \;\{1 + 2x - x^{2}, 3 + 5x + 2x^{2}\}\).

    For \(p_{1}\), we want to determine if \(s\) and \(t\) exist such that

    \[p_1 = s(1 + 2x - x^2) + t(3 + 5x + 2x^2) \nonumber \]

    Equating coefficients of powers of \(x\) (where \(x^{0} = 1\)) gives

    \[1 = s + 3t,\quad 1 = 2s + 5t, \quad \mbox{ and } \quad 4 = -s + 2t \nonumber \]

    These equations have the solution \(s = -2\) and \(t = 1\), so \(p_{1}\) is indeed in \(span \;\{1 + 2x - x^{2}, 3 + 5x + 2x^{2}\}\).

    Turning to \(p_{2} = 1 + 5x + x^{2}\), we are looking for \(s\) and \(t\) such that

    \[p_{2} = s(1 + 2x - x^{2}) + t(3 + 5x + 2x^{2}) \nonumber \]

    Again equating coefficients of powers of \(x\) gives equations \(1 = s + 3t\), \(5 = 2s + 5t\), and \(1 = -s + 2t\). But in this case there is no solution, so \(p_{2}\) is not in \(span \;\{1 + 2x - x^{2}, 3 + 5x + 2x^{2}\}\).
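
    Deciding membership in the span amounts to solving a small linear system for \(s\) and \(t\), so the check can be automated. The Python sketch below (an illustration; the coefficient layout is an assumption of the sketch) encodes the spanning polynomials as coefficient columns and tests \(p_{1}\) and \(p_{2}\) by least squares: an exact fit means the polynomial lies in the span.

    ```python
    import numpy as np

    # Columns are the coefficient vectors (constant, x, x^2) of the spanning
    # polynomials 1 + 2x - x^2 and 3 + 5x + 2x^2.
    A = np.array([[ 1.0, 3.0],
                  [ 2.0, 5.0],
                  [-1.0, 2.0]])

    p1 = np.array([1.0, 1.0, 4.0])   # 1 + x + 4x^2
    p2 = np.array([1.0, 5.0, 1.0])   # 1 + 5x + x^2

    for name, p in (("p1", p1), ("p2", p2)):
        # Least squares returns the best (s, t); an exact fit means p is in the span.
        coeffs, *_ = np.linalg.lstsq(A, p, rcond=None)
        in_span = np.allclose(A @ coeffs, p)
        print(name, "in span:", in_span, "(s, t) =", np.round(coeffs, 6))
    ```

    Running this reports that \(p_{1}\) is in the span with \((s, t) = (-2, 1)\), while no exact fit exists for \(p_{2}\), matching the hand calculation above.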

    We saw in Example [exa:013694] that \(\mathbb{R}^m = span \;\{\mathbf{e}_{1}, \mathbf{e}_{2}, \dots, \mathbf{e}_{m}\}\) where the vectors \(\mathbf{e}_{1}, \mathbf{e}_{2}, \dots, \mathbf{e}_{m}\) are the columns of the \(m \times m\) identity matrix. Of course \(\mathbb{R}^m = \mathbf{M}_{m1}\) is the set of all \(m \times 1\) matrices, and there is an analogous spanning set for each space \(\mathbf{M}_{mn}\). For example, each \(2 \times 2\) matrix has the form

    \[\left[ \begin{array}{rr} a & b \\ c & d \end{array} \right] = a \left[ \begin{array}{rr} 1 & 0 \\ 0 & 0 \end{array} \right] + b \left[ \begin{array}{rr} 0 & 1 \\ 0 & 0 \end{array} \right] + c \left[ \begin{array}{rr} 0 & 0 \\ 1 & 0 \end{array} \right] + d \left[ \begin{array}{rr} 0 & 0 \\ 0 & 1 \end{array} \right] \nonumber \]

    so

    \[\mathbf{M}_{22} = span \; \left \{ \left[ \begin{array}{rr} 1 & 0 \\ 0 & 0 \end{array} \right], \left[ \begin{array}{rr} 0 & 1 \\ 0 & 0 \end{array} \right],\ \left[ \begin{array}{rr} 0 & 0 \\ 1 & 0 \end{array} \right],\ \left[ \begin{array}{rr} 0 & 0 \\ 0 & 1 \end{array} \right] \right \} \nonumber \]

    Similarly, we obtain

    Example 018224. \(\mathbf{M}_{mn}\) is the span of the set of all \(m \times n\) matrices with exactly one entry equal to \(1\), and all other entries zero.

    The fact that every polynomial in \(\mathbf{P}_{n}\) has the form \(a_{0} + a_{1}x + a_{2}x^{2} + \dots+ a_{n}x^{n}\) where each \(a_{i}\) is in \(\mathbb{R}\) shows that

    Example 018237. \(\mathbf{P}_{n} = span \;\{1, x, x^{2}, \dots, x^{n}\}\).

    In Example [exa:018093] we saw that \(span \;\{\mathbf{v}\} = \{a\mathbf{v} \mid a \mbox{ in } \mathbb{R}\} = \mathbb{R}\mathbf{v}\) is a subspace for any vector \(\mathbf{v}\) in a vector space \(V\). More generally, the span of any set of vectors is a subspace. In fact, the proof of Theorem [thm:013606] goes through to prove:

    Theorem 018244. Let \(U = span \;\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{n}\}\) in a vector space \(V\). Then:

    1. \(U\) is a subspace of \(V\) containing each of \(\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{n}\).
    2. \(U\) is the “smallest” subspace containing these vectors in the sense that any subspace that contains each of \(\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{n}\) must contain \(U\).

    Here is how condition 2 in Theorem [thm:018244] is used. Given vectors \(\mathbf{v}_1, \dots, \mathbf{v}_{n}\) in a vector space \(V\) and a subspace \(U \subseteq V\), then:

    \[span \;\{\mathbf{v}_{1}, \dots, \mathbf{v}_{n}\} \subseteq U \Leftrightarrow \mbox{ each } \mathbf{v}_i \in U \nonumber \]

    The following examples illustrate this.

    Example 018262. Show that \(\mathbf{P}_{3} = span \;\{x^{2} + x^{3}, x, 2x^{2} + 1, 3\}\).

    Write \(U = span \;\{x^{2} + x^{3}, x, 2x^{2} + 1, 3\}\). Then \(U \subseteq \mathbf{P}_{3}\), and we use the fact that \(\mathbf{P}_{3} = span \;\{1, x, x^{2}, x^{3}\}\) to show that \(\mathbf{P}_{3} \subseteq U\). In fact, \(x\) and \(1 = \frac{1}{3} \cdot 3\) clearly lie in \(U\). But then successively,

    \[x^2 = \frac{1}{2}[(2x^2 + 1) - 1] \quad \mbox{ and } \quad x^3 = (x^2 + x^3) - x^2 \nonumber \]

    also lie in \(U\). Hence \(\mathbf{P}_{3} \subseteq U\) by Theorem [thm:018244].
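
    The chain of identities above can also be checked symbolically. The SymPy sketch below (an illustrative verification, not part of the original text) reproduces each step and confirms that \(1\), \(x\), \(x^{2}\), and \(x^{3}\) all lie in \(U\).

    ```python
    import sympy as sp

    x = sp.symbols('x')

    # The four spanning candidates from the example.
    q1, q2, q3, q4 = x**2 + x**3, x, 2*x**2 + 1, sp.Integer(3)

    # Reproduce the chain of identities used in the proof.
    one   = sp.Rational(1, 3) * q4           # 1 = (1/3) * 3
    x_sq  = sp.Rational(1, 2) * (q3 - one)   # x^2 = (1/2)[(2x^2 + 1) - 1]
    x_cub = q1 - x_sq                        # x^3 = (x^2 + x^3) - x^2

    assert sp.expand(one) == 1
    assert sp.expand(x_sq) == x**2
    assert sp.expand(x_cub) == x**3
    print("1, x, x^2, x^3 all lie in U, so P3 is contained in U")
    ```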

    Example 018282. Let \(\mathbf{u}\) and \(\mathbf{v}\) be two vectors in a vector space \(V\). Show that

    \[span \;\{\mathbf{u}, \mathbf{v}\} = span \;\{\mathbf{u} + 2\mathbf{v}, \mathbf{u} - \mathbf{v} \} \nonumber \]

    We have \(span \;\{\mathbf{u} + 2\mathbf{v}, \mathbf{u} - \mathbf{v}\} \subseteq\) \(span \;\{\mathbf{u}, \mathbf{v}\}\) by Theorem [thm:018244] because both \(\mathbf{u} + 2\mathbf{v}\) and \(\mathbf{u} - \mathbf{v}\) lie in \(span \;\{\mathbf{u}, \mathbf{v}\}\). On the other hand,

    \[\mathbf{u} = \frac{1}{3}(\mathbf{u} + 2\mathbf{v}) + \frac{2}{3}(\mathbf{u} - \mathbf{v}) \quad \mbox{ and } \quad \mathbf{v} = \frac{1}{3}(\mathbf{u} + 2\mathbf{v}) - \frac{1}{3}(\mathbf{u} - \mathbf{v}) \nonumber \]

    so \(span \;\{\mathbf{u}, \mathbf{v}\} \subseteq\) \(span \;\{\mathbf{u} + 2\mathbf{v}, \mathbf{u} - \mathbf{v}\}\), again by Theorem [thm:018244].
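
    Since the two displayed identities involve only vector-space algebra, they can be verified with symbolic stand-ins for \(\mathbf{u}\) and \(\mathbf{v}\). The SymPy sketch below (an illustrative check; the symbols merely stand in for abstract vectors) confirms both.

    ```python
    import sympy as sp

    # Symbolic stand-ins for the abstract vectors u and v.
    u, v = sp.symbols('u v')

    w1 = u + 2*v   # u + 2v
    w2 = u - v     # u - v

    # The combinations used in the example recover u and v.
    assert sp.expand(sp.Rational(1, 3)*w1 + sp.Rational(2, 3)*w2) == u
    assert sp.expand(sp.Rational(1, 3)*w1 - sp.Rational(1, 3)*w2) == v
    print("u and v recovered from u + 2v and u - v")
    ```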


    This page titled 5.11.1.2: Subspaces and Spanning Sets is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by W. Keith Nicholson (Lyryx Learning Inc.) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.