
5.11.1.3: Linear Independence and Dimension

    Linear Independence and Dependence (018551)

    As in \(\mathbb{R}^n\), a set of vectors \(\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{n}\}\) in a vector space \(V\) is called linearly independent (or simply independent) if it satisfies the following condition:

    \[\mbox{If } \quad s_1\mathbf{v}_1 + s_2\mathbf{v}_2 + \dots + s_n\mathbf{v}_n = \mathbf{0}, \quad \mbox{ then } \quad s_1 = s_2 = \dots = s_n = 0. \nonumber \]

    A set of vectors that is not linearly independent is said to be linearly dependent (or simply dependent).

    The trivial linear combination of the vectors \(\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{n}\) is the one with every coefficient zero:

    \[0\mathbf{v}_1 + 0\mathbf{v}_2 + \dots + 0\mathbf{v}_n \nonumber \]

    This is obviously one way of expressing \(\mathbf{0}\) as a linear combination of the vectors \(\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{n}\), and they are linearly independent when it is the only way.

    Example 018569. Show that \(\{1 + x, 3x + x^{2}, 2 + x - x^{2}\}\) is independent in \(\mathbf{P}_{2}\).

    Suppose a linear combination of these polynomials vanishes.

    \[s_1(1 + x) + s_2(3x + x^2) + s_3(2 + x - x^2) = 0 \nonumber \]

    Equating the coefficients of \(1\), \(x\), and \(x^{2}\) gives a set of linear equations.

    \[ \begin{array}{rlrlrcr} s_1 & & & + & 2s_3 & = & 0 \\ s_1 & + & 3s_2 & + & s_3 & = & 0 \\ & & s_2 & - & s_3 & = & 0 \end{array} \nonumber \]

    The only solution is \(s_{1} = s_{2} = s_{3} = 0\).
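
    For readers who want to verify the arithmetic by machine, the following minimal sketch (using SymPy, which is not part of the text) sets up the same three equations and confirms that the trivial solution is the only one.

```python
import sympy as sp

s1, s2, s3 = sp.symbols('s1 s2 s3')

# Equate coefficients of 1, x, x^2 in
# s1*(1 + x) + s2*(3x + x^2) + s3*(2 + x - x^2) = 0
equations = [
    sp.Eq(s1 + 2*s3, 0),        # constant term
    sp.Eq(s1 + 3*s2 + s3, 0),   # coefficient of x
    sp.Eq(s2 - s3, 0),          # coefficient of x^2
]

print(sp.solve(equations, (s1, s2, s3)))  # {s1: 0, s2: 0, s3: 0}
```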

    Example 018586. Show that \(\{\sin x, \cos x\}\) is independent in the vector space \(\mathbf{F}[0, 2\pi]\) of functions defined on the interval \([0, 2\pi]\).

    Suppose that a linear combination of these functions vanishes.

    \[s_1(\sin x) + s_2(\cos x) = 0 \nonumber \]

    This must hold for all values of \(x\) in \([0, 2\pi]\) (by the definition of equality in \(\mathbf{F}[0, 2\pi]\)). Taking \(x = 0\) yields \(s_{2} = 0\) (because \(\sin 0 = 0\) and \(\cos 0 = 1\)). Similarly, \(s_{1} = 0\) follows from taking \(x = \frac{\pi}{2}\) (because \(\sin \frac{\pi}{2} = 1\) and \(\cos \frac{\pi}{2} = 0\)).
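
    The evaluation argument can also be checked numerically. The sketch below (NumPy assumed, not part of the text) evaluates both functions at \(x = 0\) and \(x = \pi/2\); since the resulting \(2 \times 2\) coefficient matrix is invertible, \(s_{1} = s_{2} = 0\) is forced.

```python
import numpy as np

# Rows: evaluations at x = 0 and x = pi/2; columns: sin x, cos x.
M = np.array([
    [np.sin(0.0),       np.cos(0.0)],        # [0, 1]
    [np.sin(np.pi / 2), np.cos(np.pi / 2)],  # [1, ~0]
])

# M @ [s1, s2] = 0 has only the trivial solution because det(M) != 0.
print(np.linalg.det(M))  # approximately -1.0
```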

    Example 018596. Suppose that \(\{\mathbf{u}, \mathbf{v}\}\) is an independent set in a vector space \(V\). Show that \(\{\mathbf{u} + 2\mathbf{v}, \mathbf{u} - 3\mathbf{v}\}\) is also independent.

    Suppose a linear combination of \(\mathbf{u} + 2\mathbf{v}\) and \(\mathbf{u} - 3\mathbf{v}\) vanishes:

    \[s(\mathbf{u} + 2\mathbf{v}) + t(\mathbf{u} - 3\mathbf{v}) = \mathbf{0} \nonumber \]

    We must deduce that \(s = t = 0\). Collecting terms involving \(\mathbf{u}\) and \(\mathbf{v}\) gives

    \[(s + t)\mathbf{u} + (2s - 3t)\mathbf{v} = \mathbf{0} \nonumber \]

    Because \(\{\mathbf{u}, \mathbf{v}\}\) is independent, this yields linear equations \(s + t = 0\) and \(2s - 3t = 0\). The only solution is \(s = t = 0\).
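
    As a quick cross-check (SymPy assumed, not part of the text), the two conditions \(s + t = 0\) and \(2s - 3t = 0\) can be solved directly, or one can note that the matrix of coefficients expressing the new vectors in terms of \(\mathbf{u}\) and \(\mathbf{v}\) has nonzero determinant.

```python
import sympy as sp

s, t = sp.symbols('s t')

# The equations obtained by collecting coefficients of u and v.
print(sp.solve([sp.Eq(s + t, 0), sp.Eq(2*s - 3*t, 0)], (s, t)))  # {s: 0, t: 0}

# Equivalently, the matrix expressing u + 2v and u - 3v in terms of u, v
# is invertible, so only the trivial combination can vanish.
print(sp.Matrix([[1, 2], [1, -3]]).det())  # -5
```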

    Example 018606. Show that any set of polynomials of distinct degrees is independent.

    Let \(p_{1}, p_{2}, \dots, p_{m}\) be polynomials where \(\text{deg} (p_{i}) = d_{i}\). By relabelling if necessary, we may assume that \(d_{1} > d_{2} > \dots > d_{m}\). Suppose that a linear combination vanishes:

    \[t_1p_1 + t_2p_2 + \dots + t_mp_m = 0 \nonumber \]

    where each \(t_{i}\) is in \(\mathbb{R}\). As \(\text{deg} (p_{1}) = d_{1}\), let \(ax^{d_{1}}\) be the term in \(p_{1}\) of highest degree, where \(a \neq 0\). Since \(d_{1} > d_{2} > \dots > d_{m}\), it follows that \(t_{1}ax^{d_{1}}\) is the only term of degree \(d_{1}\) in the linear combination \(t_{1}p_{1} + t_{2}p_{2} + \dots + t_{m}p_{m} = 0\). This means that \(t_{1}ax^{d_{1}} = 0\), whence \(t_{1}a = 0\), hence \(t_{1} = 0\) (because \(a \neq 0\)). But then \(t_{2}p_{2} + \dots + t_{m}p_{m} = 0\) so we can repeat the argument to show that \(t_{2} = 0\). Continuing, we obtain \(t_{i} = 0\) for each \(i\), as desired.
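
    A concrete instance may make the "highest-degree term" argument more tangible. In the sketch below (NumPy assumed; the particular polynomials are made up for illustration), polynomials of degrees \(3, 2, 1, 0\) are stored as coefficient vectors. Each row has its leading coefficient in a column where all later rows are zero, so the matrix has full rank and the only vanishing combination is the trivial one.

```python
import numpy as np

# Rows: coefficient vectors (constant, x, x^2, x^3) of polynomials
# with distinct degrees 3, 2, 1, 0.  Lower-order coefficients are arbitrary.
P = np.array([
    [4.0, -1.0, 2.0, 5.0],   # degree 3
    [1.0,  3.0, 7.0, 0.0],   # degree 2
    [2.0, -6.0, 0.0, 0.0],   # degree 1
    [9.0,  0.0, 0.0, 0.0],   # degree 0
])

print(np.linalg.matrix_rank(P))  # 4: the four polynomials are independent
```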

    Example 018648. Suppose that \(A\) is an \(n \times n\) matrix such that \(A^{k} = 0\) but \(A^{k-1} \neq 0\). Show that \(B = \{I, A, A^{2}, \dots, A^{k-1}\}\) is independent in \(\mathbf{M}_{nn}\).

    Suppose \(r_{0}I + r_{1}A + r_{2}A^{2} + \dots + r_{k-1}A^{k-1} = 0\). Multiply by \(A^{k-1}\):

    \[r_0A^{k - 1} + r_1A^k + r_2A^{k + 1} + \dots + r_{k-1}A^{2k-2} = 0 \nonumber \]

    Since \(A^{k} = 0\), all the higher powers are zero, so this becomes \(r_{0}A^{k-1} = 0\). But \(A^{k-1} \neq 0\), so \(r_{0} = 0\), and we have \(r_{1}A^{1} + r_{2}A^{2} + \dots + r_{k-1}A^{k-1} = 0\). Now multiply by \(A^{k-2}\) to conclude that \(r_{1} = 0\). Continuing, we obtain \(r_{i} = 0\) for each \(i\), so \(B\) is independent.
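
    For a concrete numerical check (NumPy assumed, not part of the text), take the \(4 \times 4\) "shift" matrix, which satisfies \(A^{4} = 0\) but \(A^{3} \neq 0\). Flattening \(I, A, A^{2}, A^{3}\) into rows and computing the rank confirms their independence in \(\mathbf{M}_{44}\).

```python
import numpy as np

# 4x4 shift matrix: ones on the superdiagonal, so A^4 = 0 but A^3 != 0.
A = np.diag(np.ones(3), k=1)

powers = [np.linalg.matrix_power(A, i) for i in range(4)]  # I, A, A^2, A^3
print(np.allclose(np.linalg.matrix_power(A, 4), 0))        # True
print(np.allclose(powers[3], 0))                           # False

# Flatten each matrix to a vector in R^16; independence becomes a rank check.
stacked = np.vstack([M.flatten() for M in powers])
print(np.linalg.matrix_rank(stacked))  # 4
```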

    The next example collects several useful properties of independence for reference.

    Example 018694. Let \(V\) denote a vector space.

    1. If \(\mathbf{v} \neq \mathbf{0}\) in \(V\), then \(\{\mathbf{v}\}\) is an independent set.
    2. No independent set of vectors in \(V\) can contain the zero vector.

    For (1), let \(t\mathbf{v} = \mathbf{0}\) with \(t\) in \(\mathbb{R}\). If \(t \neq 0\), then \(\mathbf{v} = 1\mathbf{v} = \frac{1}{t}(t\mathbf{v}) = \frac{1}{t}\mathbf{0} = \mathbf{0}\), contrary to assumption. So \(t = 0\), and \(\{\mathbf{v}\}\) is independent.

    For (2), if \(\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{k}\}\) is independent and (say) \(\mathbf{v}_{2} = \mathbf{0}\), then \(0\mathbf{v}_{1} + 1\mathbf{v}_{2} + \dots + 0\mathbf{v}_{k} = \mathbf{0}\) is a nontrivial linear combination that vanishes, contrary to the independence of \(\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{k}\}\).

    A set of vectors is independent exactly when \(\mathbf{0}\) can be written as a linear combination of them in only one way, namely the trivial one. The following theorem shows that, in this case, every linear combination of these vectors has uniquely determined coefficients, and so extends Theorem [thm:013996].

    Theorem 018721. Let \(\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{n}\}\) be a linearly independent set of vectors in a vector space \(V\). If a vector \(\mathbf{v}\) has two (ostensibly different) representations

    \[\def\arraycolsep{1.5pt} \begin{array}{lllllllll} \mathbf{v} & = & s_1\mathbf{v}_1 &+& s_2\mathbf{v}_2 &+& \cdots &+& s_n\mathbf{v}_n \\ \mathbf{v} & = & t_1\mathbf{v}_1 &+& t_2\mathbf{v}_2 &+& \cdots &+& t_n\mathbf{v}_n \end{array} \nonumber \]

    as linear combinations of these vectors, then \(s_{1} = t_{1}, s_{2} = t_{2}, \dots, s_{n} = t_{n}\). In other words, every vector in \(V\) can be written in a unique way as a linear combination of the \(\mathbf{v}_{i}\).

    Subtracting the equations given in the theorem gives

    \[(s_1 - t_1)\mathbf{v}_1 + (s_2 - t_2)\mathbf{v}_2 + \dots + (s_n - t_n)\mathbf{v}_n = \mathbf{0} \nonumber \]

    The independence of \(\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{n}\}\) gives \(s_{i} - t_{i} = 0\) for each \(i\), as required.
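
    In \(\mathbb{R}^n\) this uniqueness is exactly the statement that a linear system with an invertible coefficient matrix has one solution. The sketch below (NumPy assumed; the vectors are made up for illustration) recovers the coefficients of a vector with respect to an independent set by solving such a system.

```python
import numpy as np

# Columns of V are three independent vectors v1, v2, v3 in R^3.
V = np.array([
    [1.0, 0.0, 2.0],
    [1.0, 3.0, 1.0],
    [0.0, 1.0, -1.0],
])

v = np.array([3.0, 8.0, 1.0])       # a vector in span{v1, v2, v3} = R^3
coeffs = np.linalg.solve(V, v)      # the unique coefficients s1, s2, s3
print(coeffs, np.allclose(V @ coeffs, v))  # True
```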

    The following theorem extends (and proves) Theorem [thm:014254], and is one of the most useful results in linear algebra.

    Fundamental Theorem (018746)

    Suppose a vector space \(V\) can be spanned by \(n\) vectors. If any set of \(m\) vectors in \(V\) is linearly independent, then \(m \leq n\).

    Let \(V = span \;\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{n}\}\), and suppose that \(\{\mathbf{u}_{1}, \mathbf{u}_{2}, \dots, \mathbf{u}_{m}\}\) is an independent set in \(V\). Then \(\mathbf{u}_{1} = a_{1}\mathbf{v}_{1} + a_{2}\mathbf{v}_{2} + \dots + a_{n}\mathbf{v}_{n}\) where each \(a_{i}\) is in \(\mathbb{R}\). As \(\mathbf{u}_{1} \neq \mathbf{0}\) (Example [exa:018694]), not all of the \(a_{i}\) are zero, say \(a_{1} \neq 0\) (after relabelling the \(\mathbf{v}_{i}\)). Then \(V = span \;\{\mathbf{u}_{1}, \mathbf{v}_{2}, \mathbf{v}_{3}, \dots, \mathbf{v}_{n}\}\) as the reader can verify. Hence, write \(\mathbf{u}_{2} = b_{1}\mathbf{u}_{1} + c_{2}\mathbf{v}_{2} + c_{3}\mathbf{v}_{3} + \dots + c_{n}\mathbf{v}_{n}\). Then some \(c_{i} \neq 0\) because \(\{\mathbf{u}_{1}, \mathbf{u}_{2}\}\) is independent; so, as before, \(V = span \;\{\mathbf{u}_{1}, \mathbf{u}_{2}, \mathbf{v}_{3}, \dots, \mathbf{v}_{n}\}\), again after possible relabelling of the \(\mathbf{v}_{i}\). If \(m > n\), this procedure continues until all the vectors \(\mathbf{v}_{i}\) are replaced by the vectors \(\mathbf{u}_{1}, \mathbf{u}_{2}, \dots, \mathbf{u}_{n}\). In particular, \(V = span \;\{\mathbf{u}_{1}, \mathbf{u}_{2}, \dots, \mathbf{u}_{n}\}\). But then \(\mathbf{u}_{n+1}\) is a linear combination of \(\mathbf{u}_{1}, \mathbf{u}_{2}, \dots, \mathbf{u}_{n}\) contrary to the independence of the \(\mathbf{u}_{i}\). Hence, the assumption \(m > n\) cannot be valid, so \(m \leq n\) and the theorem is proved.

    If \(V = span \;\{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{n}\}\), and if \(\{\mathbf{u}_{1}, \mathbf{u}_{2}, \dots, \mathbf{u}_{m}\}\) is an independent set in \(V\), the above proof shows not only that \(m \leq n\) but also that \(m\) of the (spanning) vectors \(\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{n}\) can be replaced by the (independent) vectors \(\mathbf{u}_{1}, \mathbf{u}_{2}, \dots, \mathbf{u}_{m}\) and the resulting set will still \(span \; V\). In this form the result is called the Steinitz Exchange Lemma.

    Basis of a Vector Space (018819)

    As in \(\mathbb{R}^n\), a set \(\{\mathbf{e}_{1}, \mathbf{e}_{2}, \dots, \mathbf{e}_{n}\}\) of vectors in a vector space \(V\) is called a basis of \(V\) if it satisfies the following two conditions:

    1. \(\{\mathbf{e}_{1}, \mathbf{e}_{2}, \dots, \mathbf{e}_{n}\}\) is linearly independent
    2. \(V = span \;\{\mathbf{e}_{1}, \mathbf{e}_{2}, \dots, \mathbf{e}_{n}\}\)

    Thus if a set of vectors \(\{\mathbf{e}_{1}, \mathbf{e}_{2}, \dots, \mathbf{e}_{n}\}\) is a basis, then every vector in \(V\) can be written as a linear combination of these vectors in a unique way (Theorem [thm:018721]). But even more is true: Any two (finite) bases of \(V\) contain the same number of vectors.

    Invariance Theorem (018841)

    Let \(\{\mathbf{e}_{1}, \mathbf{e}_{2}, \dots, \mathbf{e}_{n}\}\) and \(\{\mathbf{f}_{1}, \mathbf{f}_{2}, \dots, \mathbf{f}_{m}\}\) be two bases of a vector space \(V\). Then \(n = m\).

    Because \(V = span \;\{\mathbf{e}_{1}, \mathbf{e}_{2}, \dots, \mathbf{e}_{n}\}\) and \(\{\mathbf{f}_{1}, \mathbf{f}_{2}, \dots, \mathbf{f}_{m}\}\) is independent, it follows from Theorem [thm:018746] that \(m \leq n\). Similarly \(n \leq m\), so \(n = m\), as asserted.

    Theorem [thm:018841] guarantees that no matter which basis of \(V\) is chosen it contains the same number of vectors as any other basis. Hence there is no ambiguity about the following definition.

    Dimension of a Vector Space (018862)

    If \(\{\mathbf{e}_{1}, \mathbf{e}_{2}, \dots, \mathbf{e}_{n}\}\) is a basis of the nonzero vector space \(V\), the number \(n\) of vectors in the basis is called the dimension of \(V\), and we write

    \[dim \; V = n \nonumber \]

    The zero vector space \(\{\mathbf{0}\}\) is defined to have dimension \(0\):

    \[dim \;\{\mathbf{0}\} = 0 \nonumber \]

    In our discussion to this point we have always assumed that a basis is nonempty and hence that the dimension of the space is at least \(1\). However, the zero space \(\{\mathbf{0}\}\) has no basis (by Example [exa:018694]) so our insistence that \(dim \;\{\mathbf{0}\} = 0\) amounts to saying that the empty set of vectors is a basis of \(\{\mathbf{0}\}\). Thus the statement that “the dimension of a vector space is the number of vectors in any basis” holds even for the zero space.

    We saw in Example [exa:014241] that \(dim \;(\mathbb{R}^n) = n\) and, if \(\mathbf{e}_{j}\) denotes column \(j\) of \(I_{n}\), that \(\{\mathbf{e}_{1}, \mathbf{e}_{2}, \dots, \mathbf{e}_{n}\}\) is a basis (called the standard basis). In Example [exa:018880] below, similar considerations apply to the space \(\mathbf{M}_{mn}\) of all \(m \times n\) matrices; the verifications are left to the reader.

    Example 018880. The space \(\mathbf{M}_{mn}\) has dimension \(mn\), and one basis consists of all \(m \times n\) matrices with exactly one entry equal to \(1\) and all other entries equal to \(0\). We call this the standard basis of \(\mathbf{M}_{mn}\).

    Example 018885. Show that \(dim \; \mathbf{P}_{n} = n + 1\) and that \(\{1, x, x^{2}, \dots, x^{n}\}\) is a basis, called the standard basis of \(\mathbf{P}_{n}\).

    Each polynomial \(p(x) = a_{0} + a_{1}x + \dots + a_{n}x^{n}\) in \(\mathbf{P}_{n}\) is clearly a linear combination of \(1, x, \dots, x^{n}\), so \(\mathbf{P}_{n} = span \;\{1, x, \dots, x^{n}\}\). However, if a linear combination of these vectors vanishes, \(a_{0}1 + a_{1}x + \dots + a_{n}x^{n} = 0\), then \(a_{0} = a_{1} = \dots = a_{n} = 0\) because \(x\) is an indeterminate. So \(\{1, x, \dots, x^{n}\}\) is linearly independent and hence is a basis containing \(n + 1\) vectors. Thus, \(dim \;(\mathbf{P}_{n}) = n + 1\).

    Example 018912. If \(\mathbf{v} \neq \mathbf{0}\) is any nonzero vector in a vector space \(V\), show that \(span \;\{\mathbf{v}\} = \mathbb{R}\mathbf{v}\) has dimension \(1\).

    \(\{\mathbf{v}\}\) clearly spans \(\mathbb{R}\mathbf{v}\), and it is linearly independent by Example [exa:018694]. Hence \(\{\mathbf{v}\}\) is a basis of \(\mathbb{R}\mathbf{v}\), and so \(dim \; \mathbb{R}\mathbf{v} = 1\).

    Example 018918. Let \(A = \left[ \begin{array}{rr} 1 & 1 \\ 0 & 0 \end{array} \right]\) and consider the subspace

    \[U = \{X \mbox{ in } \mathbf{M}_{22} \mid AX = XA \} \nonumber \]

    of \(\mathbf{M}_{22}\). Show that \(dim \; U = 2\) and find a basis of \(U\).

    It was shown in Example [exa:018107] that \(U\) is a subspace for any choice of the matrix \(A\). In the present case, if \(X = \left[ \begin{array}{rr} x & y \\ z & w \end{array} \right]\) is in \(U\), the condition \(AX = XA\) gives \(z = 0\) and \(x = y + w\). Hence each matrix \(X\) in \(U\) can be written

    \[X = \left[ \begin{array}{cc} y + w & y \\ 0 & w \end{array} \right] = y \left[ \begin{array}{rr} 1 & 1 \\ 0 & 0 \end{array} \right] + w \left[ \begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array} \right] \nonumber \]

    so \(U = span \; B\) where \(B = \left\{ \left[ \begin{array}{rr} 1 & 1 \\ 0 & 0 \end{array} \right] , \left[ \begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array} \right] \right\}.\) Moreover, the set \(B\) is linearly independent (verify this), so it is a basis of \(U\) and \(dim \; U = 2\).
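
    The condition \(AX = XA\) can also be solved symbolically. The sketch below (SymPy assumed, not part of the text) reproduces the conditions \(z = 0\) and \(x = y + w\), showing that \(y\) and \(w\) remain free and hence that \(dim \; U = 2\).

```python
import sympy as sp

x, y, z, w = sp.symbols('x y z w')
A = sp.Matrix([[1, 1], [0, 0]])
X = sp.Matrix([[x, y], [z, w]])

# AX = XA gives four scalar equations in x, y, z, w.
solution = sp.solve(list(A*X - X*A), (x, y, z, w), dict=True)
print(solution)  # [{x: y + w, z: 0}] -- y and w are free parameters

# Substituting back shows X = y*[[1,1],[0,0]] + w*[[1,0],[0,1]], the basis B.
print(X.subs(solution[0]))
```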

    Example 018930. Show that the set \(V\) of all symmetric \(2 \times 2\) matrices is a vector space, and find the dimension of \(V\).

    A matrix \(A\) is symmetric if \(A^{T} = A\). If \(A\) and \(B\) lie in \(V\), then

    \[(A + B)^T = A^T + B^T = A + B \quad \mbox{ and } \quad (kA)^T = kA^T = kA \nonumber \]

    using Theorem [thm:002240]. Hence \(A + B\) and \(kA\) are also symmetric. As the \(2 \times 2\) zero matrix is also in \(V\), this shows that \(V\) is a vector space (being a subspace of \(\mathbf{M}_{22}\)). Now a matrix \(A\) is symmetric when entries directly across the main diagonal are equal, so each \(2 \times 2\) symmetric matrix has the form

    \[\left[ \begin{array}{rr} a & c \\ c & b \end{array} \right] = a \left[ \begin{array}{rr} 1 & 0 \\ 0 & 0 \end{array} \right] + b \left[ \begin{array}{rr} 0 & 0 \\ 0 & 1 \end{array} \right] + c \left[ \begin{array}{rr} 0 & 1 \\ 1 & 0 \end{array} \right] \nonumber \]

    Hence the set \(B = \left\{ \left[ \begin{array}{rr} 1 & 0 \\ 0 & 0 \end{array} \right] , \left[ \begin{array}{rr} 0 & 0 \\ 0 & 1 \end{array} \right] ,\ \left[ \begin{array}{rr} 0 & 1 \\ 1 & 0 \end{array} \right] \right\}\) spans \(V\), and the reader can verify that \(B\) is linearly independent. Thus \(B\) is a basis of \(V\), so \(dim \; V = 3\).
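
    Independence of \(B\) can be confirmed by machine as well. In the sketch below (NumPy assumed, not part of the text), each matrix in \(B\) is flattened to a vector in \(\mathbb{R}^4\); the rank of the resulting array is \(3\), so \(B\) is independent and, with the spanning argument above, \(dim \; V = 3\).

```python
import numpy as np

# The three proposed basis matrices for the symmetric 2x2 matrices.
B = [
    np.array([[1.0, 0.0], [0.0, 0.0]]),
    np.array([[0.0, 0.0], [0.0, 1.0]]),
    np.array([[0.0, 1.0], [1.0, 0.0]]),
]

stacked = np.vstack([M.flatten() for M in B])  # each matrix as a row of R^4
print(np.linalg.matrix_rank(stacked))          # 3: B is linearly independent
```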

    It is frequently convenient to alter a basis by multiplying each basis vector by a nonzero scalar. The next example shows that this always produces another basis. The proof is left as Exercise [ex:6_3_22].

    Example 018943. Let \(B = \{\mathbf{v}_{1}, \mathbf{v}_{2}, \dots, \mathbf{v}_{n}\}\) be nonzero vectors in a vector space \(V\). Given nonzero scalars \(a_{1}, a_{2}, \dots, a_{n}\), write \(D = \{a_{1}\mathbf{v}_{1}, a_{2}\mathbf{v}_{2}, \dots, a_{n}\mathbf{v}_{n}\}\). If \(B\) is independent or spans \(V\), the same is true of \(D\). In particular, if \(B\) is a basis of \(V\), so also is \(D\).
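
    For the special case \(V = \mathbb{R}^n\) this is easy to see numerically: scaling the columns of an invertible matrix by nonzero scalars multiplies the determinant by the product of those scalars, which is still nonzero. A minimal sketch (NumPy assumed; the basis and scalars are made up for illustration):

```python
import numpy as np

# Columns of V form a basis of R^3 (any invertible matrix will do here).
V = np.array([
    [1.0, 0.0, 2.0],
    [1.0, 3.0, 1.0],
    [0.0, 1.0, -1.0],
])
a = np.array([2.0, -1.0, 0.5])   # nonzero scalars a1, a2, a3

D = V * a                        # scales column j of V by a[j]
print(np.linalg.det(V), np.linalg.det(D))  # det(D) = a1*a2*a3 * det(V) != 0
```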


    This page titled 5.11.1.3: Linear Independence and Dimension is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by W. Keith Nicholson (Lyryx Learning Inc.) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.