
5.2: Linear Independence


    We are now going to define the notion of linear independence of a list of vectors. This concept will be extremely important in the sections that follow, and especially when we introduce bases and the dimension of a vector space.

    Definition 5.2.1: Linearly Independent Vectors

    A list of vectors \((v_1,\ldots,v_m)\) is called linearly independent if the only solution for \(a_1,\ldots,a_m\in\mathbb{F}\) to the equation

    \[ a_1 v_1 + \cdots + a_m v_m = 0 \]

    is \(a_1=\cdots = a_m=0\). In other words, the zero vector can only trivially be written as a linear combination of \((v_1,\ldots,v_m)\).

    Definition 5.2.2: Linearly Dependent Vectors

    A list of vectors \((v_1,\ldots,v_m)\) is called linearly dependent if it is not linearly independent. That is, \((v_1,\ldots,v_m)\) is linearly dependent if there exist \(a_1,\ldots,a_m\in \mathbb{F}\), not all zero, such that

    \[ a_1 v_1 + \cdots + a_m v_m = 0.\]

    Example 5.2.3:

    The vectors \((e_1,\ldots,e_m)\) of Example 5.1.4 are linearly independent. To see this, note that the only solution to the vector equation

    \[ 0 = a_1 e_1 + \cdots + a_m e_m = (a_1,\ldots,a_m) \]

    is \(a_1=\cdots=a_m=0\). Alternatively, we can reinterpret this vector equation as the homogeneous linear system

    \[ \left. \begin{array}{cccccc} a_{1} & & & & = & 0 \\ & a_{2} & & & = & 0 \\ & & \ddots & & & \vdots \\ & & & a_{m} & = & 0 \end{array} \right\}, \]

    which clearly has only the trivial solution. (See Section A.3.2 for the appropriate definitions.)
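
    The equivalence between linear independence and the triviality of this homogeneous system is easy to check by machine. The following Python sketch is an illustrative numerical test (the helper name `is_linearly_independent` is ours, and numpy's floating-point rank computation is a check rather than a proof): a list is linearly independent exactly when the matrix with those vectors as columns has full column rank, i.e., when the homogeneous system above has only the trivial solution.

    ```python
    import numpy as np

    def is_linearly_independent(vectors):
        """Test linear independence numerically.

        Stack the vectors as columns of a matrix A; the homogeneous system
        A a = 0 has only the trivial solution exactly when rank(A) equals
        the number of vectors (full column rank).
        """
        A = np.column_stack(vectors)
        return np.linalg.matrix_rank(A) == A.shape[1]

    # The standard basis vectors e_1, ..., e_m of Example 5.1.4 (here m = 3):
    e = [np.array([1.0, 0.0, 0.0]),
         np.array([0.0, 1.0, 0.0]),
         np.array([0.0, 0.0, 1.0])]
    print(is_linearly_independent(e))  # True
    ```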

    Example 5.2.4:

    The vectors \(v_1=(1,1,1),v_2=(0,1,-1)\), and \(v_3=(1,2,0)\) are linearly dependent. To see this, we need to consider the vector equation

    \[ \begin{aligned} a_1 v_1 + a_2 v_2 + a_3 v_3 &= a_1 (1,1,1) + a_2 (0,1,-1) + a_3 (1,2,0)\\ &= (a_1+a_3,\, a_1+a_2+2a_3,\, a_1-a_2) = (0,0,0). \end{aligned} \]

    Solving for \(a_1,a_2\), and \(a_3\), we see, for example, that \((a_1,a_2,a_3) = (1,1,-1)\) is a nonzero solution. Alternatively, we can reinterpret this vector equation as the homogeneous linear system

    \[ \left. \begin{array}{ccccccc} a_{1} & ~ & ~ & + & a_{3} & = & 0 \\ a_{1} & + & a_{2} & + & 2a_{3} & = & 0 \\ a_{1} & - & a_{2} & ~ & ~ & = & 0 \end{array} \right\}. \]

    Using the techniques of Section A.3, we see that solving this linear system is equivalent to solving the following linear system:

    \[ \left. \begin{array}{ccccccc} a_{1} & ~ & ~ & + & a_{3} & = & 0 \\ ~ & ~ & a_{2} & + & a_{3} & = & 0 \end{array} \right\}. \]

    Note that this new linear system clearly has infinitely many solutions. In particular, the set of all solutions is given by

    \[ N = \{ (a_1,a_2,a_3) \in \mathbb{F}^{3} \ | \ a_{1} = a_{2} = -a_{3} \} = \Span((1,1,-1)). \]
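
    Readers who want to verify this computation can do so in exact arithmetic. The following sympy sketch (an illustrative check, not part of the text) row-reduces the coefficient matrix of the homogeneous system and computes its nullspace:

    ```python
    from sympy import Matrix

    # Coefficient matrix of the homogeneous system from Example 5.2.4;
    # its columns are v1 = (1,1,1), v2 = (0,1,-1), v3 = (1,2,0).
    A = Matrix([[1, 0, 1],
                [1, 1, 2],
                [1, -1, 0]])

    print(A.rref())       # rows encode a1 + a3 = 0 and a2 + a3 = 0
    print(A.nullspace())  # one basis vector, proportional to (1, 1, -1)
    ```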

    Example 5.2.5:

    The vectors \((1,z,\ldots,z^m)\) in the vector space \(\mathbb{F}_m[z]\) are linearly independent. Requiring that

    \[ a_0 \cdot 1 + a_1 z +\cdots+ a_m z^m = 0 \]

    means that the polynomial on the left should be zero for all \(z\in \mathbb{F}\). This is only possible if \(a_0=a_1=\cdots=a_m=0\), since a nonzero polynomial of degree at most \(m\) has at most \(m\) roots, whereas \(\mathbb{F}\) is infinite.
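
    The vanishing-coefficient criterion can also be checked symbolically. A small sympy sketch (illustrative only, with \(m = 2\) for brevity) confirms that requiring the polynomial to be identically zero forces every coefficient to vanish:

    ```python
    from sympy import symbols, Poly, solve

    z = symbols('z')
    a0, a1, a2 = symbols('a0 a1 a2')

    # Require a0*1 + a1*z + a2*z^2 to be the zero polynomial (m = 2):
    p = Poly(a0 + a1*z + a2*z**2, z)

    # A polynomial is identically zero iff every coefficient vanishes.
    print(solve(p.all_coeffs(), [a0, a1, a2]))  # {a0: 0, a1: 0, a2: 0}
    ```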

    An important consequence of the notion of linear independence is the fact that any vector in the span of a given list of linearly independent vectors can be uniquely written as a linear combination.

    Lemma 5.2.6

    The list of vectors \((v_1,\ldots,v_m)\) is linearly independent if and only if every \(v\in\Span(v_1,\ldots,v_m)\) can be uniquely written as a linear combination of \((v_1,\ldots,v_m)\).

    Proof

    \(( "\Longrightarrow" )\) Assume that \((v_1,\ldots,v_m)\) is a linearly independent list of vectors.

    Suppose there are two ways of writing \(v\in\Span(v_1,\ldots,v_m)\) as a linear combination of the \(v_i\):

    \[ v = a_1 v_1 + \cdots + a_m v_m \quad \text{and} \quad v = a_1' v_1 + \cdots + a_m' v_m. \]

    Subtracting the two equations yields \(0=(a_1-a_1')v_1 + \cdots + (a_m-a_m')v_m\).

    Since \((v_1,\ldots,v_m)\) is linearly independent, the only solution to this equation is \(a_1-a_1'=0,\ldots,a_m-a_m'=0\), or equivalently \(a_1=a_1',\ldots,a_m=a_m'\).

    \(( "\Longleftarrow" )\) Now assume that, for every \(v \in \Span(v_1,\ldots,v_m)\), there are unique \(a_1,\ldots,a_m \in \mathbb{F}\) such that

    \[ v = a_1 v_1 + \cdots + a_m v_m. \]

    This implies, in particular, that the only way the zero vector \(v=0\) can be written as a linear combination of \(v_1,\ldots,v_m\) is with \(a_1=\cdots=a_m=0\). This shows that \((v_1,\ldots,v_m)\) is linearly independent.

    \(\square\)
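
    Lemma 5.2.6 says that coordinates with respect to a linearly independent list are well defined. The following sympy sketch makes this concrete (the vectors and the choice \(v = 2v_1 + 3v_2\) are our own illustrative data): the linear system for the coefficients has exactly one solution.

    ```python
    from sympy import Matrix, linsolve, symbols

    # A linearly independent list and a vector in its span:
    v1, v2 = Matrix([1, 1, 1]), Matrix([0, 1, -1])
    v = 2*v1 + 3*v2

    # Solve a1*v1 + a2*v2 = v; by Lemma 5.2.6 the solution is unique.
    a1, a2 = symbols('a1 a2')
    A = Matrix.hstack(v1, v2)
    print(linsolve((A, v), [a1, a2]))  # {(2, 3)}
    ```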

    It is clear that if \((v_1,\ldots,v_m)\) is a list of linearly independent vectors, then the list \((v_1,\ldots,v_{m-1})\) is also linearly independent.

    For the next lemma, we introduce the following notation: If we want to drop a vector \(v_j\) from a given list \((v_1,\ldots,v_m)\) of vectors, then we indicate the dropped vector by a hat. I.e., we write

    \[ (v_1,\ldots,\hat{v}_j,\ldots,v_m) = (v_1,\ldots,v_{j - 1}, v_{j + 1},\ldots,v_m). \]

    Lemma 5.2.7: Linear Dependence Lemma

    If \((v_1,\ldots,v_m)\) is linearly dependent and \(v_1\neq 0\), then there exists an index \(j\in \{2,\ldots,m\}\) such that the following two conditions hold.

    1. \(v_j\in \Span(v_1,\ldots, v_{j-1})\).
    2. If \(v_j\) is removed from \((v_1,\ldots,v_m)\), then \(\Span(v_1,\ldots,\hat{v}_j,\ldots,v_m) = \Span(v_1,\ldots,v_m)\).

    Proof

    Since \((v_1,\ldots,v_m)\) is linearly dependent, there exist \(a_1,\ldots,a_m\in \mathbb{F}\), not all zero, such that \(a_1v_1+\cdots+ a_mv_m =0\). Since by assumption \(v_1\neq 0\), not all of \(a_2,\ldots,a_m\) can be zero: otherwise the relation would reduce to \(a_1 v_1 = 0\) with \(a_1 \neq 0\), forcing \(v_1 = 0\). Let \(j\in\{2,\ldots,m\}\) be the largest index such that \(a_j\neq 0\). Then we have

    \[ v_j = -\frac{a_1}{a_j} v_1-\cdots - \frac{a_{j-1}}{a_j} v_{j-1}, \label{5.2.1} \]

    which implies Part 1.

    Let \(v\in\Span(v_1,\ldots,v_m)\). This means, by definition, that there exist scalars \(b_1,\ldots,b_m\in\mathbb{F}\) such that

    \[ v = b_1 v_1 + \cdots + b_m v_m. \]

    In this expression, the vector \(v_j\) determined in Part 1 can be replaced by the right-hand side of Equation \ref{5.2.1}, so that \(v\) is written as a linear combination of \((v_1,\ldots,\hat{v}_j,\ldots,v_m)\). Hence, \(\Span(v_1,\ldots,\hat{v}_j,\ldots,v_m) = \Span(v_1,\ldots,v_m)\).

    \(\square\)
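
    The proof of the Linear Dependence Lemma is constructive and can be carried out mechanically. The sketch below (the function name `dependence_witness` is ours) extracts a dependence relation from the nullspace, picks the largest index \(j\) with \(a_j \neq 0\), and returns the coefficients of Equation \ref{5.2.1}, using the dependent list of Example 5.2.4:

    ```python
    from sympy import Matrix

    def dependence_witness(vectors):
        """Follow the proof of Lemma 5.2.7 for a linearly dependent list
        with v_1 != 0: find the largest index j (0-based here) with
        a_j != 0 in a dependence relation, and return j together with
        the coefficients expressing v_j in terms of its predecessors."""
        A = Matrix.hstack(*vectors)
        a = A.nullspace()[0]  # a nonzero solution of a1 v1 + ... + am vm = 0
        j = max(i for i, c in enumerate(a) if c != 0)
        coeffs = [-a[i] / a[j] for i in range(j)]  # Equation (5.2.1)
        return j, coeffs

    # The dependent list of Example 5.2.4:
    v = [Matrix([1, 1, 1]), Matrix([0, 1, -1]), Matrix([1, 2, 0])]
    print(dependence_witness(v))  # (2, [1, 1]), i.e. v3 = 1*v1 + 1*v2
    ```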

    Example 5.2.8:

    The list \((v_1,v_2,v_3)=((1,1),(1,2),(1,0))\) of vectors spans \(\mathbb{R}^2\).

    To see this, take any vector \(v=(x,y)\in \mathbb{R}^2\). We want to show that \(v\) can be written as a linear combination of \((1,1),(1,2),(1,0)\), i.e., that there exist scalars \(a_{1}, a_{2}, a_{3} \in \mathbb{R}\) such that

    \[ v = a_1 (1,1) + a_2 (1,2) + a_3 (1,0),\]

    or equivalently that

    \[ (x,y) = (a_1+a_2+a_3,a_1+2a_2). \]

    Clearly \(a_1=y\), \(a_2=0\), and \(a_3=x-y\) form a solution for any choice of \(x, y\in \mathbb{R}\), and so \(\mathbb{R}^2=\Span((1,1),(1,2),(1,0))\). However, note that

    \[ 2 (1,1) - (1,2) - (1,0) = (0,0), \label{5.2.2} \]

    which shows that the list \(((1,1),(1,2),(1,0))\) is linearly dependent.

    The Linear Dependence Lemma (Lemma 5.2.7) thus states that one of the vectors can be dropped from \(((1,1),(1,2),(1,0))\) and that the resulting list of vectors will still span \(\mathbb{R}^2\). Indeed, by Equation \ref{5.2.2},

    \[ v_3 = (1,0) = 2(1,1)-(1,2) = 2v_1-v_2, \]

    and so \(\Span((1,1),(1,2),(1,0))=\Span((1,1),(1,2)).\)
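
    A quick numerical confirmation of this equality of spans (illustrative only): both lists span \(\mathbb{R}^2\) exactly when the corresponding matrices of column vectors have rank 2.

    ```python
    import numpy as np

    v1, v2, v3 = (1, 1), (1, 2), (1, 0)

    # Each list spans R^2 iff its matrix of column vectors has rank 2.
    full = np.column_stack([v1, v2, v3])
    dropped = np.column_stack([v1, v2])
    print(np.linalg.matrix_rank(full), np.linalg.matrix_rank(dropped))  # 2 2
    ```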

    The next result shows that linearly independent lists of vectors that span a finite-dimensional vector space are the smallest possible spanning sets.

    Theorem 5.2.9

    Let \(V\) be a finite-dimensional vector space. Suppose that \((v_1,\ldots,v_m)\) is a linearly independent list of vectors that spans \(V\), and let \((w_1,\ldots,w_n)\) be any list that spans \(V\). Then \(m\le n\).

    Proof

    The proof uses the following iterative procedure: start with the spanning list \(\mathcal{S}_0=(w_1,\ldots,w_n)\), so that \(V = \Span(\mathcal{S}_0)\). At the \(k^{\text{th}}\) step of the procedure, we construct a new list \(\mathcal{S}_k\) by replacing some vector \(w_{j_k}\) by the vector \(v_k\) such that \(\mathcal{S}_k\) still spans \(V\). Repeating this for all \(v_k\) then produces a new list \(\mathcal{S}_m\) of length \(n\) that contains each of \(v_1,\ldots,v_m\), which then proves that \(m\le n\). Let us now discuss each step in this procedure in detail.

    Step 1. Since \((w_1,\ldots,w_n)\) spans \(V\), the vector \(v_1\) is a linear combination of \(w_1,\ldots,w_n\), and so the list \((v_1,w_1,\ldots,w_n)\) is linearly dependent. Moreover, \(v_1\neq 0\) because \((v_1,\ldots,v_m)\) is linearly independent. By Lemma 5.2.7, there exists an index \(j_1\) such that

    \[ w_{j_1} \in \Span(v_1,w_1,\ldots, w_{j_1-1}).\]

    Hence \(\mathcal{S}_1=(v_1,w_1,\ldots,\hat{w}_{j_1},\ldots,w_n)\) spans \(V\). In this step, we added the vector \(v_1\) and removed the vector \(w_{j_1}\) from \(\mathcal{S}_0\).

    Step \(k\). Suppose that we have already added \(v_1,\ldots,v_{k-1}\) to our spanning list and removed the vectors \(w_{j_1},\ldots,w_{j_{k-1}}\). If \(k\leq m\), it is impossible that all of the vectors \(w_1,\ldots, w_n\) have been removed by this step: otherwise we would have \(V=\Span(v_1,\ldots, v_{k-1})\), which would allow \(v_k\) to be expressed as a linear combination of \(v_1,\ldots,v_{k-1}\), in contradiction with the assumed linear independence of \((v_1,\ldots,v_m)\).

    Now, call the list reached at this step \(\mathcal{S}_{k-1}\), and note that \(V = \Span(\mathcal{S}_{k-1})\). Add the vector \(v_k\) to \(\mathcal{S}_{k-1}\). By the same arguments as before, adjoining the extra vector \(v_k\) to the spanning list \(\mathcal{S}_{k-1}\) yields a list of linearly dependent vectors. Hence, by Lemma 5.2.7, there exists an index \(j_k\) such that \(\mathcal{S}_{k-1}\) with \(v_k\) added and \(w_{j_k}\) removed still spans \(V\). The fact that \((v_1,\ldots,v_k)\) is linearly independent ensures that the vector removed is indeed among the \(w_j\). Call the new list \(\mathcal{S}_k\), and note that \(V = \Span(\mathcal{S}_{k})\).

    The final list \(\mathcal{S}_m\) is \(\mathcal{S}_0\) but with each \(v_1,\ldots,v_m\) added and each \(w_{j_1},\ldots, w_{j_m}\) removed. Moreover, note that \(\mathcal{S}_{m}\) has length \(n\) and still spans \(V\). It follows that \(m\le n\).

    \(\square\)
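
    The replacement procedure in this proof can be run on concrete vectors. The following Python sketch is our own rendering of the argument (the function name `exchange` and its rank-based membership test are not from the text): at step \(k\) it adjoins \(v_k\) and removes a later vector that lies in the span of its predecessors, exactly as Lemma 5.2.7 guarantees is possible.

    ```python
    from sympy import Matrix

    def exchange(vs, ws):
        """Replacement procedure from the proof of Theorem 5.2.9: insert
        each v_k into the spanning list and drop some w_{j_k} so that the
        list keeps spanning V.  Succeeding for every v_k shows len(vs) <= len(ws)."""
        spanning = list(ws)
        for k, v in enumerate(vs):
            # Adjoin v_k just after v_1, ..., v_{k-1}:
            spanning.insert(k, v)
            # By Lemma 5.2.7 some later vector (necessarily one of the w's)
            # lies in the span of its predecessors; remove the first such one.
            for j in range(k + 1, len(spanning)):
                prefix = Matrix.hstack(*spanning[:j])
                if prefix.rank() == Matrix.hstack(*spanning[:j + 1]).rank():
                    del spanning[j]
                    break
        return spanning

    vs = [Matrix([1, 0]), Matrix([0, 1])]                  # linearly independent
    ws = [Matrix([1, 1]), Matrix([1, 2]), Matrix([1, 0])]  # spans R^2 (Example 5.2.8)
    print([list(u) for u in exchange(vs, ws)])  # [[1, 0], [0, 1], [1, 0]]
    ```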


    This page titled 5.2: Linear Independence is shared under a not declared license and was authored, remixed, and/or curated by Isaiah Lankham, Bruno Nachtergaele, & Anne Schilling.
