4.5: Linear Independence


Learning Objectives
  • Determine if a set of vectors is linearly independent
  • Evaluate the linear independence of vectors through theoretical analysis and solving systems of equations.

We now turn our attention to the following question: which linear combinations of a given set of vectors \left\{ \vec{u}_{1},\cdots ,\vec{u}_{k}\right\} in \mathbb{R}^{n} yield the zero vector? Clearly 0\vec{u}_{1}+0\vec{u}_{2}+\cdots +0\vec{u}_{k}=\vec{0}, but is it possible to have \sum_{i=1}^{k}a_{i}\vec{u}_{i}=\vec{0} without all coefficients being zero?

You can create examples where this easily happens. For example, if \vec{u}_{1}=\vec{u}_{2}, then 1\vec{u}_{1}-\vec{u}_{2}+0\vec{u}_{3}+\cdots +0\vec{u}_{k}=\vec{0}, no matter which vectors \left\{ \vec{u}_{3},\cdots ,\vec{u}_{k}\right\} are used. But sometimes it can be more subtle.

Example 4.5.1: Linearly Dependent Set of Vectors

Consider the vectors \vec{u}_{1}=\left[ \begin{array}{rrr} 0 & 1 & -2 \end{array} \right]^{T}, \vec{u}_{2}=\left[ \begin{array}{rrr} 1 & 1 & 0 \end{array} \right]^{T}, \vec{u}_{3}=\left[ \begin{array}{rrr} 2 & 3 & 2 \end{array} \right]^{T}, and \vec{u}_{4}=\left[ \begin{array}{rrr} 1 & 2 & 0 \end{array} \right]^{T} in \mathbb{R}^{3}.

Then verify that 1\vec{u}_{1}+0\vec{u}_{2}+\vec{u}_{3}-2\vec{u}_{4}=\vec{0}.

You can see that the linear combination does yield the zero vector but has some non-zero coefficients. Thus we define a set of vectors to be linearly dependent if this happens.

Definition 4.5.1: Linearly Dependent Set of Vectors

A set of non-zero vectors \left\{ \vec{u}_{1},\cdots ,\vec{u}_{k}\right\} in \mathbb{R}^{n} is said to be linearly dependent if some linear combination of these vectors, with not all coefficients equal to zero, yields the zero vector.

Note that if \sum_{i=1}^{k}a_{i}\vec{u}_{i}=\vec{0} and some coefficient is non-zero, say a_{1}\neq 0, then \vec{u}_{1}=-\frac{1}{a_{1}}\sum_{i=2}^{k}a_{i}\vec{u}_{i}, and thus \vec{u}_{1} is in the span of the other vectors. The converse clearly holds as well, so a set of vectors is linearly dependent precisely when one of its vectors is in the span of the other vectors of that set.

In particular, you can show that the vector \vec{u}_{1} in the above example is in the span of the vectors \left\{ \vec{u}_{2},\vec{u}_{3},\vec{u}_{4}\right\}.

If a set of vectors is NOT linearly dependent, then any linear combination of these vectors which yields the zero vector must use all zero coefficients. This is a very important notion, and we give it its own name: linear independence.

Definition 4.5.2: Linearly Independent Set of Vectors

A set of non-zero vectors \left\{ \vec{u}_{1},\cdots ,\vec{u}_{k}\right\} in \mathbb{R}^{n} is said to be linearly independent if whenever \sum_{i=1}^{k}a_{i}\vec{u}_{i}=\vec{0} it follows that each a_{i}=0.

Note also that we require all vectors to be non-zero to form a linearly independent set.

To view this in a more familiar setting, form the n \times k matrix A having these vectors as columns. Then all we are saying is that the set \left\{ \vec{u}_{1},\cdots ,\vec{u}_{k}\right\} is linearly independent precisely when AX=0 has only the trivial solution.
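
As an illustration (a minimal sketch, not part of the original text, assuming Python with the sympy library; the helper name is_linearly_independent is made up for this sketch), the check can be carried out by placing the vectors as columns of a matrix and asking whether AX=0 has a nontrivial solution:

```python
# Sketch: test linear independence by checking whether AX = 0 has only the
# trivial solution X = 0 (empty nullspace).
from sympy import Matrix

def is_linearly_independent(vectors):
    """vectors: a list of length-n lists; returns True if the set is independent."""
    A = Matrix(vectors).T           # n x k matrix whose columns are the given vectors
    return len(A.nullspace()) == 0  # no nontrivial solutions of AX = 0

# The vectors of Example 4.5.1 satisfy u1 + u3 - 2*u4 = 0, so the set is dependent.
print(is_linearly_independent([[0, 1, -2], [1, 1, 0], [2, 3, 2], [1, 2, 0]]))  # False
```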

Here is an example.

Example 4.5.2: Linearly Independent Vectors

Consider the vectors \vec{u}=\left[ \begin{array}{rrr} 1 & 1 & 0 \end{array} \right]^{T}, \vec{v}=\left[ \begin{array}{rrr} 1 & 0 & 1 \end{array} \right]^{T}, and \vec{w}=\left[ \begin{array}{rrr} 0 & 1 & 1 \end{array} \right]^{T} in \mathbb{R}^{3}. Verify whether the set \left\{ \vec{u},\vec{v},\vec{w}\right\} is linearly independent.

Solution

Suppose that we have a linear combination a\vec{u}+b\vec{v}+c\vec{w}=\vec{0}. Comparing components gives the equations a+b=0, a+c=0, and b+c=0, and these equations force a=b=c=0.

As mentioned above, you can equivalently form the 3 \times 3 matrix A=\left[ \begin{array}{rrr} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{array} \right], and show that AX=0 has only the trivial solution.

This means the set \left\{ \vec{u},\vec{v},\vec{w}\right\} is linearly independent.
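
Continuing the sketch above (again assuming sympy), the same conclusion can be read off from the rank of A:

```python
# Sketch: verify Example 4.5.2 by checking that A has full column rank.
from sympy import Matrix

A = Matrix([[1, 1, 0],
            [1, 0, 1],
            [0, 1, 1]])   # columns are u, v, w
print(A.rank())           # 3, equal to the number of columns
print(A.nullspace())      # []: AX = 0 has only the trivial solution
```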

Example 4.5.3

Consider the vectors: v1=[523]T, v2=[14135]T, and v3=[355]T in R3. Are these vectors linearly independent?

Solution
  • Video solution (8:08): determining whether the given set of vectors in \mathbb{R}^{3} is linearly independent.

In terms of spanning, a set of vectors is linearly independent if it does not contain unnecessary vectors; that is, no vector in the set is in the span of the others.

Thus we put all this together in the following important theorem.

Theorem 4.5.1: Linear Independence as a Linear Combination

Let \left\{\vec{u}_{1},\cdots ,\vec{u}_{k}\right\} be a collection of vectors in \mathbb{R}^{n}. Then the following are equivalent:

  1. The set is linearly independent; that is, whenever \sum_{i=1}^{k}a_{i}\vec{u}_{i}=\vec{0}\nonumber it follows that each coefficient a_{i}=0.
  2. No vector is in the span of the others.
  3. The system of linear equations AX=0 has only the trivial solution, where A is the n \times k matrix having these vectors as columns.

The third condition of this theorem is useful, as it allows us to use the reduced row-echelon form of a matrix to determine if a set of vectors is linearly independent. Let the vectors be the columns of a matrix A. Find the reduced row-echelon form of A. If each column has a leading one, then the vectors are linearly independent; otherwise, they are linearly dependent.
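
A hedged sketch of this procedure (assuming sympy): Matrix.rref() returns both the reduced row-echelon form and the indices of the pivot columns, so the test is whether every column appears as a pivot column. The vectors below are made up for illustration only:

```python
# Sketch: a set is independent exactly when every column of A is a pivot column.
from sympy import Matrix

vectors = [[1, 2, 3], [2, 4, 6], [0, 1, 1]]   # hypothetical vectors; the second is twice the first
A = Matrix(vectors).T                          # vectors as columns of a 3 x 3 matrix
R, pivots = A.rref()
print(pivots)                                  # (0, 2): the middle column (index 1) has no leading one
print(len(pivots) == A.cols)                   # False, so this set is linearly dependent
```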

Sometimes we refer to the condition regarding sums as follows: The set of vectors, \left\{ \vec{u}_{1},\cdots ,\vec{u}_{k}\right\} is linearly independent if and only if there is no nontrivial linear combination which equals the zero vector. A nontrivial linear combination is one in which not all the scalars equal zero. Similarly, a trivial linear combination is one in which all scalars equal zero.

Here is a detailed example in \mathbb{R}^{4}.

Example 4.5.4: Linear Independence

Determine whether the set of vectors given by \left\{ \left[ \begin{array}{r} 1 \\ 2 \\ 3 \\ 0 \end{array} \right], \; \left[ \begin{array}{r} 2 \\ 1 \\ 0 \\ 1 \end{array} \right] , \; \left[ \begin{array}{r} 0 \\ 1 \\ 1 \\ 2 \end{array} \right] , \; \left[ \begin{array}{r} 3 \\ 2 \\ 2 \\ 0 \end{array} \right] \right\}\nonumber is linearly independent. If it is linearly dependent, express one of the vectors as a linear combination of the others.

Solution

In this case the matrix of the corresponding homogeneous system of linear equations is \left[ \begin{array}{rrrr|r} 1 & 2 & 0 & 3 & 0\\ 2 & 1 & 1 & 2 & 0 \\ 3 & 0 & 1 & 2 & 0 \\ 0 & 1 & 2 & 0 & 0 \end{array} \right]\nonumber

The reduced row-echelon form is \left[ \begin{array}{rrrr|r} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{array} \right]\nonumber

and so every column is a pivot column and the corresponding system AX=0 only has the trivial solution. Therefore, these vectors are linearly independent and there is no way to obtain one of the vectors as a linear combination of the others.

Consider another example.

Example 4.5.5: Linear Independence

Determine whether the set of vectors given by \left\{ \left[ \begin{array}{r} 1 \\ 2 \\ 3 \\ 0 \end{array} \right], \; \left[ \begin{array}{r} 2 \\ 1 \\ 0 \\ 1 \end{array} \right], \; \left[ \begin{array}{r} 0 \\ 1 \\ 1 \\ 2 \end{array} \right], \; \left[ \begin{array}{r} 3 \\ 2 \\ 2 \\ -1 \end{array} \right] \right\}\nonumber is linearly independent. If it is linearly dependent, express one of the vectors as a linear combination of the others.

Solution

Form the 4 \times 4 matrix A having these vectors as columns: A= \left[ \begin{array}{rrrr} 1 & 2 & 0 & 3 \\ 2 & 1 & 1 & 2 \\ 3 & 0 & 1 & 2 \\ 0 & 1 & 2 & -1 \end{array} \right]\nonumber Then by Theorem 4.5.1, the given set of vectors is linearly independent exactly if the system AX=0 has only the trivial solution.

The augmented matrix for this system and corresponding reduced row-echelon form are given by \left[ \begin{array}{rrrr|r} 1 & 2 & 0 & 3 & 0 \\ 2 & 1 & 1 & 2 & 0 \\ 3 & 0 & 1 & 2 & 0 \\ 0 & 1 & 2 & -1 & 0 \end{array} \right] \rightarrow \cdots \rightarrow \left[ \begin{array}{rrrr|r} 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array} \right]\nonumber Not all the columns of the coefficient matrix are pivot columns and so the vectors are not linearly independent. In this case, we say the vectors are linearly dependent.

It follows that there are infinitely many solutions to AX=0, one of which is \left[ \begin{array}{r} 1 \\ 1 \\ -1 \\ -1 \end{array} \right]\nonumber Therefore we can write 1\left[ \begin{array}{r} 1 \\ 2 \\ 3 \\ 0 \end{array} \right] +1\left[ \begin{array}{r} 2 \\ 1 \\ 0 \\ 1 \end{array} \right] -1 \left[ \begin{array}{r} 0 \\ 1 \\ 1 \\ 2 \end{array} \right] -1 \left[ \begin{array}{r} 3 \\ 2 \\ 2 \\ -1 \end{array} \right] = \left[ \begin{array}{r} 0 \\ 0 \\ 0 \\ 0 \end{array} \right]\nonumber

This can be rearranged as follows 1\left[ \begin{array}{r} 1 \\ 2 \\ 3 \\ 0 \end{array} \right] +1\left[ \begin{array}{r} 2 \\ 1 \\ 0 \\ 1 \end{array} \right] -1 \left[ \begin{array}{r} 0 \\ 1 \\ 1 \\ 2 \end{array} \right] =\left[ \begin{array}{r} 3 \\ 2 \\ 2 \\ -1 \end{array} \right]\nonumber This gives the last vector as a linear combination of the first three vectors.

Notice that we could rearrange this equation to write any of the four vectors as a linear combination of the other three.
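
As a sketch (assuming sympy), the dependence relation found above can also be read off from a basis of the null space of A:

```python
# Sketch: recover a dependence relation for Example 4.5.5 from the nullspace of A.
from sympy import Matrix

A = Matrix([[1, 2, 0, 3],
            [2, 1, 1, 2],
            [3, 0, 1, 2],
            [0, 1, 2, -1]])   # the four given vectors as columns
basis = A.nullspace()          # one basis vector, since there is one non-pivot column
print(basis[0].T)              # a scalar multiple of [1, 1, -1, -1], the coefficients used above
```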

When given a linearly independent set of vectors, we can determine if related sets are linearly independent.

Example 4.5.6: Related Sets of Vectors

Let \{ \vec{u},\vec{v},\vec{w}\} be an independent set of vectors in \mathbb{R}^n. Is \{\vec{u}+\vec{v}, 2\vec{u}+\vec{w}, \vec{v}-5\vec{w}\} linearly independent?

Solution

Suppose a(\vec{u}+\vec{v}) + b(2\vec{u}+\vec{w}) + c(\vec{v}-5\vec{w})=\vec{0}_n for some a,b,c\in\mathbb{R}. Then (a+2b)\vec{u} + (a+c)\vec{v} + (b-5c)\vec{w}=\vec{0}_n.\nonumber

Since \{\vec{u},\vec{v},\vec{w}\} is independent, \begin{aligned} a + 2b & = 0 \\ a + c & = 0 \\ b - 5c & = 0 \end{aligned}

This system of three equations in three variables has the unique solution a=b=c=0. Therefore, \{\vec{u}+\vec{v}, 2\vec{u}+\vec{w}, \vec{v}-5\vec{w}\} is independent.
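
For instance (a sketch, assuming sympy), the coefficient matrix of this small system is invertible, which confirms that the zero solution is the only one:

```python
# Sketch: the system a + 2b = 0, a + c = 0, b - 5c = 0 written in matrix form.
from sympy import Matrix

M = Matrix([[1, 2, 0],
            [1, 0, 1],
            [0, 1, -5]])
print(M.det())        # 9, which is nonzero, so a = b = c = 0 is the only solution
print(M.nullspace())  # []
```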

The following corollary follows from the fact that if the coefficient matrix of a homogeneous system of linear equations has more columns than rows (more variables than equations), the system has infinitely many solutions.

Corollary 4.5.1: Linear Dependence in \mathbb{R}^{n}

Let \left\{ \vec{u}_{1},\cdots ,\vec{u}_{k}\right\} be a set of vectors in \mathbb{R}^{n}. If k>n, then the set is linearly dependent (i.e. NOT linearly independent).

Proof

Form the n \times k matrix A having the vectors \left\{ \vec{u}_{1},\cdots ,\vec{u}_{k}\right\} as its columns and suppose k > n. Then A has rank r \leq n < k, so the system AX=0 has a nontrivial solution. By Theorem 4.5.1, the set is not linearly independent.

Example 4.5.7: Linear Dependence

Consider the vectors \left\{ \left[ \begin{array}{r} 1 \\ 4 \end{array} \right], \left[ \begin{array}{r} 2 \\ 3 \end{array} \right], \left[ \begin{array}{r} 3 \\ 2 \end{array} \right] \right\}\nonumber Are these vectors linearly independent?

Solution

This set contains three vectors in \mathbb{R}^2. By Corollary 4.5.1, these vectors are linearly dependent. In fact, we can write (-1) \left[ \begin{array}{r} 1 \\ 4 \end{array} \right] + (2) \left[ \begin{array}{r} 2 \\ 3 \end{array} \right] = \left[ \begin{array}{r} 3 \\ 2 \end{array} \right]\nonumber showing that this set is linearly dependent.
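
A quick numerical sanity check of both the corollary and the displayed combination (a sketch, assuming sympy):

```python
# Sketch: three vectors in R^2 are automatically dependent (Corollary 4.5.1).
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [4, 3, 2]])    # 2 x 3: more columns (vectors) than rows
print(A.rank() < A.cols)   # True, so the columns are linearly dependent
print(-1 * Matrix([1, 4]) + 2 * Matrix([2, 3]))   # Matrix([[3], [2]]), the third vector
```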

The third vector in the previous example is in the span of the first two vectors, and we found a way to write it as a linear combination of them. When the spanning set is linearly independent, such a linear combination is in fact unique, as the following theorem shows.

Theorem 4.5.2: Unique Linear Combination

Let U \subseteq\mathbb{R}^n be an independent set. Then any vector \vec{x}\in\mathrm{span}(U) can be written uniquely as a linear combination of vectors of U.

Proof

To prove this theorem, we will show that two linear combinations of vectors in U that equal \vec{x} must be the same. Let U =\{ \vec{u}_1, \vec{u}_2, \ldots, \vec{u}_k\}. Suppose that there is a vector \vec{x}\in \mathrm{span}(U) such that \begin{aligned} \vec{x} & = s_1\vec{u}_1 + s_2\vec{u}_2 + \cdots + s_k\vec{u}_k, \mbox{ for some } s_1, s_2, \ldots, s_k\in\mathbb{R}, \mbox{ and} \\ \vec{x} & = t_1\vec{u}_1 + t_2\vec{u}_2 + \cdots + t_k\vec{u}_k, \mbox{ for some } t_1, t_2, \ldots, t_k\in\mathbb{R}.\end{aligned} Then \vec{0}_n=\vec{x}-\vec{x} = (s_1-t_1)\vec{u}_1 + (s_2-t_2)\vec{u}_2 + \cdots + (s_k-t_k)\vec{u}_k.

Since U is independent, the only linear combination that vanishes is the trivial one, so s_i-t_i=0 for all i, 1\leq i\leq k.

Therefore, s_i=t_i for all i, 1\leq i\leq k, and the representation is unique.

Suppose that \vec{u},\vec{v} and \vec{w} are nonzero vectors in \mathbb{R}^3, and that \{ \vec{v},\vec{w}\} is independent. Consider the set \{ \vec{u},\vec{v},\vec{w}\}. When can we know that this set is independent? It turns out that this follows exactly when \vec{u}\not\in\mathrm{span}\{\vec{v},\vec{w}\}.

Example 4.5.8

Suppose that \vec{u},\vec{v} and \vec{w} are nonzero vectors in \mathbb{R}^3, and that \{ \vec{v},\vec{w}\} is independent. Prove that \{ \vec{u},\vec{v},\vec{w}\} is independent if and only if \vec{u}\not\in\mathrm{span}\{\vec{v},\vec{w}\}.

If \vec{u}\in\mathrm{span}\{\vec{v},\vec{w}\}, then there exist a,b\in\mathbb{R} so that \vec{u}=a\vec{v} + b\vec{w}. This implies that \vec{u}-a\vec{v} - b\vec{w}=\vec{0}_3, so \vec{u}-a\vec{v} - b\vec{w} is a nontrivial linear combination of \{ \vec{u},\vec{v},\vec{w}\} that vanishes, and thus \{ \vec{u},\vec{v},\vec{w}\} is dependent.

Now suppose that \vec{u}\not\in\mathrm{span}\{\vec{v},\vec{w}\}, and suppose that there exist a,b,c\in\mathbb{R} such that a\vec{u}+b\vec{v}+c\vec{w}=\vec{0}_3. If a\neq 0, then \vec{u}=-\frac{b}{a}\vec{v}-\frac{c}{a}\vec{w}, and \vec{u}\in\mathrm{span}\{\vec{v},\vec{w}\}, a contradiction. Therefore, a=0, implying that b\vec{v}+c\vec{w}=\vec{0}_3. Since \{ \vec{v},\vec{w}\} is independent, b=c=0, and thus a=b=c=0, i.e., the only linear combination of \vec{u},\vec{v} and \vec{w} that vanishes is the trivial one.

Therefore, \{ \vec{u},\vec{v},\vec{w}\} is independent.

Consider the following useful theorem.

Theorem 4.5.3: Invertible Matrices

Let A be an invertible n \times n matrix. Then the columns of A are independent and span \mathbb{R}^n. Similarly, the rows of A are independent and span the set of all 1 \times n vectors.

This theorem also allows us to determine if a matrix is invertible. If an n \times n matrix A has columns which are independent, or span \mathbb{R}^n, then it follows that A is invertible. If it has rows that are independent, or span the set of all 1 \times n vectors, then A is invertible.
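
As a sketch of this connection (assuming sympy), each of the equivalent conditions can be checked directly for the matrix from Example 4.5.2:

```python
# Sketch: for a square matrix, invertibility, independent columns, and spanning coincide.
from sympy import Matrix

A = Matrix([[1, 1, 0],
            [1, 0, 1],
            [0, 1, 1]])
print(A.det() != 0)             # True: A is invertible
print(len(A.nullspace()) == 0)  # True: the columns are linearly independent
print(A.rank() == A.rows)       # True: the columns (and the rows) span R^3
```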

A Short Application to Chemistry

The following section applies the concepts of spanning and linear independence to the subject of chemistry.

When working with chemical reactions, there are sometimes a large number of reactions and some are in a sense redundant. Suppose you have the following chemical reactions. \begin{array}{c} CO+\frac{1}{2}O_{2}\rightarrow CO_{2} \\ H_{2}+\frac{1}{2}O_{2}\rightarrow H_{2}O \\ CH_{4}+\frac{3}{2}O_{2}\rightarrow CO+2H_{2}O \\ CH_{4}+2O_{2}\rightarrow CO_{2}+2H_{2}O \end{array}\nonumber There are four chemical reactions here but they are not independent reactions. There is some redundancy. What are the independent reactions? Is there a way to consider a shorter list of reactions? To analyze this situation, we can write the reactions in a matrix as follows \left[ \begin{array}{cccccc} CO & O_{2} & CO_{2} & H_{2} & H_{2}O & CH_{4} \\ 1 & 1/2 & -1 & 0 & 0 & 0 \\ 0 & 1/2 & 0 & 1 & -1 & 0 \\ -1 & 3/2 & 0 & 0 & -2 & 1 \\ 0 & 2 & -1 & 0 & -2 & 1 \end{array} \right]\nonumber

Each row contains the coefficients of the respective elements in each reaction. For example, the top row of numbers comes from CO+\frac{1}{2}O_{2}-CO_{2}=0 which represents the first of the chemical reactions.

We can write these coefficients in the following matrix \left[ \begin{array}{rrrrrr} 1 & 1/2 & -1 & 0 & 0 & 0 \\ 0 & 1/2 & 0 & 1 & -1 & 0 \\ -1 & 3/2 & 0 & 0 & -2 & 1 \\ 0 & 2 & -1 & 0 & -2 & 1 \end{array} \right]\nonumber Rather than listing all of the reactions as above, it would be more efficient to only list those which are independent by throwing out that which is redundant. We can use the concepts of the previous section to accomplish this.

First, take the reduced row-echelon form of the above matrix. \left[ \begin{array}{rrrrrr} 1 & 0 & 0 & 3 & -1 & -1 \\ 0 & 1 & 0 & 2 & -2 & 0 \\ 0 & 0 & 1 & 4 & -2 & -1 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right]\nonumber The top three rows represent “independent" reactions which come from the original four reactions. One can obtain each of the original four rows of the matrix given above by taking a suitable linear combination of rows of this reduced row-echelon matrix.

With the redundant reaction removed, we can consider the simplified reactions as the following equations \begin{array}{c} CO+3H_{2}-1H_{2}O-1CH_{4}=0 \\ O_{2}+2H_{2}-2H_{2}O=0 \\ CO_{2}+4H_{2}-2H_{2}O-1CH_{4}=0 \end{array}\nonumber In terms of the original notation, these are the reactions \begin{array}{c} CO+3H_{2}\rightarrow H_{2}O+CH_{4} \\ O_{2}+2H_{2}\rightarrow 2H_{2}O \\ CO_{2}+4H_{2}\rightarrow 2H_{2}O+CH_{4} \end{array}\nonumber

These three reactions provide an equivalent system to the original four equations. The idea is that, in terms of what happens chemically, you obtain the same information with the shorter list of reactions. Such a simplification is especially useful when dealing with very large lists of reactions which may result from experimental evidence.
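
A sketch of the row reduction used above (assuming sympy, with exact rational arithmetic so that the fractions 1/2 and 3/2 are preserved):

```python
# Sketch: find the independent reactions by row-reducing the reaction matrix.
from sympy import Matrix, Rational

# columns: CO, O2, CO2, H2, H2O, CH4
reactions = Matrix([[ 1, Rational(1, 2), -1, 0,  0, 0],
                    [ 0, Rational(1, 2),  0, 1, -1, 0],
                    [-1, Rational(3, 2),  0, 0, -2, 1],
                    [ 0, 2,              -1, 0, -2, 1]])
R, pivots = reactions.rref()
print(R)        # the last row is zero, so only three of the four reactions are independent
print(pivots)   # (0, 1, 2)
```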

The main theorem about bases is not only that they exist, but that any two bases must be of the same size. To show this, we will need the following fundamental result, called the Exchange Theorem.


4.5: Linear Independence is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by LibreTexts.
