
5.2: Linear Independence


We are now going to define the notion of linear independence of a list of vectors. This concept will be extremely important in the sections that follow, and especially when we introduce bases and the dimension of a vector space.

Definition 5.2.1: Linearly Independent Vectors

A list of vectors \((v_1,\ldots,v_m)\) is called linearly independent if the only solution for \(a_1,\ldots,a_m\in\mathbb{F}\) to the equation

\[ a_1v_1+\cdots+a_mv_m=0 \]

is \(a_1=\cdots=a_m=0\). In other words, the zero vector can only trivially be written as a linear combination of \((v_1,\ldots,v_m)\).

Definition 5.2.2: Linearly Dependent Vectors

A list of vectors \((v_1,\ldots,v_m)\) is called linearly dependent if it is not linearly independent. That is, \((v_1,\ldots,v_m)\) is linearly dependent if there exist \(a_1,\ldots,a_m\in\mathbb{F}\), not all zero, such that

\[ a_1v_1+\cdots+a_mv_m=0. \]

Example 5.2.1:

The vectors \((e_1,\ldots,e_m)\) of Example 5.1.4 are linearly independent. To see this, note that the only solution to the vector equation

\[ 0 = a_1e_1+\cdots+a_me_m = (a_1,\ldots,a_m) \]

is \(a_1=\cdots=a_m=0\). Alternatively, we can reinterpret this vector equation as the homogeneous linear system

\[ \left. \begin{array}{rcl} a_1 &=& 0 \\ a_2 &=& 0 \\ &\vdots& \\ a_m &=& 0 \end{array} \right\}, \]

which clearly has only the trivial solution. (See Section A.3.2 for the appropriate definitions.)
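Checks like this can be automated: a list of vectors in \(\mathbb{F}^n\) is linearly independent exactly when the matrix having the vectors as columns has rank equal to the number of vectors, i.e., when the homogeneous system has only the trivial solution. A minimal Python sketch using exact rational arithmetic (the helper names `rank` and `independent` are our own, not from the text):

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a matrix of exact rationals and count the pivots."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def independent(vectors):
    """Vectors are linearly independent iff the matrix whose
    columns are the vectors has rank equal to their number."""
    cols = list(zip(*vectors))  # vectors as columns
    return rank(cols) == len(vectors)

# The standard basis vectors of Example 5.2.1 (here m = 3):
print(independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # True
```

The same function reports `False` for any dependent list, since a nontrivial solution of the homogeneous system forces the rank to drop.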

Example 5.2.2:

The vectors \(v_1=(1,1,1)\), \(v_2=(0,1,-1)\), and \(v_3=(1,2,0)\) are linearly dependent. To see this, we need to consider the vector equation

\[ a_1v_1+a_2v_2+a_3v_3 = a_1(1,1,1)+a_2(0,1,-1)+a_3(1,2,0) = (a_1+a_3,\ a_1+a_2+2a_3,\ a_1-a_2) = (0,0,0). \]

Solving for \(a_1,a_2\), and \(a_3\), we see, for example, that \((a_1,a_2,a_3)=(1,1,-1)\) is a nonzero solution. Alternatively, we can reinterpret this vector equation as the homogeneous linear system

\[ \left. \begin{array}{rcl} a_1 + a_3 &=& 0 \\ a_1 + a_2 + 2a_3 &=& 0 \\ a_1 - a_2 &=& 0 \end{array} \right\}. \]

Using the techniques of Section A.3, we see that solving this linear system is equivalent to solving the following linear system:

\[ \left. \begin{array}{rcl} a_1 + a_3 &=& 0 \\ a_2 + a_3 &=& 0 \end{array} \right\}. \]

Note that this new linear system clearly has infinitely many solutions. In particular, the set of all solutions is given by

\[ N = \{ (a_1,a_2,a_3)\in\mathbb{F}^3 \mid a_1 = a_2 = -a_3 \} = \operatorname{span}((1,1,-1)). \]
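This dependence is easy to verify directly: plugging the nonzero solution into the linear combination must give the zero vector. A quick Python check (variable names are ours):

```python
# Vectors of Example 5.2.2 and the nonzero solution (a1, a2, a3) = (1, 1, -1)
v1, v2, v3 = (1, 1, 1), (0, 1, -1), (1, 2, 0)
a1, a2, a3 = 1, 1, -1

combo = tuple(a1*x + a2*y + a3*z for x, y, z in zip(v1, v2, v3))
print(combo)  # (0, 0, 0), so the list is linearly dependent
```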

Example 5.2.3:

The vectors \((1,z,\ldots,z^m)\) in the vector space \(\mathbb{F}_m[z]\) are linearly independent. Requiring that

\[ a_0 1 + a_1 z + \cdots + a_m z^m = 0 \]

means that the polynomial on the left should be zero for all \(z\in\mathbb{F}\). This is only possible for \(a_0=a_1=\cdots=a_m=0\).
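One way to make this argument concrete: a polynomial of degree at most \(m\) that vanishes at \(m+1\) distinct points must be the zero polynomial, because the corresponding Vandermonde system has nonzero determinant. A small Python sketch (the degree bound and evaluation points are our own illustrative choices):

```python
from fractions import Fraction
from math import prod

m = 3                                         # illustrative degree bound
points = [Fraction(k) for k in range(m + 1)]  # m + 1 distinct evaluation points

# Vandermonde determinant: product of (p_j - p_i) over i < j.
# It is nonzero iff the points are distinct, so the only coefficients
# with a_0 + a_1 z + ... + a_m z^m = 0 at every point are all zero.
det = prod(points[j] - points[i]
           for j in range(m + 1) for i in range(j))
print(det)  # 12, nonzero, so (1, z, ..., z^m) is linearly independent
```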

An important consequence of the notion of linear independence is the fact that any vector in the span of a given list of linearly independent vectors can be uniquely written as a linear combination.

Lemma 5.2.6

The list of vectors \((v_1,\ldots,v_m)\) is linearly independent if and only if every \(v\in\operatorname{span}(v_1,\ldots,v_m)\) can be uniquely written as a linear combination of \((v_1,\ldots,v_m)\).

Proof

("⟹") Assume that \((v_1,\ldots,v_m)\) is a linearly independent list of vectors.

Suppose there are two ways of writing \(v\in\operatorname{span}(v_1,\ldots,v_m)\) as a linear combination of the \(v_i\):

\[ v = a_1v_1+\cdots+a_mv_m, \]
\[ v = a_1'v_1+\cdots+a_m'v_m. \]

Subtracting the two equations yields \(0=(a_1-a_1')v_1+\cdots+(a_m-a_m')v_m\).

Since \((v_1,\ldots,v_m)\) is linearly independent, the only solution to this equation is \(a_1-a_1'=0,\ldots,a_m-a_m'=0\), or equivalently \(a_1=a_1',\ldots,a_m=a_m'\).

("⟸") Now assume that, for every \(v\in\operatorname{span}(v_1,\ldots,v_m)\), there are unique \(a_1,\ldots,a_m\in\mathbb{F}\) such that

\[ v = a_1v_1+\cdots+a_mv_m. \]

This implies, in particular, that the only way the zero vector \(v=0\) can be written as a linear combination of \(v_1,\ldots,v_m\) is with \(a_1=\cdots=a_m=0\). This shows that \((v_1,\ldots,v_m)\) is linearly independent.
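For an independent list, Lemma 5.2.6 says the coefficients of any \(v\) in the span are pinned down uniquely; in \(\mathbb{R}^2\) they can be recovered explicitly, e.g. by Cramer's rule. A sketch with made-up vectors \(v_1=(1,1)\), \(v_2=(0,1)\) (not from the text):

```python
from fractions import Fraction

# Illustrative example: v1 = (1, 1), v2 = (0, 1) are linearly
# independent in R^2, so the coefficients of any v are unique.
v1, v2 = (1, 1), (0, 1)
v = (3, 5)

det = Fraction(v1[0]*v2[1] - v2[0]*v1[1])  # nonzero by independence
a1 = (v[0]*v2[1] - v2[0]*v[1]) / det       # Cramer's rule for a1
a2 = (v1[0]*v[1] - v[0]*v1[1]) / det       # Cramer's rule for a2

print(a1, a2)  # 3 2, the unique coefficients with v = a1*v1 + a2*v2
```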

It is clear that if \((v_1,\ldots,v_m)\) is a list of linearly independent vectors, then the list \((v_1,\ldots,v_{m-1})\) is also linearly independent.

For the next lemma, we introduce the following notation: if we want to drop a vector \(v_j\) from a given list \((v_1,\ldots,v_m)\) of vectors, then we indicate the dropped vector by a hat, i.e., we write

\[ (v_1,\ldots,\hat{v}_j,\ldots,v_m) = (v_1,\ldots,v_{j-1},v_{j+1},\ldots,v_m). \]

Lemma 5.2.7: Linear Dependence Lemma

If \((v_1,\ldots,v_m)\) is linearly dependent and \(v_1\neq 0\), then there exists an index \(j\in\{2,\ldots,m\}\) such that the following two conditions hold.

  1. \(v_j\in\operatorname{span}(v_1,\ldots,v_{j-1})\).
  2. If \(v_j\) is removed from \((v_1,\ldots,v_m)\), then \(\operatorname{span}(v_1,\ldots,\hat{v}_j,\ldots,v_m)=\operatorname{span}(v_1,\ldots,v_m)\).

Proof

Since \((v_1,\ldots,v_m)\) is linearly dependent, there exist \(a_1,\ldots,a_m\in\mathbb{F}\), not all zero, such that \(a_1v_1+\cdots+a_mv_m=0\). Since by assumption \(v_1\neq 0\), not all of \(a_2,\ldots,a_m\) can be zero. Let \(j\in\{2,\ldots,m\}\) be largest such that \(a_j\neq 0\). Then we have

\[ v_j = -\frac{a_1}{a_j}v_1 - \cdots - \frac{a_{j-1}}{a_j}v_{j-1}, \]

which implies Part 1.

Let \(v\in\operatorname{span}(v_1,\ldots,v_m)\). This means, by definition, that there exist scalars \(b_1,\ldots,b_m\in\mathbb{F}\) such that

\[ v = b_1v_1+\cdots+b_mv_m. \]

The vector \(v_j\) that we determined in Part 1 can be replaced by its expression in terms of \(v_1,\ldots,v_{j-1}\), so that \(v\) is written as a linear combination of \((v_1,\ldots,\hat{v}_j,\ldots,v_m)\). Hence, \(\operatorname{span}(v_1,\ldots,\hat{v}_j,\ldots,v_m)=\operatorname{span}(v_1,\ldots,v_m)\).

Example 5.2.4:

The list \((v_1,v_2,v_3)=((1,1),(1,2),(1,0))\) of vectors spans \(\mathbb{R}^2\).

To see this, take any vector \(v=(x,y)\in\mathbb{R}^2\). We want to show that \(v\) can be written as a linear combination of \((1,1),(1,2),(1,0)\), i.e., that there exist scalars \(a_1,a_2,a_3\in\mathbb{R}\) such that

\[ v = a_1(1,1)+a_2(1,2)+a_3(1,0), \]

or equivalently that

\[ (x,y) = (a_1+a_2+a_3,\ a_1+2a_2). \]

Clearly \(a_1=y\), \(a_2=0\), and \(a_3=x-y\) form a solution for any choice of \(x,y\in\mathbb{R}\), and so \(\mathbb{R}^2=\operatorname{span}((1,1),(1,2),(1,0))\). However, note that

\[ 2(1,1)-(1,2)-(1,0)=(0,0), \]

which shows that the list \(((1,1),(1,2),(1,0))\) is linearly dependent.

The Linear Dependence Lemma 5.2.7 thus states that one of the vectors can be dropped from \(((1,1),(1,2),(1,0))\) and that the resulting list of vectors will still span \(\mathbb{R}^2\). Indeed, by the relation above,

\[ v_3 = (1,0) = 2(1,1)-(1,2) = 2v_1-v_2, \]

and so \(\operatorname{span}((1,1),(1,2),(1,0))=\operatorname{span}((1,1),(1,2))\).
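Both claims of this example can be checked directly in Python: the redundancy of \(v_3\), and the fact that \((v_1,v_2)\) alone still span \(\mathbb{R}^2\) (variable names are ours):

```python
v1, v2, v3 = (1, 1), (1, 2), (1, 0)

# v3 is redundant: 2*v1 - v2 reproduces it exactly
print(tuple(2*a - b for a, b in zip(v1, v2)))  # (1, 0)

# (v1, v2) span R^2: their 2x2 determinant is nonzero,
# so a1*v1 + a2*v2 = (x, y) is solvable for every (x, y)
det = v1[0]*v2[1] - v1[1]*v2[0]
print(det)  # 1
```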

The next result shows that linearly independent lists of vectors that span a finite-dimensional vector space are the smallest possible spanning sets.

Theorem 5.2.9

Let \(V\) be a finite-dimensional vector space. Suppose that \((v_1,\ldots,v_m)\) is a linearly independent list of vectors that spans \(V\), and let \((w_1,\ldots,w_n)\) be any list that spans \(V\). Then \(m\le n\).

Proof

The proof uses the following iterative procedure: start with an arbitrary list of vectors \(S_0=(w_1,\ldots,w_n)\) such that \(V=\operatorname{span}(S_0)\). At the \(k\)-th step of the procedure, we construct a new list \(S_k\) by replacing some vector \(w_{j_k}\) by the vector \(v_k\) such that \(S_k\) still spans \(V\). Repeating this for all \(v_k\) then produces a new list \(S_m\) of length \(n\) that contains each of \(v_1,\ldots,v_m\), which then proves that \(m\le n\). Let us now discuss each step in this procedure in detail.

Step 1. Since \((w_1,\ldots,w_n)\) spans \(V\), adding a new vector to the list makes the new list linearly dependent. Hence \((v_1,w_1,\ldots,w_n)\) is linearly dependent. Note also that \(v_1\neq 0\), since \((v_1,\ldots,v_m)\) is linearly independent. By Lemma 5.2.7, there then exists an index \(j_1\) such that

\[ w_{j_1}\in\operatorname{span}(v_1,w_1,\ldots,w_{j_1-1}). \]

Hence \(S_1=(v_1,w_1,\ldots,\hat{w}_{j_1},\ldots,w_n)\) spans \(V\). In this step, we added the vector \(v_1\) and removed the vector \(w_{j_1}\) from \(S_0\).

Step k. Suppose that we have already added \(v_1,\ldots,v_{k-1}\) to our spanning list and removed the vectors \(w_{j_1},\ldots,w_{j_{k-1}}\). If \(k\le m\), it is impossible that all of the vectors \(w_1,\ldots,w_n\) have been removed from the spanning list at this step, because then we would have \(V=\operatorname{span}(v_1,\ldots,v_{k-1})\), which would allow \(v_k\) to be expressed as a linear combination of \(v_1,\ldots,v_{k-1}\), in contradiction with the assumed linear independence of \((v_1,\ldots,v_m)\).

Now, call the list reached at this step \(S_{k-1}\), and note that \(V=\operatorname{span}(S_{k-1})\). Add the vector \(v_k\) to \(S_{k-1}\). By the same arguments as before, adjoining the extra vector \(v_k\) to the spanning list \(S_{k-1}\) yields a list of linearly dependent vectors. Hence, by Lemma 5.2.7, there exists an index \(j_k\) such that \(S_{k-1}\) with \(v_k\) added and \(w_{j_k}\) removed still spans \(V\). The fact that \((v_1,\ldots,v_k)\) is linearly independent ensures that the vector removed is indeed among the \(w_j\). Call the new list \(S_k\), and note that \(V=\operatorname{span}(S_k)\).

The final list \(S_m\) is \(S_0\) with each of \(v_1,\ldots,v_m\) added and each of \(w_{j_1},\ldots,w_{j_m}\) removed. Moreover, note that \(S_m\) has length \(n\) and still spans \(V\). It follows that \(m\le n\).
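The replacement procedure in this proof is constructive, and can be sketched in Python with exact arithmetic. This is a sketch of the idea, not the book's algorithm; the helper names `rank` and `exchange` are our own, and the removal test simply tries each remaining \(w\) until one can be dropped without shrinking the span:

```python
from fractions import Fraction

def rank(rows):
    """Dimension of the span of the given vectors (row rank)."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def exchange(vs, ws):
    """Add each v_k to the spanning list and drop some w_{j_k},
    keeping the span intact, as in the proof of Theorem 5.2.9.
    Assumes vs is linearly independent and ws spans the space."""
    S = list(ws)
    dim = rank(S)
    for v in vs:
        S.insert(0, v)              # adjoin v_k (list is now dependent)
        for i, u in enumerate(S):
            if u in ws:             # only the w's may be removed
                T = S[:i] + S[i+1:]
                if rank(T) == dim:  # removal keeps the span
                    S = T
                    break
    return S

vs = [(1, 0, 0), (0, 1, 0)]              # independent list, m = 2
ws = [(1, 1, 0), (1, -1, 0), (0, 0, 1)]  # spanning list, n = 3
S = exchange(vs, ws)
print(len(S))  # 3: the final list still has length n, so m <= n
```

Each pass through the loop preserves the length of the list and its span, which is exactly the counting argument behind \(m\le n\).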


This page titled 5.2: Linear Independence is shared under a not declared license and was authored, remixed, and/or curated by Isaiah Lankham, Bruno Nachtergaele, & Anne Schilling.
