16: Basis and Dimension
In chapter 10, the notions of a linearly independent set of vectors in a vector space \(V\), and of a set of vectors that spans \(V\), were established: any set of vectors that spans \(V\) can be reduced to a minimal collection of linearly independent vectors; such a set is called a basis of the vector space \(V\).
Definitions
Let \(V\) be a vector space.
- A set \(S\) is a basis for \(V\) if \(S\) is linearly independent and \(V = \mathrm{span}\, S\).
- If \(S\) is a basis of \(V\) and \(S\) has only finitely many elements, then we say that \(V\) is finite-dimensional.
- The number of vectors in \(S\) is then called the dimension of \(V\).
Suppose V is a finite-dimensional vector space, and S and T are two different bases for V. One might worry that S and T have a different number of vectors; then we would have to talk about the dimension of V in terms of the basis S or in terms of the basis T. Luckily this isn't what happens. Later in this chapter, we will show that S and T must have the same number of vectors. This means that the dimension of a vector space is basis-independent. In fact, dimension is a very important characteristic of a vector space.
Example 112
\(P_n(t)\) (polynomials in \(t\) of degree \(n\) or less) has a basis \(\{1, t, \ldots, t^n\}\), since every vector in this space is a sum
\[ a_0 \cdot 1 + a_1 t + \cdots + a_n t^n, \qquad a_i \in \Re, \]
so \(P_n(t) = \mathrm{span}\{1, t, \ldots, t^n\}\). This set of vectors is linearly independent: if the polynomial \(p(t) = c_0 \cdot 1 + c_1 t + \cdots + c_n t^n\) is the zero polynomial, then \(c_0 = c_1 = \cdots = c_n = 0\); that is, the only linear combination of \(1, t, \ldots, t^n\) that vanishes identically is the trivial one. Thus \(P_n(t)\) is finite-dimensional, and \(\dim P_n(t) = n + 1\).
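To make such a linear-independence check concrete, here is a minimal sketch in Python with NumPy (a tool this text does not itself use) that tests a set of polynomials in \(P_2(t)\) for linear independence by writing each polynomial as its coefficient column in the basis \(\{1, t, t^2\}\) and computing the rank of the resulting matrix. The example set \(\{1,\ 1+t,\ 1+t+t^2\}\) is our own choice of illustration.

```python
import numpy as np

# Each column lists the coefficients (a_0, a_1, a_2) of a polynomial
# a_0*1 + a_1*t + a_2*t^2, i.e., its coordinates in the basis {1, t, t^2}.
polys = np.array([
    [1, 1, 1],   # constant coefficients of 1, 1+t, 1+t+t^2
    [0, 1, 1],   # coefficients of t
    [0, 0, 1],   # coefficients of t^2
])

# The polynomials are linearly independent exactly when the columns are,
# i.e., when the matrix has full column rank.
rank = np.linalg.matrix_rank(polys)
print(rank == polys.shape[1])  # True: {1, 1+t, 1+t+t^2} is linearly independent
```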
Theorem
Let \(S = \{v_1, \ldots, v_n\}\) be a basis for a vector space \(V\). Then every vector \(w \in V\) can be written uniquely as a linear combination of vectors in the basis \(S\):
\[ w = c_1 v_1 + \cdots + c_n v_n. \]
Proof
Since \(S\) is a basis for \(V\), we have \(\mathrm{span}\, S = V\), and so there exist constants \(c_i\) such that \(w = c_1 v_1 + \cdots + c_n v_n\).
Suppose there exists a second set of constants \(d_i\) such that
\(w = d_1 v_1 + \cdots + d_n v_n\). Then:
\[ 0_V = w - w = c_1 v_1 + \cdots + c_n v_n - d_1 v_1 - \cdots - d_n v_n = (c_1 - d_1) v_1 + \cdots + (c_n - d_n) v_n. \]
If it occurs exactly once that \(c_i \neq d_i\), then the equation reduces to \(0 = (c_i - d_i) v_i\), which is a contradiction since the vectors \(v_i\) are non-zero (a linearly independent set cannot contain the zero vector).
If we have more than one \(i\) for which \(c_i \neq d_i\), we can use this last equation to write one of the vectors in \(S\) as a linear combination of the other vectors in \(S\), which contradicts the assumption that \(S\) is linearly independent. Then for every \(i\), \(c_i = d_i\).
Remark
This theorem is the one that makes bases so useful--they allow us to convert abstract vectors into column vectors. By ordering the set \(S\) we obtain \(B = (v_1, \ldots, v_n)\) and can write
\[ w = (v_1, \ldots, v_n) \begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix} = \begin{pmatrix} c_1 \\ \vdots \\ c_n \end{pmatrix}_B. \]
Remember that in general it makes no sense to drop the subscript \(B\) on the column vector on the right--most vector spaces are not made from columns of numbers!
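As an illustration of converting a vector into its coordinate column, here is a small sketch, again in Python with NumPy (our own illustrative choice), that finds the coordinates of a vector \(w \in \Re^3\) in a non-standard basis by solving the linear system \(Bc = w\), where the columns of \(B\) are the basis vectors. The particular basis below is a made-up example.

```python
import numpy as np

# Basis vectors of R^3 as the columns of B (a made-up, non-standard basis).
B = np.column_stack([
    [1, 0, 1],   # v1
    [1, 1, 0],   # v2
    [0, 1, 1],   # v3
]).astype(float)

w = np.array([2.0, 3.0, 1.0])

# The coordinate column (c1, c2, c3)_B satisfies B @ c = w;
# since B is invertible, solve the linear system for c.
c = np.linalg.solve(B, w)
print(c)                      # [0. 2. 1.], so w = 0*v1 + 2*v2 + 1*v3
print(np.allclose(B @ c, w))  # True: the coordinates reproduce w
```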
Next, we would like to establish a method for determining whether a collection of vectors forms a basis for \(\Re^n\). But first, we need to show that any two bases for a finite-dimensional vector space have the same number of vectors.
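Anticipating that method, here is a hedged sketch in Python with NumPy (the function name is_basis is our own) of the standard criterion: \(n\) vectors form a basis of \(\Re^n\) exactly when the matrix having them as columns has full rank \(n\) (equivalently, non-zero determinant).

```python
import numpy as np

def is_basis(vectors):
    """Return True if the given vectors form a basis of R^n.

    `vectors` is a list of length-n arrays; they form a basis of R^n
    exactly when there are n of them and the matrix with these vectors
    as columns has full rank n.
    """
    A = np.column_stack(vectors).astype(float)
    n = A.shape[0]
    return len(vectors) == n and np.linalg.matrix_rank(A) == n

# The standard basis of R^2 is a basis; two parallel vectors are not.
print(is_basis([np.array([1, 0]), np.array([0, 1])]))  # True
print(is_basis([np.array([1, 2]), np.array([2, 4])]))  # False: linearly dependent
```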
Lemma
If \(S = \{v_1, \ldots, v_n\}\) is a basis for a vector space \(V\) and \(T = \{w_1, \ldots, w_m\}\) is a linearly independent set of vectors in \(V\), then \(m \leq n\).
The idea of the proof is to start with the set S and replace vectors in S one at a time with vectors from T, such that after each replacement we still have a basis for V.
Proof
Since \(S\) spans \(V\), the set \(\{w_1, v_1, \ldots, v_n\}\) is linearly dependent. Then we can write \(w_1\) as a linear combination of the \(v_i\); since \(w_1 \neq 0\) (it belongs to the linearly independent set \(T\)), some coefficient in this combination is non-zero, say the coefficient of \(v_i\). Using that equation, we can express \(v_i\) in terms of \(w_1\) and the remaining \(v_j\) with \(j \neq i\), and then discard \(v_i\) to obtain the set \(S_1 = \{w_1, v_1, \ldots, v_{i-1}, v_{i+1}, \ldots, v_n\}\). To see that \(S_1\) is a basis, we must show that \(S_1\) is linearly independent and that \(S_1\) spans \(V\).
The set \(S_1\) is linearly independent: By the previous theorem, there was a unique way to express \(w_1\) in terms of the set \(S\). Now, to obtain a contradiction, suppose there is some \(k\) and constants \(c_j\) such that
\[ v_k = c_0 w_1 + c_1 v_1 + \cdots + c_{i-1} v_{i-1} + c_{i+1} v_{i+1} + \cdots + c_n v_n, \]
where the sum on the right omits \(v_k\) as well as \(v_i\). Then replacing \(w_1\) with its expression in terms of the collection \(S\) gives a way to express the vector \(v_k\) as a linear combination of the other vectors in \(S\), which contradicts the linear independence of \(S\). On the other hand, we cannot express \(w_1\) as a linear combination of the vectors in \(\{v_j \mid j \neq i\}\), since the expression of \(w_1\) in terms of \(S\) was unique and had a non-zero coefficient on the vector \(v_i\). Then no vector in \(S_1\) can be expressed as a combination of the other vectors in \(S_1\), which demonstrates that \(S_1\) is linearly independent.
The set \(S_1\) spans \(V\): For any \(u \in V\), we can express \(u\) as a linear combination of vectors in \(S\). But we can also express \(v_i\) as a linear combination of vectors in \(S_1\); rewriting \(v_i\) in this way allows us to express \(u\) as a linear combination of the vectors in \(S_1\). Thus \(S_1\) is a basis of \(V\) with \(n\) vectors.
We can now iterate this process, replacing one of the \(v_i\) in \(S_1\) with \(w_2\), and so on. If \(m \leq n\), this process ends with the set \(S_m = \{w_1, \ldots, w_m, v_{i_1}, \ldots, v_{i_{n-m}}\}\), which is a basis for \(V\), and we are done.
Otherwise, we have \(m > n\), and the process instead ends with the set \(S_n = \{w_1, \ldots, w_n\}\), which is a basis for \(V\). But we still have some vector \(w_{n+1}\) in \(T\) that is not in \(S_n\). Since \(S_n\) is a basis, we can write \(w_{n+1}\) as a combination of the vectors in \(S_n\), which contradicts the linear independence of the set \(T\). Then it must be the case that \(m \leq n\), as desired.
◻
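To see one exchange step concretely (an illustration of our own, not from the text): take \(V = \Re^2\) with basis \(S = \{e_1, e_2\}\) and the linearly independent set \(T = \{w_1\}\), where \(w_1 = e_1 + e_2\). The set \(\{w_1, e_1, e_2\}\) is linearly dependent, and the expression \(w_1 = e_1 + e_2\) has a non-zero coefficient on \(e_2\), so we may solve \(e_2 = w_1 - e_1\) and discard \(e_2\), obtaining \(S_1 = \{w_1, e_1\}\). Any \(u = a e_1 + b e_2\) can be rewritten as \(u = b w_1 + (a - b) e_1\), so \(S_1\) still spans \(\Re^2\), and it is linearly independent since \(w_1\) is not a multiple of \(e_1\); thus \(S_1\) is again a basis.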
Corollary
For a finite-dimensional vector space V, any two bases for V have the same number of vectors.
Proof
Let \(S\) and \(T\) be two bases for \(V\). Then both are linearly independent sets that span \(V\). Suppose \(S\) has \(n\) vectors and \(T\) has \(m\) vectors. Then by the previous lemma, we have that \(m \leq n\). But exchanging the roles of \(S\) and \(T\) in the application of the lemma, we also see that \(n \leq m\). Then \(m = n\), as desired.
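For instance, every basis of \(\Re^n\) contains exactly \(n\) vectors: the standard basis \(\{e_1, \ldots, e_n\}\) is one basis with \(n\) vectors, and the corollary then forces every other basis of \(\Re^n\) to have \(n\) vectors as well, so \(\dim \Re^n = n\).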
Contributor
David Cherney, Tom Denton, and Andrew Waldron (UC Davis)
Thumbnail: A linear combination of one basis set of vectors (purple) obtains new vectors (red). If they are linearly independent, these form a new basis set. The linear combinations relating the first set to the other extend to a linear transformation, called the change of basis. (CC0; Maschen via Wikipedia)