16: Kernel, Range, Nullity, Rank


Given a linear transformation \(L \colon V \to W\), we want to know if it has an inverse, i.e., is there a linear transformation \(M \colon W \to V\) such that for any vector \(v \in V\), we have \(MLv = v\), and for any vector \(w \in W\), we have \(LMw = w\). A linear transformation is just a special kind of function from one vector space to another. So before we discuss which linear transformations have inverses, let us first discuss inverses of arbitrary functions. When we later specialize to linear transformations, we'll also find some nice ways of creating subspaces.

Let \(f \colon S \to T\) be a function from a set \(S\) to a set \(T\). Recall that \(S\) is called the domain of \(f\), \(T\) is called the codomain or target of \(f\), and the set

\[\mathrm{ran}(f) = \mathrm{im}(f) = f(S) = \{\, f(s) \mid s \in S \,\} \subset T,\]

is called the range or image of \(f\). The image of \(f\) is the set of elements of \(T\) to which the function \(f\) maps, i.e., the things in \(T\) which you can get to by starting in \(S\) and applying \(f\). We can also talk about the pre-image of any subset \(U \subset T\):

\[f^{-1}(U) = \{\, s \in S \mid f(s) \in U \,\} \subset S.\]

The pre-image of a set U is the set of all elements of S which map to U.

(Figure: for the function \(f \colon S \to T\), \(S\) is the domain, \(T\) is the target, \(f(S)\) is the image/range, and \(f^{-1}(U)\) is the pre-image of \(U \subset T\).)
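As a quick illustration (not part of the original text), here is a minimal Python sketch of these definitions for a function between finite sets, with the function stored as a dictionary; the particular sets and values are made up for the example.

```python
# Hypothetical example: image and pre-image of a finite function f : S -> T.
f = {1: 'a', 2: 'b', 3: 'a'}            # f sends 1 and 3 to 'a', and 2 to 'b'
S = set(f)                              # domain {1, 2, 3}
T = {'a', 'b', 'c'}                     # codomain (target)

image = {f[s] for s in S}               # f(S) = {'a', 'b'}, a subset of T
U = {'a', 'c'}                          # any subset of T
preimage = {s for s in S if f[s] in U}  # f^{-1}(U) = {1, 3}
print(image, preimage)
```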

The function \(f\) is one-to-one if different elements in \(S\) always map to different elements in \(T\). That is, \(f\) is one-to-one if for any elements \(x \neq y \in S\), we have that \(f(x) \neq f(y)\):

(Figure: a one-to-one function.)

One-to-one functions are also called injective functions. Notice that injectivity is a condition on the pre-images of f.

The function \(f\) is onto if every element of \(T\) is mapped to by some element of \(S\). That is, \(f\) is onto if for any \(t \in T\), there exists some \(s \in S\) such that \(f(s) = t\). Onto functions are also called surjective functions. Notice that surjectivity is a condition on the image of \(f\):

(Figure: an onto function.)

If f is both injective and surjective, it is bijective:

(Figure: a bijective function.)
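The following minimal Python sketch (an illustration added here, not from the text) tests these three properties for a function between finite sets; the particular sets and function are arbitrary.

```python
# Injectivity, surjectivity, and bijectivity of a finite function f : S -> T.
def is_injective(f):
    # one-to-one: distinct inputs never share an output
    return len(set(f.values())) == len(f)

def is_surjective(f, T):
    # onto: every element of the codomain T is an output
    return set(f.values()) == set(T)

def is_bijective(f, T):
    return is_injective(f) and is_surjective(f, T)

f = {1: 'a', 2: 'b', 3: 'c'}
T = {'a', 'b', 'c'}
print(is_injective(f), is_surjective(f, T), is_bijective(f, T))  # True True True
```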

Theorem

A function \(f \colon S \to T\) has an inverse function \(g \colon T \to S\) if and only if it is bijective.

Proof

This is an "if and only if'' statement so the proof has two parts:

1. (Existence of an inverse \(\Rightarrow\) bijective.)

a) Suppose that \(f\) has an inverse function \(g\). We need to show \(f\) is bijective, which we break down into injective and surjective: The function \(f\) is injective: Suppose that we have \(s, s' \in S\) such that \(f(s) = f(s')\). We must have that \(g(f(s)) = s\) for any \(s \in S\), so in particular \(g(f(s)) = s\) and \(g(f(s')) = s'\). But since \(f(s) = f(s')\), we have \(g(f(s)) = g(f(s'))\), so \(s = s'\). Therefore, \(f\) is injective.

b) The function \(f\) is surjective: Let \(t\) be any element of \(T\). We must have that \(f(g(t)) = t\). Thus, \(g(t)\) is an element of \(S\) which maps to \(t\). So \(f\) is surjective.

2. (Bijectivity \(\Rightarrow\) existence of an inverse.)

Suppose that \(f\) is bijective. Hence \(f\) is surjective, so every element \(t \in T\) has at least one pre-image. Being bijective, \(f\) is also injective, so every \(t\) has no more than one pre-image. Therefore, to construct an inverse function \(g\), we simply define \(g(t)\) to be the unique pre-image \(f^{-1}(t)\) of \(t\).
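For finite sets, part 2 of the proof can be acted out directly: the inverse sends each output back to its unique pre-image. A small Python sketch (added for illustration, with a made-up bijection):

```python
# Build the inverse of a bijective finite function by reversing its pairs.
f = {1: 'a', 2: 'b', 3: 'c'}            # a bijection S -> T
g = {t: s for s, t in f.items()}        # g(t) = the unique pre-image of t

assert all(g[f[s]] == s for s in f)     # g(f(s)) = s for every s in S
assert all(f[g[t]] == t for t in g)     # f(g(t)) = t for every t in T
```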

Now let us specialize to functions \(f\) that are linear maps between two vector spaces. Everything we said above for arbitrary functions is exactly the same for linear functions. However, the structure of vector spaces lets us say much more about one-to-one and onto functions whose domains are vector spaces than we can say about functions on general sets. For example, we know that a linear function always sends \(0_V\) to \(0_W\), i.e., \(f(0_V) = 0_W\). In review exercise 3, you will show that a linear transformation is one-to-one if and only if \(0_V\) is the only vector that is sent to \(0_W\): in contrast to arbitrary functions between sets, by looking at just one (very special) vector, we can figure out whether \(f\) is one-to-one!

Let \(L \colon V \to W\) be a linear transformation. Suppose \(L\) is not injective. Then we can find \(v_1 \neq v_2\) such that \(Lv_1 = Lv_2\). So \(v_1 - v_2 \neq 0\), but \(L(v_1 - v_2) = 0\).

Definition: Kernel

Let \(L \colon V \to W\) be a linear transformation. The set of all vectors \(v\) such that \(Lv = 0_W\) is called the kernel of \(L\):

\[\ker L = \{\, v \in V \mid Lv = 0_W \,\} \subset V.\]

Theorem

A linear transformation \(L\) is injective if and only if \[\ker L = \{ 0_{V} \}\, .\]

Proof
The proof of this theorem is review exercise 2.

Notice that if \(L\) has matrix \(M\) in some basis, then finding the kernel of \(L\) is equivalent to solving the homogeneous system

\[MX = 0.\]

Example 16.1:

Let \(L(x,y) = (x + y, x + 2y, y)\). Is \(L\) one-to-one?

To find out, we can solve the linear system:

\[\begin{pmatrix} 1 & 1 & 0 \\ 1 & 2 & 0 \\ 0 & 1 & 0 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}\]

Then all solutions of \(MX = 0\) are of the form \(x = y = 0\). In other words, \(\ker L = \{0\}\), and so \(L\) is injective.
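A quick computational check of this example (an added sketch, assuming the SymPy library; not part of the original text):

```python
from sympy import Matrix

# Matrix of L(x, y) = (x + y, x + 2y, y) in the standard bases.
M = Matrix([[1, 1],
            [1, 2],
            [0, 1]])

print(M.nullspace())   # [] -- only the zero vector solves MX = 0, so L is injective
print(M.rref())        # row reduction of M, matching the computation above
```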

Theorem

Let \(L \colon V \xrightarrow{\text{linear}} W\). Then \(\ker L\) is a subspace of \(V\).

Proof
Notice that if \(L(v) = 0\) and \(L(u) = 0\), then for any constants \(c, d\), linearity gives \(L(cu + dv) = cL(u) + dL(v) = 0\). Then by the subspace theorem, the kernel of \(L\) is a subspace of \(V\).
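A concrete illustration of this closure property (added here as a sketch, using SymPy on an arbitrary matrix with a two-dimensional kernel):

```python
from sympy import Matrix, symbols

M = Matrix([[1, 2, 3],
            [2, 4, 6]])        # rank 1, so its kernel is 2-dimensional
u, v = M.nullspace()           # two basis vectors of ker M
c, d = symbols('c d')

# Any linear combination of kernel vectors is again in the kernel.
print(M * (c*u + d*v))         # Matrix([[0], [0]])
```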

Example 16.2:

Let \(L \colon \mathbb{R}^3 \to \mathbb{R}\) be the linear transformation defined by \(L(x,y,z) = (x + y + z)\). Then \(\ker L\) consists of all vectors \((x,y,z) \in \mathbb{R}^3\) such that \(x + y + z = 0\). Therefore, the set
\[V = \{\, (x,y,z) \in \mathbb{R}^3 \mid x + y + z = 0 \,\}\]
is a subspace of \(\mathbb{R}^3\).
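In SymPy (an added sketch, not from the text), the same kernel appears as the nullspace of the \(1 \times 3\) matrix of \(L\):

```python
from sympy import Matrix

M = Matrix([[1, 1, 1]])        # matrix of L(x, y, z) = x + y + z
basis = M.nullspace()          # e.g. [(-1, 1, 0)^T, (-1, 0, 1)^T]
print(len(basis))              # 2 -- the kernel is a plane through the origin in R^3
```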

When \(L \colon V \to V\), the above theorem has an interpretation in terms of the eigenspaces of \(L\): Suppose \(L\) has a zero eigenvalue. Then the associated eigenspace consists of all vectors \(v\) such that \(Lv = 0v = 0\); in other words, the \(0\)-eigenspace of \(L\) is exactly the kernel of \(L\).
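As an added check (not part of the text), one can compare the \(0\)-eigenspace and the nullspace of a singular matrix in SymPy; the matrix below is an arbitrary example:

```python
from sympy import Matrix

A = Matrix([[1, 2],
            [2, 4]])           # singular, so 0 is an eigenvalue

# Collect the eigenvectors attached to the eigenvalue 0.
zero_eigenspace = [v for val, mult, vecs in A.eigenvects() if val == 0
                   for v in vecs]
print(zero_eigenspace)         # e.g. [Matrix([[-2], [1]])]
print(A.nullspace())           # the same 1-dimensional subspace
```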

In the example where \(L(x,y) = (x + y, x + 2y, y)\), the map \(L\) is clearly not surjective, since \(L\) maps \(\mathbb{R}^2\) to a plane through the origin in \(\mathbb{R}^3\). But any plane through the origin is a subspace. In general, notice that if \(w = L(v)\) and \(w' = L(v')\), then for any constants \(c, d\), linearity of \(L\) ensures that \(cw + dw' = L(cv + dv')\). Now the subspace theorem strikes again, and we have the following theorem:

Theorem

Let \(L \colon V \to W\) be a linear transformation. Then the image \(L(V)\) is a subspace of \(W\).

Example 16.3:

Let \(L(x,y) = (x + y, x + 2y, y)\). The image of \(L\) is a plane through the origin and thus a subspace of \(\mathbb{R}^3\). Indeed, the matrix of \(L\) in the standard basis is
\[\begin{pmatrix} 1 & 1 \\ 1 & 2 \\ 0 & 1 \end{pmatrix}.\]
The columns of this matrix encode the possible outputs of the function \(L\) because
\[L(x,y) = \begin{pmatrix} 1 & 1 \\ 1 & 2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = x \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + y \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}.\]
Thus
\[L(\mathbb{R}^2) = \mathrm{span}\left\{ \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} \right\}.\]
Hence, when bases and a linear transformation are given, people often refer to its image as the column space of the corresponding matrix.
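A short SymPy check of this example (added here as a sketch, not part of the original):

```python
from sympy import Matrix

M = Matrix([[1, 1],
            [1, 2],
            [0, 1]])           # matrix of L in the standard basis

print(M.columnspace())         # both columns: a basis of the image L(R^2)
print(M.rank())                # 2 -- the image is a plane in R^3
```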

To find a basis of the image of \(L\), we can start with a basis \(S = \{v_1, \ldots, v_n\}\) for \(V\). Then the most general input for \(L\) is of the form \(\alpha_1 v_1 + \cdots + \alpha_n v_n\). In turn, its most general output looks like
\[L(\alpha_1 v_1 + \cdots + \alpha_n v_n) = \alpha_1 L v_1 + \cdots + \alpha_n L v_n \in \mathrm{span}\{Lv_1, \ldots, Lv_n\}.\]
Thus
\[L(V) = \mathrm{span}\, L(S) = \mathrm{span}\{Lv_1, \ldots, Lv_n\}.\]
However, the set \(\{Lv_1, \ldots, Lv_n\}\) may not be linearly independent; we must solve
\[c_1 Lv_1 + \cdots + c_n Lv_n = 0,\]
to determine whether it is. By finding relations amongst the elements of \(L(S) = \{Lv_1, \ldots, Lv_n\}\), we can discard vectors until a basis is arrived at. The size of this basis is the dimension of the image of \(L\), which is known as the rank of \(L\).
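The discarding step can be carried out by row reduction: the pivot columns pick out a linearly independent subset of the images. An added SymPy sketch on a made-up example:

```python
from sympy import Matrix

# Columns are the images L(v_1), L(v_2), L(v_3) of a basis of V = R^3.
M = Matrix([[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]])

_, pivots = M.rref()                 # indices of pivot columns, here (0, 1)
basis = [M.col(j) for j in pivots]   # discard dependent columns
print(len(basis), M.rank())          # 2 2 -- the rank of L
```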

Definition

The rank of a linear transformation \(L\) is the dimension of its image, written \(\mathrm{rank}\, L = \dim L(V) = \dim \mathrm{ran}\, L\).
The nullity of a linear transformation is the dimension of the kernel, written \(\mathrm{null}\, L = \dim \ker L\).

Theorem: Dimension formula

Let \(L \colon V \to W\) be a linear transformation, with \(V\) a finite-dimensional vector space. Then:
\[\dim V = \dim \ker L + \dim L(V) = \mathrm{null}\, L + \mathrm{rank}\, L.\]

Proof
Pick a basis for V:
\[\{v_1, \ldots, v_p, u_1, \ldots, u_q\},\]
where \(v_1, \ldots, v_p\) is also a basis for \(\ker L\). This can always be done, for example, by finding a basis for the kernel of \(L\) and then extending to a basis for \(V\). Then \(p = \mathrm{null}\, L\) and \(p + q = \dim V\). We need to show that \(q = \mathrm{rank}\, L\). To accomplish this, we show that \(\{L(u_1), \ldots, L(u_q)\}\) is a basis for \(L(V)\).

To see that \(\{L(u_1), \ldots, L(u_q)\}\) spans \(L(V)\), consider any vector \(w\) in \(L(V)\). Then we can find constants \(c_i, d_j\) such that:
\[\begin{aligned} w &= L(c_1 v_1 + \cdots + c_p v_p + d_1 u_1 + \cdots + d_q u_q) \\ &= c_1 L(v_1) + \cdots + c_p L(v_p) + d_1 L(u_1) + \cdots + d_q L(u_q) \\ &= d_1 L(u_1) + \cdots + d_q L(u_q), \quad \text{since } L(v_i) = 0. \end{aligned}\]
Hence \(L(V) = \mathrm{span}\{L(u_1), \ldots, L(u_q)\}\).

Now we show that \(\{L(u_1), \ldots, L(u_q)\}\) is linearly independent. We argue by contradiction: Suppose there exist constants \(d_j\) (not all zero) such that
\[0 = d_1 L(u_1) + \cdots + d_q L(u_q) = L(d_1 u_1 + \cdots + d_q u_q).\]
But since the \(u_j\) are linearly independent, \(d_1 u_1 + \cdots + d_q u_q \neq 0\), and so \(d_1 u_1 + \cdots + d_q u_q\) is in the kernel of \(L\). But then \(d_1 u_1 + \cdots + d_q u_q\) must be in the span of \(\{v_1, \ldots, v_p\}\), since this was a basis for the kernel. Writing it in terms of the \(v_i\) produces a nontrivial linear relation among \(v_1, \ldots, v_p, u_1, \ldots, u_q\), which contradicts the assumption that \(\{v_1, \ldots, v_p, u_1, \ldots, u_q\}\) was a basis for \(V\), so we are done.
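A numerical spot-check of the dimension formula (an added sketch using SymPy; the matrix is arbitrary):

```python
from sympy import Matrix

# A linear map from R^4 (so dim V = 4) to R^3 with a nontrivial kernel.
M = Matrix([[1, 2, 3, 4],
            [2, 4, 6, 8],
            [1, 1, 1, 1]])

nullity = len(M.nullspace())         # dim ker L
rank = M.rank()                      # dim L(V)
print(nullity, rank, nullity + rank) # 2 2 4 == dim V
```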

Contributor

This page titled 16: Kernel, Range, Nullity, Rank is shared under a not declared license and was authored, remixed, and/or curated by David Cherney, Tom Denton, & Andrew Waldron.
