Mathematics LibreTexts

G8: Movie Scripts 15-16

G.15 Kernel, Range, Nullity, Rank

Invertibility Conditions

Here I am going to discuss some of the conditions on the invertibility of a matrix stated in Theorem 16.3.1. Condition 1 states that \(X = M^{-1}V\) uniquely solves \(MX = V\), which is clearly equivalent to condition 4. Similarly, every square matrix \(M\) corresponds uniquely to a linear transformation \(L \colon \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}\), so condition 3 is equivalent to condition 1.
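The equivalence of unique solvability and invertibility can be seen numerically. Here is a minimal sketch with a made-up \(2 \times 2\) matrix (not from the text): when \(M\) is invertible, solving \(MX = V\) and computing \(M^{-1}V\) give the same unique answer.

```python
# Illustrative sketch (matrix and vector are made up): for an
# invertible M, the system M X = V has the unique solution X = M^{-1} V.
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # det = 5, so M is invertible
V = np.array([1.0, 2.0])

X = np.linalg.solve(M, V)           # solves M X = V directly
X_via_inverse = np.linalg.inv(M) @ V

# Both routes produce the same (unique) solution.
print(np.allclose(X, X_via_inverse))   # True
print(np.allclose(M @ X, V))           # True
```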

 

Condition 6 implies condition 4 because the adjoint construction produces the inverse, but the converse is not so obvious. For the converse (4 implying 6), we refer back to the proofs in Chapters 18 and 19. Note that \(\det M = 0\) exactly when \(M\) has an eigenvalue equal to \(0\), which implies \(M\) is not invertible. Thus condition 8 is equivalent to conditions 4, 5, 9, and 10.
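The link between a zero determinant and a zero eigenvalue can be checked on a small example. This sketch uses a made-up singular matrix (its second row is twice its first):

```python
# Illustrative sketch: a singular matrix (det M = 0) always has 0
# among its eigenvalues, so it cannot be invertible.
import numpy as np

M = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # second row = 2 * first row

det = np.linalg.det(M)
eigenvalues = np.linalg.eigvals(M)

print(np.isclose(det, 0.0))                    # True
print(np.any(np.isclose(eigenvalues, 0.0)))    # True: 0 is an eigenvalue
```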

 

By definition, the map \(M\) is injective if and only if its null space is trivial; moreover, the null space is spanned by the eigenvectors with eigenvalue \(0\). Hence conditions 8 and 14 are equivalent, and conditions 14, 15, and 16 are equivalent by the Dimension Formula (also known as the Rank-Nullity Theorem).
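The Dimension Formula \(\operatorname{rank} M + \operatorname{nullity} M = n\) can be verified numerically. In this sketch the \(3 \times 3\) matrix is made up (its third row is the sum of the first two, giving a one-dimensional null space); the nullity is estimated by counting near-zero singular values.

```python
# Illustrative check of the Dimension Formula rank(M) + nullity(M) = n
# on a 3x3 matrix with a one-dimensional null space.
import numpy as np

M = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])     # third row = row 1 + row 2

n = M.shape[1]
rank = np.linalg.matrix_rank(M)

# Nullity = number of (numerically) zero singular values.
singular_values = np.linalg.svd(M, compute_uv=False)
nullity = int(np.sum(singular_values < 1e-10))

print(rank, nullity, rank + nullity == n)   # 2 1 True
```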

 

Now conditions 11, 12, and 13 are all equivalent by the definition of a basis. Finally, if a matrix \(M\) is not row-equivalent to the identity matrix, then \(\det M = 0\), so conditions 2 and 8 are equivalent.
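The row-equivalence condition can also be seen computationally. This sketch (with made-up matrices) uses SymPy's `rref()` to show that an invertible matrix row-reduces to the identity while a singular one does not:

```python
# Illustrative sketch: an invertible matrix is row-equivalent to the
# identity; a singular matrix (det = 0) is not.
import sympy as sp

invertible = sp.Matrix([[2, 1], [1, 3]])   # det = 5
singular   = sp.Matrix([[1, 2], [2, 4]])   # det = 0

rref_inv, _ = invertible.rref()            # reduced row echelon form
rref_sing, _ = singular.rref()

print(rref_inv == sp.eye(2))    # True
print(rref_sing == sp.eye(2))   # False: a zero row remains
```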

 

Hints for Review Problem 3

Let's work through this problem. Let \(L \colon V\rightarrow W\) be a linear transformation. Show that \(\ker L=\{0_{V}\}\) if and only if \(L\) is one-to-one:

 

  1. First, suppose that \(\ker L=\{0_{V}\}\). Show that \(L\) is one-to-one. Remember what one-to-one means: whenever \(L(x) = L(y)\), we can be certain that \(x=y\). While this might seem like a strange thing to require, it really means that each vector in the range comes from a unique vector in the domain. Besides the one-to-one property, we also don't want to forget a more basic property of linear transformations, namely that they are linear: \(L(ax+by) = aL(x) + bL(y)\) for scalars \(a\) and \(b\). What if we rephrase the one-to-one property to say that \(L(x) - L(y) = 0\) implies \(x-y = 0\)? Can we connect that to the statement that \(\ker L=\{0_{V}\}\)? Remember that if \(L(v) = 0\), then \(v \in \ker L=\{0_{V}\}\).
  2. Now, suppose that \(L\) is one-to-one. Show that \(\ker L=\{0_{V}\}\). That is, show that \(0_{V}\) is in \(\ker L\), and then show that there are no other vectors in \(\ker L\). What would happen if the kernel were nonzero? If we had some vector \(v\) with \(L(v) = 0\) and \(v \neq 0\), we could try to show that this contradicts the assumption that \(L\) is one-to-one: whenever \(L(x) = L(y)\), we know \(x = y\). But if \(L(v) = 0\), then \(L(x + v) = L(x) + L(v) = L(x)\), so \(x\) and \(x + v\) map to the same vector. Does this cause a problem?
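The contradiction in part 2 can be made concrete for a matrix map. In this sketch (matrix and vectors made up), \(L\) has a nonzero kernel vector \(v\), so the distinct inputs \(x\) and \(x + v\) collide, and \(L\) fails to be one-to-one:

```python
# Illustrative counterexample: a map L with a nonzero kernel vector v
# cannot be one-to-one, because x and x + v have the same image.
import numpy as np

L = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # singular: nontrivial kernel

v = np.array([2.0, -1.0])           # L @ v = 0 and v != 0
x = np.array([1.0, 1.0])

print(np.allclose(L @ v, 0.0))            # True: v lies in ker L
print(np.allclose(L @ x, L @ (x + v)))    # True: two inputs, one output
```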

 

 

G.16 Least Squares and Singular Values

Least Squares: Hint for Review Problem 2

