9.6: Linear Transformations
- Understand the definition of a linear transformation in the context of vector spaces.
Recall that a transformation is simply a function that takes in a vector and produces a new vector. Consider the following definition.
Let \(V\) and \(W\) be vector spaces. Suppose \(T: V \to W\) is a function, so that for each \(\vec{x} \in V\), \(T\left(\vec{x}\right)\in W.\) Then \(T\) is a linear transformation if, whenever \(k, p\) are scalars and \(\vec{v}_1\) and \(\vec{v}_2\) are vectors in \(V\), \[T\left( k \vec{v}_1 + p \vec{v}_2 \right) = kT\left(\vec{v}_1\right)+ pT\left(\vec{v}_{2} \right)\nonumber \]
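The defining equation can be checked numerically for a concrete map. The sketch below (using numpy; the matrix \(A\), the seed, and the test vectors are arbitrary choices, not part of the text) verifies that \(T(\vec{x})=A\vec{x}\) satisfies the linearity condition:

```python
import numpy as np

# A hypothetical linear map T: R^3 -> R^2 given by an arbitrary matrix A.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])

def T(x):
    return A @ x

rng = np.random.default_rng(0)
k, p = rng.standard_normal(2)                 # arbitrary scalars
v1, v2 = rng.standard_normal(3), rng.standard_normal(3)

lhs = T(k * v1 + p * v2)                      # T(k v1 + p v2)
rhs = k * T(v1) + p * T(v2)                   # k T(v1) + p T(v2)
print(np.allclose(lhs, rhs))                  # prints True
```

A check on a handful of random vectors is of course not a proof; the algebraic argument below is what establishes linearity in general.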
Several important examples of linear transformations include the zero transformation, the identity transformation, and the scalar transformation.
Let \(V\) and \(W\) be vector spaces.

- The zero transformation \(0:V\to W\) is defined by \(0(\vec{v})=\vec{0}\) for all \(\vec{v}\in V\).
- The identity transformation \(1_V:V\to V\) is defined by \(1_V(\vec{v})=\vec{v}\) for all \(\vec{v}\in V\).
- The scalar transformation \(s_a:V\to V\), for a fixed \(a\in\mathbb{R}\), is defined by \(s_a(\vec{v})=a\vec{v}\) for all \(\vec{v}\in V\).

Show that each of these is a linear transformation.
Solution
We will show that the scalar transformation \(s_a\) is linear; the zero and identity transformations are left as exercises.

By Definition \(\PageIndex{1}\) we must show that for all scalars \(k, p\) and vectors \(\vec{v}_1\) and \(\vec{v}_2\) in \(V\), \(s_a\left( k \vec{v}_1 + p \vec{v}_2 \right) = k s_a\left(\vec{v}_1\right)+ p s_a\left(\vec{v}_{2} \right)\). Recall that \(a\) is a fixed scalar. \[\begin{aligned} s_a\left( k \vec{v}_1 + p \vec{v}_2 \right) &= a \left( k \vec{v}_1 + p \vec{v}_2 \right) \\ &= ak \vec{v}_1 + ap \vec{v}_2 \\ &= k \left(a \vec{v}_1\right) + p\left(a \vec{v}_2\right) \\ &= k s_a\left( \vec{v}_1 \right) + p s_a \left(\vec{v}_2 \right)\end{aligned}\] Therefore \(s_a\) is a linear transformation.
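As a quick sanity check (again, not a substitute for the proofs), all three maps can be tested numerically on \(\mathbb{R}^4\). The value \(a = 2.5\) and the test vectors below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
k, p = rng.standard_normal(2)
v1, v2 = rng.standard_normal(4), rng.standard_normal(4)

zero = lambda v: np.zeros_like(v)      # the zero transformation
ident = lambda v: v                    # the identity transformation
s_a = lambda v, a=2.5: a * v           # scalar transformation, a = 2.5 chosen arbitrarily

for f in (zero, ident, s_a):
    lhs = f(k * v1 + p * v2)
    rhs = k * f(v1) + p * f(v2)
    assert np.allclose(lhs, rhs)       # linearity condition holds for each map
print("all three maps satisfy the linearity condition")
```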
Below is a video on finding a linear transformation \(T:\mathbb{P}_1 \to M_{2\times 2}\) given \(T(a+bt)\) and \(T(c+dt)\).
Consider the following important theorem.
Let \(V\) and \(W\) be vector spaces, and let \(T:V \to W\) be a linear transformation. Then
- \(T\) preserves the zero vector. \[T(\vec{0})=\vec{0}\nonumber \]
- \(T\) preserves additive inverses. For all \(\vec{v}\in V\), \[T(-\vec{v})= -T(\vec{v})\nonumber \]
- \(T\) preserves linear combinations. For all \(\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_m \in V\) and all \(k_1, k_2, \ldots, k_m\in\mathbb{R}\), \[T(k_1\vec{v}_1 + k_2\vec{v}_2 + \cdots + k_m\vec{v}_m) = k_1T(\vec{v}_1) + k_2T(\vec{v}_2) + \cdots + k_mT(\vec{v}_m).\nonumber \]
Proof
- Let \(\vec{0}_V\) denote the zero vector of \(V\) and let \(\vec{0}_W\) denote the zero vector of \(W\). We want to prove that \(T(\vec{0}_V)=\vec{0}_W\). Let \(\vec{v}\in V\). Then \(0\vec{v}=\vec{0}_V\) and \[T(\vec{0}_V)=T(0\vec{v})=0T(\vec{v})=\vec{0}_W.\nonumber \]
- Let \(\vec{v}\in V\); then \(-\vec{v}\in V\) is the additive inverse of \(\vec{v}\), so \(\vec{v} + (-\vec{v})=\vec{0}_V\). Thus \[\begin{aligned} T(\vec{v} + (-\vec{v})) & = T(\vec{0}_V) \\ T(\vec{v}) + T(-\vec{v}) & = \vec{0}_W \\ T(-\vec{v}) & = \vec{0}_W - T(\vec{v}) = - T(\vec{v}).\end{aligned}\]
- This result follows from preservation of addition and preservation of scalar multiplication. A formal proof would be by induction on \(m\).
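All three properties of the theorem can be observed on a concrete example. The sketch below (matrix \(A\), vectors, and coefficients are arbitrary choices) checks each property for \(T(\vec{x})=A\vec{x}\):

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [4.0,  0.5]])            # arbitrary matrix; T(x) = A x
T = lambda x: A @ x

v = np.array([3.0, -2.0])
assert np.allclose(T(np.zeros(2)), np.zeros(2))   # T preserves the zero vector
assert np.allclose(T(-v), -T(v))                  # T preserves additive inverses

# T preserves linear combinations: T(sum k_i v_i) = sum k_i T(v_i).
ks = [1.5, -0.5, 2.0]
vs = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([2.0, 3.0])]
combo = sum(k * u for k, u in zip(ks, vs))
assert np.allclose(T(combo), sum(k * T(u) for k, u in zip(ks, vs)))
print("all three properties verified")
```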
Consider the following example using the above theorem.
Let \(T:\mathbb{P}_2 \to \mathbb{R}\) be a linear transformation such that \[T(x^2+x)=-1; T(x^2-x)=1; T(x^2+1)=3.\nonumber \] Find \(T(4x^2+5x-3)\).
We provide two solutions to this problem.
Solution 1:
Suppose \(a(x^2+x) + b(x^2-x) + c(x^2+1) = 4x^2+5x-3\). Then \[(a+b+c)x^2 + (a-b)x + c = 4x^2+5x-3.\nonumber \] Solving for \(a\), \(b\), and \(c\) results in the unique solution \(a=6\), \(b=1\), \(c=-3\). Thus \[\begin{aligned}T(4x^2+5x-3)&=T(6(x^2+x)+(x^2-x)-3(x^2+1)) \\ &=6T(x^2+x)+T(x^2-x)-3T(x^2+1) \\ &=6(-1)+1-3(3)=-14.\end{aligned}\]
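The \(3\times 3\) system in Solution 1 can also be solved mechanically. A sketch with numpy (columns ordered \(a, b, c\); the final line applies linearity using the given values of \(T\)):

```python
import numpy as np

# Coefficient matrix for a+b+c = 4, a-b = 5, c = -3 (columns: a, b, c).
M = np.array([[1.0,  1.0, 1.0],
              [1.0, -1.0, 0.0],
              [0.0,  0.0, 1.0]])
rhs = np.array([4.0, 5.0, -3.0])

a, b, c = np.linalg.solve(M, rhs)      # -> approximately 6, 1, -3

# Apply T by linearity, using T(x^2+x) = -1, T(x^2-x) = 1, T(x^2+1) = 3.
value = a * (-1) + b * 1 + c * 3       # -> approximately -14
print(a, b, c, value)
```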
Solution 2:
Notice that \(S=\{ x^2+x, x^2-x, x^2+1\}\) is a basis of \(\mathbb{ P}_2\), and thus \(x^2\), \(x\), and \(1\) can each be written as a linear combination of elements of \(S\).
\[\begin{aligned} x^2 & = \textstyle \frac{1}{2}(x^2+x) + \frac{1}{2}(x^2-x) \\ x & = \textstyle \frac{1}{2}(x^2+x) - \frac{1}{2}(x^2-x) \\ 1 & = (x^2+1)-\textstyle \frac{1}{2}(x^2+x) - \frac{1}{2}(x^2-x).\end{aligned}\] Then \[\begin{aligned} T(x^2) & = \textstyle T\left(\frac{1}{2}(x^2+x) + \frac{1}{2}(x^2-x)\right) =\frac{1}{2}T(x^2+x) + \frac{1}{2}T(x^2-x)\\ & = \textstyle \frac{1}{2}(-1) + \frac{1}{2}(1) = 0. \\ T(x) & = \textstyle T\left(\frac{1}{2}(x^2+x) - \frac{1}{2}(x^2-x)\right) = \frac{1}{2}T(x^2+x) - \frac{1}{2}T(x^2-x) \\ & = \textstyle \frac{1}{2}(-1) - \frac{1}{2}(1) = -1.\\ T(1) & = \textstyle T\left((x^2+1)-\frac{1}{2}(x^2+x) - \frac{1}{2}(x^2-x)\right)\\ & = \textstyle T(x^2+1)-\frac{1}{2}T(x^2+x) - \frac{1}{2}T(x^2-x) \\ & = \textstyle 3-\frac{1}{2}(-1) - \frac{1}{2}(1) = 3.\end{aligned}\]
Therefore, \[\begin{aligned} T(4x^2+5x-3) & = 4T(x^2) + 5T(x) -3T(1) \\ & = 4(0) + 5(-1) - 3(3)=-14.\end{aligned}\] The advantage of Solution 2 over Solution 1 is that if you were now asked to find \(T(-6x^2-13x+9)\), it is easy to use \(T(x^2)=0\), \(T(x)=-1\) and \(T(1)= 3\): \[\begin{aligned} T(-6x^2-13x+9) & = -6T(x^2)-13T(x)+9T(1) \\ & = -6(0)-13(-1)+9(3)=13+27=40.\end{aligned}\] More generally, \[\begin{aligned} T(ax^2+bx+c) & = aT(x^2)+bT(x)+cT(1) \\ & = a(0)+b(-1)+c(3)=-b+3c.\end{aligned}\]
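The general formula at the end of Solution 2 amounts to a short function: once \(T(x^2)\), \(T(x)\), and \(T(1)\) are known, \(T\) of any polynomial is a dot product with its coefficients. A sketch (polynomials represented as coefficient triples \((a, b, c)\), a hypothetical encoding):

```python
# T on P_2 determined by T(x^2) = 0, T(x) = -1, T(1) = 3 (from Solution 2).
def T(coeffs):
    a, b, c = coeffs                   # p(x) = a x^2 + b x + c
    return a * 0 + b * (-1) + c * 3    # linearity: a T(x^2) + b T(x) + c T(1)

print(T((4, 5, -3)))     # T(4x^2 + 5x - 3)   -> -14
print(T((-6, -13, 9)))   # T(-6x^2 - 13x + 9) -> 40
```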
Below is a video on matching \(2\times 2\) matrices with \(\mathbb{R}^2\) transformations.
Below is a video on writing a matrix for the integration linear transformation.
Below is a video on writing a matrix for the horizontal shift linear transformation.
Below is a video on writing a matrix for a derivative linear transformation.
If two linear transformations act in the same way on every vector \(\vec{v}\), then we say that these transformations are equal.
Let \(S\) and \(T\) be linear transformations from \(V\) to \(W\). Then \(S = T\) if and only if for every \(\vec{v} \in V\), \[S \left( \vec{v} \right) = T \left( \vec{v} \right)\nonumber \]
The definition above requires that two transformations have the same action on every vector in order for them to be equal. The next theorem shows that it is enough to check their action on a spanning set.
Let \(V\) and \(W\) be vector spaces and suppose that \(S\) and \(T\) are linear transformations from \(V\) to \(W\). Then in order for \(S\) and \(T\) to be equal, it suffices that \(S(\vec{v}_i) = T(\vec{v}_i)\) for each \(i\), where \(V = \mathrm{span} \{ \vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n\}.\)
This theorem tells us that a linear transformation is completely determined by its actions on a spanning set. We can also examine the effect of a linear transformation on a basis.
Suppose \(V\) and \(W\) are vector spaces, let \(\{ \vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n\}\) be a basis of \(V\), and let \(\{ \vec{w}_1, \vec{w}_2, \ldots, \vec{w}_n\}\) be any given vectors in \(W\), not necessarily distinct. Then there exists a unique linear transformation \(T: V \to W\) with \(T (\vec{v}_i) = \vec{w}_i\) for each \(i\).
Furthermore, if \[\vec{v} = k_1\vec{v}_1+k_2\vec{v}_2+ \cdots+ k_n\vec{v}_n\nonumber \] is a vector of \(V\), then \[T(\vec{v}) = k_1\vec{w}_1+k_2\vec{w}_2+ \cdots+ k_n\vec{w}_n.\nonumber \]
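This recipe, finding the coordinates \(k_1, \ldots, k_n\) of \(\vec{v}\) in the basis and then forming the same combination of the \(\vec{w}_i\), can be sketched numerically. The basis of \(\mathbb{R}^2\) and the images in \(\mathbb{R}^3\) below are arbitrary choices for illustration:

```python
import numpy as np

# A hypothetical basis of R^2 and prescribed images in R^3.
v1, v2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
w1, w2 = np.array([1.0, 0.0, 2.0]), np.array([0.0, 3.0, -1.0])

def T(v):
    # Solve for coordinates k1, k2 with v = k1 v1 + k2 v2, then use linearity:
    # T(v) = k1 w1 + k2 w2.
    k1, k2 = np.linalg.solve(np.column_stack([v1, v2]), v)
    return k1 * w1 + k2 * w2

print(T(v1))                # equals w1 (up to rounding)
print(T(2 * v1 - 3 * v2))   # equals 2 w1 - 3 w2, by linearity
```

The point of the theorem is that this \(T\) is the only linear map sending each \(\vec{v}_i\) to \(\vec{w}_i\).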
Below is a video on determining which sequences of linear transformations are valid (composition of linear transformations).