9.6: Linear Transformations


    Outcomes

    1. Understand the definition of a linear transformation in the context of vector spaces.

    Recall that a function \(T: V \to W\) assigns to each vector in \(V\) a vector in \(W\). A linear transformation is a function that preserves the vector space operations, as the following definition makes precise.

    Definition \(\PageIndex{1}\): Linear Transformation

    Let \(V\) and \(W\) be vector spaces. Suppose \(T: V \mapsto W\) is a function, where for each \(\vec{x} \in V\), \(T\left(\vec{x}\right)\in W\). Then \(T\) is a linear transformation if whenever \(k, p\) are scalars and \(\vec{v}_1\) and \(\vec{v}_2\) are vectors in \(V\), \[T\left( k \vec{v}_1 + p \vec{v}_2 \right) = kT\left(\vec{v}_1\right)+ pT\left(\vec{v}_{2} \right)\nonumber \]

    Several important examples of linear transformations include the zero transformation, the identity transformation, and the scalar transformation.

    Example \(\PageIndex{1}\): Linear Transformations

    Let \(V\) and \(W\) be vector spaces.

    1. The zero transformation
      \(0:V\to W\) is defined by \(0(\vec{v})=\vec{0}\) for all \(\vec{v}\in V\).
    2. The identity transformation
      \(1_V:V\to V\) is defined by \(1_V(\vec{v})=\vec{v}\) for all \(\vec{v}\in V\).
    3. The scalar transformation Let \(a\in\mathbb{R}\).
      \(s_a:V\to V\) is defined by \(s_a(\vec{v})=a\vec{v}\text{ for all }\vec{v}\in V\).

    We will show that the scalar transformation \(s_a\) is linear; the others are left as an exercise.

    By Definition \(\PageIndex{1}\) we must show that for all scalars \(k, p\) and vectors \(\vec{v}_1\) and \(\vec{v}_2\) in \(V\), \(s_a\left( k \vec{v}_1 + p \vec{v}_2 \right) = k s_a\left(\vec{v}_1\right)+ p s_a\left(\vec{v}_{2} \right)\). Note that \(a\) is itself a fixed scalar. \[\begin{aligned} s_a\left( k \vec{v}_1 + p \vec{v}_2 \right) &= a \left( k \vec{v}_1 + p \vec{v}_2 \right) \\ &= ak \vec{v}_1 + ap \vec{v}_2 \\ &= k \left(a \vec{v}_1\right) + p\left(a \vec{v}_2\right) \\ &= k s_a\left( \vec{v}_1 \right) + p s_a \left(\vec{v}_2 \right)\end{aligned}\] Therefore \(s_a\) is a linear transformation.
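    The identity proved above can also be checked numerically. The following is a quick sanity check (not a proof) of the linearity equation for \(s_a\), using plain Python lists as vectors in \(\mathbb{R}^3\); the helper names `s_a` and `lin_comb` are ours, chosen to mirror the notation of the text.

```python
# Numerical sanity check of s_a(k*v1 + p*v2) = k*s_a(v1) + p*s_a(v2),
# with vectors in R^3 represented as Python lists.

def s_a(a, v):
    """Scalar transformation: multiply every component of v by a."""
    return [a * x for x in v]

def lin_comb(k, v1, p, v2):
    """Form the linear combination k*v1 + p*v2 componentwise."""
    return [k * x + p * y for x, y in zip(v1, v2)]

a, k, p = 3.0, 2.0, -5.0
v1, v2 = [1.0, 0.0, 4.0], [2.0, -1.0, 0.5]

lhs = s_a(a, lin_comb(k, v1, p, v2))   # s_a(k*v1 + p*v2)
rhs = lin_comb(k, s_a(a, v1), p, s_a(a, v2))  # k*s_a(v1) + p*s_a(v2)
assert lhs == rhs  # both equal [-24.0, 15.0, 16.5]
```

    Of course, agreement for one choice of scalars and vectors is only evidence; the algebraic argument above is what establishes linearity for all inputs.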

    Consider the following important theorem.

    Theorem \(\PageIndex{1}\): Properties of Linear Transformations

    Let \(V\) and \(W\) be vector spaces, and \(T:V \mapsto W\) a linear transformation. Then

    1. \(T\) preserves the zero vector. \[T(\vec{0})=\vec{0}\nonumber \]
    2. \(T\) preserves additive inverses. For all \(\vec{v}\in V\), \[T(-\vec{v})= -T(\vec{v})\nonumber \]
    3. \(T\) preserves linear combinations. For all \(\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_m \in V\) and all \(k_1, k_2, \ldots, k_m\in\mathbb{R}\), \[T(k_1\vec{v}_1 + k_2\vec{v}_2 + \cdots + k_m\vec{v}_m) = k_1T(\vec{v}_1) + k_2T(\vec{v}_2) + \cdots + k_mT(\vec{v}_m).\nonumber \]
    Proof

    1. Let \(\vec{0}_V\) denote the zero vector of \(V\) and let \(\vec{0}_W\) denote the zero vector of \(W\). We want to prove that \(T(\vec{0}_V)=\vec{0}_W\). Let \(\vec{v}\in V\). Then \(0\vec{v}=\vec{0}_V\) and \[T(\vec{0}_V)=T(0\vec{v})=0T(\vec{v})=\vec{0}_W.\nonumber \]
    2. Let \(\vec{v}\in V\); then \(-\vec{v}\in V\) is the additive inverse of \(\vec{v}\), so \(\vec{v} + (-\vec{v})=\vec{0}_V\). Thus \[\begin{aligned} T(\vec{v} + (-\vec{v})) & = T(\vec{0}_V) \\ T(\vec{v}) + T(-\vec{v}) & = \vec{0}_W \\ T(-\vec{v}) & = \vec{0}_W - T(\vec{v}) = - T(\vec{v}).\end{aligned}\]
    3. This result follows from preservation of addition and preservation of scalar multiplication. A formal proof would be by induction on \(m\).

    Consider the following example using the above theorem.

    Example \(\PageIndex{2}\): Linear Combination

    Let \(T:\mathbb{P}_2 \to \mathbb{R}\) be a linear transformation such that \[T(x^2+x)=-1; T(x^2-x)=1; T(x^2+1)=3.\nonumber \] Find \(T(4x^2+5x-3)\).

    We provide two solutions to this problem.

    Solution 1:

    Suppose \(a(x^2+x) + b(x^2-x) + c(x^2+1) = 4x^2+5x-3\). Then \[(a+b+c)x^2 + (a-b)x + c = 4x^2+5x-3.\nonumber \] Solving for \(a\), \(b\), and \(c\) results in the unique solution \(a=6\), \(b=1\), \(c=-3\). Thus \[\begin{aligned}T(4x^2+5x-3)&=T(6(x^2+x)+(x^2-x)-3(x^2+1)) \\ &=6T(x^2+x)+T(x^2-x)-3T(x^2+1) \\ &=6(-1)+1-3(3)=-14.\end{aligned}\]

    Solution 2:

    Notice that \(S=\{ x^2+x, x^2-x, x^2+1\}\) is a basis of \(\mathbb{ P}_2\), and thus \(x^2\), \(x\), and \(1\) can each be written as a linear combination of elements of \(S\).

    \[\begin{aligned} x^2 & = \textstyle \frac{1}{2}(x^2+x) + \frac{1}{2}(x^2-x) \\ x & = \textstyle \frac{1}{2}(x^2+x) - \frac{1}{2}(x^2-x) \\ 1 & = (x^2+1)-\textstyle \frac{1}{2}(x^2+x) - \frac{1}{2}(x^2-x).\end{aligned}\] Then \[\begin{aligned} T(x^2) & = \textstyle T\left(\frac{1}{2}(x^2+x) + \frac{1}{2}(x^2-x)\right) =\frac{1}{2}T(x^2+x) + \frac{1}{2}T(x^2-x)\\ & = \textstyle \frac{1}{2}(-1) + \frac{1}{2}(1) = 0. \\ T(x) & = \textstyle T\left(\frac{1}{2}(x^2+x) - \frac{1}{2}(x^2-x)\right) = \frac{1}{2}T(x^2+x) - \frac{1}{2}T(x^2-x) \\ & = \textstyle \frac{1}{2}(-1) - \frac{1}{2}(1) = -1.\\ T(1) & = \textstyle T\left((x^2+1)-\frac{1}{2}(x^2+x) - \frac{1}{2}(x^2-x)\right)\\ & = \textstyle T(x^2+1)-\frac{1}{2}T(x^2+x) - \frac{1}{2}T(x^2-x) \\ & = \textstyle 3-\frac{1}{2}(-1) - \frac{1}{2}(1) = 3.\end{aligned}\]

    Therefore, \[\begin{aligned} T(4x^2+5x-3) & = 4T(x^2) + 5T(x) -3T(1) \\ & = 4(0) + 5(-1) - 3(3)=-14.\end{aligned}\] The advantage of Solution 2 over Solution 1 is that if you were now asked to find \(T(-6x^2-13x+9)\), it is easy to use \(T(x^2)=0\), \(T(x)=-1\) and \(T(1)= 3\): \[\begin{aligned} T(-6x^2-13x+9) & = -6T(x^2)-13T(x)+9T(1) \\ & = -6(0)-13(-1)+9(3)=13+27=40.\end{aligned}\] More generally, \[\begin{aligned} T(ax^2+bx+c) & = aT(x^2)+bT(x)+cT(1) \\ & = a(0)+b(-1)+c(3)=-b+3c.\end{aligned}\]
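    The payoff of Solution 2 is easy to express in code: once \(T(x^2)\), \(T(x)\), and \(T(1)\) are known, evaluating \(T\) on any polynomial is a single linear combination. The dictionary below is our own representation, not notation from the text.

```python
# Values computed in Solution 2: T(x^2) = 0, T(x) = -1, T(1) = 3.
T_basis = {'x^2': 0, 'x': -1, '1': 3}

def T(a, b, c):
    """Evaluate T(a*x^2 + b*x + c) by linearity: a*T(x^2) + b*T(x) + c*T(1)."""
    return a * T_basis['x^2'] + b * T_basis['x'] + c * T_basis['1']

assert T(4, 5, -3) == -14    # first computation in the text
assert T(-6, -13, 9) == 40   # second computation in the text
assert T(1, 2, 3) == -2 + 9  # matches the general formula -b + 3c
```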

    Suppose two linear transformations act in the same way on every vector \(\vec{v}\) in \(V\). Then we say that these transformations are equal.

    Definition \(\PageIndex{2}\): Equal Transformations

    Let \(S\) and \(T\) be linear transformations from \(V\) to \(W\). Then \(S = T\) if and only if for every \(\vec{v} \in V\), \[S \left( \vec{v} \right) = T \left( \vec{v} \right)\nonumber \]

    The definition above requires that two transformations have the same action on every vector in order for them to be equal. The next theorem shows that it is only necessary to check the action of the transformations on a spanning set.

    Theorem \(\PageIndex{2}\): Transformation of a Spanning Set

    Let \(V\) and \(W\) be vector spaces and suppose that \(S\) and \(T\) are linear transformations from \(V\) to \(W\). Then in order for \(S\) and \(T\) to be equal, it suffices that \(S(\vec{v}_i) = T(\vec{v}_i)\) for each \(i\), where \(V = \mathrm{span}\{ \vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n\}.\)

    This theorem tells us that a linear transformation is completely determined by its actions on a spanning set. We can also examine the effect of a linear transformation on a basis.

    Theorem \(\PageIndex{3}\): Transformation of a Basis

    Suppose \(V\) and \(W\) are vector spaces, let \(\{ \vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n\}\) be a basis of \(V\), and let \(\{ \vec{w}_1, \vec{w}_2, \ldots, \vec{w}_n\}\) be any given vectors in \(W\), which need not be distinct. Then there exists a unique linear transformation \(T: V \mapsto W\) with \(T (\vec{v}_i) = \vec{w}_i\) for each \(i\).

    Furthermore, if \[\vec{v} = k_1\vec{v}_1+k_2\vec{v}_2+ \cdots+ k_n\vec{v}_n\nonumber \] is a vector of \(V\), then \[T(\vec{v}) = k_1\vec{w}_1+k_2\vec{w}_2+ \cdots+ k_n\vec{w}_n.\nonumber \]
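    The construction in this theorem can be sketched concretely. The following is a minimal illustration, under the assumption that \(V = \mathbb{R}^2\) with a hypothetical basis \(\{\vec{v}_1, \vec{v}_2\}\) and chosen images \(\vec{w}_1, \vec{w}_2 \in \mathbb{R}^3\): to evaluate \(T(\vec{v})\), find the coordinates \(k_1, k_2\) of \(\vec{v}\) in the basis (here via Cramer's rule on a \(2 \times 2\) system), then output \(k_1\vec{w}_1 + k_2\vec{w}_2\). Exact rational arithmetic is used to avoid rounding issues.

```python
from fractions import Fraction as F

v1, v2 = (F(1), F(1)), (F(1), F(-1))             # a basis of R^2
w1, w2 = (F(2), F(0), F(1)), (F(0), F(3), F(0))  # chosen images in R^3

def T(v):
    """Evaluate the unique linear map with T(v1) = w1, T(v2) = w2."""
    # Coordinates of v = k1*v1 + k2*v2 via Cramer's rule on the 2x2 system.
    det = v1[0] * v2[1] - v2[0] * v1[1]
    k1 = (v[0] * v2[1] - v2[0] * v[1]) / det
    k2 = (v1[0] * v[1] - v[0] * v1[1]) / det
    # T(v) = k1*w1 + k2*w2, exactly as the theorem prescribes.
    return tuple(k1 * a + k2 * b for a, b in zip(w1, w2))

assert T(v1) == w1 and T(v2) == w2  # each basis vector maps to its image
assert T((2, 0)) == tuple(a + b for a, b in zip(w1, w2))  # v1+v2 -> w1+w2
```

    Uniqueness is visible in the code: once the coordinates \(k_1, k_2\) are determined by the basis, the output \(k_1\vec{w}_1 + k_2\vec{w}_2\) is forced.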

    This page titled 9.6: Linear Transformations is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Ken Kuttler (Lyryx) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
