2.6E: Linear Transformations Exercises



    Let \(T : \mathbb{R}^3 \to \mathbb{R}^2\) be a linear transformation.

    1. Find \(T \left[ \begin{array}{r} 8 \\ 3 \\ 7 \end{array} \right]\) if \(T \left[ \begin{array}{r} 1 \\ 0 \\ -1 \end{array} \right] = \left[ \begin{array}{r} 2 \\ 3 \end{array} \right]\)
      and \(T \left[ \begin{array}{r} 2 \\ 1 \\ 3 \end{array} \right] = \left[ \begin{array}{r} -1 \\ 0 \end{array} \right]\).

    2. Find \(T \left[ \begin{array}{r} 5 \\ 6 \\ -13 \end{array} \right]\) if \(T \left[ \begin{array}{r} 3 \\ 2\\ -1 \end{array} \right] = \left[ \begin{array}{r} 3 \\ 5 \end{array} \right]\)
      and \(T \left[ \begin{array}{r} 2 \\ 0 \\ 5 \end{array} \right] = \left[ \begin{array}{r} -1 \\ 2 \end{array} \right]\).

    b. \(\left[ \begin{array}{r} 5 \\ 6 \\ -13 \end{array} \right] = 3 \left[ \begin{array}{r} 3 \\ 2 \\ -1 \end{array} \right] - 2 \left[ \begin{array}{r} 2 \\ 0 \\ 5 \end{array} \right]\), so
      \(T \left[ \begin{array}{r} 5 \\ 6 \\ -13 \end{array} \right] = 3T \left[ \begin{array}{r} 3 \\ 2 \\ -1 \end{array} \right] - 2T \left[ \begin{array}{r} 2 \\ 0 \\ 5 \end{array} \right] = 3 \left[ \begin{array}{r} 3 \\ 5 \end{array} \right] - 2 \left[ \begin{array}{r} -1 \\ 2 \end{array} \right] = \left[ \begin{array}{r} 11 \\ 11 \end{array} \right]\)
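    The combination used in this solution can be double-checked numerically. The following NumPy sketch (not part of the original exercise set; the variable names are illustrative) solves for the coefficients and then applies linearity:

    ```python
    import numpy as np

    # Express the target as a combination of the vectors with known images:
    # [5, 6, -13] = a*[3, 2, -1] + b*[2, 0, 5]
    M = np.column_stack([[3.0, 2.0, -1.0], [2.0, 0.0, 5.0]])
    target = np.array([5.0, 6.0, -13.0])
    coeffs, *_ = np.linalg.lstsq(M, target, rcond=None)
    a, b = coeffs  # a = 3, b = -2 (up to rounding)

    # By linearity, T(target) = a*T(v1) + b*T(v2)
    result = a * np.array([3.0, 5.0]) + b * np.array([-1.0, 2.0])
    print(np.allclose(result, [11.0, 11.0]))  # True
    ```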

    Let \(T : \mathbb{R}^4 \to \mathbb{R}^3\) be a linear transformation.

    1. Find \(T \left[ \begin{array}{r} 1 \\ 3 \\ -2 \\ -3 \end{array} \right]\) if \(T \left[ \begin{array}{r} 1 \\ 1 \\ 0 \\ -1 \end{array} \right] = \left[ \begin{array}{r} 2 \\ 3 \\ -1 \end{array} \right]\)
      and \(T \left[ \begin{array}{r} 0 \\ -1 \\ 1 \\ 1 \end{array} \right] = \left[ \begin{array}{r} 5 \\ 0 \\ 1 \end{array} \right]\).

    2. Find \(T \left[ \begin{array}{r} 5 \\ -1 \\ 2 \\ -4 \end{array} \right]\) if \(T \left[ \begin{array}{r} 1 \\ 1 \\ 1 \\ 1 \end{array} \right] = \left[ \begin{array}{r} 5 \\ 1 \\ -3 \end{array} \right]\)
      and \(T \left[ \begin{array}{r} -1 \\ 1 \\ 0 \\ 2 \end{array} \right] = \left[ \begin{array}{r} 2 \\ 0 \\ 1 \end{array} \right]\).

    b. As in 1(b), \(T \left[ \begin{array}{r} 5 \\ -1 \\ 2 \\ -4 \end{array} \right] = \left[ \begin{array}{r} 4 \\ 2 \\ -9 \end{array} \right]\).
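    Here \(\left[ \begin{array}{r} 5 \\ -1 \\ 2 \\ -4 \end{array} \right] = 2 \left[ \begin{array}{r} 1 \\ 1 \\ 1 \\ 1 \end{array} \right] - 3 \left[ \begin{array}{r} -1 \\ 1 \\ 0 \\ 2 \end{array} \right]\), which a short NumPy sketch (again, an added check, not part of the original) confirms:

    ```python
    import numpy as np

    # [5, -1, 2, -4] = a*[1, 1, 1, 1] + b*[-1, 1, 0, 2]
    M = np.column_stack([[1.0, 1.0, 1.0, 1.0], [-1.0, 1.0, 0.0, 2.0]])
    target = np.array([5.0, -1.0, 2.0, -4.0])
    coeffs, *_ = np.linalg.lstsq(M, target, rcond=None)
    a, b = coeffs  # a = 2, b = -3 (up to rounding)

    # By linearity, T(target) = a*T(v1) + b*T(v2)
    result = a * np.array([5.0, 1.0, -3.0]) + b * np.array([2.0, 0.0, 1.0])
    print(np.allclose(result, [4.0, 2.0, -9.0]))  # True
    ```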

    In each case assume that the transformation \(T\) is linear, and use Theorem [thm:005789] to obtain the matrix \(A\) of \(T\).

    1. \(T : \mathbb{R}^2 \to \mathbb{R}^2\) is reflection in the line \(y = -x\).
    2. \(T : \mathbb{R}^2 \to \mathbb{R}^2\) is given by \(T(\mathbf{x}) = -\mathbf{x}\) for each \(\mathbf{x}\) in \(\mathbb{R}^2\).
    3. \(T : \mathbb{R}^2 \to \mathbb{R}^2\) is clockwise rotation through \(\frac{\pi}{4}\).
    4. \(T : \mathbb{R}^2 \to \mathbb{R}^2\) is counterclockwise rotation through \(\frac{\pi}{4}\).
    b. \(T(\mathbf{e}_{1}) = -\mathbf{e}_{1}\) and \(T(\mathbf{e}_{2}) = -\mathbf{e}_{2}\). So \(A = \left[ \begin{array}{cc} T(\mathbf{e}_{1}) & T(\mathbf{e}_{2}) \end{array} \right] = \left[ \begin{array}{cc} -\mathbf{e}_{1} & -\mathbf{e}_{2} \end{array} \right] = \left[ \begin{array}{rr} -1 & 0 \\ 0 & -1 \end{array} \right]\).
    d. \(T(\mathbf{e}_{1}) = \left[ \def\arraystretch{1.5}\begin{array}{r} \frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} \end{array} \right]\) and \(T(\mathbf{e}_{2}) = \left[ \def\arraystretch{1.5}\begin{array}{r} -\frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} \end{array} \right]\)

      So \(A = \left[ \begin{array}{cc} T(\mathbf{e}_{1}) & T(\mathbf{e}_{2}) \end{array} \right] = \frac{\sqrt{2}}{2} \left[ \begin{array}{rr} 1 & -1 \\ 1 & 1 \end{array} \right]\).
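    The part (d) matrix is the standard counterclockwise rotation matrix evaluated at \(\theta = \frac{\pi}{4}\), which can be confirmed numerically (an added NumPy sketch):

    ```python
    import numpy as np

    theta = np.pi / 4  # counterclockwise rotation through pi/4 (part d)
    A = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    # The columns are T(e1) and T(e2); together they equal (sqrt(2)/2)*[[1,-1],[1,1]]
    print(np.allclose(A, (np.sqrt(2) / 2) * np.array([[1.0, -1.0], [1.0, 1.0]])))  # True
    ```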

    In each case use Theorem [thm:005789] to obtain the matrix \(A\) of the transformation \(T\). You may assume that \(T\) is linear in each case.

    1. \(T : \mathbb{R}^3 \to \mathbb{R}^3\) is reflection in the \(x-z\) plane.
    2. \(T : \mathbb{R}^3 \to \mathbb{R}^3\) is reflection in the \(y-z\) plane.
    b. \(T(\mathbf{e}_{1}) = -\mathbf{e}_{1}\), \(T(\mathbf{e}_{2}) = \mathbf{e}_{2}\) and \(T(\mathbf{e}_{3}) = \mathbf{e}_{3}\). Hence Theorem [thm:005789] gives \(A = \left[ \begin{array}{ccc} T(\mathbf{e}_{1}) & T(\mathbf{e}_{2}) & T(\mathbf{e}_{3}) \end{array} \right] = \left[ \begin{array}{ccc} -\mathbf{e}_{1} & \mathbf{e}_{2} & \mathbf{e}_{3} \end{array} \right] = \left[ \begin{array}{rrr} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right]\).

    Let \(T : \mathbb{R}^n \to \mathbb{R}^m\) be a linear transformation.

    1. If \(\mathbf{x}\) is in \(\mathbb{R}^n\), we say that \(\mathbf{x}\) is in the kernel of \(T\) if \(T(\mathbf{x}) = \mathbf{0}\). If \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) are both in the kernel of \(T\), show that \(a\mathbf{x}_{1} + b\mathbf{x}_{2}\) is also in the kernel of \(T\) for all scalars \(a\) and \(b\).
    2. If \(\mathbf{y}\) is in \(\mathbb{R}^m\), we say that \(\mathbf{y}\) is in the image of \(T\) if \(\mathbf{y} = T(\mathbf{x})\) for some \(\mathbf{x}\) in \(\mathbb{R}^n\). If \(\mathbf{y}_{1}\) and \(\mathbf{y}_{2}\) are both in the image of \(T\), show that \(a\mathbf{y}_{1} + b\mathbf{y}_{2}\) is also in the image of \(T\) for all scalars \(a\) and \(b\).
    b. We have \(\mathbf{y}_{1} = T(\mathbf{x}_{1})\) for some \(\mathbf{x}_{1}\) in \(\mathbb{R}^n\), and \(\mathbf{y}_{2} = T(\mathbf{x}_{2})\) for some \(\mathbf{x}_{2}\) in \(\mathbb{R}^n\). So \(a\mathbf{y}_{1} + b\mathbf{y}_{2} = aT(\mathbf{x}_{1}) + bT(\mathbf{x}_{2}) = T(a\mathbf{x}_{1} + b\mathbf{x}_{2})\). Hence \(a\mathbf{y}_{1} + b\mathbf{y}_{2}\) is also in the image of \(T\).

    Use Theorem [thm:005789] to find the matrix of the identity transformation \(1_{\mathbb{R}^n} : \mathbb{R}^n \to \mathbb{R}^n\) defined by \(1_{\mathbb{R}^n}(\mathbf{x}) = \mathbf{x}\) for each \(\mathbf{x}\) in \(\mathbb{R}^n\).

    In each case show that \(T : \mathbb{R}^2 \to \mathbb{R}^2\) is not a linear transformation.

    1. \(T \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} xy \\ 0 \end{array} \right]\)
    2. \(T \left[ \begin{array}{c} x \\ y \end{array} \right] = \left[ \begin{array}{c} 0 \\ y^2 \end{array} \right]\)

    b. \(T\left(2 \left[ \begin{array}{c} 0 \\ 1 \end{array} \right] \right) = \left[ \begin{array}{c} 0 \\ 4 \end{array} \right]\), while \(2\,T \left[ \begin{array}{c} 0 \\ 1 \end{array} \right] = \left[ \begin{array}{c} 0 \\ 2 \end{array} \right]\), so \(T(2\mathbf{x}) \neq 2T(\mathbf{x})\).

    In each case show that \(T\) is either reflection in a line or rotation through an angle, and find the line or angle.

    1. \(T \left[ \begin{array}{c} x \\ y \end{array} \right] = \frac{1}{5} \left[ \begin{array}{c} -3x + 4y \\ 4x + 3y \end{array} \right]\)
    2. \(T \left[ \begin{array}{c} x \\ y \end{array} \right] = \frac{1}{\sqrt{2}} \left[ \begin{array}{c} x + y \\ -x + y \end{array} \right]\)
    3. \(T \left[ \begin{array}{c} x \\ y \end{array} \right] = \frac{1}{\sqrt{3}} \left[ \begin{array}{c} x - \sqrt{3}y \\ \sqrt{3}x + y \end{array} \right]\)
    4. \(T \left[ \begin{array}{c} x \\ y \end{array} \right] = -\frac{1}{10} \left[ \begin{array}{c} 8x + 6y \\ 6x - 8y \end{array} \right]\)
    b. \(A = \frac{1}{\sqrt{2}} \left[ \begin{array}{rr} 1 & 1 \\ -1 & 1 \end{array} \right]\), rotation through \(\theta = -\frac{\pi}{4}\).
    d. \(A = \frac{1}{10} \left[ \begin{array}{rr} -8 & -6 \\ -6 & 8 \end{array} \right]\), reflection in the line \(y = -3x\).
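    Both identifications can be checked against the standard rotation matrix and the reflection formulas \(\cos 2\theta = \frac{1-m^2}{1+m^2}\), \(\sin 2\theta = \frac{2m}{1+m^2}\) for a line of slope \(m = \tan\theta\). A NumPy sketch (added here as a sanity check):

    ```python
    import numpy as np

    # Part (b): A should equal rotation through theta = -pi/4
    A_b = (1 / np.sqrt(2)) * np.array([[1.0, 1.0], [-1.0, 1.0]])
    t = -np.pi / 4
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    print(np.allclose(A_b, R))  # True

    # Part (d): A should equal reflection in the line y = -3x (slope m = -3),
    # using cos(2t) = (1 - m^2)/(1 + m^2) and sin(2t) = 2m/(1 + m^2)
    m = -3.0
    refl = np.array([[1 - m**2, 2 * m], [2 * m, -(1 - m**2)]]) / (1 + m**2)
    A_d = (1 / 10) * np.array([[-8.0, -6.0], [-6.0, 8.0]])
    print(np.allclose(A_d, refl))  # True
    ```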

    Express reflection in the line \(y = -x\) as the composition of a rotation followed by reflection in the line \(y = x\).

    Find the matrix of \(T : \mathbb{R}^3 \to \mathbb{R}^3\) in each case:

    1. \(T\) is rotation through \(\theta\) about the \(x\) axis (from the \(y\) axis to the \(z\) axis).
    2. \(T\) is rotation through \(\theta\) about the \(y\) axis (from the \(x\) axis to the \(z\) axis).
    b. \(\left[ \begin{array}{ccc} \cos \theta & 0 & -\sin \theta \\ 0 & 1 & 0 \\ \sin \theta & 0 & \cos \theta \end{array} \right]\)
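    For the part (b) matrix, one can verify numerically that \(\mathbf{e}_{1}\) is carried toward the \(z\) axis while the \(y\) axis is fixed, as rotation about the \(y\) axis requires (an added NumPy sketch):

    ```python
    import numpy as np

    theta = np.pi / 6  # any test angle
    A = np.array([[np.cos(theta), 0.0, -np.sin(theta)],
                  [0.0,           1.0,  0.0],
                  [np.sin(theta), 0.0,  np.cos(theta)]])
    # e1 rotates from the x axis toward the z axis; e2 is fixed
    print(np.allclose(A @ [1.0, 0.0, 0.0], [np.cos(theta), 0.0, np.sin(theta)]))  # True
    print(np.allclose(A @ [0.0, 1.0, 0.0], [0.0, 1.0, 0.0]))  # True
    ```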

    Let \(T_{\theta} : \mathbb{R}^2 \to \mathbb{R}^2\) denote reflection in the line making an angle \(\theta\) with the positive \(x\) axis.

    1. Show that the matrix of \(T_{\theta}\) is \(\left[ \begin{array}{rr} \cos 2\theta & \sin 2\theta \\ \sin 2\theta & -\cos 2\theta \end{array} \right]\) for all \(\theta\).
    2. Show that \(T_{\theta} \circ R_{2\phi} = T_{\theta - \phi}\) for all \(\theta\) and \(\phi\).
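    The identity in part 2 can be tested numerically before proving it: multiply the reflection matrix from part 1 by the rotation matrix and compare. A NumPy sketch (added; `refl` and `rot` are helper names, not from the text):

    ```python
    import numpy as np

    def refl(t):
        """Matrix of reflection in the line at angle t to the positive x axis."""
        return np.array([[np.cos(2 * t),  np.sin(2 * t)],
                         [np.sin(2 * t), -np.cos(2 * t)]])

    def rot(t):
        """Matrix of counterclockwise rotation through t."""
        return np.array([[np.cos(t), -np.sin(t)],
                         [np.sin(t),  np.cos(t)]])

    theta, phi = 0.7, 0.3  # arbitrary test angles
    # T_theta composed with R_{2*phi} equals T_{theta - phi}
    print(np.allclose(refl(theta) @ rot(2 * phi), refl(theta - phi)))  # True
    ```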

    In each case find a rotation or reflection that equals the given transformation.

    1. Reflection in the \(y\) axis followed by rotation through \(\frac{\pi}{2}\).
    2. Rotation through \(\pi\) followed by reflection in the \(x\) axis.
    3. Rotation through \(\frac{\pi}{2}\) followed by reflection in the line \(y = x\).
    4. Reflection in the \(x\) axis followed by rotation through \(\frac{\pi}{2}\).
    5. Reflection in the line \(y = x\) followed by reflection in the \(x\) axis.
    6. Reflection in the \(x\) axis followed by reflection in the line \(y = x\).
    b. Reflection in the \(y\) axis
    d. Reflection in \(y = x\)
    f. Rotation through \(\frac{\pi}{2}\)
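    Each composition is a product of the corresponding matrices, applied right to left. A NumPy sketch (added; the matrix names are illustrative) checks the three listed answers:

    ```python
    import numpy as np

    rot90   = np.array([[0.0, -1.0], [1.0, 0.0]])   # rotation through pi/2
    rot180  = -np.eye(2)                            # rotation through pi
    refl_x  = np.array([[1.0, 0.0], [0.0, -1.0]])   # reflection in the x axis
    refl_y  = np.array([[-1.0, 0.0], [0.0, 1.0]])   # reflection in the y axis
    refl_yx = np.array([[0.0, 1.0], [1.0, 0.0]])    # reflection in y = x

    # (b) rotation through pi, then reflection in the x axis = reflection in the y axis
    print(np.allclose(refl_x @ rot180, refl_y))     # True
    # (d) reflection in the x axis, then rotation through pi/2 = reflection in y = x
    print(np.allclose(rot90 @ refl_x, refl_yx))     # True
    # (f) reflection in the x axis, then reflection in y = x = rotation through pi/2
    print(np.allclose(refl_yx @ refl_x, rot90))     # True
    ```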

    Let \(R\) and \(S\) be matrix transformations \(\mathbb{R}^n \to \mathbb{R}^m\) induced by matrices \(A\) and \(B\) respectively. In each case, show that \(T\) is a matrix transformation and describe its matrix in terms of \(A\) and \(B\).

    1. \(T(\mathbf{x}) = R(\mathbf{x}) + S(\mathbf{x})\) for all \(\mathbf{x}\) in \(\mathbb{R}^n\).
    2. \(T(\mathbf{x}) = aR(\mathbf{x})\) for all \(\mathbf{x}\) in \(\mathbb{R}^n\) (where \(a\) is a fixed real number).
    b. \(T(\mathbf{x}) = aR(\mathbf{x}) = a(A\mathbf{x}) = (aA)\mathbf{x}\) for all \(\mathbf{x}\) in \(\mathbb{R}^n\). Hence \(T\) is induced by \(aA\).

    Show that the following hold for all linear transformations \(T : \mathbb{R}^n \to \mathbb{R}^m\):

    1. \(T(\mathbf{0}) = \mathbf{0}\).
    2. \(T(-\mathbf{x}) = -T(\mathbf{x})\) for all \(\mathbf{x}\) in \(\mathbb{R}^n\).

    b. If \(\mathbf{x}\) is in \(\mathbb{R}^n\), then \(T(-\mathbf{x}) = T\left[(-1)\mathbf{x}\right] = (-1)T(\mathbf{x}) = -T(\mathbf{x})\).

    The transformation \(T : \mathbb{R}^n \to \mathbb{R}^m\) defined by \(T(\mathbf{x}) = \mathbf{0}\) for all \(\mathbf{x}\) in \(\mathbb{R}^n\) is called the zero transformation.

    1. Show that the zero transformation is linear and find its matrix.
    2. Let \(\mathbf{e}_{1}, \mathbf{e}_{2}, \dots, \mathbf{e}_{n}\) denote the columns of the \(n \times n\) identity matrix. If \(T : \mathbb{R}^n \to \mathbb{R}^m\) is linear and \(T(\mathbf{e}_{i}) = \mathbf{0}\) for each \(i\), show that \(T\) is the zero transformation. [Hint: Theorem [thm:005709].]

    Write the elements of \(\mathbb{R}^n\) and \(\mathbb{R}^m\) as rows. If \(A\) is an \(m \times n\) matrix, define \(T : \mathbb{R}^m \to \mathbb{R}^n\) by \(T(\mathbf{y}) = \mathbf{y}A\) for all rows \(\mathbf{y}\) in \(\mathbb{R}^m\). Show that:

    1. \(T\) is a linear transformation.
    2. the rows of \(A\) are \(T(\mathbf{f}_{1}), T(\mathbf{f}_{2}), \dots, T(\mathbf{f}_{m})\) where \(\mathbf{f}_{i}\) denotes row \(i\) of \(I_{m}\). [Hint: Show that \(\mathbf{f}_{i} A\) is row \(i\) of \(A\).]

    Let \(S : \mathbb{R}^n \to \mathbb{R}^n\) and \(T : \mathbb{R}^n \to \mathbb{R}^n\) be linear transformations with matrices \(A\) and \(B\) respectively.

    1. Show that \(B^{2} = B\) if and only if \(T^{2} = T\) (where \(T^{2}\) means \(T \circ T\)).
    2. Show that \(B^{2} = I\) if and only if \(T^2 = 1_{\mathbb{R}^n}\).
    3. Show that \(AB = BA\) if and only if \(S \circ T = T \circ S\).
    b. If \(B^{2} = I\) then \(T^{2}(\mathbf{x}) = T[T(\mathbf{x})] = B(B\mathbf{x}) = B^{2}\mathbf{x} = I\mathbf{x} = \mathbf{x} = 1_{\mathbb{R}^n}(\mathbf{x})\) for all \(\mathbf{x}\) in \(\mathbb{R}^n\). Hence \(T^{2} = 1_{\mathbb{R}^n}\). Conversely, if \(T^{2} = 1_{\mathbb{R}^n}\), then \(B^{2}\mathbf{x} = T^{2}(\mathbf{x}) = 1_{\mathbb{R}^n}(\mathbf{x}) = \mathbf{x} = I\mathbf{x}\) for all \(\mathbf{x}\), so \(B^{2} = I\) by Theorem [thm:002985].

    Let \(Q_{0} : \mathbb{R}^2 \to \mathbb{R}^2\) be reflection in the \(x\) axis, let \(Q_{1} : \mathbb{R}^2 \to \mathbb{R}^2\) be reflection in the line \(y = x\), let \(Q_{-1} : \mathbb{R}^2 \to \mathbb{R}^2\) be reflection in the line \(y = -x\), and let \(R_{\frac{\pi}{2}} : \mathbb{R}^2 \to \mathbb{R}^2\) be counterclockwise rotation through \(\frac{\pi}{2}\).

    1. Show that \(Q_{1} \circ R_{\frac{\pi}{2}} = Q_{0}\).
    2. Show that \(Q_{1} \circ Q_{0} = R_{\frac{\pi}{2}}\).
    3. Show that \(R_{\frac{\pi}{2}} \circ Q_{0} = Q_{1}\).
    4. Show that \(Q_{0} \circ R_{\frac{\pi}{2}} = Q_{-1}\).
    b. The matrix of \(Q_{1} \circ Q_{0}\) is \(\left[ \begin{array}{rr} 0 & 1 \\ 1 & 0 \end{array} \right] \left[ \begin{array}{rr} 1 & 0 \\ 0 & -1 \end{array} \right] = \left[ \begin{array}{rr} 0 & -1 \\ 1 & 0 \end{array} \right]\), which is the matrix of \(R_{\frac{\pi}{2}}\).
    d. The matrix of \(Q_{0} \circ R_{\frac{\pi}{2}}\) is
      \(\left[ \begin{array}{rr} 1 & 0 \\ 0 & -1 \end{array} \right] \left[ \begin{array}{rr} 0 & -1 \\ 1 & 0 \end{array} \right] = \left[ \begin{array}{rr} 0 & -1 \\ -1 & 0 \end{array} \right]\), which is the matrix of \(Q_{-1}\).
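    All four identities reduce to \(2 \times 2\) matrix products, which a short NumPy sketch (added as a check) confirms:

    ```python
    import numpy as np

    Q0  = np.array([[1.0, 0.0], [0.0, -1.0]])   # reflection in the x axis
    Q1  = np.array([[0.0, 1.0], [1.0, 0.0]])    # reflection in y = x
    Qm1 = np.array([[0.0, -1.0], [-1.0, 0.0]])  # reflection in y = -x
    R   = np.array([[0.0, -1.0], [1.0, 0.0]])   # counterclockwise rotation pi/2

    print(np.allclose(Q1 @ R, Q0))    # (a) True
    print(np.allclose(Q1 @ Q0, R))    # (b) True
    print(np.allclose(R @ Q0, Q1))    # (c) True
    print(np.allclose(Q0 @ R, Qm1))   # (d) True
    ```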

    For any slope \(m\), show that:

    1. \(Q_{m} \circ P_{m} = P_{m}\)
    2. \(P_{m} \circ Q_{m} = P_{m}\)

    Define \(T : \mathbb{R}^n \to \mathbb{R}\) by \(T(x_{1}, x_{2}, \dots, x_{n}) = x_{1} + x_{2} + \cdots + x_{n}\). Show that \(T\) is a linear transformation and find its matrix.

    We have \(T(\mathbf{x}) = x_{1} + x_{2} + \cdots + x_{n} = \left[ \begin{array}{cccc} 1 & 1 & \cdots & 1 \end{array} \right] \left[ \begin{array}{c} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \\ \end{array} \right]\), so \(T\) is the matrix transformation induced by the matrix \(A = \left[ \begin{array}{cccc} 1 & 1 & \cdots & 1 \end{array} \right]\). In particular, \(T\) is linear. On the other hand, we can use Theorem [thm:005789] to get \(A\), but to do this we must first show directly that \(T\) is linear. Write \(\mathbf{x} = \left[ \begin{array}{c} x_{1} \\ x_{2} \\ \vdots \\ x_{n} \\ \end{array} \right]\) and \(\mathbf{y} = \left[ \begin{array}{c} y_{1} \\ y_{2} \\ \vdots \\ y_{n} \\ \end{array} \right]\). Then

    \[\begin{aligned} T(\mathbf{x} + \mathbf{y}) &= T \left[ \begin{array}{c} x_{1} + y_{1} \\ x_{2} + y_{2} \\ \vdots \\ x_{n} + y_{n} \\ \end{array} \right] \\ &= (x_{1} + y_{1}) + (x_{2} + y_{2}) + \cdots + (x_{n} + y_{n}) \\ &= (x_{1} + x_{2} + \cdots + x_{n}) + (y_{1} + y_{2} + \cdots + y_{n}) \\ &= T(\mathbf{x}) + T(\mathbf{y})\end{aligned} \nonumber \]

    Similarly, \(T(a\mathbf{x}) = aT(\mathbf{x})\) for any scalar \(a\), so \(T\) is linear. By Theorem [thm:005789], \(T\) has matrix \(A = \left[ \begin{array}{cccc} T(\mathbf{e}_{1}) & T(\mathbf{e}_{2}) & \cdots & T(\mathbf{e}_{n}) \end{array} \right] = \left[ \begin{array}{cccc} 1 & 1 & \cdots & 1 \end{array} \right]\), as before.

    Given \(c\) in \(\mathbb{R}\), define \(T_{c} : \mathbb{R}^n \to \mathbb{R}^n\) by \(T_{c}(\mathbf{x}) = c\mathbf{x}\) for all \(\mathbf{x}\) in \(\mathbb{R}^n\). Show that \(T_{c}\) is a linear transformation and find its matrix.

    Given vectors \(\mathbf{w}\) and \(\mathbf{x}\) in \(\mathbb{R}^n\), denote their dot product by \(\mathbf{w}\bullet \mathbf{x}\).

    1. Given \(\mathbf{w}\) in \(\mathbb{R}^n\), define \(T_{\mathbf{w}} : \mathbb{R}^n \to \mathbb{R}\) by \(T_{\mathbf{w}}(\mathbf{x}) = \mathbf{w} \bullet \mathbf{x}\) for all \(\mathbf{x}\) in \(\mathbb{R}^n\). Show that \(T_{\mathbf{w}}\) is a linear transformation.
    2. Show that every linear transformation \(T : \mathbb{R}^n \to \mathbb{R}\) is given as in (a); that is \(T = T_{\mathbf{w}}\) for some \(\mathbf{w}\) in \(\mathbb{R}^n\).
    b. If \(T : \mathbb{R}^n \to \mathbb{R}\) is linear, write \(T(\mathbf{e}_{j}) = w_{j}\) for each \(j = 1, 2, \dots, n\) where \(\{\mathbf{e}_{1}, \mathbf{e}_{2}, \dots, \mathbf{e}_{n}\}\) is the standard basis of \(\mathbb{R}^n\). Since \(\mathbf{x} = x_{1}\mathbf{e}_{1} + x_{2}\mathbf{e}_{2} + \cdots + x_{n}\mathbf{e}_{n}\), Theorem [thm:005709] gives

      \[\begin{aligned} T(\mathbf{x}) & = T(x_{1}\mathbf{e}_{1} + x_{2}\mathbf{e}_{2} + \cdots + x_{n}\mathbf{e}_{n}) \\ &= x_{1}T(\mathbf{e}_{1}) + x_{2}T(\mathbf{e}_{2}) + \cdots + x_{n}T(\mathbf{e}_{n}) \\ &= x_{1}w_{1} + x_{2}w_{2} + \cdots + x_{n}w_{n} \\ &= \mathbf{w}\bullet \mathbf{x} = T_{\mathbf{w}}(\mathbf{x})\end{aligned} \nonumber \]
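    The recovery of \(\mathbf{w}\) from \(T\) via \(w_{j} = T(\mathbf{e}_{j})\) can be illustrated numerically. In the NumPy sketch below, the particular \(T\) (induced by the row \([2, -1, 3]\)) is a hypothetical example, not from the text:

    ```python
    import numpy as np

    # A sample linear T : R^3 -> R, induced by the 1 x 3 matrix [2, -1, 3]
    w_true = np.array([2.0, -1.0, 3.0])
    def T(x):
        return float(w_true @ x)

    # Recover w componentwise from T via w_j = T(e_j)
    w = np.array([T(e) for e in np.eye(3)])

    # Check that T(x) = w . x on a sample vector
    x = np.array([1.0, 4.0, -2.0])
    print(np.isclose(T(x), float(w @ x)))  # True
    ```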

    If \(\mathbf{x} \neq \mathbf{0}\) and \(\mathbf{y}\) are vectors in \(\mathbb{R}^n\), show that there is a linear transformation \(T : \mathbb{R}^n \to \mathbb{R}^n\) such that \(T(\mathbf{x}) = \mathbf{y}\). [Hint: By Definition [def:002668], find a matrix \(A\) such that \(A\mathbf{x} = \mathbf{y}\).]

    1. Given \(\mathbf{x}\) in \(\mathbb{R}\) and \(a\) in \(\mathbb{R}\), we have
      \(\begin{array}{lllll} (S \circ T)(a\mathbf{x}) & = & S\left[T(a\mathbf{x})\right] & & \mbox{Definition of } S \circ T \\ & = & S\left[aT(\mathbf{x})\right] & & \mbox{Because } T \mbox{ is linear.} \\ & = & a\left[S\left[T(\mathbf{x})\right]\right] & & \mbox{Because } S \mbox{ is linear.} \\ & = & a\left[S \circ T(\mathbf{x})\right]& & \mbox{Definition of } S \circ T \\ \end{array}\)

    Let \(\mathbb{R}^n \xrightarrow{T} \mathbb{R}^m \xrightarrow{S} \mathbb{R}^k\) be two linear transformations. Show directly that \(S \circ T\) is linear. That is:

    1. Show that \((S \circ T)(\mathbf{x} + \mathbf{y}) = (S \circ T)\mathbf{x} + (S \circ T)\mathbf{y}\) for all \(\mathbf{x}\), \(\mathbf{y}\) in \(\mathbb{R}^n\).
    2. Show that \((S \circ T)(a\mathbf{x}) = a[(S \circ T)\mathbf{x}]\) for all \(\mathbf{x}\) in \(\mathbb{R}^n\) and all \(a\) in \(\mathbb{R}\).

    Let \(\mathbb{R}^n \xrightarrow{T} \mathbb{R}^m \xrightarrow{S} \mathbb{R}^k \xrightarrow{R} \mathbb{R}^k\) be linear. Show that \(R \circ (S \circ T) = (R \circ S) \circ T\) by showing directly that \([R \circ (S \circ T)](\mathbf{x}) = [(R \circ S) \circ T](\mathbf{x})\) holds for each vector \(\mathbf{x}\) in \(\mathbb{R}^n\).


    2.6E: Linear Transformations Exercises is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
