
4.4: Linear Operators on R³


    Recall that a transformation \(T : \mathbb{R}^n \to \mathbb{R}^m\) is called linear if \(T(\mathbf{x} + \mathbf{y}) = T(\mathbf{x}) + T(\mathbf{y})\) and \(T(a\mathbf{x}) = aT(\mathbf{x})\) holds for all \(\mathbf{x}\) and \(\mathbf{y}\) in \(\mathbb{R}^n\) and all scalars \(a\). In this case we showed (in Theorem [thm:005789]) that there exists an \(m \times n\) matrix \(A\) such that \(T(\mathbf{x}) = A\mathbf{x}\) for all \(\mathbf{x}\) in \(\mathbb{R}^n\), and we say that \(T\) is the matrix transformation induced by \(A\).
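As a quick numerical illustration (a minimal NumPy sketch, not part of the original text), one can check both linearity conditions for a matrix transformation \(T(\mathbf{x}) = A\mathbf{x}\):

```python
import numpy as np

# Minimal sketch: numerically check that T(x) = A x satisfies the
# two linearity conditions for an arbitrary 3x3 matrix A.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))          # any 3x3 matrix induces a linear operator on R^3
x, y = rng.normal(size=3), rng.normal(size=3)
a = 2.5                              # an arbitrary scalar

assert np.allclose(A @ (x + y), A @ x + A @ y)   # T(x + y) = T(x) + T(y)
assert np.allclose(A @ (a * x), a * (A @ x))     # T(a x) = a T(x)
```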

Definition 012946 (Linear Operator on \(\mathbb{R}^n\)). A linear transformation

    \[T: \mathbb{R}^n \to \mathbb{R}^n \nonumber \]

    is called a linear operator on \(\mathbb{R}^n\).

    In Section [sec:2_6] we investigated three important linear operators on \(\mathbb{R}^2\): rotations about the origin, reflections in a line through the origin, and projections on this line.

In this section we investigate the analogous operators on \(\mathbb{R}^3\): rotations about a line through the origin, reflections in a plane through the origin, and projections onto a plane or line through the origin in \(\mathbb{R}^3\). In every case we show that the operator is linear, and we find the matrices of all the reflections and projections.

    To do this we must prove that these reflections, projections, and rotations are actually linear operators on \(\mathbb{R}^3\). In the case of reflections and rotations, it is convenient to examine a more general situation. A transformation \(T : \mathbb{R}^3 \to \mathbb{R}^3\) is said to be distance preserving if the distance between \(T(\mathbf{v})\) and \(T(\mathbf{w})\) is the same as the distance between \(\mathbf{v}\) and \(\mathbf{w}\) for all \(\mathbf{v}\) and \(\mathbf{w}\) in \(\mathbb{R}^3\); that is,

    \[\label{eq:distancePresEq} \| T(\mathbf{v}) - T(\mathbf{w}) \| = \| \mathbf{v} - \mathbf{w} \| \mbox{ for all } \mathbf{v} \mbox{ and } \mathbf{w} \mbox{ in } \mathbb{R}^3 \]

    Clearly reflections and rotations are distance preserving, and both carry \(\mathbf{0}\) to \(\mathbf{0}\), so the following theorem shows that they are both linear.

Theorem 012963. If \(T : \mathbb{R}^3 \to \mathbb{R}^3\) is distance preserving, and if \(T(\mathbf{0}) = \mathbf{0}\), then \(T\) is linear.

Since \(T(\mathbf{0}) = \mathbf{0}\), taking \(\mathbf{w} = \mathbf{0}\) in ([eq:distancePresEq]) shows that \(\| T(\mathbf{v})\| = \|\mathbf{v}\|\) for all \(\mathbf{v}\) in \(\mathbb{R}^3\); that is, \(T\) preserves length. Also, \(\| T(\mathbf{v}) - T(\mathbf{w})\|^{2} = \|\mathbf{v} - \mathbf{w}\|^{2}\) by ([eq:distancePresEq]). Since \(\|\mathbf{v} - \mathbf{w}\|^{2} = \|\mathbf{v}\|^{2} - 2\mathbf{v}\bullet \mathbf{w} + \|\mathbf{w}\|^{2}\) always holds, it follows that \(T(\mathbf{v})\bullet T(\mathbf{w}) = \mathbf{v}\bullet \mathbf{w}\) for all \(\mathbf{v}\) and \(\mathbf{w}\). Hence (by Theorem [thm:011851]) the angle between \(T(\mathbf{v})\) and \(T(\mathbf{w})\) is the same as the angle between \(\mathbf{v}\) and \(\mathbf{w}\) for all (nonzero) vectors \(\mathbf{v}\) and \(\mathbf{w}\) in \(\mathbb{R}^3\).

    With this we can show that \(T\) is linear. Given nonzero vectors \(\mathbf{v}\) and \(\mathbf{w}\) in \(\mathbb{R}^3\), the vector \(\mathbf{v} + \mathbf{w}\) is the diagonal of the parallelogram determined by \(\mathbf{v}\) and \(\mathbf{w}\). By the preceding paragraph, the effect of \(T\) is to carry this entire parallelogram to the parallelogram determined by \(T(\mathbf{v})\) and \(T(\mathbf{w})\), with diagonal \(T(\mathbf{v} + \mathbf{w})\). But this diagonal is \(T(\mathbf{v}) + T(\mathbf{w})\) by the parallelogram law (see Figure [fig:012980]).

    In other words, \(T(\mathbf{v} + \mathbf{w}) = T(\mathbf{v}) + T(\mathbf{w})\). A similar argument shows that \(T(a\mathbf{v}) = aT(\mathbf{v})\) for all scalars \(a\), proving that \(T\) is indeed linear.

    Distance-preserving linear operators are called isometries, and we return to them in Section [sec:10_4].
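The two facts used in the proof can be checked numerically. The following is a minimal NumPy sketch (not part of the text), using the standard fact that an orthogonal matrix, such as a rotation, gives a distance-preserving operator fixing \(\mathbf{0}\):

```python
import numpy as np

# Sketch: a rotation about the z axis preserves distances, lengths,
# and dot products (hence angles), as used in the proof above.
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

rng = np.random.default_rng(1)
v, w = rng.normal(size=3), rng.normal(size=3)

assert np.isclose(np.linalg.norm(Q @ v - Q @ w), np.linalg.norm(v - w))  # distance preserving
assert np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v))              # lengths preserved
assert np.isclose((Q @ v) @ (Q @ w), v @ w)                              # dot products preserved
```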

    Reflections and Projections

    In Section [sec:2_6] we studied the reflection \(Q_{m} : \mathbb{R}^2 \to \mathbb{R}^2\) in the line \(y = mx\) and projection \(P_{m} : \mathbb{R}^2 \to \mathbb{R}^2\) on the same line. We found (in Theorems [thm:006096] and [thm:006137]) that they are both linear and

    \[Q_{m} \mbox{ has matrix } \frac{1}{1 + m^2}\left[ \begin{array}{cc} 1 - m^2 & 2m \\ 2m & m^2 -1 \end{array} \right] \quad \mbox{ and } \quad P_{m} \mbox{ has matrix }\frac{1}{1 + m^2}\left[ \begin{array}{cr} 1 & m \\ m & m^2 \end{array} \right]. \nonumber \]
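As a quick sanity check (a NumPy sketch with a sample slope, not part of the text), these matrices satisfy the standard facts that projecting twice changes nothing, reflecting twice returns every vector, and both operators fix the line \(y = mx\):

```python
import numpy as np

# Sketch: verify the 2x2 matrices of P_m and Q_m for a sample slope m.
m = 1.5
P = np.array([[1.0, m], [m, m**2]]) / (1 + m**2)
Q = np.array([[1 - m**2, 2*m], [2*m, m**2 - 1]]) / (1 + m**2)
d = np.array([1.0, m])                      # direction vector of the line y = mx

assert np.allclose(P @ P, P)                # projecting twice changes nothing
assert np.allclose(Q @ Q, np.eye(2))        # reflecting twice is the identity
assert np.allclose(P @ d, d) and np.allclose(Q @ d, d)  # the line itself is fixed
```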

    We now look at the analogues in \(\mathbb{R}^3\).

    Let \(L\) denote a line through the origin in \(\mathbb{R}^3\). Given a vector \(\mathbf{v}\) in \(\mathbb{R}^3\), the reflection \(Q_{L}(\mathbf{v})\) of \(\mathbf{v}\) in \(L\) and the projection \(P_{L}(\mathbf{v})\) of \(\mathbf{v}\) on \(L\) are defined in Figure [fig:013008]. In the same figure, we see that

    \[\label{eq:refProjEq} P_{L}(\mathbf{v}) = \mathbf{v} + \frac{1}{2}[Q_{L}(\mathbf{v}) - \mathbf{v}] = \frac{1}{2}[Q_{L}(\mathbf{v}) + \mathbf{v}] \]

so the fact that \(Q_{L}\) is linear (by Theorem [thm:012963]) shows that \(P_{L}\) is also linear.\(^{1}\)

    However, Theorem [thm:011958] gives us the matrix of \(P_{L}\) directly. In fact, if \(\mathbf{d} = \left[ \begin{array}{c} a \\ b \\ c \end{array} \right] \neq \mathbf{0}\) is a direction vector for \(L\), and we write \(\mathbf{v} = \left[ \begin{array}{c} x \\ y \\ z \end{array} \right]\), then

    \[P_{L}(\mathbf{v}) = \frac{\mathbf{v}\bullet \mathbf{d}}{\| \mathbf{d} \|^2}\mathbf{d} = \frac{ax + by + cz}{a^2 + b^2 + c^2} \left[ \begin{array}{c} a \\ b \\ c \end{array} \right] = \frac{1}{a^2 + b^2 + c^2} \left[ \begin{array}{ccc} a^2 & ab & ac \\ ab & b^2 & bc \\ ac & bc & c^2 \end{array} \right] \left[ \begin{array}{c} x \\ y \\ z \end{array} \right] \nonumber \]

    as the reader can verify. Note that this shows directly that \(P_{L}\) is a matrix transformation and so gives another proof that it is linear.
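Before recording the theorem, here is a minimal numerical check (a NumPy sketch with hypothetical numbers, not part of the text); note that the displayed matrix equals \(\mathbf{d}\mathbf{d}^{T}/\|\mathbf{d}\|^{2}\):

```python
import numpy as np

# Sketch: build the projection matrix d d^T / ||d||^2 and compare with
# the projection formula P_L(v) = (v.d / ||d||^2) d.
d = np.array([1.0, 2.0, 2.0])              # hypothetical direction vector (a, b, c) of L
P_L = np.outer(d, d) / (d @ d)             # (1/(a^2+b^2+c^2)) [[a^2, ab, ac], ...]

v = np.array([3.0, -1.0, 4.0])
assert np.allclose(P_L @ v, (v @ d) / (d @ d) * d)   # matches (v.d/||d||^2) d
assert np.allclose(P_L @ P_L, P_L)                   # a projection is idempotent
```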

Theorem 013009. Let \(L\) denote the line through the origin in \(\mathbb{R}^3\) with direction vector \(\mathbf{d} = \left[ \begin{array}{c} a \\ b \\ c \end{array} \right] \neq \mathbf{0}\). Then \(P_{L}\) and \(Q_{L}\) are both linear and

    \[P_{L} \mbox{ has matrix }\frac{1}{a^2 + b^2 + c^2} \left[ \begin{array}{ccc} a^2 & ab & ac \\ ab & b^2 & bc \\ ac & bc & c^2 \end{array} \right] \nonumber \]

    \[Q_{L} \mbox{ has matrix }\frac{1}{a^2 + b^2 + c^2} \left[ \begin{array}{ccc} a^2 - b^2 - c^2 & 2ab & 2ac \\ 2ab & b^2 - a^2 - c^2 & 2bc \\ 2ac & 2bc & c^2 - a^2 - b^2 \end{array} \right] \nonumber \]

    It remains to find the matrix of \(Q_{L}\). But ([eq:refProjEq]) implies that \(Q_{L}(\mathbf{v}) = 2P_{L}(\mathbf{v}) - \mathbf{v}\) for each \(\mathbf{v}\) in \(\mathbb{R}^3\), so if \(\mathbf{v} = \left[ \begin{array}{c} x \\ y \\ z \end{array} \right]\) we obtain (with some matrix arithmetic):

    \[\begin{aligned} Q_{L}(\mathbf{v}) &= \left\lbrace \frac{2}{a^2 + b^2 + c^2} \left[ \begin{array}{ccc} a^2 & ab & ac \\ ab & b^2 & bc \\ ac & bc & c^2 \end{array} \right] - \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right] \right\rbrace \left[ \begin{array}{c} x \\ y \\ z \end{array} \right] \\ & = \frac{1}{a^2 + b^2 + c^2} \left[ \begin{array}{ccc} a^2 - b^2 - c^2 & 2ab & 2ac \\ 2ab & b^2 - a^2 - c^2 & 2bc \\ 2ac & 2bc & c^2 - a^2 - b^2 \end{array} \right] \left[ \begin{array}{c} x \\ y \\ z \end{array} \right]\end{aligned} \nonumber \]

    as required.
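The identity \(Q_{L}(\mathbf{v}) = 2P_{L}(\mathbf{v}) - \mathbf{v}\) used above also gives a quick numerical check (a NumPy sketch with a hypothetical direction vector, not part of the text):

```python
import numpy as np

# Sketch: the reflection matrix is 2*P_L - I; it squares to the identity
# and leaves every vector on L unchanged.
d = np.array([1.0, 2.0, 2.0])                       # hypothetical direction vector of L
P_L = np.outer(d, d) / (d @ d)
Q_L = 2 * P_L - np.eye(3)

assert np.allclose(Q_L @ Q_L, np.eye(3))            # reflecting twice gives the identity
assert np.allclose(Q_L @ d, d)                      # vectors on L are unchanged
assert np.isclose(np.abs(np.linalg.det(Q_L)), 1.0)  # |det| = 1, as for any reflection
```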

    In \(\mathbb{R}^3\) we can reflect in planes as well as lines. Let \(M\) denote a plane through the origin in \(\mathbb{R}^3\). Given a vector \(\mathbf{v}\) in \(\mathbb{R}^3\), the reflection \(Q_{M}(\mathbf{v})\) of \(\mathbf{v}\) in \(M\) and the projection \(P_{M}(\mathbf{v})\) of \(\mathbf{v}\) on \(M\) are defined in Figure [fig:013036]. As above, we have

    \[P_{M}(\mathbf{v}) = \mathbf{v} + \frac{1}{2}[Q_{M}(\mathbf{v}) - \mathbf{v}] = \frac{1}{2}[Q_{M}(\mathbf{v}) + \mathbf{v}] \nonumber \]

    so the fact that \(Q_{M}\) is linear (again by Theorem [thm:012963]) shows that \(P_{M}\) is also linear.

    Again we can obtain the matrix directly. If \(\mathbf{n}\) is a normal for the plane \(M\), then Figure [fig:013036] shows that

\[P_{M}(\mathbf{v}) = \mathbf{v} - \operatorname{proj}_{\mathbf{n}}\mathbf{v} = \mathbf{v} - \frac{\mathbf{v}\bullet \mathbf{n}}{\| \mathbf{n} \|^2}\mathbf{n} \mbox{ for all vectors }\mathbf{v}. \nonumber \]

    If \(\mathbf{n} = \left[ \begin{array}{c} a \\ b \\ c \end{array} \right] \neq \mathbf{0}\) and \(\mathbf{v} = \left[ \begin{array}{c} x \\ y \\ z \end{array} \right]\), a computation like the above gives

\[\begin{aligned} P_{M}(\mathbf{v}) &= \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right] \left[ \begin{array}{c} x \\ y \\ z \end{array} \right] - \frac{ax + by + cz}{a^2 + b^2 + c^2} \left[ \begin{array}{c} a \\ b \\ c \end{array} \right] \\ &= \frac{1}{a^2 + b^2 + c^2} \left[ \begin{array}{ccc} b^2 + c^2 & -ab & -ac \\ -ab & a^2 + c^2 & -bc \\ -ac & -bc & a^2 + b^2 \end{array} \right] \left[ \begin{array}{c} x \\ y \\ z \end{array} \right]\end{aligned} \nonumber \]

This proves the first part of the following theorem.

Theorem 013042. Let \(M\) denote the plane through the origin in \(\mathbb{R}^3\) with normal \(\mathbf{n} = \left[ \begin{array}{c} a \\ b \\ c \end{array} \right] \neq \mathbf{0}\). Then \(P_{M}\) and \(Q_{M}\) are both linear and

    \[P_{M} \mbox{ has matrix }\frac{1}{a^2 + b^2 + c^2} \left[ \begin{array}{ccc} b^2 + c^2 & -ab & -ac \\ -ab & a^2 + c^2 & -bc \\ -ac & -bc & a^2 + b^2 \end{array} \right] \nonumber \]

    \[Q_{M} \mbox{ has matrix }\frac{1}{a^2 + b^2 + c^2} \left[ \begin{array}{ccc} b^2 + c^2 - a^2 & -2ab & -2ac \\ -2ab & a^2 + c^2 - b^2 & -2bc \\ -2ac & -2bc & a^2 + b^2 - c^2 \end{array} \right] \nonumber \]

    It remains to compute the matrix of \(Q_{M}\). Since \(Q_{M}(\mathbf{v}) = 2P_{M}(\mathbf{v}) - \mathbf{v}\) for each \(\mathbf{v}\) in \(\mathbb{R}^3\), the computation is similar to the above and is left as an exercise for the reader.
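Both matrices can be checked numerically. The following NumPy sketch (with a hypothetical normal, not part of the text) verifies the characteristic properties: the projection lands in \(M\), the reflection flips the normal, and reflecting twice is the identity.

```python
import numpy as np

# Sketch: the matrices of Theorem 013042 are I - n n^T/||n||^2 (projection)
# and I - 2 n n^T/||n||^2 (reflection), for a hypothetical normal n.
n = np.array([1.0, -2.0, 2.0])                # normal (a, b, c) of the plane M
P_M = np.eye(3) - np.outer(n, n) / (n @ n)
Q_M = 2 * P_M - np.eye(3)

v = np.array([0.5, 3.0, -1.0])
assert np.isclose((P_M @ v) @ n, 0.0)         # the projection lies in the plane M
assert np.allclose(Q_M @ n, -n)               # the normal is flipped by the reflection
assert np.allclose(Q_M @ Q_M, np.eye(3))      # reflecting twice is the identity
```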

    Rotations

    In Section [sec:2_6] we studied the rotation \(R_{\theta} : \mathbb{R}^2 \to \mathbb{R}^2\) counterclockwise about the origin through the angle \(\theta\). Moreover, we showed in Theorem [thm:006021] that \(R_{\theta}\) is linear and has matrix \(\left[ \begin{array}{cc} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{array} \right]\). One extension of this is given in the following example.

Example 013065. Let \(R_{z,\theta} : \mathbb{R}^3 \to \mathbb{R}^3\) denote rotation of \(\mathbb{R}^3\) about the \(z\) axis through an angle \(\theta\) from the positive \(x\) axis toward the positive \(y\) axis. Show that \(R_{z,\theta}\) is linear and find its matrix.

First, \(R_{z,\theta}\) is distance preserving and carries \(\mathbf{0}\) to \(\mathbf{0}\), so it is linear by Theorem [thm:012963]. Hence we apply Theorem [thm:005789] to obtain the matrix of \(R_{z,\theta}\).

    Let \(\mathbf{i} = \left[ \begin{array}{c} 1 \\ 0 \\ 0 \end{array} \right]\), \(\mathbf{j} = \left[ \begin{array}{c} 0 \\ 1 \\ 0 \end{array} \right]\), and \(\mathbf{k} = \left[ \begin{array}{c} 0 \\ 0 \\ 1 \end{array} \right]\) denote the standard basis of \(\mathbb{R}^3\); we must find \(R_{z,\theta}(\mathbf{i})\), \(R_{z,\theta}(\mathbf{j})\), and \(R_{z,\theta}(\mathbf{k})\). Clearly \(R_{z,\theta}(\mathbf{k}) = \mathbf{k}\). The effect of \(R_{z,\theta}\) on the \(x\)-\(y\) plane is to rotate it counterclockwise through the angle \(\theta\). Hence Figure [fig:013089] gives

    \[R_{z,\theta}(\mathbf{i}) = \left[ \begin{array}{c} \cos\theta \\ \sin\theta \\ 0 \end{array} \right],\ R_{z,\theta}(\mathbf{j}) = \left[ \begin{array}{c} -\sin\theta \\ \cos\theta \\ 0 \end{array} \right] \nonumber \]

    so, by Theorem [thm:005789], \(R_{z,\theta}\) has matrix

\[\left[ \begin{array}{ccc} R_{z,\theta}(\mathbf{i}) & R_{z,\theta}(\mathbf{j}) & R_{z,\theta}(\mathbf{k}) \end{array} \right] = \left[ \begin{array}{ccc} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0\\ 0 & 0 & 1 \end{array} \right] \nonumber \]
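As a sanity check (a NumPy sketch for a sample angle, not part of the text), this matrix is orthogonal with determinant \(1\), fixes \(\mathbf{k}\), and sends \(\mathbf{i}\) to \((\cos\theta, \sin\theta, 0)\):

```python
import numpy as np

# Sketch: the matrix of Example 013065 for a sample angle t.
t = 0.6
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

assert np.allclose(R.T @ R, np.eye(3))                         # orthogonal: distance preserving
assert np.isclose(np.linalg.det(R), 1.0)                       # rotations have determinant 1
assert np.allclose(R @ np.array([0.0, 0.0, 1.0]), [0, 0, 1])   # R fixes k
assert np.allclose(R @ np.array([1.0, 0.0, 0.0]), [np.cos(t), np.sin(t), 0.0])
```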

    Example [exa:013065] begs to be generalized. Given a line \(L\) through the origin in \(\mathbb{R}^3\), every rotation about \(L\) through a fixed angle is clearly distance preserving, and so is a linear operator by Theorem [thm:012963]. However, giving a precise description of the matrix of this rotation is not easy and will have to wait until more techniques are available.

    Transformations of Areas and Volumes

    Let \(\mathbf{v}\) be a nonzero vector in \(\mathbb{R}^3\). Each vector in the same direction as \(\mathbf{v}\) whose length is a fraction \(s\) of the length of \(\mathbf{v}\) has the form \(s\mathbf{v}\) (see Figure [fig:013099]).

    With this, scrutiny of Figure [fig:013100] shows that a vector \(\mathbf{u}\) is in the parallelogram determined by \(\mathbf{v}\) and \(\mathbf{w}\) if and only if it has the form \(\mathbf{u} = s\mathbf{v} + t\mathbf{w}\) where \(0 \leq s \leq 1\) and \(0 \leq t \leq 1\). But then, if \(T : \mathbb{R}^3 \to \mathbb{R}^3\) is a linear transformation, we have

    \[T(s\mathbf{v} + t\mathbf{w}) = T(s\mathbf{v}) + T(t\mathbf{w}) = sT(\mathbf{v}) + tT(\mathbf{w}) \nonumber \]

    Hence \(T(s\mathbf{v} + t\mathbf{w})\) is in the parallelogram determined by \(T(\mathbf{v})\) and \(T(\mathbf{w})\). Conversely, every vector in this parallelogram has the form \(T(s\mathbf{v} + t\mathbf{w})\) where \(s\mathbf{v} + t\mathbf{w}\) is in the parallelogram determined by \(\mathbf{v}\) and \(\mathbf{w}\). For this reason, the parallelogram determined by \(T(\mathbf{v})\) and \(T(\mathbf{w})\) is called the image of the parallelogram determined by \(\mathbf{v}\) and \(\mathbf{w}\). We record this discussion as:

Theorem 013102. If \(T : \mathbb{R}^3 \to \mathbb{R}^3\) (or \(\mathbb{R}^2 \to \mathbb{R}^2\)) is a linear operator, the image of the parallelogram determined by vectors \(\mathbf{v}\) and \(\mathbf{w}\) is the parallelogram determined by \(T(\mathbf{v})\) and \(T(\mathbf{w})\).

    This result is illustrated in Figure [fig:013110], and was used in Examples [exa:003088] and [exa:003128] to reveal the effect of expansion and shear transformations.
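The theorem is easy to illustrate numerically; the sketch below (a hypothetical shear matrix, not from the text) maps the four corners \(\mathbf{0}\), \(\mathbf{v}\), \(\mathbf{w}\), \(\mathbf{v} + \mathbf{w}\) of a parallelogram and confirms that the image of the diagonal is the diagonal of the image.

```python
import numpy as np

# Sketch: the corners 0, v, w, v+w of a parallelogram are carried to the
# corners 0, Av, Aw, Av+Aw of the image parallelogram.
A = np.array([[1.0, 1.0], [0.0, 1.0]])        # a shear of R^2 (hypothetical example)
v, w = np.array([2.0, 0.0]), np.array([0.0, 1.0])

corners = [0 * v, v, w, v + w]
images = [A @ c for c in corners]
assert np.allclose(images[3], A @ v + A @ w)  # image of the diagonal is the new diagonal
```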

    We now describe the effect of a linear transformation \(T: \mathbb{R}^3 \to \mathbb{R}^3\) on the parallelepiped determined by three vectors \(\mathbf{u}\), \(\mathbf{v}\), and \(\mathbf{w}\) in \(\mathbb{R}^3\) (see the discussion preceding Theorem [thm:012765]). If \(T\) has matrix \(A\), Theorem [thm:013102] shows that this parallelepiped is carried to the parallelepiped determined by \(T(\mathbf{u}) = A\mathbf{u}\), \(T(\mathbf{v}) = A\mathbf{v}\), and \(T(\mathbf{w}) = A\mathbf{w}\). In particular, we want to discover how the volume changes, and it turns out to be closely related to the determinant of the matrix \(A\).

Theorem 013115. Let \(\operatorname{vol}(\mathbf{u}, \mathbf{v}, \mathbf{w})\) denote the volume of the parallelepiped determined by three vectors \(\mathbf{u}\), \(\mathbf{v}\), and \(\mathbf{w}\) in \(\mathbb{R}^3\), and let \(\operatorname{area}(\mathbf{p}, \mathbf{q})\) denote the area of the parallelogram determined by two vectors \(\mathbf{p}\) and \(\mathbf{q}\) in \(\mathbb{R}^2\). Then:

1. If \(A\) is a \(3 \times 3\) matrix, then \(\operatorname{vol}(A\mathbf{u}, A\mathbf{v}, A\mathbf{w}) = |\det (A)| \cdot \operatorname{vol}(\mathbf{u}, \mathbf{v}, \mathbf{w})\).
2. If \(A\) is a \(2 \times 2\) matrix, then \(\operatorname{area}(A\mathbf{p}, A\mathbf{q}) = |\det (A)| \cdot \operatorname{area}(\mathbf{p}, \mathbf{q})\).
    1. Let \(\left[ \begin{array}{ccc} \mathbf{u} & \mathbf{v} & \mathbf{w} \end{array}\right]\) denote the \(3 \times 3\) matrix with columns \(\mathbf{u}\), \(\mathbf{v}\), and \(\mathbf{w}\). Then

\[\operatorname{vol}(A\mathbf{u}, A\mathbf{v}, A\mathbf{w}) = |A\mathbf{u}\bullet (A\mathbf{v} \times A\mathbf{w})| \nonumber \]

      \[\begin{aligned} A\mathbf{u}\bullet (A\mathbf{v} \times A\mathbf{w}) = \det \left[ \begin{array}{ccc} A\mathbf{u} & A\mathbf{v} & A\mathbf{w}\end{array}\right] &= \det (A\left[ \begin{array}{ccc} \mathbf{u} & \mathbf{v} & \mathbf{w} \end{array}\right]) \\ &= \det (A)\det \left[ \begin{array}{ccc} \mathbf{u} & \mathbf{v} & \mathbf{w} \end{array}\right] \\ &= \det (A)(\mathbf{u}\bullet (\mathbf{v} \times \mathbf{w}))\end{aligned} \nonumber \]

      where we used Definition [def:003447] and the product theorem for determinants. Finally (1) follows from Theorem [thm:012765] by taking absolute values.

2. Given \(\mathbf{p} = \left[ \begin{array}{c} x\\ y \end{array} \right]\) in \(\mathbb{R}^2\), write \(\mathbf{p}_{1} = \left[ \begin{array}{c} x \\ y \\ 0 \end{array} \right]\) in \(\mathbb{R}^3\). By the diagram, \(\operatorname{area}(\mathbf{p}, \mathbf{q}) = \operatorname{vol}(\mathbf{p}_{1}, \mathbf{q}_{1}, \mathbf{k})\) where \(\mathbf{k}\) is the (length \(1\)) coordinate vector along the \(z\) axis. If \(A\) is a \(2 \times 2\) matrix, write \(A_{1} = \left[ \begin{array}{cc} A & 0 \\ 0 & 1 \end{array} \right]\) in block form, and observe that \((A\mathbf{v})_{1} = A_{1}\mathbf{v}_{1}\) for all \(\mathbf{v}\) in \(\mathbb{R}^2\) and \(A_{1}\mathbf{k} = \mathbf{k}\). Hence part (1) of this theorem shows

\[\begin{aligned} \operatorname{area}(A\mathbf{p}, A\mathbf{q}) &= \operatorname{vol}(A_{1}\mathbf{p}_{1}, A_{1}\mathbf{q}_{1}, A_{1}\mathbf{k}) \\ &= |\det (A_{1})|\operatorname{vol}(\mathbf{p}_{1}, \mathbf{q}_{1}, \mathbf{k})\\ &= |\det (A)|\operatorname{area}(\mathbf{p}, \mathbf{q})\end{aligned} \nonumber \]
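Part (1) is easy to confirm numerically. The following NumPy sketch (random vectors, not part of the original proof) uses \(\operatorname{vol}(\mathbf{u}, \mathbf{v}, \mathbf{w}) = |\mathbf{u}\bullet(\mathbf{v} \times \mathbf{w})|\):

```python
import numpy as np

# Sketch: applying a 3x3 matrix A multiplies the parallelepiped volume by |det A|.
rng = np.random.default_rng(2)
u, v, w = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
A = rng.normal(size=(3, 3))

vol = lambda x, y, z: abs(x @ np.cross(y, z))   # |x . (y x z)|
assert np.isclose(vol(A @ u, A @ v, A @ w), abs(np.linalg.det(A)) * vol(u, v, w))
```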

    Define the unit square and unit cube to be the square and cube corresponding to the coordinate vectors in \(\mathbb{R}^2\) and \(\mathbb{R}^3\), respectively. Then Theorem [thm:013115] gives a geometrical meaning to the determinant of a matrix \(A\):

• If \(A\) is a \(2 \times 2\) matrix, then \(|\det (A)|\) is the area of the image of the unit square under multiplication by \(A\) (see the sketch following this list);
• If \(A\) is a \(3 \times 3\) matrix, then \(|\det (A)|\) is the volume of the image of the unit cube under multiplication by \(A\).
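For example (a sketch with a hypothetical \(2 \times 2\) matrix, not from the text), the area of the image of the unit square can be computed independently as the length of a cross product and compared with \(|\det (A)|\):

```python
import numpy as np

# Sketch: the image of the unit square under A is the parallelogram on the
# columns of A; its area equals |det A|.
A = np.array([[2.0, 1.0], [0.5, 3.0]])
a1, a2 = np.append(A[:, 0], 0.0), np.append(A[:, 1], 0.0)   # embed the columns in R^3
area = np.linalg.norm(np.cross(a1, a2))                     # area of the image parallelogram
assert np.isclose(area, abs(np.linalg.det(A)))
```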

    These results, together with the importance of areas and volumes in geometry, were among the reasons for the initial development of determinants.


1. Note that Theorem [thm:012963] does not apply to \(P_{L}\) since \(P_{L}\) does not preserve distance.

    This page titled 4.4: Linear Operators on R³ is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by W. Keith Nicholson (Lyryx Learning Inc.) via source content that was edited to the style and standards of the LibreTexts platform.