
4.3: More on the Cross Product


    The cross product \(\mathbf{v} \times \mathbf{w}\) of two \(\mathbb{R}^3\)-vectors \(\mathbf{v} = \left[ \begin{array}{r} x_{1}\\ y_{1}\\ z_{1} \end{array} \right]\) and \(\mathbf{w} = \left[ \begin{array}{r} x_{2}\\ y_{2}\\ z_{2} \end{array} \right]\) was defined in Section [sec:4_2], where we observed that it is best remembered using a determinant:

    \[\label{eq:crossPdeterminant} \mathbf{v} \times \mathbf{w} = \det \left[ \begin{array}{rrr} \mathbf{i} & x_{1} & x_{2}\\ \mathbf{j} & y_{1} & y_{2}\\ \mathbf{k} & z_{1} & z_{2} \end{array} \right] = \left| \begin{array}{rr} y_{1} & y_{2}\\ z_{1} & z_{2} \end{array} \right|\mathbf{i} - \left| \begin{array}{rr} x_{1} & x_{2}\\ z_{1} & z_{2} \end{array} \right|\mathbf{j} + \left| \begin{array}{rr} x_{1} & x_{2}\\ y_{1} & y_{2} \end{array} \right|\mathbf{k} \]

    Here \(\mathbf{i} = \left[ \begin{array}{r} 1\\ 0\\ 0 \end{array} \right]\), \(\mathbf{j} = \left[ \begin{array}{r} 0\\ 1\\ 0 \end{array} \right]\), and \(\mathbf{k} = \left[ \begin{array}{r} 0\\ 0\\ 1 \end{array} \right]\) are the coordinate vectors, and the determinant is expanded along the first column. We observed (but did not prove) in Theorem [thm:012164] that \(\mathbf{v} \times \mathbf{w}\) is orthogonal to both \(\mathbf{v}\) and \(\mathbf{w}\). This follows easily from the next result.
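
    Before stating that result, here is a minimal computational sketch of the determinant expansion above (not part of the text), written in Python and assuming NumPy is available; the hypothetical helper `cross_via_cofactors` is compared against `numpy.cross`:

    ```python
    import numpy as np

    def cross_via_cofactors(v, w):
        """Expand the symbolic determinant in i, j, k along its first column."""
        x1, y1, z1 = v
        x2, y2, z2 = w
        return np.array([
            y1 * z2 - z1 * y2,       # coefficient of i
            -(x1 * z2 - z1 * x2),    # coefficient of j (note the minus sign)
            x1 * y2 - y1 * x2,       # coefficient of k
        ])

    v = np.array([1.0, 2.0, 3.0])
    w = np.array([4.0, 5.0, 6.0])
    print(cross_via_cofactors(v, w))   # [-3.  6. -3.]
    print(np.cross(v, w))              # same vector
    ```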

    Theorem 012676. If \(\mathbf{u} = \left[ \begin{array}{r} x_{0}\\ y_{0}\\ z_{0} \end{array} \right]\), \(\mathbf{v} = \left[ \begin{array}{r} x_{1}\\ y_{1}\\ z_{1} \end{array} \right]\), and \(\mathbf{w} = \left[ \begin{array}{r} x_{2}\\ y_{2}\\ z_{2} \end{array} \right]\), then \(\mathbf{u}\bullet (\mathbf{v} \times \mathbf{w}) = \det \left[ \begin{array}{rrr} x_{0} & x_{1} & x_{2}\\ y_{0} & y_{1} & y_{2}\\ z_{0} & z_{1} & z_{2} \end{array} \right]\).

    Recall that \(\mathbf{u}\bullet (\mathbf{v} \times \mathbf{w})\) is computed by multiplying corresponding components of \(\mathbf{u}\) and \(\mathbf{v} \times \mathbf{w}\) and then adding. Using equation ([eq:crossPdeterminant]), the result is:

    \[\mathbf{u}\bullet (\mathbf{v} \times \mathbf{w}) = x_{0}\left(\left| \begin{array}{rr} y_{1} & y_{2}\\ z_{1} & z_{2} \end{array} \right|\right) + y_{0}\left(- \left| \begin{array}{rr} x_{1} & x_{2}\\ z_{1} & z_{2} \end{array} \right|\right) +z_{0}\left( \left| \begin{array}{rr} x_{1} & x_{2}\\ y_{1} & y_{2} \end{array} \right|\right) = \det \left[ \begin{array}{rrr} x_{0} & x_{1} & x_{2}\\ y_{0} & y_{1} & y_{2}\\ z_{0} & z_{1} & z_{2} \end{array} \right] \nonumber \]

    where the last determinant is expanded along column 1.

    The result in Theorem [thm:012676] can be succinctly stated as follows: If \(\mathbf{u}\), \(\mathbf{v}\), and \(\mathbf{w}\) are three vectors in \(\mathbb{R}^3\), then

    \[\mathbf{u}\bullet (\mathbf{v} \times \mathbf{w}) = \det \left[ \begin{array}{ccc} \mathbf{u} & \mathbf{v} & \mathbf{w}\end{array}\right] \nonumber \]

    where \(\left[ \begin{array}{ccc} \mathbf{u} & \mathbf{v} & \mathbf{w}\end{array}\right]\) denotes the matrix with \(\mathbf{u}\), \(\mathbf{v}\), and \(\mathbf{w}\) as its columns. Now it is clear that \(\mathbf{v} \times \mathbf{w}\) is orthogonal to both \(\mathbf{v}\) and \(\mathbf{w}\) because the determinant of a matrix is zero if two columns are identical.
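
    A short numerical sketch (an illustration, not the text's proof) of this identity and of the orthogonality argument just given; the vectors are arbitrary choices and NumPy is assumed:

    ```python
    import numpy as np

    u = np.array([2.0, -1.0, 3.0])
    v = np.array([1.0, 0.0, 4.0])
    w = np.array([-2.0, 5.0, 1.0])

    triple = u @ np.cross(v, w)                      # u . (v x w)
    det = np.linalg.det(np.column_stack([u, v, w]))  # det[u v w]
    print(np.isclose(triple, det))                   # True

    # Orthogonality: replacing u by v (or w) repeats a column, so the
    # determinant -- and hence the dot product -- is zero.
    print(np.isclose(v @ np.cross(v, w), 0.0))       # True
    print(np.isclose(w @ np.cross(v, w), 0.0))       # True
    ```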

    Because of ([eq:crossPdeterminant]) and Theorem [thm:012676], several of the following properties of the cross product follow from properties of determinants (they can also be verified directly).

    Theorem 012690. Let \(\mathbf{u}\), \(\mathbf{v}\), and \(\mathbf{w}\) denote arbitrary vectors in \(\mathbb{R}^3\).


    1. \(\mathbf{u} \times \mathbf{v}\) is a vector.
    2. \(\mathbf{u} \times \mathbf{v}\) is orthogonal to both \(\mathbf{u}\) and \(\mathbf{v}\).
    3. \(\mathbf{u} \times \mathbf{0} = \mathbf{0} = \mathbf{0} \times \mathbf{u}\).
    4. \(\mathbf{u} \times \mathbf{u} = \mathbf{0}\).
    5. \(\mathbf{u} \times \mathbf{v} = -(\mathbf{v} \times \mathbf{u})\).
    6. \((k\mathbf{u}) \times \mathbf{v} = k(\mathbf{u} \times \mathbf{v}) = \mathbf{u} \times (k\mathbf{v})\) for any scalar \(k\).
    7. \(\mathbf{u} \times (\mathbf{v} + \mathbf{w}) = (\mathbf{u} \times \mathbf{v}) + (\mathbf{u} \times \mathbf{w})\).
    8. \((\mathbf{v} + \mathbf{w}) \times \mathbf{u} = (\mathbf{v} \times \mathbf{u}) + (\mathbf{w} \times \mathbf{u})\).

    (1) is clear; (2) follows from Theorem [thm:012676]; and (3) and (4) follow because the determinant of a matrix is zero if one column is zero or if two columns are identical. If two columns are interchanged, the determinant changes sign, and this proves (5). The proofs of (6), (7), and (8) are left as Exercise [ex:ch4_3_ex15].
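
    As a quick numerical spot check of properties (5) through (8) (the vectors and the scalar \(k\) are arbitrary choices; this supplements, but does not replace, Exercise [ex:ch4_3_ex15]):

    ```python
    import numpy as np

    u = np.array([1.0, 2.0, 0.0])
    v = np.array([3.0, -1.0, 4.0])
    w = np.array([0.0, 5.0, 2.0])
    k = -2.5

    print(np.allclose(np.cross(u, v), -np.cross(v, u)))                      # (5)
    print(np.allclose(np.cross(k * u, v), k * np.cross(u, v)))               # (6)
    print(np.allclose(np.cross(u, v + w), np.cross(u, v) + np.cross(u, w)))  # (7)
    print(np.allclose(np.cross(v + w, u), np.cross(v, u) + np.cross(w, u)))  # (8)
    ```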

    We now come to a fundamental relationship between the dot and cross products.

    Theorem 012715 (Lagrange Identity). If \(\mathbf{u}\) and \(\mathbf{v}\) are any two vectors in \(\mathbb{R}^3\), then

    \[\| \mathbf{u} \times \mathbf{v} \|^2 = \| \mathbf{u} \|^2\| \mathbf{v} \|^2 - (\mathbf{u}\bullet \mathbf{v})^2 \nonumber \]

    Given \(\mathbf{u}\) and \(\mathbf{v}\), introduce a coordinate system and write \(\mathbf{u} = \left[ \begin{array}{r} x_{1}\\ y_{1}\\ z_{1} \end{array} \right]\) and \(\mathbf{v} = \left[ \begin{array}{r} x_{2}\\ y_{2}\\ z_{2} \end{array} \right]\) in component form. Then all the terms in the identity can be computed in terms of the components. The detailed proof is left as Exercise [ex:ch4_3_ex14].
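
    A hedged numerical check of the Lagrange identity for one arbitrary pair of vectors (again only an illustration of the component computation the exercise asks for, assuming NumPy):

    ```python
    import numpy as np

    u = np.array([1.0, -2.0, 2.0])
    v = np.array([3.0, 0.0, -1.0])

    lhs = np.dot(np.cross(u, v), np.cross(u, v))           # ||u x v||^2
    rhs = np.dot(u, u) * np.dot(v, v) - np.dot(u, v) ** 2  # ||u||^2 ||v||^2 - (u.v)^2
    print(np.isclose(lhs, rhs))                            # True
    ```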

    An expression for the magnitude of the vector \(\mathbf{u} \times \mathbf{v}\) can be easily obtained from the Lagrange identity. If \(\theta\) is the angle between \(\mathbf{u}\) and \(\mathbf{v}\), substituting \(\mathbf{u}\bullet \mathbf{v} = \|\mathbf{u}\|\|\mathbf{v}\| \cos \theta\) into the Lagrange identity gives

    \[\| \mathbf{u} \times \mathbf{v} \|^2 = \| \mathbf{u} \|^2\| \mathbf{v} \|^2 - \| \mathbf{u} \|^2\| \mathbf{v} \|^2\cos^2\theta = \| \mathbf{u} \|^2\| \mathbf{v} \|^2\sin^2\theta \nonumber \]

    using the fact that \(1 - \cos^{2} \theta = \sin^{2} \theta\). But \(\sin \theta\) is nonnegative on the range \(0 \leq \theta \leq \pi\), so taking the positive square root of both sides gives

    \[\| \mathbf{u} \times \mathbf{v} \| = \| \mathbf{u} \| \| \mathbf{v} \| \sin\theta \nonumber \]

    This expression for \(\|\mathbf{u} \times \mathbf{v}\|\) makes no reference to a coordinate system and, moreover, it has a nice geometrical interpretation. The parallelogram determined by the vectors \(\mathbf{u}\) and \(\mathbf{v}\) has base length \(\|\mathbf{v}\|\) and altitude \(\|\mathbf{u}\| \sin \theta\) (see Figure [fig:012736]). Hence the area of the parallelogram formed by \(\mathbf{u}\) and \(\mathbf{v}\) is

    \[(\| \mathbf{u} \| \sin\theta) \| \mathbf{v} \| = \| \mathbf{u} \times \mathbf{v} \| \nonumber \]

    This proves the first part of Theorem [thm:012738].

    Theorem 012738. If \(\mathbf{u}\) and \(\mathbf{v}\) are two nonzero vectors and \(\theta\) is the angle between \(\mathbf{u}\) and \(\mathbf{v}\), then

    1. \(\|\mathbf{u} \times \mathbf{v}\| = \|\mathbf{u}\|\|\mathbf{v}\| \sin \theta =\) the area of the parallelogram determined by \(\mathbf{u}\) and \(\mathbf{v}\).
    2. \(\mathbf{u}\) and \(\mathbf{v}\) are parallel if and only if \(\mathbf{u} \times \mathbf{v} = \mathbf{0}\).

    By (1), \(\mathbf{u} \times \mathbf{v} = \mathbf{0}\) if and only if the area of the parallelogram is zero. By Figure [fig:012736] the area vanishes if and only if \(\mathbf{u}\) and \(\mathbf{v}\) have the same or opposite direction—that is, if and only if they are parallel.

    Example 012749. Find the area of the triangle with vertices \(P(2, 1, 0)\), \(Q(3, -1, 1)\), and \(R(1, 0, 1)\).

    We have \(\overrightarrow{RP} = \left[ \begin{array}{r} 1\\ 1\\ -1 \end{array} \right]\) and \(\overrightarrow{RQ} = \left[ \begin{array}{r} 2\\ -1\\ 0 \end{array} \right]\). The area of the triangle is half the area of the parallelogram (see the diagram), and so equals \(\frac{1}{2} \| \overrightarrow{RP} \times \overrightarrow{RQ} \|\). We have

    \[\overrightarrow{RP} \times \overrightarrow{RQ} = \det \left[ \begin{array}{rrr} \mathbf{i} & 1 & 2\\ \mathbf{j} & 1 & -1\\ \mathbf{k} & -1 & 0 \end{array} \right] = \left[ \begin{array}{r} -1\\ -2\\ -3 \end{array} \right] \nonumber \]

    so the area of the triangle is \(\frac{1}{2} \| \overrightarrow{RP} \times \overrightarrow{RQ} \| = \frac{1}{2}\sqrt{1 + 4 + 9} = \frac{1}{2}\sqrt{14}.\)
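
    The same example can be reproduced numerically; a sketch (the vertex coordinates are those of the example, and NumPy is assumed):

    ```python
    import numpy as np

    P = np.array([2.0, 1.0, 0.0])
    Q = np.array([3.0, -1.0, 1.0])
    R = np.array([1.0, 0.0, 1.0])

    RP, RQ = P - R, Q - R
    area = 0.5 * np.linalg.norm(np.cross(RP, RQ))
    print(area, 0.5 * np.sqrt(14))   # both approximately 1.8708
    ```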

    If three vectors \(\mathbf{u}\), \(\mathbf{v}\), and \(\mathbf{w}\) are given, they determine a “squashed” rectangular solid called a parallelepiped (Figure [fig:012763]), and it is often useful to be able to find the volume of such a solid. The base of the solid is the parallelogram determined by \(\mathbf{u}\) and \(\mathbf{v}\), so it has area \(A = \|\mathbf{u} \times \mathbf{v}\|\) by Theorem [thm:012738]. The height of the solid is the length \(h\) of the projection of \(\mathbf{w}\) on \(\mathbf{u} \times \mathbf{v}\). Hence

    \[h = \left| \frac{\mathbf{w}\bullet (\mathbf{u} \times \mathbf{v})}{\| \mathbf{u} \times \mathbf{v} \|^2}\right|\| \mathbf{u} \times \mathbf{v} \| = \frac{|\mathbf{w}\bullet (\mathbf{u} \times \mathbf{v})|}{\| \mathbf{u} \times \mathbf{v} \|} = \frac{|\mathbf{w}\bullet (\mathbf{u} \times \mathbf{v})|}{A} \nonumber \]

    Thus the volume of the parallelepiped is \(hA = |\mathbf{w}\bullet (\mathbf{u} \times \mathbf{v})|\). This proves

    Theorem 012765. The volume of the parallelepiped determined by three vectors \(\mathbf{w}\), \(\mathbf{u}\), and \(\mathbf{v}\) (Figure [fig:012763]) is given by \(|\mathbf{w}\bullet (\mathbf{u} \times \mathbf{v})|\).

    Example 012768. Find the volume of the parallelepiped determined by the vectors

    \[\mathbf{w} = \left[ \begin{array}{r} 1\\ 2\\ -1 \end{array} \right], \mathbf{u} = \left[ \begin{array}{r} 1\\ 1\\ 0 \end{array} \right], \mathbf{v} = \left[ \begin{array}{r} -2\\ 0\\ 1 \end{array} \right] \nonumber \]

    By Theorem [thm:012676], \(\mathbf{w}\bullet (\mathbf{u} \times \mathbf{v}) = \det \left[ \begin{array}{rrr} 1 & 1 & -2\\ 2 & 1 & 0\\ -1 & 0 & 1 \end{array} \right] = -3\). Hence the volume is \(|\mathbf{w}\bullet (\mathbf{u} \times \mathbf{v})| = |-3| = 3\) by Theorem [thm:012765].
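
    A sketch reproducing this volume computation, both as \(|\mathbf{w}\bullet (\mathbf{u} \times \mathbf{v})|\) and as the absolute value of the determinant used in the example (NumPy assumed):

    ```python
    import numpy as np

    w = np.array([1.0, 2.0, -1.0])
    u = np.array([1.0, 1.0, 0.0])
    v = np.array([-2.0, 0.0, 1.0])

    vol_triple = abs(w @ np.cross(u, v))                        # |w . (u x v)|
    vol_det = abs(np.linalg.det(np.column_stack([w, u, v])))    # |det[w u v]|
    print(vol_triple, vol_det)                                  # both 3.0
    ```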

    We can now give an intrinsic description of the cross product \(\mathbf{u} \times \mathbf{v}\). Its magnitude \(\|\mathbf{u} \times \mathbf{v}\| = \|\mathbf{u}\|\|\mathbf{v}\| \sin \theta\) is coordinate-free. If \(\mathbf{u} \times \mathbf{v} \neq \mathbf{0}\), its direction is very nearly determined by the fact that it is orthogonal to both \(\mathbf{u}\) and \(\mathbf{v}\) and so points along the line normal to the plane determined by \(\mathbf{u}\) and \(\mathbf{v}\). It remains only to decide which of the two possible directions is correct.

    Before this can be done, the basic issue of how coordinates are assigned must be clarified. When coordinate axes are chosen in space, the procedure is as follows: An origin is selected, two perpendicular lines (the \(x\) and \(y\) axes) are chosen through the origin, and a positive direction on each of these axes is selected quite arbitrarily. Then the line through the origin normal to this \(x\)-\(y\) plane is called the \(z\) axis, but there is a choice of which direction on this axis is the positive one. The two possibilities are shown in Figure [fig:012779], and it is a standard convention that cartesian coordinates are always right-hand coordinate systems. The reason for this terminology is that, in such a system, if the \(z\) axis is grasped in the right hand with the thumb pointing in the positive \(z\) direction, then the fingers curl around from the positive \(x\) axis to the positive \(y\) axis (through a right angle).

    Suppose now that \(\mathbf{u}\) and \(\mathbf{v}\) are given and that \(\theta\) is the angle between them (so \(0 \leq \theta \leq \pi\)). Then the direction of \(\mathbf{u} \times \mathbf{v}\) is given by the right-hand rule.

    Theorem 012781 (Right-hand Rule). If the vector \(\mathbf{u} \times \mathbf{v}\) is grasped in the right hand and the fingers curl around from \(\mathbf{u}\) to \(\mathbf{v}\) through the angle \(\theta\), the thumb points in the direction of \(\mathbf{u} \times \mathbf{v}\).

    To indicate why this is true, introduce coordinates in \(\mathbb{R}^3\) as follows: Let \(\mathbf{u}\) and \(\mathbf{v}\) have a common tail \(O\), choose the origin at \(O\), choose the \(x\) axis so that \(\mathbf{u}\) points in the positive \(x\) direction, and then choose the \(y\) axis so that \(\mathbf{v}\) is in the \(x\)-\(y\) plane and the positive \(y\) axis is on the same side of the \(x\) axis as \(\mathbf{v}\). Then, in this system, \(\mathbf{u}\) and \(\mathbf{v}\) have component form \(\mathbf{u} = \left[ \begin{array}{r} a\\ 0\\ 0 \end{array} \right]\) and \(\mathbf{v} = \left[ \begin{array}{r} b\\ c\\ 0 \end{array} \right]\) where \(a > 0\) and \(c > 0\). The situation is depicted in Figure [fig:012789]. The right-hand rule asserts that \(\mathbf{u} \times \mathbf{v}\) should point in the positive \(z\) direction. But our definition of \(\mathbf{u} \times \mathbf{v}\) gives

    \[\mathbf{u} \times \mathbf{v} = \det \left[ \begin{array}{rrr} \mathbf{i} & a & b\\ \mathbf{j} & 0 & c\\ \mathbf{k} & 0 & 0 \end{array} \right] = \left[ \begin{array}{c} 0\\ 0\\ ac \end{array} \right] = (ac)\mathbf{k} \nonumber \]

    and \((ac) \mathbf{k}\) has the positive \(z\) direction because \(ac > 0\).
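
    A final sketch of this coordinate argument: with \(\mathbf{u}\) along the positive \(x\) axis and \(\mathbf{v}\) in the \(x\)-\(y\) plane (the sample values of \(a\), \(b\), \(c\) with \(a > 0\) and \(c > 0\) are arbitrary), the cross product does land on the positive \(z\) axis:

    ```python
    import numpy as np

    a, b, c = 2.0, 1.5, 3.0          # a > 0 and c > 0, as in the argument above
    u = np.array([a, 0.0, 0.0])
    v = np.array([b, c, 0.0])

    print(np.cross(u, v))            # [0. 0. 6.] = (a*c) k, the +z direction
    ```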


    This page titled 4.3: More on the Cross Product is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by W. Keith Nicholson (Lyryx Learning Inc.) via source content that was edited to the style and standards of the LibreTexts platform.