Mathematics LibreTexts

1.4: Cross Product


    In Section 1.3 we defined the dot product, which gave a way of multiplying two vectors. The resulting product, however, was a scalar, not a vector. In this section we will define a product of two vectors that does result in another vector. This product, called the cross product, is only defined for vectors in \(\mathbb{R}^{3}\). The definition may appear strange and lacking motivation, but we will see the geometric basis for it shortly.

    Definition 1.8

    Let \(\textbf{v} = (v_{1}, v_{2}, v_{3})\) and \(\textbf{w} = (w_{1}, w_{2}, w_{3})\) be vectors in \(\mathbb{R}^{3}\). The \(\textbf{cross product}\) of \(\textbf{v}\) and \(\textbf{w}\), denoted by \(\textbf{v} \times \textbf{w}\), is the vector in \(\mathbb{R}^{3}\) given by:

    \[\textbf{v} \times \textbf{w} = (v_{2}w_{3} - v_{3}w_{2}, v_{3}w_{1} - v_{1}w_{3}, v_{1}w_{2} - v_{2}w_{1})\]
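For readers who want to experiment, Definition 1.8 translates directly into code. A minimal sketch in Python (the function name `cross` is our own choice, not from the text):

```python
def cross(v, w):
    """Cross product of two vectors in R^3, per Definition 1.8."""
    v1, v2, v3 = v
    w1, w2, w3 = w
    return (v2 * w3 - v3 * w2,   # first component
            v3 * w1 - v1 * w3,   # second component
            v1 * w2 - v2 * w1)   # third component

print(cross((1, 0, 0), (0, 1, 0)))  # → (0, 0, 1)
```

Note the cyclic pattern in the indices (2-3, 3-1, 1-2), which makes the formula easier to remember.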

    Example 1.7

    Find \(\textbf{i} \times \textbf{j}\).


    Since \(\textbf{i} = (1,0,0)\) and \(\textbf{j} = (0,1,0)\), then
\begin{align*}
\textbf{i} \times \textbf{j} &= ((0)(0) - (0)(1),(0)(0) - (1)(0),(1)(1) - (0)(0))\\[4pt]
&= (0,0,1)\\[4pt]
&= \textbf{k}
\end{align*}

    Similarly it can be shown that \(\textbf{j} \times \textbf{k} = \textbf{i}\) and \(\textbf{k} \times \textbf{i} = \textbf{j}\).

    Figure 1.4.1

    In the above example, the cross product of the given vectors was perpendicular to both those vectors. It turns out that this will always be the case.

    Theorem 1.11

    If the cross product \(\textbf{v} \times \textbf{w}\) of two nonzero vectors \(\textbf{v}\) and \(\textbf{w}\) is also a nonzero vector, then it is perpendicular to both \(\textbf{v}\) and \(\textbf{w}\).


    We will show that \((\textbf{v} \times \textbf{w}) \cdot \textbf{v} = 0\):
\begin{align*}
(\textbf{v} \times \textbf{w}) \cdot \textbf{v} &= (v_{2}w_{3} - v_{3}w_{2}, v_{3}w_{1} - v_{1}w_{3}, v_{1}w_{2} - v_{2}w_{1}) \cdot (v_{1}, v_{2}, v_{3})\\[4pt]
&= v_{2}w_{3}v_{1} - v_{3}w_{2}v_{1} + v_{3}w_{1}v_{2} - v_{1}w_{3}v_{2} + v_{1}w_{2}v_{3} - v_{2}w_{1}v_{3}\\[4pt]
&= v_{1}v_{2}w_{3} - v_{1}v_{2}w_{3} + w_{1}v_{2}v_{3} - w_{1}v_{2}v_{3} + v_{1}w_{2}v_{3} - v_{1}w_{2}v_{3}\\[4pt]
&= 0 \text{ , after rearranging the terms.}
\end{align*}
    \(\therefore \textbf{v} \times \textbf{w} \perp \textbf{v}\) by Corollary 1.7. The proof that \(\textbf{v} \times \textbf{w} \perp \textbf{w}\) is similar.
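The cancellation in the proof can be spot-checked numerically. A quick sketch, assuming nothing beyond Definition 1.8 (the helper names are ours):

```python
def cross(v, w):
    (v1, v2, v3), (w1, w2, w3) = v, w
    return (v2 * w3 - v3 * w2, v3 * w1 - v1 * w3, v1 * w2 - v2 * w1)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

v, w = (1, 3, 25), (-7, 8, 15)
vxw = cross(v, w)
# Theorem 1.11: the cross product is perpendicular to both factors
print(dot(vxw, v), dot(vxw, w))  # → 0 0
```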


    As a consequence of the above theorem and Theorem 1.9, we have the following:

    Corollary 1.12

    If the cross product \(\textbf{v} \times \textbf{w}\) of two nonzero vectors \(\textbf{v}\) and \(\textbf{w}\) is also a nonzero vector, then it is perpendicular to the span of \(\textbf{v}\) and \(\textbf{w}\).

    The span of any two nonzero, nonparallel vectors \(\textbf{v}, \textbf{w}\) in \(\mathbb{R}^{3}\) is a plane \(P\), so the above corollary shows that \(\textbf{v} \times \textbf{w}\) is perpendicular to that plane. As shown in Figure 1.4.2, there are two possible directions for \(\textbf{v} \times \textbf{w}\), one the opposite of the other. It turns out (see Appendix B) that the direction of \(\textbf{v} \times \textbf{w}\) is given by the \(\textit{right-hand rule}\), that is, the vectors \(\textbf{v}, \textbf{w}\), \(\textbf{v} \times \textbf{w}\) form a right-handed system. Recall from Section 1.1 that this means that you can point your thumb upwards in the direction of \(\textbf{v} \times \textbf{w}\) while rotating \(\textbf{v}\) towards \(\textbf{w}\) with the remaining four fingers.

    Figure 1.4.2 Direction of v × w

    We will now derive a formula for the magnitude of \(\textbf{v} \times \textbf{w}\), for nonzero vectors \(\textbf{v}, \textbf{w}:\)

\begin{align*}
\norm{\textbf{v} \times \textbf{w}}^{2} &= (v_{2}w_{3} - v_{3}w_{2})^{2} +
    (v_{3}w_{1} - v_{1}w_{3})^{2} + (v_{1}w_{2} - v_{2}w_{1})^{2}\\[4pt]
    &= v_{2}^{2}w_{3}^{2} - 2v_{2}w_{2}v_{3}w_{3} + v_{3}^{2}w_{2}^{2} +
    v_{3}^{2}w_{1}^{2} - 2v_{1}w_{1}v_{3}w_{3} + v_{1}^{2}w_{3}^{2} +
    v_{1}^{2}w_{2}^{2} - 2v_{1}w_{1}v_{2}w_{2} + v_{2}^{2}w_{1}^{2}\\[4pt]
    &= v_{1}^{2}(w_{2}^{2} + w_{3}^{2}) + v_{2}^{2}(w_{1}^{2} + w_{3}^{2}) +
    v_{3}^{2}(w_{1}^{2} + w_{2}^{2})
    - 2(v_{1}w_{1}v_{2}w_{2} +
    v_{1}w_{1}v_{3}w_{3} + v_{2}w_{2}v_{3}w_{3})\end{align*}

    and now adding and subtracting \(v_{1}^{2}w_{1}^{2}\), \(v_{2}^{2}w_{2}^{2}\), and \(v_{3}^{2}w_{3}^{2}\) on the right side gives

    \begin{align*} & = v_{1}^{2}(w_{1}^{2} + w_{2}^{2} + w_{3}^{2}) + v_{2}^{2}(w_{1}^{2} + w_{2}^{2} + w_{3}^{2}) + v_{3}^{2}(w_{1}^{2} + w_{2}^{2} + w_{3}^{2})\\[4pt] &\mathrel{\phantom{=}} {} - (v_{1}^{2}w_{1}^{2} + v_{2}^{2}w_{2}^{2} + v_{3}^{2}w_{3}^{2} + 2(v_{1}w_{1}v_{2}w_{2} + v_{1}w_{1}v_{3}w_{3} + v_{2}w_{2}v_{3}w_{3}))\\[4pt] &= (v_{1}^{2} + v_{2}^{2} + v_{3}^{2})(w_{1}^{2} + w_{2}^{2} + w_{3}^{2})\\[4pt] &\mathrel{\phantom{=}} {} - ((v_{1}w_{1})^{2} + (v_{2}w_{2})^{2} + (v_{3}w_{3})^{2} + 2(v_{1}w_{1})(v_{2}w_{2}) + 2(v_{1}w_{1})(v_{3}w_{3}) + 2(v_{2}w_{2})(v_{3}w_{3}))\end{align*}

    so using \((a + b + c)^{2} = a^{2} + b^{2} + c^{2} + 2ab + 2ac + 2bc\) for the subtracted term gives

    \begin{align*} & = (v_{1}^{2} + v_{2}^{2} + v_{3}^{2})(w_{1}^{2} + w_{2}^{2} + w_{3}^{2}) -
    (v_{1}w_{1} + v_{2}w_{2} + v_{3}w_{3})^{2}\\[4pt]
    &= \norm{\textbf{v}}^{2} \, \norm{\textbf{w}}^{2} - (\textbf{v} \cdot \textbf{w})^{2}\\[4pt]
    &= \norm{\textbf{v}}^{2} \, \norm{\textbf{w}}^{2} \biggl( 1 -
    \frac{(\textbf{v} \cdot \textbf{w})^{2}}{\norm{\textbf{v}}^{2} \, \norm{\textbf{w}}^{2}} \biggr)
    \text{ , since \(\norm{\textbf{v}} > 0\) and \(\norm{\textbf{w}} > 0\), so by Theorem 1.6}\\[4pt]
    &= \norm{\textbf{v}}^{2} \, \norm{\textbf{w}}^{2} (1 - \cos^{2} \theta)
    \text{ , where \(\theta\) is the angle between \(\textbf{v}\) and \(\textbf{w}\), so}\\[4pt]
    \norm{\textbf{v} \times \textbf{w}}^{2} &= \norm{\textbf{v}}^{2} \, \norm{\textbf{w}}^{2} \, \sin^{2} \theta
\text{ , and since \(0^{\circ} \le \theta \le 180^{\circ}\), then \(\sin \theta \ge 0\), so we have:}\end{align*}

    If \(\theta\) is the angle between nonzero vectors \(\textbf{v}\) and \(\textbf{w}\) in \(\mathbb{R}^{3}\), then

    \[\norm{\textbf{v} \times \textbf{w}} = \norm{\textbf{v}}\,\norm{\textbf{w}}\,\sin \theta \]
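This magnitude formula can be verified numerically against the component definition, recovering \(\theta\) from the dot product as in Theorem 1.6. A sketch with helper names of our own:

```python
import math

def cross(v, w):
    (v1, v2, v3), (w1, w2, w3) = v, w
    return (v2 * w3 - v3 * w2, v3 * w1 - v1 * w3, v1 * w2 - v2 * w1)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

v, w = (2, 1, 3), (-1, 4, 2)
theta = math.acos(dot(v, w) / (norm(v) * norm(w)))  # angle between v and w
lhs = norm(cross(v, w))                    # direct computation
rhs = norm(v) * norm(w) * math.sin(theta)  # the formula above
print(abs(lhs - rhs) < 1e-9)  # → True
```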

    It may seem strange to bother with the above formula, when the magnitude of the cross product can be calculated directly, like for any other vector. The formula is more useful for its applications in geometry, as in the following example.

    Example 1.8

    Let \(\triangle PQR\) and \(PQRS\) be a triangle and parallelogram, respectively, as shown in Figure 1.4.3.

    Figure 1.4.3

    Think of the triangle as existing in \(\mathbb{R}^{3}\), and identify the sides \(QR\) and \(QP\) with vectors \(\textbf{v}\) and \(\textbf{w}\), respectively, in \(\mathbb{R}^{3}\). Let \(\theta\) be the angle between \(\textbf{v}\) and \(\textbf{w}\). The area \(A_{PQR}\) of \(\triangle PQR\) is \(\frac{1}{2} b h\), where \(b\) is the base of the triangle and \(h\) is the height. So we see that

    \[\nonumber b = \norm{\textbf{v}} \text{ and } h = \norm{\textbf{w}}\,\sin \theta \]

\[\nonumber \begin{align} A_{PQR} &= \frac{1}{2}\,\norm{\textbf{v}}\,\norm{\textbf{w}}\,\sin \theta \\[4pt] \nonumber&= \frac{1}{2}\,\norm{\textbf{v} \times \textbf{w}} \end{align}\]

    So since the area \(A_{PQRS}\) of the parallelogram \(PQRS\) is twice the area of the triangle \(\triangle PQR\), then

    \[\nonumber A_{PQRS} = \norm{\textbf{v}}\,\norm{\textbf{w}}\,\sin \theta\]

    By the discussion in Example 1.8, we have proved the following theorem:

    Theorem 1.13: Area of triangles and parallelograms

    1. The area \(A\) of a triangle with adjacent sides \(\textbf{v}, \textbf{w}\) (as vectors in \(\mathbb{R}^{3}\)) is:
      \[\nonumber A = \frac{1}{2}\,\norm{\textbf{v} \times \textbf{w}}\]
    2. The area \(A\) of a parallelogram with adjacent sides \(\textbf{v}, \textbf{w}\) (as vectors in \(\mathbb{R}^{3}\)) is:
      \[\nonumber A = \norm{\textbf{v} \times \textbf{w}}\]
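Theorem 1.13 reduces to a few lines of code. A sketch in Python (the function names are ours):

```python
import math

def cross(v, w):
    (v1, v2, v3), (w1, w2, w3) = v, w
    return (v2 * w3 - v3 * w2, v3 * w1 - v1 * w3, v1 * w2 - v2 * w1)

def norm(a):
    return math.sqrt(sum(x * x for x in a))

def triangle_area(v, w):
    """Area of a triangle with adjacent sides v, w (Theorem 1.13, part 1)."""
    return 0.5 * norm(cross(v, w))

def parallelogram_area(v, w):
    """Area of a parallelogram with adjacent sides v, w (Theorem 1.13, part 2)."""
    return norm(cross(v, w))

# The vectors from Example 1.9 below reproduce the area computed there
print(round(triangle_area((1, 3, 25), (-7, 8, 15)), 2))  # → 123.46
```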

    It may seem at first glance that since the formulas derived in Example 1.8 were for the adjacent sides \(QP\) and \(QR\) only, then the more general statements in Theorem 1.13 that the formulas hold for \(\textit{any}\) adjacent sides are not justified. We would get a different \(\textit{formula}\) for the area if we had picked \(PQ\) and \(PR\) as the adjacent sides, but it can be shown (see Exercise 26) that the different formulas would yield the same value, so the choice of adjacent sides indeed does not matter, and Theorem 1.13 is valid.

    Theorem 1.13 makes it simpler to calculate the area of a triangle in 3-dimensional space than by using traditional geometric methods.

    Example 1.9

    Calculate the area of the triangle \(\triangle PQR\), where \(P = (2,4,-7)\), \(Q = (3,7,18)\), and \(R =(-5,12,8)\).

    Figure 1.4.4


    Let \(\textbf{v} = \overrightarrow{PQ}\) and \(\textbf{w} = \overrightarrow{PR}\), as in Figure 1.4.4. Then \(\textbf{v} = (3,7,18) - (2,4,-7) = (1,3,25)\) and \(\textbf{w} = (-5,12,8) - (2,4,-7) = (-7,8,15)\), so the area \(A\) of the triangle \(\triangle PQR\) is

\begin{align*}
A &= \frac{1}{2}\,\norm{\textbf{v} \times \textbf{w}} = \frac{1}{2}\,\norm{(1,3,25) \times (-7,8,15)}\\[4pt]
&= \frac{1}{2}\,\norm{((3)(15) - (25)(8), (25)(-7) - (1)(15), (1)(8) - (3)(-7))}\\[4pt]
&= \frac{1}{2}\,\norm{(-155, -190, 29)}\\[4pt]
&= \frac{1}{2}\,\sqrt{(-155)^2 + (-190)^2 + 29^2} = \frac{1}{2}\,\sqrt{60966}\\[4pt]
A &\approx 123.46
\end{align*}

    Example 1.10

    Calculate the area of the parallelogram \(PQRS\), where \(P = (1,1)\), \(Q = (2,3)\), \(R = (5,4)\), and \(S = (4,2)\).

    Figure 1.4.5


    Let \(\textbf{v} = \overrightarrow{SP}\) and \(\textbf{w} = \overrightarrow{SR}\), as in Figure 1.4.5. Then \(\textbf{v} = (1,1) - (4,2) = (-3,-1)\) and \(\textbf{w} = (5,4) - (4,2) = (1,2)\). But these are vectors in \(\mathbb{R}^{2}\), and the cross product is only defined for vectors in \(\mathbb{R}^{3}\). However, \(\mathbb{R}^{2}\) can be thought of as the subset of \(\mathbb{R}^{3}\) such that the \(z\)-coordinate is always \(0\). So we can write \(\textbf{v} = (-3,-1,0)\) and \(\textbf{w} = (1,2,0)\). Then the area \(A\) of \(PQRS\) is

\begin{align*}
A &= \norm{\textbf{v} \times \textbf{w}} = \norm{(-3,-1,0) \times (1,2,0)}\\[4pt]
&= \norm{((-1)(0) - (0)(2), (0)(1) - (-3)(0), (-3)(2) - (-1)(1))}\\[4pt]
&= \norm{(0,0, -5)}\\[4pt]
A &= 5
\end{align*}
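For plane figures, the embedding trick above means the cross product has only a \(z\)-component, so the area formula collapses to a single 2-D expression. A sketch (the function name is ours):

```python
def parallelogram_area_2d(v, w):
    """Area of a parallelogram with adjacent sides v, w in R^2.

    Writing v = (v1, v2, 0) and w = (w1, w2, 0) in R^3, the cross
    product is (0, 0, v1*w2 - v2*w1), so its norm is |v1*w2 - v2*w1|.
    """
    return abs(v[0] * w[1] - v[1] * w[0])

# Example 1.10: v = (-3, -1), w = (1, 2)
print(parallelogram_area_2d((-3, -1), (1, 2)))  # → 5
```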

    The following theorem summarizes the basic properties of the cross product.

    Theorem 1.14

    For any vectors \(\textbf{u}, \textbf{v}, \textbf{w}\) in \(\mathbb{R}^{3}\), and scalar \(k\), we have

    (a) \(\textbf{v} \times \textbf{w} = -\textbf{w} \times \textbf{v}\) Anticommutative Law
    (b) \(\textbf{u} \times (\textbf{v} + \textbf{w}) = \textbf{u} \times \textbf{v} + \textbf{u} \times \textbf{w}\) Distributive Law
    (c) \((\textbf{u} + \textbf{v}) \times \textbf{w} = \textbf{u} \times \textbf{w} + \textbf{v} \times \textbf{w}\) Distributive Law
    (d) \((k\textbf{v}) \times \textbf{w} = \textbf{v} \times (k\textbf{w}) = k(\textbf{v} \times \textbf{w})\) Associative Law
    (e) \(\textbf{v} \times \textbf{0} = \textbf{0} = \textbf{0} \times \textbf{v}\)
    (f) \(\textbf{v} \times \textbf{v} = \textbf{0}\)
    (g) \(\textbf{v} \times \textbf{w} = \textbf{0}\) if and only if \(\textbf{v} \parallel \textbf{w}\)


    The proofs of properties (b)-(f) are straightforward. We will prove parts (a) and (g) and leave the rest to the reader as exercises.

    Figure 1.4.6

    (a) By the definition of the cross product and scalar multiplication, we have:
\begin{align*}
\textbf{v} \times \textbf{w} &= (v_{2}w_{3} - v_{3}w_{2}, v_{3}w_{1} - v_{1}w_{3}, v_{1}w_{2} - v_{2}w_{1})\\[4pt]
&= -(v_{3}w_{2} - v_{2}w_{3}, v_{1}w_{3} - v_{3}w_{1}, v_{2}w_{1} - v_{1}w_{2})\\[4pt]
&= -(w_{2}v_{3} - w_{3}v_{2}, w_{3}v_{1} - w_{1}v_{3}, w_{1}v_{2} - w_{2}v_{1})\\[4pt]
&= -\textbf{w} \times \textbf{v}
\end{align*}
    Note that this says that \(\textbf{v} \times \textbf{w}\) and \(\textbf{w} \times \textbf{v}\) have the same magnitude but opposite direction (see Figure 1.4.6).

    (g) If either \(\textbf{v}\) or \(\textbf{w}\) is \(\textbf{0}\) then \(\textbf{v} \times \textbf{w} = \textbf{0}\) by part (e), and either \(\textbf{v} = \textbf{0} = 0\textbf{w}\) or \(\textbf{w} = \textbf{0} = 0\textbf{v}\), so \(\textbf{v}\) and \(\textbf{w}\) are scalar multiples, i.e. they are parallel.
If both \(\textbf{v}\) and \(\textbf{w}\) are nonzero, and \(\theta\) is the angle between them, then by the formula \(\norm{\textbf{v} \times \textbf{w}} = \norm{\textbf{v}}\,\norm{\textbf{w}}\,\sin \theta\) derived above, \(\textbf{v} \times \textbf{w} = \textbf{0}\) if and only if \(\norm{\textbf{v}}\,\norm{\textbf{w}}\,\sin \theta = 0\), which is true if and only if \(\sin \theta = 0\) (since \(\norm{\textbf{v}} > 0\) and \(\norm{\textbf{w}} > 0)\). So since \(0^{\circ} \le \theta \le 180^{\circ}\), then \(\sin \theta = 0\) if and only if \(\theta = 0^{\circ}\) or \(180^{\circ}\). But the angle between \(\textbf{v}\) and \(\textbf{w}\) is \(0^{\circ}\) or \(180^{\circ}\) if and only if \(\textbf{v} \parallel \textbf{w}\).


    Example 1.11

    Adding to Example 1.7, we have

    \[\nonumber \textbf{i} \times \textbf{j} = \textbf{k},\quad \textbf{j} \times \textbf{k} = \textbf{i},\quad \textbf{k} \times \textbf{i} = \textbf{j}\]
    \[\nonumber \textbf{j} \times \textbf{i} = -\textbf{k},\quad \textbf{k} \times \textbf{j} = -\textbf{i},\quad \textbf{i} \times \textbf{k} = -\textbf{j}\]
    \[\nonumber \textbf{i} \times \textbf{i} = \textbf{j} \times \textbf{j} = \textbf{k} \times \textbf{k} = \textbf{0}\]

    Recall from geometry that a \(\textit{parallelepiped}\) is a 3-dimensional solid with 6 faces, all of which are parallelograms.

    Example 1.12

    \(\textit{Volume of a parallelepiped:}\) Let the vectors \(\textbf{u}, \textbf{v}, \textbf{w}\) in \(\mathbb{R}^{3}\) represent adjacent sides of a parallelepiped \(P\), with \(\textbf{u}, \textbf{v}, \textbf{w}\) forming a right-handed system, as in Figure 1.4.7. Show that the volume of \(P\) is the \(\textit{scalar triple product}\) \(\textbf{u} \cdot (\textbf{v} \times \textbf{w})\).

    Figure 1.4.7 Parallelepiped \(P\)


    Recall that the volume \(\text{vol}(P)\) of a parallelepiped \(P\) is the area \(A\) of the base parallelogram times the height \(h\). By Theorem 1.13 (b), the area \(A\) of the base parallelogram is \(\norm{\textbf{v} \times \textbf{w}}\). And we can see that since \(\textbf{v} \times \textbf{w}\) is perpendicular to the base parallelogram determined by \(\textbf{v}\) and \(\textbf{w}\), then the height \(h\) is \(\norm{\textbf{u}}\,\cos \theta\), where \(\theta\) is the angle between \(\textbf{u}\) and \(\textbf{v} \times \textbf{w}\). By Theorem 1.6 we know that

\[\nonumber \begin{align} \cos \theta &= \dfrac{\textbf{u} \cdot (\textbf{v} \times \textbf{w})}{\norm{\textbf{u}} \, \norm{\textbf{v} \times \textbf{w}}}. \text{ Hence,} \\[4pt] \nonumber \text{vol}(P) &= A \, h \\[4pt] \nonumber&= \norm{\textbf{v} \times \textbf{w}} \, \dfrac{\norm{\textbf{u}} \, \textbf{u} \cdot (\textbf{v} \times \textbf{w})}{\norm{\textbf{u}} \,\norm{\textbf{v} \times \textbf{w}}} \\[4pt] \nonumber&= \textbf{u} \cdot (\textbf{v} \times \textbf{w}) \end{align}\]

    In Example 1.12 the height \(h\) of the parallelepiped is \(\norm{\textbf{u}}\,\cos \theta\), and not \(-\norm{\textbf{u}}\,\cos \theta\), because the vector \(\textbf{u}\) is on the same side of the base parallelogram's plane as the vector \(\textbf{v} \times \textbf{w}\) (so that \(\cos \theta > 0\)). Since the volume is the same no matter which base and height we use, then repeating the same steps using the base determined by \(\textbf{u}\) and \(\textbf{v}\) (since \(\textbf{w}\) is on the same side of that base's
    plane as \(\textbf{u} \times \textbf{v}\)), the volume is \(\textbf{w} \cdot (\textbf{u} \times \textbf{v})\). Repeating this with the base determined by \(\textbf{w}\) and \(\textbf{u}\), we have the following result:

    For any vectors \(\textbf{u}, \textbf{v}, \textbf{w}\) in \(\mathbb{R}^{3}\),

    \[\textbf{u} \cdot (\textbf{v} \times \textbf{w}) = \textbf{w} \cdot (\textbf{u} \times \textbf{v}) = \textbf{v} \cdot (\textbf{w} \times \textbf{u})\label{Eq1.1.2}\]

    (Note that the equalities hold trivially if any of the vectors are \(\textbf{0}\).)

    Since \(\textbf{v} \times \textbf{w} = -\textbf{w} \times \textbf{v}\) for any vectors \(\textbf{v}, \textbf{w}\) in \(\mathbb{R}^{3}\), then picking the wrong order for the three adjacent sides in the scalar triple product in Equation \ref{Eq1.1.2} will give you the negative of the volume of the parallelepiped. So taking the absolute value of the scalar triple product for any order of the three adjacent sides will \(\textit{always}\) give the volume:

    Theorem 1.15

    If vectors \(\textbf{u}, \textbf{v}, \textbf{w}\) in \(\mathbb{R}^{3}\) represent any three adjacent sides of a parallelepiped, then the volume of the parallelepiped is \(|\textbf{u} \cdot (\textbf{v} \times \textbf{w})|\).
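Theorem 1.15 can be sketched directly; the helper names are ours:

```python
def cross(v, w):
    (v1, v2, v3), (w1, w2, w3) = v, w
    return (v2 * w3 - v3 * w2, v3 * w1 - v1 * w3, v1 * w2 - v2 * w1)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def parallelepiped_volume(u, v, w):
    """Volume as |u . (v x w)|, per Theorem 1.15; the absolute value
    makes the order of the three adjacent sides irrelevant."""
    return abs(dot(u, cross(v, w)))

print(parallelepiped_volume((2, 1, 3), (-1, 3, 2), (1, 1, -2)))  # → 28
```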

    Another type of triple product is the \(\textit{vector triple product}\) \(\textbf{u} \times (\textbf{v} \times \textbf{w})\). The proof of the following theorem is left as an exercise for the reader:

    Theorem 1.16

    For any vectors \(\textbf{u}, \textbf{v}, \textbf{w}\) in \(\mathbb{R}^{3}\),

    \[\textbf{u} \times (\textbf{v} \times \textbf{w}) = (\textbf{u} \cdot \textbf{w})\textbf{v} - (\textbf{u} \cdot \textbf{v})\textbf{w}\label{Eq1.13}\]
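Theorem 1.16 is easy to check numerically; a sketch with helpers of our own:

```python
def cross(v, w):
    (v1, v2, v3), (w1, w2, w3) = v, w
    return (v2 * w3 - v3 * w2, v3 * w1 - v1 * w3, v1 * w2 - v2 * w1)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

u, v, w = (1, 2, 4), (2, 2, 0), (1, 3, 0)
lhs = cross(u, cross(v, w))
# right side of Theorem 1.16: (u . w) v - (u . v) w, componentwise
rhs = tuple(dot(u, w) * vi - dot(u, v) * wi for vi, wi in zip(v, w))
print(lhs, rhs)  # → (8, -4, 0) (8, -4, 0)
```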

    An examination of the formula in Theorem 1.16 gives some idea of the geometry of the vector triple product. By the right side of Equation \ref{Eq1.13}, we see that \(\textbf{u} \times (\textbf{v} \times \textbf{w})\) is a scalar combination of \(\textbf{v}\) and \(\textbf{w}\), and hence lies in the plane containing \(\textbf{v}\) and \(\textbf{w}\) (i.e. \(\textbf{u} \times (\textbf{v} \times \textbf{w})\), \(\textbf{v}\) and \(\textbf{w}\) are \(\textbf{coplanar}\)). This makes sense since, by Theorem 1.11, \(\textbf{u} \times (\textbf{v} \times \textbf{w})\) is perpendicular to both \(\textbf{u}\) and \(\textbf{v} \times \textbf{w}\). In particular, being perpendicular to \(\textbf{v} \times \textbf{w}\) means that \(\textbf{u} \times (\textbf{v} \times \textbf{w})\) lies in the plane containing \(\textbf{v}\) and \(\textbf{w}\), since that plane is itself perpendicular to \(\textbf{v} \times \textbf{w}\). But then how is \(\textbf{u} \times (\textbf{v} \times \textbf{w})\) also perpendicular to \(\textbf{u}\), which could be any vector? The following example may help to see how this works.

    Example 1.13

    Find \(\textbf{u} \times (\textbf{v} \times \textbf{w})\) for \(\textbf{u} = (1, 2, 4)\), \(\textbf{v} = (2, 2, 0)\), \(\textbf{w} = (1, 3, 0)\).


    Since \(\textbf{u} \cdot \textbf{v} = 6\) and \(\textbf{u} \cdot \textbf{w} = 7\), then

\begin{align*}
\textbf{u} \times (\textbf{v} \times \textbf{w}) &= (\textbf{u} \cdot \textbf{w})\textbf{v} - (\textbf{u} \cdot \textbf{v})\textbf{w}\\[4pt]
&= 7\,(2, 2, 0) - 6\,(1, 3, 0) = (14, 14, 0) - (6, 18, 0)\\[4pt]
&= (8, -4, 0)
\end{align*}

    Note that \(\textbf{v}\) and \(\textbf{w}\) lie in the \(xy\)-plane, and that \(\textbf{u} \times (\textbf{v} \times \textbf{w})\) also lies in that plane. Also, \(\textbf{u} \times (\textbf{v} \times \textbf{w})\) is perpendicular to both \(\textbf{u}\) and \(\textbf{v} \times \textbf{w} = (0, 0, 4)\) (see Figure 1.4.8).

    Figure 1.4.8

    For vectors \(\textbf{v} = v_{1}\textbf{i} + v_{2}\textbf{j} + v_{3}\textbf{k}\) and \(\textbf{w} = w_{1}\textbf{i} + w_{2}\textbf{j} + w_{3}\textbf{k}\) in component form, the cross product is written as: \(\textbf{v} \times \textbf{w} = (v_{2}w_{3} - v_{3}w_{2})\textbf{i} + (v_{3}w_{1} - v_{1}w_{3})\textbf{j} + (v_{1}w_{2} - v_{2}w_{1})\textbf{k}\). It is often easier to use the component form for the cross product, because it can be represented as a \(\textit{determinant}\). We will not go too deeply into the theory of determinants; we will just cover what is essential for our purposes.

A \(2 \times 2\) \(\textbf{matrix}\) is an array of two rows and two columns of scalars, written as

\[\nonumber \begin{bmatrix}a & b\\[4pt]c & d\end{bmatrix} \quad\text{or}\quad \begin{pmatrix}a & b\\[4pt]c & d\end{pmatrix}\]

    where \(a, b, c, d\) are scalars. The \(\textbf{determinant}\) of such a matrix, written as

\[\nonumber \begin{vmatrix}a & b\\[4pt]c & d\end{vmatrix} \quad\text{or}\quad \det \begin{bmatrix}a & b\\[4pt]c & d\end{bmatrix},\]

    is the scalar defined by the following formula:

    \[\nonumber \begin{vmatrix}a & b\\[4pt]c & d\end{vmatrix}= ad - bc\]

    It may help to remember this formula as being the product of the scalars on the downward diagonal minus the product of the scalars on the upward diagonal.

    Example 1.14

    \[\nonumber \begin{vmatrix}1 & 2\\[4pt]3 & 4\end{vmatrix} = (1)(4) - (2)(3) = 4 - 6 = -2\]

    A \(3 \times 3\) matrix is an array of three rows and three columns of scalars, written as

\[\nonumber \begin{bmatrix}a_{1} & a_{2} & a_{3}\\[4pt]
b_{1} & b_{2} & b_{3}\\[4pt]
c_{1} & c_{2} & c_{3}\end{bmatrix} \quad\text{or}\quad
\begin{pmatrix}a_{1} & a_{2} & a_{3}\\[4pt]
b_{1} & b_{2} & b_{3}\\[4pt]
c_{1} & c_{2} & c_{3}\end{pmatrix}\]

and its determinant is given by the formula:

\[\begin{vmatrix}a_{1} & a_{2} & a_{3}\\[4pt]
b_{1} & b_{2} & b_{3}\\[4pt]
c_{1} & c_{2} & c_{3}\end{vmatrix}
= a_{1} \begin{vmatrix}b_{2} & b_{3} \\[4pt] c_{2} & c_{3} \end{vmatrix} \;-\;
a_{2} \begin{vmatrix} b_{1} & b_{3} \\[4pt] c_{1} & c_{3} \end{vmatrix} \;+\;
a_{3} \begin{vmatrix} b_{1} & b_{2} \\[4pt] c_{1} & c_{2} \end{vmatrix}\]

    One way to remember the above formula is the following: multiply each scalar in the first row by the determinant of the \(2 \times 2\) matrix that remains after removing the row and column that contain that scalar, then sum those products up, putting alternating plus and minus signs in front of each (starting with a plus).
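The cofactor expansion just described can be sketched in a few lines (the function names are ours):

```python
def det2(m):
    """Determinant of a 2x2 matrix [[a, b], [c, d]]: ad - bc."""
    (a, b), (c, d) = m
    return a * d - b * c

def det3(m):
    """Expansion along the first row with alternating plus/minus signs."""
    (a1, a2, a3), (b1, b2, b3), (c1, c2, c3) = m
    return (a1 * det2([[b2, b3], [c2, c3]])
          - a2 * det2([[b1, b3], [c1, c3]])
          + a3 * det2([[b1, b2], [c1, c2]]))

print(det2([[1, 2], [3, 4]]))                    # → -2 (Example 1.14)
print(det3([[1, 0, 2], [4, -1, 3], [1, 0, 2]]))  # → 0 (Example 1.15)
```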

    Example 1.15

    \[\nonumber \left|\begin{array}{rrr}1 & 0 & 2\\[4pt]4 & -1 & 3\\[4pt]1 & 0 & 2\end{array}\right|
    = 1 \left|\begin{array}{rr} -1 & 3 \\[4pt] 0 & 2 \end{array}\right| \;-\;
    0 \left|\begin{array}{rr} 4 & 3 \\[4pt] 1 & 2 \end{array}\right| \;+\;
    2 \left|\begin{array}{rr} 4 & -1 \\[4pt] 1 & 0 \end{array}\right|
    = 1(-2 - 0) - 0(8 - 3) + 2(0 + 1) = 0\]

    We defined the determinant as a scalar, derived from algebraic operations on scalar entries in a matrix. However, if we put three \(\textit{vectors}\) in the first row of a \(3 \times 3\) matrix, then the definition still makes sense, since we would be performing scalar multiplication on those three vectors (they would be multiplied by the \(2 \times 2\) scalar determinants as before). This gives us a determinant that is now a vector, and lets us write the cross product of \(\textbf{v} = v_{1}\textbf{i} + v_{2}\textbf{j} + v_{3}\textbf{k}\) and \(\textbf{w} = w_{1}\textbf{i} + w_{2}\textbf{j} + w_{3}\textbf{k}\) as a determinant:

\begin{align*}
\textbf{v} \times \textbf{w} =
\begin{vmatrix}\textbf{i} & \textbf{j} & \textbf{k} \\[4pt] v_{1} & v_{2} & v_{3} \\[4pt]
w_{1} & w_{2} & w_{3}
\end{vmatrix} &= \begin{vmatrix} v_{2} & v_{3} \\[4pt] w_{2} & w_{3} \end{vmatrix} \textbf{i} \;-\;
\begin{vmatrix} v_{1} & v_{3} \\[4pt] w_{1} & w_{3} \end{vmatrix} \textbf{j} \;+\;
\begin{vmatrix} v_{1} & v_{2} \\[4pt] w_{1} & w_{2} \end{vmatrix} \textbf{k}\\[4pt]
&= (v_{2}w_{3} - v_{3}w_{2})\textbf{i} + (v_{3}w_{1} - v_{1}w_{3})\textbf{j} +
(v_{1}w_{2} - v_{2}w_{1})\textbf{k}
\end{align*}

    Example 1.16

    Let \(\textbf{v} = 4\,\textbf{i} - \textbf{j} + 3\,\textbf{k}\) and \(\textbf{w} = \textbf{i} + 2\,\textbf{k}\). Then

\[\nonumber \textbf{v} \times \textbf{w} =
\left|\begin{array}{rrr}\textbf{i} & \textbf{j} & \textbf{k}\\[4pt]
4 & -1 & 3\\[4pt]
1 & 0 & 2\end{array}\right|
= \left|\begin{array}{rr} -1 & 3 \\[4pt] 0 & 2 \end{array}\right| \textbf{i} \;-\;
\left|\begin{array}{rr} 4 & 3 \\[4pt] 1 & 2 \end{array}\right| \textbf{j} \;+\;
\left|\begin{array}{rr} 4 & -1 \\[4pt] 1 & 0 \end{array}\right| \textbf{k}
= -2\,\textbf{i} - 5\,\textbf{j} + \textbf{k}\]

    The scalar triple product can also be written as a determinant. In fact, by Example 1.12, the following theorem provides an alternate definition of the determinant of a \(3 \times 3\) matrix as the volume of a parallelepiped whose adjacent sides are the rows of the matrix and form a right-handed system (a left-handed system would give the negative volume).

    Theorem 1.17


    For any vectors \(\textbf{u} = (u_{1}, u_{2}, u_{3})\), \(\textbf{v} = (v_{1}, v_{2}, v_{3})\), \(\textbf{w} = (w_{1}, w_{2}, w_{3})\) in \(\mathbb{R}^{3}\):
\[\textbf{u} \cdot (\textbf{v} \times \textbf{w}) =
\begin{vmatrix}u_{1} & u_{2} & u_{3}\\[4pt]
v_{1} & v_{2} & v_{3}\\[4pt]
w_{1} & w_{2} & w_{3}\end{vmatrix}\]

    Example 1.17

    Find the volume of the parallelepiped with adjacent sides \(\textbf{u} = (2, 1, 3)\), \(\textbf{v} = (-1, 3, 2)\), \(\textbf{w} = (1, 1, -2)\) (see Figure 1.4.9).

    Figure 1.4.9 \(P\)


    By Theorem 1.15, the volume \(\text{vol}(P)\) of the parallelepiped \(P\) is the absolute value of the scalar triple product of the three adjacent sides (in any order). By Theorem 1.17,

\begin{align*}
\textbf{u} \cdot (\textbf{v} \times \textbf{w}) &=
\left|\begin{array}{rrr}2 & 1 & 3\\[4pt]
-1 & 3 & 2\\[4pt]
1 & 1 & -2\end{array}\right|\\[4pt]
&= 2 \left|\begin{array}{rr} 3 & 2 \\[4pt] 1 & -2 \end{array}\right| \;-\;
1 \left|\begin{array}{rr} -1 & 2 \\[4pt] 1 & -2 \end{array}\right| \;+\;
3 \left|\begin{array}{rr} -1 & 3 \\[4pt] 1 & 1 \end{array}\right|\\[4pt]
&= 2(-8) - 1(0) + 3(-4) = -28 \text{ , so}\\[4pt]
\text{vol}(P) &= |-28| = 28.
\end{align*}

    Interchanging the dot and cross products can be useful in proving vector identities:

    Example 1.18

Prove: \((\textbf{u} \times \textbf{v}) \cdot (\textbf{w} \times \textbf{z}) =
\begin{vmatrix}\textbf{u} \cdot \textbf{w} & \textbf{u} \cdot \textbf{z}\\[4pt]
\textbf{v} \cdot \textbf{w} & \textbf{v} \cdot \textbf{z}\end{vmatrix}\) for all vectors \(\textbf{u}, \textbf{v}, \textbf{w}, \textbf{z}\) in \(\mathbb{R}^{3}\).


    Let \(\textbf{x} = \textbf{u} \times \textbf{v}\). Then

\begin{align*}
(\textbf{u} \times \textbf{v}) \cdot (\textbf{w} \times \textbf{z}) &= \textbf{x} \cdot (\textbf{w} \times \textbf{z})\\[4pt]
&= \textbf{w} \cdot (\textbf{z} \times \textbf{x}) \quad\text{(by Equation \ref{Eq1.1.2})}\\[4pt]
&= \textbf{w} \cdot (\textbf{z} \times (\textbf{u} \times \textbf{v}))\\[4pt]
&= \textbf{w} \cdot ((\textbf{z} \cdot \textbf{v})\textbf{u} - (\textbf{z} \cdot \textbf{u})\textbf{v}) \quad\text{(by Theorem 1.16)}\\[4pt]
&= (\textbf{z} \cdot \textbf{v})(\textbf{w} \cdot \textbf{u}) - (\textbf{z} \cdot \textbf{u})(\textbf{w} \cdot \textbf{v})\\[4pt]
&= (\textbf{u} \cdot \textbf{w})(\textbf{v} \cdot \textbf{z}) - (\textbf{u} \cdot \textbf{z})(\textbf{v} \cdot \textbf{w}) \quad\text{(by commutativity of the dot product)}\\[4pt]
&= \begin{vmatrix}
\textbf{u} \cdot \textbf{w} & \textbf{u} \cdot \textbf{z}\\[4pt]
\textbf{v} \cdot \textbf{w} & \textbf{v} \cdot \textbf{z}\end{vmatrix}
\end{align*}
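The identity just proved can be sanity-checked on concrete vectors; a sketch with helper names of our own:

```python
def cross(v, w):
    (v1, v2, v3), (w1, w2, w3) = v, w
    return (v2 * w3 - v3 * w2, v3 * w1 - v1 * w3, v1 * w2 - v2 * w1)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

u, v, w, z = (1, 2, 4), (2, 2, 0), (1, 3, 0), (4, -1, 3)
lhs = dot(cross(u, v), cross(w, z))
# the 2x2 determinant of dot products from Example 1.18
rhs = dot(u, w) * dot(v, z) - dot(u, z) * dot(v, w)
print(lhs == rhs)  # → True
```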
