8.4: Properties of the Determinant


    We now know that the determinant of a matrix is non-zero if and only if that matrix is invertible. We also know that the determinant is a \(\textit{multiplicative}\) function, in the sense that \(\det (MN)=\det M \det N\). Now we will devise some methods for calculating the determinant.

    Recall that:
    \[\det M = \sum_{\sigma} \textit{sgn}(\sigma) m^{1}_{\sigma(1)}m^{2}_{\sigma(2)}\cdots m^{n}_{\sigma(n)}.\]
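
    If you would like to experiment with this formula on a computer, here is a minimal Python sketch (not part of the text; the function names and the 0-based indices are our own) that evaluates the sum literally over all \(n!\) permutations:

    from itertools import permutations

    def sign(perm):
        # The sign of a permutation is (-1) raised to its number of inversions.
        inversions = sum(1 for i in range(len(perm))
                         for j in range(i + 1, len(perm))
                         if perm[i] > perm[j])
        return -1 if inversions % 2 else 1

    def leibniz_det(M):
        # det M = sum over permutations sigma of
        # sgn(sigma) * m^1_{sigma(1)} * ... * m^n_{sigma(n)},
        # with 0-based indices in place of the text's 1-based ones.
        n = len(M)
        total = 0
        for perm in permutations(range(n)):
            term = sign(perm)
            for row, col in enumerate(perm):
                term *= M[row][col]
            total += term
        return total

    print(leibniz_det([[1, 2], [3, 4]]))  # prints -2, matching ad - bc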

    A \(\textit{minor}\) of an \(n\times n\) matrix \(M\) is the determinant of any square matrix obtained from \(M\) by deleting one row and one column. In particular, any entry \(m^{i}_{j}\) of a square matrix \(M\) is associated to a minor obtained by deleting the \(i\)th row and \(j\)th column of \(M\).

    It is possible to write the determinant of a matrix in terms of its minors as follows:

    \begin{eqnarray*}
    \det M &=& \sum_{\sigma} \textit{sgn}(\sigma)\, m^{1}_{\sigma(1)}m^{2}_{\sigma(2)}\cdots m^{n}_{\sigma(n)} \\
    &=& m^{1}_{1}\, \sum_{\not{\sigma}^{1}} \textit{sgn}(\not{\sigma}^{1})\, m^{2}_{\not{\sigma}^{1}(2)}\cdots m^{n}_{\not{\sigma}^{1}(n)} \\
    & +& m^{1}_{2}\, \sum_{\not{\sigma}^{2}} \textit{sgn}(\not{\sigma}^{2})\, m^{2}_{\not{\sigma}^{2}(1)}
    m^{3}_{\not{\sigma}^{2}(3)}\cdots m^{n}_{\not{\sigma}^{2}(n)} \\
    & +& m^{1}_{3}\, \sum_{\not{\sigma}^{3}} \textit{sgn}(\not{\sigma}^{3})\, m^{2}_{\not{\sigma}^{3}(1)}m^{3}_{\not{\sigma}^{3}(2)}m^{4}_{\not{\sigma}^{3}(4)}\cdots m^{n}_{\not{\sigma}^{3}(n)}\\ &+& \cdots
    \end{eqnarray*}

    Here the symbol \(\not{\sigma}^{k}\) refers to the permutation \(\sigma\) with the input \(k\) removed. The summand on the \(j\)th line of the above formula looks like the determinant of the minor obtained by removing the first row and the \(j\)th column of \(M\). However, we still need to replace the sum over \(\not{\sigma}^{j}\) by a sum over permutations of the column numbers of the entries of this minor. This costs a minus sign whenever \(j-1\) is odd. In other words, to expand by minors we pick an entry \(m^{1}_{j}\) of the first row, then add \((-1)^{j-1}\) times the determinant of the matrix with row \(1\) and column \(j\) deleted. An example will probably help:

    Let's compute the determinant of

    \[M=\begin{pmatrix}
    1 & 2 & 3 \\
    4 & 5 & 6 \\
    7 & 8 & 9 \\
    \end{pmatrix}\]

    using expansion by minors:

    \begin{eqnarray*}
    \det M & = & 1\det \begin{pmatrix}
    5 & 6 \\
    8 & 9 \\
    \end{pmatrix}
    -2 \det \begin{pmatrix}
    4 & 6 \\
    7 & 9 \\
    \end{pmatrix}
    +3 \det \begin{pmatrix}
    4 & 5 \\
    7 & 8 \\
    \end{pmatrix} \\
    & = & 1(5\cdot 9- 8\cdot 6) -2 (4\cdot 9- 7\cdot 6) + 3 (4\cdot 8- 7\cdot 5) \\
    & = & 0 \\
    \end{eqnarray*}

    Here, \(M^{-1}\) does not exist because \(\det M=0.\)
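
    The same expansion translates directly into a short recursive routine. This sketch (helper names and 0-based indices are our own) reproduces \(\det M = 0\) for the matrix above; like the permutation sum, it does on the order of \(n!\) work, so it is only practical for small matrices:

    def minor(M, i, j):
        # The submatrix of M with row i and column j deleted (0-based).
        return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

    def det(M):
        # Expansion by minors along the first row:
        # det M = sum over j of (-1)**j * M[0][j] * det(minor(M, 0, j)).
        if len(M) == 1:
            return M[0][0]
        return sum((-1) ** j * M[0][j] * det(minor(M, 0, j))
                   for j in range(len(M)))

    print(det([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # prints 0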

    Sometimes the entries of a matrix allow us to simplify the calculation of the determinant. Take

    \(N= \begin{pmatrix}
    1 & 2 & 3 \\
    4 & 0 & 0 \\
    7 & 8 & 9 \\
    \end{pmatrix}\).

    Notice that the second row contains two zeros; we can therefore switch the first and second rows of \(N\) before expanding by minors to get:

    \begin{eqnarray*}
    \det \begin{pmatrix}
    1 & 2 & 3 \\
    4 & 0 & 0 \\
    7 & 8 & 9 \\
    \end{pmatrix} & = & -\det \begin{pmatrix}
    4 & 0 & 0 \\
    1 & 2 & 3 \\
    7 & 8 & 9 \\
    \end{pmatrix}\\
    &=& -4 \det \begin{pmatrix}
    2 & 3 \\
    8 & 9 \\
    \end{pmatrix} \\
    &=& 24
    \end{eqnarray*}

    Since we know how the determinant of a matrix changes when you perform row operations, it is often very beneficial to perform row
    operations before computing the determinant by brute force.

    Example \(\PageIndex{1}\):

    \begin{eqnarray*}
    \det\begin{pmatrix}
    1 & 2 & 3 \\
    4 & 5 & 6 \\
    7 & 8 & 9 \\
    \end{pmatrix}
    =
    \det\begin{pmatrix}
    1 & 2 & 3 \\
    3 & 3 & 3 \\
    6 & 6 & 6 \\
    \end{pmatrix}
    =
    \det\begin{pmatrix}
    1 & 2 & 3 \\
    3 & 3 & 3 \\
    0 & 0 & 0 \\
    \end{pmatrix}=0\, .
    \end{eqnarray*}

    Try to determine which row operations we made at each step of this computation.
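
    This strategy is also easy to carry out by machine. Here is a sketch (the function name and the use of exact fractions are our own choices): reduce to an upper triangular matrix by swapping rows and adding multiples of one row to another, track the sign flip from each swap, and multiply the pivots:

    from fractions import Fraction

    def det_by_row_ops(M):
        # Adding a multiple of one row to another leaves det unchanged;
        # each row swap flips its sign; det of a triangular matrix is
        # the product of its diagonal entries.
        A = [[Fraction(x) for x in row] for row in M]
        n, sign = len(A), 1
        for col in range(n):
            pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
            if pivot is None:
                return Fraction(0)  # no pivot in this column: det = 0
            if pivot != col:
                A[col], A[pivot] = A[pivot], A[col]
                sign = -sign
            for r in range(col + 1, n):
                factor = A[r][col] / A[col][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[col])]
        result = Fraction(sign)
        for i in range(n):
            result *= A[i][i]
        return result

    print(det_by_row_ops([[1, 2, 3], [4, 0, 0], [7, 8, 9]]))  # prints 24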

    You might suspect that determinants have similar properties with respect to columns as they do with respect to rows:

    Theorem

    For any square matrix \(M\), we have:
    \[\det M^{T} = \det M\, .\]

    Proof

    By definition,

    \[\det M = \sum_{\sigma} \textit{sgn}(\sigma) m^{1}_{\sigma(1)}m^{2}_{\sigma(2)}\cdots m^{n}_{\sigma(n)}.\]

    For any permutation \(\sigma\), there is a unique inverse permutation \(\sigma^{-1}\) that undoes \(\sigma\). If \(\sigma\) sends \(i\rightarrow j\), then \(\sigma^{-1}\) sends \(j\rightarrow i\). In the two-line notation for a permutation, this corresponds to just flipping the permutation over. For example, if

    \(\sigma=\begin{bmatrix}
    1 & 2 & 3 \\
    2 & 3 & 1
    \end{bmatrix}\)

    then we can find \(\sigma^{-1}\) by flipping the permutation and then putting the columns in order:

    \[\sigma^{-1}=\begin{bmatrix}
    2 & 3 & 1 \\
    1 & 2 & 3
    \end{bmatrix}=\begin{bmatrix}
    1 & 2 & 3 \\
    3 & 1 & 2
    \end{bmatrix}\, .\]

    Since any permutation can be built up by transpositions, one can also find the inverse of a permutation \(\sigma\) by undoing each of the transpositions used to build up \(\sigma\); this shows that one can use the same number of transpositions to build \(\sigma\) and \(\sigma^{-1}\). In particular, \(\textit{sgn}(\sigma) = \textit{sgn}(\sigma^{-1})\).

    Then we can write out the above in formulas as follows:

    \begin{eqnarray*}
    \det M &=& \sum_{\sigma} \textit{sgn}(\sigma) m^{1}_{\sigma(1)}m^{2}_{\sigma(2)}\cdots m^{n}_{\sigma(n)} \\
    &=& \sum_{\sigma} \textit{sgn}(\sigma) m_{1}^{\sigma^{-1}(1)}m_{2}^{\sigma^{-1}(2)}\cdots m_{n}^{\sigma^{-1}(n)} \\
    &=& \sum_{\sigma} \textit{sgn}(\sigma^{-1}) m_{1}^{\sigma^{-1}(1)}m_{2}^{\sigma^{-1}(2)}\cdots m_{n}^{\sigma^{-1}(n)} \\
    &=& \sum_{\sigma} \textit{sgn}(\sigma) m_{1}^{\sigma(1)}m_{2}^{\sigma(2)}\cdots m_{n}^{\sigma(n)} \\
    &=& \det M^{T}.
    \end{eqnarray*}

    The second-to-last equality is due to the existence of a unique inverse permutation: summing over permutations is the same as summing over all inverses of permutations. The final equality is by the definition of the transpose.
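
    Both facts used in this proof can be checked numerically. The sketch below (the helper functions are ours, and numpy is assumed only for its determinant routine) verifies that every permutation of four objects has the same sign as its inverse, and that \(\det M = \det M^{T}\) on a sample matrix:

    from itertools import permutations
    import numpy as np

    def sign(perm):
        # (-1) raised to the number of inversions of the permutation.
        inv = sum(perm[i] > perm[j]
                  for i in range(len(perm)) for j in range(i + 1, len(perm)))
        return -1 if inv % 2 else 1

    def inverse(perm):
        # If perm sends i to perm[i], its inverse sends perm[i] back to i.
        out = [0] * len(perm)
        for i, j in enumerate(perm):
            out[j] = i
        return tuple(out)

    # sgn(sigma) = sgn(sigma^{-1}) for every permutation of four objects:
    assert all(sign(p) == sign(inverse(p)) for p in permutations(range(4)))

    # det M = det M^T on a sample matrix (up to floating-point rounding):
    M = np.array([[3., -1., -1.], [1., 2., 0.], [0., 1., 1.]])
    print(np.linalg.det(M), np.linalg.det(M.T))  # both approximately 6.0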


    Example \(\PageIndex{2}\):

    Because of this theorem, we see that expansion by minors also works over columns. Let
    \[M=\begin{pmatrix}
    1 & 2 & 3 \\
    0 & 5 & 6 \\
    0 & 8 & 9 \\
    \end{pmatrix}\, .\]
    Then
    \[\det M = \det M^{T} = 1\det \begin{pmatrix}
    5 & 8 \\
    6 & 9 \\
    \end{pmatrix}=-3\, .\]

    Determinant of the Inverse

    Let \(M\) and \(N\) be \(n\times n\) matrices. We previously showed that

    \[\det (MN)=\det M \det N \text{, and } \det I=1.\]

    Then \(1 = \det I = \det (MM^{-1}) = \det M \det M^{-1}\). As such we have:

    Theorem

    \[\det M^{-1} = \frac{1}{\det M}\]
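
    As a quick sanity check, a couple of lines of numpy (the sample matrix is our own) confirm this numerically:

    import numpy as np

    # det(M^{-1}) = 1/det(M), on a sample 2x2 matrix with det = -2.
    M = np.array([[1., 2.], [3., 4.]])
    print(np.linalg.det(np.linalg.inv(M)))  # approximately -0.5
    print(1 / np.linalg.det(M))             # approximately -0.5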


    Adjoint of a Matrix

    Recall that for a \(2\times 2\) matrix

    \[\begin{pmatrix}d & -b \\ -c & a\end{pmatrix}\begin{pmatrix}a & b \\ c & d\end{pmatrix}
    =\det \begin{pmatrix}a & b \\ c & d\end{pmatrix}\, I\, .\]


    Or in a more careful notation: if

    \[M=\begin{pmatrix}
    m^{1}_{1} & m^{1}_{2} \\
    m^{2}_{1} & m^{2}_{2} \\
    \end{pmatrix}\, ,\]

    then

    \[M^{-1}=\frac{1}{m^{1}_{1}m^{2}_{2}-m^{1}_{2}m^{2}_{1}}\begin{pmatrix}
    m^{2}_{2} & -m^{1}_{2} \\
    -m^{2}_{1} & m^{1}_{1} \\
    \end{pmatrix}\, ,\]

    so long as \(\det M=m^{1}_{1}m^{2}_{2}-m^{1}_{2}m^{2}_{1}\neq 0\). The matrix

    \(\begin{pmatrix}
    m^{2}_{2} & -m^{1}_{2} \\
    -m^{2}_{1} & m^{1}_{1} \\
    \end{pmatrix}\)

    that appears above is a special matrix, called the \(\textit{adjoint}\) of \(M\). Let's define the adjoint for an \(n \times n\) matrix.

    The \(\textit{cofactor}\) of \(M\) corresponding to the entry \(m^{i}_{j}\) of \(M\) is the product of the minor associated to \(m^{i}_{j}\) and \((-1)^{i+j}\). This is written cofactor\((m^{i}_{j})\).

    Definition

    For \(M=(m^{i}_{j})\) a square matrix, the \(\textit{adjoint matrix}\) \(adj M\) is given by:

    \[adj M = (cofactor(m^{i}_{j}))^{T}\]
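
    This definition translates line by line into code. The sketch below (helper names and 0-based indices are ours) builds the matrix of cofactors and transposes it; its output matches the adjoint of the matrix in the example that follows:

    def minor(M, i, j):
        # The submatrix of M with row i and column j deleted (0-based).
        return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

    def det(M):
        # Determinant by expansion along the first row.
        if len(M) == 1:
            return M[0][0]
        return sum((-1) ** j * M[0][j] * det(minor(M, 0, j))
                   for j in range(len(M)))

    def adj(M):
        # adj M = (matrix of cofactors)^T, where the (i, j) cofactor
        # is (-1)**(i + j) times the minor at (i, j).
        n = len(M)
        cof = [[(-1) ** (i + j) * det(minor(M, i, j)) for j in range(n)]
               for i in range(n)]
        return [list(row) for row in zip(*cof)]  # transpose

    print(adj([[3, -1, -1], [1, 2, 0], [0, 1, 1]]))
    # prints [[2, 0, 2], [-1, 3, -1], [1, -3, 7]]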

    Example \(\PageIndex{3}\):

    \[
    adj \begin{pmatrix}
    3 & -1 & -1 \\
    1 & 2 & 0 \\
    0 & 1 & 1 \\
    \end{pmatrix}
    =
    \begin{pmatrix}
    {\det \begin{pmatrix}
    2 & 0 \\
    1 & 1
    \end{pmatrix}}
    & {-\det \begin{pmatrix}
    1 & 0 \\
    0 & 1
    \end{pmatrix}}
    &{ \det \begin{pmatrix}
    1 & 2 \\
    0 & 1
    \end{pmatrix}}
    \\
    -\det \begin{pmatrix}
    -1 & -1 \\
    1 & 1
    \end{pmatrix}
    & \det \begin{pmatrix}
    3 & -1 \\
    0 & 1
    \end{pmatrix}
    & -\det \begin{pmatrix}
    3 & -1 \\
    0 & 1
    \end{pmatrix}
    \\
    \det \begin{pmatrix}
    -1 & -1 \\
    2 & 0
    \end{pmatrix}
    & -\det \begin{pmatrix}
    3 & -1 \\
    1 & 0
    \end{pmatrix}
    & \det \begin{pmatrix}
    3 & -1 \\
    1 & 2
    \end{pmatrix}
    \\
    \end{pmatrix}^{T}
    \]

    Let's multiply \(M\, adj M\). For any matrix \(N\), the \(i,j\) entry of \(MN\) is given by taking the dot product of the \(i\)th row of \(M\) and the \(j\)th column of \(N\). Notice that the dot product of the \(i\)th row of \(M\) and the \(i\)th column of \(adj M\) is just the expansion by minors of \(\det M\) along the \(i\)th row. Further, the dot product of the \(i\)th row of \(M\) and the \(j\)th column of \(adj M\) with \(j\neq i\) is the same as expanding \(M\) by minors, but with the \(j\)th row replaced by the \(i\)th row. Since the determinant of any matrix with a repeated row is zero, these dot products are zero as well.

    We know that the \(i,j\) entry of the product of two matrices is the dot product of the \(i\)th row of the first by the \(j\)th column of the second. Then:

    \[M adj M = (\det M) I\]

    Thus, when \(\det M\neq 0\), the adjoint gives an explicit formula for \(M^{-1}\).

    Theorem

    For \(M\) a square matrix with \(\det M\neq 0\) (equivalently, if \(M\) is invertible), then

    \[M^{-1}=\frac{1}{\det M}adj M\]



    Continuing with the previous example,

    \[
    adj \begin{pmatrix}
    3 & -1 & -1 \\
    1 & 2 & 0 \\
    0 & 1 & 1 \\
    \end{pmatrix} = \begin{pmatrix}
    2 & 0 & 2 \\
    -1 & 3 & -1 \\
    1 & -3 & 7 \\
    \end{pmatrix}.
    \]

    Now, multiply:

    \begin{eqnarray*}
    \begin{pmatrix}
    3 & -1 & -1 \\
    1 & 2 & 0 \\
    0 & 1 & 1 \\
    \end{pmatrix}
    \begin{pmatrix}
    2 & 0 & 2 \\
    -1 & 3 & -1 \\
    1 & -3 & 7 \\
    \end{pmatrix}
    &=&
    \begin{pmatrix}
    6 & 0 & 0 \\
    0 & 6 & 0 \\
    0 & 0 & 6 \\
    \end{pmatrix} \\
    \Rightarrow \begin{pmatrix}
    3 & -1 & -1 \\
    1 & 2 & 0 \\
    0 & 1 & 1 \\
    \end{pmatrix}^{-1} & = & \frac{1}{6}\begin{pmatrix}
    2 & 0 & 2 \\
    -1 & 3 & -1 \\
    1 & -3 & 7 \\
    \end{pmatrix}
    \end{eqnarray*}

    This process for finding the inverse matrix is sometimes called \(\textit{Cramer's Rule}\).
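
    The whole recipe fits in one self-contained sketch (helper names are ours; exact fractions avoid rounding error). Applied to the matrix above, it reproduces the inverse just computed:

    from fractions import Fraction

    def minor(M, i, j):
        # The submatrix of M with row i and column j deleted (0-based).
        return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

    def det(M):
        # Determinant by expansion along the first row.
        if len(M) == 1:
            return M[0][0]
        return sum((-1) ** j * M[0][j] * det(minor(M, 0, j))
                   for j in range(len(M)))

    def inverse(M):
        # M^{-1} = (1/det M) adj M, defined whenever det M != 0.
        d = det(M)
        if d == 0:
            raise ValueError("matrix is not invertible")
        n = len(M)
        cof = [[(-1) ** (i + j) * det(minor(M, i, j)) for j in range(n)]
               for i in range(n)]
        # Entry (i, j) of the inverse is cofactor (j, i) divided by det M.
        return [[Fraction(cof[j][i], d) for j in range(n)] for i in range(n)]

    for row in inverse([[3, -1, -1], [1, 2, 0], [0, 1, 1]]):
        print([str(x) for x in row])
    # ['1/3', '0', '1/3']
    # ['-1/6', '1/2', '-1/6']
    # ['1/6', '-1/2', '7/6']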

    Application: Volume of a Parallelepiped

    Given three vectors \(u,v,w\) in \(\Re^{3}\), the parallelepiped determined by the three vectors is the "squished" box whose edges are parallel to \(u, v\), and \(w\) as depicted in the figure below.

    From calculus, we know that the volume of this object is \(|u\cdot (v\times w)|\). This triple product is exactly the expansion by minors, along the first column, of the determinant of the matrix whose columns are \(u, v\), and \(w\). Then:

    \[Volume=\big|\det \begin{pmatrix}u & v & w \end{pmatrix} \big|
    \]
    (Figure: the parallelepiped with edges parallel to \(u\), \(v\), and \(w\).)
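
    For a quick numerical illustration (the sample vectors and the use of numpy are our own), both formulas give the same answer:

    import numpy as np

    # Volume of the parallelepiped spanned by three sample vectors.
    u = np.array([1., 0., 0.])
    v = np.array([1., 1., 0.])
    w = np.array([1., 1., 1.])

    triple = abs(np.dot(u, np.cross(v, w)))                  # |u . (v x w)|
    volume = abs(np.linalg.det(np.column_stack((u, v, w))))  # |det(u v w)|
    print(triple, volume)  # both 1.0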


    Contributor

    This page titled 8.4: Properties of the Determinant is shared under a not declared license and was authored, remixed, and/or curated by David Cherney, Tom Denton, & Andrew Waldron.
