8.2: Determinants


    Now that we have developed the appropriate background material on permutations, we are finally ready to define the determinant and explore its many important properties.

Summations Indexed by the Set of All Permutations

    Given a positive integer \(n \in \mathbb{Z}_{+}\), we begin with the following definition:

Definition 8.2.1: Determinant

    Given a square matrix \(A = (a_{i j}) \in \mathbb{F}^{n \times n}\), the determinant of \(A\) is defined to be

    \begin{equation}
    \det(A) =
    \sum_{\pi \, \in \, \mathcal{S}_{n}}
    \mbox{sign}(\pi) a_{1,\pi(1)}a_{2,\pi(2)}\cdots a_{n,\pi(n)} \,,
    \label{eqn:det}
    \end{equation}

    where the sum is over all permutations of \(n\) elements (i.e., over the symmetric group).

Note that each summand of Equation \ref{eqn:det} is indexed by a permutation \(\pi \in \mathcal{S}_{n}\), which permutes the \(n\) columns of the \(n\times n\) matrix.

    Example 8.2.2

Compute the determinant of the \(2\times 2\) matrix \(A \in \mathbb{F}^{2 \times 2}\) given by

    \begin{equation*}
    A = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{bmatrix}.
    \end{equation*}

    Solution

To calculate the determinant of \(A\), we first list the two permutations in \(\mathcal{S}_{2}\):

    \begin{equation*}
    \mathrm{id} = \begin{pmatrix} 1 & 2 \\ 1 & 2 \end{pmatrix}
    \qquad \text{and} \qquad
    \sigma = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}.
    \end{equation*}

    The permutation \(\mathrm{id}\) has sign \(1\), and the permutation \(\sigma\) has sign \(-1\). Thus, the determinant of \(A\) is given by

    \begin{equation*}
    \det(A) = a_{11} a_{22} - a_{12} a_{21}.
    \end{equation*}

    Were one to attempt to compute determinants directly using Equation \ref{eqn:det}, then one would need to sum up \(n!\) terms, where each summand is itself a product of \(n\) factors. This is an incredibly inefficient method for finding determinants since \(n!\) increases in size very rapidly as \(n\) increases, e.g., \(10!=3628800\). Thus, even if you could compute one summand per second without stopping, it would still take you well over a month to compute the determinant of a \(10 \times 10\) matrix using Equation \ref{eqn:det}. Fortunately, there are properties of the determinant (as summarized in Section 8.2.2 below) that can be used to greatly reduce the size of such computations. These properties of the determinant follow from general properties that hold for any summation taken over the symmetric group, which are in turn themselves based upon properties of permutations and the fact that addition and multiplication are commutative operations in the field \(\mathbb{F}\) (which, as usual, we take to be either \(\mathbb{R}\) or \(\mathbb{C}\)).
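To make the cost concrete, the following is a minimal Python sketch of Equation \ref{eqn:det} (the function names are illustrative, not part of the text): it enumerates all \(n!\) permutations, computes each sign by counting inversions, and sums the signed products.

```python
from itertools import permutations
from math import prod

def sign(pi):
    """Sign of a permutation pi, given as a tuple of 0-based images:
    +1 if the number of inversions is even, -1 if it is odd."""
    inversions = sum(1 for i in range(len(pi))
                       for j in range(i + 1, len(pi))
                       if pi[i] > pi[j])
    return -1 if inversions % 2 else 1

def det_by_permutations(A):
    """det(A) evaluated directly from the permutation-sum formula.
    The cost grows like n * n!, so this is only feasible for small n."""
    n = len(A)
    return sum(sign(pi) * prod(A[i][pi[i]] for i in range(n))
               for pi in permutations(range(n)))

# For a 2x2 matrix this reproduces a11*a22 - a12*a21, as in Example 8.2.2:
assert det_by_permutations([[1, 2], [3, 4]]) == 1 * 4 - 2 * 3
```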

Let \(T:\mathcal{S}_{n} \to V\) be a function defined on the symmetric group \(\mathcal{S}_{n}\) that takes values in some vector space \(V\). For example, \(T(\pi)\) could be the term corresponding to the permutation \(\pi\) in Equation \ref{eqn:det}. Then, since the sum \[ \sum_{\pi \, \in \, \mathcal{S}_{n}} T(\pi) \] is finite, we are free to reorder the summands; the value of the sum is independent of the order in which the terms are added. Some commonly used reorderings of such sums are the following:

    \begin{eqnarray}
    \sum_{\pi \, \in \, \mathcal{S}_{n}} T(\pi)
    & = & \sum_{\pi \, \in \, \mathcal{S}_{n}} T(\sigma \circ \pi)
    \label{eqn:sum_composition}\\
    & = & \sum_{\pi \, \in \, \mathcal{S}_{n}} T(\pi \circ \sigma)
    \label{eqn:sum_product} \\
    & = & \sum_{\pi \, \in \, \mathcal{S}_{n}} T(\pi^{-1}),
    \label{eqn:sum_inverse}
    \end{eqnarray}

    where \(\sigma\) is a fixed permutation.

Equation \ref{eqn:sum_composition} follows from the fact that, if \(\pi\) runs through each permutation in \(\mathcal{S}_{n}\) exactly once, then \(\sigma \circ \pi\) likewise runs through each permutation exactly once, merely in a potentially different order. That is, the fixed permutation \(\sigma\) simply permutes the permutations that index the terms: there is a one-to-one correspondence between the permutations \(\pi\) and the compositions \(\sigma \circ \pi\).

Similar reasoning holds for Equations \ref{eqn:sum_product} and \ref{eqn:sum_inverse}.
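These reindexing identities are easy to check numerically. The following sketch (with illustrative helper names, representing permutations as tuples of 0-based images) verifies all three equalities for an arbitrary function \(T\) on \(\mathcal{S}_{3}\):

```python
from itertools import permutations

def compose(f, g):
    """(f o g) for permutations as tuples of 0-based images: (f o g)(i) = f(g(i))."""
    return tuple(f[g[i]] for i in range(len(g)))

def inverse(f):
    """The inverse permutation of f."""
    inv = [0] * len(f)
    for i, fi in enumerate(f):
        inv[fi] = i
    return tuple(inv)

S3 = list(permutations(range(3)))
T = {pi: sum(3**i * x for i, x in enumerate(pi)) for pi in S3}  # arbitrary T : S_3 -> Z
sigma = (1, 2, 0)                                               # a fixed permutation

total = sum(T[pi] for pi in S3)
assert total == sum(T[compose(sigma, pi)] for pi in S3)  # sum over sigma o pi
assert total == sum(T[compose(pi, sigma)] for pi in S3)  # sum over pi o sigma
assert total == sum(T[inverse(pi)] for pi in S3)         # sum over pi^{-1}
```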

    Properties of the Determinant

    We summarize some of the most basic properties of the determinant below. The proof of the following theorem uses properties of permutations, properties of the sign function on permutations, and properties of sums over the symmetric group as discussed in Section 8.2.1 above. In thinking about these properties, it is useful to keep in mind that, using Equation \ref{eqn:det}, the determinant of an \(n \times n\) matrix \(A\) is the sum over all possible ways of selecting \(n\) entries of \(A\), where exactly one element is selected from each row and from each column of \(A\).

    Theorem 8.2.3

    Let \(n \in \mathbb{Z}_{+}\) be a positive integer, and suppose that \(A = (a _{i j}) \in \mathbb{F}^{n\times n}\) is an \(n \times n\) matrix. Then:

    1. \(\det(0_{n \times n}) = 0\) and \(\det(I_{n}) = 1\), where \(0_{n \times n}\) denotes the \(n \times n\) zero matrix and \(I_{n}\) denotes the \(n \times n\) identity matrix.

    2. \(\det(A^T)=\det(A)\), where \(A^T\) denotes the transpose of \(A\).

3. denoting by \(A^{(\cdot, 1)}, A^{(\cdot, 2)}, \ldots, A^{(\cdot, n)} \in \mathbb{F}^n\) the columns of \(A\), \(\det(A)\) is a linear function of column \(A^{(\cdot, i)}\), for each \(i\) in \(\{1, \ldots, n\}\). In other words, if we denote \[ A = \left[ A^{(\cdot, 1)} \mid A^{(\cdot, 2)} \mid \cdots \mid A^{(\cdot, n)} \right] \] then, given any scalar \(z\) in \(\mathbb{F}\) and any vectors \(a_{1}, a_{2}, \ldots, a_{n}, b, c \in \mathbb{F}^n\),

    \[ \begin{eqnarray*} \det \left[ a_1 \mid \cdots \mid a_{i-1} \mid z a_{i} \mid \cdots \mid a_n \right] & = & z \det \left[ a_1 \mid \cdots \mid a_{i-1} \mid a_{i} \mid \cdots \mid a_n \right], \\ \det \left[ a_1 \mid \cdots \mid a_{i-1} \mid b+c \mid \cdots \mid a_n \right] & = & \det \left[ a_1 \mid \cdots \mid b \mid \cdots \mid a_n \right] + \det \left[ a_1 \mid \cdots \mid c \mid \cdots \mid a_n \right]. \end{eqnarray*} \]

    4. \(\det(A)\) is an antisymmetric function of the columns of \(A\). In other words, given any positive integers \(1\leq i<j\leq n\) and denoting \(A = \left[ A^{(\cdot, 1)} \mid A^{(\cdot, 2)} \mid \cdots \mid A^{(\cdot, n)} \right]\), \[ \det(A) = - \det \left[ A^{(\cdot, 1)} \mid \cdots \mid A^{(\cdot, j)} \mid \cdots \mid A^{(\cdot, i)} \mid \cdots \mid A^{(\cdot, n)} \right]. \]

    5. if \(A\) has two identical columns, \(\det(A) = 0\).

6. if \(A\) has a column of zeros, \(\det(A) = 0\).

    7. Properties 3-6 also hold when rows are used in place of columns.

    8. given any other matrix \(B \in \mathbb{F}^{n\times n}\), \[ \det(A B) =\det(A)\;\det(B).\]

    9. if \(A\) is either upper triangular or lower triangular, \[ \det(A) = a_{1 1} a_{2 2} \cdots a_{n n}.\]

    Proof

    First, note that Properties 1, 3, 6, and 9 follow directly from the sum given in Equation \ref{eqn:det}. Moreover, Property 5 follows directly from Property 4, and Property 7 follows directly from Property 2. Thus, we only need to prove Properties 2, 4, and 8.

    Proof of 2: Since the entries of \(A^T\) are obtained from those of \(A\) by interchanging the row and column indices, it follows that \(\det(A^T)\) is given by

    \[
    \det(A^T) = \sum_{\pi \, \in \, \mathcal{S}_{n}}
    \mbox{sign}(\pi)\; a_{\pi(1),1}a_{\pi(2),2}\cdots
    a_{\pi(n),n}\,.
    \]

    Using the commutativity of the product in \(\mathbb{F}\) and Equation (8.1.3), we see that

    \[
    \det(A^T) = \sum_{\pi \, \in \, \mathcal{S}_{n}}
    \mbox{sign}(\pi^{-1})\; a_{1,\pi^{-1}(1)}a_{2,\pi^{-1}(2)}
    \cdots a_{n,\pi^{-1}(n)}\,,
    \]

which equals \(\det(A)\) by Equation \ref{eqn:sum_inverse}.

Proof of 4: By Property 2, it suffices to prove that the determinant changes sign when two rows are interchanged, since interchanging the \(i^{\text{th}}\) and \(j^{\text{th}}\) columns of \(A\) amounts to interchanging the \(i^{\text{th}}\) and \(j^{\text{th}}\) rows of \(A^T\). So let \(t_{i j} \in \mathcal{S}_{n}\) denote the transposition that interchanges \(i\) and \(j\), and let \(B\) be the matrix obtained from \(A\) by interchanging the \(i^{\text{th}}\) and \(j^{\text{th}}\) rows, so that \(b_{k, \pi(k)} = a_{t_{i j}(k), \pi(k)}\) for each \(k\). Then note that

    \[ \det(B) = \sum_{\pi \, \in \, \mathcal{S}_{n}}
    \mbox{sign}(\pi)\; a_{1, \pi(1)}\cdots
    a_{j,\pi(i)}\cdots a_{i,\pi(j)}\cdots a_{n, \pi(n)}\,.
    \]

    Define \(\tilde{\pi}=\pi\circ t_{i j}\), and note that \(\pi=\tilde{\pi}\circ t_{i j}\). In particular, \(\pi(i)=\tilde{\pi}(j)\) and \(\pi(j)=\tilde{\pi}(i)\), from which
    \[ \det(B) = \sum_{\pi \, \in \, \mathcal{S}_{n}}
    \mbox{sign}(\tilde{\pi}\circ t_{i j})\; a_{1,
    \tilde{\pi}(1)} \cdots a_{i,\tilde{\pi}(i)}\cdots
    a_{j,\tilde{\pi}(j)}\cdots a_{n, \tilde{\pi}(n)}\,.
    \]

It follows from Equations (8.1.1) and (8.1.2) that \(\mbox{sign}(\tilde{\pi}\circ t_{i j}) = -\mbox{sign}(\tilde{\pi})\). Thus, using Equation \ref{eqn:sum_product}, we obtain \(\det(B) = -\det(A)\).

    Proof of 8: Using the standard expression for the matrix entries of the product \(AB\) in terms of the matrix entries of \(A=(a_{i j})\) and \(B=(b_{i j})\), we have that

    \begin{eqnarray*}
    \det (AB)
    & = &
    \sum_{\pi \, \in \, \mathcal{S}_{n}}\mbox{sign}(\pi) \sum_{k_1=1}^n \cdots \sum_{k_n=1}^n
    a_{1,k_1}b_{k_1,\pi(1)} \cdots a_{n,k_n}b_{k_n,\pi(n)}
    \\
    & = &
    \sum_{k_1=1}^n \cdots \sum_{k_n=1}^n a_{1,k_1} \cdots a_{n,k_n}
    \sum_{\pi \, \in \, \mathcal{S}_{n}} \mbox{sign}\;(\pi) b_{k_1,\pi(1)} \cdots b_{k_n,\pi(n)}.
    \end{eqnarray*}

Note that, for fixed \(k_1, \ldots, k_n \in \{1, \ldots, n\}\), the sum \(\sum_{\pi \, \in \, \mathcal{S}_{n}} \mbox{sign}(\pi)\, b_{k_1,\pi(1)}\cdots b_{k_n,\pi(n)}\) is the determinant of the matrix whose rows are rows \(k_1,\ldots, k_n\) of \(B\). By Properties 5 and 7, this expression vanishes unless the indices \(k_1, \ldots, k_n\) are pairwise distinct, and so the sum over all choices of \(k_1, \ldots, k_n\) can be restricted to those tuples of indices \(\sigma(1),\ldots , \sigma(n)\) given by a permutation \(\sigma\in\mathcal{S}_n\). In other words,

    \[
    \det (AB) = \sum_{\sigma \, \in \, \mathcal{S}_{n}} a_{1,\sigma(1)}\cdots
    a_{n,\sigma(n)}\sum_{\pi \, \in \, \mathcal{S}_{n}} \mbox{sign}(\pi)\;
    b_{\sigma(1),\pi(1)}\cdots b_{\sigma(n),\pi(n)}\,.
    \]

    Now, proceeding with the same arguments as in the proof of Property 4 but with the role of \(t_{i j}\) replaced by an arbitrary permutation \(\sigma\), we obtain

    \[
    \det (AB) = \sum_{\sigma \, \in \, \mathcal{S}_{n}}\mbox{sign}(\sigma)\;
    a_{1,\sigma(1)}\cdots a_{n,\sigma(n)}\sum_{\pi \, \in \, \mathcal{S}_{n}}
    \mbox{sign}(\pi\circ\sigma^{-1})\;
    b_{1,\pi\circ\sigma^{-1}(1)}\cdots
    b_{n,\pi\circ\sigma^{-1}(n)}\,.
    \]

Using Equation \ref{eqn:sum_product}, this last expression then becomes \(\det(A)\det(B)\).

\(\square\)

Note that Properties 3 and 4 of Theorem 8.2.3 effectively summarize how multiplication by an elementary matrix interacts with the determinant operation. Together with Property 9, these properties facilitate the numerical computation of determinants for very large matrices.
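As a quick numerical sanity check of Properties 2, 8, and 9, one can reuse the det_by_permutations sketch from above on small integer matrices (the matrices here are arbitrary choices):

```python
def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, -1, 3], [0, 4, 1], [5, 2, -2]]
B = [[1, 0, 2], [3, -1, 0], [0, 2, 1]]

assert det_by_permutations(transpose(A)) == det_by_permutations(A)  # Property 2
assert det_by_permutations(matmul(A, B)) == \
       det_by_permutations(A) * det_by_permutations(B)              # Property 8

U = [[3, 7, -4], [0, 2, 9], [0, 0, 5]]   # upper triangular
assert det_by_permutations(U) == 3 * 2 * 5                          # Property 9
```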

    Further Properties and Applications

    There are many applications of Theorem 8.2.3. We conclude these notes with a few consequences that are particularly useful when computing with matrices. In particular, we use the determinant to list several characterizations for matrix invertibility, and, as a corollary, give a method for using determinants to calculate eigenvalues. You should provide a proof of these results for your own benefit.

    Theorem 8.2.4

    Let \(n \in \mathbb{Z}_{+}\) and \(A \in \mathbb{F}^{n\times n}\). Then the following statements are equivalent:

    1. \(A\) is invertible.
    2. denoting \(x = \begin{bmatrix} x_{1} \\ \vdots \\ x_{n} \end{bmatrix}\), the matrix equation \(A x = 0\) has only the trivial solution \(x = 0\).
    3. denoting \(x = \begin{bmatrix} x_{1} \\ \vdots \\ x_{n} \end{bmatrix}\), the matrix equation \(A x = b\) has a solution for all \(b = \begin{bmatrix} b_{1} \\ \vdots \\ b_{n} \end{bmatrix} \in \mathbb{F}^{n}\).
    4. \(A\) can be factored into a product of elementary matrices.
    5. \(\det(A) \neq 0\).
    6. the rows (or columns) of \(A\) form a linearly independent set in \(\mathbb{F}^{n}\).
    7. zero is not an eigenvalue of \(A\).
    8. the linear operator \(T : \mathbb{F}^{n} \to \mathbb{F}^{n}\) defined by \(T(x) = A x\), for every \(x \in \mathbb{F}^{n}\), is bijective.

    Moreover, should \(A\) be invertible, then \[ \det(A^{-1}) = \frac{1}{\det(A)}.\]
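The last identity is easy to see numerically as well; here is a brief NumPy check (floating-point arithmetic, hence the tolerance):

```python
import numpy as np

A = np.array([[2.0, 1.0], [5.0, 3.0]])   # det(A) = 1, so A is invertible
# det(A^{-1}) * det(A) should equal 1 up to rounding error:
assert abs(np.linalg.det(np.linalg.inv(A)) * np.linalg.det(A) - 1.0) < 1e-12
```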

    Given a matrix \(A \in \mathbb{C}^{n\times n}\) and a complex number \(\lambda \in \mathbb{C}\), the expression

    \[ P(\lambda) = \det(A - \lambda I_{n}) \]

is called the characteristic polynomial of \(A\). Note that \(P(\lambda)\) is a basis-independent polynomial of degree \(n\) in \(\lambda\). Thus, as with the determinant, we can consider \(P(\lambda)\) to be associated with the linear map that has matrix \(A\) with respect to some basis. Since the eigenvalues of \(A\) are exactly those \(\lambda \in \mathbb{C}\) for which \(A - \lambda I\) is not invertible, the following is then an immediate corollary.

    Corollary 8.2.5

    The roots of the polynomial \(P(\lambda)=\det (A-\lambda I)\) are exactly the eigenvalues of \(A\).
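For small matrices the corollary can be checked directly; here is a NumPy sketch. (Note that np.poly returns the coefficients of \(\det(\lambda I - A)\), which differs from \(P(\lambda)\) only by a factor of \((-1)^{n}\) and so has the same roots.)

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
roots = np.sort(np.roots(np.poly(A)))        # roots of the characteristic polynomial
eigenvalues = np.sort(np.linalg.eigvals(A))  # eigenvalues computed directly
assert np.allclose(roots, eigenvalues)       # both yield {2, 5} for this matrix
```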

Computing Determinants with Cofactor Expansions

As noted in Section 8.2.1, it is generally impractical to compute determinants directly from Equation \ref{eqn:det}. In this section, we briefly describe the so-called cofactor expansions of a determinant. When properly applied, cofactor expansions are particularly useful for computing determinants by hand.

    Definition 8.2.6: Cofactors

Let \(n \in \mathbb{Z}_{+}\) and \(A \in \mathbb{F}^{n\times n}\). Then, for each \(i, j \in \{ 1, 2, \ldots, n \}\), the \(i\)-\(j\) minor of \(A\), denoted \(M_{i j}\), is defined to be the determinant of the matrix obtained by removing the \(i^{\text{th}}\) row and \(j^{\text{th}}\) column from \(A\). Moreover, the \(i\)-\(j\) cofactor of \(A\) is defined to be

    \[ A_{i j} = (-1)^{i + j}M_{i j}. \]

    Cofactors themselves, though, are not terribly useful unless put together in the right way.

Definition 8.2.7: Cofactor Expansion

    Let \(n \in \mathbb{Z}_{+}\) and \(A = (a_{i j}) \in \mathbb{F}^{n\times n}\). Then, for each \(i, j\) in \(\{ 1, 2, \ldots, n \}\), the \(i^{\text{th}}\) row (resp. \(j^{\text{th}}\) column) cofactor expansion of \(A\) is the sum \({\displaystyle \sum_{j = 1}^{n} a_{i j} A_{i j}}\) (resp. \({\displaystyle \sum_{i = 1}^{n} a_{i j} A_{i j}}\)).

    Theorem 8.2.8

Let \(n \in \mathbb{Z}_{+}\) and \(A \in \mathbb{F}^{n\times n}\). Then every row and column cofactor expansion of \(A\) is equal to the determinant of \(A\).

    Since the determinant of a matrix is equal to every row or column cofactor expansion, one can compute the determinant using a convenient choice of expansions until the calculation is reduced to one or more \(2 \times 2\) determinants. We close with an example.
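The first-row expansion translates directly into a short recursive procedure. Here is a minimal Python sketch (always expanding along the first row; by Theorem 8.2.8 any other row or column gives the same value, and choosing one with many zeros, as in the example below, saves work by hand):

```python
def det_by_cofactors(A):
    """det(A) via cofactor expansion along the first row, recursing on minors."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        if A[0][j] == 0:
            continue  # zero entries contribute nothing to the expansion
        # Minor M_{1,j+1}: delete row 1 and column j+1 (0-based column j).
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_by_cofactors(minor)
    return total
```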

Example 8.2.9

    By first expanding along the second column, we obtain

    \[
    \left| \begin{array}{rrrr}
    1 & 2 & -3 & 4 \\
    -4 & 2 & 1 & 3 \\
    3 & 0 & 0 & -3 \\
    2 & 0 & -2 & 3 \\
    \end{array}
    \right|
    =
    (-1)^{1 + 2}(2)
\left| \begin{array}{rrr}
    -4 & 1 & 3 \\
    3 & 0 & -3 \\
    2 & -2 & 3 \\
    \end{array}
    \right|
    +
    (-1)^{2 + 2}(2)
\left| \begin{array}{rrr}
    1 & -3 & 4 \\
    3 & 0 & -3 \\
    2 & -2 & 3 \\
    \end{array}
    \right|.
    \]

    Then, each of the resulting \(3 \times 3\) determinants can be computed by further expansion:

    \[
    \left|
\begin{array}{rrr}
    -4 & 1 & 3 \\
    3 & 0 & -3 \\
    2 & -2 & 3 \\
    \end{array}
    \right|
    =
    (-1)^{1 + 2}(1)
    \left|
\begin{array}{rr}
    3 & -3 \\
    2 & 3 \\
    \end{array}
    \right|
    +
    (-1)^{3 + 2}(-2)
    \left|
\begin{array}{rr}
    -4 & 3 \\
    3 & -3 \\
    \end{array}
    \right|
    = -15 + 6
    = -9.
    \]

    \[
    \left|
\begin{array}{rrr}
    1 & -3 & 4 \\
    3 & 0 & -3 \\
    2 & -2 & 3 \\
    \end{array}
    \right|
    =
    (-1)^{2 + 1}(3)
    \left|
\begin{array}{rr}
    -3 & 4 \\
    -2 & 3 \\
    \end{array}
    \right|
    +
    (-1)^{2 + 3}(-3)
    \left|
\begin{array}{rr}
    1 & -3 \\
    2 & -2 \\
    \end{array}
    \right|
    = 3 + 12
    = 15.
    \]

It follows that the original determinant equals \((-1)^{1+2}(2)(-9) + (-1)^{2+2}(2)(15) = 18 + 30 = 48\).
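As a final check, the recursive sketch above reproduces this value:

```python
assert det_by_cofactors([[ 1, 2, -3,  4],
                         [-4, 2,  1,  3],
                         [ 3, 0,  0, -3],
                         [ 2, 0, -2,  3]]) == 48
```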

