
8.5: Determinants and Cramer’s Rule


    In this section we assign to each square matrix \(A\) a real number, called the determinant of \(A\), which will eventually lead us to yet another technique for solving consistent independent systems of linear equations. The determinant is defined recursively; that is, we define it for \(1 \times 1\) matrices and give a rule by which we can reduce determinants of \(n \times n\) matrices to sums of determinants of \((n-1) \times (n-1)\) matrices. (We will talk more about the term 'recursively' in Section 9.1.) This means we will be able to evaluate the determinant of a \(2 \times 2\) matrix as a sum of the determinants of \(1 \times 1\) matrices; the determinant of a \(3 \times 3\) matrix as a sum of the determinants of \(2 \times 2\) matrices; and so forth. To explain how we will take an \(n \times n\) matrix and distill from it an \((n-1) \times (n-1)\) matrix, we use the following notation.

    Definition \(\PageIndex{1}\)

    Given an \(n \times n\) matrix \(A\) where \(n>1\), the matrix \(A_{ij}\) is the \((n-1) \times (n-1)\) matrix formed by deleting the \(i\)th row of \(A\) and the \(j\)th column of \(A\).

    For example, using the matrix \(A\) below, we find the matrix \(A_{23}\) by deleting the second row and third column of \(A\).

    \[ \begin{array}{ccc} A = \left[ \begin{array}{rrr} 3 & 1 & 2 \\ 0 & -1 & 5 \\ 2 & 1 & 4 \\ \end{array} \right] & \xrightarrow{\text{Delete } R2 \text{ and } C3} & A_{23} = \left[ \begin{array}{rr} 3 & 1 \\ 2 & 1 \\ \end{array} \right] \\ \end{array}\]

    We are now in a position to define the determinant of a matrix.

    Definition \(\PageIndex{2}\): Determinant

    Given an \(n \times n\) matrix \(A\), the determinant of \(A\), denoted \(\det(A)\), is defined as follows:

    • If \(n=1\), then \(A = \left[ a_{11} \right]\) and \(\det(A) = \det\left( \left[ a_{11} \right] \right) = a_{11}\).
    • If \(n>1\), then \(A = \left[ a_{ij} \right]_{n \times n}\) and

    \[ \det(A) = \det\left( \left[ a_{ij} \right]_{n \times n} \right) = a_{11} \det\left(A_{11}\right) - a_{12} \det\left(A_{12}\right) + \cdots + (-1)^{1+n} a_{1n} \det\left(A_{1n}\right)\]

    There are two commonly used notations for the determinant of a matrix \(A\): '\(\det(A)\)' and '\(|A|\)'.

    We have chosen to use the notation \(\det(A)\) as opposed to \(|A|\) because we find that the latter is often confused with absolute value, especially in the context of a \(1 \times 1\) matrix. In the expansion \(a_{11} \det\left(A_{11}\right) - a_{12} \det\left(A_{12}\right) + \cdots + (-1)^{1+n} a_{1n} \det\left(A_{1n}\right)\), the signs alternate, with the final sign dictated by the quantity \((-1)^{1+n}\). Since the entries \(a_{11}\), \(a_{12}\) and so forth up through \(a_{1n}\) comprise the first row of \(A\), we say we are finding the determinant of \(A\) by 'expanding along the first row'. Later in the section, we will develop a formula for \(\det(A)\) which allows us to find it by expanding along any row.

    Applying Definition \(\PageIndex{2}\) to the matrix \(A = \left[ \begin{array}{rr} 4 & -3 \\ 2 & 1 \\ \end{array} \right]\) we get

    \[ \begin{array}{rcl} \det(A) & = & \det \left( \left[ \begin{array}{rr} 4 & -3 \\ 2 & 1 \\ \end{array} \right] \right)\\ & = & 4\det\left(A_{11}\right) - (-3)\det\left(A_{12}\right)\\ & = & 4 \det([1]) +3\det([2]) \\ & = & 4(1) + 3(2) \\ & = & 10 \\ \end{array}\]

    For a generic \(2 \times 2\) matrix \(A = \left[ \begin{array}{cc} a & b \\ c & d \\ \end{array} \right]\) we get

    \[ \begin{array}{rcl} \det(A) & = & \det \left( \left[ \begin{array}{cc} a & b \\ c & d \\ \end{array} \right] \right)\\ & = & a \det\left(A_{11}\right) - b \det\left(A_{12}\right) \\ & = & a \det\left(\left[ d \right]\right) - b \det\left(\left[c \right]\right) \\ & = & ad-bc \end{array}\]

    This formula is worth remembering.

    Note \(\PageIndex{1}\)

    For a \(2 \times 2\) matrix,

    \[ \det \left( \left[ \begin{array}{cc} a & b \\ c & d \\ \end{array} \right] \right) = ad-bc \]

    Applying Definition \(\PageIndex{2}\) to the \(3 \times 3\) matrix \(A = \left[ \begin{array}{rrr} 3 & 1 & \hphantom{-}2 \\ 0 & -1 & 5 \\ 2 & 1 & 4 \\ \end{array} \right]\) we obtain

    \[ \begin{array}{rcl} \det(A) & = & \det \left( \left[ \begin{array}{rrr} 3 & 1 & \hphantom{-}2 \\ 0 & -1 & 5 \\ 2 & 1 & 4 \\ \end{array} \right] \right)\\ & = & 3\det\left(A_{11}\right) - 1\det\left(A_{12}\right) + 2\det\left(A_{13}\right) \\ & = & 3\det \left( \left[ \begin{array}{rr} -1 & 5 \\ 1 & 4 \\ \end{array} \right] \right) - \det \left( \left[ \begin{array}{rr} 0 & 5 \\ 2 & 4 \\ \end{array} \right] \right) + 2 \det \left( \left[ \begin{array}{rr} 0 & -1 \\ 2 & 1 \\ \end{array} \right] \right) \\ & = & 3((-1)(4) - (5)(1)) - ((0)(4)-(5)(2))+2((0)(1)-(-1)(2)) \\ & = & 3(-9)-(-10)+2(2) \\ & = & -13 \\ \end{array} \]
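    Since the definition is recursive, it translates almost directly into code. Below is a minimal Python sketch of Definition \(\PageIndex{2}\) (our own illustration, not part of the text; the helper names det and submatrix are ours), which expands along the first row and recurses down to the \(1 \times 1\) base case.

        # A minimal sketch of the recursive definition: expand along the
        # first row, recursing until the 1 x 1 base case. Written for
        # clarity, not speed: it performs on the order of n! multiplications.
        def submatrix(A, i, j):
            """Return A_ij: the matrix A with row i and column j deleted (0-indexed)."""
            return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

        def det(A):
            n = len(A)
            if n == 1:                       # base case: det([a11]) = a11
                return A[0][0]
            # signs alternate: (-1)^(1+j) in the text is (-1)^j with 0-indexing
            return sum((-1) ** j * A[0][j] * det(submatrix(A, 0, j)) for j in range(n))

        print(det([[3, 1, 2], [0, -1, 5], [2, 1, 4]]))   # -13, matching the hand computation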

    To evaluate the determinant of a \(4 \times 4\) matrix, we would have to evaluate the determinants of four \(3 \times 3\) matrices, each of which involves finding the determinants of three \(2 \times 2\) matrices. As you can see, our method of evaluating determinants quickly gets out of hand and many of you may be reaching for the calculator. There is some mathematical machinery which can assist us in calculating determinants and we present that here. Before we state the theorem, we need some more terminology.

    Definition \(\PageIndex{3}\)

    Let \(A\) be an \(n \times n\) matrix and let \(A_{ij}\) be defined as in Definition \(\PageIndex{1}\). The \(ij\) minor of \(A\), denoted \(M_{ij}\), is defined by \(M_{ij} = \det\left(A_{ij}\right)\). The \(ij\) cofactor of \(A\), denoted \(C_{ij}\), is defined by \(C_{ij} = (-1)^{i+j}M_{ij} = (-1)^{i+j}\det\left(A_{ij}\right)\).

    We note that in Definition \(\PageIndex{2}\), the sum

    \[a_{11} \det\left(A_{11} \right) - a_{12} \det\left(A_{12}\right) + \cdots + (-1)^{1+n} a_{1n} \det\left(A_{1n}\right)\]

    can be rewritten as

    \[a_{11} (-1)^{1+1} \det\left(A_{11}\right) + a_{12} (-1)^{1+2} \det\left(A_{12}\right) + \ldots + a_{1n} (-1)^{1+n} \det\left(A_{1n}\right)\]

    which, in the language of cofactors is

    \[a_{11} C_{11} + a_{12}C_{12} + \ldots + a_{1n}C_{1n} \]
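    In the same hypothetical code sketch as above, minors and cofactors are one-liners (again, the names minor and cofactor are our own, layered on det and submatrix from the earlier sketch):

        def minor(A, i, j):
            """M_ij: the determinant of A with row i and column j deleted (0-indexed)."""
            return det(submatrix(A, i, j))

        def cofactor(A, i, j):
            """C_ij = (-1)^(i+j) * M_ij; the parity of i+j is the same
            whether rows and columns are counted from 0 or from 1."""
            return (-1) ** (i + j) * minor(A, i, j)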

    We are now ready to state our main theorem concerning determinants.

    Theorem \(\PageIndex{1}\): Properties of the Determinant

    Let \(A = \left[a_{ij}\right]_{n \times n}\).

    • We may find the determinant by expanding along any row. That is, for any \(1 \leq k \leq n\), \[\det(A) = a_{k1}C_{k1} + a_{k2}C_{k2} + \ldots + a_{kn} C_{kn}\]
    • If \(A'\) is the matrix obtained from \(A\) by:
      • interchanging any two rows, then \(\det(A')=-\det(A)\).
      • replacing a row with a nonzero multiple (say \(c\)) of itself, then \(\det(A')=c\det(A)\)
      • replacing a row with itself plus a multiple of another row, then \(\det(A')=\det(A)\)
    • If \(A\) has two identical rows, or a row consisting of all \(0\)'s, then \(\det(A) = 0\).
    • If \(A\) is upper or lower triangular, then \(\det(A)\) is the product of the entries on the main diagonal.
    • If \(B\) is an \(n \times n\) matrix, then \(\det(AB) = \det(A) \det(B)\).
    • \(\det\left(A^{n}\right) = \det(A)^{n}\) for all natural numbers \(n\).
    • \(A\) is invertible if and only if \(\det(A) \neq 0\). In this case, \(\det\left(A^{-1}\right) = \dfrac{1}{\det(A)}\).

    Unfortunately, while we can easily demonstrate the results in Theorem \(\PageIndex{1}\), the proofs of most of these properties are beyond the scope of this text. We could prove these properties for generic \(2 \times 2\) or even \(3 \times 3\) matrices by brute force computation, but this manner of proof belies the elegance and symmetry of the determinant. We will prove what few properties we can after we have developed some more tools, such as the Principle of Mathematical Induction in Chapter 9. (For a very elegant treatment, take a course in Linear Algebra. There, you will most likely see the treatment of determinants logically reversed from what is presented here: the determinant is defined as a function which takes a square matrix to a real number and satisfies some of the properties in Theorem \(\PageIndex{1}\), and from that function a formula for the determinant is developed.) For the moment, let us demonstrate some of the properties listed in Theorem \(\PageIndex{1}\) on the matrix \(A\) below. (Others will be discussed in the Exercises.)

    \[A = \left[ \begin{array}{rrr} 3 & 1 & \hphantom{-}2 \\ 0 & -1 & 5 \\ 2 & 1 & 4 \\ \end{array} \right] \]

    We found \(\det(A) = -13\) by expanding along the first row. To take advantage of the \(0\) in the second row, we use Theorem \(\PageIndex{1}\) to find \(\det(A) = -13\) by expanding along that row.

    \[ \begin{array}{rcl} \det \left( \left[ \begin{array}{rrr} 3 & 1 & \hphantom{-}2 \\ 0 & -1 & 5 \\ 2 & 1 & 4 \\ \end{array} \right] \right)& = & 0C_{21} + (-1)C_{22}+5C_{23} \\ & = & (-1) (-1)^{2+2} \det\left(A_{22}\right) + 5 (-1)^{2+3}\det\left(A_{23}\right) \\ & = & - \det \left( \left[ \begin{array}{rr} 3 & 2 \\ 2 & 4 \\ \end{array} \right] \right) -5 \det \left( \left[ \begin{array}{rr} 3 & 1 \\ 2 & 1 \\ \end{array} \right] \right) \\ & = & -((3)(4)-(2)(2)) - 5((3)(1)-(2)(1)) \\ & = & -8-5 \\ & = & -13 \, \, \checkmark \\ \end{array} \]
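    As a quick check with the sketches above (same naming assumptions), expanding along every row of \(A\) returns the same value:

        A = [[3, 1, 2], [0, -1, 5], [2, 1, 4]]
        for k in range(3):   # expand along each row in turn
            print(sum(A[k][j] * cofactor(A, k, j) for j in range(3)))   # -13 each time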

    In general, the sign of \((-1)^{i+j}\) in front of the minor in the expansion of the determinant follows an alternating pattern. Below is the pattern for \(2 \times 2\), \(3 \times 3\) and \(4 \times 4\) matrices, and it extends naturally to higher dimensions.

    \[ \begin{array}{ccc} \left[ \begin{array}{cc} + & - \\ - & + \\ \end{array} \right] & \qquad \left[ \begin{array}{ccc} + & - & + \\ - & + & - \\ + & - & + \end{array} \right] & \qquad \left[ \begin{array}{cccc} + & - & + & - \\ - & + & - & +\\ + & - & + & - \\ - & + & - & + \end{array} \right] \end{array} \]

    The reader is cautioned, however, against reading too much into these sign patterns. In the example above, we expanded the \(3 \times 3\) matrix \(A\) by its second row and the term which corresponds to the second entry ended up being negative even though the sign attached to the minor is \((+)\). These signs represent only the signs of the \((-1)^{i+j}\) in the formula; the sign of the corresponding entry as well as the minor itself determine the ultimate sign of the term in the expansion of the determinant.

    To illustrate some of the other properties in Theorem \(\PageIndex{1}\), we use row operations to transform our \(3 \times 3\) matrix \(A\) into an upper triangular matrix, keeping track of the row operations and labeling each successive matrix. (Essentially, we follow the Gauss Jordan algorithm, but we don't care about getting leading \(1\)'s.)

    \[ \begin{array}{ccccc} \left[ \begin{array}{rrr} 3 & 1 & \hphantom{-}2 \\ 0 & -1 & 5 \\ 2 & 1 & 4 \\ \end{array} \right] & \xrightarrow[\text{with } -\frac{2}{3}R1+R3]{\text{Replace } R3} & \left[ \begin{array}{rrr} 3 & 1 & \hphantom{-}2 \\ 0 & -1 & 5 \\ 0 & \frac{1}{3} & \frac{8}{3} \\ \end{array} \right] & \xrightarrow[\text{with } \frac{1}{3}R2+R3]{\text{Replace } R3} & \left[ \begin{array}{rrr} 3 & 1 & 2 \\ 0 & -1 & 5 \\ 0 & 0 & \frac{13}{3} \\ \end{array} \right] \\ A & & B & & C \\ \end{array}\]

    Theorem \(\PageIndex{1}\) guarantees us that \(\det(A) = \det(B) = \det(C)\), since in moving from one matrix to the next we replaced a row with itself plus a multiple of another row. Furthermore, since \(C\) is upper triangular, \(\det(C)\) is the product of the entries on the main diagonal: in this case, \(\det(C) = (3)(-1)\left(\frac{13}{3}\right) = -13\). This demonstrates the utility of using row operations to assist in calculating determinants, and it also sheds some light on the connection between a determinant and invertibility. Recall from Section 8.4 that to find \(A^{-1}\), we attempt to transform \(A\) to \(I_{n}\) using row operations:

    \[ \begin{array}{ccc} \left[ \begin{array}{c|c} A & I_{n} \\ \end{array} \right] & \xrightarrow{\text{Gauss Jordan Elimination}} & \left[ \begin{array}{c|c} I_{n} & A^{-1} \\ \end{array} \right] \end{array}\]

    As we apply our allowable row operations on \(A\) to put it into reduced row echelon form, the determinant of the intermediate matrices can vary from the determinant of \(A\) by at most a nonzero multiple. This means that if \(\det(A) \neq 0\), then the determinant of \(A\)'s reduced row echelon form must also be nonzero, which, by the definition of reduced row echelon form, means that all the main diagonal entries of \(A\)'s reduced row echelon form must be \(1\). That is, \(A\)'s reduced row echelon form is \(I_{n}\), and \(A\) is invertible. Conversely, if \(A\) is invertible, then \(A\) can be transformed into \(I_{n}\) using row operations. Since \(\det\left(I_{n}\right) = 1 \neq 0\), our same logic implies \(\det(A) \neq 0\). Basically, we have established that the determinant determines whether or not the matrix \(A\) is invertible. (Later in this section, we will see that determinants, specifically cofactors, are deeply connected with the inverse of a matrix.)
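    Returning to the computation with \(A\), \(B\) and \(C\) above, this row-reduction strategy is easy to automate. The sketch below (our own illustration, same naming caveats as before) reduces a matrix to an upper triangular one using only row swaps and row replacements, so the determinant changes by at most a sign along the way.

        # A sketch of computing det(A) as demonstrated above: row reduce to
        # upper triangular form, then multiply the main diagonal entries.
        # Row replacements leave the determinant unchanged; each row swap
        # flips its sign (two of the properties in the theorem above).
        def det_by_elimination(A):
            A = [row[:] for row in A]      # work on a copy
            n, sign = len(A), 1
            for col in range(n):
                # find a row at or below the diagonal with a nonzero pivot
                pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
                if pivot is None:
                    return 0               # no pivot in this column: det(A) = 0
                if pivot != col:
                    A[col], A[pivot] = A[pivot], A[col]
                    sign = -sign           # interchanging two rows negates det
                for r in range(col + 1, n):
                    m = A[r][col] / A[col][col]
                    # replace row r with itself minus a multiple of the pivot row
                    A[r] = [a - m * p for a, p in zip(A[r], A[col])]
            product = sign
            for i in range(n):
                product *= A[i][i]         # product of the main diagonal entries
            return product

        print(det_by_elimination([[3, 1, 2], [0, -1, 5], [2, 1, 4]]))   # -13.0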

    It is worth noting that when we first introduced the notion of a matrix inverse, it was in the context of solving a linear matrix equation. In effect, we were trying to 'divide' both sides of the matrix equation \(AX = B\) by the matrix \(A\). Just as we cannot divide a real number by \(0\), Theorem \(\PageIndex{1}\) tells us we cannot 'divide' by a matrix whose determinant is \(0\). We also know that if the coefficient matrix of a system of linear equations is invertible, then the system is consistent and independent. It follows, then, that if the determinant of said coefficient matrix is nonzero, the system is consistent and independent.

    Cramer's Rule and Matrix Adjoints

    In this section, we introduce a theorem which enables us to solve a system of linear equations by means of determinants only. As usual, the theorem is stated in full generality, using numbered unknowns \(x_1\), \(x_2\), etc., instead of the more familiar letters \(x\), \(y\), \(z\), etc. The proof of the general case is best left to a course in Linear Algebra.

    Theorem \(\PageIndex{2}\): Cramer's Rule

    Suppose \(AX = B\) is the matrix form of a system of \(n\) linear equations in \(n\) unknowns where \(A\) is the coefficient matrix, \(X\) is the unknowns matrix, and \(B\) is the constant matrix. If \(\det(A) \neq 0\), then the corresponding system is consistent and independent and the solution for the unknowns \(x_1\), \(x_2\), \(\ldots\), \(x_{n}\) is given by:

    \[ x_{j} = \dfrac{\det\left(A_{j}\right)}{\det(A)},\]

    where \(A_{j}\) is the matrix \(A\) whose \(j\)th column has been replaced by the constants in \(B\).

    In words, Cramer's Rule tells us we can solve for each unknown, one at a time, by finding the ratio of the determinant of \(A_{j}\) to that of the determinant of the coefficient matrix. The matrix \(A_{j}\) is found by replacing the column in the coefficient matrix which holds the coefficients of \(x_{j}\) with the constants of the system. The following example fleshes out this method.

    Example \(\PageIndex{1}\): Application of Cramer's Rule

    Use Cramer's Rule to solve for the indicated unknowns.

    1. Solve \(\left\{ \begin{array}{rcr} 2x_1 - 3x_2 & = & 4 \\ 5x_1 + x_2 & = & -2 \end{array} \right.\) for \(x_1\) and \(x_2\)
    2. Solve \(\left\{ \begin{array}{rcr} 2x - 3y + z & = & -1 \\ x-y+z & = & 1 \\ 3x-4z & = & 0 \end{array} \right.\) for \(z\).

    Solution

    1. Writing this system in matrix form, we find \[ \begin{array}{ccc} A = \left[ \begin{array}{rr} 2 & -3 \\ 5 & 1 \\ \end{array} \right] & \qquad X = \left[ \begin{array}{r} x_1 \\ x_2 \\ \end{array} \right] & \qquad B = \left[ \begin{array}{r} 4 \\ -2 \\ \end{array} \right] \\ \end{array} \] To find the matrix \(A_1\), we remove the column of the coefficient matrix \(A\) which holds the coefficients of \(x_1\) and replace it with the corresponding entries in \(B\). Likewise, we replace the column of \(A\) which corresponds to the coefficients of \(x_2\) with the constants to form the matrix \(A_2\). This yields \[ \begin{array}{cc} A_1 = \left[ \begin{array}{rr} 4 & -3 \\ -2 & 1 \\ \end{array} \right] & \qquad A_2 = \left[ \begin{array}{rr} 2 & 4 \\ 5 & -2 \\ \end{array} \right] \\ \end{array} \] Computing determinants, we get \(\det(A) = 17\), \(\det\left(A_1\right) = -2\) and \(\det\left(A_2\right) = -24\), so that \[ \begin{array}{cc} x_1 = \dfrac{\det\left(A_1\right)}{\det(A)} = -\dfrac{2}{17} & \qquad x_2 = \dfrac{\det\left(A_2\right)}{\det(A)} = -\dfrac{24}{17} \\ \end{array} \] The reader can check that the solution to the system is \(\left(-\frac{2}{17}, -\frac{24}{17}\right)\).
    2. To use Cramer's Rule to find \(z\), we identify \(x_3\) as \(z\). We have \[ \begin{array}{cccc} A = \left[ \begin{array}{rrr} 2 & -3 & 1 \\ 1 & -1 & 1 \\ 3 & 0 & -4 \end{array} \right] & X = \left[ \begin{array}{r} x \\ y \\ z \end{array} \right] & B = \left[ \begin{array}{r} -1 \\ 1 \\ 0 \end{array} \right] & A_3 = A_{z} = \left[ \begin{array}{rrr} 2 & -3 & -1 \\ 1 & -1 & 1 \\ 3 & 0 & 0 \end{array} \right] \\ \end{array} \] Expanding both \(\det(A)\) and \(\det\left(A_{z}\right)\) along the third rows (to take advantage of the \(0\)'s), we find \(\det(A) = 3(-2) + (-4)(1) = -10\) and \(\det\left(A_{z}\right) = 3\det\left( \left[ \begin{array}{rr} -3 & -1 \\ -1 & 1 \end{array} \right] \right) = 3(-4) = -12\), so \[ z = \dfrac{\det\left(A_{z}\right)}{\det(A)} = \dfrac{-12}{-10} = \dfrac{6}{5} \] The reader is encouraged to solve this system for \(x\) and \(y\) similarly and check the answer.
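    For the curious, Cramer's Rule is equally direct to sketch in code (our illustration; it reuses the recursive det from the earlier sketch and Python's fractions.Fraction for exact arithmetic; the name cramer is ours):

        from fractions import Fraction

        def cramer(A, B):
            """Solve AX = B by Cramer's Rule; assumes det(A) != 0."""
            d = det(A)
            # A_j: the matrix A with column j replaced by the constants in B
            return [Fraction(det([row[:j] + [b] + row[j+1:]
                                  for row, b in zip(A, B)]), d)
                    for j in range(len(A))]

        # the system from part 1 of the example above
        print(cramer([[2, -3], [5, 1]], [4, -2]))   # [Fraction(-2, 17), Fraction(-24, 17)]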

    Our last application of determinants is to develop an alternative method for finding the inverse of a matrix. (We are developing a method in the forthcoming discussion; as in Section 8.4, when we developed the first algorithm to find matrix inverses, we ask that you indulge us.) Let us consider the \(3 \times 3\) matrix \(A\) which we studied so extensively earlier in this section:

    \[A = \left[ \begin{array}{rrr} 3 & 1 & \hphantom{-}2 \\ 0 & -1 & 5 \\ 2 & 1 & 4 \\ \end{array} \right]\]

    We found through a variety of methods that \(\det(A) = -13\). To our surprise and delight, its inverse below has a remarkable number of \(13\)'s in the denominators of its entries. This is no coincidence.

    \[ A^{-1} = \left[ \begin{array}{rrr} \frac{9}{13} & \frac{2}{13} & -\frac{7}{13} \\ -\frac{10}{13} & -\frac{8}{13} & \frac{15}{13} \\ -\frac{2}{13} & \frac{1}{13} & \frac{3}{13} \\ \end{array} \right]\]

    Recall that to find \(A^{-1}\), we are essentially solving the matrix equation \(AX = I_3\), where \(X = \left[ x_{ij} \right]_{3 \times 3}\) is a \(3 \times 3\) matrix. Because of how matrix multiplication is defined, the first column of \(I_3\) is the product of \(A\) with the first column of \(X\), the second column of \(I_3\) is the product of \(A\) with the second column of \(X\) and the third column of \(I_3\) is the product of \(A\) with the third column of \(X\). In other words, we are solving three systems of equations (the reader is encouraged to stop and think this through).

    \[\begin{array}{ccc} A\left[ \begin{array}{r} x_{11} \\ x_{21} \\ x_{31} \end{array} \right] = \left[ \begin{array}{r} 1 \\ 0 \\ 0 \end{array} \right] & \qquad A\left[ \begin{array}{r} x_{12} \\ x_{22} \\ x_{32} \end{array} \right] = \left[ \begin{array}{r} 0 \\ 1 \\ 0 \end{array} \right] & \qquad A\left[ \begin{array}{r} x_{13} \\ x_{23} \\ x_{33} \end{array} \right] = \left[ \begin{array}{r} 0 \\ 0 \\ 1 \end{array} \right] \\ \end{array}\]

    We can solve each of these systems using Cramer's Rule. Focusing on the first system, we have

    \[ \begin{array}{ccc} A_1 = \left[ \begin{array}{rrr} 1 & 1 & \hphantom{-}2 \\ 0 & -1 & 5 \\ 0 & 1 & 4 \\ \end{array} \right] & A_2 = \left[ \begin{array}{rrr} 3 & 1 & 2 \\ 0 & 0 & 5 \\ 2 & 0 & 4 \\ \end{array} \right] & A_3 = \left[ \begin{array}{rrr} 3 & 1 & \hphantom{-}1 \\ 0 & -1 & 0 \\ 2 & 1 & 0 \\ \end{array} \right] \end{array} \]

    If we expand \(\det\left(A_1\right)\) along the first row, we get

    \[ \begin{array}{rcl} \det\left(A_1\right) & = & \det\left( \left[ \begin{array}{rr} -1 & 5 \\ 1 & 4 \\ \end{array} \right] \right) - \det\left( \left[ \begin{array}{rr} 0 & 5 \\ 0 & 4 \\ \end{array} \right] \right) + 2 \det\left( \left[ \begin{array}{rr} 0 & -1 \\ 0 & 1 \\ \end{array} \right] \right) \\ & = & \det\left( \left[ \begin{array}{rr} -1 & 5 \\ 1 & 4 \\ \end{array} \right] \right) \end{array} \]

    Amazingly, this is none other than the cofactor \(C_{11}\) of \(A\). The reader is invited to check this, as well as the claims that \(\det\left(A_2\right) = C_{12}\) and \(\det\left(A_3\right) = C_{13}\). (To see this, though it seems unnatural to do so, expand along the first row. In a solid Linear Algebra course you will learn that the properties in Theorem \(\PageIndex{1}\) hold equally well if the word 'row' is replaced by the word 'column'; we're not going to get into column operations in this text, but they do make some of what we're trying to say easier to follow.) Cramer's Rule tells us

    \[\begin{array}{ccc} x_{11} = \dfrac{\det\left(A_1\right)}{\det(A)} = \dfrac{C_{11}}{\det(A)}, & x_{21} = \dfrac{\det\left(A_2\right)}{\det(A)} = \dfrac{C_{12}}{\det(A)}, & x_{31} = \dfrac{\det\left(A_3\right)}{\det(A)} = \dfrac{C_{13}}{\det(A)} \end{array} \]

    So the first column of the inverse matrix \(X\) is:

    \[ \left[ \begin{array}{r} x_{11} \\ x_{21} \\ x_{31} \end{array} \right] = \left[ \begin{array}{r} \dfrac{C_{11}}{\det(A)} \\ \dfrac{C_{12}}{\det(A)} \\ \dfrac{C_{13}}{\det(A)} \end{array} \right] = \dfrac{1}{\det(A)} \left[ \begin{array}{r} C_{11} \\ C_{12} \\ C_{13} \end{array} \right] \]

    Notice the reversal of the subscripts going from the unknown to the corresponding cofactor of \(A\). This trend continues and we get

    \[ \begin{array}{cc} \left[ \begin{array}{r} x_{12} \\ x_{22} \\ x_{32} \end{array} \right] = \dfrac{1}{\det(A)} \left[ \begin{array}{r} C_{21} \\ C_{22} \\ C_{23} \end{array} \right] & \qquad \left[ \begin{array}{r} x_{13} \\ x_{23} \\ x_{33} \end{array} \right] = \dfrac{1}{\det(A)} \left[ \begin{array}{r} C_{31} \\ C_{32} \\ C_{33} \end{array} \right] \end{array}\]

    Putting all of these together, we have obtained a new and surprising formula for \(A^{-1}\), namely

    \[ A^{-1} = \dfrac{1}{\det(A)} \left[ \begin{array}{ccc} C_{11} & C_{21} & C_{31} \\ C_{12} & C_{22} & C_{32} \\ C_{13} & C_{23} & C_{33} \\ \end{array} \right] \]

    To see that this does indeed yield \(A^{-1}\), we find all of the cofactors of \(A\):

    \[\begin{array}{rcrrcrrcr} C_{11} & = & -9, & C_{21} & = & -2, & C_{31} & = & 7\\ C_{12} & = & 10, & C_{22} & = & 8, & C_{32} & = & -15 \\ C_{13} & = & 2, & C_{23} & = & -1, & C_{33} & = & -3 \\ \end{array} \]

    And, as promised,

    \[ A^{-1} = \dfrac{1}{\det(A)} \left[ \begin{array}{ccc} C_{11} & C_{21} & C_{31} \\ C_{12} & C_{22} & C_{32} \\ C_{13} & C_{23} & C_{33} \\ \end{array} \right] = -\dfrac{1}{13} \left[ \begin{array}{rrr} -9 & -2 & 7\\ 10 & 8 & -15 \\ 2 & -1 & -3 \\ \end{array} \right] = \left[ \begin{array}{rrr} \frac{9}{13} & \frac{2}{13} & -\frac{7}{13} \\ -\frac{10}{13} & -\frac{8}{13} & \frac{15}{13} \\ -\frac{2}{13} & \frac{1}{13} & \frac{3}{13} \\ \end{array} \right] \]

    To generalize this to invertible \(n \times n\) matrices, we need another definition and a theorem. Our definition gives a special name to the cofactor matrix, and the theorem tells us how to use it along with \(\det(A)\) to find the inverse of a matrix.

    Definition \(\PageIndex{4}\): Matrix Adjoint

    Let \(A\) be an \(n \times n\) matrix, and let \(C_{ij}\) denote the \(ij\) cofactor of \(A\). The adjoint of \(A\), denoted \(\text{adj}(A)\), is the matrix whose \(ij\)-entry is the \(ji\) cofactor of \(A\), \(C_{ji}\). That is,

    \[ \text{adj}(A) = \left[ \begin{array}{cccc} C_{11} & C_{21} & \ldots & C_{n1} \\ C_{12} & C_{22} & \ldots & C_{n2} \\ \vdots & \vdots & & \vdots \\ C_{1n} & C_{2n} & \ldots & C_{nn} \\ \end{array} \right] \]

    This new notation greatly shortens the statement of the formula for the inverse of a matrix.

    Theorem \(\PageIndex{3}\)

    Let \(A\) be an invertible \(n \times n\) matrix. Then

    \[ A^{-1} = \dfrac{1}{\det(A)} \text{adj}(A) \]

    For \(2 \times 2\) matrices, Theorem \(\PageIndex{3}\) reduces to a fairly simple formula.

    Note \(\PageIndex{2}\)

    For an invertible \(2 \times 2\) matrix,

    \[ \left[ \begin{array}{rr} a & b \\ c & d \\ \end{array} \right]^{-1} = \dfrac{1}{ad-bc} \left[ \begin{array}{rr} d & -b \\ -c & a \\ \end{array} \right] \]
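    Theorem \(\PageIndex{3}\) also gives a direct, if computationally expensive, inversion routine. Here is a sketch under the same assumptions as before, reusing det and cofactor from the earlier sketches (the name inverse_by_adjoint is ours):

        from fractions import Fraction

        def inverse_by_adjoint(A):
            """A^{-1} = adj(A) / det(A); raises if A is not invertible."""
            d = det(A)
            if d == 0:
                raise ValueError("det(A) = 0, so A is not invertible")
            n = len(A)
            # the ij-entry of adj(A) is the ji cofactor -- note the transpose,
            # mirroring the subscript reversal observed in the discussion above
            return [[Fraction(cofactor(A, j, i), d) for j in range(n)]
                    for i in range(n)]

        A = [[3, 1, 2], [0, -1, 5], [2, 1, 4]]
        print(inverse_by_adjoint(A)[0])   # [Fraction(9, 13), Fraction(2, 13), Fraction(-7, 13)]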

    The proof of Theorem \(\PageIndex{3}\) is, like so many of the results in this section, best left to a course in Linear Algebra. In such a course, not only do you gain some more sophisticated proof techniques, you also gain a larger perspective. The authors assure you that persistence pays off. If you stick around a few semesters and take a course in Linear Algebra, you'll see just how pretty all things matrix really are, in spite of the tedious notation and sea of subscripts. Within the scope of this text, we will prove a few results involving determinants in Chapter 9, once we have the Principle of Mathematical Induction well in hand. Until then, make sure you have a handle on the mechanics of matrices; the theory will come eventually.


    This page titled 8.5: Determinants and Cramer’s Rule is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Carl Stitz & Jeff Zeager via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.