Mathematics LibreTexts

3.5: Cramer's Rule


    Learning Objectives
    • T/F: Cramer’s Rule is another method to compute the determinant of a matrix.
    • T/F: Cramer’s Rule is often used because it is more efficient than Gaussian elimination.
    • Mathematicians use what word to describe the connections between seemingly unrelated ideas?

    In the previous sections we have learned about the determinant, but we haven’t given a really good reason why we would want to compute it.\(^{1}\) This section shows one application of the determinant: solving systems of linear equations. We introduce this idea in terms of a theorem, then we will practice.

    Theorem \(\PageIndex{1}\)

    Cramer's Rule

    Let \(A\) be an \(n\times n\) matrix with \(\text{det}(A)\neq 0\) and let \(\vec{b}\) be an \(n\times 1\) column vector. Then the linear system

    \[A\vec{x}=\vec{b} \nonumber \]

    has solution

    \[x_{i}=\frac{\text{det}\left(A_{i}(\vec{b})\right)}{\text{det}(A)}, \nonumber \]

    where \(A_{i}(\vec{b})\) is the matrix formed by replacing the \(i^{\text{th}}\) column of \(A\) with \(\vec{b}\).
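The theorem translates directly into a short algorithm: compute \(\text{det}(A)\), then for each \(i\) swap \(\vec{b}\) into column \(i\) and divide determinants. The Python sketch below (the helper names `det` and `cramer` are ours, not from the text) uses naive cofactor expansion and exact rational arithmetic so the answers match the hand computations exactly.

```python
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def cramer(A, b):
    """Solve A x = b via Cramer's Rule; requires det(A) != 0."""
    d = det(A)
    if d == 0:
        raise ValueError("det(A) = 0: Cramer's Rule does not apply")
    xs = []
    for i in range(len(A)):
        # A_i(b): replace the i-th column of A with b.
        Ai = [row[:i] + [b[k]] + row[i+1:] for k, row in enumerate(A)]
        xs.append(Fraction(det(Ai), d))
    return xs
```

For instance, `cramer([[1, 5, -3], [1, 4, 2], [2, -1, 0]], [-36, -11, 7])` reproduces the solution of Example \(\PageIndex{1}\) below.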

    Let’s do an example.

    Example \(\PageIndex{1}\)

    Use Cramer's Rule to solve the linear system \(A\vec{x}=\vec{b}\) where

    \[A=\left[\begin{array}{ccc}{1}&{5}&{-3}\\{1}&{4}&{2}\\{2}&{-1}&{0}\end{array}\right]\quad\text{and}\quad\vec{b}=\left[\begin{array}{c}{-36}\\{-11}\\{7}\end{array}\right]. \nonumber \]


    We first compute the determinant of \(A\) to see if we can apply Cramer’s Rule.

    \[\text{det}(A)=\left|\begin{array}{ccc}{1}&{5}&{-3}\\{1}&{4}&{2}\\{2}&{-1}&{0}\end{array}\right|=49. \nonumber \]

    Since \(\text{det}(A)\neq 0\), we can apply Cramer’s Rule. Following Theorem \(\PageIndex{1}\), we compute \(\text{det}\left(A_{1}(\vec{b})\right)\), \(\text{det}\left(A_{2}(\vec{b})\right)\) and \(\text{det}\left(A_{3}(\vec{b})\right)\).

    \[\text{det}\left(A_{1}(\vec{b})\right)=\left|\begin{array}{ccc}{\bf{-36}}&{5}&{-3}\\{\bf{-11}}&{4}&{2}\\{\bf{7}}&{-1}&{0}\end{array}\right|=49. \nonumber \]

    (We used a bold font to show where \(\vec{b}\) replaced the first column of \(A\).)

    \[\text{det}\left(A_{2}(\vec{b})\right)=\left|\begin{array}{ccc}{1}&{\bf{-36}}&{-3}\\{1}&{\bf{-11}}&{2}\\{2}&{\bf{7}}&{0}\end{array}\right|=-245. \nonumber \]

\[\text{det}\left(A_{3}(\vec{b})\right)=\left|\begin{array}{ccc}{1}&{5}&{\bf{-36}}\\{1}&{4}&{\bf{-11}}\\{2}&{-1}&{\bf{7}}\end{array}\right|=196. \nonumber \]

    Therefore we can compute \(\vec{x}\):

    \[\begin{align}\begin{aligned} x_{1}&=\frac{\text{det}\left(A_{1}(\vec{b})\right)}{\text{det}(A)}=\frac{49}{49}=1 \\ x_{2}&=\frac{\text{det}\left(A_{2}(\vec{b})\right)}{\text{det}(A)}=\frac{-245}{49}=-5 \\ x_{3}&=\frac{\text{det}\left(A_{3}(\vec{b})\right)}{\text{det}(A)}=\frac{196}{49}=4\end{aligned}\end{align} \nonumber \]


    \[\vec{x}=\left[\begin{array}{c}{x_{1}}\\{x_{2}}\\{x_{3}}\end{array}\right]=\left[\begin{array}{c}{1}\\{-5}\\{4}\end{array}\right]. \nonumber \]
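As a quick sanity check (our own, not part of the text), we can substitute this solution back into \(A\vec{x}\) and confirm that it reproduces \(\vec{b}\):

```python
# Substitute x = (1, -5, 4) back into A x and confirm it equals b.
A = [[1, 5, -3], [1, 4, 2], [2, -1, 0]]
x = [1, -5, 4]
Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
print(Ax)  # → [-36, -11, 7]
```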

    Let’s do another example.

    Example \(\PageIndex{2}\)

    Use Cramer’s Rule to solve the linear system \(A\vec{x}=\vec{b}\) where

    \[A=\left[\begin{array}{cc}{1}&{2}\\{3}&{4}\end{array}\right]\quad\text{and}\quad\vec{b}=\left[\begin{array}{c}{-1}\\{1}\end{array}\right]. \nonumber \]


    The determinant of \(A\) is \(-2\), so we can apply Cramer’s Rule.

    \[\begin{align}\begin{aligned}\text{det}\left(A_{1}(\vec{b})\right)&=\left|\begin{array}{cc}{-1}&{2}\\{1}&{4}\end{array}\right| =-6 \\ \text{det}\left(A_{2}(\vec{b})\right)&=\left|\begin{array}{cc}{1}&{-1}\\{3}&{1}\end{array}\right|=4.\end{aligned}\end{align} \nonumber \]


    \[\begin{align}\begin{aligned}x_{1}&=\frac{\text{det}\left(A_{1}(\vec{b})\right)}{\text{det}(A)}=\frac{-6}{-2}=3 \\ x_{2}&=\frac{\text{det}\left(A_{2}(\vec{b})\right)}{\text{det}(A)}=\frac{4}{-2}=-2\end{aligned}\end{align} \nonumber \]


    \[\vec{x}=\left[\begin{array}{c}{x_{1}}\\{x_{2}}\end{array}\right]=\left[\begin{array}{c}{3}\\{-2}\end{array}\right]. \nonumber \]

We learned in Section 3.4 that when considering a linear system \(A\vec{x}=\vec{b}\) where \(A\) is square, if \(\text{det}(A)\neq 0\) then \(A\) is invertible and \(A\vec{x}=\vec{b}\) has exactly one solution. We also stated in Key Idea 2.7.1 that if \(\text{det}(A) = 0\), then \(A\) is not invertible, and therefore \(A\vec{x}=\vec{b}\) has either no solution or infinitely many solutions. Our method of determining which of these cases applied was to form the augmented matrix \([A\:\vec{b}]\), put it into reduced row echelon form, and interpret the results.

Cramer’s Rule requires that \(\text{det}(A)\neq 0\), so we are guaranteed exactly one solution. When \(\text{det}(A)=0\), Cramer’s Rule cannot tell us whether a given vector \(\vec{b}\) yields no solution or infinitely many solutions. The rule applies only when exactly one solution exists.

    We end this section with a practical consideration. We have mentioned before that finding determinants is a computationally intensive operation. To solve a linear system with 3 equations and 3 unknowns, we need to compute 4 determinants. Just think: with 10 equations and 10 unknowns, we’d need to compute 11 really hard determinants of \(10\times 10\) matrices! That is a lot of work!
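To make "a lot of work" concrete: computing a determinant by naive cofactor expansion takes on the order of \(n!\) multiplications, and Cramer's Rule needs \(n+1\) determinants. A rough count (our own illustration, assuming full cofactor expansion with no shortcuts):

```python
from math import factorial

# Rough multiplication count for solving an n x n system by Cramer's Rule:
# n + 1 determinants, each costing roughly n! multiplications when
# computed by naive cofactor expansion.
for n in (3, 10):
    print(n, (n + 1) * factorial(n))
```

For \(n=3\) this is a manageable 24 multiplications; for \(n=10\) it balloons to nearly 40 million, while Gaussian elimination needs only on the order of \(n^{3}\) operations.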

    The upshot of this is that Cramer’s Rule makes for a poor choice in solving numerical linear systems. It simply is not done in practice; it is hard to beat Gaussian elimination.\(^{2}\)

    So why include it? Because its truth is amazing. The determinant is a very strange operation; it produces a number in a very odd way. It should seem incredible to the reader that by manipulating determinants in a particular way, we can solve linear systems.

    In the next chapter we’ll see another use for the determinant. Meanwhile, try to develop a deeper appreciation of math: odd, complicated things that seem completely unrelated often are intricately tied together. Mathematicians see these connections and describe them as “beautiful.”


    [1] The closest we came to motivation is that if \(\text{det}(A) =0\), then we know that \(A\) is not invertible. But it seems that there may be easier ways to check.

[2] A version of Cramer’s Rule is often taught in introductory differential equations courses, as it can be used to find solutions to certain linear differential equations. In that situation, the entries of the matrices are functions rather than numbers, and computing determinants is easier than using Gaussian elimination. Again, though, as the matrices grow large, one resorts to other solution methods.

    This page titled 3.5: Cramer's Rule is shared under a CC BY-NC 3.0 license and was authored, remixed, and/or curated by Gregory Hartman et al. via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.