3.4: Applications of the Determinant

    Outcomes
    1. Use determinants to determine whether a matrix has an inverse, and evaluate the inverse using cofactors.
    2. Apply Cramer’s Rule to solve a \(2\times 2\) or a \(3\times 3\) linear system.
    3. Given data points, find an appropriate interpolating polynomial and use it to estimate points.

    A Formula for the Inverse

    The determinant of a matrix also provides a way to find the inverse of a matrix. Recall the definition of the inverse of a matrix in Definition 2.6.1. We say that \(A^{-1}\), an \(n \times n\) matrix, is the inverse of \(A\), also \(n \times n\), if \(AA^{-1} = I\) and \(A^{-1}A=I\).

    We now define a new matrix called the cofactor matrix of \(A\). The cofactor matrix of \(A\) is the matrix whose \(ij^{th}\) entry is the \(ij^{th}\) cofactor of \(A\). The formal definition is as follows.

    Definition \(\PageIndex{1}\): The Cofactor Matrix

    Let \(A=\left[ a_{ij}\right]\) be an \(n\times n\) matrix. Then the cofactor matrix of \(A\), denoted \(\mathrm{cof}\left( A\right)\), is defined by \(\mathrm{cof}\left( A\right) =\left[ \mathrm{cof}\left(A\right)_{ij}\right]\) where \(\mathrm{cof}\left(A\right)_{ij}\) is the \(ij^{th}\) cofactor of \(A\).

    Note that \(\mathrm{cof}\left(A\right)_{ij}\) denotes the \(ij^{th}\) entry of the cofactor matrix.

    We will use the cofactor matrix to create a formula for the inverse of \(A\). First, we define the adjugate of \(A\) to be the transpose of the cofactor matrix. We can also call this matrix the classical adjoint of \(A\), and we denote it by \(adj \left(A\right)\).

    In the specific case where \(A\) is a \(2 \times 2\) matrix given by \[A = \left[ \begin{array}{rr} a & b \\ c & d \end{array} \right]\nonumber \] then \({adj}\left(A\right)\) is given by \[{adj}\left(A\right) = \left[ \begin{array}{rr} d & -b \\ -c & a \end{array} \right]\nonumber \]

    In general, \({adj}\left(A\right)\) can always be found by taking the transpose of the cofactor matrix of \(A\). The following theorem provides a formula for \(A^{-1}\) using the determinant and adjugate of \(A\).

    Theorem \(\PageIndex{1}\): The Inverse and the Determinant

    Let \(A\) be an \(n\times n\) matrix. Then \[A \; {adj}\left(A\right) = {adj}\left(A\right)A = {\det \left(A\right)} I\nonumber \]

    Moreover \(A\) is invertible if and only if \(\det \left(A\right) \neq 0\). In this case we have: \[A^{-1} = \frac{1}{\det \left(A\right)} {adj}\left(A\right)\nonumber \]

    Notice that the first formula holds for any \(n \times n\) matrix \(A\); in the case that \(A\) is invertible, it gives an explicit formula for \(A^{-1}\).
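
    The formula in Theorem \(\PageIndex{1}\) is easy to translate into code. The following is a minimal sketch in Python with NumPy; the helper names `cofactor_matrix` and `adjugate_inverse` are our own for illustration, and the sketch mirrors the formula rather than providing an efficient inversion routine.

```python
import numpy as np

def cofactor_matrix(A):
    """Cofactor matrix of a square matrix A: the (i, j) entry is
    (-1)**(i + j) times the determinant of the minor obtained by
    deleting row i and column j of A."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

def adjugate_inverse(A):
    """Compute A^{-1} = (1 / det(A)) * adj(A), as in the theorem above."""
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("det(A) = 0, so A is not invertible")
    return cofactor_matrix(A).T / d   # adj(A) = transpose of cof(A)
```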

    Consider the following example.

    Example \(\PageIndex{1}\): Find Inverse Using the Determinant

    Find the inverse of the matrix \[A=\left[ \begin{array}{rrr} 1 & 2 & 3 \\ 3 & 0 & 1 \\ 1 & 2 & 1 \end{array} \right]\nonumber \] using the formula in Theorem \(\PageIndex{1}\).

    Solution

    According to Theorem \(\PageIndex{1}\), \[A^{-1} = \frac{1}{\det \left(A\right)} {adj}\left(A\right)\nonumber \]

    First we will find the determinant of this matrix. Using Theorems 3.2.1, 3.2.2, and 3.2.4, we can simplify the matrix through row operations. First, add \(-3\) times the first row to the second row. Then add \(-1\) times the first row to the third row to obtain \[B = \left[ \begin{array}{rrr} 1 & 2 & 3 \\ 0 & -6 & -8 \\ 0 & 0 & -2 \end{array} \right]\nonumber \] By Theorem 3.2.4, \(\det \left(A\right) = \det \left(B\right)\). By Theorem 3.1.2, \(\det \left(B\right) = 1 \times (-6) \times (-2) = 12\). Hence, \(\det \left(A\right) = 12\).

    Now, we need to find \({adj} \left(A\right)\). To do so, first we will find the cofactor matrix of \(A\). This is given by \[\mathrm{cof}\left( A\right) = \left[ \begin{array}{rrr} -2 & -2 & 6 \\ 4 & -2 & 0 \\ 2 & 8 & -6 \end{array} \right]\nonumber \] Here, the \(ij^{th}\) entry is the \(ij^{th}\) cofactor of the original matrix \(A\) which you can verify. Therefore, from Theorem \(\PageIndex{1}\), the inverse of \(A\) is given by \[A^{-1} = \frac{1}{12}\left[ \begin{array}{rrr} -2 & -2 & 6 \\ 4 & -2 & 0 \\ 2 & 8 & -6 \end{array} \right] ^{T}= \left[ \begin{array}{rrr} -\frac{1}{6} & \frac{1}{3} & \frac{1}{6} \\ -\frac{1}{6} & -\frac{1}{6} & \frac{2}{3} \\ \frac{1}{2} & 0 & -\frac{1}{2} \end{array} \right]\nonumber \]

    Remember that we can always verify our answer for \(A^{-1}\). Compute the products \(AA^{-1}\) and \(A^{-1}A\) and make sure each is equal to \(I\).

    Compute \(A^{-1}A\) as follows \[A^{-1}A = \left[ \begin{array}{rrr} -\frac{1}{6} & \frac{1}{3} & \frac{1}{6} \\ -\frac{1}{6} & -\frac{1}{6} & \frac{2}{3} \\ \frac{1}{2} & 0 & -\frac{1}{2} \end{array} \right] \left[ \begin{array}{rrr} 1 & 2 & 3 \\ 3 & 0 & 1 \\ 1 & 2 & 1 \end{array} \right] = \left[ \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right] = I\nonumber \] You can verify that \(AA^{-1} = I\) and hence our answer is correct.
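
    As a quick numerical check, the hypothetical `adjugate_inverse` sketch given after Theorem \(\PageIndex{1}\) reproduces this answer:

```python
A = np.array([[1, 2, 3],
              [3, 0, 1],
              [1, 2, 1]], dtype=float)

A_inv = adjugate_inverse(A)

# Both products should equal the identity, up to floating-point error.
assert np.allclose(A @ A_inv, np.eye(3))
assert np.allclose(A_inv @ A, np.eye(3))
```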

    We will look at another example of how to use this formula to find \(A^{-1}\).

    Example \(\PageIndex{2}\): Find the Inverse From a Formula

    Find the inverse of the matrix \[A=\left[ \begin{array}{rrr} \frac{1}{2} & 0 & \frac{1}{2} \\ -\frac{1}{6} & \frac{1}{3} & - \frac{1}{2} \\ -\frac{5}{6} & \frac{2}{3} & - \frac{1}{2} \end{array} \right]\nonumber \] using the formula given in Theorem \(\PageIndex{1}\).

    Solution

    First we need to find \(\det \left(A\right)\). This step is left as an exercise and you should verify that \(\det \left(A\right) = \frac{1}{6}.\) The inverse is therefore equal to \[A^{-1} = \frac{1}{(1/6)}\; {adj} \left(A\right) = 6\; {adj} \left(A\right)\nonumber \]

    We continue to calculate as follows. Here we show the \(2 \times 2\) determinants needed to find the cofactors. \[A^{-1} = 6\left[ \begin{array}{rrr} \left| \begin{array}{rr} \frac{1}{3} & -\frac{1}{2} \\ \frac{2}{3} & -\frac{1}{2} \end{array} \right| & -\left| \begin{array}{rr} -\frac{1}{6} & -\frac{1}{2} \\ -\frac{5}{6} & -\frac{1}{2} \end{array} \right| & \left| \begin{array}{rr} -\frac{1}{6} & \frac{1}{3} \\ -\frac{5}{6} & \frac{2}{3} \end{array} \right| \\ -\left| \begin{array}{rr} 0 & \frac{1}{2} \\ \frac{2}{3} & -\frac{1}{2} \end{array} \right| & \left| \begin{array}{rr} \frac{1}{2} & \frac{1}{2} \\ -\frac{5}{6} & -\frac{1}{2} \end{array} \right| & -\left| \begin{array}{rr} \frac{1}{2} & 0 \\ -\frac{5}{6} & \frac{2}{3} \end{array} \right| \\ \left| \begin{array}{rr} 0 & \frac{1}{2} \\ \frac{1}{3} & -\frac{1}{2} \end{array} \right| & -\left| \begin{array}{rr} \frac{1}{2} & \frac{1}{2} \\ -\frac{1}{6} & -\frac{1}{2} \end{array} \right| & \left| \begin{array}{rr} \frac{1}{2} & 0 \\ -\frac{1}{6} & \frac{1}{3} \end{array} \right| \end{array} \right] ^{T}\nonumber \]

    Expanding all the \(2\times 2\) determinants, this yields \[A^{-1} = 6\left[ \begin{array}{rrr} \frac{1}{6} & \frac{1}{3} & \frac{1}{6} \\ \frac{1}{3} & \frac{1}{6} & -\frac{1}{3} \\ -\frac{1}{6} & \frac{1}{6} & \frac{1}{6} \end{array} \right] ^{T}= \left[ \begin{array}{rrr} 1 & 2 & -1 \\ 2 & 1 & 1 \\ 1 & -2 & 1 \end{array} \right]\nonumber \]

    Again, you can always check your work by multiplying \(A^{-1}A\) and \(AA^{-1}\) and ensuring these products equal \(I\). \[A^{-1}A = \left[ \begin{array}{rrr} 1 & 2 & -1 \\ 2 & 1 & 1 \\ 1 & -2 & 1 \end{array} \right] \left[ \begin{array}{rrr} \frac{1}{2} & 0 & \frac{1}{2} \\ -\frac{1}{6} & \frac{1}{3} & - \frac{1}{2} \\ -\frac{5}{6} & \frac{2}{3} & - \frac{1}{2} \end{array} \right] = \left[ \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right]\nonumber \] This tells us that our calculation for \(A^{-1}\) is correct. It is left to the reader to verify that \(AA^{-1} = I\).

    The verification step is very important, as it is a simple way to check your work! If you multiply \(A^{-1}A\) and \(AA^{-1}\) and these products are not both equal to \(I\), be sure to go back and double-check each step. One common error is forgetting to take the transpose of the cofactor matrix, so be sure to complete this step.

    We will now prove Theorem \(\PageIndex{1}\).

    Theorem \(\PageIndex{1}\): The Inverse and the Determinant
    Proof

    Recall that the \((i,j)\)-entry of \({adj}(A)\) is equal to \(\mathrm{cof}(A)_{ji}\). Thus the \((i,j)\)-entry of \(B=A\cdot {adj}(A)\) is: \[B_{ij}=\sum_{k=1}^n a_{ik} {adj} (A)_{kj}= \sum_{k=1}^n a_{ik} \mathrm{cof} (A)_{jk}\nonumber \] By the cofactor expansion theorem, this expression for \(B_{ij}\) equals the determinant of the matrix obtained from \(A\) by replacing its \(j\)th row by \(a_{i1}, a_{i2}, \dots, a_{in}\), that is, by its \(i\)th row.

    If \(i=j\), then this matrix is \(A\) itself and therefore \(B_{ii}=\det A\). If on the other hand \(i\neq j\), then this matrix has its \(i\)th row equal to its \(j\)th row; a matrix with two equal rows has determinant zero, and therefore \(B_{ij}=0\) in this case. Thus we obtain: \[A \; {adj}\left(A\right) = {\det \left(A\right)} I\nonumber \] Similarly we can verify that: \[{adj}\left(A\right)A = {\det \left(A\right)} I\nonumber \] This proves the first part of the theorem.

    Further if \(A\) is invertible, then by Theorem 3.2.5 we have: \[1 = \det \left( I \right) = \det \left( A A^{-1} \right) = \det \left( A \right) \det \left( A^{-1} \right)\nonumber \] and thus \(\det \left( A \right) \neq 0\). Equivalently, if \(\det \left( A \right) = 0\), then \(A\) is not invertible.

    Finally if \(\det \left( A \right) \neq 0\), then the above formula shows that \(A\) is invertible and that: \[A^{-1} = \frac{1}{\det \left(A\right)} {adj}\left(A\right)\nonumber \]

    This completes the proof.

    This method for finding the inverse of \(A\) is useful in many contexts. In particular, it is useful with complicated matrices where the entries are functions, rather than numbers.

    Consider the following example.

    Example \(\PageIndex{3}\): Inverse for Non-Constant Matrix

    Suppose \[A\left( t\right) =\left[ \begin{array}{ccc} e^{t} & 0 & 0 \\ 0 & \cos t & \sin t \\ 0 & -\sin t & \cos t \end{array} \right]\nonumber \] Show that \(A\left( t\right) ^{-1}\) exists and then find it.

    Solution

    First note \(\det \left( A\left( t\right) \right) = e^{t}(\cos^2 t + \sin^2 t) = e^{t}\neq 0\) so \(A\left( t\right) ^{-1}\) exists.

    The cofactor matrix is \[C\left( t\right) =\left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & e^{t}\cos t & e^{t}\sin t \\ 0 & -e^{t}\sin t & e^{t}\cos t \end{array} \right]\nonumber \] and so the inverse is \[\frac{1}{e^{t}}\left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & e^{t}\cos t & e^{t}\sin t \\ 0 & -e^{t}\sin t & e^{t}\cos t \end{array} \right] ^{T}= \left[ \begin{array}{ccc} e^{-t} & 0 & 0 \\ 0 & \cos t & -\sin t \\ 0 & \sin t & \cos t \end{array} \right]\nonumber \]
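
    When the entries of the matrix are functions, a computer algebra system can carry out the same adjugate computation symbolically. Here is a small sketch using SymPy (assuming it is available); `Matrix.adjugate` computes the transpose of the cofactor matrix, so this is Theorem \(\PageIndex{1}\) applied directly.

```python
import sympy as sp

t = sp.symbols('t', real=True)
A = sp.Matrix([[sp.exp(t), 0, 0],
               [0, sp.cos(t), sp.sin(t)],
               [0, -sp.sin(t), sp.cos(t)]])

det_A = sp.simplify(A.det())               # exp(t), never zero
A_inv = sp.simplify(A.adjugate() / det_A)  # matches the matrix found above
```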

    Cramer’s Rule

    Another context in which the formula given in Theorem \(\PageIndex{1}\) is important is Cramer’s Rule. Recall that we can represent a system of linear equations in the form \(AX=B\), where the solutions to this system are given by \(X\). Cramer’s Rule gives a formula for the solutions \(X\) in the special case that \(A\) is a square invertible matrix. Note that this rule does not apply when the number of equations differs from the number of variables (that is, when \(A\) is not square), or when \(A\) is not invertible.

    Suppose we have a system of equations given by \(AX=B\), and we want to find solutions \(X\) which satisfy this system. Then recall that if \(A^{-1}\) exists, \[\begin{aligned} AX&=B \\ A^{-1}\left(AX\right)&=A^{-1}B \\ \left(A^{-1}A\right)X&=A^{-1}B \\ IX&=A^{-1}B\\ X &= A^{-1}B\end{aligned}\] Hence, the solutions \(X\) to the system are given by \(X=A^{-1}B\). Since we assume that \(A^{-1}\) exists, we can use the formula for \(A^{-1}\) given above. Substituting this formula into the equation for \(X\), we have \[X=A^{-1}B=\frac{1}{\det \left( A\right) }{adj}\left( A\right)B\nonumber \] Let \(x_i\) be the \(i^{th}\) entry of \(X\) and \(b_j\) be the \(j^{th}\) entry of \(B\). Then this equation becomes \[x_i = \sum_{j=1}^{n}\left( A^{-1}\right)_{ij}b_{j}=\sum_{j=1}^{n}\frac{1} {\det \left( A\right) } {adj}\left( A\right) _{ij}b_{j}\nonumber \] where \({adj}\left(A\right)_{ij}\) is the \(ij^{th}\) entry of \({adj}\left(A\right)\).

    By the formula for the expansion of a determinant along a column, \[x_{i}=\frac{1}{\det \left( A\right) }\det \left[ \begin{array}{ccccc} \ast & \cdots & b_{1} & \cdots & \ast \\ \vdots & & \vdots & & \vdots \\ \ast & \cdots & b_{n} & \cdots & \ast \end{array} \right]\nonumber \] where the \(i^{th}\) column of \(A\) is replaced with the column vector \(\left[ b_{1}, \cdots, b_{n}\right] ^{T}\). The determinant of this modified matrix is then divided by \(\det \left( A\right)\). This formula is known as Cramer’s rule.

    We formally define this method now.

    Procedure \(\PageIndex{1}\): Using Cramer’s Rule

    Suppose \(A\) is an \(n\times n\) invertible matrix and we wish to solve the system \(AX=B\) for \(X =\left[ x_{1},\cdots ,x_{n}\right] ^{T}.\) Then Cramer’s rule says \[x_{i}= \frac{\det \left(A_{i}\right)}{\det \left(A\right)}\nonumber \] where \(A_{i}\) is the matrix obtained by replacing the \(i^{th}\) column of \(A\) with the column matrix \[B = \left[ \begin{array}{c} b_1 \\ \vdots \\ b_n \end{array} \right]\nonumber \]
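
    Procedure \(\PageIndex{1}\) translates directly into a short routine. Below is a minimal sketch in Python with NumPy; `cramer_solve` is a name of our own choosing, and the sketch assumes, as the procedure does, that \(A\) is square and invertible.

```python
import numpy as np

def cramer_solve(A, B):
    """Solve AX = B by Cramer's rule: x_i = det(A_i) / det(A), where
    A_i is A with its i-th column replaced by B."""
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("det(A) = 0, so Cramer's rule does not apply")
    n = A.shape[0]
    X = np.zeros(n)
    for i in range(n):
        A_i = A.copy()
        A_i[:, i] = B                     # replace the i-th column by B
        X[i] = np.linalg.det(A_i) / det_A
    return X
```

    Note that this is mainly of theoretical and pedagogical interest; for large systems, Gaussian elimination is far more efficient than computing \(n+1\) determinants.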

    We illustrate this procedure in the following example.

    Example \(\PageIndex{4}\): Using Cramer's Rule

    Find \(x,y,z\) if \[\left[ \begin{array}{rrr} 1 & 2 & 1 \\ 3 & 2 & 1 \\ 2 & -3 & 2 \end{array} \right] \left[ \begin{array}{c} x \\ y \\ z \end{array} \right] =\left[ \begin{array}{r} 1 \\ 2 \\ 3 \end{array} \right]\nonumber \]

    Solution

    We will use the method outlined in Procedure \(\PageIndex{1}\) to find the values of \(x,y,z\) which give the solution to this system. Let \[B = \left[ \begin{array}{r} 1 \\ 2 \\ 3 \end{array} \right]\nonumber\]

    In order to find \(x\), we calculate \[x = \frac{\det \left(A_{1}\right)}{\det \left(A\right)}\nonumber \] where \(A_1\) is the matrix obtained from replacing the first column of \(A\) with \(B\).

    Hence, \(A_1\) is given by \[A_1 = \left[ \begin{array}{rrr} 1 & 2 & 1 \\ 2 & 2 & 1 \\ 3 & -3 & 2 \end{array} \right]\nonumber \]

    Therefore, \[x= \frac{\det \left(A_{1}\right)}{\det \left(A\right)} = \frac{\left| \begin{array}{rrr} 1 & 2 & 1 \\ 2 & 2 & 1 \\ 3 & -3 & 2 \end{array} \right| }{\left| \begin{array}{rrr} 1 & 2 & 1 \\ 3 & 2 & 1 \\ 2 & -3 & 2 \end{array} \right| }=\frac{1}{2}\nonumber \]

    Similarly, to find \(y\) we construct \(A_2\) by replacing the second column of \(A\) with \(B\). Hence, \(A_2\) is given by \[A_2 = \left[ \begin{array}{rrr} 1 & 1 & 1 \\ 3 & 2 & 1 \\ 2 & 3 & 2 \end{array} \right]\nonumber \]

    Therefore, \[y=\frac{\det \left(A_{2}\right)}{\det \left(A\right)} = \frac{\left| \begin{array}{rrr} 1 & 1 & 1 \\ 3 & 2 & 1 \\ 2 & 3 & 2 \end{array} \right| }{\left| \begin{array}{rrr} 1 & 2 & 1 \\ 3 & 2 & 1 \\ 2 & -3 & 2 \end{array} \right| }=-\frac{1}{7}\nonumber \]

    Similarly, \(A_3\) is constructed by replacing the third column of \(A\) with \(B\). Then, \(A_3\) is given by \[A_3 = \left[ \begin{array}{rrr} 1 & 2 & 1 \\ 3 & 2 & 2 \\ 2 & -3 & 3 \end{array} \right]\nonumber \]

    Therefore, \(z\) is calculated as follows.

    \[z= \frac{\det \left(A_{3}\right)}{\det \left(A\right)} = \frac{\left| \begin{array}{rrr} 1 & 2 & 1 \\ 3 & 2 & 2 \\ 2 & -3 & 3 \end{array} \right| }{\left| \begin{array}{rrr} 1 & 2 & 1 \\ 3 & 2 & 1 \\ 2 & -3 & 2 \end{array} \right| }=\frac{11}{14}\nonumber \]
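
    Applied to this system, the hypothetical `cramer_solve` sketch from earlier returns all three values at once:

```python
A = np.array([[1, 2, 1],
              [3, 2, 1],
              [2, -3, 2]], dtype=float)
B = np.array([1, 2, 3], dtype=float)

x, y, z = cramer_solve(A, B)
# x = 0.5, y = -0.14285... (-1/7), z = 0.78571... (11/14)
```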

    Cramer’s Rule gives you another tool to consider when solving a system of linear equations.

    We can also use Cramer’s Rule when the coefficient matrix has functions rather than numbers for entries. Consider the following system, in which the matrix \(A\) has entries that are functions of the variable \(t\).


    Example \(\PageIndex{5}\): Use Cramer's Rule for Non-Constant Matrix

    Solve for \(z\) if \[\left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & e^{t}\cos t & e^{t}\sin t \\ 0 & -e^{t}\sin t & e^{t}\cos t \end{array} \right] \left[ \begin{array}{c} x \\ y \\ z \end{array} \right] =\left[ \begin{array}{c} 1 \\ t \\ t^{2} \end{array} \right]\nonumber \]

    Solution

    We are asked to find the value of \(z\) in the solution. We will solve using Cramer’s rule. Thus \[z= \frac{\left| \begin{array}{ccc} 1 & 0 & 1 \\ 0 & e^{t}\cos t & t \\ 0 & -e^{t}\sin t & t^{2} \end{array} \right| }{\left| \begin{array}{ccc} 1 & 0 & 0 \\ 0 & e^{t}\cos t & e^{t}\sin t \\ 0 & -e^{t}\sin t & e^{t}\cos t \end{array} \right| }= t\left( t\cos t+\sin t\right) e^{-t}\nonumber \]
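
    The same column replacement can be performed symbolically. A sketch with SymPy (assumed available) recovers the expression for \(z\):

```python
import sympy as sp

t = sp.symbols('t', real=True)
A = sp.Matrix([[1, 0, 0],
               [0, sp.exp(t)*sp.cos(t), sp.exp(t)*sp.sin(t)],
               [0, -sp.exp(t)*sp.sin(t), sp.exp(t)*sp.cos(t)]])
B = sp.Matrix([1, t, t**2])

A_3 = A.copy()
A_3[:, 2] = B                         # replace the third column by B
z = sp.simplify(A_3.det() / A.det())
# z is equivalent to t*(t*cos(t) + sin(t))*exp(-t)
```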

    Polynomial Interpolation

    In studying a set of data that relates variables \(x\) and \(y\), it may be possible to fit a polynomial to the data. If such a polynomial can be found, it can be used to estimate values of \(y\) corresponding to values of \(x\) which were not provided.

    Consider the following example.

    Example \(\PageIndex{6}\): Polynomial Interpolation

    Given data points \((1,4), (2,9), (3,12)\), find an interpolating polynomial \(p(x)\) of degree at most \(2\) and then estimate the value corresponding to \(x = \frac{1}{2}\).

    Solution

    We want to find a polynomial of the form \[p(x) = r_0 + r_1 x + r_2 x^2\nonumber \] such that \(p(1)=4, p(2)=9\) and \(p(3)=12\). To find this polynomial, substitute the known values in for \(x\) and solve for \(r_0, r_1\), and \(r_2\). \[\begin{aligned} p(1) &= r_0 + r_1 + r_2 = 4\\ p(2) &= r_0 + 2r_1 + 4r_2 = 9\\ p(3) &= r_0 + 3r_1 + 9r_2 = 12\end{aligned}\]

    Writing the augmented matrix, we have \[\left[ \begin{array}{rrr|r} 1 & 1 & 1 & 4 \\ 1 & 2 & 4 & 9 \\ 1 & 3 & 9 & 12 \end{array} \right]\nonumber\]

    After row operations, the resulting matrix is \[\left[ \begin{array}{rrr|r} 1 & 0 & 0 & -3 \\ 0 & 1 & 0 & 8 \\ 0 & 0 & 1 & -1 \end{array} \right]\nonumber \]

    Therefore the solution to the system is \(r_0 = -3, r_1 = 8, r_2 = -1\) and the required interpolating polynomial is \[p(x) = -3 + 8x - x^2\nonumber \]

    To estimate the value for \(x = \frac{1}{2}\), we calculate \(p(\frac{1}{2})\): \[\begin{aligned} p(\frac{1}{2}) &= -3 + 8(\frac{1}{2}) - (\frac{1}{2})^2\\ &= -3 + 4 - \frac{1}{4} \\ &= \frac{3}{4}\end{aligned}\]

    This procedure can be used for any number of data points, and any degree of polynomial. The steps are outlined below.

    Procedure \(\PageIndex{2}\): Finding an Interpolation Polynomial

    Suppose that values of \(x\) and corresponding values of \(y\) are given, such that the actual relationship between \(x\) and \(y\) is unknown. Then, values of \(y\) can be estimated using an interpolating polynomial \(p(x)\). If given \(x_1, ..., x_n\) and the corresponding \(y_1, ..., y_n\), the procedure to find \(p(x)\) is as follows:

    1. The desired polynomial \(p(x)\) is given by \[p(x) = r_0 + r_1 x + r_2 x^2 + ... + r_{n-1}x^{n-1}\nonumber \]
    2. \(p(x_i) = y_i\) for all \(i = 1, 2, ...,n\) so that \[\begin{array}{c} r_0 + r_1x_1 + r_2 x_1^2 + ... + r_{n-1}x_1^{n-1} = y_1 \\ r_0 + r_1x_2 + r_2 x_2^2 + ... + r_{n-1}x_2^{n-1} = y_2 \\ \vdots \\ r_0 + r_1x_n + r_2 x_n^2 + ... + r_{n-1}x_n^{n-1} = y_n \end{array}\nonumber \]
    3. Set up the augmented matrix of this system of equations \[\left[ \begin{array}{rrrrr|r} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1} & y_1 \\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1} & y_2 \\ \vdots & \vdots & \vdots & &\vdots & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^{n-1} & y_n \\ \end{array} \right]\nonumber \]
    4. Solving this system will result in a unique solution \(r_0, r_1, \cdots, r_{n-1}\). Use these values to construct \(p(x)\), and estimate the value of \(p(a)\) for any \(x=a\); a short computational sketch of this procedure follows below.
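
    Because the coefficient matrix in step 3 is the Vandermonde matrix, the entire procedure amounts to a single linear solve. Here is a minimal NumPy sketch (the helper name `interpolating_poly` is our own), checked against the data of Example \(\PageIndex{6}\):

```python
import numpy as np

def interpolating_poly(xs, ys):
    """Coefficients r_0, ..., r_{n-1} of the interpolating polynomial
    p(x) = r_0 + r_1 x + ... + r_{n-1} x^{n-1}."""
    V = np.vander(xs, increasing=True)  # rows are [1, x_i, x_i**2, ...]
    return np.linalg.solve(V, ys)       # unique solution for distinct x_i

r = interpolating_poly(np.array([1.0, 2.0, 3.0]),
                       np.array([4.0, 9.0, 12.0]))
print(r)                         # [-3.  8. -1.], i.e. p(x) = -3 + 8x - x^2
print(np.polyval(r[::-1], 0.5))  # 0.75, matching p(1/2) = 3/4
```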

    This procedure motivates the following theorem.

    Theorem \(\PageIndex{2}\): Polynomial Interpolation

    Given \(n\) data points \((x_1, y_1), (x_2, y_2), \cdots, (x_n, y_n)\) with the \(x_i\) distinct, there is a unique polynomial \(p(x) = r_0 + r_1x + r_2x^2 + \cdots + r_{n-1}x^{n-1}\) such that \(p(x_i) = y_i\) for \(i=1,2,\cdots, n\). The resulting polynomial \(p(x)\) is called the interpolating polynomial for the data points. Existence and uniqueness follow from the fact that the coefficient matrix of the system in Procedure \(\PageIndex{2}\) is the Vandermonde matrix, whose determinant is nonzero whenever the \(x_i\) are distinct.

    We conclude this section with another example.

    Example \(\PageIndex{7}\): Polynomial Interpolation

    Consider the data points \((0,1), (1,2), (3,22), (5,66)\). Find an interpolating polynomial \(p(x)\) of degree at most three, and estimate the value of \(p(2)\).

    Solution

    The desired polynomial \(p(x)\) is given by: \[p(x) = r_0 + r_1 x + r_2x^2 + r_3x^3\nonumber \]

    Using the given points, the system of equations is \[\begin{aligned} p(0) &= r_0 = 1 \\ p(1) &= r_0 + r_1 + r_2 + r_3 = 2 \\ p(3) &= r_0 + 3r_1 + 9r_2 + 27r_3 = 22 \\ p(5) &= r_0 + 5r_1 + 25r_2 + 125r_3 = 66\end{aligned}\]

    The augmented matrix is given by: \[\left[ \begin{array}{rrrr|r} 1 & 0 & 0 & 0 & 1 \\ 1 & 1 & 1 & 1 & 2 \\ 1 & 3 & 9 & 27 & 22 \\ 1 & 5 & 25 & 125 & 66 \end{array} \right]\nonumber\]

    After row operations, the resulting matrix is \[\left[ \begin{array}{rrrr|r} 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & -2 \\ 0 & 0 & 1 & 0 & 3 \\ 0 & 0 & 0 & 1 & 0 \end{array} \right]\nonumber\]

    Therefore, \(r_0 = 1, r_1 = -2, r_2 = 3, r_3 = 0\) and \(p(x) = 1 - 2x + 3x^2\). To estimate the value of \(p(2)\), we compute \(p(2) = 1 - 2(2) + 3(2^2) = 1 - 4 + 12 = 9\).


    This page titled 3.4: Applications of the Determinant is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Ken Kuttler (Lyryx) via source content that was edited to the style and standards of the LibreTexts platform.