8.3: Systems of Linear Equations- Matrix Inverses


We concluded the previous section by showing how we can rewrite a system of linear equations as the matrix equation \(AX=B\), where \(A\) and \(B\) are known matrices and the solution matrix \(X\) of the equation corresponds to the solution of the system. In this section, we develop the method for solving such an equation. To that end, consider the system

    \[ \left\{ \begin{array}{rcr} 2x-3y & = & 16 \\ 3x+4y & = & 7 \\ \end{array} \right.\]

To write this as a matrix equation, we follow the procedure outlined at the end of the previous section. We find the coefficient matrix \(A\), the unknowns matrix \(X\) and constant matrix \(B\) to be

    \[ \begin{array}{ccc} A = \left[ \begin{array}{rr} 2 & -3 \\ 3 & 4 \\ \end{array} \right] & X = \left[ \begin{array}{r} x \\ y \\ \end{array} \right] & B = \left[ \begin{array}{r} 16 \\ 7 \\ \end{array} \right] \end{array}\]

In order to motivate how we solve a matrix equation like \(AX = B\), we revisit solving a similar equation involving real numbers. Consider the equation \(3x = 5\). To solve, we simply divide both sides by \(3\) and obtain \(x = \frac{5}{3}\). How can we go about defining an analogous process for matrices? To answer this question, we solve \(3x=5\) again, but this time, we pay attention to the properties of real numbers being used at each step. Recall that dividing by \(3\) is the same as multiplying by \(\frac{1}{3} = 3^{-1}\), the so-called multiplicative inverse of \(3\). (Every nonzero real number \(a\) has a multiplicative inverse, denoted \(a^{-1}\), such that \(a^{-1} \cdot a = a \cdot a^{-1} = 1\).)

    \[ \begin{array}{rclr} 3x & = & 5 \\ 3^{-1}(3x) & = & 3^{-1}(5) & \text{Multiply by the (multiplicative) inverse of \(3\)} \\ \left(3^{-1}\cdot 3\right) x & = & 3^{-1}(5) & \text{Associative property of multiplication} \\ 1 \cdot x & = & 3^{-1}(5) & \text{Inverse property} \\ x & = & 3^{-1}(5) & \text{Multiplicative Identity} \\ \end{array} \]

    If we wish to check our answer, we substitute \(x = 3^{-1}(5)\) into the original equation

    \[ \begin{array}{rclr} 3x & \stackrel{?}{=} & 5 \\ 3\left( 3^{-1}(5)\right) & \stackrel{?}{=} & 5 \\ \left(3 \cdot 3^{-1}\right)(5) & \stackrel{?}{=} & 5 & \text{Associative property of multiplication} \\ 1 \cdot 5 & \stackrel{?}{=} & 5 & \text{Inverse property} \\ 5 & \stackrel{\checkmark}{=} & 5 & \text{Multiplicative Identity} \\ \end{array} \]

Thinking back to our work with matrix arithmetic, we know that matrix multiplication enjoys both an associative property and a multiplicative identity. What's missing from the mix is a multiplicative inverse for the coefficient matrix \(A\). Assuming we can find such a beast, we can mimic our solution (and check) to \(3x=5\) as follows

    \[ \begin{array}{cc} \text{Solving \(AX = B\)} & \text{Checking our answer} \\ \begin{array}{rcl} AX & = & B \\ A^{-1}(AX) & = & A^{-1}B \\ \left(A^{-1}A\right) X & = & A^{-1}B \\ I_{2}X & = & A^{-1}B \\ X & = & A^{-1}B \\ \end{array} & \begin{array}{rcl} AX & \stackrel{?}{=} & B \\ A \left(A^{-1}B\right) & \stackrel{?}{=} & B \\ \left(AA^{-1}\right) B & \stackrel{?}{=}& B \\ I_{2}B & \stackrel{?}{=} & B \\ B & \stackrel{\checkmark}{=}& B \\ \end{array} \\ \end{array}\]

The matrix \(A^{-1}\) is read `\(A\)-inverse', and we will define it formally later in the section. At this stage, we have no idea if such a matrix \(A^{-1}\) exists, but that won't deter us from trying to find it. (Much like Carl's quest to find Sasquatch.) We want \(A^{-1}\) to satisfy two equations, \(A^{-1}A = I_{2}\) and \(AA^{-1} = I_{2}\), making \(A^{-1}\) necessarily a \(2 \times 2\) matrix. (Since matrix multiplication isn't necessarily commutative, at this stage these are two different equations.) Hence, we assume \(A^{-1}\) has the form

    \[ A^{-1} = \left[ \begin{array}{rr} x_{1} & x_{2} \\ x_{3} & x_{4} \\ \end{array} \right]\]

    for real numbers \(x_{1}\), \(x_{2}\), \(x_{3}\) and \(x_{4}\). For reasons which will become clear later, we focus our attention on the equation \(AA^{-1} = I_{2}\). We have

    \[\begin{array}{rcl} AA^{-1} & = & I_{2} \\ \left[ \begin{array}{rr} 2 & -3 \\ 3 & 4 \\ \end{array} \right] \left[ \begin{array}{rr} x_{1} & x_{2} \\ x_{3} & x_{4} \\ \end{array} \right] & = & \left[ \begin{array}{rr} 1 & 0 \\ 0 & 1 \\ \end{array} \right] \\ \left[ \begin{array}{rr} 2x_{1} - 3x_{3} & 2x_{2} - 3x_{4} \\ 3x_{1} +4x_{3} & 3x_{2} +4x_{4} \\ \end{array} \right] & = & \left[ \begin{array}{rr} 1 & 0 \\ 0 & 1 \\ \end{array} \right] \\ \end{array} \]

    This gives rise to two more systems of equations

    \[\begin{array}{cc} \left\{ \begin{array}{rcr} 2x_{1}-3x_{3} & = & 1 \\ 3x_{1}+4x_{3} & = & 0 \\ \end{array} \right. & \left\{ \begin{array}{rcr} 2x_{2}-3x_{4} & = & 0 \\ 3x_{2}+4x_{4} & = & 1 \\ \end{array} \right. \end{array}\]

At this point, it may seem absurd to continue with this venture. After all, the intent was to solve one system of equations, and in doing so, we have produced two more to solve. Remember, the objective of this discussion is to develop a general method which, when used in the correct scenarios, allows us to do far more than just solve a system of equations. If we set about solving these systems with the augmented matrix techniques developed earlier in this chapter, we see that not only do both systems have the same coefficient matrix, but this coefficient matrix is none other than the matrix \(A\) itself. (We will come back to this observation in a moment.)

    \[ \begin{array}{ccc} \left\{ \begin{array}{rcr} 2x_{1}-3x_{3} & = & 1 \\ 3x_{1}+4x_{3} & = & 0 \\ \end{array} \right. & \xrightarrow{\text{Encode into a matrix}} & \left[ \begin{array}{rr|r} 2 & -3 & 1 \\ 3 & 4 & 0 \\ \end{array} \right] \\ \left\{ \begin{array}{rcr} 2x_{2}-3x_{4} & = & 0 \\ 3x_{2}+4x_{4} & = & 1 \\ \end{array} \right. & \xrightarrow{\text{Encode into a matrix}} & \left[ \begin{array}{rr|r} 2 & -3 & 0 \\ 3 & 4 & 1 \\ \end{array} \right] \\ \end{array} \]

    To solve these two systems, we use Gauss-Jordan Elimination to put the augmented matrices into reduced row echelon form (we leave the details to the reader). For the first system, we get

    \[ \begin{array}{ccc} \left[ \begin{array}{rr|r} 2 & -3 & 1 \\ 3 & 4 & 0 \\ \end{array} \right] & \xrightarrow{\text{Gauss Jordan Elimination}} & \left[ \begin{array}{rr|r} 1 & 0 & \frac{4}{17} \\ 0 & 1 & -\frac{3}{17} \\ \end{array} \right] \\ \end{array}\]

which gives \(x_{1} = \frac{4}{17}\) and \(x_{3} = -\frac{3}{17}\). To solve the second system, we use the exact same row operations, in the same order, to put its augmented matrix into reduced row echelon form (think about why that works), and we obtain

    \[ \begin{array}{ccc} \left[ \begin{array}{rr|r} 2 & -3 & 0 \\ 3 & 4 & 1 \\ \end{array} \right] & \xrightarrow{\text{Gauss Jordan Elimination}} & \left[ \begin{array}{rr|r} 1 & 0 & \frac{3}{17} \\ 0 & 1 & \frac{2}{17} \\ \end{array} \right] \\ \end{array}\]

    which means \(x_{2} = \frac{3}{17}\) and \(x_{4} = \frac{2}{17}\). Hence,

    \[ A^{-1} = \left[ \begin{array}{rr} x_{1} & x_{2} \\ x_{3} & x_{4} \\ \end{array} \right] = \left[ \begin{array}{rr} \frac{4}{17} & \frac{3}{17} \\ -\frac{3}{17} & \frac{2}{17} \\ \end{array} \right] \]

    We can check to see that \(A^{-1}\) behaves as it should by computing \(AA^{-1}\)

    \[ AA^{-1} = \left[ \begin{array}{rr} 2 & -3 \\ 3 & 4 \\ \end{array} \right] \left[ \begin{array}{rr} \frac{4}{17} & \frac{3}{17} \\ -\frac{3}{17} & \frac{2}{17} \\ \end{array} \right] = \left[ \begin{array}{rr} 1 & 0 \\ 0 & 1 \\ \end{array} \right] = I_{2} \, \, \checkmark\]

    As an added bonus,

    \[ A^{-1}A = \left[ \begin{array}{rr} \frac{4}{17} & \frac{3}{17} \\ -\frac{3}{17} & \frac{2}{17} \\ \end{array} \right]\left[ \begin{array}{rr} 2 & -3 \\ 3 & 4 \\ \end{array} \right] = \left[ \begin{array}{rr} 1 & 0 \\ 0 & 1 \\ \end{array} \right] = I_{2} \, \, \checkmark\]

We can now return to the problem at hand. From our discussion at the beginning of the section, we know

    \[ X = A^{-1}B = \left[ \begin{array}{rr} \frac{4}{17} & \frac{3}{17} \\ -\frac{3}{17} & \frac{2}{17} \\ \end{array} \right]\left[ \begin{array}{r} 16 \\ 7 \\ \end{array} \right] = \left[ \begin{array}{r} 5 \\ -2 \\ \end{array} \right] \]

    so that our final solution to the system is \((x,y) = (5,-2)\).
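For readers who wish to double-check the arithmetic by machine, here is a minimal sketch using Python with NumPy (an illustration of our own; the text itself works by hand). It reproduces \(A^{-1}\) and the solution \(X = A^{-1}B\) for the system above.

```python
import numpy as np

# Coefficient matrix A and constant matrix B from the system above.
A = np.array([[2.0, -3.0],
              [3.0,  4.0]])
B = np.array([[16.0],
              [ 7.0]])

A_inv = np.linalg.inv(A)
print(A_inv * 17)   # [[ 4.  3.] [-3.  2.]], i.e. A^{-1} = (1/17)[[4, 3], [-3, 2]]
print(A_inv @ B)    # [[ 5.] [-2.]], matching (x, y) = (5, -2)
```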

    As we mentioned, the point of this exercise was not just to solve the system of linear equations, but to develop a general method for finding \(A^{-1}\). We now take a step back and analyze the foregoing discussion in a more general context. In solving for \(A^{-1}\), we used two augmented matrices, both of which contained the same entries as \(A\)

\[ \begin{array}{rcl} \left[ \begin{array}{rr|r} 2 & -3 & 1 \\ 3 & 4 & 0 \\ \end{array} \right] & = & \left[ \begin{array}{c|c} A & \begin{array}{r} 1 \\ 0 \end{array} \end{array} \right] \\ \left[ \begin{array}{rr|r} 2 & -3 & 0 \\ 3 & 4 & 1 \\ \end{array} \right] & = & \left[ \begin{array}{c|c} A & \begin{array}{r} 0 \\ 1 \end{array} \end{array} \right] \\ \end{array} \]

    We also note that the reduced row echelon forms of these augmented matrices can be written as

\[ \begin{array}{rcl} \left[ \begin{array}{rr|r} 1 & 0 & \frac{4}{17} \\ 0 & 1 & -\frac{3}{17} \\ \end{array} \right] & = & \left[ \begin{array}{c|c} I_{2} & \begin{array}{r} x_{1} \\ x_{3} \end{array} \end{array} \right] \\ \left[ \begin{array}{rr|r} 1 & 0 & \hphantom{-}\frac{3}{17} \\ 0 & 1 & \frac{2}{17} \\ \end{array} \right] & = & \left[ \begin{array}{c|c} I_{2} & \begin{array}{r} x_{2} \\ x_{4} \end{array} \end{array} \right] \\ \end{array} \]

    where we have identified the entries to the left of the vertical bar as the identity \(I_{2}\) and the entries to the right of the vertical bar as the solutions to our systems. The long and short of the solution process can be summarized as

\[ \begin{array}{ccc} \left[ \begin{array}{c|c} A & \begin{array}{r} 1 \\ 0 \end{array} \end{array} \right] & \xrightarrow{\text{Gauss Jordan Elimination}} & \left[ \begin{array}{c|c} I_{2} & \begin{array}{r} x_{1} \\ x_{3} \end{array} \end{array} \right] \\ \left[ \begin{array}{c|c} A & \begin{array}{r} 0 \\ 1 \end{array} \end{array} \right] & \xrightarrow{\text{Gauss Jordan Elimination}} & \left[ \begin{array}{c|c} I_{2} & \begin{array}{r} x_{2} \\ x_{4} \end{array} \end{array} \right] \\ \end{array} \]

    Since the row operations for both processes are the same, all of the arithmetic on the left hand side of the vertical bar is identical in both problems. The only difference between the two processes is what happens to the constants to the right of the vertical bar. As long as we keep these separated into columns, we can combine our efforts into one `super-sized' augmented matrix and describe the above process as

\[ \begin{array}{ccc} \left[ \begin{array}{c|c} A & \begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array} \end{array} \right] & \xrightarrow{\text{Gauss Jordan Elimination}} & \left[ \begin{array}{c|c} I_{2} & \begin{array}{rr} x_{1} & x_{2} \\ x_{3} & x_{4} \end{array} \end{array} \right] \end{array} \]

    We have the identity matrix \(I_{2}\) appearing as the right hand side of the first super-sized augmented matrix and the left hand side of the second super-sized augmented matrix. To our surprise and delight, the elements on the right hand side of the second super-sized augmented matrix are none other than those which comprise \(A^{-1}\). Hence, we have

    \[ \begin{array}{ccc} \left[ \begin{array}{c|c} A & I_{2} \end{array} \right] & \xrightarrow{\text{Gauss Jordan Elimination}} & \left[ \begin{array}{c|c} I_{2} & A^{-1} \end{array} \right] \end{array}\]

    In other words, the process of finding \(A^{-1}\) for a matrix \(A\) can be viewed as performing a series of row operations which transform \(A\) into the identity matrix of the same dimension. We can view this process as follows. In trying to find \(A^{-1}\), we are trying to `undo' multiplication by the matrix \(A\). The identity matrix in the super-sized augmented matrix \([A | I]\) keeps a running memory of all of the moves required to `undo' \(A\). This results in exactly what we want, \(A^{-1}\). We are now ready to formalize and generalize the foregoing discussion. We begin with the formal definition of an invertible matrix.

Definition: Invertible Matrix

An \(n \times n\) matrix \(A\) is said to be \textbf{invertible} if there exists a matrix \(A^{-1}\), read `\(A\)-inverse', such that \(A^{-1}A = AA^{-1}=I_{n}\).

Note that, as a consequence of our definition, invertible matrices are square, and as such, the conditions in the definition force the matrix \(A^{-1}\) to have the same dimensions as \(A\), that is, \(n \times n\). Since not all matrices are square, not all matrices are invertible. However, just because a matrix is square doesn't guarantee it is invertible. (See the exercises.) Our first result summarizes some of the important characteristics of invertible matrices and their inverses.

Theorem: Properties of the Matrix Inverse

    Suppose \(A\) is an \(n \times n\) matrix.

    • If \(A\) is invertible then \(A^{-1}\) is unique.
    • \(A\) is invertible if and only if \(AX = B\) has a unique solution for every \(n \times r\) matrix \(B\).

The proofs of the properties in the theorem rely on a healthy mix of definition and matrix arithmetic. To establish the first property, we assume that \(A\) is invertible and suppose the matrices \(B\) and \(C\) act as inverses for \(A\). That is, \(BA = AB = I_{n}\) and \(CA = AC = I_{n}\). We need to show that \(B\) and \(C\) are, in fact, the same matrix. To see this, we note that \(B = I_{n}B = (CA)B = C(AB) = CI_{n} = C\). Hence, any two matrices that act like \(A^{-1}\) are, in fact, the same matrix. (If this proof sounds familiar, it should; it is essentially the argument used to show that inverse functions are unique.) To prove the second property of the theorem, we note that if \(A\) is invertible then the discussion at the beginning of the section shows the solution to \(AX=B\) to be \(X = A^{-1}B\), and since \(A^{-1}\) is unique, so is \(A^{-1}B\). Conversely, if \(AX = B\) has a unique solution for every \(n \times r\) matrix \(B\), then, in particular, there is a unique solution \(X_{0}\) to the equation \(AX = I_{n}\). The solution matrix \(X_{0}\) is our candidate for \(A^{-1}\). We have \(AX_{0} = I_{n}\) by definition, but we need to also show \(X_{0}A = I_{n}\). To that end, we note that \(A\left(X_{0}A\right) = \left(AX_{0}\right)A = I_{n}A = A\). In other words, the matrix \(X_{0}A\) is a solution to the equation \(AX = A\). Clearly, \(X=I_{n}\) is also a solution to the equation \(AX = A\), and since we are assuming every such equation has a \textit{unique} solution, we must have \(X_{0}A = I_{n}\). Hence, we have \(X_{0}A = AX_{0} = I_{n}\), so that \(X_{0} = A^{-1}\) and \(A\) is invertible. The foregoing discussion justifies our quest to find \(A^{-1}\) using our super-sized augmented matrix approach

    \[ \begin{array}{ccc} \left[ \begin{array}{c|c} A & I_{n} \\ \end{array} \right] & \xrightarrow{\text{Gauss Jordan Elimination}} & \left[ \begin{array}{c|c} I_{n} & A^{-1} \\ \end{array} \right] \end{array}\]

    We are, in essence, trying to find the unique solution to the equation \(AX = I_{n}\) using row operations.
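This procedure translates directly into code. The sketch below (an illustration of our own; the text carries out these steps by hand or with a calculator) row-reduces the super-sized augmented matrix \([\,A \,|\, I_{n}\,]\) to \([\,I_{n} \,|\, A^{-1}\,]\), adding the customary row swaps so that a zero pivot does not derail the elimination.

```python
import numpy as np

def inverse_via_gauss_jordan(A):
    """Row-reduce the super-sized augmented matrix [A | I_n] to [I_n | A^{-1}]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])                      # build [A | I_n]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # best available pivot row
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("A is not invertible")
        M[[col, pivot]] = M[[pivot, col]]              # swap rows if needed
        M[col] /= M[col, col]                          # scale to get a leading 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]         # zero out the rest of the column
    return M[:, n:]                                    # right-hand block is A^{-1}

print(inverse_via_gauss_jordan([[2, -3], [3, 4]]))
# [[ 0.235...  0.176...]    ( =  4/17 and 3/17)
#  [-0.176...  0.117...]]   ( = -3/17 and 2/17)
```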

What does all of this mean for a system of linear equations? The theorem tells us that if we write the system in the form \(AX=B\), then if the coefficient matrix \(A\) is invertible, there is only one solution to the system \(-\) that is, if \(A\) is invertible, the system is consistent and independent. (It can be shown that a matrix is invertible if and only if, whenever it serves as a coefficient matrix for a system of equations, the system is consistent and independent. This amounts to the second property in the theorem with the matrices \(B\) restricted to being \(n \times 1\) matrices. Owing to how matrix multiplication is defined, being able to find unique solutions to \(AX = B\) for \(n \times 1\) matrices \(B\) gives the same statement for \(n \times r\) matrices, since we can find a unique solution to them one column at a time.) We also know that the process by which we find \(A^{-1}\) is determined completely by \(A\), and not by the constants in \(B\). This answers the question as to why we would bother doing row operations on a super-sized augmented matrix to find \(A^{-1}\) instead of an ordinary augmented matrix to solve a system: by finding \(A^{-1}\) we have done all of the row operations we ever need to do, once and for all, since we can quickly solve \textit{any} equation \(AX = B\) using \textit{one} multiplication, \(A^{-1}B\).
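To make the `once and for all' point concrete, here is a short sketch (again our own illustration): several constant matrices are stacked as the columns of a single \(n \times r\) matrix \(B\), and the single product \(A^{-1}B\) solves every system at once, one column at a time.

```python
import numpy as np

A = np.array([[2.0, -3.0],
              [3.0,  4.0]])
A_inv = np.linalg.inv(A)   # all of the row operations, done once and for all

# Three right-hand sides, stacked as the columns of one 2 x 3 matrix B.
B = np.array([[16.0, 1.0, 0.0],
              [ 7.0, 0.0, 1.0]])
print(A_inv @ B)   # column k is the solution to AX = (column k of B)
# [[ 5.     0.235...  0.176...]   first column: (x, y) = (5, -2);
#  [-2.    -0.176...  0.117...]]  the last two columns are the columns of A^{-1}
```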

Example \(\PageIndex{1}\)

    Let \(A = \left[ \begin{array}{rrr} 3 & 1 & \hphantom{-}2 \\ 0 & -1 & 5 \\ 2 & 1 & 4 \\ \end{array} \right]\)

1. Use row operations to find \(A^{-1}\). Check your answer by finding \(A^{-1}A\) and \(AA^{-1}\).
2. Use \(A^{-1}\) to solve the following systems of equations:
    a. \(\left\{ \begin{array}{rcl} 3x+y+2z & = & 26 \\ -y+5z & = & 39 \\ 2x+y+4z & = & 117 \\ \end{array} \right.\)
    b. \(\left\{ \begin{array}{rcl} 3x+y+2z & = & 4 \\ -y+5z & = & 2 \\ 2x+y+4z & = & 5 \\ \end{array} \right.\)
    c. \(\left\{ \begin{array}{rcl} 3x+y+2z & = & 1 \\ -y+5z & = & 0 \\ 2x+y+4z & = & 0 \\ \end{array} \right.\)

    Solution

    1. We begin with a super-sized augmented matrix and proceed with Gauss-Jordan elimination.

    \[\begin{array}{ccc} \left[ \begin{array}{rrr|rrr} 3 & 1 & \hphantom{-}2 & 1 & 0 & 0 \\ 0 & -1 & 5 & 0 & 1 & 0 \\ 2 & 1 & 4 & 0 & 0 & 1 \\ \end{array} \right] & \xrightarrow[\text{with \(\frac{1}{3}R1\)}]{\text{Replace \(R1\)}} & \left[ \begin{array}{rrr|rrr} 1 & \frac{1}{3} & \hphantom{-}\frac{2}{3} & \frac{1}{3} & 0 & 0 \\ 0 & -1 & 5 & 0 & 1 & 0 \\ 2 & 1 & 4 & 0 & 0 & 1 \\ \end{array} \right] \end{array}\]

\[\begin{array}{ccc} \left[ \begin{array}{rrr|rrr} 1 & \frac{1}{3} & \hphantom{-}\frac{2}{3} & \frac{1}{3} & 0 & 0 \\ 0 & -1 & 5 & 0 & 1 & 0 \\ 2 & 1 & 4 & 0 & 0 & 1 \\ \end{array} \right] & \xrightarrow[\text{\(-2R1+R3\)}]{\text{Replace \(R3\) with}} & \left[ \begin{array}{rrr|rrr} 1 & \frac{1}{3} & \hphantom{-}\frac{2}{3} & \frac{1}{3} & 0 & 0 \\ 0 & -1 & 5 & 0 & 1 & 0 \\ 0 & \frac{1}{3} & \frac{8}{3} & -\frac{2}{3} & 0 & 1 \\ \end{array} \right] \end{array}\]

    \[\begin{array}{ccc} \left[ \begin{array}{rrr|rrr} 1 & \frac{1}{3} & \hphantom{-}\frac{2}{3} & \frac{1}{3} & 0 & 0 \\ 0 & -1 & 5 & 0 & 1 & 0 \\ 0 & \frac{1}{3} & \frac{8}{3} & -\frac{2}{3} & 0 & 1 \\ \end{array} \right] & \xrightarrow[\text{with \((-1)R2\)}]{\text{Replace \(R2\)}} & \left[ \begin{array}{rrr|rrr} 1 & \hphantom{-}\frac{1}{3} & \frac{2}{3} & \frac{1}{3} & 0 & \hphantom{-}0 \\ 0 & 1 & -5 & 0 & -1 & 0 \\ 0 & \frac{1}{3} & \frac{8}{3} & -\frac{2}{3} & 0 & 1 \\ \end{array} \right] \end{array}\]

\[\begin{array}{ccc} \left[ \begin{array}{rrr|rrr} 1 & \hphantom{-}\frac{1}{3} & \frac{2}{3} & \frac{1}{3} & 0 & \hphantom{-}0 \\ 0 & 1 & -5 & 0 & -1 & 0 \\ 0 & \frac{1}{3} & \frac{8}{3} & -\frac{2}{3} & 0 & 1 \\ \end{array} \right] & \xrightarrow[\text{\(-\frac{1}{3}R2+R3\)}]{\text{Replace \(R3\) with}} & \left[ \begin{array}{rrr|rrr} 1 & \hphantom{-}\frac{1}{3} & \frac{2}{3} & \frac{1}{3} & 0 & \hphantom{-}0 \\ 0 & 1 & -5 & 0 & -1 & 0 \\ 0 & 0 & \frac{13}{3} & -\frac{2}{3} & \frac{1}{3} & 1 \\ \end{array} \right] \end{array}\]

    \[\begin{array}{ccc} \left[ \begin{array}{rrr|rrr} 1 & \hphantom{-}\frac{1}{3} & \frac{2}{3} & \frac{1}{3} & 0 & \hphantom{-}0 \\ 0 & 1 & -5 & 0 & -1 & 0 \\ 0 & 0 & \frac{13}{3} & -\frac{2}{3} & \frac{1}{3} & 1 \\ \end{array} \right] & \xrightarrow[\text{with \(\frac{3}{13}R3\)}]{\text{Replace \(R3\)}} & \left[ \begin{array}{rrr|rrr} 1 & \hphantom{-}\frac{1}{3} & \frac{2}{3} & \frac{1}{3} & 0 & 0 \\ 0 & 1 & -5 & 0 & -1 & 0 \\ 0 & 0 & 1 & -\frac{2}{13} & \frac{1}{13} & \frac{3}{13} \\ \end{array} \right] \end{array}\]

    \[\begin{array}{ccc} \left[ \begin{array}{rrr|rrr} 1 & \hphantom{-}\frac{1}{3} & \frac{2}{3} & \frac{1}{3} & 0 & 0 \\ 0 & 1 & -5 & 0 & -1 & 0 \\ 0 & 0 & 1 & -\frac{2}{13} & \frac{1}{13} & \frac{3}{13} \\ \end{array} \right] & \xrightarrow[\text{\begin{tabular}{c} Replace \(R2\) with \\ \(5R3+R2\) \end{tabular}}]{\text{\begin{tabular}{c} Replace \(R1\) with \\ \(-\frac{2}{3}R3+R1\) \end{tabular}}} & \left[ \begin{array}{rrr|rrr} 1 & \frac{1}{3} & 0 & \frac{17}{39} & -\frac{2}{39} & -\frac{2}{13} \\ 0 & 1 & 0 &-\frac{10}{13} & -\frac{8}{13} & \frac{15}{13} \\ 0 & 0 & 1 & -\frac{2}{13} & \frac{1}{13} & \frac{3}{13} \\ \end{array} \right] \end{array}\]

\[\begin{array}{ccc} \left[ \begin{array}{rrr|rrr} 1 & \frac{1}{3} & 0 & \frac{17}{39} & -\frac{2}{39} & -\frac{2}{13} \\ 0 & 1 & 0 & -\frac{10}{13} & -\frac{8}{13} & \frac{15}{13} \\ 0 & 0 & 1 & -\frac{2}{13} & \frac{1}{13} & \frac{3}{13} \\ \end{array} \right] & \xrightarrow[\text{\(-\frac{1}{3}R2+R1\)}]{\text{Replace \(R1\) with}} & \left[ \begin{array}{rrr|rrr} 1 & 0 & 0 & \frac{9}{13} & \frac{2}{13} & -\frac{7}{13} \\ 0 & 1 & 0 & -\frac{10}{13} & -\frac{8}{13} & \frac{15}{13} \\ 0 & 0 & 1 & -\frac{2}{13} & \frac{1}{13} & \frac{3}{13} \\ \end{array} \right] \end{array}\]

    We find \(A^{-1} = \left[ \begin{array}{rrr} \frac{9}{13} & \frac{2}{13} & -\frac{7}{13} \\ -\frac{10}{13} & -\frac{8}{13} & \frac{15}{13} \\ -\frac{2}{13} & \frac{1}{13} & \frac{3}{13} \\ \end{array} \right]\). To check our answer, we compute

    \[ A^{-1}A = \left[ \begin{array}{rrr} \frac{9}{13} & \frac{2}{13} & -\frac{7}{13} \\ -\frac{10}{13} & -\frac{8}{13} & \frac{15}{13} \\ -\frac{2}{13} & \frac{1}{13} & \frac{3}{13} \end{array} \right]\left[ \begin{array}{rrr} 3 & 1 & \hphantom{-}2 \\ 0 & -1 & 5 \\ 2 & 1 & 4 \end{array} \right] = \left[ \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right] = I_{3} \, \, \checkmark \]

    and

    \[ AA^{-1} = \left[ \begin{array}{rrr} 3 & 1 & \hphantom{-}2 \\ 0 & -1 & 5 \\ 2 & 1 & 4 \end{array} \right] \left[ \begin{array}{rrr} \frac{9}{13} & \frac{2}{13} & -\frac{7}{13} \\ -\frac{10}{13} & -\frac{8}{13} & \frac{15}{13} \\ -\frac{2}{13} & \frac{1}{13} & \frac{3}{13} \end{array} \right] = \left[ \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right] = I_{3} \, \, \checkmark \]

2. Each of the systems in this part has \(A\) as its coefficient matrix; the only difference between the systems is the constants, which form the matrix \(B\) in the associated matrix equation \(AX=B\). We solve each of them using the formula \(X = A^{-1}B\).
    a. \(X = A^{-1}B = \left[ \begin{array}{rrr} \frac{9}{13} & \frac{2}{13} & -\frac{7}{13} \\ -\frac{10}{13} & -\frac{8}{13} & \frac{15}{13} \\ -\frac{2}{13} & \frac{1}{13} & \frac{3}{13} \end{array} \right] \left[ \begin{array}{r} 26 \\ 39 \\ 117 \end{array}\right] = \left[ \begin{array}{r} -39 \\ 91 \\ 26 \end{array}\right]\). Our solution is \((-39,91,26)\).
    b. \(X = A^{-1}B = \left[ \begin{array}{rrr} \frac{9}{13} & \frac{2}{13} & -\frac{7}{13} \\ -\frac{10}{13} & -\frac{8}{13} & \frac{15}{13} \\ -\frac{2}{13} & \frac{1}{13} & \frac{3}{13} \end{array} \right] \left[ \begin{array}{r} 4 \\ 2 \\ 5 \end{array}\right] = \left[ \begin{array}{r} \frac{5}{13} \\ \frac{19}{13} \\ \frac{9}{13} \end{array}\right]\). We get \(\left( \frac{5}{13}, \frac{19}{13}, \frac{9}{13} \right)\).
    c. \(X = A^{-1}B = \left[ \begin{array}{rrr} \frac{9}{13} & \frac{2}{13} & -\frac{7}{13} \\ -\frac{10}{13} & -\frac{8}{13} & \frac{15}{13} \\ -\frac{2}{13} & \frac{1}{13} & \frac{3}{13} \end{array} \right] \left[ \begin{array}{r} 1 \\ 0 \\ 0 \end{array}\right] = \left[ \begin{array}{r} \frac{9}{13} \\ -\frac{10}{13} \\ -\frac{2}{13} \end{array}\right]\). We find \(\left( \frac{9}{13}, -\frac{10}{13}, -\frac{2}{13} \right)\). (Note that the solution is the first column of \(A^{-1}\); the reader is encouraged to meditate on this `coincidence'.)
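As a quick machine check on this example (a sketch of our own, not part of the original solution), the same one-inverse-many-systems idea solves parts (a), (b) and (c) with a single multiplication:

```python
import numpy as np

A = np.array([[3.0,  1.0, 2.0],
              [0.0, -1.0, 5.0],
              [2.0,  1.0, 4.0]])
A_inv = np.linalg.inv(A)
print(A_inv * 13)   # matches (1/13) [[9, 2, -7], [-10, -8, 15], [-2, 1, 3]]

# The constant matrices from parts (a), (b) and (c), one per column.
B = np.array([[ 26.0, 4.0, 1.0],
              [ 39.0, 2.0, 0.0],
              [117.0, 5.0, 0.0]])
print(A_inv @ B)    # columns: (-39, 91, 26), (5/13, 19/13, 9/13), (9/13, -10/13, -2/13)
```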

In Example \(\PageIndex{1}\), we see that finding one inverse matrix can enable us to solve an entire family of systems of linear equations. There are many examples of where this comes in handy `in the wild', and we chose our example for this section from the field of electronics. We also take this opportunity to introduce the student to how we can compute inverse matrices using the calculator.

Example \(\PageIndex{2}\)

Consider the circuit diagram below. (The authors wish to thank Don Anthan of Lakeland Community College for the design of this example.) We have two batteries with source voltages \(V\!\!B_{1}\) and \(V\!\!B_{2}\), measured in volts \(V\), along with six resistors with resistances \(R_{1}\) through \(R_{6}\), measured in kiloohms, \(k\Omega\). Using Ohm's Law and Kirchhoff's Voltage Law, we can relate the voltage supplied to the circuit by the two batteries to the voltage drops across the six resistors in order to find the four `mesh' currents: \(i_{1}\), \(i_{2}\), \(i_{3}\) and \(i_{4}\), measured in milliamps, \(mA\). If we think of electrons flowing through the circuit, we can think of the voltage sources as providing the `push' which makes the electrons move, the resistors as obstacles for the electrons to overcome, and the mesh current as a net rate of flow of electrons around the indicated loops.

(Circuit diagram: two batteries \(V\!\!B_{1}\) and \(V\!\!B_{2}\) and six resistors \(R_{1}\) through \(R_{6}\), with the four mesh currents \(i_{1}\) through \(i_{4}\) indicated on the loops.)

    The system of linear equations associated with this circuit is

    \[ \left\{ \begin{array}{rcl} \left(R_{1} + R_{3}\right)i_{1} - R_{3}i_{2} - R_{1}i_{4} & = & V\!\!B_{1} \\ -R_{3}i_{1} + \left(R_{2} + R_{3} + R_{4}\right)i_{2} - R_{4}i_{3} - R_{2}i_{4} & = & 0 \\ -R_{4}i_{2} + \left(R_{4} + R_{6}\right)i_{3} - R_{6}i_{4} & = & -V\!\!B_{2} \\ -R_{1}i_{1} - R_{2}i_{2} - R_{6}i_{3} + \left(R_{1} + R_{2} + R_{5} + R_{6}\right)i_{4} & = & 0 \\ \end{array} \right.\]

1. Assuming the resistances are all \(1 \, k\Omega\), find the mesh currents if the battery voltages are
    a. \(V\!\!B_{1} = 10 V\) and \(V\!\!B_{2} = 5 V\)
    b. \(V\!\!B_{1} = 10 V\) and \(V\!\!B_{2} = 0 V\)
    c. \(V\!\!B_{1} = 0 V\) and \(V\!\!B_{2} = 10 V\)
    d. \(V\!\!B_{1} = 10 V\) and \(V\!\!B_{2} = 10 V\)
2. Assuming \(V\!\!B_{1} = 10 V\) and \(V\!\!B_{2} = 5 V\), find the possible combinations of resistances which would yield the mesh currents found in 1(a).

    Solution

1. Substituting the resistance values into our system of equations, we get

    \[ \left\{ \begin{array}{rcl} 2i_{1} - i_{2}-i_{4} & = & V\!\!B_{1} \\ -i_{1} + 3i_{2} - i_{3} - i_{4} & = & 0 \\ -i_{2} + 2i_{3} - i_{4} & = & -V\!\!B_{2} \\ -i_{1} - i_{2}-i_{3} + 4i_{4} & = & 0 \\ \end{array} \right.\]

    This corresponds to the matrix equation \(AX = B\) where

    \[ \begin{array}{ccc} A = \left[ \begin{array}{rrrr} 2 & -1 & 0 & -1 \\ -1 & 3 & -1 & -1 \\ 0 & -1 & 2 & -1 \\ -1 & -1 & -1 & 4 \end{array} \right] & X = \left[ \begin{array}{r} i_{1} \\ i_{2} \\ i_{3} \\ i_{4} \\ \end{array} \right] & B = \left[ \begin{array}{r} V\!\!B_{1} \\ 0 \\ -V\!\!B_{2} \\ 0 \end{array} \right] \end{array}\]

Entering the matrix \(A\) into the calculator and computing its inverse, we find

    \[A^{-1} = \left[ \begin{array}{rrrr} 1.625 & \hphantom{2}1.25 & 1.125 & \hphantom{2.2}1 \\ 1.25 & 1.5 & 1.25 & 1 \\ 1.125 & 1.25 & 1.625 & 1 \\ 1 & 1 & 1 & 1 \end{array} \right].\]

    To solve the four systems given to us, we find \(X=A^{-1}B\) where the value of \(B\) is determined by the given values of \(V\!\!B_{1}\) and \(V\!\!B_{2}\)

\[\begin{array}{cccc} \text{1(a)} \quad B = \left[ \begin{array}{r} 10 \\ 0 \\ -5 \\ 0 \end{array} \right], & \text{1(b)} \quad B = \left[ \begin{array}{r} 10 \\ 0 \\ 0 \\ 0 \end{array} \right], & \text{1(c)} \quad B = \left[ \begin{array}{r} 0 \\ 0 \\ -10 \\ 0 \end{array} \right], & \text{1(d)} \quad B = \left[ \begin{array}{r} 10 \\ 0 \\ -10 \\ 0 \end{array} \right] \end{array} \]

    a. For \(V\!\!B_{1} = 10 V\) and \(V\!\!B_{2} = 5 V\), the calculator gives \(i_{1} = 10.625 \, \, mA\), \(i_{2} = 6.25 \, \, mA\), \(i_{3} = 3.125 \, \, mA\), and \(i_{4} = 5 \, \, mA\).


    b. By keeping \(V\!\!B_{1} = 10 V\) and setting \(V\!\!B_{2} = 0 V\), we are removing the effect of the second battery. We get \(i_{1} = 16.25 \, \, mA\), \(i_{2} = 12.5 \, \, mA\), \(i_{3} = 11.25 \, \, mA\), and \(i_{4} = 10 \, \, mA\).
    c. Part (c) is a symmetric situation to part (b) in so much as we are zeroing out \(V\!\!B_{1}\) and making \(V\!\!B_{2} = 10 V\). We find \(i_{1} = -11.25 \, \, mA\), \(i_{2} = -12.5 \, \, mA\), \(i_{3} = -16.25 \, \, mA\), and \(i_{4} = -10 \, \, mA\), where the negatives indicate that the current is flowing in the opposite direction from that indicated on the diagram. The reader is encouraged to study the symmetry here, and if need be, hold up a mirror to the diagram to literally `see' what is happening.
    d. For \(V\!\!B_{1} = 10 V\) and \(V\!\!B_{2} = 10 V\), we get \(i_{1} = 5 \, \, mA\), \(i_{2} = 0 \, \, mA\), \(i_{3} = -5 \, \, mA\), and \(i_{4} = 0 \, \, mA\). The mesh currents \(i_{2}\) and \(i_{4}\) being zero is a consequence of both batteries `pushing' in equal but opposite directions, causing the net flow of electrons in these two regions to cancel out. (All four parts can be verified at once, as in the sketch below.)
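For readers without a graphing calculator, the following sketch (an illustration of our own; the text uses the calculator) reproduces all four parts at once by stacking the four constant matrices, each of the form \(B = [V\!\!B_{1}, 0, -V\!\!B_{2}, 0]^{T}\), as columns:

```python
import numpy as np

A = np.array([[ 2.0, -1.0,  0.0, -1.0],
              [-1.0,  3.0, -1.0, -1.0],
              [ 0.0, -1.0,  2.0, -1.0],
              [-1.0, -1.0, -1.0,  4.0]])
A_inv = np.linalg.inv(A)   # agrees with the decimal matrix displayed above

# Columns: parts (a) through (d), each of the form [VB1, 0, -VB2, 0].
B = np.array([[10.0, 10.0,   0.0,  10.0],
              [ 0.0,  0.0,   0.0,   0.0],
              [-5.0,  0.0, -10.0, -10.0],
              [ 0.0,  0.0,   0.0,   0.0]])
print(A_inv @ B)   # each column gives (i1, i2, i3, i4) in mA for parts (a)-(d)
```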
2. We now turn the tables and are given \(V\!\!B_{1} = 10 V\), \(V\!\!B_{2} = 5 V\), \(i_{1} = 10.625 \, \, mA\), \(i_{2} = 6.25 \, \, mA\), \(i_{3} = 3.125 \, \, mA\) and \(i_{4} = 5 \, \, mA\), and our unknowns are the resistance values. Rewriting our system of equations, we get

    \[ \left\{ \begin{array}{rcr} 5.625R_{1} + 4.375R_{3}& = & 10 \\ 1.25R_{2} - 4.375R_{3} + 3.125R_{4}& = & 0 \\ -3.125R_{4} - 1.875R_{6} & = & -5 \\ -5.625R_{1} - 1.25R_{2} + 5R_{5} + 1.875R_{6} & = & 0 \\ \end{array} \right.\]

    The coefficient matrix for this system is \(4 \times 6\) (4 equations with 6 unknowns) and is therefore not invertible. We do know, however, this system is consistent, since setting all the resistance values equal to \(1\) corresponds to our situation in problem 1a. This means we have an underdetermined consistent system which is necessarily dependent. To solve this system, we encode it into an augmented matrix

\[ \left[ \begin{array}{rrrrrr|r} 5.625 & 0 & 4.375 & 0 & \hphantom{1.2}0 & 0 & 10 \\ 0 & 1.25 & -4.375 & 3.125 & 0 & 0 & 0 \\ 0 & 0 & 0 & -3.125 & 0 & -1.875 & -5 \\ -5.625 & -1.25 & 0 & 0 & 5 & 1.875 & 0 \\ \end{array} \right] \]

and use the calculator to put it in reduced row echelon form

    \[\left[ \begin{array}{rrrrrr|r} 1 & \hphantom{-1.}0 & 0.\overline{7} & \hphantom{-1.}0 & \hphantom{-1.}0 & 0 & 1.\overline{7} \\ 0 & 1 & -3.5 & 0 & 0 & -1.5 & -4 \\ 0 & 0 & 0 & 1 & 0 & 0.6 & 1.6 \\ 0 & 0 & 0 & 0 & 1 & 0 & 1 \\ \end{array} \right] \]

    Decoding this system from the matrix, we get

    \[ \left\{ \begin{array}{rcr} R_{1} + 0.\overline{7}R_{3}& = & 1.\overline{7} \\ R_{2} - 3.5R_{3} - 1.5R_{6}& = & -4 \\ R_{4} + 0.6R_{6} & = & 1.6 \\ R_{5}& = & 1 \\ \end{array} \right.\]

We can solve for \(R_{1}\), \(R_{2}\), \(R_{4}\) and \(R_{5}\), leaving \(R_{3}\) and \(R_{6}\) as free variables. Labeling \(R_{3} = s\) and \(R_{6} = t\), we have \(R_{1} = -0.\overline{7}s + 1.\overline{7}\), \(R_{2} = 3.5s + 1.5t - 4\), \(R_{4} = -0.6t + 1.6\) and \(R_{5} = 1\). Since resistance values are always positive, we need to restrict our values of \(s\) and \(t\). We know \(R_{3} = s > 0\), and when we combine that with \(R_{1} = -0.\overline{7}s + 1.\overline{7} > 0\), we get \(0 < s < \frac{16}{7}\). Similarly, \(R_{6} = t > 0\), and with \(R_{4} = -0.6t + 1.6 > 0\), we find \(0 < t < \frac{8}{3}\). In order to visualize the inequality \(R_{2} = 3.5s + 1.5t - 4 > 0\), we graph the line \(3.5s + 1.5t - 4 = 0\) on the \(st\)-plane and shade accordingly. Imposing the additional conditions \(0 < s < \frac{16}{7}\) and \(0 < t < \frac{8}{3}\), we find that the values of \(s\) and \(t\) are pulled from the region \(\left\{ (s,t) : 0 < s < \frac{16}{7}, \, \, 0 < t < \frac{8}{3}, \, \, 3.5s+1.5t-4 > 0 \right\}\). The reader is encouraged to check that the solution presented in 1(a), namely all resistance values equal to \(1\), corresponds to a pair \((s,t)\) in this region.
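The row reduction for this final, underdetermined system can also be checked by machine. Here is a minimal sketch using SymPy (an illustration of our own; the text uses a calculator), with exact rationals so the repeating decimals \(0.\overline{7}\) and \(1.\overline{7}\) appear as \(\frac{7}{9}\) and \(\frac{16}{9}\):

```python
from sympy import Matrix, Rational as R

# Augmented matrix for the resistance system: columns R1, ..., R6, then the constants.
M = Matrix([
    [ R(45, 8),        0,  R(35, 8),         0, 0,         0, 10],
    [        0,  R(5, 4), R(-35, 8),  R(25, 8), 0,         0,  0],
    [        0,        0,         0, R(-25, 8), 0, R(-15, 8), -5],
    [R(-45, 8), R(-5, 4),         0,         0, 5,  R(15, 8),  0],
])

reduced, pivots = M.rref()
print(reduced)   # rows encode R1 + (7/9)R3 = 16/9, R2 - (7/2)R3 - (3/2)R6 = -4, ...
print(pivots)    # (0, 1, 3, 4): pivots in R1, R2, R4, R5, so R3 and R6 are free
```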

