
2.7: Finding the Inverse of a Matrix


    In Example 2.6.1, we were given \(A^{-1}\) and asked to verify that this matrix was in fact the inverse of \(A\). In this section, we explore how to find \(A^{-1}\).

    Let \[A=\left[ \begin{array}{rr} 1 & 1 \\ 1 & 2 \end{array} \right]\nonumber \] as in Example 2.6.1. In order to find \(A^{-1}\), we need to find a matrix \(\left[ \begin{array}{rr} x & z \\ y & w \end{array} \right]\) such that \[\left[ \begin{array}{rr} 1 & 1 \\ 1 & 2 \end{array} \right] \left[ \begin{array}{rr} x & z \\ y & w \end{array} \right] =\left[ \begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array} \right]\nonumber \] We can multiply these two matrices, and see that in order for this equation to be true, we must find the solution to the systems of equations, \[\begin{array}{c} x+y=1 \\ x+2y=0 \end{array}\nonumber\] and \[\begin{array}{c} z+w=0 \\ z+2w=1 \end{array}\nonumber \] Writing the augmented matrix for these two systems gives \[\left[ \begin{array}{rr|r} 1 & 1 & 1 \\ 1 & 2 & 0 \end{array} \right] \nonumber \] for the first system and \[\left[ \begin{array}{rr|r} 1 & 1 & 0 \\ 1 & 2 & 1 \end{array} \right] \label{inverse2a}\] for the second.

    Let’s solve the first system. Take \(-1\) times the first row and add to the second to get \[\left[ \begin{array}{rr|r} 1 & 1 & 1 \\ 0 & 1 & -1 \end{array} \right]\nonumber \] Now take \(-1\) times the second row and add to the first to get \[\left[ \begin{array}{rr|r} 1 & 0 & 2 \\ 0 & 1 & -1 \end{array} \right]\nonumber \] Writing in terms of variables, this says \(x=2\) and \(y=-1.\)

    Now solve the second system, \(\eqref{inverse2a}\), to find \(z\) and \(w.\) You will find that \(z = -1\) and \(w = 1\).

    If we take the values found for \(x,y,z,\) and \(w\) and put them into our inverse matrix, we see that the inverse is \[A^{-1} = \left[ \begin{array}{rr} x & z \\ y & w \end{array} \right] = \left[ \begin{array}{rr} 2 & -1 \\ -1 & 1 \end{array} \right]\nonumber \]

    After taking the time to solve the second system, you may have noticed that exactly the same row operations were used to solve both systems. In each case, the end result was something of the form \(\left[ I|X\right]\), where \(I\) is the identity matrix and \(X\) gave a column of the inverse. In the above, solving the first system gave the first column of the inverse, \[\left[ \begin{array}{c} x \\ y \end{array} \right]\nonumber \] and solving the second system gave the second column, \[\left[ \begin{array}{c} z \\ w \end{array} \right]\nonumber \]

    To simplify this procedure, we could have solved both systems at once! To do so, we could have written \[\left[ \begin{array}{rr|rr} 1 & 1 & 1 & 0 \\ 1 & 2 & 0 & 1 \end{array} \right]\nonumber \]

    and row reduced until we obtained \[\left[ \begin{array}{rr|rr} 1 & 0 & 2 & -1 \\ 0 & 1 & -1 & 1 \end{array} \right]\nonumber \] and read off the inverse as the \(2\times 2\) matrix on the right side.
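The text itself uses no software, but this two-column computation can be spot-checked numerically. The following sketch (using NumPy, an assumption, not part of the text) performs exactly the two row operations described above on the doubly augmented matrix:

```python
import numpy as np

# Augment A with the 2x2 identity to form [A | I]
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
M = np.hstack([A, np.eye(2)])

# Take -1 times the first row and add it to the second row
M[1] -= M[0]
# Take -1 times the second row and add it to the first row
M[0] -= M[1]

# The right-hand 2x2 block is now the inverse
A_inv = M[:, 2:]
print(A_inv)   # [[ 2. -1.] [-1.  1.]]
```

As a sanity check, `A @ A_inv` reproduces the identity matrix.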

    This exploration motivates the following important algorithm.

    Algorithm \(\PageIndex{1}\): Matrix Inverse Algorithm

    Suppose \(A\) is an \(n\times n\) matrix. To find \(A^{-1}\) if it exists, form the augmented \(n\times 2n\) matrix \[\left[ A|I\right]\nonumber \] If possible, do row operations until you obtain an \(n\times 2n\) matrix of the form \[\left[ I|B\right]\nonumber \] When this has been done, \(B=A^{-1}.\) In this case, we say that \(A\) is invertible. If it is impossible to row reduce to a matrix of the form \(\left[ I|B\right] ,\) then \(A\) has no inverse.

    This algorithm shows how to find the inverse if it exists. It will also tell you if \(A\) does not have an inverse.
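As a sketch only, the algorithm can be implemented directly. The function name `invert`, the use of NumPy, and the partial-pivoting detail (swapping in the largest available pivot, for numerical stability) are assumptions beyond the text:

```python
import numpy as np

def invert(A):
    """Gauss-Jordan sketch: row reduce [A | I] to [I | B], so B = A^{-1}.
    Raises ValueError if no pivot can be found (A is not invertible)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])              # the n x 2n matrix [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))   # best pivot row
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is not invertible")
        M[[col, pivot]] = M[[pivot, col]]      # swap pivot row into place
        M[col] /= M[col, col]                  # scale the pivot to 1
        for r in range(n):                     # clear the rest of the column
            if r != col:
                M[r] -= M[r, col] * M[col]
    return M[:, n:]                            # right-hand block is A^{-1}
```

For the opening example, `invert([[1, 1], [1, 2]])` returns the matrix \(\left[ \begin{array}{rr} 2 & -1 \\ -1 & 1 \end{array} \right]\) found above.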

    Consider the following example.

    Example \(\PageIndex{1}\): Finding the Inverse

    Let \(A=\left[ \begin{array}{rrr} 1 & 2 & 2 \\ 1 & 0 & 2 \\ 3 & 1 & -1 \end{array} \right]\). Find \(A^{-1}\) if it exists.

    Solution

    Set up the augmented matrix \[\left[ A|I\right] = \left[ \begin{array}{rrr|rrr} 1 & 2 & 2 & 1 & 0 & 0 \\ 1 & 0 & 2 & 0 & 1 & 0 \\ 3 & 1 & -1 & 0 & 0 & 1 \end{array} \right]\nonumber \]

    Now we row reduce, with the goal of obtaining the \(3 \times 3\) identity matrix on the left-hand side. First, take \(-1\) times the first row and add it to the second row, followed by \(-3\) times the first row added to the third row. This yields \[\left[ \begin{array}{rrr|rrr} 1 & 2 & 2 & 1 & 0 & 0 \\ 0 & -2 & 0 & -1 & 1 & 0 \\ 0 & -5 & -7 & -3 & 0 & 1 \end{array} \right]\nonumber \] Next, multiply the second row by \(5\), and replace the third row by \(5\) times the second row plus \(-2\) times the third row. \[\left[ \begin{array}{rrr|rrr} 1 & 2 & 2 & 1 & 0 & 0 \\ 0 & -10 & 0 & -5 & 5 & 0 \\ 0 & 0 & 14 & 1 & 5 & -2 \end{array} \right]\nonumber \] Then replace the first row by \(-7\) times itself plus the third row. This yields \[\left[ \begin{array}{rrr|rrr} -7 & -14 & 0 & -6 & 5 & -2 \\ 0 & -10 & 0 & -5 & 5 & 0 \\ 0 & 0 & 14 & 1 & 5 & -2 \end{array} \right]\nonumber \] Now take \(-\frac{7}{5}\) times the second row and add it to the first row. \[\left[ \begin{array}{rrr|rrr} -7 & 0 & 0 & 1 & -2 & -2 \\ 0 & -10 & 0 & -5 & 5 & 0 \\ 0 & 0 & 14 & 1 & 5 & -2 \end{array} \right]\nonumber \] Finally, divide the first row by \(-7\), the second row by \(-10\), and the third row by \(14\), which yields \[\left[ \begin{array}{rrr|rrr} 1 & 0 & 0 & - \ \frac{1}{7} & \ \frac{2}{7} & \ \frac{2}{7} \\ 0 & 1 & 0 & \ \frac{1}{2} & - \ \frac{1}{2} & 0 \\ 0 & 0 & 1 & \ \frac{1}{14} & \ \frac{5}{14} & - \ \frac{1}{7} \end{array} \right]\nonumber \] Notice that the left-hand side of this matrix is now the \(3 \times 3\) identity matrix \(I_3\). Therefore, the inverse is the \(3 \times 3\) matrix on the right-hand side, given by \[\left[ \begin{array}{rrr} - \ \frac{1}{7} & \ \frac{2}{7} & \ \frac{2}{7} \\ \ \frac{1}{2} & - \ \frac{1}{2} & 0 \\ \ \frac{1}{14} & \ \frac{5}{14} & - \ \frac{1}{7} \end{array} \right]\nonumber \]
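The result of this example can be verified by checking that both products with the original matrix give the identity, as in Example 2.6.1. A numerical check (NumPy here is an assumption, not part of the text):

```python
import numpy as np

A = np.array([[1.0, 2.0, 2.0],
              [1.0, 0.0, 2.0],
              [3.0, 1.0, -1.0]])

# The inverse found by the row reduction above
A_inv = np.array([[-1/7,  2/7,  2/7],
                  [ 1/2, -1/2,  0.0],
                  [1/14, 5/14, -1/7]])

# Both products should be the 3x3 identity matrix
print(np.allclose(A @ A_inv, np.eye(3)))   # True
print(np.allclose(A_inv @ A, np.eye(3)))   # True
```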

    It may happen that through this algorithm, you discover that the left hand side cannot be row reduced to the identity matrix. Consider the following example of this situation.

    Example \(\PageIndex{2}\): A Matrix Which Has No Inverse

    Let \(A=\left[ \begin{array}{rrr} 1 & 2 & 2 \\ 1 & 0 & 2 \\ 2 & 2 & 4 \end{array} \right]\). Find \(A^{-1}\) if it exists.

    Solution

    Write the augmented matrix \(\left[ A|I\right]\) \[\left[ \begin{array}{rrr|rrr} 1 & 2 & 2 & 1 & 0 & 0 \\ 1 & 0 & 2 & 0 & 1 & 0 \\ 2 & 2 & 4 & 0 & 0 & 1 \end{array} \right]\nonumber \] and proceed to do row operations attempting to obtain \(\left[ I|A^{-1}\right] .\) Take \(-1\) times the first row and add to the second. Then take \(-2\) times the first row and add to the third row. \[\left[ \begin{array}{rrr|rrr} 1 & 2 & 2 & 1 & 0 & 0 \\ 0 & -2 & 0 & -1 & 1 & 0 \\ 0 & -2 & 0 & -2 & 0 & 1 \end{array} \right]\nonumber \] Next add \(-1\) times the second row to the third row. \[\left[ \begin{array}{rrr|rrr} 1 & 2 & 2 & 1 & 0 & 0 \\ 0 & -2 & 0 & -1 & 1 & 0 \\ 0 & 0 & 0 & -1 & -1 & 1 \end{array} \right]\nonumber \] At this point, you can see there will be no way to obtain \(I\) on the left side of this augmented matrix. Hence, there is no way to complete this algorithm, and therefore the inverse of \(A\) does not exist. In this case, we say that \(A\) is not invertible.
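The failure in this example can also be seen numerically. The sketch below (NumPy is an assumption; the text works only by hand) confirms that \(A\) has rank \(2\), not \(3\), because the third row is the sum of the first two, so no inverse exists:

```python
import numpy as np

A = np.array([[1.0, 2.0, 2.0],
              [1.0, 0.0, 2.0],
              [2.0, 2.0, 4.0]])

# The third row is the sum of the first two, so the rank is 2 < 3
print(np.linalg.matrix_rank(A))   # 2

# NumPy reaches the same conclusion: attempting the inverse fails
try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as err:
    print("not invertible:", err)
```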

    If the algorithm provides an inverse for the original matrix, it is always possible to check your answer. To do so, use the method demonstrated in Example 2.6.1. Check that the products \(AA^{-1}\) and \(A^{-1}A\) both equal the identity matrix. Through this method, you can always be sure that you have calculated \(A^{-1}\) properly!

    One way in which the inverse of a matrix is useful is in finding the solution of a system of linear equations. Recall from Definition 2.2.4 that we can write a system of equations in matrix form, which is of the form \(AX=B\). Suppose you find the inverse \(A^{-1}\) of the matrix \(A\). Then you could multiply both sides of this equation on the left by \(A^{-1}\) and simplify to obtain \[\begin{array}{c} A^{-1}\left( AX\right) =A^{-1}B \\ \left(A^{-1}A\right) X = A^{-1}B \\ IX = A^{-1}B \\ X = A^{-1}B \end{array}\nonumber \] Therefore we can find \(X\), the solution to the system, by computing \(X=A^{-1}B\). Note that once you have found \(A^{-1}\), you can easily obtain the solution for a different right-hand side \(B\): it is always just \(A^{-1}B\).

    We will explore this method of finding the solution to a system in the following example.

    Example \(\PageIndex{3}\): Using the Inverse to Solve a System of Equations

    Consider the following system of equations. Use the inverse of a suitable matrix to give the solutions to this system. \[\begin{array}{c} x+z=1 \\ x-y+z=3 \\ x+y-z=2 \end{array}\nonumber \]

    Solution

    First, we can write the system of equations in matrix form \[AX = \left[ \begin{array}{rrr} 1 & 0 & 1 \\ 1 & -1 & 1 \\ 1 & 1 & -1 \end{array} \right] \left[ \begin{array}{r} x \\ y \\ z \end{array} \right] =\left[ \begin{array}{r} 1 \\ 3 \\ 2 \end{array} \right] = B \label{inversesystem1}\]

    The inverse of the matrix \[A = \left[ \begin{array}{rrr} 1 & 0 & 1 \\ 1 & -1 & 1 \\ 1 & 1 & -1 \end{array} \right]\nonumber \] is \[A^{-1} = \left[ \begin{array}{rrr} 0 & \ \frac{1}{2} & \ \frac{1}{2} \\ 1 & -1 & 0 \\ 1 & - \ \frac{1}{2} & - \ \frac{1}{2} \end{array} \right]\nonumber \]

    Verifying this inverse is left as an exercise.

    From here, the solution to the given system \(\eqref{inversesystem1}\) is found by \[\left[ \begin{array}{r} x \\ y \\ z \end{array} \right] = A^{-1}B = \left[ \begin{array}{rrr} 0 & \ \frac{1}{2} & \ \frac{1}{2} \\ 1 & -1 & 0 \\ 1 & - \ \frac{1}{2} & - \ \frac{1}{2} \end{array} \right] \left[ \begin{array}{r} 1 \\ 3 \\ 2 \end{array} \right] =\left[ \begin{array}{r} \ \frac{5}{2} \\ -2 \\ - \ \frac{3}{2} \end{array} \right]\nonumber \]

    What if the right side, \(B\), of \(\eqref{inversesystem1}\) had been \(\left[ \begin{array}{r} 0 \\ 1 \\ 3 \end{array} \right] ?\) In other words, what would be the solution to \[\left[ \begin{array}{rrr} 1 & 0 & 1 \\ 1 & -1 & 1 \\ 1 & 1 & -1 \end{array} \right] \left[ \begin{array}{r} x \\ y \\ z \end{array} \right] =\left[ \begin{array}{r} 0 \\ 1 \\ 3 \end{array} \right] ?\nonumber \] By the above discussion, the solution is given by \[\left[ \begin{array}{r} x \\ y \\ z \end{array} \right] = A^{-1}B = \left[ \begin{array}{rrr} 0 & \ \frac{1}{2} & \ \frac{1}{2} \\ 1 & -1 & 0 \\ 1 & - \ \frac{1}{2} & - \ \frac{1}{2} \end{array} \right] \left[ \begin{array}{r} 0 \\ 1 \\ 3 \end{array} \right] =\left[ \begin{array}{r} 2 \\ -1 \\ -2 \end{array} \right]\nonumber \] This illustrates that for a system \(AX=B\) where \(A^{-1}\) exists, it is easy to find the solution when the vector \(B\) is changed.
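Both solves in this example reuse a single inverse. A numerical sketch of that idea (NumPy is an assumption beyond the text):

```python
import numpy as np

A = np.array([[1.0,  0.0,  1.0],
              [1.0, -1.0,  1.0],
              [1.0,  1.0, -1.0]])
A_inv = np.linalg.inv(A)                 # computed once

# Reuse the same inverse for both right-hand sides
X1 = A_inv @ np.array([1.0, 3.0, 2.0])   # solution [5/2, -2, -3/2]
X2 = A_inv @ np.array([0.0, 1.0, 3.0])   # solution [2, -1, -2]
```

In numerical practice, `np.linalg.solve(A, B)` (or an LU factorization reused across right-hand sides) is preferred over forming the inverse explicitly, but the textbook method above gives the same answers here.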

    We conclude this section with some important properties of the inverse.

    Theorem \(\PageIndex{1}\): Inverses of Transposes and Products

    Let \(A, B\), and \(A_i\) for \(i=1,...,k\) be \(n \times n\) matrices.

    1. If \(A\) is an invertible matrix, then \((A^{T})^{-1} = (A^{-1})^{T}\)
    2. If \(A\) and \(B\) are invertible matrices, then \(AB\) is invertible and \((AB)^{-1} = B^{-1}A^{-1}\)
    3. If \(A_1, A_2, ..., A_k\) are invertible, then the product \(A_1A_2 \cdots A_k\) is invertible, and \((A_1A_2 \cdots A_k)^{-1} = A_k^{-1}A_{k-1}^{-1} \cdots A_2^{-1}A_1^{-1}\)
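These properties can be spot-checked numerically; the following sketch (NumPy and the particular matrices are assumptions) uses the matrix from the opening example together with a second invertible matrix:

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 2.0]])
B = np.array([[2.0, 1.0], [1.0, 1.0]])
inv = np.linalg.inv

# Property 1: the inverse of the transpose is the transpose of the inverse
assert np.allclose(inv(A.T), inv(A).T)

# Property 2: (AB)^{-1} = B^{-1} A^{-1} -- note the reversed order
assert np.allclose(inv(A @ B), inv(B) @ inv(A))

# The naive (unreversed) order is wrong in general
assert not np.allclose(inv(A @ B), inv(A) @ inv(B))
```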

    Consider the following theorem.

    Theorem \(\PageIndex{2}\): Properties of the Inverse

    Let \(A\) be an \(n \times n\) matrix and \(I\) the usual identity matrix.

    1. \(I\) is invertible and \(I^{-1} = I\)
    2. If \(A\) is invertible then so is \(A^{-1}\), and \((A^{-1})^{-1} = A\)
    3. If \(A\) is invertible then so is \(A^k\), and \((A^k)^{-1} = (A^{-1})^k\)
    4. If \(A\) is invertible and \(p\) is a nonzero real number, then \(pA\) is invertible and \((pA)^{-1} = \frac{1}{p}A^{-1}\)
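Each of the four properties can likewise be verified numerically for a concrete matrix; the choices of \(k=3\) and \(p=5\) below are arbitrary illustrations (NumPy again being an assumption):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 2.0]])
inv = np.linalg.inv
I = np.eye(2)

assert np.allclose(inv(I), I)              # 1. I^{-1} = I
assert np.allclose(inv(inv(A)), A)         # 2. (A^{-1})^{-1} = A
k = 3
assert np.allclose(inv(np.linalg.matrix_power(A, k)),
                   np.linalg.matrix_power(inv(A), k))   # 3. (A^k)^{-1} = (A^{-1})^k
p = 5.0
assert np.allclose(inv(p * A), inv(A) / p) # 4. (pA)^{-1} = (1/p) A^{-1}
```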

    This page titled 2.7: Finding the Inverse of a Matrix is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Ken Kuttler (Lyryx) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
