$$\newcommand{\id}{\mathrm{id}}$$ $$\newcommand{\Span}{\mathrm{span}}$$ $$\newcommand{\kernel}{\mathrm{null}\,}$$ $$\newcommand{\range}{\mathrm{range}\,}$$ $$\newcommand{\RealPart}{\mathrm{Re}}$$ $$\newcommand{\ImaginaryPart}{\mathrm{Im}}$$ $$\newcommand{\Argument}{\mathrm{Arg}}$$ $$\newcommand{\norm}{\| #1 \|}$$ $$\newcommand{\inner}{\langle #1, #2 \rangle}$$ $$\newcommand{\Span}{\mathrm{span}}$$

# 12.2: Matrix Inversion


## Developing the Process

In Chapter 5 we defined the inverse of an $$n \times n$$ matrix. We noted that not all matrices have inverses, but when the inverse of a matrix exists, it is unique. This enables us to define the inverse of an $$n \times n$$ matrix $$A$$ as the unique matrix $$B$$ such that $$A B = B A =I\text{,}$$ where $$I$$ is the $$n \times n$$ identity matrix. In order to get some practical experience, we developed a formula that allowed us to determine the inverse of invertible $$2\times 2$$ matrices. We will now use the Gauss-Jordan procedure for solving systems of linear equations to compute the inverses, when they exist, of $$n\times n$$ matrices, $$n \geq 2\text{.}$$ The following procedure for a $$3\times 3$$ matrix can be generalized for $$n\times n$$ matrices, $$n\geq 2\text{.}$$

Given the matrix $$A = \left( \begin{array}{ccc} 1 & 1 & 2 \\ 2 & 1 & 4 \\ 3 & 5 & 1 \\ \end{array} \right)\text{,}$$ we want to find its inverse, the matrix $$B=\left( \begin{array}{ccc} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \\ \end{array} \right)\text{,}$$ if it exists, such that $$A B = I$$ and $$B A = I\text{.}$$ We will concentrate on finding a matrix that satisfies the first equation and then verify that $$B$$ also satisfies the second equation.

The equation

\begin{equation*} \left( \begin{array}{ccc} 1 & 1 & 2 \\ 2 & 1 & 4 \\ 3 & 5 & 1 \\ \end{array} \right)\left( \begin{array}{ccc} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \\ \end{array} \right)= \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) \end{equation*}

is equivalent to

\begin{equation*} \left( \begin{array}{ccc} x_{11}+x_{21}+2 x_{31} & x_{12}+x_{22}+2 x_{32} & x_{13}+x_{23}+2 x_{33} \\ 2 x_{11}+x_{21}+4 x_{31} & 2 x_{12}+x_{22}+4 x_{32} & 2 x_{13}+x_{23}+4 x_{33} \\ 3 x_{11}+5 x_{21}+x_{31} & 3 x_{12}+5 x_{22}+x_{32} & 3 x_{13}+5 x_{23}+x_{33} \\ \end{array} \right)= \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) \end{equation*}

By definition of equality of matrices, this gives us three systems of equations to solve. The augmented matrix of one of the systems, the one equating the first columns of the two matrices, is:

$\label{eq:1}\left(\begin{array}{ccc|c}1&1&2&1\\2&1&4&0\\3&5&1&0\end{array}\right)$

Using the Gauss-Jordan algorithm, we have:

\begin{equation*} \begin{split} \left( \begin{array}{ccc|c} 1 & 1 & 2 & 1 \\ 2 & 1 & 4 & 0 \\ 3 & 5 & 1 & 0 \\ \end{array} \right) & \overset{-2 R_1+R_2}{\longrightarrow }\textrm{ }\left( \begin{array}{ccc|c} 1 & 1 & 2 & 1 \\ 0 & -1 & 0 & -2 \\ 3 & 5 & 1 & 0 \\ \end{array} \right) \overset{-3 R_1+R_3}{\longrightarrow }\textrm{ }\left( \begin{array}{ccc|c} 1 & 1 & 2 & 1 \\ 0 & -1 & 0 & -2 \\ 0 & 2 & -5 & -3 \\ \end{array} \right)\\ & \textrm{ }\overset{-1 R_2}{\longrightarrow }\textrm{ }\left( \begin{array}{ccc|c} 1 & 1 & 2 & 1 \\ 0 & 1 & 0 & 2 \\ 0 & 2 & -5 & -3 \\ \end{array} \right)\\ & \textrm{ }\overset{-R_2+R_1\textrm{ and} -2R_2+R_3}{\longrightarrow }\textrm{ }\left( \begin{array}{ccc|c} 1 & 0 & 2 & -1 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & -5 & -7 \\ \end{array} \right)\\ & \overset{-\frac{1}{5} R_3}{\longrightarrow }\textrm{ } \left( \begin{array}{ccc|c} 1 & 0 & 2 & -1 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 7/5 \\ \end{array} \right)\overset{-2 R_3+R_1}{\longrightarrow }\textrm{ }\left( \begin{array}{ccc|c} 1 & 0 & 0 & -\frac{19}{5} \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & \frac{7}{5} \\ \end{array} \right)\\ \end{split} \end{equation*}

So $$x_{11}= -19/5, x_{21}=2$$ and $$x_{31}=7/5\text{,}$$ which gives us the first column of $$B\text{.}$$

The matrix form of the system to obtain $$x_{12}\text{,}$$ $$x_{22}\text{,}$$ and $$x_{32}\text{,}$$ the second column of $$B\text{,}$$ is:

$\label{eq:2}\left(\begin{array}{ccc|c}1&1&2&0\\2&1&4&1\\3&5&1&0\end{array}\right)$

which reduces to

$\label{eq:3}\left(\begin{array}{ccc|c}1&0&0&\frac{9}{5} \\ 0&1&0&-1\\0&0&1&-\frac{2}{5}\end{array}\right)$

The critical thing to note here is that the coefficient matrix in $$\eqref{eq:2}$$ is the same as the coefficient matrix in $$\eqref{eq:1}$$; hence the sequence of row operations used in the row reduction is the same in both cases.

To determine the third column of $$B\text{,}$$ we reduce

\begin{equation*} \left( \begin{array}{ccc|c} 1 & 1 & 2 & 0 \\ 2 & 1 & 4 & 0 \\ 3 & 5 & 1 & 1 \\ \end{array} \right) \end{equation*}

to obtain $$x_{13}= 2/5, x_{23}=0$$ and $$x_{33}=-1/5\text{.}$$ Here again it is important to note that the sequence of row operations used to solve this system is exactly the same as the one we used in the first system. Why not save ourselves a considerable amount of time and effort and solve all three systems simultaneously? We can do this by augmenting the coefficient matrix with the identity matrix $$I\text{.}$$ We then have, by applying the same sequence of row operations as above,

\begin{equation*} \left( \begin{array}{ccc|ccc} 1 & 1 & 2 & 1 & 0 & 0 \\ 2 & 1 & 4 & 0 & 1 & 0 \\ 3 & 5 & 1 & 0 & 0 & 1 \\ \end{array} \right)\longrightarrow \left( \begin{array}{ccc|ccc} 1 & 0 & 0 & -\frac{19}{5} & \frac{9}{5} & \frac{2}{5} \\ 0 & 1 & 0 & 2 & -1 & 0 \\ 0 & 0 & 1 & \frac{7}{5} & -\frac{2}{5} & -\frac{1}{5} \\ \end{array} \right) \end{equation*}

So that

\begin{equation*} B =\textrm{ }\left( \begin{array}{ccc} -\frac{19}{5} & \frac{9}{5} & \frac{2}{5} \\ 2 & -1 & 0 \\ \frac{7}{5} & -\frac{2}{5} & -\frac{1}{5} \\ \end{array} \right) \end{equation*}

The reader should verify that $$B A = I$$ so that $$A ^{-1} = B\text{.}$$
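The augment-and-reduce procedure above can also be sketched in code. The following is a minimal illustration using exact rational arithmetic; the function name `inverse` and the overall structure are our own, not part of the text:

```python
from fractions import Fraction

def inverse(A):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I].

    Returns the inverse as a list of rows, or None if A is singular.
    """
    n = len(A)
    # Augment A with the identity matrix.
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(i == j) for j in range(n)] for i in range(n)]
    for col in range(n):
        # Find a row at or below the diagonal with a nonzero pivot.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None  # no pivot available: A is not invertible
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row, then clear the column in every other row.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # The right block now holds the inverse.
    return [row[n:] for row in M]

A = [[1, 1, 2], [2, 1, 4], [3, 5, 1]]
B = inverse(A)
# B reproduces the matrix computed above: first row (-19/5, 9/5, 2/5), etc.
```

Because `Fraction` arithmetic is exact, the result matches the hand computation above fraction for fraction, with no floating-point rounding.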

## General Method for Computing Inverses

As the following theorem indicates, the verification that $$B A = I$$ is not necessary. The proof of the theorem is beyond the scope of this text. The interested reader can find it in most linear algebra texts.

Theorem $$\PageIndex{1}$$

Let $$A$$ be an $$n \times n$$ matrix. If a matrix $$B$$ can be found such that $$A B = I\text{,}$$ then $$B A = I\text{,}$$ so that $$B = A^{-1}\text{.}$$ In fact, to find $$A^{-1}\text{,}$$ we need only find a matrix $$B$$ that satisfies one of the two conditions $$A B = I$$ or $$B A = I\text{.}$$
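For the matrix $$A$$ of the previous discussion, the theorem can be spot-checked directly (a quick numeric check, not a proof; the helper `matmul` is ours):

```python
from fractions import Fraction

A = [[1, 1, 2], [2, 1, 4], [3, 5, 1]]
# B is the inverse computed above, written as (1/5) times an integer matrix.
B = [[Fraction(v, 5) for v in row]
     for row in [[-19, 9, 2], [10, -5, 0], [7, -2, -1]]]

def matmul(X, Y):
    """Multiply two 3x3 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

I = [[Fraction(i == j) for j in range(3)] for i in range(3)]
# Both products yield the identity, as the theorem guarantees.
ok = matmul(A, B) == I and matmul(B, A) == I
```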

It is clear from Chapter 5 and our discussions in this chapter that not all $$n \times n$$ matrices have inverses. How do we determine whether a matrix has an inverse using this method? The answer is quite simple: the technique we developed to compute inverses is a matrix approach to solving several systems of equations simultaneously.

Example $$\PageIndex{1}$$: Recognition of a Non-Invertible Matrix

The reader can verify that if $$A=\left( \begin{array}{ccc} 1 & 2 & 1 \\ -1 & -2 & -1 \\ 0 & 5 & 8 \\ \end{array} \right)$$ then the augmented matrix $$\left( \begin{array}{ccc|ccc} 1 & 2 & 1 & 1 & 0 & 0 \\ -1 & -2 & -1 & 0 & 1 & 0 \\ 0 & 5 & 8 & 0 & 0 & 1 \\ \end{array} \right)$$ reduces to

$\label{eq:4}\left(\begin{array}{ccc|ccc}1&2&1&1&0&0\\0&0&0&1&1&0\\0&5&8&0&0&1\end{array}\right)$

Although this matrix can be row-reduced further, it is not necessary to do so since, in equation form, we have:

Table $$\PageIndex{1}$$

 $$\begin{array}{l} x_{11}+2 x_{21}+x_{31}=1 \\ \textrm{ }0=1 \\ \textrm{ }5 x_{21}+8 x_{31}=0 \\ \end{array}$$ $$\begin{array}{l} x_{12}+2 x_{22}+x_{32}=0 \\ \textrm{ }0=1 \\ \textrm{ }5 x_{22}+8 x_{32}=0 \\ \end{array}$$ $$\begin{array}{l} x_{13}+2 x_{23}+x_{33}=0 \\ \textrm{ }0=0 \\ \textrm{ }5 x_{23}+8 x_{33}=1 \\ \end{array}$$

Clearly, there are no solutions to the first two systems, therefore $$A^{-1}$$ does not exist. From this discussion it should be obvious to the reader that the zero row of the coefficient matrix together with the nonzero entry in the fourth column of that row in matrix $$\eqref{eq:4}$$ tells us that $$A^{-1}$$ does not exist.
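The failure mode in this example can be seen with a few lines of code (a small illustrative check; the variable names are ours):

```python
# One elimination step on the augmented matrix [A | I] of Example 1:
# adding row 1 to row 2 turns row 2 into (0 0 0 | 1 1 0).
A_aug = [
    [1, 2, 1, 1, 0, 0],
    [-1, -2, -1, 0, 1, 0],
    [0, 5, 8, 0, 0, 1],
]
row2 = [a + b for a, b in zip(A_aug[0], A_aug[1])]
left, right = row2[:3], row2[3:]
# A zero row in the left block paired with a nonzero entry on the right
# makes the corresponding systems inconsistent, so A has no inverse.
no_inverse = all(x == 0 for x in left) and any(x != 0 for x in right)
```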

## Exercises

Exercise $$\PageIndex{1}$$

In order to develop an understanding of the technique of this section, work out all the details of Example $$\PageIndex{1}$$.

Exercise $$\PageIndex{2}$$

Use the method of this section to find the inverses of the following matrices whenever possible. If an inverse does not exist, explain why.

1. $$\displaystyle \left( \begin{array}{cc} 1 & 2 \\ -1 & 3 \\ \end{array} \right)$$
2. $$\displaystyle \left( \begin{array}{cccc} 0 & 3 & 2 & 5 \\ 1 & -1 & 4 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 3 & -1 \\ \end{array} \right)$$
3. $$\displaystyle \left( \begin{array}{ccc} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \\ \end{array} \right)$$
4. $$\displaystyle \left( \begin{array}{ccc} 1 & 2 & -1 \\ -2 & -3 & 1 \\ 1 & 4 & -3 \\ \end{array} \right)$$
5. $$\displaystyle \left( \begin{array}{ccc} 6 & 7 & 2 \\ 4 & 2 & 1 \\ 6 & 1 & 1 \\ \end{array} \right)$$
6. $$\displaystyle \left( \begin{array}{ccc} 2 & 1 & 3 \\ 4 & 2 & 1 \\ 8 & 2 & 4 \\ \end{array} \right)$$

Exercise $$\PageIndex{3}$$

Use the method of this section to find the inverses of the following matrices whenever possible. If an inverse does not exist, explain why.

1. $$\displaystyle \left( \begin{array}{cc} \frac{1}{3} & 2 \\ \frac{1}{5} & -1 \\ \end{array} \right)$$
2. $$\displaystyle \left( \begin{array}{cccc} 1 & 0 & 0 & 3 \\ 2 & -1 & 0 & 6 \\ 0 & 2 & 1 & 0 \\ 0 & -1 & 3 & 2 \\ \end{array} \right)$$
3. $$\displaystyle \left( \begin{array}{ccc} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \\ \end{array} \right)$$
4. $$\displaystyle \left( \begin{array}{ccc} 1 & 0 & 0 \\ 2 & 2 & -1 \\ 1 & -1 & 1 \\ \end{array} \right)$$
5. $$\displaystyle \left( \begin{array}{ccc} 2 & 3 & 4 \\ 3 & 4 & 5 \\ 4 & 5 & 6 \\ \end{array} \right)$$
6. $$\displaystyle \left( \begin{array}{ccc} 1 & \frac{1}{2} & \frac{1}{3} \\ \frac{1}{2} & \frac{1}{3} & \frac{1}{4} \\ \frac{1}{3} & \frac{1}{4} & \frac{1}{5} \\ \end{array} \right)$$
Answer

1. $$\displaystyle \left( \begin{array}{cc} \frac{15}{11} & \frac{30}{11} \\ \frac{3}{11} & -\frac{5}{11} \\ \end{array} \right)$$
2. $$\displaystyle \left( \begin{array}{cccc} -20 & \frac{21}{2} & \frac{9}{2} & -\frac{3}{2} \\ 2 & -1 & 0 & 0 \\ -4 & 2 & 1 & 0 \\ 7 & -\frac{7}{2} & -\frac{3}{2} & \frac{1}{2} \\ \end{array} \right)$$
3. The inverse does not exist. When the augmented matrix is row-reduced, a row of zeros appears in the left half, so the left half cannot be reduced to the identity matrix.
4. $$\displaystyle \left( \begin{array}{ccc} 1 & 0 & 0 \\ -3 & 1 & 1 \\ -4 & 1 & 2 \\ \end{array} \right)$$
5. The inverse does not exist.
6. $$\displaystyle \left( \begin{array}{ccc} 9 & -36 & 30 \\ -36 & 192 & -180 \\ 30 & -180 & 180 \\ \end{array} \right)$$

Exercise $$\PageIndex{4}$$

1. Find the inverses of the following matrices.
1. $$\displaystyle \left( \begin{array}{ccc} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 5 \\ \end{array} \right)$$
2. $$\displaystyle \left( \begin{array}{cccc} -1 & 0 & 0 & 0 \\ 0 & \frac{5}{2} & 0 & 0 \\ 0 & 0 & \frac{1}{7} & 0 \\ 0 & 0 & 0 & \frac{3}{4} \\ \end{array} \right)$$
2. If $$D$$ is a diagonal matrix whose diagonal entries are nonzero, what is $$D^{-1}\text{?}$$

Exercise $$\PageIndex{5}$$

Express each system of equations in Exercise 12.1.1 in the form $$A x = B\text{.}$$ When possible, solve each system by first finding the inverse of the matrix of coefficients.

Answer

The solutions are in the solution section of Section 12.1, Exercise 12.1.1. We illustrate with the outline of the solution to part (c). The matrix version of the system is

$\left(\begin{array}{ccc}1&1&2\\1&2&-1\\1&3&1\end{array}\right)\left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right)=\left(\begin{array}{c}1\\-1\\5\end{array}\right)\nonumber$

We compute the inverse of the matrix of coefficients and get

$A^{-1}=\left(\begin{array}{ccc}1&1&2\\1&2&-1\\1&3&1\end{array}\right)^{-1}=\frac{1}{5}\left(\begin{array}{ccc}5&5&-5\\-2&-1&3\\1&-2&1\end{array}\right)\nonumber$

and

$\left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right)=A^{-1}\left(\begin{array}{c}1\\-1\\5\end{array}\right)=\left(\begin{array}{c}-5\\ \frac{14}{5} \\ \frac{8}{5}\end{array}\right)\nonumber$
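The final matrix-vector product can be checked with a short script (exact fractions; variable names are ours):

```python
from fractions import Fraction

# A^{-1} as computed above, written as (1/5) times an integer matrix.
Ainv = [[Fraction(v, 5) for v in row]
        for row in [[5, 5, -5], [-2, -1, 3], [1, -2, 1]]]
b = [1, -1, 5]
# Matrix-vector product x = A^{-1} b.
x = [sum(row[j] * b[j] for j in range(3)) for row in Ainv]
# x comes out to (-5, 14/5, 8/5), matching the solution above.
```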

12.2: Matrix Inversion is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by Al Doerr & Ken Levasseur.