2.6: The Identity and Inverses
There is a special matrix, denoted \(I\), which is called the identity matrix. The identity matrix is always a square matrix, and it has ones down the main diagonal and zeroes elsewhere. Here are some identity matrices of various sizes.
\[\left[ 1\right] ,\left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right] ,\left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right] ,\left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right]\nonumber \]
The first is the \(1\times 1\) identity matrix, the second is the \(2\times 2\) identity matrix, and so on. By extension, you can likely see what the \(n\times n\) identity matrix would be. When it is necessary to distinguish which size of identity matrix is being discussed, we will use the notation \(I_n\) for the \(n \times n\) identity matrix.
The identity matrix is so important that there is a special symbol to denote the \(ij^{th}\) entry of the identity matrix. This symbol is given by \(I_{ij}=\delta _{ij}\) where \(\delta _{ij}\) is the Kronecker symbol defined by \[\delta _{ij}=\left\{ \begin{array}{c} 1 \text{ if }i=j \\ 0\text{ if }i\neq j \end{array} \right.\nonumber \]
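The Kronecker symbol translates directly into code. The sketch below builds the \(n \times n\) identity matrix entry by entry from \(\delta_{ij}\); the function names `kronecker_delta` and `identity` are illustrative, not from the text.

```python
def kronecker_delta(i, j):
    """Return 1 if i == j, else 0 (the Kronecker symbol)."""
    return 1 if i == j else 0

def identity(n):
    """Build the n x n identity matrix, as a list of rows,
    using I_ij = delta_ij."""
    return [[kronecker_delta(i, j) for j in range(n)] for i in range(n)]

print(identity(3))  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```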
\(I_n\) is called the identity matrix because it is a multiplicative identity in the following sense.
Suppose \(A\) is an \(m\times n\) matrix and \(I_{n}\) is the \(n\times n\) identity matrix. Then \(AI_{n}=A.\) If \(I_{m}\) is the \(m\times m\) identity matrix, it also follows that \(I_{m}A=A.\)
- Proof
-
The \((i,j)\)-entry of \(AI_n\) is given by: \[\sum_{k}a_{ik}\delta _{kj}=a_{ij}\nonumber \] and so \(AI_{n}=A.\) The other case is left as an exercise for you.
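The entry formula \(\sum_{k}a_{ik}\delta_{kj}=a_{ij}\) can be checked numerically. The sketch below multiplies a \(2\times 3\) matrix by \(I_3\) on the right and \(I_2\) on the left using that same sum; the helper names are illustrative assumptions.

```python
def identity(n):
    """The n x n identity matrix as a list of rows."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    """Matrix product with entries sum_k A[i][k] * B[k][j]."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

A = [[1, 2, 3],
     [4, 5, 6]]                       # a 2 x 3 matrix

assert matmul(A, identity(3)) == A    # A I_3 = A
assert matmul(identity(2), A) == A    # I_2 A = A
```

Note that a non-square \(A\) needs identity matrices of two different sizes, \(I_3\) on the right and \(I_2\) on the left, which is why the theorem states the two cases separately.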
We now define the matrix inverse, which in some ways plays the role that division plays for ordinary numbers.
A square \(n\times n\) matrix \(A\) is said to have an inverse \(A^{-1}\) if and only if
\[AA^{-1}=A^{-1}A=I_n\nonumber \]
In this case, the matrix \(A\) is called invertible.
Such a matrix \(A^{-1}\) will have the same size as the matrix \(A\). It is very important to observe that the inverse of a matrix, if it exists, is unique. Another way to think of this is that if it acts like the inverse, then it \(\textbf{is}\) the inverse.
Suppose \(A\) is an \(n \times n\) matrix such that an inverse \(A^{-1}\) exists. Then there is only one such inverse matrix. That is, given any matrix \(B\) such that \(AB=BA=I\), \(B=A^{-1}\).
- Proof
-
In this proof, it is assumed that \(I\) is the \(n \times n\) identity matrix. Let \(A, B\) be \(n \times n\) matrices such that \(A^{-1}\) exists and \(AB=BA=I\). We want to show that \(A^{-1} = B\). Now using properties we have seen, we get:
\[A^{-1}=A^{-1}I=A^{-1}\left( AB\right) =\left( A^{-1}A\right) B=IB=B\nonumber \]
Hence, \(A^{-1} = B\) which tells us that the inverse is unique.
The next example demonstrates how to check the inverse of a matrix.
Let \(A=\left[ \begin{array}{rr} 1 & 1 \\ 1 & 2 \end{array} \right] .\) Show \(\left[ \begin{array}{rr} 2 & -1 \\ -1 & 1 \end{array} \right]\) is the inverse of \(A.\)
Solution
To check this, multiply \[\left[ \begin{array}{rr} 1 & 1 \\ 1 & 2 \end{array} \right] \left[ \begin{array}{rr} 2 & -1 \\ -1 & 1 \end{array} \right] = \left[ \begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array} \right] = I\nonumber \] and \[\left[ \begin{array}{rr} 2 & -1 \\ -1 & 1 \end{array} \right] \left[ \begin{array}{rr} 1 & 1 \\ 1 & 2 \end{array} \right] = \left[ \begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array} \right] = I\nonumber \] showing that this matrix is indeed the inverse of \(A.\)
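The same check can be run in code. A minimal sketch: multiply in both orders and confirm each product is the identity (the `matmul` helper here is an illustrative name, not part of the text).

```python
def matmul(A, B):
    """Matrix product of two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 1], [1, 2]]
B = [[2, -1], [-1, 1]]
I = [[1, 0], [0, 1]]

print(matmul(A, B) == I and matmul(B, A) == I)  # True: B is the inverse of A
```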
Unlike ordinary multiplication of numbers, it can happen that \(A\neq 0\) yet \(A\) fails to have an inverse. This is illustrated in the following example.
Let \(A=\left[ \begin{array}{rr} 1 & 1 \\ 1 & 1 \end{array} \right] .\) Show that \(A\) does not have an inverse.
Solution
One might think \(A\) would have an inverse because it does not equal zero. However, note that \[\left[ \begin{array}{rr} 1 & 1 \\ 1 & 1 \end{array} \right] \left[ \begin{array}{r} -1 \\ 1 \end{array} \right] =\left[ \begin{array}{r} 0 \\ 0 \end{array} \right]\nonumber \] If \(A^{-1}\) existed, we would have the following \[\begin{aligned} \left[ \begin{array}{r} 0 \\ 0 \end{array} \right] &= A^{-1}\left( \left[ \begin{array}{r} 0 \\ 0 \end{array} \right] \right) \\ &= A^{-1}\left( A\left[ \begin{array}{r} -1 \\ 1 \end{array} \right] \right) \\ &=\left( A^{-1}A\right) \left[ \begin{array}{r} -1 \\ 1 \end{array} \right] \\ &=I\left[ \begin{array}{r} -1 \\ 1 \end{array} \right] \\ &=\left[ \begin{array}{r} -1 \\ 1 \end{array} \right]\end{aligned}\] This says that \[\left[ \begin{array}{r} 0 \\ 0 \end{array} \right] = \left[ \begin{array}{r} -1 \\ 1 \end{array} \right]\nonumber \] which is impossible! Therefore, \(A\) does not have an inverse.
In the next section, we will explore how to find the inverse of a matrix, if it exists.