4.3: Properties of the Determinant
View Properties of the Determinant on YouTube
The determinant, as we know, is a function that maps an \(n\)-by-\(n\) matrix to a scalar. We now define this determinant function by the following three properties.
The determinant of the identity matrix is one, i.e.,
\[\det\text{I}=1.\nonumber \]
This property essentially normalizes the determinant. The two-by-two illustration is
\[\left|\begin{array}{cc}1&0\\0&1\end{array}\right|=1\times 1-0\times 0=1.\nonumber \]
The determinant changes sign under row exchange. The two-by-two illustration is
\[\left|\begin{array}{cc}a&b\\c&d\end{array}\right|=ad-bc=-(cb-da)=-\left|\begin{array}{cc}c&d\\a&b\end{array}\right|.\nonumber \]
The determinant is a linear function of the first row, holding all other rows fixed. The two-by-two illustration is
\[\left|\begin{array}{cc}ka&kb\\c&d\end{array}\right|=kad-kbc=k(ad-bc)=k\left|\begin{array}{cc}a&b\\c&d\end{array}\right|\nonumber \]
and
\[\left|\begin{array}{cc}a+a'&b+b' \\ c&d\end{array}\right|=(a+a')d-(b+b')c=(ad-bc)+(a'd-b'c)=\left|\begin{array}{cc}a&b\\c&d\end{array}\right|+\left|\begin{array}{cc}a'&b' \\ c&d\end{array}\right|.\nonumber \]
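These two-by-two illustrations are easy to verify numerically. The following is a small pure-Python sketch (the particular matrices and the scalar \(k\) are arbitrary choices for illustration, not taken from the text):

```python
# The determinant of a 2x2 matrix [[a, b], [c, d]] is ad - bc.
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Property 1: the determinant of the identity is one.
assert det2([[1, 0], [0, 1]]) == 1

# Property 2: a row exchange changes the sign.
A = [[3, 7], [2, 5]]
assert det2([A[1], A[0]]) == -det2(A)

# Property 3: linearity in the first row, holding the second row fixed.
k = 4
assert det2([[k * 3, k * 7], [2, 5]]) == k * det2(A)
assert det2([[3 + 1, 7 + 6], [2, 5]]) == det2(A) + det2([[1, 6], [2, 5]])
print("properties 1-3 verified")
```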
Remarkably, Properties \(\PageIndex{1}\)-\(\PageIndex{3}\) are all we need to uniquely define the determinant function. It can be shown that these three properties hold for the two-by-two and three-by-three determinants, and for both the Laplace expansion and the Leibniz formula in the general \(n\)-by-\(n\) case.
We now discuss further properties that follow from Properties \(\PageIndex{1}\)-\(\PageIndex{3}\). We will continue to illustrate these properties using a two-by-two matrix.
The determinant is a linear function of all the rows, e.g.,
\[\begin{aligned}\left|\begin{array}{cc}a&b\\kc&kd\end{array}\right|&=-\left|\begin{array}{cc}kc&kd\\a&b\end{array}\right|\qquad\text{(Property 2)} \\ &=-k\left|\begin{array}{cc}c&d\\a&b\end{array}\right|\qquad\text{(Property 3)} \\ &=k\left|\begin{array}{cc}a&b\\c&d\end{array}\right|,\qquad\text{(Property 2)}\end{aligned} \nonumber \]
and similarly for the second linearity condition.
If a matrix has two equal rows, then the determinant is zero, e.g.,
\[\begin{aligned}\left|\begin{array}{cc}a&b\\a&b\end{array}\right|&=-\left|\begin{array}{cc}a&b\\a&b\end{array}\right|\qquad\text{(Property 2)} \\ &=0,\end{aligned} \nonumber \]
since zero is the only number equal to its negative.
If we add \(k\) times row \(i\) to row \(j\), the determinant doesn’t change, e.g.,
\[\begin{aligned}\left|\begin{array}{cc}a&b\\c+ka&d+kb\end{array}\right|&=\left|\begin{array}{cc}a&b\\c&d\end{array}\right|+k\left|\begin{array}{cc}a&b\\a&b\end{array}\right|\qquad\text{(Property 4)} \\ &=\left|\begin{array}{cc}a&b\\c&d\end{array}\right|.\qquad\text{(Property 5)}\end{aligned} \nonumber \]
This property together with Property \(\PageIndex{2}\) and \(\PageIndex{3}\) allows us to perform Gaussian elimination on a matrix to simplify the calculation of a determinant.
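The Gaussian-elimination strategy just described can be sketched in a few lines of pure Python. This is an illustrative helper (the function name is ours), using exact rational arithmetic to avoid rounding: row exchanges flip the sign, adding a multiple of one row to another leaves the determinant unchanged, and the determinant of the resulting triangular matrix is the product of the pivots.

```python
from fractions import Fraction

def det_by_elimination(rows):
    """Determinant via Gaussian elimination on a square matrix."""
    a = [[Fraction(x) for x in row] for row in rows]
    n = len(a)
    sign = 1
    for j in range(n):
        # Find a nonzero pivot in column j, swapping rows if needed.
        p = next((i for i in range(j, n) if a[i][j] != 0), None)
        if p is None:
            return Fraction(0)       # no pivot: determinant is zero
        if p != j:
            a[j], a[p] = a[p], a[j]
            sign = -sign             # a row exchange flips the sign
        for i in range(j + 1, n):
            m = a[i][j] / a[j][j]
            # Adding a multiple of a row leaves the determinant unchanged.
            a[i] = [x - m * y for x, y in zip(a[i], a[j])]
    prod = Fraction(1)
    for j in range(n):
        prod *= a[j][j]              # product of the diagonal pivots
    return sign * prod

print(det_by_elimination([[1, 5, 0], [2, 4, -1], [0, -2, 0]]))  # -2
```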
The determinant of a matrix with a row of zeros is zero, e.g.,
\[\begin{aligned}\left|\begin{array}{cc}a&b\\0&0\end{array}\right|&=0\left|\begin{array}{cc}a&b\\0&0\end{array}\right|\qquad\text{(Property 4)} \\ &=0.\end{aligned} \nonumber \]
The determinant of a diagonal matrix is just the product of the diagonal elements, e.g.,
\[\begin{aligned}\left|\begin{array}{cc}a&0\\0&d\end{array}\right|&=ad\left|\begin{array}{cc}1&0\\0&1\end{array}\right|\qquad\text{(Property 4)} \\ &=ad.\qquad\text{(Property 1)}\end{aligned} \nonumber \]
The determinant of an upper or lower triangular matrix is just the product of the diagonal elements, e.g.,
\[\begin{aligned} \left|\begin{array}{cc}a&b\\0&d\end{array}\right|&=\left|\begin{array}{cc}a&0\\0&d\end{array}\right|\qquad\text{(Property 6)} \\ &=ad.\qquad\text{(Property 8)}\end{aligned} \nonumber \]
In the above calculation, Property \(\PageIndex{6}\) is applied by multiplying the second row by \(−b/d\) and adding it to the first row.
A matrix with a nonzero determinant is invertible. A matrix with a zero determinant is singular. Row reduction (Property \(\PageIndex{6}\)), row exchange (Property \(\PageIndex{2}\)), and multiplication of a row by a nonzero scalar (Property \(\PageIndex{4}\)) can bring a square matrix to its reduced row echelon form. If \(\text{rref}(\text{A}) = \text{I}\), then the determinant is nonzero and the matrix is invertible. If \(\text{rref}(\text{A})\neq \text{I}\), then the last row is all zeros, the determinant is zero, and the matrix is singular.
The determinant of the product is equal to the product of the determinants, i.e.,
\[\operatorname{det} A B=\operatorname{det} A \operatorname{det} B . \nonumber \]
This identity turns out to be very useful, but its proof for a general \(n\) -by- \(n\) matrix is difficult. The proof for a two-by-two matrix can be done directly.
Let
\[\mathrm{A}=\left(\begin{array}{ll} a & b \\ c & d \end{array}\right), \quad \mathrm{B}=\left(\begin{array}{ll} e & f \\ g & h \end{array}\right) \nonumber \]
Then
\[\mathrm{AB}=\left(\begin{array}{ll} a e+b g & a f+b h \\ c e+d g & c f+d h \end{array}\right) \nonumber \]
and
\[\begin{aligned} \operatorname{det} \mathrm{AB} &=\left|\begin{array}{cc} ae+bg & af+bh \\ ce+dg & cf+dh \end{array}\right| \\ &=\left|\begin{array}{cc} ae & af \\ ce+dg & cf+dh \end{array}\right|+\left|\begin{array}{cc} bg & bh \\ ce+dg & cf+dh \end{array}\right| \\ &=\left|\begin{array}{cc} ae & af \\ ce & cf \end{array}\right|+\left|\begin{array}{cc} ae & af \\ dg & dh \end{array}\right|+\left|\begin{array}{cc} bg & bh \\ ce & cf \end{array}\right|+\left|\begin{array}{cc} bg & bh \\ dg & dh \end{array}\right| \\ &=ac\left|\begin{array}{cc} e & f \\ e & f \end{array}\right|+ad\left|\begin{array}{cc} e & f \\ g & h \end{array}\right|+bc\left|\begin{array}{cc} g & h \\ e & f \end{array}\right|+bd\left|\begin{array}{cc} g & h \\ g & h \end{array}\right| \\ &=(ad-bc)\left|\begin{array}{cc} e & f \\ g & h \end{array}\right| \\ &=\operatorname{det} \mathrm{A} \operatorname{det} \mathrm{B}. \end{aligned} \nonumber \]
Commuting two matrices doesn’t change the value of the determinant, i.e., \(\operatorname{det} \mathrm{AB}=\operatorname{det} \mathrm{BA}\) . The proof is simply
\[\begin{aligned} \operatorname{det} \mathrm{AB} &=\operatorname{det} \mathrm{A} \operatorname{det} \mathrm{B} \\ &=\operatorname{det} \mathrm{B} \operatorname{det} \mathrm{A} \\ &=\operatorname{det} \mathrm{BA} . \end{aligned} \nonumber \]
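Both identities are easy to spot-check numerically for two-by-two matrices. In this sketch the matrices are arbitrary choices, not part of the proof:

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

assert det2(matmul2(A, B)) == det2(A) * det2(B)    # det AB = det A det B
assert det2(matmul2(A, B)) == det2(matmul2(B, A))  # det AB = det BA
print(det2(matmul2(A, B)))  # 4
```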
The determinant of the inverse is the inverse of the determinant, i.e., if A is invertible, then \(\operatorname{det}\left(\mathrm{A}^{-1}\right)=1 / \operatorname{det} \mathrm{A}\) . The proof is
\[\begin{aligned} 1 &=\operatorname{det} I \\ &=\operatorname{det}\left(\mathrm{AA}^{-1}\right) \\ &=\operatorname{det} \mathrm{A} \operatorname{det} \mathrm{A}^{-1} \end{aligned} \nonumber \]
Therefore,
\[\operatorname{det} \mathrm{A}^{-1}=\frac{1}{\operatorname{det} \mathrm{A}} \nonumber \]
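A quick numerical check of this identity, using the adjugate formula for the two-by-two inverse and exact rational arithmetic (the matrices are arbitrary invertible examples):

```python
from fractions import Fraction

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    a, b = m[0]
    c, d = m[1]
    D = Fraction(det2(m))
    return [[d / D, -b / D], [-c / D, a / D]]

A = [[2, 1], [5, 3]]   # det A = 1
assert det2(inv2(A)) == Fraction(1) / det2(A)
B = [[4, 7], [2, 6]]   # det B = 10
assert det2(inv2(B)) == Fraction(1, 10)
print("det(A^-1) = 1 / det(A) verified")
```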
The determinant of a matrix raised to an integer power is equal to the determinant of that matrix raised to that same power. Note that \(\mathrm{A}^{2}=\mathrm{AA}\), \(\mathrm{A}^{3}=\mathrm{AAA}\), etc. This property in equation form is given by
\[\operatorname{det}\left(\mathrm{A}^{p}\right)=(\operatorname{det} \mathrm{A})^{p}, \nonumber \]
where \(p\) is an integer. For positive \(p\), this result follows from successive application of Property 11; for negative \(p\), combine it with the result for the determinant of the inverse.
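The successive application of the product rule can be checked directly by accumulating powers of a matrix in a loop (an illustrative two-by-two example with \(\det \mathrm{A} = 5\)):

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 3]]    # det A = 5
Ap = [[1, 0], [0, 1]]   # start from the identity
for p in range(1, 5):
    Ap = matmul2(Ap, A)                 # Ap is now A^p
    assert det2(Ap) == det2(A) ** p     # det(A^p) = (det A)^p
print("det(A^p) = (det A)^p verified for p = 1..4")
```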
If \(\mathrm{A}\) is an \(n\) -by- \(n\) matrix, then
\[\operatorname{det} k \mathrm{~A}=k^{n} \operatorname{det} \mathrm{A} . \nonumber \]
Note that \(k\mathrm{A}\) multiplies every element of \(\mathrm{A}\) by the scalar \(k\). This property follows from Property 4 applied \(n\) times, once for each row.
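A minimal numerical check with \(n = 2\) (arbitrary matrix and scalar):

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[3, 7], [2, 5]]    # a 2-by-2 matrix, so n = 2
k = 6
kA = [[k * x for x in row] for row in A]
assert det2(kA) == k ** 2 * det2(A)   # det(kA) = k^n det A with n = 2
print(det2(kA))  # 36
```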
The determinant of the transposed matrix is equal to the determinant of the matrix, i.e.
\[\operatorname{det} \mathrm{A}^{\mathrm{T}}=\operatorname{det} \mathrm{A} . \nonumber \]
When \(\mathrm{A}=\mathrm{LU}\) without any row exchanges, we have \(\mathrm{A}^{\mathrm{T}}=\mathrm{U}^{\mathrm{T}} \mathrm{L}^{\mathrm{T}}\) and
\[\begin{aligned} \operatorname{det} \mathrm{A}^{\mathrm{T}} &=\operatorname{det} \mathrm{U}^{\mathrm{T}} \mathrm{L}^{\mathrm{T}} \\ &=\operatorname{det} \mathrm{U}^{\mathrm{T}} \operatorname{det} \mathrm{L}^{\mathrm{T}} \\ &=\operatorname{det} \mathrm{U} \operatorname{det} \mathrm{L} \\ &=\operatorname{det} \mathrm{LU} \\ &=\operatorname{det} \mathrm{A} . \end{aligned} \nonumber \]
The same result can be shown to hold even if row interchanges are needed. The implication of Property 16 is that any statement about the determinant and the rows of \(\mathrm{A}\) also applies to the columns of \(\mathrm{A}\). To compute a determinant, one can do either row reduction or column reduction!
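The transpose property is likewise easy to confirm numerically (arbitrary two-by-two example):

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[3, 7], [2, 5]]
At = [[A[j][i] for j in range(2)] for i in range(2)]   # transpose of A
assert det2(At) == det2(A)
print("det(A^T) = det(A) verified")
```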
It is time for some examples. We start with a simple three-by-three matrix and illustrate some approaches to a hand calculation of the determinant.
Compute the determinant of
\[A=\left(\begin{array}{rrr} 1 & 5 & 0 \\ 2 & 4 & -1 \\ 0 & -2 & 0 \end{array}\right) \nonumber \]
We show computations using the Leibniz formula and the Laplace expansion.
Solution
Method 1 (Leibniz formula): We compute the six terms directly by periodically extending the matrix and remembering that diagonals slanting down towards the right get plus signs and diagonals slanting down towards the left get minus signs. We have \(\operatorname{det} \mathrm{A}=1 \cdot 4 \cdot 0+5 \cdot(-1) \cdot 0+0 \cdot 2 \cdot(-2)-0 \cdot 4 \cdot 0-5 \cdot 2 \cdot 0-1 \cdot(-1) \cdot(-2)=-2\)
Method 2 (Laplace expansion): We expand using minors. We should choose an expansion across the row or down the column that has the most zeros. Here, the obvious choices are either the third row or the third column, and we can show both. Across the third row, we have
\[\operatorname{det} \mathrm{A}=-(-2) \cdot\left|\begin{array}{rr} 1 & 0 \\ 2 & -1 \end{array}\right|=-2 \nonumber \]
and down the third column, we have
\[\operatorname{det} \mathrm{A}=-(-1) \cdot\left|\begin{array}{rr} 1 & 5 \\ 0 & -2 \end{array}\right|=-2 \nonumber \]
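Both hand computations agree with a direct machine check. Here is a small recursive Laplace-expansion routine in pure Python (an illustrative helper, expanding along the first row rather than choosing the row or column with the most zeros):

```python
def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j, entry in enumerate(m[0]):
        # Minor: delete row 0 and column j; cofactor sign is (-1)^j.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

A = [[1, 5, 0],
     [2, 4, -1],
     [0, -2, 0]]
print(det(A))  # -2
```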
Compute the determinant of
\[\mathrm{A}=\left(\begin{array}{rrrrr} 6 & 3 & 2 & 4 & 0 \\ 9 & 0 & -4 & 1 & 0 \\ 8 & -5 & 6 & 7 & 1 \\ 3 & 0 & 0 & 0 & 0 \\ 4 & 2 & 3 & 2 & 0 \end{array}\right) \nonumber \]
Solution
We first expand in minors across the fourth row:
\[\left|\begin{array}{rrrrr}6 & 3 & 2 & 4 & 0 \\ 9 & 0 & -4 & 1 & 0 \\ 8 & -5 & 6 & 7 & 1 \\ 3 & 0 & 0 & 0 & 0 \\ 4 & 2 & 3 & 2 & 0\end{array}\right|=-3\left|\begin{array}{rrrr}3 & 2 & 4 & 0 \\ 0 & -4 & 1 & 0 \\ -5 & 6 & 7 & 1 \\ 2 & 3 & 2 & 0\end{array}\right| \nonumber \]
We then expand in minors down the fourth column:
\[-3\left|\begin{array}{rrrr} 3 & 2 & 4 & 0 \\ 0 & -4 & 1 & 0 \\ -5 & 6 & 7 & 1 \\ 2 & 3 & 2 & 0 \end{array}\right|=3\left|\begin{array}{rrr} 3 & 2 & 4 \\ 0 & -4 & 1 \\ 2 & 3 & 2 \end{array}\right| . \nonumber \]
We can then multiply the third column by 4 and add it to the second column:
\[3\left|\begin{array}{rrr} 3 & 2 & 4 \\ 0 & -4 & 1 \\ 2 & 3 & 2 \end{array}\right|=3\left|\begin{array}{ccc} 3 & 18 & 4 \\ 0 & 0 & 1 \\ 2 & 11 & 2 \end{array}\right|, \nonumber \]
and finally expand in minors across the second row:
\[3\left|\begin{array}{ccc} 3 & 18 & 4 \\ 0 & 0 & 1 \\ 2 & 11 & 2 \end{array}\right|=-3\left|\begin{array}{ll} 3 & 18 \\ 2 & 11 \end{array}\right|=-3(33-36)=9 \nonumber \]
The technique here is to try and zero out all the elements in a row or a column except one before proceeding to expand by minors across that row or column.
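The final value can be confirmed with the same kind of recursive Laplace expansion used for hand computation (an illustrative pure-Python helper, expanding along the first row):

```python
def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * e * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j, e in enumerate(m[0]))

M = [[6, 3, 2, 4, 0],
     [9, 0, -4, 1, 0],
     [8, -5, 6, 7, 1],
     [3, 0, 0, 0, 0],
     [4, 2, 3, 2, 0]]
print(det(M))  # 9
```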
Recall the Fibonacci Q-matrix, which satisfies
\[\mathrm{Q}=\left(\begin{array}{ll} 1 & 1 \\ 1 & 0 \end{array}\right), \quad \mathrm{Q}^{n}=\left(\begin{array}{cc} F_{n+1} & F_{n} \\ F_{n} & F_{n-1} \end{array}\right) \nonumber \]
where \(F_{n}\) is the \(n\) th Fibonacci number. Prove Cassini’s identity
\[F_{n+1} F_{n-1}-F_{n}^{2}=(-1)^{n} . \nonumber \]
Solution
Repeated use of the product rule for determinants yields \(\operatorname{det}\left(\mathrm{Q}^{n}\right)=(\operatorname{det} \mathrm{Q})^{n}\). Since \(\operatorname{det} \mathrm{Q}=-1\) and \(\operatorname{det}\left(\mathrm{Q}^{n}\right)=F_{n+1} F_{n-1}-F_{n}^{2}\), Cassini’s identity follows. For example, with \(F_{5}=5\), \(F_{6}=8\), \(F_{7}=13\), we have \(13 \cdot 5-8^{2}=1=(-1)^{6}\). Cassini’s identity leads to an amusing dissection fallacy called the Fibonacci bamboozlement, which is not discussed further here.
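Cassini's identity can be spot-checked for many \(n\) at once with a short loop (using the convention \(F_0 = 0\), \(F_1 = F_2 = 1\)):

```python
def fib(n):
    """Return the n-th Fibonacci number, with F_0 = 0 and F_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Cassini's identity: F_{n+1} F_{n-1} - F_n^2 = (-1)^n.
for n in range(1, 12):
    assert fib(n + 1) * fib(n - 1) - fib(n) ** 2 == (-1) ** n
print("Cassini's identity verified for n = 1..11")
```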
Consider the tridiagonal matrix with ones on the main diagonal, ones on the first diagonal below the main, and negative ones on the first diagonal above the main. The matrix denoted by \(T_{n}\) is the \(n\) -by- \(n\) version of this matrix. For example, the first four matrices are given by
\[T_{1}=(1), \quad T_{2}=\left(\begin{array}{rr} 1 & -1 \\ 1 & 1 \end{array}\right), \quad T_{3}=\left(\begin{array}{rrr} 1 & -1 & 0 \\ 1 & 1 & -1 \\ 0 & 1 & 1 \end{array}\right), \quad T_{4}=\left(\begin{array}{rrrr} 1 & -1 & 0 & 0 \\ 1 & 1 & -1 & 0 \\ 0 & 1 & 1 & -1 \\ 0 & 0 & 1 & 1 \end{array}\right). \nonumber \]
Show that \(\left|T_{n}\right|=F_{n+1}\) .
Solution
Let’s compute the first three determinants. We have \(\left|T_{1}\right|=1=F_{2}\) and \(\left|T_{2}\right|=2=F_{3}\) . We compute \(\left|T_{3}\right|\) going across the first row using minors:
\[\left|T_{3}\right|=1\left|\begin{array}{rr} 1 & -1 \\ 1 & 1 \end{array}\right|+1\left|\begin{array}{rr} 1 & -1 \\ 0 & 1 \end{array}\right|=2+1=3=F_{4} . \nonumber \]
To prove that \(\left|T_{n}\right|=F_{n+1}\) , we need only prove that \(\left|T_{n+1}\right|=\left|T_{n}\right|+\left|T_{n-1}\right|\) . We expand \(\left|T_{n+1}\right|\) in minors across the first row. Using \(\left|T_{4}\right|\) as an example, it is easy to see that
\[\left|T_{n+1}\right|=\left|T_{n}\right|+\left|\begin{array}{rrrrrr} 1 & -1 & 0 & 0 & 0 & \ldots \\ 0 & 1 & -1 & 0 & 0 & \ldots \\ 0 & 1 & 1 & -1 & 0 & \ldots \\ 0 & 0 & 1 & 1 & -1 & \ldots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ldots \end{array}\right| . \nonumber \]
The remaining determinant can be expanded down the first column to obtain \(\left|T_{n-1}\right|\) so that \(\left|T_{n+1}\right|=\left|T_{n}\right|+\left|T_{n-1}\right|\) .
This Fibonacci recursion relation together with \(\left|T_{1}\right|=1\) and \(\left|T_{2}\right|=2\) results in \(\left|T_{n}\right|=F_{n+1}\).