4.1: Determinants: Definition
- Learn the definition of the determinant.
- Learn some ways to eyeball a matrix with zero determinant, and how to compute determinants of upper- and lower-triangular matrices.
- Learn the basic properties of the determinant, and how to apply them.
- Recipe: compute the determinant using row and column operations.
- Theorems: existence theorem, invertibility property, multiplicativity property, transpose property.
- Vocabulary words: diagonal, upper-triangular, lower-triangular, transpose.
- Essential vocabulary word: determinant.
In this section, we define the determinant, and we present one way to compute it. Then we discuss some of the many wonderful properties the determinant enjoys.
The Definition of the Determinant
The determinant of a square matrix \(A\) is a number \(\det(A)\), defined as follows.
The determinant is a function
\[ \det\colon \{\text{square matrices}\} \longrightarrow \mathbb{R} \]
satisfying the following properties:
- Doing a row replacement on \(A\) does not change \(\det(A)\).
- Scaling a row of \(A\) by a scalar \(c\) multiplies the determinant by \(c\).
- Swapping two rows of a matrix multiplies the determinant by \(-1\).
- The determinant of the identity matrix \(I_n\) is equal to \(1\).
In other words, to every square matrix \(A\) we assign a number \(\det(A)\) in a way that satisfies the above properties.
In each of the first three cases, doing a row operation on a matrix scales the determinant by a nonzero number. (Multiplying a row by zero is not a row operation.) Therefore, doing row operations on a square matrix \(A\) does not change whether or not its determinant is zero.
The main motivation behind using these particular defining properties is geometric: see Section 4.3. Another motivation for this definition is that it tells us how to compute the determinant: we row reduce and keep track of the changes.
Let us compute \(\det(A)\) by row reducing.
The reduced row echelon form of the matrix is the identity matrix \(I_n\), so \(\det(A)\) can be read off from the row operations used: each row swap contributes a factor of \(-1\), and each scaling by a factor of \(c\) contributes a factor of \(1/c\).
Note that our answer agrees with the definition of the determinant of a \(2\times 2\) matrix, Definition 3.5.2 in Section 3.5.
Compute
Solution
Let
Note that our answer agrees with the definition of the determinant of a \(2\times 2\) matrix, Definition 3.5.2 in Section 3.5.
Compute
Solution
First we row reduce, then we compute the determinant in the opposite order:
The reduced row echelon form is
Here is the general method for computing determinants using row reduction.
Let \(A\) be a square matrix. Suppose that you do some number of row operations on \(A\) to obtain a matrix \(B\) in row echelon form. Then
\[ \det(A) = (-1)^r \cdot \frac{\text{(product of the diagonal entries of } B)}{\text{(product of the scaling factors used)}}, \]
where \(r\) is the number of row swaps performed.
In other words, the determinant of \(A\) is the product of the diagonal entries of the row echelon form \(B\), times a factor of \(\pm 1\) coming from the number of row swaps you made, divided by the scaling factors used during the row reduction.
This is an efficient way of computing the determinant of a large matrix, either by hand or by computer. The computational complexity of row reduction is \(O(n^3)\); by contrast, the cofactor expansion algorithm we will learn in Section 4.2 has complexity roughly \(O(n!)\), which is much larger.
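The recipe above can be sketched in code. The following is a minimal illustration (not from the text) that row reduces with exact rational arithmetic. It uses only row swaps and row replacements, never row scalings, so the determinant is simply \((-1)^r\) times the product of the diagonal entries of the resulting row echelon form, with no correction by scaling factors:

```python
from fractions import Fraction

def det_by_row_reduction(rows):
    """Determinant via row reduction, tracking row swaps.

    Only row replacements (which do not change the determinant) and
    row swaps (each of which negates it) are used, so the answer is
    (-1)^(number of swaps) times the product of the diagonal entries
    of the row echelon form.
    """
    A = [[Fraction(x) for x in row] for row in rows]
    n = len(A)
    sign = 1
    for j in range(n):
        # Find a pivot in column j at or below row j.
        pivot = next((i for i in range(j, n) if A[i][j] != 0), None)
        if pivot is None:
            return Fraction(0)  # no pivot in this column: det = 0
        if pivot != j:
            A[j], A[pivot] = A[pivot], A[j]
            sign = -sign  # a row swap negates the determinant
        for i in range(j + 1, n):
            # Row replacement R_i := R_i - factor * R_j (det unchanged).
            factor = A[i][j] / A[j][j]
            A[i] = [a - factor * b for a, b in zip(A[i], A[j])]
    result = Fraction(sign)
    for j in range(n):
        result *= A[j][j]  # product of the diagonal entries
    return result
```

For example, `det_by_row_reduction([[0, 1], [1, 0]])` performs one row swap on a matrix whose echelon form is the identity, so it returns \(-1\).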
Compute
Solution
We row reduce the matrix, keeping track of the number of row swaps and of the scaling factors used.
We made two row swaps and scaled once by a factor of
Compute
Solution
We row reduce the matrix, keeping track of the number of row swaps and of the scaling factors used.
We did not make any row swaps, and we scaled once by a factor of
Let us use the Recipe: Computing Determinants by Row Reducing to compute the determinant of a general \(2\times 2\) matrix \(A = \left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right)\).
- If \(a = 0\), then swapping the two rows gives
\[ \det\begin{pmatrix} 0 & b \\ c & d \end{pmatrix} = -\det\begin{pmatrix} c & d \\ 0 & b \end{pmatrix} = -bc = ad - bc. \]
- If \(a \neq 0\), then
\[ \det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = a\det\begin{pmatrix} 1 & b/a \\ c & d \end{pmatrix} = a\det\begin{pmatrix} 1 & b/a \\ 0 & d - bc/a \end{pmatrix} = a\Bigl(d - \frac{bc}{a}\Bigr) = ad - bc. \]
In either case, we recover the formula \(\det(A) = ad - bc\) of Definition 3.5.2 in Section 3.5.
If a matrix is already in row echelon form, then you can simply read off the determinant as the product of the diagonal entries. It turns out this is true for a slightly larger class of matrices called triangular.
- The diagonal entries of a matrix \(A\) are the entries \(a_{11}, a_{22}, a_{33}, \ldots\).

- A square matrix is called upper-triangular if its nonzero entries all lie above the diagonal, and it is called lower-triangular if its nonzero entries all lie below the diagonal. It is called diagonal if all of its nonzero entries lie on the diagonal, i.e., if it is both upper-triangular and lower-triangular.

Let \(A\) be a square matrix.
- If \(A\) has a zero row or column, then \(\det(A) = 0\).
- If \(A\) is upper-triangular or lower-triangular, then \(\det(A)\) is the product of its diagonal entries.
- Proof
-
- Suppose that \(A\) has a zero row. Let \(B\) be the matrix obtained by negating the zero row. Then \(\det(B) = -\det(A)\) by the second defining property, since negating a row is scaling it by \(-1\). But \(B = A\), so \(\det(B) = \det(A)\).
Putting these together yields \(\det(A) = -\det(A)\), so \(\det(A) = 0\).
Now suppose that \(A\) has a zero column. Then \(A\) is not invertible by Theorem 3.6.1 in Section 3.6, so its reduced row echelon form has a zero row. Since row operations do not change whether the determinant is zero, we conclude \(\det(A) = 0\). - First suppose that
\(A\) is upper-triangular, and that one of the diagonal entries is zero, say \(a_{ii} = 0\). We can perform row operations to clear the entries above the nonzero diagonal entries:
in the resulting matrix, the \(i\)th row is zero, so \(\det(A) = 0\) by the first part.
Still assuming that \(A\) is upper-triangular, now suppose that all of the diagonal entries of \(A\) are nonzero. Then \(A\) can be transformed to the identity matrix by scaling the diagonal entries to \(1\) and then doing row replacements upwards.
Since \(\det(I_n) = 1\) and we scaled by the reciprocals of the diagonal entries, this implies \(\det(A)\) is the product of the diagonal entries.
The same argument works for lower-triangular matrices, except that the row replacements go down instead of up.
Compute the determinants of these matrices:
Solution
The first matrix is upper-triangular, the second is lower-triangular, and the third has a zero row:
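As a quick numerical sanity check (with an illustrative matrix of our own, since the example's matrices are not reproduced here), the determinant of an upper-triangular matrix really is just the product of its diagonal entries:

```python
from math import prod

# A sample upper-triangular matrix: every entry below the diagonal is zero.
U = [
    [2, 3, 5],
    [0, 4, 7],
    [0, 0, 6],
]

# By the proposition above, det(U) is the product of the diagonal entries.
det_U = prod(U[i][i] for i in range(len(U)))
print(det_U)  # 2 * 4 * 6 = 48
```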
A matrix can always be transformed into row echelon form by a series of row operations, and a matrix in row echelon form is upper-triangular. Therefore, we have completely justified Recipe: Computing Determinants by Row Reducing for computing the determinant.
The determinant is characterized by its defining properties, in the sense of the following theorem.
There exists one and only one function from the set of square matrices to the real numbers that satisfies the four defining properties.
We will prove the existence theorem in Section 4.2 by exhibiting a recursive formula for the determinant. Again, the real content of the existence theorem is:
No matter which row operations you do, you will always compute the same value for the determinant.
Magical Properties of the Determinant
In this subsection, we will discuss a number of the amazing properties enjoyed by the determinant: the invertibility property, the multiplicativity property, and the transpose property.
A square matrix \(A\) is invertible if and only if \(\det(A) \neq 0\).
- Proof
-
If \(A\)
is invertible, then it has a pivot in every row and column by Theorem 3.6.1 in Section 3.6, so its reduced row echelon form is the identity matrix. Since row operations do not change whether the determinant is zero, and since \(\det(I_n) = 1\), this implies \(\det(A) \neq 0\). Conversely, if \(A\) is not invertible, then it is row equivalent to a matrix with a zero row. Again, row operations do not change whether the determinant is zero, so in this case \(\det(A) = 0\).
By the invertibility property, a matrix that does not satisfy the conditions of Theorem 3.6.1 in Section 3.6 has zero determinant.
Let \(A\) be a square matrix. If the rows or columns of \(A\) are linearly dependent, then \(\det(A) = 0\).
- Proof
-
If the columns of
\(A\) are linearly dependent, then \(A\) is not invertible by condition 4 of Theorem 3.6.1 in Section 3.6, so \(\det(A) = 0\) by the invertibility property. Suppose now that the rows of \(A\) are linearly dependent. Then one of the rows is in the span of the others; say the second row is a linear combination of the rest. If we subtract that linear combination from the second row, which is a sequence of row replacements,
then the second row of the resulting matrix is zero. Hence
\(A\) is not invertible in this case either, and again \(\det(A) = 0\). Alternatively, if the rows of
\(A\) are linearly dependent, then one can combine condition 4 of Theorem 3.6.1 in Section 3.6 and the transpose property below to conclude that \(\det(A) = 0\).
In particular, if two rows or columns of \(A\) are equal, or are multiples of each other, then \(\det(A) = 0\).
The following matrices all have zero determinant:
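As a small numerical illustration (with matrices of our own, since the example's matrices are not reproduced here), each of the following \(2\times 2\) matrices has linearly dependent rows or columns, so its determinant, computed via the formula \(ad - bc\), is zero:

```python
# The 2x2 determinant via the formula det = ad - bc (Definition 3.5.2).
def det2(a, b, c, d):
    return a * d - b * c

# Illustrative matrices with linearly dependent rows or columns:
proportional_rows = det2(1, 2, 2, 4)  # second row = 2 * (first row)
equal_columns = det2(3, 3, 5, 5)      # second column = first column
zero_row = det2(0, 0, 7, 9)           # first row is zero

print(proportional_rows, equal_columns, zero_row)  # prints: 0 0 0
```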
The proofs of the multiplicativity property and the transpose property below are a bit more involved; they use the technique of elementary matrices.
If \(A\) and \(B\) are \(n\times n\) matrices, then
\[ \det(AB) = \det(A)\det(B). \]
- Proof
-
In this proof, we need to use the notion of an elementary matrix. This is a matrix obtained by doing one row operation to the identity matrix. There are three kinds of elementary matrices: those arising from row replacement, row scaling, and row swaps.
The important property of elementary matrices is the following claim.
Claim: If \(E\)
is the elementary matrix for a row operation, then \(EA\) is the matrix obtained by performing the same row operation on \(A\). In other words, left-multiplication by an elementary matrix applies a row operation.
The proof of the Claim is by direct calculation; we leave it to the reader to verify it for \(2\times 2\) matrices and to generalize to \(n\times n\)
matrices. As a consequence of the Claim and the four defining properties, we have the following observation. Let \(C\) be any square matrix.
- If \(E\) is the elementary matrix for a row replacement, then \(\det(EC) = \det(C)\). In other words, left-multiplication by \(E\) does not change the determinant.
- If \(E\) is the elementary matrix for a row scale by a factor of \(c\), then \(\det(EC) = c\det(C)\). In other words, left-multiplication by \(E\) scales the determinant by a factor of \(c\).
- If \(E\) is the elementary matrix for a row swap, then \(\det(EC) = -\det(C)\). In other words, left-multiplication by \(E\) negates the determinant.
Now we turn to the proof of the multiplicativity property. Suppose to begin that \(B\)
is not invertible. Then \(AB\) is also not invertible: otherwise, \((AB)^{-1}AB = I_n\) implies \(B^{-1} = (AB)^{-1}A\). By the invertibility property, both sides of the equation \(\det(AB) = \det(A)\det(B)\) are zero. Now assume that
\(B\) is invertible, so \(\det(B) \neq 0\). Define a function
\[ d(C) = \frac{\det(CB)}{\det(B)}. \]
We claim that \(d\)
satisfies the four defining properties of the determinant.
- Let \(C'\) be the matrix obtained by doing a row replacement on \(C\), and let \(E\) be the elementary matrix for this row replacement, so \(C' = EC\). Since left-multiplication by \(E\) does not change the determinant, we have \(\det(ECB) = \det(CB)\), so
\[ d(C') = \frac{\det(ECB)}{\det(B)} = \frac{\det(CB)}{\det(B)} = d(C). \]
- Let \(C'\) be the matrix obtained by scaling a row of \(C\) by a factor of \(c\), and let \(E\) be the elementary matrix for this row scale, so \(C' = EC\). Since left-multiplication by \(E\) scales the determinant by a factor of \(c\), we have \(\det(ECB) = c\det(CB)\), so
\[ d(C') = \frac{\det(ECB)}{\det(B)} = c\,\frac{\det(CB)}{\det(B)} = c\,d(C). \]
- Let \(C'\) be the matrix obtained by swapping two rows of \(C\), and let \(E\) be the elementary matrix for this row swap, so \(C' = EC\). Since left-multiplication by \(E\) negates the determinant, we have \(\det(ECB) = -\det(CB)\), so \(d(C') = -d(C)\).
- We have
\[ d(I_n) = \frac{\det(I_nB)}{\det(B)} = \frac{\det(B)}{\det(B)} = 1. \]
Since \(d\) satisfies the four defining properties of the determinant, it is equal to the determinant by the existence theorem. In other words, for all matrices \(C\) we have \(d(C) = \det(C)\), i.e., \(\det(CB) = \det(C)\det(B)\). Taking \(C = A\) gives \(\det(AB) = \det(A)\det(B)\).
Recall that taking a power of a square matrix \(A\) means multiplying it by itself some number of times: \(A^n = A \cdot A \cdots A\) (\(n\) times).
If \(A\) is invertible, then we define \(A^{-n} = (A^{-1})^n\).
For completeness, we set \(A^0 = I_n\).
If \(A\) is a square matrix, then
\[ \det(A^n) = \det(A)^n \]
for all \(n \geq 1\). If \(A\) is invertible, then the equation holds for all integers \(n\); in particular,
\[ \det(A^{-1}) = \frac{1}{\det(A)}. \]
- Proof
-
Using the multiplicativity property, we compute
\[ \det(A^2) = \det(A)\det(A) = \det(A)^2 \]
and
\[ \det(A^3) = \det(A^2)\det(A) = \det(A)^2\det(A) = \det(A)^3; \]
the pattern is clear.
We have
\[ 1 = \det(I_n) = \det(AA^{-1}) = \det(A)\det(A^{-1}) \]
by the multiplicativity property
and the fourth defining property, which shows that \(\det(A^{-1}) = \det(A)^{-1}\). Thus
\[ \det(A^{-2}) = \det\bigl((A^{-1})^2\bigr) = \det(A^{-1})^2 = \det(A)^{-2}, \]
and so on.
Compute
Solution
We have \(\det(A^n) = \det(A)^n\) by the corollary above, so it suffices to compute \(\det(A)\) and raise it to the \(n\)th power.
Nowhere did we have to compute the matrix power itself.
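To see the corollary in action numerically, here is a small sketch (with a hypothetical \(2\times 2\) matrix of our own) comparing \(\det(A^3)\) computed the slow way, by actually forming \(A^3\), with \(\det(A)^3\):

```python
def det2(m):
    # 2x2 determinant via the formula ad - bc.
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(x, y):
    # 2x2 matrix product by the row-column rule.
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[3, 1], [1, 2]]            # hypothetical matrix; det(A) = 3*2 - 1*1 = 5
A3 = matmul2(matmul2(A, A), A)  # the third power, computed explicitly

# Multiplicativity gives det(A^3) = det(A)^3 with no matrix powers needed.
print(det2(A3), det2(A) ** 3)   # both are 125
```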
Here is another application of the multiplicativity property.
Let \(A_1, A_2, \ldots, A_k\) be \(n\times n\) matrices. Then the product \(A_1A_2\cdots A_k\) is invertible if and only if each \(A_i\) is invertible.
- Proof
-
The determinant of the product is the product of the determinants by the multiplicativity property:
\[ \det(A_1A_2\cdots A_k) = \det(A_1)\det(A_2)\cdots\det(A_k). \]
By the invertibility property, this is nonzero if and only if \(A_1A_2\cdots A_k\) is invertible. On the other hand, \(\det(A_1)\det(A_2)\cdots\det(A_k)\) is nonzero if and only if each \(\det(A_i) \neq 0\), which means each \(A_i\) is invertible.
For any number
Show that the product
is not invertible.
Solution
When
Hence any product involving this matrix is not invertible.
In order to state the transpose property, we need to define the transpose of a matrix.
The transpose of an \(m\times n\) matrix \(A\) is the \(n\times m\) matrix \(A^T\) whose rows are the columns of \(A\). In other words, the \(ij\) entry of \(A^T\) is \(a_{ji}\).

Like inversion, transposition reverses the order of matrix multiplication.
Let \(A\) be an \(m\times n\) matrix and \(B\) an \(n\times p\) matrix. Then
\[ (AB)^T = B^TA^T. \]
- Proof
-
First suppose that \(A\)
is a row vector and \(B\) is a column vector, i.e., \(m = p = 1\). Then \(AB\) is the \(1\times 1\) matrix
\[ AB = a_1b_1 + a_2b_2 + \cdots + a_nb_n = B^TA^T, \]
since a \(1\times 1\) matrix is its own transpose. Now we use the row-column rule for matrix multiplication. Let \(r_1, r_2, \ldots, r_m\)
be the rows of \(A\), and let \(c_1, c_2, \ldots, c_p\) be the columns of \(B\), so the \(ij\) entry of \(AB\) is \(r_ic_j\). By the case we handled above, we have
\(r_ic_j = c_j^Tr_i^T\). Then the \(ij\) entry of \((AB)^T\) is the \(ji\) entry of \(AB\), which is \(r_jc_i = c_i^Tr_j^T\); this is exactly the \(ij\) entry of \(B^TA^T\), since the rows of \(B^T\) are \(c_1^T, \ldots, c_p^T\) and the columns of \(A^T\) are \(r_1^T, \ldots, r_m^T\).
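The fact can also be spot-checked numerically. This quick sketch (with small matrices chosen here for illustration) compares \((AB)^T\) with \(B^TA^T\):

```python
def transpose(M):
    # The rows of the transpose are the columns of M.
    return [list(col) for col in zip(*M)]

def matmul(X, Y):
    # Row-column rule: the ij entry of XY is (row i of X) . (column j of Y).
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2, 3], [4, 5, 6]]    # a 2x3 matrix
B = [[1, 0], [2, 1], [0, 3]]  # a 3x2 matrix

# Transposition reverses the order of multiplication:
print(transpose(matmul(A, B)) == matmul(transpose(B), transpose(A)))  # True
```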
For any square matrix \(A\), we have
\[ \det(A) = \det(A^T). \]
- Proof
-
We follow the same strategy as in the proof of the multiplicativity property: namely, we define
\[ d(A) = \det(A^T), \]
and we show that \(d\)
satisfies the four defining properties of the determinant. Again we use elementary matrices, also introduced in the proof of the multiplicativity property.
- Let \(C\)
be the matrix obtained by doing a row replacement on \(A\), and let \(E\) be the elementary matrix for this row replacement, so \(C = EA\). The elementary matrix for a row replacement is either upper-triangular or lower-triangular, with ones on the diagonal.
It follows that \(E^T\) is also either upper-triangular or lower-triangular, with ones on the diagonal, so \(\det(E) = \det(E^T) = 1\) by the proposition on triangular matrices above. By the Fact and the multiplicativity property,
\[ d(C) = \det\bigl((EA)^T\bigr) = \det(A^TE^T) = \det(A^T)\det(E^T) = \det(A^T) = d(A). \]
- Let \(C\)
be the matrix obtained by scaling a row of \(A\) by a factor of \(c\), and let \(E\) be the elementary matrix for this row scale, so \(C = EA\). Then \(E\) is a diagonal matrix, so \(E^T = E\) and \(\det(E^T) = \det(E) = c\). By the Fact and the multiplicativity property,
\[ d(C) = \det\bigl((EA)^T\bigr) = \det(A^T)\det(E^T) = c\,d(A). \]
- Let \(C\)
be the matrix obtained by swapping two rows of \(A\), and let \(E\) be the elementary matrix for this row swap, so \(C = EA\). The elementary matrix for a row swap is equal to its own transpose: \(E^T = E\). Since \(E\) is obtained by performing one row swap on the identity matrix, we have \(\det(E^T) = \det(E) = -1\). By the Fact and the multiplicativity property,
\[ d(C) = \det\bigl((EA)^T\bigr) = \det(A^T)\det(E^T) = -d(A). \]
- Since \(I_n^T = I_n\),
we have
\[ d(I_n) = \det(I_n^T) = \det(I_n) = 1. \]
Since \(d\) satisfies the four defining properties of the determinant, it is equal to the determinant by the existence theorem. In other words, for all matrices \(A\) we have
\[ d(A) = \det(A^T) = \det(A). \]
The transpose property is very useful.
It implies that the determinant has the curious feature that it also behaves well with respect to column operations. Indeed, a column operation on \(A\) is the same as a row operation on \(A^T\), and \(\det(A) = \det(A^T)\).
The determinant satisfies the following properties with respect to column operations:
- Doing a column replacement on \(A\) does not change \(\det(A)\).
- Scaling a column of \(A\) by a scalar \(c\) multiplies the determinant by \(c\).
- Swapping two columns of a matrix multiplies the determinant by \(-1\).
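For a \(2\times 2\) matrix, the column-swap rule can be checked directly against the formula \(\det = ad - bc\):

```latex
\det\begin{pmatrix} b & a \\ d & c \end{pmatrix}
  = bc - ad
  = -(ad - bc)
  = -\det\begin{pmatrix} a & b \\ c & d \end{pmatrix}.
```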
The previous corollary makes it easier to compute the determinant: one is allowed to do row and column operations when simplifying the matrix. (Of course, one still has to keep track of how the row and column operations change the determinant.)
Compute
Solution
It takes fewer column operations than row operations to make this matrix upper-triangular:
We performed two column replacements, which do not change the determinant; therefore, the determinant is the product of the diagonal entries of the resulting upper-triangular matrix.
Multilinearity
The following observation is useful for theoretical purposes.
We can think of \(\det\) as a function of the rows of a matrix: writing \(r_1, r_2, \ldots, r_n\) for the rows, we set \(\det(r_1, r_2, \ldots, r_n) = \det(A)\), where \(A\) is the matrix with rows \(r_1, r_2, \ldots, r_n\).
Let \(r_1, \ldots, r_{i-1}, r_{i+1}, \ldots, r_n\) be vectors in \(\mathbb{R}^n\). Then the transformation \(T\colon \mathbb{R}^n \to \mathbb{R}\) defined by
\[ T(x) = \det(r_1, \ldots, r_{i-1}, x, r_{i+1}, \ldots, r_n) \]
is linear.
- Proof
-
First assume that \(i = 1\),
so
\[ T(x) = \det(x, r_2, \ldots, r_n). \]
We have to show that \(T\)
satisfies the defining properties of a linear transformation, Definition 3.3.1, in Section 3.3. - By the second defining property of the determinant,
scaling any row of a matrix by a number \(c\) scales the determinant by a factor of \(c\). This implies that \(T\) satisfies the second property, i.e., that \(T(cx) = cT(x)\). - We claim that
\(T(x + y) = T(x) + T(y)\). First suppose that \(y\) is in \(\operatorname{Span}\{r_2, \ldots, r_n\}\); then \(y = c_2r_2 + \cdots + c_nr_n\) for some scalars \(c_2, \ldots, c_n\). Let \(B\) be the matrix with rows \(x + y, r_2, \ldots, r_n\), so \(T(x + y) = \det(B)\). By performing the row operations \(R_1 = R_1 - c_2R_2,\ \ldots,\ R_1 = R_1 - c_nR_n\), the first row of the matrix becomes \(x + y - (c_2r_2 + \cdots + c_nr_n) = x\). Therefore,
\[ T(x + y) = \det(B) = \det(x, r_2, \ldots, r_n) = T(x). \]
On the other hand, the rows \(y, r_2, \ldots, r_n\) are linearly dependent, so \(T(y) = 0\) by the corollary on linearly dependent rows above. Hence \(T(x + y) = T(x) = T(x) + T(y)\), which finishes the proof of the first property in this case.
Now suppose that \(y\) is not in \(\operatorname{Span}\{r_2, \ldots, r_n\}\). If \(\{r_2, \ldots, r_n\}\) is linearly dependent, then \(T(z) = 0\) for all \(z\), as the matrix with rows \(z, r_2, \ldots, r_n\) is not invertible; in this case \(T(x + y) = 0 = T(x) + T(y)\). Hence we may assume \(\{r_2, \ldots, r_n\}\) is linearly independent, so that \(\{y, r_2, \ldots, r_n\}\) is linearly independent by the increasing span criterion, Theorem 2.5.2 in Section 2.5 (otherwise it would not form a basis for \(\mathbb{R}^n\)). Thus \(x\) is in \(\operatorname{Span}\{y, r_2, \ldots, r_n\}\): we can write \(x = cy + w\) with \(w\) in \(\operatorname{Span}\{r_2, \ldots, r_n\}\). By the case handled above and the scaling property, we have
\[ T(x) = T(cy + w) = T(cy) = cT(y) \]
and
\[ T(x + y) = T((1 + c)y + w) = (1 + c)T(y) = cT(y) + T(y) = T(x) + T(y), \]
as desired.
For \(i > 1\),
we note that swapping the first row with the \(i\)th row negates the determinant:
\[ T(x) = \det(r_1, \ldots, r_{i-1}, x, r_{i+1}, \ldots, r_n) = -\det(x, r_2, \ldots, r_{i-1}, r_1, r_{i+1}, \ldots, r_n). \]
By the previously handled case, we know that
\[ x \longmapsto \det(x, r_2, \ldots, r_{i-1}, r_1, r_{i+1}, \ldots, r_n) \]
is linear. Multiplying both sides by
\(-1\), we see that \(T\) is linear.
For example, we have
\[ \det\begin{pmatrix} a + a' & b + b' \\ c & d \end{pmatrix} = \det\begin{pmatrix} a & b \\ c & d \end{pmatrix} + \det\begin{pmatrix} a' & b' \\ c & d \end{pmatrix}. \]
By the transpose property, the determinant is also multilinear in the columns of the matrix.
In more theoretical treatments of the topic, where row reduction plays a secondary role, the defining properties of the determinant are often taken to be:
- The determinant \(\det(A)\) is multilinear in the rows of \(A\).
- If \(A\) has two identical rows, then \(\det(A) = 0\).
- The determinant of the identity matrix is equal to one.
We have already shown that our four defining properties imply these three; conversely, these three properties imply our four. For instance, defining property 1 (row replacement) follows from multilinearity and the identical-rows property: if \(B\) is obtained from \(A\) by the row replacement \(R_i = R_i + cR_j\), then
\[ \det(B) = \det(A) + c\det(A'), \]
where \(A'\) is the matrix obtained from \(A\) by replacing its \(i\)th row with its \(j\)th row. Since \(A'\) has two identical rows, \(\det(A') = 0\), so \(\det(B) = \det(A)\),
as desired.
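Likewise, defining property 3 (row swap) follows from the alternative properties; this standard derivation applies multilinearity to a matrix whose \(i\)th and \(j\)th rows both equal \(r_i + r_j\):

```latex
0 = \det(\ldots, r_i + r_j, \ldots, r_i + r_j, \ldots)
  = \det(\ldots, r_i, \ldots, r_i, \ldots)
  + \det(\ldots, r_i, \ldots, r_j, \ldots)
  + \det(\ldots, r_j, \ldots, r_i, \ldots)
  + \det(\ldots, r_j, \ldots, r_j, \ldots)
  = \det(\ldots, r_i, \ldots, r_j, \ldots)
  + \det(\ldots, r_j, \ldots, r_i, \ldots),
```

since the first and last terms vanish by the identical-rows property; hence swapping the \(i\)th and \(j\)th rows negates the determinant.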
- There is one and only one function \(\det\) from the set of square matrices to the real numbers satisfying the four defining properties.
- The determinant of an upper-triangular or lower-triangular matrix is the product of the diagonal entries.
- A square matrix \(A\) is invertible if and only if \(\det(A) \neq 0\); in this case,
\[ \det(A^{-1}) = \frac{1}{\det(A)}. \]
- If \(A\) and \(B\) are \(n\times n\) matrices, then
\[ \det(AB) = \det(A)\det(B). \]
- For any square matrix \(A\), we have \(\det(A^T) = \det(A)\).
- The determinant can be computed by performing row and/or column operations.