4.1: Determinants: Definition
- Learn the definition of the determinant.
- Learn some ways to eyeball a matrix with zero determinant, and how to compute determinants of upper- and lower-triangular matrices.
- Learn the basic properties of the determinant, and how to apply them.
- Recipe: compute the determinant using row and column operations.
- Theorems: existence theorem, invertibility property, multiplicativity property, transpose property.
- Vocabulary words: diagonal, upper-triangular, lower-triangular, transpose.
- Essential vocabulary word: determinant.
In this section, we define the determinant, and we present one way to compute it. Then we discuss some of the many wonderful properties the determinant enjoys.
The Definition of the Determinant
The determinant of a square matrix A is a real number det(A). It is defined via its behavior with respect to row operations; this means we can use row reduction to compute it. We will give a recursive formula for the determinant in Section 4.2. We will also show, in the subsection Magical Properties of the Determinant below, that the determinant is related to invertibility, and in Section 4.3 that it is related to volumes.
The determinant is a function
\[
\det \colon \{\text{square matrices}\} \to \mathbb{R}
\]
satisfying the following properties:
- Doing a row replacement on A does not change det(A).
- Scaling a row of A by a scalar c multiplies the determinant by c.
- Swapping two rows of a matrix multiplies the determinant by −1.
- The determinant of the identity matrix In is equal to 1.
In other words, to every square matrix A we assign a number det(A) in a way that satisfies the above properties.
In each of the first three cases, doing a row operation on a matrix scales the determinant by a nonzero number. (Multiplying a row by zero is not a row operation.) Therefore, doing row operations on a square matrix A does not change whether or not the determinant is zero.
The main motivation behind using these particular defining properties is geometric: see Section 4.3. Another motivation for this definition is that it tells us how to compute the determinant: we row reduce and keep track of the changes.
Let us compute \(\det\begin{pmatrix}2&1\\1&4\end{pmatrix}\). First we row reduce, then we compute the determinant in the opposite order:
\[
\underset{\det = 7}{\begin{pmatrix}2&1\\1&4\end{pmatrix}}
\xrightarrow{R_1 \leftrightarrow R_2}
\underset{\det = -7}{\begin{pmatrix}1&4\\2&1\end{pmatrix}}
\xrightarrow{R_2 = R_2 - 2R_1}
\underset{\det = -7}{\begin{pmatrix}1&4\\0&-7\end{pmatrix}}
\xrightarrow{R_2 = R_2 \div (-7)}
\underset{\det = 1}{\begin{pmatrix}1&4\\0&1\end{pmatrix}}
\xrightarrow{R_1 = R_1 - 4R_2}
\underset{\det = 1}{\begin{pmatrix}1&0\\0&1\end{pmatrix}}
\]
The reduced row echelon form of the matrix is the identity matrix \(I_2\), so its determinant is 1. The last step in the row reduction was a row replacement, so the fourth matrix also has determinant 1. The step before that scaled a row by \(-1/7\); since the determinant of the third matrix times \(-1/7\) equals 1, the determinant of the third matrix must be \(-7\). The second step was a row replacement, which does not change the determinant, so the second matrix also has determinant \(-7\). The first step was a row swap, so the determinant of the original matrix is the negative of the determinant of the second. Thus, the determinant of the original matrix is 7.
Note that our answer agrees with the definition of the determinant in Definition 3.5.2 in Section 3.5.
Compute \(\det\begin{pmatrix}1&0\\0&3\end{pmatrix}\).
Solution
Let \(A=\begin{pmatrix}1&0\\0&3\end{pmatrix}\). Since A is obtained from \(I_2\) by multiplying the second row by the constant 3, we have
\[
\det(A) = 3\det(I_2) = 3 \cdot 1 = 3.
\]
Note that our answer agrees with the definition of the determinant in Definition 3.5.2 in Section 3.5.
Compute \(\det\begin{pmatrix}1&0&0\\0&0&1\\5&1&0\end{pmatrix}\).
Solution
First we row reduce, then we compute the determinant in the opposite order:
\[
\underset{\det = -1}{\begin{pmatrix}1&0&0\\0&0&1\\5&1&0\end{pmatrix}}
\xrightarrow{R_2 \leftrightarrow R_3}
\underset{\det = 1}{\begin{pmatrix}1&0&0\\5&1&0\\0&0&1\end{pmatrix}}
\xrightarrow{R_2 = R_2 - 5R_1}
\underset{\det = 1}{\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}}
\]
The reduced row echelon form is \(I_3\), which has determinant 1. Working backwards from \(I_3\) and using the four defining properties of Definition 4.1.1, we see that the second matrix also has determinant 1 (it differs from \(I_3\) by a row replacement), and the first matrix has determinant \(-1\) (it differs from the second by a row swap).
Here is the general method for computing determinants using row reduction.

Recipe: Computing Determinants by Row Reducing

Let A be a square matrix. Suppose that you do some number of row operations on A to obtain a matrix B in row echelon form. Then
\[
\det(A) = (-1)^r \cdot \frac{\text{(product of the diagonal entries of } B\text{)}}{\text{(product of scaling factors used)}},
\]
where r is the number of row swaps performed.
In other words, the determinant of A is the product of diagonal entries of the row echelon form B, times a factor of ±1 coming from the number of row swaps you made, divided by the product of the scaling factors used in the row reduction.
This is an efficient way of computing the determinant of a large matrix, either by hand or by computer. The computational complexity of row reduction is \(O(n^3)\); by contrast, the cofactor expansion algorithm we will learn in Section 4.2 has complexity \(O(n!) \approx O(n^n\sqrt{n})\), which is much larger. (Cofactor expansion has other uses.)
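For readers who like to experiment, here is a minimal NumPy sketch of this recipe (the helper name det_by_row_reduction and the use of partial pivoting are our own choices; only swaps and replacements are used, so no scaling factors need to be tracked):

```python
import numpy as np

def det_by_row_reduction(A):
    """Determinant via the recipe: row reduce, tracking the row swaps.

    Only row swaps and row replacements are used (no scaling), so
    det(A) = (-1)^swaps * (product of the diagonal of the echelon form).
    """
    U = np.array(A, dtype=float)
    n = U.shape[0]
    swaps = 0
    for j in range(n):
        p = j + np.argmax(np.abs(U[j:, j]))  # partial pivoting for stability
        if np.isclose(U[p, j], 0.0):
            return 0.0                       # no pivot in this column: det is 0
        if p != j:
            U[[j, p]] = U[[p, j]]            # row swap: flips the sign
            swaps += 1
        # Row replacements R_i = R_i - (U_ij / U_jj) R_j leave det unchanged.
        U[j+1:, j:] -= np.outer(U[j+1:, j] / U[j, j], U[j, j:])
    return (-1) ** swaps * np.prod(np.diag(U))

# The 3x3 matrix from the example below:
A = [[0, -7, -4], [2, 4, 6], [3, 7, -1]]
print(det_by_row_reduction(A))   # -148.0 (up to rounding)
print(np.linalg.det(A))          # agrees
```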
Compute \(\det\begin{pmatrix}0&-7&-4\\2&4&6\\3&7&-1\end{pmatrix}\).
Solution
We row reduce the matrix, keeping track of the number of row swaps and of the scaling factors used.
\[
\begin{pmatrix}0&-7&-4\\2&4&6\\3&7&-1\end{pmatrix}
\xrightarrow{R_1 \leftrightarrow R_2}
\begin{pmatrix}2&4&6\\0&-7&-4\\3&7&-1\end{pmatrix}
\quad (r = 1)
\]
\[
\xrightarrow{R_1 = R_1 \div 2}
\begin{pmatrix}1&2&3\\0&-7&-4\\3&7&-1\end{pmatrix}
\quad \left(\text{scaling factors} = \tfrac12\right)
\xrightarrow{R_3 = R_3 - 3R_1}
\begin{pmatrix}1&2&3\\0&-7&-4\\0&1&-10\end{pmatrix}
\]
\[
\xrightarrow{R_2 \leftrightarrow R_3}
\begin{pmatrix}1&2&3\\0&1&-10\\0&-7&-4\end{pmatrix}
\quad (r = 2)
\xrightarrow{R_3 = R_3 + 7R_2}
\begin{pmatrix}1&2&3\\0&1&-10\\0&0&-74\end{pmatrix}
\]
We made two row swaps and scaled once by a factor of 1/2, so the Recipe: Computing Determinants by Row Reducing says that
\[
\det\begin{pmatrix}0&-7&-4\\2&4&6\\3&7&-1\end{pmatrix} = (-1)^2 \cdot \frac{1 \cdot 1 \cdot (-74)}{1/2} = -148.
\]
Compute \(\det\begin{pmatrix}1&2&3\\2&-1&1\\3&0&1\end{pmatrix}\).
Solution
We row reduce the matrix, keeping track of the number of row swaps and of the scaling factors used.
\[
\begin{pmatrix}1&2&3\\2&-1&1\\3&0&1\end{pmatrix}
\xrightarrow{\substack{R_2 = R_2 - 2R_1\\ R_3 = R_3 - 3R_1}}
\begin{pmatrix}1&2&3\\0&-5&-5\\0&-6&-8\end{pmatrix}
\xrightarrow{R_2 = R_2 \div (-5)}
\begin{pmatrix}1&2&3\\0&1&1\\0&-6&-8\end{pmatrix}
\quad \left(\text{scaling factors} = -\tfrac15\right)
\xrightarrow{R_3 = R_3 + 6R_2}
\begin{pmatrix}1&2&3\\0&1&1\\0&0&-2\end{pmatrix}
\]
We did not make any row swaps, and we scaled once by a factor of −1/5, so the Recipe: Computing Determinants by Row Reducing says that
\[
\det\begin{pmatrix}1&2&3\\2&-1&1\\3&0&1\end{pmatrix} = \frac{1 \cdot 1 \cdot (-2)}{-1/5} = 10.
\]
Let us use the Recipe: Computing Determinants by Row Reducing to compute the determinant of a general 2×2 matrix \(A=\begin{pmatrix}a&b\\c&d\end{pmatrix}\).
- If \(a=0\), then
\[
\det\begin{pmatrix}a&b\\c&d\end{pmatrix} = \det\begin{pmatrix}0&b\\c&d\end{pmatrix} = -\det\begin{pmatrix}c&d\\0&b\end{pmatrix} = -bc.
\]
- If \(a\neq 0\), then
\[
\det\begin{pmatrix}a&b\\c&d\end{pmatrix} = a\cdot\det\begin{pmatrix}1&b/a\\c&d\end{pmatrix} = a\cdot\det\begin{pmatrix}1&b/a\\0&d-c\cdot b/a\end{pmatrix} = a\cdot 1\cdot(d - bc/a) = ad - bc.
\]
In either case, we recover the formula of Definition 3.5.2 in Section 3.5:
\[
\det\begin{pmatrix}a&b\\c&d\end{pmatrix} = ad - bc.
\]
If a matrix is already in row echelon form, then you can simply read off the determinant as the product of the diagonal entries. It turns out this is true for a slightly larger class of matrices called triangular.
- The diagonal entries of a matrix A are the entries \(a_{11}, a_{22}, \ldots\):
Figure 4.1.1
- A square matrix is called upper-triangular if its nonzero entries all lie on or above the diagonal, and it is called lower-triangular if its nonzero entries all lie on or below the diagonal. It is called diagonal if all of its nonzero entries lie on the diagonal, i.e., if it is both upper-triangular and lower-triangular.
Figure 4.1.2
Let A be an n×n matrix.
- If A has a zero row or column, then det(A)=0.
- If A is upper-triangular or lower-triangular, then det(A) is the product of its diagonal entries.
Proof
- Suppose that A has a zero row. Let B be the matrix obtained by negating the zero row. Then det(A)=−det(B) by the second defining property, Definition 4.1.1. But A=B, so det(A)=det(B):
\[
\begin{pmatrix}1&2&3\\0&0&0\\7&8&9\end{pmatrix}
\xrightarrow{R_2 = -R_2}
\begin{pmatrix}1&2&3\\0&0&0\\7&8&9\end{pmatrix}.
\]
Putting these together yields det(A)=−det(A), so det(A)=0.
Now suppose that A has a zero column. Then A is not invertible by Theorem 3.6.1 in Section 3.6, so its reduced row echelon form has a zero row. Since row operations do not change whether the determinant is zero, we conclude det(A)=0.
- First suppose that A is upper-triangular, and that one of the diagonal entries is zero, say \(a_{ii}=0\). We can perform row operations to clear the entries above the nonzero diagonal entries:
\[
\begin{pmatrix}a_{11}&\star&\star&\star\\0&a_{22}&\star&\star\\0&0&0&\star\\0&0&0&a_{44}\end{pmatrix}
\longrightarrow
\begin{pmatrix}a_{11}&0&\star&0\\0&a_{22}&\star&0\\0&0&0&0\\0&0&0&a_{44}\end{pmatrix}
\]
In the resulting matrix, the ith row is zero, so det(A)=0 by the first part.
Still assuming that A is upper-triangular, now suppose that all of the diagonal entries of A are nonzero. Then A can be transformed to the identity matrix by scaling the diagonal entries and then doing row replacements:
\[
\underset{\det = abc}{\begin{pmatrix}a&\star&\star\\0&b&\star\\0&0&c\end{pmatrix}}
\xrightarrow{\text{scale by } a^{-1},\, b^{-1},\, c^{-1}}
\underset{\det = 1}{\begin{pmatrix}1&\star&\star\\0&1&\star\\0&0&1\end{pmatrix}}
\xrightarrow{\text{row replacements}}
\underset{\det = 1}{\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}}
\]
Since det(In)=1 and we scaled by the reciprocals of the diagonal entries, this implies det(A) is the product of the diagonal entries.
The same argument works for lower-triangular matrices, except that the row replacements go down instead of up.
Compute the determinants of these matrices:
\[
\begin{pmatrix}1&2&3\\0&4&5\\0&0&6\end{pmatrix}, \qquad
\begin{pmatrix}-20&0&0\\\pi&0&0\\100&3&-7\end{pmatrix}, \qquad
\begin{pmatrix}17&-3&4\\0&0&0\\11/2&1&e\end{pmatrix}.
\]
Solution
The first matrix is upper-triangular, the second is lower-triangular, and the third has a zero row:
\[
\det\begin{pmatrix}1&2&3\\0&4&5\\0&0&6\end{pmatrix} = 1\cdot 4\cdot 6 = 24,
\qquad
\det\begin{pmatrix}-20&0&0\\\pi&0&0\\100&3&-7\end{pmatrix} = (-20)\cdot 0\cdot(-7) = 0,
\qquad
\det\begin{pmatrix}17&-3&4\\0&0&0\\11/2&1&e\end{pmatrix} = 0.
\]
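In code, this proposition means there is nothing to row reduce for a triangular matrix: the determinant is just the product of the diagonal. A quick NumPy check on the first matrix above (a sketch, using the built-in np.linalg.det for comparison):

```python
import numpy as np

A = np.array([[1.0, 2, 3],
              [0,   4, 5],
              [0,   0, 6]])
print(np.prod(np.diag(A)))   # 24.0, the product of the diagonal entries
print(np.linalg.det(A))      # 24.0 (up to rounding), as the proposition predicts
```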
A matrix can always be transformed into row echelon form by a series of row operations, and a matrix in row echelon form is upper-triangular. Therefore, we have completely justified Recipe: Computing Determinants by Row Reducing for computing the determinant.
The determinant is characterized by its defining properties, Definition 4.1.1, since we can compute the determinant of any matrix using row reduction, as in the Recipe: Computing Determinants by Row Reducing above. However, we have not yet proved the existence of a function satisfying the defining properties! Row reducing will compute the determinant if it exists, but we cannot use row reduction to prove existence, because we do not yet know that row reducing in two different ways always produces the same number.
There exists one and only one function from the set of square matrices to the real numbers that satisfies the four defining properties, Definition 4.1.1.
We will prove the existence theorem in Section 4.2, by exhibiting a recursive formula for the determinant. Again, the real content of the existence theorem is:
No matter which row operations you do, you will always compute the same value for the determinant.
Magical Properties of the Determinant
In this subsection, we will discuss a number of the amazing properties enjoyed by the determinant: the invertibility property, Proposition 4.1.2, the multiplicativity property, Proposition 4.1.3, and the transpose property, Proposition 4.1.4.
A square matrix A is invertible if and only if det(A)≠0.
Proof
If A is invertible, then it has a pivot in every row and column by Theorem 3.6.1 in Section 3.6, so its reduced row echelon form is the identity matrix. Since row operations do not change whether the determinant is zero, and since det(In)=1, this implies det(A)≠0. Conversely, if A is not invertible, then it is row equivalent to a matrix with a zero row. Again, row operations do not change whether the determinant is zero, so in this case det(A)=0.
By the invertibility property, a matrix that does not satisfy any of the properties of Theorem 3.6.1 in Section 3.6 has zero determinant.
Let A be a square matrix. If the rows or columns of A are linearly dependent, then det(A)=0.
Proof
If the columns of A are linearly dependent, then A is not invertible by condition 4 of Theorem 3.6.1 in Section 3.6. Suppose now that the rows of A are linearly dependent. If \(r_1, r_2, \ldots, r_n\) are the rows of A, then one of the rows is in the span of the others, so we have an equation like
\[
r_2 = 3r_1 - r_3 + 2r_4.
\]
If we perform the following row operations on A:
\[
R_2 = R_2 - 3R_1;\quad R_2 = R_2 + R_3;\quad R_2 = R_2 - 2R_4,
\]
then the second row of the resulting matrix is zero. Hence A is not invertible in this case either.
Alternatively, if the rows of A are linearly dependent, then one can combine condition 4 of Theorem 3.6.1 in Section 3.6 and the transpose property, Proposition 4.1.4 below, to conclude that det(A)=0.
In particular, if two rows/columns of A are multiples of each other, then det(A)=0. We also recover the fact that a matrix with a row or column of zeros has determinant zero.
The following matrices all have zero determinant:
\[
\begin{pmatrix}0&2&-1\\0&5&10\\0&-7&3\end{pmatrix},\qquad
\begin{pmatrix}5&-15&11\\3&-9&22\\2&-6&16\end{pmatrix},\qquad
\begin{pmatrix}3&1&2&4\\0&0&0&0\\4&2&5&1\\2&-13&4&8\end{pmatrix},\qquad
\begin{pmatrix}\pi&e&11\\3\pi&3e&33\\12&-7&2\end{pmatrix}.
\]
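A quick numerical sanity check (a minimal NumPy sketch; np.isclose is used because floating-point determinants are only approximately zero):

```python
import numpy as np

mats = [
    np.array([[0, 2, -1], [0, 5, 10], [0, -7, 3]], float),             # zero column
    np.array([[5, -15, 11], [3, -9, 22], [2, -6, 16]], float),         # column 2 = -3 * column 1
    np.array([[3, 1, 2, 4], [0, 0, 0, 0],
              [4, 2, 5, 1], [2, -13, 4, 8]], float),                   # zero row
    np.array([[np.pi, np.e, 11], [3*np.pi, 3*np.e, 33], [12, -7, 2]])  # row 2 = 3 * row 1
]
print([bool(np.isclose(np.linalg.det(M), 0)) for M in mats])   # [True, True, True, True]
```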
The proofs of the multiplicativity property, Proposition 4.1.3, and the transpose property, Proposition 4.1.4, below, as well as the cofactor expansion theorem, Theorem 4.2.1 in Section 4.2, and the determinants and volumes theorem, Theorem 4.3.2 in Section 4.3, use the following strategy: define another function d:{n×n matrices}→R, and prove that d satisfies the same four defining properties as the determinant. By the existence theorem, Theorem 4.1.1, the function d is equal to the determinant. This is an advantage of defining a function via its properties: in order to prove it is equal to another function, one only has to check the defining properties.
If A and B are n×n matrices, then
\[
\det(AB) = \det(A)\det(B).
\]
Proof
In this proof, we need to use the notion of an elementary matrix. This is a matrix obtained by doing one row operation to the identity matrix. There are three kinds of elementary matrices: those arising from row replacement, row scaling, and row swaps:
\[
\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}
\xrightarrow{R_2 = R_2 - 2R_1}
\begin{pmatrix}1&0&0\\-2&1&0\\0&0&1\end{pmatrix}
\qquad
\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}
\xrightarrow{R_1 = 3R_1}
\begin{pmatrix}3&0&0\\0&1&0\\0&0&1\end{pmatrix}
\qquad
\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}
\xrightarrow{R_1 \leftrightarrow R_2}
\begin{pmatrix}0&1&0\\1&0&0\\0&0&1\end{pmatrix}
\]
The important property of elementary matrices is the following claim.
Claim: If E is the elementary matrix for a row operation, then EA is the matrix obtained by performing the same row operation on A.
In other words, left-multiplication by an elementary matrix applies a row operation. For example,
\[
\begin{pmatrix}1&0&0\\-2&1&0\\0&0&1\end{pmatrix}
\begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}
=
\begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}-2a_{11}&a_{22}-2a_{12}&a_{23}-2a_{13}\\a_{31}&a_{32}&a_{33}\end{pmatrix}
\]
\[
\begin{pmatrix}3&0&0\\0&1&0\\0&0&1\end{pmatrix}
\begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}
=
\begin{pmatrix}3a_{11}&3a_{12}&3a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}
\]
\[
\begin{pmatrix}0&1&0\\1&0&0\\0&0&1\end{pmatrix}
\begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}
=
\begin{pmatrix}a_{21}&a_{22}&a_{23}\\a_{11}&a_{12}&a_{13}\\a_{31}&a_{32}&a_{33}\end{pmatrix}.
\]
The proof of the Claim is by direct calculation; we leave it to the reader to generalize the above equalities to n×n matrices.
As a consequence of the Claim and the four defining properties, Definition 4.1.1, we have the following observation. Let C be any square matrix.
- If E is the elementary matrix for a row replacement, then det(EC)=det(C). In other words, left-multiplication by E does not change the determinant.
- If E is the elementary matrix for a row scale by a factor of c, then det(EC)=cdet(C). In other words, left-multiplication by E scales the determinant by a factor of c.
- If E is the elementary matrix for a row swap, then det(EC)=−det(C). In other words, left-multiplication by E negates the determinant.
Now we turn to the proof of the multiplicativity property. Suppose to begin that B is not invertible. Then AB is also not invertible: otherwise, (AB)−1AB=In implies B−1=(AB)−1A. By the invertibility property, Proposition 4.1.2, both sides of the equation det(AB)=det(A)det(B) are zero.
Now assume that B is invertible, so det(B)≠0. Define a function
\[
d\colon \{n\times n \text{ matrices}\} \to \mathbb{R} \qquad\text{by}\qquad d(C) = \frac{\det(CB)}{\det(B)}.
\]
We claim that d satisfies the four defining properties of the determinant.
- Let C′ be the matrix obtained by doing a row replacement on C, and let E be the elementary matrix for this row replacement, so C′=EC. Since left-multiplication by E does not change the determinant, we have det(ECB)=det(CB), so
\[
d(C') = \frac{\det(C'B)}{\det(B)} = \frac{\det(ECB)}{\det(B)} = \frac{\det(CB)}{\det(B)} = d(C).
\]
- Let C′ be the matrix obtained by scaling a row of C by a factor of c, and let E be the elementary matrix for this row scaling, so C′=EC. Since left-multiplication by E scales the determinant by a factor of c, we have det(ECB)=c det(CB), so
\[
d(C') = \frac{\det(C'B)}{\det(B)} = \frac{\det(ECB)}{\det(B)} = \frac{c\det(CB)}{\det(B)} = c\cdot d(C).
\]
- Let C′ be the matrix obtained by swapping two rows of C, and let E be the elementary matrix for this row swap, so C′=EC. Since left-multiplication by E negates the determinant, we have det(ECB)=−det(CB), so
\[
d(C') = \frac{\det(C'B)}{\det(B)} = \frac{\det(ECB)}{\det(B)} = \frac{-\det(CB)}{\det(B)} = -d(C).
\]
- We have
\[
d(I_n) = \frac{\det(I_nB)}{\det(B)} = \frac{\det(B)}{\det(B)} = 1.
\]
Since d satisfies the four defining properties of the determinant, it is equal to the determinant by the existence theorem, Theorem 4.1.1. In other words, for all matrices A, we have
\[
\det(A) = d(A) = \frac{\det(AB)}{\det(B)}.
\]
Multiplying through by det(B) gives det(A)det(B)=det(AB).
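The multiplicativity property is easy to spot-check numerically; here is a minimal NumPy sketch with random matrices (a check, not a proof, of course):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
# det(AB) = det(A) det(B), up to floating-point rounding:
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True
```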
Recall that taking a power of a square matrix A means taking products of A with itself:
\[
A^2 = AA,\qquad A^3 = AAA,\qquad \text{etc.}
\]
If A is invertible, then we define
\[
A^{-2} = A^{-1}A^{-1},\qquad A^{-3} = A^{-1}A^{-1}A^{-1},\qquad \text{etc.}
\]
For completeness, we set \(A^0 = I_n\) if \(A \neq 0\).
If A is a square matrix, then
\[
\det(A^n) = \det(A)^n
\]
for all \(n \geq 1\). If A is invertible, then the equation holds for all \(n \leq 0\) as well; in particular,
\[
\det(A^{-1}) = \frac{1}{\det(A)}.
\]
Proof
Using the multiplicativity property, Proposition 4.1.3, we compute
\[
\det(A^2) = \det(AA) = \det(A)\det(A) = \det(A)^2
\]
and
\[
\det(A^3) = \det(AAA) = \det(A)\det(AA) = \det(A)\det(A)\det(A) = \det(A)^3;
\]
the pattern is clear.
We have
\[
1 = \det(I_n) = \det(AA^{-1}) = \det(A)\det(A^{-1})
\]
by the multiplicativity property, Proposition 4.1.3, and the fourth defining property, Definition 4.1.1, which shows that \(\det(A^{-1}) = \det(A)^{-1}\). Thus
\[
\det(A^{-2}) = \det(A^{-1}A^{-1}) = \det(A^{-1})\det(A^{-1}) = \det(A^{-1})^2 = \det(A)^{-2},
\]
and so on.
Compute \(\det(A^{100})\), where
\[
A = \begin{pmatrix}4&1\\2&1\end{pmatrix}.
\]
Solution
We have \(\det(A) = 4 - 2 = 2\), so
\[
\det(A^{100}) = \det(A)^{100} = 2^{100}.
\]
Nowhere did we have to compute the 100th power of A! (We will learn an efficient way to do that in Section 5.4.)
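Here is a quick numerical sanity check of the power property for a modest exponent, where floating-point arithmetic is reliable (a NumPy sketch):

```python
import numpy as np

A = np.array([[4.0, 1], [2, 1]])
print(np.linalg.det(A))   # 2.0 (up to rounding)
# det(A^5) = det(A)^5 = 32, verified directly:
print(np.isclose(np.linalg.det(np.linalg.matrix_power(A, 5)), 2.0 ** 5))  # True
# For A^100 the entries are astronomically large, but the property
# still tells us det(A^100) = 2^100 without forming A^100 at all.
```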
Here is another application of the multiplicativity property, Proposition 4.1.3.
Let \(A_1, A_2, \ldots, A_k\) be n×n matrices. Then the product \(A_1A_2\cdots A_k\) is invertible if and only if each \(A_i\) is invertible.
Proof
The determinant of the product is the product of the determinants by the multiplicativity property, Proposition 4.1.3:
\[
\det(A_1A_2\cdots A_k) = \det(A_1)\det(A_2)\cdots\det(A_k).
\]
By the invertibility property, Proposition 4.1.2, this is nonzero if and only if \(A_1A_2\cdots A_k\) is invertible. On the other hand, \(\det(A_1)\det(A_2)\cdots\det(A_k)\) is nonzero if and only if each \(\det(A_i)\neq 0\), which means each \(A_i\) is invertible.
For any number n we define
\[
A_n = \begin{pmatrix}1&n\\1&2\end{pmatrix}.
\]
Show that the product
\(A_1A_2A_3A_4A_5\)
is not invertible.
Solution
When n=2, the matrix \(A_2\) is not invertible, because its rows are identical:
\[
A_2 = \begin{pmatrix}1&2\\1&2\end{pmatrix}.
\]
Hence any product involving \(A_2\) is not invertible.
In order to state the transpose property, we need to define the transpose of a matrix.
The transpose of an m×n matrix A is the n×m matrix \(A^T\) whose rows are the columns of A. In other words, the ij entry of \(A^T\) is \(a_{ji}\).
Figure 4.1.3
Like inversion, transposition reverses the order of matrix multiplication.
Let A be an m×n matrix, and let B be an n×p matrix. Then
\[
(AB)^T = B^TA^T.
\]
Proof
First suppose that A is a row vector and B is a column vector, i.e., m=p=1. Then
\[
AB = \begin{pmatrix}a_1&a_2&\cdots&a_n\end{pmatrix}\begin{pmatrix}b_1\\b_2\\\vdots\\b_n\end{pmatrix}
= a_1b_1 + a_2b_2 + \cdots + a_nb_n
= \begin{pmatrix}b_1&b_2&\cdots&b_n\end{pmatrix}\begin{pmatrix}a_1\\a_2\\\vdots\\a_n\end{pmatrix}
= B^TA^T.
\]
Now we use the row-column rule for matrix multiplication. Let \(r_1, r_2, \ldots, r_m\) be the rows of A, and let \(c_1, c_2, \ldots, c_p\) be the columns of B, so
\[
AB = \begin{pmatrix}\text{---}\,r_1\,\text{---}\\\text{---}\,r_2\,\text{---}\\\vdots\\\text{---}\,r_m\,\text{---}\end{pmatrix}
\begin{pmatrix}|&|&&|\\c_1&c_2&\cdots&c_p\\|&|&&|\end{pmatrix}
= \begin{pmatrix}r_1c_1&r_1c_2&\cdots&r_1c_p\\r_2c_1&r_2c_2&\cdots&r_2c_p\\\vdots&\vdots&&\vdots\\r_mc_1&r_mc_2&\cdots&r_mc_p\end{pmatrix}.
\]
By the case we handled above, we have \(r_ic_j = c_j^Tr_i^T\). Then
\[
(AB)^T = \begin{pmatrix}r_1c_1&r_2c_1&\cdots&r_mc_1\\r_1c_2&r_2c_2&\cdots&r_mc_2\\\vdots&\vdots&&\vdots\\r_1c_p&r_2c_p&\cdots&r_mc_p\end{pmatrix}
= \begin{pmatrix}c_1^Tr_1^T&c_1^Tr_2^T&\cdots&c_1^Tr_m^T\\c_2^Tr_1^T&c_2^Tr_2^T&\cdots&c_2^Tr_m^T\\\vdots&\vdots&&\vdots\\c_p^Tr_1^T&c_p^Tr_2^T&\cdots&c_p^Tr_m^T\end{pmatrix}
= \begin{pmatrix}\text{---}\,c_1^T\,\text{---}\\\text{---}\,c_2^T\,\text{---}\\\vdots\\\text{---}\,c_p^T\,\text{---}\end{pmatrix}
\begin{pmatrix}|&|&&|\\r_1^T&r_2^T&\cdots&r_m^T\\|&|&&|\end{pmatrix}
= B^TA^T.
\]
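This reversal rule is easy to spot-check numerically; here is a minimal NumPy sketch with random rectangular matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))   # an m x n matrix
B = rng.standard_normal((3, 4))   # an n x p matrix
# Transposition reverses the order of the product:
print(np.allclose((A @ B).T, B.T @ A.T))   # True
```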
For any square matrix A, we have
\[
\det(A) = \det(A^T).
\]
Proof
We follow the same strategy as in the proof of the multiplicativity property, Proposition 4.1.3: namely, we define
d(A)=det(AT),
and we show that d satisfies the four defining properties of the determinant. Again we use elementary matrices, also introduced in the proof of the multiplicativity property, Proposition 4.1.3.
- Let C′ be the matrix obtained by doing a row replacement on C, and let E be the elementary matrix for this row replacement, so C′=EC. The elementary matrix for a row replacement is either upper-triangular or lower-triangular, with ones on the diagonal:
\[
R_1 = R_1 + 3R_3\colon \begin{pmatrix}1&0&3\\0&1&0\\0&0&1\end{pmatrix}
\qquad
R_3 = R_3 + 3R_1\colon \begin{pmatrix}1&0&0\\0&1&0\\3&0&1\end{pmatrix}.
\]
It follows that \(E^T\) is also either upper-triangular or lower-triangular, with ones on the diagonal, so \(\det(E^T)=1\) by Proposition 4.1.1. By Fact 4.1.1 and the multiplicativity property, Proposition 4.1.3,
\[
d(C') = \det((C')^T) = \det((EC)^T) = \det(C^TE^T) = \det(C^T)\det(E^T) = \det(C^T) = d(C).
\]
- Let C′ be the matrix obtained by scaling a row of C by a factor of c, and let E be the elementary matrix for this row scaling, so C′=EC. Then E is a diagonal matrix:
\[
R_2 = cR_2\colon \begin{pmatrix}1&0&0\\0&c&0\\0&0&1\end{pmatrix}.
\]
Thus \(\det(E^T)=c\). By Fact 4.1.1 and the multiplicativity property, Proposition 4.1.3,
\[
d(C') = \det((C')^T) = \det((EC)^T) = \det(C^TE^T) = \det(C^T)\det(E^T) = c\det(C^T) = c\cdot d(C).
\]
- Let C′ be the matrix obtained by swapping two rows of C, and let E be the elementary matrix for this row swap, so C′=EC. Then E is equal to its own transpose:
\[
R_1 \leftrightarrow R_2\colon \begin{pmatrix}0&1&0\\1&0&0\\0&0&1\end{pmatrix} = \begin{pmatrix}0&1&0\\1&0&0\\0&0&1\end{pmatrix}^T.
\]
Since E (hence \(E^T\)) is obtained by performing one row swap on the identity matrix, we have \(\det(E^T)=-1\). By Fact 4.1.1 and the multiplicativity property, Proposition 4.1.3,
\[
d(C') = \det((C')^T) = \det((EC)^T) = \det(C^TE^T) = \det(C^T)\det(E^T) = -\det(C^T) = -d(C).
\]
- Since \(I_n^T = I_n\), we have \(d(I_n) = \det(I_n^T) = \det(I_n) = 1\).
Since d satisfies the four defining properties of the determinant, it is equal to the determinant by the existence theorem, Theorem 4.1.1. In other words, for all matrices A, we have
\[
\det(A) = d(A) = \det(A^T).
\]
The transpose property, Proposition 4.1.4, is very useful. For concreteness, we note that det(A)=det(AT) means, for instance, that
\[
\det\begin{pmatrix}1&2&3\\4&5&6\\7&8&9\end{pmatrix} = \det\begin{pmatrix}1&4&7\\2&5&8\\3&6&9\end{pmatrix}.
\]
This implies that the determinant has the curious feature that it also behaves well with respect to column operations. Indeed, a column operation on A is the same as a row operation on AT, and det(A)=det(AT).
The determinant satisfies the following properties with respect to column operations:
- Doing a column replacement on A does not change det(A).
- Scaling a column of A by a scalar c multiplies the determinant by c.
- Swapping two columns of a matrix multiplies the determinant by −1.
The previous corollary makes it easier to compute the determinant: one is allowed to do row and column operations when simplifying the matrix. (Of course, one still has to keep track of how the row and column operations change the determinant.)
Compute \(\det\begin{pmatrix}2&7&4\\3&1&3\\4&0&1\end{pmatrix}\).
Solution
It takes fewer column operations than row operations to make this matrix upper-triangular:
\[
\begin{pmatrix}2&7&4\\3&1&3\\4&0&1\end{pmatrix}
\xrightarrow{C_1 = C_1 - 4C_3}
\begin{pmatrix}-14&7&4\\-9&1&3\\0&0&1\end{pmatrix}
\xrightarrow{C_1 = C_1 + 9C_2}
\begin{pmatrix}49&7&4\\0&1&3\\0&0&1\end{pmatrix}
\]
We performed two column replacements, which do not change the determinant; the resulting matrix is upper-triangular, so
\[
\det\begin{pmatrix}2&7&4\\3&1&3\\4&0&1\end{pmatrix} = \det\begin{pmatrix}49&7&4\\0&1&3\\0&0&1\end{pmatrix} = 49.
\]
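A quick check of this example, which also illustrates the transpose property (a NumPy sketch):

```python
import numpy as np

A = np.array([[2.0, 7, 4], [3, 1, 3], [4, 0, 1]])
print(np.linalg.det(A))     # 49.0 (up to rounding)
print(np.linalg.det(A.T))   # also 49.0, since det(A) = det(A^T)
```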
Multilinearity
The following observation is useful for theoretical purposes.
We can think of det as a function of the rows of a matrix:
\[
\det(v_1, v_2, \ldots, v_n) = \det\begin{pmatrix}\text{---}\,v_1\,\text{---}\\\text{---}\,v_2\,\text{---}\\\vdots\\\text{---}\,v_n\,\text{---}\end{pmatrix}.
\]
Let i be a whole number between 1 and n, and fix \(n-1\) vectors \(v_1, v_2, \ldots, v_{i-1}, v_{i+1}, \ldots, v_n\) in \(\mathbb{R}^n\). Then the transformation \(T\colon \mathbb{R}^n \to \mathbb{R}\) defined by
\[
T(x) = \det(v_1, v_2, \ldots, v_{i-1}, x, v_{i+1}, \ldots, v_n)
\]
is linear.
Proof
First assume that i=1, so
\[
T(x) = \det(x, v_2, \ldots, v_n).
\]
We have to show that T satisfies the defining properties, Definition 3.3.1, in Section 3.3.
- By the second defining property, Definition 4.1.1, scaling any row of a matrix by a number c scales the determinant by a factor of c. This implies that T satisfies the second property of a linear transformation, i.e., that
\[
T(cx) = \det(cx, v_2, \ldots, v_n) = c\det(x, v_2, \ldots, v_n) = cT(x).
\]
- We claim that T(v+w)=T(v)+T(w). If w is in \(\text{Span}\{v, v_2, \ldots, v_n\}\), then
\[
w = cv + c_2v_2 + \cdots + c_nv_n
\]
for some scalars \(c, c_2, \ldots, c_n\). Let A be the matrix with rows \(v+w, v_2, \ldots, v_n\), so T(v+w)=det(A). By performing the row operations
\[
R_1 = R_1 - c_2R_2;\quad R_1 = R_1 - c_3R_3;\quad \ldots;\quad R_1 = R_1 - c_nR_n,
\]
the first row of the matrix A becomes
\[
v + w - (c_2v_2 + \cdots + c_nv_n) = v + cv = (1+c)v.
\]
Therefore,
\[
T(v+w) = \det(A) = \det((1+c)v, v_2, \ldots, v_n) = (1+c)\det(v, v_2, \ldots, v_n) = T(v) + cT(v) = T(v) + T(cv).
\]
Doing the opposite row operations
\[
R_1 = R_1 + c_2R_2;\quad R_1 = R_1 + c_3R_3;\quad \ldots;\quad R_1 = R_1 + c_nR_n
\]
to the matrix with rows \(cv, v_2, \ldots, v_n\) shows that
\[
T(cv) = \det(cv, v_2, \ldots, v_n) = \det(cv + c_2v_2 + \cdots + c_nv_n, v_2, \ldots, v_n) = \det(w, v_2, \ldots, v_n) = T(w),
\]
which finishes the proof of the first property in this case.
Now suppose that w is not in \(\text{Span}\{v, v_2, \ldots, v_n\}\). This implies that \(\{v, v_2, \ldots, v_n\}\) is linearly dependent (otherwise it would form a basis for \(\mathbb{R}^n\), and w would lie in its span), so T(v)=0. If v is not in \(\text{Span}\{v_2, \ldots, v_n\}\), then \(\{v_2, \ldots, v_n\}\) is linearly dependent by the increasing span criterion, Theorem 2.5.2 in Section 2.5, so T(x)=0 for all x, as the matrix with rows \(x, v_2, \ldots, v_n\) is not invertible; in particular T(v+w)=0=T(v)+T(w). Hence we may assume v is in \(\text{Span}\{v_2, \ldots, v_n\}\). By the above argument with the roles of v and w reversed, we have T(v+w)=T(v)+T(w).
For \(i\neq 1\), we note that
\[
T(x) = \det(v_1, v_2, \ldots, v_{i-1}, x, v_{i+1}, \ldots, v_n) = -\det(x, v_2, \ldots, v_{i-1}, v_1, v_{i+1}, \ldots, v_n).
\]
By the previously handled case, we know that \(-T\) is linear:
\[
-T(cx) = -cT(x) \qquad\qquad -T(v+w) = -T(v) - T(w).
\]
Multiplying both sides by \(-1\), we see that T is linear.
For example, we have
\[
\det\begin{pmatrix}\text{---}\,v_1\,\text{---}\\\text{---}\,av+bw\,\text{---}\\\text{---}\,v_3\,\text{---}\end{pmatrix}
= a\det\begin{pmatrix}\text{---}\,v_1\,\text{---}\\\text{---}\,v\,\text{---}\\\text{---}\,v_3\,\text{---}\end{pmatrix}
+ b\det\begin{pmatrix}\text{---}\,v_1\,\text{---}\\\text{---}\,w\,\text{---}\\\text{---}\,v_3\,\text{---}\end{pmatrix}.
\]
By the transpose property, Proposition 4.1.4, the determinant is also multilinear in the columns of a matrix:
\[
\det\begin{pmatrix}|&|&|\\v_1&av+bw&v_3\\|&|&|\end{pmatrix}
= a\det\begin{pmatrix}|&|&|\\v_1&v&v_3\\|&|&|\end{pmatrix}
+ b\det\begin{pmatrix}|&|&|\\v_1&w&v_3\\|&|&|\end{pmatrix}.
\]
In more theoretical treatments of the topic, where row reduction plays a secondary role, the defining properties of the determinant are often taken to be:
- The determinant det(A) is multilinear in the rows of A.
- If A has two identical rows, then det(A)=0.
- The determinant of the identity matrix is equal to one.
We have already shown that our four defining properties, Definition 4.1.1, imply these three. Conversely, we will prove that these three alternative properties imply our four, so that both sets of properties are equivalent.
Defining property 2 is just the second defining property of a linear transformation, Definition 3.3.1 in Section 3.3, applied in one row. Suppose that the rows of A are \(v_1, v_2, \ldots, v_n\). If we perform the row replacement \(R_i = R_i + cR_j\) on A, then the rows of our new matrix are \(v_1, v_2, \ldots, v_{i-1}, v_i + cv_j, v_{i+1}, \ldots, v_n\), so by linearity in the ith row,
\[
\begin{aligned}
\det(v_1, \ldots, v_{i-1},\, v_i + cv_j,\, v_{i+1}, \ldots, v_n)
&= \det(v_1, \ldots, v_{i-1}, v_i, v_{i+1}, \ldots, v_n) + c\det(v_1, \ldots, v_{i-1}, v_j, v_{i+1}, \ldots, v_n) \\
&= \det(v_1, \ldots, v_{i-1}, v_i, v_{i+1}, \ldots, v_n) = \det(A),
\end{aligned}
\]
where \(\det(v_1, \ldots, v_{i-1}, v_j, v_{i+1}, \ldots, v_n) = 0\) because \(v_j\) is repeated. Thus, the alternative defining properties imply our first two defining properties. For the third, suppose that we want to swap row i with row j. Using the second alternative defining property and multilinearity in the ith and jth rows, we have
\[
\begin{aligned}
0 &= \det(v_1, \ldots, v_i + v_j, \ldots, v_i + v_j, \ldots, v_n) \\
&= \det(v_1, \ldots, v_i, \ldots, v_i + v_j, \ldots, v_n) + \det(v_1, \ldots, v_j, \ldots, v_i + v_j, \ldots, v_n) \\
&= \det(v_1, \ldots, v_i, \ldots, v_i, \ldots, v_n) + \det(v_1, \ldots, v_i, \ldots, v_j, \ldots, v_n) \\
&\qquad + \det(v_1, \ldots, v_j, \ldots, v_i, \ldots, v_n) + \det(v_1, \ldots, v_j, \ldots, v_j, \ldots, v_n) \\
&= \det(v_1, \ldots, v_i, \ldots, v_j, \ldots, v_n) + \det(v_1, \ldots, v_j, \ldots, v_i, \ldots, v_n),
\end{aligned}
\]
as desired.
We have
\[
\begin{pmatrix}-1\\2\\3\end{pmatrix} = -\begin{pmatrix}1\\0\\0\end{pmatrix} + 2\begin{pmatrix}0\\1\\0\end{pmatrix} + 3\begin{pmatrix}0\\0\\1\end{pmatrix}.
\]
Therefore, by multilinearity in the first column,
\[
\det\begin{pmatrix}-1&7&2\\2&-3&2\\3&1&1\end{pmatrix}
= -\det\begin{pmatrix}1&7&2\\0&-3&2\\0&1&1\end{pmatrix}
+ 2\det\begin{pmatrix}0&7&2\\1&-3&2\\0&1&1\end{pmatrix}
+ 3\det\begin{pmatrix}0&7&2\\0&-3&2\\1&1&1\end{pmatrix}.
\]
This is the basic idea behind cofactor expansions in Section 4.2.
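A quick numerical check of the expansion above (a NumPy sketch):

```python
import numpy as np

M  = np.array([[-1.0, 7, 2], [2, -3, 2], [3, 1, 1]])
M1 = np.array([[ 1.0, 7, 2], [0, -3, 2], [0, 1, 1]])
M2 = np.array([[ 0.0, 7, 2], [1, -3, 2], [0, 1, 1]])
M3 = np.array([[ 0.0, 7, 2], [0, -3, 2], [1, 1, 1]])
lhs = np.linalg.det(M)
rhs = -np.linalg.det(M1) + 2 * np.linalg.det(M2) + 3 * np.linalg.det(M3)
print(np.isclose(lhs, rhs))   # True: det is linear in the first column
```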
- There is one and only one function \(\det\colon \{n\times n \text{ matrices}\} \to \mathbb{R}\) satisfying the four defining properties, Definition 4.1.1.
- The determinant of an upper-triangular or lower-triangular matrix is the product of the diagonal entries.
- A square matrix A is invertible if and only if det(A)≠0; in this case, \(\det(A^{-1}) = \frac{1}{\det(A)}\).
- If A and B are n×n matrices, then det(AB)=det(A)det(B).
- For any square matrix A, we have \(\det(A^T) = \det(A)\).
- The determinant can be computed by performing row and/or column operations.