
3.7: Multiple Eigenvalues


It may very well happen that a matrix has some “repeated” eigenvalues. That is, the characteristic equation $\det(A-\lambda I)=0$ may have repeated roots. As we have said before, this is actually unlikely to happen for a random matrix. If we take a small perturbation of $A$ (we change the entries of $A$ slightly), then we will get a matrix with distinct eigenvalues. As any system we will want to solve in practice is an approximation to reality anyway, it is not indispensable to know how to solve these corner cases. On the other hand, these cases do come up in applications from time to time. Furthermore, if we have distinct but very close eigenvalues, the behavior is similar to that of repeated eigenvalues, and so understanding that case will give us insight into what is going on.
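The splitting of a repeated eigenvalue under a small perturbation is easy to see numerically. The following is a quick NumPy sketch (not part of the original text, and the perturbation size is an arbitrary choice):

```python
import numpy as np

# A matrix with the eigenvalue 3 repeated twice.
A = np.array([[3.0, 1.0],
              [0.0, 3.0]])
print(np.linalg.eigvals(A))   # [3. 3.]

# Perturb a single entry slightly: the repeated eigenvalue splits
# into two distinct (but close) eigenvalues, roughly 3 +/- 0.001.
B = A.copy()
B[1, 0] = 1e-6
print(np.linalg.eigvals(B))
```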

Geometric Multiplicity

Take the diagonal matrix

\[ A = \begin{bmatrix} 3 & 0 \\ 0 & 3 \end{bmatrix} . \]

$A$ has an eigenvalue $3$ of multiplicity $2$. We call the multiplicity of the eigenvalue in the characteristic equation the algebraic multiplicity. In this case, there also exist $2$ linearly independent eigenvectors, $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$, corresponding to the eigenvalue $3$. This means that the so-called geometric multiplicity of this eigenvalue is also $2$.
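We can confirm this numerically. In the NumPy check below (an illustration, not part of the text), the columns of the matrix returned by `eig` are eigenvectors, and their rank counts how many are linearly independent, i.e. the geometric multiplicity:

```python
import numpy as np

A = np.diag([3.0, 3.0])
vals, vecs = np.linalg.eig(A)
print(vals)                          # [3. 3.]

# Columns of `vecs` are eigenvectors; their rank is the number of
# linearly independent ones: the geometric multiplicity of 3.
print(np.linalg.matrix_rank(vecs))   # 2
```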

In all the theorems where we required a matrix to have $n$ distinct eigenvalues, we only really needed to have $n$ linearly independent eigenvectors. For example, $\vec{x}' = A\vec{x}$ has the general solution

\[ \vec{x} = c_1 \begin{bmatrix} 1 \\ 0 \end{bmatrix} e^{3t} + c_2 \begin{bmatrix} 0 \\ 1 \end{bmatrix} e^{3t} . \]

Let us restate the theorem about real eigenvalues. In the following theorem we will repeat eigenvalues according to (algebraic) multiplicity. So for the above matrix A, we would say that it has eigenvalues 3 and 3.

Theorem 3.7.1

Take $\vec{x}' = P\vec{x}$. Suppose the matrix $P$ is $n \times n$, has $n$ real eigenvalues (not necessarily distinct) $\lambda_1, \ldots, \lambda_n$, and there are $n$ linearly independent corresponding eigenvectors $\vec{v}_1, \ldots, \vec{v}_n$. Then the general solution to $\vec{x}' = P\vec{x}$ can be written as:

\[ \vec{x} = c_1 \vec{v}_1 e^{\lambda_1 t} + c_2 \vec{v}_2 e^{\lambda_2 t} + \cdots + c_n \vec{v}_n e^{\lambda_n t} . \]

The geometric multiplicity of an eigenvalue of algebraic multiplicity n is equal to the number of corresponding linearly independent eigenvectors. The geometric multiplicity is always less than or equal to the algebraic multiplicity. We have handled the case when these two multiplicities are equal. If the geometric multiplicity is equal to the algebraic multiplicity, then we say the eigenvalue is complete.

In other words, the hypothesis of the theorem could be stated as saying that if all the eigenvalues of P are complete, then there are n linearly independent eigenvectors and thus we have the given general solution.

If the geometric multiplicity of an eigenvalue is $2$ or greater, then the set of linearly independent eigenvectors is not unique up to multiples as it was before. For example, for the diagonal matrix $A = \begin{bmatrix} 3 & 0 \\ 0 & 3 \end{bmatrix}$ we could also pick eigenvectors $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$, or in fact any pair of two linearly independent vectors. The number of linearly independent eigenvectors corresponding to $\lambda$ is the number of free variables we obtain when solving $A\vec{v} = \lambda\vec{v}$. We pick specific values for those free variables to obtain eigenvectors. If you pick different values, you may get different eigenvectors.

Defective Eigenvalues

If an $n \times n$ matrix has fewer than $n$ linearly independent eigenvectors, it is said to be deficient. Then there is at least one eigenvalue with an algebraic multiplicity that is higher than its geometric multiplicity. We call this eigenvalue defective, and the difference between the two multiplicities we call the defect.

Example 3.7.1

The matrix

\[ \begin{bmatrix} 3 & 1 \\ 0 & 3 \end{bmatrix} \]

has an eigenvalue $3$ of algebraic multiplicity $2$. Let us try to compute eigenvectors.

\[ \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \vec{0} \]

Solution

We must have that $v_2 = 0$. Hence any eigenvector is of the form $\begin{bmatrix} v_1 \\ 0 \end{bmatrix}$. Any two such vectors are linearly dependent, and hence the geometric multiplicity of the eigenvalue is $1$. Therefore, the defect is $1$, and we can no longer apply the eigenvalue method directly to a system of ODEs with such a coefficient matrix.
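The count of free variables is the dimension of the null space of $A - \lambda I$, i.e. $n$ minus the rank, which we can check numerically. A NumPy sketch (an illustration, not part of the text):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 3.0]])
lam, n = 3.0, 2

# Geometric multiplicity = n - rank(A - lam I): the number of free
# variables when solving (A - lam I) v = 0.
geom_mult = n - np.linalg.matrix_rank(A - lam * np.eye(n))
defect = 2 - geom_mult   # algebraic multiplicity 2 minus geometric
print(geom_mult, defect)  # 1 1
```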

Roughly, the key observation is that if $\lambda$ is an eigenvalue of $A$ of algebraic multiplicity $m$, then we can find certain $m$ linearly independent vectors solving $(A - \lambda I)^k \vec{v} = \vec{0}$ for various powers $k$. We will call these generalized eigenvectors.

Let us continue with the example $A = \begin{bmatrix} 3 & 1 \\ 0 & 3 \end{bmatrix}$ and the equation $\vec{x}' = A\vec{x}$. We have an eigenvalue $\lambda = 3$ of (algebraic) multiplicity $2$ and defect $1$. We have found one eigenvector $\vec{v}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$. We have the solution

\[ \vec{x}_1 = \vec{v}_1 e^{3t} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} e^{3t} . \]

We are now stuck: we get no other solutions from standard eigenvectors. But we need two linearly independent solutions to find the general solution of the equation.

In this case, let us try (in the spirit of repeated roots of the characteristic equation for a single equation) another solution of the form

\[ \vec{x}_2 = (\vec{v}_2 + \vec{v}_1 t) e^{3t} . \]

We differentiate to get

\[ \vec{x}_2' = \vec{v}_1 e^{3t} + 3(\vec{v}_2 + \vec{v}_1 t) e^{3t} = (3\vec{v}_2 + \vec{v}_1) e^{3t} + 3\vec{v}_1 t e^{3t} . \]

As we are assuming that $\vec{x}_2$ is a solution, $\vec{x}_2'$ must equal $A\vec{x}_2$. So let’s compute $A\vec{x}_2$:

\[ A\vec{x}_2 = A(\vec{v}_2 + \vec{v}_1 t) e^{3t} = A\vec{v}_2 e^{3t} + A\vec{v}_1 t e^{3t} . \]

By looking at the coefficients of $e^{3t}$ and $t e^{3t}$, we see $3\vec{v}_2 + \vec{v}_1 = A\vec{v}_2$ and $3\vec{v}_1 = A\vec{v}_1$. This means that

\[ (A - 3I)\vec{v}_2 = \vec{v}_1 , \qquad \text{and} \qquad (A - 3I)\vec{v}_1 = \vec{0} . \]

Therefore, $\vec{x}_2$ is a solution if these two equations are satisfied. The second equation is satisfied if $\vec{v}_1$ is an eigenvector, and we found the eigenvector above, so let $\vec{v}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$. So, if we can find a $\vec{v}_2$ that solves $(A - 3I)\vec{v}_2 = \vec{v}_1$, then we are done. This is just a bunch of linear equations to solve and we are by now very good at that. Let us solve $(A - 3I)\vec{v}_2 = \vec{v}_1$. Write

\[ \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} . \]

By inspection we see that letting $a = 0$ ($a$ could be anything in fact) and $b = 1$ does the job. Hence we can take $\vec{v}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$. Our general solution to $\vec{x}' = A\vec{x}$ is

\[ \vec{x} = c_1 \begin{bmatrix} 1 \\ 0 \end{bmatrix} e^{3t} + c_2 \left( \begin{bmatrix} 0 \\ 1 \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix} t \right) e^{3t} = \begin{bmatrix} c_1 e^{3t} + c_2 t e^{3t} \\ c_2 e^{3t} \end{bmatrix} . \]

Let us check that we really do have the solution. First, $x_1' = 3 c_1 e^{3t} + c_2 e^{3t} + 3 c_2 t e^{3t} = 3 x_1 + x_2$. Good. Now, $x_2' = 3 c_2 e^{3t} = 3 x_2$. Good.
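The same check can be carried out numerically: evaluate the closed-form solution and its derivative at a point and compare both sides of $\vec{x}' = A\vec{x}$. A NumPy sketch with arbitrarily chosen constants (not part of the text):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 3.0]])
c1, c2, t = 1.3, -0.7, 0.5   # arbitrary constants and time

# x(t) = [c1 e^{3t} + c2 t e^{3t}, c2 e^{3t}]
x = np.exp(3 * t) * np.array([c1 + c2 * t, c2])

# derivative of the closed form, computed by hand
dx = np.exp(3 * t) * np.array([3 * c1 + c2 + 3 * c2 * t, 3 * c2])

print(np.allclose(dx, A @ x))  # True: x' = A x holds
```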

Note that the system $\vec{x}' = A\vec{x}$ has a simpler solution since $A$ is a so-called upper triangular matrix, that is, every entry below the diagonal is zero. In particular, the equation for $x_2$ does not depend on $x_1$. Mind you, not every defective matrix is triangular.

Exercise 3.7.1

Solve $\vec{x}' = \begin{bmatrix} 3 & 1 \\ 0 & 3 \end{bmatrix} \vec{x}$ by first solving for $x_2$ and then for $x_1$ independently. Check that you got the same solution as we did above.

Let us describe the general algorithm. Suppose that $\lambda$ is an eigenvalue of multiplicity $2$, defect $1$. First find an eigenvector $\vec{v}_1$ of $\lambda$. Then, find a vector $\vec{v}_2$ such that

\[ (A - \lambda I)\vec{v}_2 = \vec{v}_1 . \]

This gives us two linearly independent solutions

\[ \begin{aligned} \vec{x}_1 &= \vec{v}_1 e^{\lambda t} , \\ \vec{x}_2 &= (\vec{v}_2 + \vec{v}_1 t) e^{\lambda t} . \end{aligned} \]
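This two-step algorithm is easy to carry out numerically. The sketch below (NumPy, not part of the original text) solves the singular system $(A - \lambda I)\vec{v}_2 = \vec{v}_1$ by least squares, which returns a particular solution whenever $\vec{v}_1$ lies in the column space of $A - \lambda I$:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 3.0]])
lam = 3.0
M = A - lam * np.eye(2)

v1 = np.array([1.0, 0.0])        # eigenvector found by hand above

# Solve (A - lam I) v2 = v1 for a generalized eigenvector.  M is
# singular, so use least squares; any particular solution works.
v2, *_ = np.linalg.lstsq(M, v1, rcond=None)

print(np.allclose(M @ v2, v1))   # True: v2 is a generalized eigenvector
```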

Example 3.7.2

Consider the system

\[ \vec{x}' = \begin{bmatrix} 2 & -5 & 0 \\ 0 & 2 & 0 \\ -1 & 4 & 1 \end{bmatrix} \vec{x} . \]

Compute the eigenvalues.

Solution

We compute

\[ 0 = \det(A - \lambda I) = \det \left( \begin{bmatrix} 2-\lambda & -5 & 0 \\ 0 & 2-\lambda & 0 \\ -1 & 4 & 1-\lambda \end{bmatrix} \right) = (2-\lambda)^2 (1-\lambda) . \]

The eigenvalues are $1$ and $2$, where $2$ has multiplicity $2$. We leave it to the reader to find that $\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$ is an eigenvector for the eigenvalue $\lambda = 1$.

Let’s focus on $\lambda = 2$. We compute eigenvectors:

\[ \vec{0} = (A - 2I)\vec{v} = \begin{bmatrix} 0 & -5 & 0 \\ 0 & 0 & 0 \\ -1 & 4 & -1 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} . \]

The first equation says that $v_2 = 0$, so the last equation is $-v_1 - v_3 = 0$. Let $v_3$ be the free variable to find that $v_1 = -v_3$. Perhaps let $v_3 = -1$ to find an eigenvector $\begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}$. The problem is that setting $v_3$ to anything else just gets multiples of this vector, and so we have a defect of $1$. Let $\vec{v}_1$ be the eigenvector and let’s look for a generalized eigenvector $\vec{v}_2$:

\[ (A - 2I)\vec{v}_2 = \vec{v}_1 , \qquad \text{or} \qquad \begin{bmatrix} 0 & -5 & 0 \\ 0 & 0 & 0 \\ -1 & 4 & -1 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix} , \]

where we used $a$, $b$, $c$ as components of $\vec{v}_2$ for simplicity. The first equation says $-5b = 1$, so $b = -\frac{1}{5}$. The second equation says nothing. The last equation is $-a + 4b - c = -1$, or $a + \frac{4}{5} + c = 1$, or $a + c = \frac{1}{5}$. We let $c$ be the free variable and we choose $c = 0$. We find $\vec{v}_2 = \begin{bmatrix} \frac{1}{5} \\ -\frac{1}{5} \\ 0 \end{bmatrix}$.

The general solution is therefore

\[ \vec{x} = c_1 \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} e^{t} + c_2 \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix} e^{2t} + c_3 \left( \begin{bmatrix} \frac{1}{5} \\ -\frac{1}{5} \\ 0 \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix} t \right) e^{2t} . \]
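We can verify the eigenvectors and the generalized eigenvector from this example numerically. A NumPy check (an illustration, not part of the text):

```python
import numpy as np

A = np.array([[ 2.0, -5.0, 0.0],
              [ 0.0,  2.0, 0.0],
              [-1.0,  4.0, 1.0]])
I = np.eye(3)

w  = np.array([0.0, 0.0, 1.0])      # eigenvector for lambda = 1
v1 = np.array([1.0, 0.0, -1.0])     # eigenvector for lambda = 2
v2 = np.array([0.2, -0.2, 0.0])     # generalized eigenvector [1/5, -1/5, 0]

print(np.allclose((A - 1 * I) @ w, 0))    # True
print(np.allclose((A - 2 * I) @ v1, 0))   # True
print(np.allclose((A - 2 * I) @ v2, v1))  # True
```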

This machinery can also be generalized to higher multiplicities and higher defects. We will not go over this method in detail, but let us just sketch the ideas. Suppose that A has an eigenvalue λ of multiplicity m. We find vectors such that

\[ (A - \lambda I)^k \vec{v} = \vec{0} , \qquad \text{but} \qquad (A - \lambda I)^{k-1} \vec{v} \neq \vec{0} . \]

Such vectors are called generalized eigenvectors (then $\vec{v}_1 = (A - \lambda I)^{k-1} \vec{v}$ is an eigenvector). For every eigenvector $\vec{v}_1$ we find a chain of generalized eigenvectors $\vec{v}_2$ through $\vec{v}_k$ such that:

\[ \begin{aligned} (A - \lambda I)\vec{v}_1 &= \vec{0} , \\ (A - \lambda I)\vec{v}_2 &= \vec{v}_1 , \\ &\;\;\vdots \\ (A - \lambda I)\vec{v}_k &= \vec{v}_{k-1} . \end{aligned} \]

Really, once you find the $\vec{v}_k$ such that $(A - \lambda I)^k \vec{v}_k = \vec{0}$ but $(A - \lambda I)^{k-1} \vec{v}_k \neq \vec{0}$, you find the entire chain since you can compute the rest: $\vec{v}_{k-1} = (A - \lambda I)\vec{v}_k$, $\vec{v}_{k-2} = (A - \lambda I)\vec{v}_{k-1}$, etc. We form the linearly independent solutions

\[ \begin{aligned} \vec{x}_1 &= \vec{v}_1 e^{\lambda t} , \\ \vec{x}_2 &= (\vec{v}_2 + \vec{v}_1 t) e^{\lambda t} , \\ &\;\;\vdots \\ \vec{x}_k &= \left( \vec{v}_k + \vec{v}_{k-1} t + \vec{v}_{k-2} \frac{t^2}{2} + \cdots + \vec{v}_2 \frac{t^{k-2}}{(k-2)!} + \vec{v}_1 \frac{t^{k-1}}{(k-1)!} \right) e^{\lambda t} . \end{aligned} \]

Recall that $k! = 1 \cdot 2 \cdot 3 \cdots (k-1) \cdot k$ is the factorial. If you have an eigenvalue of geometric multiplicity $\ell$, you will have to find $\ell$ such chains (some of them might be short: just the single eigenvector equation). We go until we form $m$ linearly independent solutions where $m$ is the algebraic multiplicity. We don’t quite know which specific eigenvectors go with which chain, so start by finding $\vec{v}_k$ first for the longest possible chain and go from there.

For example, if $\lambda$ is an eigenvalue of $A$ of algebraic multiplicity $3$ and defect $2$, then solve

\[ (A - \lambda I)\vec{v}_1 = \vec{0} , \qquad (A - \lambda I)\vec{v}_2 = \vec{v}_1 , \qquad (A - \lambda I)\vec{v}_3 = \vec{v}_2 . \]

That is, find $\vec{v}_3$ such that $(A - \lambda I)^3 \vec{v}_3 = \vec{0}$, but $(A - \lambda I)^2 \vec{v}_3 \neq \vec{0}$. Then you are done as $\vec{v}_2 = (A - \lambda I)\vec{v}_3$ and $\vec{v}_1 = (A - \lambda I)\vec{v}_2$. The $3$ linearly independent solutions are

\[ \vec{x}_1 = \vec{v}_1 e^{\lambda t} , \qquad \vec{x}_2 = (\vec{v}_2 + \vec{v}_1 t) e^{\lambda t} , \qquad \vec{x}_3 = \left( \vec{v}_3 + \vec{v}_2 t + \vec{v}_1 \frac{t^2}{2} \right) e^{\lambda t} . \]

If on the other hand $A$ has an eigenvalue $\lambda$ of algebraic multiplicity $3$ and defect $1$, then solve

\[ (A - \lambda I)\vec{v}_1 = \vec{0} , \qquad (A - \lambda I)\vec{v}_2 = \vec{0} , \qquad (A - \lambda I)\vec{v}_3 = \vec{v}_2 . \]

Here $\vec{v}_1$ and $\vec{v}_2$ are actual honest eigenvectors, and $\vec{v}_3$ is a generalized eigenvector. So there are two chains. To solve, first find a $\vec{v}_3$ such that $(A - \lambda I)^2 \vec{v}_3 = \vec{0}$, but $(A - \lambda I)\vec{v}_3 \neq \vec{0}$. Then $\vec{v}_2 = (A - \lambda I)\vec{v}_3$ is going to be an eigenvector. Then solve for an eigenvector $\vec{v}_1$ that is linearly independent from $\vec{v}_2$. You get $3$ linearly independent solutions

\[ \vec{x}_1 = \vec{v}_1 e^{\lambda t} , \qquad \vec{x}_2 = \vec{v}_2 e^{\lambda t} , \qquad \vec{x}_3 = (\vec{v}_3 + \vec{v}_2 t) e^{\lambda t} . \]
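The chain-building procedure sketched above can be automated. The following NumPy sketch is my own illustration (the function name `jordan_chain` and the tolerance are arbitrary choices, not from the text): it finds a vector in the null space of $(A - \lambda I)^k$ that is not in the null space of $(A - \lambda I)^{k-1}$, then multiplies down the chain.

```python
import numpy as np

def jordan_chain(A, lam, k, tol=1e-10):
    """Find v_k with (A - lam I)^k v_k = 0 but (A - lam I)^(k-1) v_k != 0,
    then return the chain [v_1, ..., v_k] with v_{j-1} = (A - lam I) v_j."""
    n = A.shape[0]
    M = A - lam * np.eye(n)
    # Rows of Vt past the numerical rank span the null space of M^k.
    _, s, Vt = np.linalg.svd(np.linalg.matrix_power(M, k))
    null_rows = Vt[np.sum(s > tol):]
    for v in null_rows:
        if np.linalg.norm(np.linalg.matrix_power(M, k - 1) @ v) > tol:
            chain = [v]
            for _ in range(k - 1):
                chain.append(M @ chain[-1])   # multiply down the chain
            return chain[::-1]                # v_1 (the eigenvector) first
    return None

# Try it on the defective 2x2 example with lambda = 3, defect 1.
A = np.array([[3.0, 1.0],
              [0.0, 3.0]])
chain = jordan_chain(A, 3.0, 2)
v1, v2 = chain
print(np.allclose((A - 3 * np.eye(2)) @ v1, 0))   # True: v1 is an eigenvector
print(np.allclose((A - 3 * np.eye(2)) @ v2, v1))  # True: (A - 3I) v2 = v1
```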


This page titled 3.7: Multiple Eigenvalues is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Jiří Lebl via source content that was edited to the style and standards of the LibreTexts platform.
