10.5: Constant Coefficient Homogeneous Systems II


We saw in Section 10.4 that if an n×n constant matrix A has n real eigenvalues λ1, λ2, …, λn (which need not be distinct) with associated linearly independent eigenvectors x1, x2, …, xn, then the general solution of y'=Ay is

{\bf y}=c_1{\bf x}_1e^{\lambda_1t}+c_2{\bf x}_2e^{\lambda_2t}+\cdots+c_n{\bf x}_ne^{\lambda_nt}.\nonumber

In this section we consider the case where A has n real eigenvalues, but does not have n linearly independent eigenvectors. It is shown in linear algebra that this occurs if and only if A has at least one eigenvalue of multiplicity r>1 such that the associated eigenspace has dimension less than r. In this case A is said to be defective. Since it is beyond the scope of this book to give a complete analysis of systems with defective coefficient matrices, we will restrict our attention to some commonly occurring special cases.

Example 10.5.1

Show that the system

{\bf y}'=\left[\begin{array}{cc}11&-25\\4&-9\end{array}\right]{\bf y}\nonumber

does not have a fundamental set of solutions of the form {x1 e^{λ1 t}, x2 e^{λ2 t}}, where λ1 and λ2 are eigenvalues of the coefficient matrix A and x1 and x2 are associated linearly independent eigenvectors.

Solution

The characteristic polynomial of A is

\left|\begin{array}{cc}11-\lambda&-25\\4&-9-\lambda\end{array}\right|=(\lambda-11)(\lambda+9)+100=\lambda^2-2\lambda+1=(\lambda-1)^2.\nonumber

Hence, λ=1 is the only eigenvalue of A. The augmented matrix of the system (A−I)x=0 is

\left[\begin{array}{cc|c}10&-25&0\\4&-10&0\end{array}\right],\nonumber

which is row equivalent to

\left[\begin{array}{cc|c}1&-\frac{5}{2}&0\\0&0&0\end{array}\right].\nonumber

Hence, x1 = 5x2/2, where x2 is arbitrary. Therefore all eigenvectors of A are scalar multiples of

{\bf x}_1=\left[\begin{array}{c}5\\2\end{array}\right],\nonumber

so A does not have a set of two linearly independent eigenvectors.
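Defectiveness is easy to check numerically. The following sketch (my own illustration, not part of the text) uses NumPy to confirm that the matrix of Example 10.5.1 has the repeated eigenvalue 1 but only a one-dimensional eigenspace:

```python
import numpy as np

# Coefficient matrix from Example 10.5.1
A = np.array([[11.0, -25.0],
              [4.0,  -9.0]])

# Both eigenvalues are 1 (up to roundoff)
print(np.linalg.eigvals(A))

# rank(A - I) = 1, so the nullspace of A - I (the eigenspace of
# lambda = 1) is one-dimensional: A is defective
print(np.linalg.matrix_rank(A - np.eye(2)))

# x = [5, 2] spans that eigenspace: (A - I) x = 0
x = np.array([5.0, 2.0])
print((A - np.eye(2)) @ x)
```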

From Example 10.5.1, we know that all scalar multiples of

{\bf y}_1=\left[\begin{array}{c}5\\2\end{array}\right]e^t\nonumber

are solutions of this system; however, to find the general solution we must find a second solution y2 such that {y1, y2} is linearly independent. Based on your recollection of the procedure for solving a constant coefficient scalar equation

ay''+by'+cy=0\nonumber

in the case where the characteristic polynomial has a repeated root, you might expect to obtain a second solution by multiplying the first solution by t. However, this yields

{\bf y}_2=\left[\begin{array}{c}5\\2\end{array}\right]te^t,\nonumber

which does not work, since

{\bf y}_2'=\left[\begin{array}{c}5\\2\end{array}\right](te^t+e^t),\quad\text{while}\quad\left[\begin{array}{cc}11&-25\\4&-9\end{array}\right]{\bf y}_2=\left[\begin{array}{c}5\\2\end{array}\right]te^t.\nonumber

The next theorem shows what to do in this situation.

Theorem 10.5.1

Suppose the n×n matrix A has an eigenvalue λ1 of multiplicity 2 and the associated eigenspace has dimension 1; that is, all λ1-eigenvectors of A are scalar multiples of an eigenvector x. Then there are infinitely many vectors u such that

(A-\lambda_1I){\bf u}={\bf x}.\nonumber

Moreover, if u is any such vector then

{\bf y}_1={\bf x}e^{\lambda_1t}\quad\text{and}\quad{\bf y}_2={\bf u}e^{\lambda_1t}+{\bf x}te^{\lambda_1t}\nonumber

are linearly independent solutions of y'=Ay.

A complete proof of this theorem is beyond the scope of this book. The difficulty is in proving that there’s a vector u satisfying (A−λ1I)u = x, since det(A−λ1I)=0. We’ll take this without proof and verify the other assertions of the theorem. We already know that y1 is a solution of y'=Ay. To see that y2 is also a solution, we compute

{\bf y}_2'-A{\bf y}_2=\lambda_1{\bf u}e^{\lambda_1t}+{\bf x}e^{\lambda_1t}+\lambda_1{\bf x}te^{\lambda_1t}-A{\bf u}e^{\lambda_1t}-A{\bf x}te^{\lambda_1t}=(\lambda_1{\bf u}+{\bf x}-A{\bf u})e^{\lambda_1t}+(\lambda_1{\bf x}-A{\bf x})te^{\lambda_1t}.\nonumber

Since Ax=λ1x, this can be written as

{\bf y}_2'-A{\bf y}_2=-\left((A-\lambda_1I){\bf u}-{\bf x}\right)e^{\lambda_1t},\nonumber

and now (A−λ1I)u = x implies that y2' = Ay2. To see that y1 and y2 are linearly independent, suppose c1 and c2 are constants such that

c_1{\bf y}_1+c_2{\bf y}_2=c_1{\bf x}e^{\lambda_1t}+c_2({\bf u}e^{\lambda_1t}+{\bf x}te^{\lambda_1t})={\bf 0}.\nonumber

We must show that c1=c2=0. Multiplying this by e^{−λ1t} shows that

c_1{\bf x}+c_2({\bf u}+{\bf x}t)={\bf 0}.\nonumber

By differentiating this with respect to t, we see that c2x=0, which implies c2=0, because x≠0. Substituting c2=0 then yields c1x=0, which implies that c1=0, again because x≠0.
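With the concrete matrix of Example 10.5.1, this verification can also be done numerically. In the sketch below (my own, with u = [1/2, 0] taken from Example 10.5.2, which follows) a central difference approximates y2' and is compared with A y2:

```python
import numpy as np

A = np.array([[11.0, -25.0],
              [4.0,  -9.0]])
lam = 1.0                      # the repeated eigenvalue
x = np.array([5.0, 2.0])       # eigenvector
u = np.array([0.5, 0.0])       # satisfies (A - lam*I) u = x

def y2(t):
    """Second solution y2 = u e^{lam t} + x t e^{lam t}."""
    return u * np.exp(lam * t) + x * t * np.exp(lam * t)

# Check y2' = A y2 at a few times via central differences
h = 1e-6
for t in (-1.0, 0.0, 2.0):
    deriv = (y2(t + h) - y2(t - h)) / (2 * h)
    assert np.allclose(deriv, A @ y2(t), atol=1e-4)
print("y2 satisfies y' = A y (numerically)")
```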

Example 10.5.2

Use Theorem 10.5.1 to find the general solution of the system

{\bf y}'=\left[\begin{array}{cc}11&-25\\4&-9\end{array}\right]{\bf y}\nonumber

considered in Example 10.5.1.

Solution

In Example 10.5.1 we saw that λ1=1 is an eigenvalue of multiplicity 2 of the coefficient matrix A, and that all of the eigenvectors of A are multiples of

{\bf x}=\left[\begin{array}{c}5\\2\end{array}\right].\nonumber

Therefore

{\bf y}_1=\left[\begin{array}{c}5\\2\end{array}\right]e^t\nonumber

is a solution of the system. From Theorem 10.5.1, a second solution is given by y2 = ue^t + xte^t, where (A−I)u = x. The augmented matrix of this system is

\left[\begin{array}{cc|c}10&-25&5\\4&-10&2\end{array}\right],\nonumber

which is row equivalent to

\left[\begin{array}{cc|c}1&-\frac{5}{2}&\frac{1}{2}\\0&0&0\end{array}\right].\nonumber

Therefore the components of u must satisfy

u_1-\frac{5}{2}u_2=\frac{1}{2},\nonumber

where u2 is arbitrary. We choose u2=0, so that u1=1/2 and

{\bf u}=\left[\begin{array}{c}\frac{1}{2}\\0\end{array}\right].\nonumber

Thus,

{\bf y}_2=\left[\begin{array}{c}1\\0\end{array}\right]\frac{e^t}{2}+\left[\begin{array}{c}5\\2\end{array}\right]te^t.\nonumber

Since y1 and y2 are linearly independent by Theorem 10.5.1, they form a fundamental set of solutions of the system. Therefore the general solution is

{\bf y}=c_1\left[\begin{array}{c}5\\2\end{array}\right]e^t+c_2\left(\left[\begin{array}{c}1\\0\end{array}\right]\frac{e^t}{2}+\left[\begin{array}{c}5\\2\end{array}\right]te^t\right).\nonumber

Note that choosing the arbitrary constant u2 to be nonzero is equivalent to adding a scalar multiple of y1 to the second solution y2 (Exercise 10.5.33).
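Since A − I is singular, `np.linalg.solve` cannot be used for (A−I)u = x; one workaround (my own sketch, not part of the text) is a least-squares solve, which returns a particular solution of the consistent system. Any two choices of u differ by a multiple of the eigenvector x, consistent with the remark above about u2:

```python
import numpy as np

A = np.array([[11.0, -25.0],
              [4.0,  -9.0]])
x = np.array([5.0, 2.0])              # eigenvector for lambda = 1

# Least squares gives one particular solution u of (A - I) u = x
u, *_ = np.linalg.lstsq(A - np.eye(2), x, rcond=None)
print(u)

# The system is consistent, so the residual vanishes
print((A - np.eye(2)) @ u - x)

# u differs from the text's choice [1/2, 0] by a multiple of x:
# the difference is parallel to x (the 2x2 "cross product" is zero)
d = u - np.array([0.5, 0.0])
print(d[0] * x[1] - d[1] * x[0])
```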

Example 10.5.3

Find the general solution of

{\bf y}'=\left[\begin{array}{ccc}3&4&-10\\2&1&-2\\2&2&-5\end{array}\right]{\bf y}.\nonumber

Solution

The characteristic polynomial of the coefficient matrix A is

\left|\begin{array}{ccc}3-\lambda&4&-10\\2&1-\lambda&-2\\2&2&-5-\lambda\end{array}\right|=-(\lambda-1)(\lambda+1)^2.\nonumber

Hence, the eigenvalues are λ1=1 with multiplicity 1 and λ2=−1 with multiplicity 2. Eigenvectors associated with λ1=1 must satisfy (A−I)x=0. The augmented matrix of this system is

\left[\begin{array}{ccc|c}2&4&-10&0\\2&0&-2&0\\2&2&-6&0\end{array}\right],\nonumber

which is row equivalent to

\left[\begin{array}{ccc|c}1&0&-1&0\\0&1&-2&0\\0&0&0&0\end{array}\right].\nonumber

Hence, x1=x3 and x2=2x3, where x3 is arbitrary. Choosing x3=1 yields the eigenvector

{\bf x}_1=\left[\begin{array}{c}1\\2\\1\end{array}\right].\nonumber

Therefore

{\bf y}_1=\left[\begin{array}{c}1\\2\\1\end{array}\right]e^t\nonumber

is a solution of the system. Eigenvectors associated with λ2=−1 satisfy (A+I)x=0. The augmented matrix of this system is

\left[\begin{array}{ccc|c}4&4&-10&0\\2&2&-2&0\\2&2&-4&0\end{array}\right],\nonumber

which is row equivalent to

\left[\begin{array}{ccc|c}1&1&0&0\\0&0&1&0\\0&0&0&0\end{array}\right].\nonumber

Hence, x3=0 and x1=−x2, where x2 is arbitrary. Choosing x2=−1 yields the eigenvector

{\bf x}_2=\left[\begin{array}{c}1\\-1\\0\end{array}\right],\nonumber

so

{\bf y}_2=\left[\begin{array}{c}1\\-1\\0\end{array}\right]e^{-t}\nonumber

is a solution of the system. Since all the eigenvectors of A associated with λ2=−1 are multiples of x2, we must now use Theorem 10.5.1 to find a third solution in the form

{\bf y}_3={\bf u}e^{-t}+\left[\begin{array}{c}1\\-1\\0\end{array}\right]te^{-t},\nonumber

where u is a solution of (A+I)u=x2. The augmented matrix of this system is

\left[\begin{array}{ccc|c}4&4&-10&1\\2&2&-2&-1\\2&2&-4&0\end{array}\right],\nonumber

which is row equivalent to

\left[\begin{array}{ccc|c}1&1&0&-1\\0&0&1&-\frac{1}{2}\\0&0&0&0\end{array}\right].\nonumber

Hence, u3=−1/2 and u1=−1−u2, where u2 is arbitrary. Choosing u2=0 yields

{\bf u}=\left[\begin{array}{c}-1\\0\\-\frac{1}{2}\end{array}\right],\nonumber

and substituting this into the expression for y3 yields the solution

{\bf y}_3=-\left[\begin{array}{c}2\\0\\1\end{array}\right]\frac{e^{-t}}{2}+\left[\begin{array}{c}1\\-1\\0\end{array}\right]te^{-t}\nonumber

of the system. Since the Wronskian of {y1, y2, y3} at t=0 is

\left|\begin{array}{ccc}1&1&-1\\2&-1&0\\1&0&-\frac{1}{2}\end{array}\right|=\frac{1}{2},\nonumber

{y1, y2, y3} is a fundamental set of solutions of the system. Therefore the general solution is

{\bf y}=c_1\left[\begin{array}{c}1\\2\\1\end{array}\right]e^t+c_2\left[\begin{array}{c}1\\-1\\0\end{array}\right]e^{-t}+c_3\left(-\left[\begin{array}{c}2\\0\\1\end{array}\right]\frac{e^{-t}}{2}+\left[\begin{array}{c}1\\-1\\0\end{array}\right]te^{-t}\right).\nonumber
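As a numerical sanity check on Example 10.5.3 (my own sketch, not part of the text), one can confirm the eigenvalues, the generalized eigenvector, and the Wronskian value 1/2:

```python
import numpy as np

A = np.array([[3.0, 4.0, -10.0],
              [2.0, 1.0,  -2.0],
              [2.0, 2.0,  -5.0]])

# Eigenvalues 1 (simple) and -1 (double), as found above
print(np.sort(np.linalg.eigvals(A).real))

x2 = np.array([1.0, -1.0, 0.0])      # eigenvector for lambda = -1
u = np.array([-1.0, 0.0, -0.5])      # generalized eigenvector

print((A + np.eye(3)) @ x2)          # zero vector: x2 is an eigenvector
print((A + np.eye(3)) @ u)           # equals x2: (A + I) u = x2

# Wronskian of {y1, y2, y3} at t = 0 has columns y1(0), y2(0), y3(0) = u
W0 = np.column_stack([np.array([1.0, 2.0, 1.0]), x2, u])
print(np.linalg.det(W0))             # 1/2, up to roundoff
```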

Theorem 10.5.2

Suppose the n×n matrix A has an eigenvalue λ1 of multiplicity 3 and the associated eigenspace is one–dimensional; that is, all eigenvectors associated with λ1 are scalar multiples of the eigenvector x. Then there are infinitely many vectors u such that

(A-\lambda_1I){\bf u}={\bf x},\nonumber

and, if u is any such vector, there are infinitely many vectors v such that

(A-\lambda_1I){\bf v}={\bf u}.\nonumber

If u satisfies the first of these equations and v satisfies the second, then

{\bf y}_1={\bf x}e^{\lambda_1t},\quad{\bf y}_2={\bf u}e^{\lambda_1t}+{\bf x}te^{\lambda_1t},\quad\text{and}\quad{\bf y}_3={\bf v}e^{\lambda_1t}+{\bf u}te^{\lambda_1t}+{\bf x}\frac{t^2e^{\lambda_1t}}{2}\nonumber

are linearly independent solutions of y'=Ay.

Again, it is beyond the scope of this book to prove that there are vectors u and v satisfying these equations. Theorem 10.5.1 implies that y1 and y2 are solutions of y'=Ay. We leave the rest of the proof to you (Exercise 10.5.34).

Example 10.5.4

Use Theorem 10.5.2 to find the general solution of

{\bf y}'=\left[\begin{array}{ccc}1&1&1\\1&3&-1\\0&2&2\end{array}\right]{\bf y}.\nonumber

Solution

The characteristic polynomial of the coefficient matrix A is

\left|\begin{array}{ccc}1-\lambda&1&1\\1&3-\lambda&-1\\0&2&2-\lambda\end{array}\right|=-(\lambda-2)^3.\nonumber

Hence, λ1=2 is an eigenvalue of multiplicity 3. The associated eigenvectors satisfy (A−2I)x=0. The augmented matrix of this system is

\left[\begin{array}{ccc|c}-1&1&1&0\\1&1&-1&0\\0&2&0&0\end{array}\right],\nonumber

which is row equivalent to

\left[\begin{array}{ccc|c}1&0&-1&0\\0&1&0&0\\0&0&0&0\end{array}\right].\nonumber

Hence, x1=x3 and x2=0, so the eigenvectors are all scalar multiples of

{\bf x}_1=\left[\begin{array}{c}1\\0\\1\end{array}\right].\nonumber

Therefore

{\bf y}_1=\left[\begin{array}{c}1\\0\\1\end{array}\right]e^{2t}\nonumber

is a solution of the system. We now find a second solution in the form

{\bf y}_2={\bf u}e^{2t}+\left[\begin{array}{c}1\\0\\1\end{array}\right]te^{2t},\nonumber

where u satisfies (A−2I)u=x1. The augmented matrix of this system is

\left[\begin{array}{ccc|c}-1&1&1&1\\1&1&-1&0\\0&2&0&1\end{array}\right],\nonumber

which is row equivalent to

\left[\begin{array}{ccc|c}1&0&-1&-\frac{1}{2}\\0&1&0&\frac{1}{2}\\0&0&0&0\end{array}\right].\nonumber

Letting u3=0 yields u1=−1/2 and u2=1/2; hence,

{\bf u}=\frac{1}{2}\left[\begin{array}{c}-1\\1\\0\end{array}\right]\nonumber

and

{\bf y}_2=\left[\begin{array}{c}-1\\1\\0\end{array}\right]\frac{e^{2t}}{2}+\left[\begin{array}{c}1\\0\\1\end{array}\right]te^{2t}\nonumber

is a solution of the system. We now find a third solution in the form

{\bf y}_3={\bf v}e^{2t}+\left[\begin{array}{c}-1\\1\\0\end{array}\right]\frac{te^{2t}}{2}+\left[\begin{array}{c}1\\0\\1\end{array}\right]\frac{t^2e^{2t}}{2},\nonumber

where v satisfies (A−2I)v=u. The augmented matrix of this system is

\left[\begin{array}{ccc|c}-1&1&1&-\frac{1}{2}\\1&1&-1&\frac{1}{2}\\0&2&0&0\end{array}\right],\nonumber

which is row equivalent to

\left[\begin{array}{ccc|c}1&0&-1&\frac{1}{2}\\0&1&0&0\\0&0&0&0\end{array}\right].\nonumber

Letting v3=0 yields v1=1/2 and v2=0; hence,

{\bf v}=\frac{1}{2}\left[\begin{array}{c}1\\0\\0\end{array}\right].\nonumber

Therefore

{\bf y}_3=\left[\begin{array}{c}1\\0\\0\end{array}\right]\frac{e^{2t}}{2}+\left[\begin{array}{c}-1\\1\\0\end{array}\right]\frac{te^{2t}}{2}+\left[\begin{array}{c}1\\0\\1\end{array}\right]\frac{t^2e^{2t}}{2}\nonumber

is a solution of the system. Since y1, y2, and y3 are linearly independent by Theorem 10.5.2, they form a fundamental set of solutions. Therefore the general solution is

{\bf y}=c_1\left[\begin{array}{c}1\\0\\1\end{array}\right]e^{2t}+c_2\left(\left[\begin{array}{c}-1\\1\\0\end{array}\right]\frac{e^{2t}}{2}+\left[\begin{array}{c}1\\0\\1\end{array}\right]te^{2t}\right)+c_3\left(\left[\begin{array}{c}1\\0\\0\end{array}\right]\frac{e^{2t}}{2}+\left[\begin{array}{c}-1\\1\\0\end{array}\right]\frac{te^{2t}}{2}+\left[\begin{array}{c}1\\0\\1\end{array}\right]\frac{t^2e^{2t}}{2}\right).\nonumber
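The vectors x1, u, v of Example 10.5.4 form a chain that N = A − 2I maps v → u → x1 → 0; in particular N is nilpotent, which is another way of seeing that 2 is the only eigenvalue. A numerical sketch of these checks (my own, not part of the text):

```python
import numpy as np

A = np.array([[1.0, 1.0,  1.0],
              [1.0, 3.0, -1.0],
              [0.0, 2.0,  2.0]])
N = A - 2.0 * np.eye(3)              # lambda_1 = 2

x = np.array([1.0, 0.0, 1.0])        # eigenvector
u = np.array([-0.5, 0.5, 0.0])       # (A - 2I) u = x
v = np.array([0.5, 0.0, 0.0])        # (A - 2I) v = u

print(N @ x)                          # zero vector
print(N @ u)                          # equals x
print(N @ v)                          # equals u
print(np.linalg.matrix_power(N, 3))   # zero matrix: N is nilpotent
```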

Theorem 10.5.3

Suppose the n×n matrix A has an eigenvalue λ1 of multiplicity 3 and the associated eigenspace is two–dimensional; that is, all eigenvectors of A associated with λ1 are linear combinations of two linearly independent eigenvectors x1 and x2. Then there are constants α and β (not both zero) such that if

{\bf x}_3=\alpha{\bf x}_1+\beta{\bf x}_2,\nonumber

then there are infinitely many vectors u such that

(A-\lambda_1I){\bf u}={\bf x}_3.\nonumber

If u is any such vector, then

{\bf y}_1={\bf x}_1e^{\lambda_1t},\quad{\bf y}_2={\bf x}_2e^{\lambda_1t},\quad\text{and}\quad{\bf y}_3={\bf u}e^{\lambda_1t}+{\bf x}_3te^{\lambda_1t}\nonumber

are linearly independent solutions of y'=Ay.

We omit the proof of this theorem.

Example 10.5.5

Use Theorem 10.5.3 to find the general solution of

{\bf y}'=\left[\begin{array}{ccc}0&0&1\\-1&1&1\\-1&0&2\end{array}\right]{\bf y}.\nonumber

Solution

The characteristic polynomial of the coefficient matrix A is

\left|\begin{array}{ccc}-\lambda&0&1\\-1&1-\lambda&1\\-1&0&2-\lambda\end{array}\right|=-(\lambda-1)^3.\nonumber

Hence, λ1=1 is an eigenvalue of multiplicity 3. The associated eigenvectors satisfy (A−I)x=0. The augmented matrix of this system is

\left[\begin{array}{ccc|c}-1&0&1&0\\-1&0&1&0\\-1&0&1&0\end{array}\right],\nonumber

which is row equivalent to

\left[\begin{array}{ccc|c}1&0&-1&0\\0&0&0&0\\0&0&0&0\end{array}\right].\nonumber

Hence, x1=x3 and x2 is arbitrary, so the eigenvectors are of the form

{\bf x}=\left[\begin{array}{c}x_3\\x_2\\x_3\end{array}\right]=x_3\left[\begin{array}{c}1\\0\\1\end{array}\right]+x_2\left[\begin{array}{c}0\\1\\0\end{array}\right].\nonumber

Therefore the vectors

{\bf x}_1=\left[\begin{array}{c}1\\0\\1\end{array}\right]\quad\text{and}\quad{\bf x}_2=\left[\begin{array}{c}0\\1\\0\end{array}\right]\nonumber

form a basis for the eigenspace, and

{\bf y}_1=\left[\begin{array}{c}1\\0\\1\end{array}\right]e^t\quad\text{and}\quad{\bf y}_2=\left[\begin{array}{c}0\\1\\0\end{array}\right]e^t\nonumber

are linearly independent solutions of the system. To find a third linearly independent solution, we must find constants α and β (not both zero) such that the system

(A-I){\bf u}=\alpha{\bf x}_1+\beta{\bf x}_2\nonumber

has a solution u. The augmented matrix of this system is

\left[\begin{array}{ccc|c}-1&0&1&\alpha\\-1&0&1&\beta\\-1&0&1&\alpha\end{array}\right],\nonumber

which is row equivalent to

\left[\begin{array}{ccc|c}1&0&-1&-\alpha\\0&0&0&\beta-\alpha\\0&0&0&0\end{array}\right].\nonumber

Therefore the system has a solution if and only if β=α, where α is arbitrary. Taking α=β=1 yields

{\bf x}_3={\bf x}_1+{\bf x}_2=\left[\begin{array}{c}1\\0\\1\end{array}\right]+\left[\begin{array}{c}0\\1\\0\end{array}\right]=\left[\begin{array}{c}1\\1\\1\end{array}\right],\nonumber

and the augmented matrix becomes

\left[\begin{array}{ccc|c}1&0&-1&-1\\0&0&0&0\\0&0&0&0\end{array}\right].\nonumber

This implies that u1=−1+u3, while u2 and u3 are arbitrary. Choosing u2=u3=0 yields

{\bf u}=\left[\begin{array}{c}-1\\0\\0\end{array}\right].\nonumber

Therefore Theorem 10.5.3 implies that

{\bf y}_3={\bf u}e^t+{\bf x}_3te^t=\left[\begin{array}{c}-1\\0\\0\end{array}\right]e^t+\left[\begin{array}{c}1\\1\\1\end{array}\right]te^t\nonumber

is a solution of the system. Since y1, y2, and y3 are linearly independent by Theorem 10.5.3, they form a fundamental set of solutions. Therefore the general solution is

{\bf y}=c_1\left[\begin{array}{c}1\\0\\1\end{array}\right]e^t+c_2\left[\begin{array}{c}0\\1\\0\end{array}\right]e^t+c_3\left(\left[\begin{array}{c}-1\\0\\0\end{array}\right]e^t+\left[\begin{array}{c}1\\1\\1\end{array}\right]te^t\right).\nonumber
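The condition β = α in Example 10.5.5 is a consistency (rank) condition: (A − I)u = αx1 + βx2 is solvable exactly when appending the right-hand side to A − I does not increase the rank. A numerical sketch (the helper name `solvable` is my own):

```python
import numpy as np

A = np.array([[ 0.0, 0.0, 1.0],
              [-1.0, 1.0, 1.0],
              [-1.0, 0.0, 2.0]])
B = A - np.eye(3)                     # A - I, which has rank 1
x1 = np.array([1.0, 0.0, 1.0])
x2 = np.array([0.0, 1.0, 0.0])

def solvable(alpha, beta):
    """True iff (A - I) u = alpha*x1 + beta*x2 is consistent,
    i.e. augmenting B with the right-hand side keeps the rank."""
    b = alpha * x1 + beta * x2
    return (np.linalg.matrix_rank(np.column_stack([B, b]))
            == np.linalg.matrix_rank(B))

print(solvable(1, 1))                 # True:  beta = alpha
print(solvable(1, 0))                 # False: beta != alpha

# With alpha = beta = 1, u = [-1, 0, 0] solves (A - I) u = x1 + x2
u = np.array([-1.0, 0.0, 0.0])
print(B @ u)                          # equals x1 + x2 = [1, 1, 1]
```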

Geometric Properties of Solutions when n=2

We’ll now consider the geometric properties of solutions of a 2×2 constant coefficient system

\left[\begin{array}{c}y_1'\\y_2'\end{array}\right]=\left[\begin{array}{cc}a_{11}&a_{12}\\a_{21}&a_{22}\end{array}\right]\left[\begin{array}{c}y_1\\y_2\end{array}\right]\tag{10.5.19}

under the assumptions of this section; that is, when the matrix

A=\left[\begin{array}{cc}a_{11}&a_{12}\\a_{21}&a_{22}\end{array}\right]\nonumber

has a repeated eigenvalue λ1 and the associated eigenspace is one-dimensional. In this case we know from Theorem 10.5.1 that the general solution of Equation (10.5.19) is

{\bf y}=c_1{\bf x}e^{\lambda_1t}+c_2({\bf u}e^{\lambda_1t}+{\bf x}te^{\lambda_1t}),\tag{10.5.20}

where x is an eigenvector of A and u is any one of the infinitely many solutions of

(A-\lambda_1I){\bf u}={\bf x}.\tag{10.5.21}

We assume that λ1≠0.

Figure 10.5.1: Positive and negative half-planes.

Let L denote the line through the origin parallel to x. By a half-line of L we mean either of the rays obtained by removing the origin from L. If c2=0, Equation (10.5.20) is a parametric equation of the half-line of L in the direction of x if c1>0, or of the half-line of L in the direction of −x if c1<0. The origin is the trajectory of the trivial solution y≡0.

Henceforth, we assume that c2≠0. In this case, the trajectory of Equation (10.5.20) can’t intersect L, since every point of L is on a trajectory obtained by setting c2=0. Therefore the trajectory of Equation (10.5.20) must lie entirely in one of the open half-planes bounded by L. Since the initial point (y1(0), y2(0)) defined by y(0)=c1x+c2u is on the trajectory, we can determine which half-plane contains the trajectory from the sign of c2, as shown in Figure 10.5.1. For convenience we’ll call the half-plane where c2>0 the positive half-plane; similarly, the half-plane where c2<0 is the negative half-plane. You should convince yourself that even though there are infinitely many vectors u that satisfy Equation (10.5.21), they all define the same positive and negative half-planes. In the figures, simply regard u as an arrow pointing into the positive half-plane, since we haven’t attempted to give u its proper length or direction in comparison with x. For our purposes here, only the relative orientation of x and u is important; that is, whether the positive half-plane is to the right of an observer facing the direction of x (as in Figures 10.5.2 and 10.5.5), or to the left of the observer (as in Figures 10.5.3 and 10.5.4).

Multiplying Equation (10.5.20) by e^{−λ1t} yields

e^{-\lambda_1t}{\bf y}(t)=c_1{\bf x}+c_2{\bf u}+c_2t{\bf x}.\nonumber

Since the last term on the right is dominant when |t| is large, this provides the following information on the direction of y(t):

  1. Along trajectories in the positive half-plane (c2>0), the direction of y(t) approaches the direction of x as t→∞ and the direction of −x as t→−∞.
  2. Along trajectories in the negative half-plane (c2<0), the direction of y(t) approaches the direction of −x as t→∞ and the direction of x as t→−∞.

Since

\lim_{t\to\infty}\|{\bf y}(t)\|=\infty\quad\text{and}\quad\lim_{t\to-\infty}{\bf y}(t)={\bf 0}\quad\text{if}\quad\lambda_1>0,\nonumber

or

\lim_{t\to-\infty}\|{\bf y}(t)\|=\infty\quad\text{and}\quad\lim_{t\to\infty}{\bf y}(t)={\bf 0}\quad\text{if}\quad\lambda_1<0,\nonumber

there are four possible patterns for the trajectories of Equation (10.5.19), depending upon the signs of c2 and λ1. Figures 10.5.2–10.5.5 illustrate these patterns and reveal the following principle:

Figure 10.5.2: Positive eigenvalue; motion away from the origin.
Figure 10.5.3: Positive eigenvalue; motion away from the origin.
Figure 10.5.4: Negative eigenvalue; motion toward the origin.
Figure 10.5.5: Negative eigenvalue; motion toward the origin.

If λ1 and c2 have the same sign, then the direction of the trajectory approaches the direction of −x as ‖y‖→0 and the direction of x as ‖y‖→∞. If λ1 and c2 have opposite signs, then the direction of the trajectory approaches the direction of x as ‖y‖→0 and the direction of −x as ‖y‖→∞.
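These directional statements can be tested numerically. The sketch below (my own, not part of the text) uses the system of Examples 10.5.1 and 10.5.2, for which λ1 = 1 > 0, and checks that along the trajectory with c1 = c2 = 1 (positive half-plane) the unit vector y(t)/‖y(t)‖ tends to x/‖x‖ as t → ∞ and to −x/‖x‖ as t → −∞:

```python
import numpy as np

lam = 1.0                         # repeated eigenvalue (positive)
x = np.array([5.0, 2.0])          # eigenvector
u = np.array([0.5, 0.0])          # generalized eigenvector

def y(t, c1=1.0, c2=1.0):
    """General solution y = c1 x e^{lam t} + c2 (u e^{lam t} + x t e^{lam t})."""
    return (c1 * x * np.exp(lam * t)
            + c2 * (u * np.exp(lam * t) + x * t * np.exp(lam * t)))

def direction(t):
    v = y(t)
    return v / np.linalg.norm(v)

xhat = x / np.linalg.norm(x)
print(direction(200.0) - xhat)     # ~0: direction -> x as t -> infinity
print(direction(-200.0) + xhat)    # ~0: direction -> -x as t -> -infinity
```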


This page titled 10.5: Constant Coefficient Homogeneous Systems II is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by William F. Trench via source content that was edited to the style and standards of the LibreTexts platform.
