
7.2: Coupled First-Order Equations


We now consider the general system of differential equations given by

\[\dot{x}_1 = ax_1 + bx_2, \qquad \dot{x}_2 = cx_1 + dx_2,\]

which can be written using vector notation as

\[\dot{\mathbf{x}} = A\mathbf{x}.\]

Before solving this system of odes using matrix techniques, I first want to show that we could actually solve these equations by converting the system into a single second-order equation. We take the derivative of the first equation and use both equations to write

\[\begin{aligned} \ddot{x}_1 &= a\dot{x}_1 + b\dot{x}_2 \\ &= a\dot{x}_1 + b(cx_1 + dx_2) \\ &= a\dot{x}_1 + bcx_1 + d(\dot{x}_1 - ax_1) \\ &= (a + d)\dot{x}_1 - (ad - bc)x_1. \end{aligned}\]

The system of two first-order equations therefore becomes the following second-order equation:

\[\ddot{x}_1 - (a + d)\dot{x}_1 + (ad - bc)x_1 = 0.\]

If we had taken the derivative of the second equation instead, we would have obtained the identical equation for x2:

\[\ddot{x}_2 - (a + d)\dot{x}_2 + (ad - bc)x_2 = 0.\]

In general, a system of n first-order linear homogeneous equations can be converted into an equivalent n-th order linear homogeneous equation. Numerical methods usually require the conversion in reverse; that is, a conversion of an n-th order equation into a system of n first-order equations.
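
To make the reverse conversion concrete, the following short sketch (added here for illustration, assuming NumPy and SciPy are available; the coefficient values are arbitrary) rewrites the second-order equation above as a first-order system and integrates it numerically:

# Illustrative sketch (not part of the original text): the second-order equation
# x'' - (a+d) x' + (ad - bc) x = 0 is converted into the first-order system
# y1' = y2, y2' = (a+d) y2 - (ad - bc) y1 and solved with SciPy.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 1.0, 4.0, 1.0          # same coefficients as Example 7.2.1 below

def rhs(t, y):
    y1, y2 = y                            # y1 = x1, y2 = dx1/dt
    return [y2, (a + d) * y2 - (a * d - b * c) * y1]

sol = solve_ivp(rhs, t_span=(0.0, 1.0), y0=[1.0, 0.0])
print(sol.y[0, -1])                       # numerical value of x1 at t = 1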

With the ansatz \(x_1 = e^{\lambda t}\) or \(x_2 = e^{\lambda t}\), the second-order odes have the characteristic equation

\[\lambda^2 - (a + d)\lambda + (ad - bc) = 0.\]

This is identical to the characteristic equation obtained for the matrix A from an eigenvalue analysis.
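
As a small symbolic check (an added illustration assuming SymPy, not part of the original text), the determinant \(\det(A - \lambda I)\) indeed reproduces this characteristic polynomial:

# Symbolic check that det(A - lambda I) = lambda^2 - (a+d) lambda + (ad - bc).
import sympy as sp

a, b, c, d, lam = sp.symbols('a b c d lambda')
A = sp.Matrix([[a, b], [c, d]])
char_poly = sp.expand((A - lam * sp.eye(2)).det())
print(sp.simplify(char_poly - (lam**2 - (a + d) * lam + (a * d - b * c))))   # prints 0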

We will see that the system \(\dot{\mathbf{x}} = A\mathbf{x}\) can in fact be solved by considering the eigenvalues and eigenvectors of the matrix A. We will demonstrate the solution for three separate cases: (i) the eigenvalues of A are real and there are two linearly independent eigenvectors; (ii) the eigenvalues of A are complex conjugates; and (iii) A has only one linearly independent eigenvector. These three cases are analogous to the cases considered previously when solving the homogeneous linear constant-coefficient second-order equation.

Distinct Real Eigenvalues

We illustrate the solution method by example.

Example 7.2.1

Find the general solution of

\[\dot{x}_1 = x_1 + x_2, \qquad \dot{x}_2 = 4x_1 + x_2.\]

Solution

View tutorial on YouTube

The equation to be solved may be rewritten in matrix form as

\[\frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix},\]

or, using vector notation, written as \(\dot{\mathbf{x}} = A\mathbf{x}\). We take as our ansatz \(\mathbf{x}(t) = \mathbf{v}e^{\lambda t}\), where \(\mathbf{v}\) and \(\lambda\) are independent of \(t\). Upon substitution into \(\dot{\mathbf{x}} = A\mathbf{x}\), we obtain

\[\lambda\mathbf{v}e^{\lambda t} = A\mathbf{v}e^{\lambda t};\]

and upon cancellation of the exponential, we obtain the eigenvalue problem

\[A\mathbf{v} = \lambda\mathbf{v}.\]

Finding the characteristic equation using (7.1.8), we have

\[0 = \det(A - \lambda I) = \lambda^2 - 2\lambda - 3 = (\lambda - 3)(\lambda + 1).\]

Therefore, the two eigenvalues are \(\lambda_1 = 3\) and \(\lambda_2 = -1\).

To determine the corresponding eigenvectors, we substitute the eigenvalues successively into

\[(A - \lambda I)\mathbf{v} = 0.\]

We will write the corresponding eigenvectors \(\mathbf{v}_1\) and \(\mathbf{v}_2\) using the matrix notation

\[\begin{pmatrix} \mathbf{v}_1 & \mathbf{v}_2 \end{pmatrix} = \begin{pmatrix} v_{11} & v_{12} \\ v_{21} & v_{22} \end{pmatrix},\]

where the components of \(\mathbf{v}_1\) and \(\mathbf{v}_2\) are written with subscripts corresponding to the first and second columns of a \(2 \times 2\) matrix.

For \(\lambda_1 = 3\), and unknown eigenvector \(\mathbf{v}_1\), we have from \((A - \lambda I)\mathbf{v} = 0\)

\[-2v_{11} + v_{21} = 0, \qquad 4v_{11} - 2v_{21} = 0.\]

Clearly, the second equation is just the first equation multiplied by \(-2\), so only one equation is linearly independent. This will always be true, so for the \(2 \times 2\) case we need only consider the first row of the matrix. The first eigenvector therefore satisfies \(v_{21} = 2v_{11}\). Recall that an eigenvector is only unique up to multiplication by a constant: we may therefore take \(v_{11} = 1\) for convenience.

For \(\lambda_2 = -1\), and eigenvector \(\mathbf{v}_2 = (v_{12}, v_{22})^T\), we have from the first row of \((A - \lambda I)\mathbf{v} = 0\)

\[2v_{12} + v_{22} = 0,\]

so that \(v_{22} = -2v_{12}\). Here, we take \(v_{12} = 1\).

Therefore, our eigenvalues and eigenvectors are given by

\[\lambda_1 = 3, \quad \mathbf{v}_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}; \qquad \lambda_2 = -1, \quad \mathbf{v}_2 = \begin{pmatrix} 1 \\ -2 \end{pmatrix}.\]

A linear superposition of the two solutions \(\mathbf{v}_1 e^{\lambda_1 t}\) and \(\mathbf{v}_2 e^{\lambda_2 t}\) then gives the general solution

\[\mathbf{x}(t) = c_1\begin{pmatrix} 1 \\ 2 \end{pmatrix}e^{3t} + c_2\begin{pmatrix} 1 \\ -2 \end{pmatrix}e^{-t}.\]

Since the eigenvalues here have opposite signs, the origin of the phase portrait is a saddle point.

When instead both eigenvalues are real and of the same sign, the origin is called a node, which may be attracting (both eigenvalues negative) or repelling (both eigenvalues positive). For an attracting node with \(\lambda_1 < \lambda_2 < 0\), the trajectories collapse onto the \(\mathbf{v}_2\) eigenvector, since decay is more rapid along the \(\mathbf{v}_1\) direction.
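
As a quick numerical cross-check (an added illustration assuming NumPy, not part of the original text), the eigenvalues and eigenvectors of this example can be confirmed directly:

# Numerical check of Example 7.2.1 (illustrative sketch only).
import numpy as np

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                    # 3 and -1 (in some order)
# numpy returns unit-norm eigenvectors; rescale each column so its first entry is 1
print(eigvecs / eigvecs[0, :])    # columns approximately (1, 2) and (1, -2)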

Distinct Complex-Conjugate Eigenvalues

Example 7.2.2

Find the general solution of

\[\dot{x}_1 = -\frac{1}{2}x_1 + x_2, \qquad \dot{x}_2 = -x_1 - \frac{1}{2}x_2.\]

Solution

View tutorial on YouTube

The equations in matrix form are

\[\frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} -\frac{1}{2} & 1 \\ -1 & -\frac{1}{2} \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.\]

The ansatz \(\mathbf{x} = \mathbf{v}e^{\lambda t}\) leads to the equation

\[0 = \det(A - \lambda I) = \lambda^2 + \lambda + \frac{5}{4}.\]

Therefore, \(\lambda = -1/2 \pm i\); and we observe that the eigenvalues occur as a complex conjugate pair. We will denote the two eigenvalues as

\[\lambda = -\frac{1}{2} + i \quad\text{and}\quad \bar{\lambda} = -\frac{1}{2} - i.\]

Now, for \(A\) a real matrix, if \(A\mathbf{v} = \lambda\mathbf{v}\), then \(A\bar{\mathbf{v}} = \bar{\lambda}\bar{\mathbf{v}}\). Therefore, the eigenvectors also occur as a complex conjugate pair. The eigenvector \(\mathbf{v}\) associated with eigenvalue \(\lambda\) satisfies \(-iv_1 + v_2 = 0\), and normalizing with \(v_1 = 1\), we have

\[\mathbf{v} = \begin{pmatrix} 1 \\ i \end{pmatrix}.\]

We have therefore determined two independent complex solutions to the ode, that is,

\[\mathbf{v}e^{\lambda t} \quad\text{and}\quad \bar{\mathbf{v}}e^{\bar{\lambda} t},\]

and we can form a linear combination of these two complex solutions to construct two independent real solutions. Namely, if the complex functions \(z(t)\) and \(\bar{z}(t)\) are written as

\[z(t) = \operatorname{Re}\{z(t)\} + i\operatorname{Im}\{z(t)\}, \qquad \bar{z}(t) = \operatorname{Re}\{z(t)\} - i\operatorname{Im}\{z(t)\},\]

then two real functions can be constructed from the following linear combinations of \(z\) and \(\bar{z}\):

\[\frac{z + \bar{z}}{2} = \operatorname{Re}\{z(t)\} \quad\text{and}\quad \frac{z - \bar{z}}{2i} = \operatorname{Im}\{z(t)\}.\]

Thus the two real vector functions that can be constructed from our two complex vector functions are

\[\begin{aligned} \operatorname{Re}\{\mathbf{v}e^{\lambda t}\} &= \operatorname{Re}\left\{\begin{pmatrix} 1 \\ i \end{pmatrix}e^{(-\frac{1}{2} + i)t}\right\} \\ &= e^{-\frac{1}{2}t}\operatorname{Re}\left\{\begin{pmatrix} 1 \\ i \end{pmatrix}(\cos t + i\sin t)\right\} \\ &= e^{-\frac{1}{2}t}\begin{pmatrix} \cos t \\ -\sin t \end{pmatrix};\end{aligned}\]

Figure 7.2.1: Phase portrait with complex conjugate eigenvalues.

and

\[\operatorname{Im}\{\mathbf{v}e^{\lambda t}\} = e^{-\frac{1}{2}t}\operatorname{Im}\left\{\begin{pmatrix} 1 \\ i \end{pmatrix}(\cos t + i\sin t)\right\} = e^{-\frac{1}{2}t}\begin{pmatrix} \sin t \\ \cos t \end{pmatrix}.\]

Taking a linear superposition of these two real solutions yields the general solution to the ode, given by

\[\mathbf{x} = e^{-\frac{1}{2}t}\left(A\begin{pmatrix} \cos t \\ -\sin t \end{pmatrix} + B\begin{pmatrix} \sin t \\ \cos t \end{pmatrix}\right).\]
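
To confirm the algebra (an added symbolic sketch assuming SymPy; c1 and c2 below stand in for the arbitrary constants A and B, since the name A is reused here for the coefficient matrix), one can check that this expression satisfies the original system:

# Symbolic check that the real general solution satisfies dx/dt = M x (illustrative).
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')
M = sp.Matrix([[-sp.Rational(1, 2), 1],
               [-1, -sp.Rational(1, 2)]])
x = sp.exp(-t / 2) * (c1 * sp.Matrix([sp.cos(t), -sp.sin(t)])
                      + c2 * sp.Matrix([sp.sin(t), sp.cos(t)]))
print(sp.simplify(sp.diff(x, t) - M * x))   # zero vector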

The corresponding phase portrait is shown in Fig. 7.2.1. We say the origin is a spiral point. If the real part of the complex eigenvalue is negative, as is the case here, then solutions spiral into the origin. If the real part of the eigenvalue is positive, then solutions spiral out of the origin.

The direction of the spiral (here, it is clockwise) can be determined easily. If we examine the ode with \(x_1 = 0\) and \(x_2 = 1\), we see that \(\dot{x}_1 = 1\) and \(\dot{x}_2 = -1/2\). The trajectory at the point \((0, 1)\) is moving to the right and downward, and this is possible only if the spiral is clockwise. A counterclockwise trajectory would be moving to the left and downward.
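
As an added numerical illustration (assuming NumPy, not part of the original text), both the complex-conjugate eigenvalues and the direction of motion at the point \((0, 1)\) can be checked directly:

# Illustrative check of Example 7.2.2.
import numpy as np

A = np.array([[-0.5, 1.0],
              [-1.0, -0.5]])
print(np.linalg.eigvals(A))        # approximately -0.5+1j and -0.5-1j

# velocity of the trajectory passing through (x1, x2) = (0, 1)
print(A @ np.array([0.0, 1.0]))    # [ 1.  -0.5]: to the right and downward, i.e. clockwise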

Repeated Eigenvalues with One Eigenvector

Example 7.2.3

Find the general solution of

\[\dot{x}_1 = x_1 - x_2, \qquad \dot{x}_2 = x_1 + 3x_2.\]

Solution

View tutorial on YouTube

The equations in matrix form are

\[\frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.\]

The ansatz \(\mathbf{x} = \mathbf{v}e^{\lambda t}\) leads to the characteristic equation

\[0 = \det(A - \lambda I) = \lambda^2 - 4\lambda + 4 = (\lambda - 2)^2.\]

Therefore, \(\lambda = 2\) is a repeated eigenvalue. The associated eigenvector is found from \(-v_1 - v_2 = 0\), or \(v_2 = -v_1\); and normalizing with \(v_1 = 1\), we have

\[\lambda = 2, \quad \mathbf{v} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.\]

We have thus found a single solution to the ode, given by

\[\mathbf{x}_1(t) = c_1\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{2t},\]

and we need to find the missing second solution to be able to satisfy the initial conditions. An ansatz of \(t\) times the first solution is tempting, but will fail. Here, we will cheat and find the missing second solution by solving the equivalent second-order, homogeneous, constant-coefficient differential equation.

We already know that this second-order differential equation for \(x_1(t)\) has a characteristic equation with a degenerate eigenvalue given by \(\lambda = 2\). Therefore, the general solution for \(x_1\) is given by

\[x_1(t) = (c_1 + tc_2)e^{2t}.\]

Since from the first differential equation \(x_2 = x_1 - \dot{x}_1\), we compute

\[\dot{x}_1 = \left(2c_1 + (1 + 2t)c_2\right)e^{2t},\]

so that

\[\begin{aligned} x_2 &= x_1 - \dot{x}_1 \\ &= (c_1 + tc_2)e^{2t} - \left(2c_1 + (1 + 2t)c_2\right)e^{2t} \\ &= -c_1 e^{2t} + c_2(-1 - t)e^{2t}. \end{aligned}\]

Combining our results for \(x_1\) and \(x_2\), we have therefore found

\[\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = c_1\begin{pmatrix} 1 \\ -1 \end{pmatrix}e^{2t} + c_2\left[\begin{pmatrix} 0 \\ -1 \end{pmatrix} + \begin{pmatrix} 1 \\ -1 \end{pmatrix}t\right]e^{2t}.\]

Our missing linearly independent solution is thus determined to be

\[\mathbf{x}(t) = c_2\left[\begin{pmatrix} 0 \\ -1 \end{pmatrix} + \begin{pmatrix} 1 \\ -1 \end{pmatrix}t\right]e^{2t}.\]

The second term of this solution is just \(t\) times the first solution; however, this is not sufficient. Indeed, the correct ansatz to find the second solution directly is given by

\[\mathbf{x} = (\mathbf{w} + t\mathbf{v})e^{\lambda t},\]

Figure 7.2.2: Phase portrait with only one eigenvector.

where \(\lambda\) and \(\mathbf{v}\) are the eigenvalue and eigenvector of the first solution, and \(\mathbf{w}\) is an unknown vector to be determined. To illustrate this direct method, we substitute this ansatz into \(\dot{\mathbf{x}} = A\mathbf{x}\), assuming \(A\mathbf{v} = \lambda\mathbf{v}\). Canceling the exponential, we obtain

\[\mathbf{v} + \lambda(\mathbf{w} + t\mathbf{v}) = A\mathbf{w} + \lambda t\mathbf{v}.\]

Further canceling the common term \(\lambda t\mathbf{v}\) and rewriting yields

\[(A - \lambda I)\mathbf{w} = \mathbf{v}.\]

If \(A\) has only a single linearly independent eigenvector \(\mathbf{v}\), then \((A - \lambda I)\mathbf{w} = \mathbf{v}\) can be solved for \(\mathbf{w}\) (otherwise, it cannot). Using the \(A\), \(\lambda\) and \(\mathbf{v}\) of our present example, this equation becomes the system

\[\begin{pmatrix} -1 & -1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} w_1 \\ w_2 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.\]

The first and second equations are equivalent (the second is \(-1\) times the first), so that \(w_2 = -(w_1 + 1)\). Therefore,

\[\mathbf{w} = \begin{pmatrix} w_1 \\ -(w_1 + 1) \end{pmatrix} = w_1\begin{pmatrix} 1 \\ -1 \end{pmatrix} + \begin{pmatrix} 0 \\ -1 \end{pmatrix}.\]

Notice that the first term repeats the first found solution, i.e., a constant times the eigenvector, and that the second term is new. We therefore take \(w_1 = 0\) and obtain

\[\mathbf{w} = \begin{pmatrix} 0 \\ -1 \end{pmatrix},\]

as before.
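
As a final added check (a symbolic sketch assuming SymPy, not part of the original text), one can verify that \((A - 2I)\mathbf{w} = \mathbf{v}\) holds and that \(\mathbf{x}(t) = (\mathbf{w} + t\mathbf{v})e^{2t}\) indeed satisfies \(\dot{\mathbf{x}} = A\mathbf{x}\):

# Symbolic verification of the repeated-eigenvalue solution (illustrative only).
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, -1], [1, 3]])
v = sp.Matrix([1, -1])
w = sp.Matrix([0, -1])

print((A - 2 * sp.eye(2)) * w - v)          # zero vector: (A - 2I) w = v holds

x = (w + t * v) * sp.exp(2 * t)             # second solution x(t) = (w + t v) e^{2t}
print(sp.simplify(sp.diff(x, t) - A * x))   # zero vector: dx/dt = A x holds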

The phase portrait for this ode is shown in Fig. 7.2.2. The dark line is the single eigenvector v of the matrix A. When there is only a single eigenvector, the origin is called an improper node.


This page titled 7.2: Coupled First-Order Equations is shared under a CC BY 3.0 license and was authored, remixed, and/or curated by Jeffrey R. Chasnov via source content that was edited to the style and standards of the LibreTexts platform.
