Mathematics LibreTexts

5.7 Linear Stability Analysis of Discrete-Time Nonlinear Dynamical Systems

 


All of the discussions above about eigenvalues and eigenvectors are for linear dynamical systems. Can we apply the same methodology to study the asymptotic behavior of nonlinear systems? Unfortunately, the answer is a depressing no. Asymptotic behaviors of nonlinear systems can be very complex, and there is no general methodology to systematically analyze and predict them. We will revisit this issue later.

Having said that, we can still use eigenvalues and eigenvectors to conduct a linear stability analysis of nonlinear systems, which is an analytical method to determine the stability of the system at or near its equilibrium point by approximating its dynamics around that point as a linear dynamical system (linearization). While linear stability analysis doesn’t tell us much about a system’s asymptotic behavior at large, it is still very useful for many practical applications, because people are often interested in how to sustain a system’s state at or near a desired equilibrium, or perhaps how to disrupt the system’s status quo to induce a fundamental change.

The basic idea of linear stability analysis is to rewrite the dynamics of the system in terms of a small perturbation added to the equilibrium point of your interest. Here I put an emphasis on the word “small” for a reason. When we say small perturbation in this context, we mean not just small but really, really small (infinitesimally small in mathematical terms), so small that we can safely ignore its square and any higher-order terms. This operation is what linearization is all about.

Here is how linear stability analysis works. Let’s consider the dynamics of a nonlinear difference equation

\[x_t =F(x_{t-1}) \label{(5.56)}\]

around its equilibrium point \(x_{eq}\). By definition, \(x_{eq}\) satisfies

\[x_{eq} =F(x_{eq}).\label{(5.57)}\]
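Before analyzing stability, you need an equilibrium point to analyze. As a concrete illustration (not part of the original text), the following sketch finds an equilibrium of an assumed example map \(F(x) = x + rx(1-x)\) numerically, by bisection on \(g(x) = F(x) - x\), whose roots are exactly the points satisfying Eq. (5.57):

```python
# Numerically find an equilibrium x_eq satisfying x_eq = F(x_eq) for an
# assumed example map F(x) = x + r*x*(1 - x). We bisect on g(x) = F(x) - x,
# whose roots are exactly the equilibrium points.
def F(x, r=1.5):
    return x + r * x * (1 - x)

def find_equilibrium(lo, hi, r=1.5, tol=1e-12):
    g = lambda x: F(x, r) - x
    assert g(lo) * g(hi) <= 0, "g must change sign on [lo, hi]"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # keep the half-interval where the sign change survives
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

x_eq = find_equilibrium(0.5, 1.5)
print(x_eq)  # converges to 1.0, since F(1) = 1
```

Any root-finding method would do here; bisection is used only because it needs no derivative information.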

To analyze the stability of the system around this equilibrium point, we switch our perspective from a global coordinate system to a local one, by zooming in and capturing a small perturbation added to the equilibrium point, \(∆x_{t} = x_{t} −x_{eq}\) . Specifically, we apply the following replacement

\[x_{t} \Rightarrow x_{eq} +\Delta x_{t}\label{(5.58)} \]

to Eq. (5.56), to obtain

\[x_{eq} +\Delta x_{t} =F(x_{eq} +\Delta x_{t-1}). \label{(5.59)}\]

The right hand side of the equation above is still a nonlinear function. If \(x_{t}\) is scalar and thus \(F(x)\) is a scalar function, the right hand side can be easily approximated using the Taylor expansion as follows: 

\[F(x_{eq} +\Delta x_{t-1}) =F(x_{eq})+F'(x_{eq})\Delta x_{t-1}+\frac{F''(x_{eq})}{2!}\Delta x_{t-1}^{2}+\frac{F'''(x_{eq})}{3!}\Delta x_{t-1}^{3}+\ldots \label{(5.60)}\] \[\approx F(x_{eq})+F'(x_{eq})\Delta x_{t-1} \label{(5.61)}\]

This means that, for a scalar function \(F\), \(F(x_{eq} +\Delta x)\) can be linearly approximated by the value of the function at \(x_{eq}\) plus the derivative of the function times the displacement from \(x_{eq}\). Together with this result and Eq. (5.57), Eq. (5.59) becomes the following very simple linear difference equation:

\[\Delta x_{t} \approx F'(x_{eq})\Delta x_{t-1}\label{(5.62)}\]

This means that, if \(|F'(x_{eq})| > 1\), \(\Delta x\) grows exponentially, and thus the equilibrium point \(x_{eq}\) is unstable. Or, if \(|F'(x_{eq})| < 1\), \(\Delta x\) shrinks exponentially, and thus \(x_{eq}\) is stable. Interestingly, this conclusion has some connection to the cobweb plot we discussed before. \(F'(x_{eq})\) is the slope of function \(F\) at an equilibrium point (where the function curve crosses the diagonal straight line in the cobweb plot). If the slope is too steep, either positively or negatively, trajectories will diverge away from the equilibrium point. If the absolute value of the slope is less than 1, trajectories will converge to the point. You may have noticed such characteristics when you drew the cobweb plots. Linear stability analysis offers a mathematical explanation of that.
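The scalar criterion \(|F'(x_{eq})| \lessgtr 1\) is easy to check numerically. Below is a minimal sketch with an assumed example map \(F(x) = x + rx(1-x)\) and \(r = 0.5\) (neither is from the text), using a central finite difference in place of an analytical derivative:

```python
# Scalar linear stability test: |F'(x_eq)| > 1 -> unstable, < 1 -> stable.
# The map and parameter value below are assumed examples, not from the text.
def F(x, r=0.5):
    return x + r * x * (1 - x)   # equilibria at x = 0 and x = 1

def numerical_derivative(f, x, h=1e-6):
    # central difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

for x_eq in (0.0, 1.0):
    slope = numerical_derivative(F, x_eq)
    verdict = "stable" if abs(slope) < 1 else "unstable"
    print(f"x_eq = {x_eq}: slope {slope:.3f} -> {verdict}")
# x_eq = 0 gives slope 1.5 (unstable); x_eq = 1 gives slope 0.5 (stable)
```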

Now, what if \(F\) is a multidimensional nonlinear function? Such an \(F\) can be spelled out as a set of multiple scalar functions, as follows:

\[x_{1,t} =F_{1}(x_{1,t-1},x_{2,t-1},...,x_{n,t-1})\label{(5.63)}\]

\[x_{2,t} =F_{2}(x_{1,t-1},x_{2,t-1},...,x_{n,t-1})\label{(5.64)}\]

\[\vdots\]

\[x_{n,t} =F_{n}(x_{1,t-1},x_{2,t-1},...,x_{n,t-1})\label{(5.65)}\]

Using a variable replacement similar to Eq. (5.58), these equations are rewritten as follows:

\[x_{1,eq} +\Delta x_{1,t} =F_{1}(x_{1,eq}+\Delta x_{1,t-1},x_{2,eq}+\Delta x_{2,t-1},...,x_{n,eq}+\Delta x_{n,t-1})\label{(5.66)}\]

\[x_{2,eq} +\Delta x_{2,t} =F_{2}(x_{1,eq}+\Delta x_{1,t-1},x_{2,eq}+\Delta x_{2,t-1},...,x_{n,eq}+\Delta x_{n,t-1})\label{(5.67)}\]

\[\vdots\]

\[x_{n,eq} +\Delta x_{n,t} =F_{n}(x_{1,eq}+\Delta x_{1,t-1},x_{2,eq}+\Delta x_{2,t-1},...,x_{n,eq}+\Delta x_{n,t-1})\label{(5.68)}\]

Since there are many \(\Delta x_{i}\)’s in this formula, the Taylor expansion might not apply simply. However, the assumption that they are extremely small helps simplify the analysis here. By zooming into an infinitesimally small area near the equilibrium point, each \(F_{i}\) looks like a completely flat “plane” in a multidimensional space (Fig. 5.14) where all nonlinear interactions among the \(\Delta x_{i}\)’s are negligible. This means that the value of \(F_{i}\) can be approximated by a simple linear sum of independent contributions coming from the \(n\) dimensions, each of which can be calculated in a manner similar to Eq. (5.61), as
\[F_{i}(x_{1,eq}+\Delta x_{1,t-1},x_{2,eq}+\Delta x_{2,t-1},...,x_{n,eq}+\Delta x_{n,t-1})\]

\[\approx F_{i}(x_{eq}) +\left.\frac{\partial F_{i}}{\partial x_{1}}\right|_{x=x_{eq}} \Delta x_{1,t-1}+\left.\frac{\partial F_{i}}{\partial x_{2}}\right|_{x=x_{eq}} \Delta x_{2,t-1}+...+\left.\frac{\partial F_{i}}{\partial x_{n}}\right|_{x=x_{eq}}\Delta x_{n,t-1}.\label{(5.69)}\]

This linear approximation allows us to rewrite Eqs. (5.66)–(5.68) into the following, very concise linear equation:

\[x_{eq} +\Delta x_{t} \approx F(x_{eq}) +\left.\begin{pmatrix}\frac{\partial F_1}{\partial x_1} & \frac{\partial F_1}{\partial x_2} & \cdots & \frac{\partial F_1}{\partial x_n}\\ \frac{\partial F_2}{\partial x_1} & \frac{\partial F_2}{\partial x_2} & \cdots & \frac{\partial F_2}{\partial x_n}\\ \vdots & \vdots & \ddots & \vdots\\ \frac{\partial F_n}{\partial x_1} & \frac{\partial F_n}{\partial x_2} & \cdots & \frac{\partial F_n}{\partial x_n}\end{pmatrix}\right|_{x=x_{eq}} \Delta x_{t-1}\label{(5.70)}\]

The coefficient matrix filled with partial derivatives is called a Jacobian matrix of the original multidimensional function \(F\). It is a linear approximation of the nonlinear function around \(x = x_{eq}\) (Fig. 5.14), just like a regular derivative of a scalar function. Note that the orders of rows and columns of a Jacobian matrix must match. Its \(i\)-th row must be a list of spatial derivatives of \(F_{i}\), i.e., the function that determines the behavior of \(x_{i}\), while \(x_{i}\) must be used to differentiate the functions for the \(i\)-th column.
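When the analytical partial derivatives are tedious, a Jacobian matrix can also be estimated numerically. The following sketch builds it column by column with central differences; the two-variable map at the bottom is an assumed example, not one from the text:

```python
import numpy as np

def jacobian(F, x, h=1e-6):
    """Numerical Jacobian of a map F: R^n -> R^n at point x.
    Row i holds the partials of F_i; column j differentiates by x_j."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (np.asarray(F(x + e)) - np.asarray(F(x - e))) / (2 * h)
    return J

# assumed example map: F(x, y) = (x*y, x + y)
F = lambda v: (v[0] * v[1], v[0] + v[1])
print(jacobian(F, [2.0, 3.0]))  # approximately [[3, 2], [1, 1]]
```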

By combining the result above with Eq. (5.57), we obtain

\[\Delta{x_{t}} \approx J \Delta{x_{t-1}},\label{(5.71)}\]

where \(J\) is the Jacobian matrix of \(F\) at \(x = x_{eq}\). Look at how simple it can get! The dynamics are approximated in a very simple linear form, which describes the behavior of the small perturbations around \(x_{eq}\) as the new origin.

Now we can calculate the eigenvalues of \(J\) to see if this system is stable or not around \(x_{eq}\). If the absolute value of the dominant eigenvalue \(\lambda_{d}\) is less than 1, the equilibrium point is stable; even if a small perturbation is added to the system’s state, it asymptotically goes back to the equilibrium point. If \(|\lambda_{d}| > 1\), the equilibrium point is unstable; any small perturbation added to the system’s state grows exponentially and, eventually, the system’s state moves away from the equilibrium point. Sometimes, an unstable equilibrium point may come with other eigenvalues that show stability. Such equilibrium points are called saddle points, where nearby trajectories are attracted to the equilibrium point in some directions but are repelled in other directions. If \(|\lambda_{d}| = 1\), it indicates that the system may be neutral (also called Lyapunov stable), which means that the system’s state neither diverges away from nor converges to the equilibrium point. But actually, proving that the point is truly neutral requires more advanced nonlinear analysis, which is beyond the scope of this textbook. Finally, if the eigenvalues are complex conjugates, oscillatory dynamics are going on around the equilibrium point. Such equilibrium points are called a stable or unstable spiral focus or a neutral center, depending on their stabilities. Figure 5.15 shows a schematic summary of these classifications of equilibrium points for two-dimensional cases.

Linear stability analysis of discrete-time nonlinear systems

1. Find an equilibrium point of the system you are interested in.
2. Calculate the Jacobian matrix of the system at the equilibrium point.
3. Calculate the eigenvalues of the Jacobian matrix.
4. If the absolute value of the dominant eigenvalue is:

• Greater than 1 ⇒ The equilibrium point is unstable.

– If other eigenvalues have absolute values less than 1, the equilibrium point is a saddle point.

• Less than 1 ⇒ The equilibrium point is stable.

• Equal to 1 ⇒ The equilibrium point may be neutral (Lyapunov stable).

5. In addition, if there are complex conjugate eigenvalues involved, oscillatory dynamics are going on around the equilibrium point. If those complex conjugate eigenvalues are the dominant ones, the equilibrium point is called a stable or unstable spiral focus (or a neutral center if the point is neutral).
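Steps 3 and 4 above can be sketched in code. The matrix \(J\) below is an assumed example Jacobian (steps 1 and 2, finding the equilibrium and computing \(J\), are taken as already done):

```python
import numpy as np

def classify(J):
    """Classify an equilibrium point from the Jacobian's eigenvalues."""
    eigvals = np.linalg.eigvals(J)
    r = max(abs(ev) for ev in eigvals)        # |dominant eigenvalue|
    if r < 1:
        kind = "stable"
    elif r > 1:
        # unstable; a saddle if another eigenvalue lies inside the unit circle
        kind = "saddle" if any(abs(ev) < 1 for ev in eigvals) else "unstable"
    else:
        kind = "possibly neutral (Lyapunov stable)"
    if any(ev.imag != 0 for ev in eigvals):   # complex conjugates -> oscillation
        kind += ", with oscillatory dynamics"
    return kind

J = np.array([[0.5, 0.2],
              [0.1, 0.4]])   # assumed example Jacobian
print(classify(J))  # prints "stable" (eigenvalues 0.6 and 0.3)
```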

Exercise 5.15

Consider the following iterative map \((a > 0, b > 0)\)

\[x_{t} =x_{t-1}+a \sin(b x_{t-1})\label{(5.72)}\]

Conduct linear stability analysis to determine whether this model is stable or not at its equilibrium point \(x_{eq} = 0\).
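If you want to cross-check your analytical answer numerically, a short simulation of Eq. (5.72) starting from a tiny perturbation of \(x_{eq} = 0\) will show whether the perturbation grows or shrinks; the values \(a = b = 1\) are assumed samples only:

```python
import math

a, b = 1.0, 1.0      # assumed sample parameter values (any a, b > 0 works)
x = 1e-6             # a tiny perturbation away from the equilibrium x_eq = 0
for t in range(20):
    x = x + a * math.sin(b * x)
print(abs(x))        # compare with the initial perturbation size 1e-6
```

A growing perturbation indicates an unstable equilibrium; a shrinking one indicates stability.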

Exercise 5.16

Consider the following two-dimensional difference equation model:

\[x_{t}=x_{t-1} +2x_{t-1}(1-x_{t-1})-x_{t-1}y_{t-1}\label{(5.73)}\]

\[y_{t} =y_{t-1} +2y_{t-1}(1-y_{t-1})-x_{t-1}y_{t-1}\label{(5.74)}\]

1. Find all of its equilibrium points.

2. Calculate the Jacobian matrix at the equilibrium point where \(x > 0\) and \(y > 0\).

3. Calculate the eigenvalues of the matrix obtained above. 

4. Based on the result, classify the equilibrium point into one of the following: stable point, unstable point, saddle point, stable spiral focus, unstable spiral focus, or neutral center.
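Once you have worked out an equilibrium point by hand, a sketch like the following can verify it and compute the Jacobian’s eigenvalues numerically. The trivial equilibrium \((0, 0)\) is used as a placeholder; substitute your own candidate point:

```python
import numpy as np

# Verify a candidate equilibrium of Eqs. (5.73)-(5.74) and compute the
# eigenvalues of the Jacobian there via central differences.
def F(v):
    x, y = v
    return np.array([x + 2 * x * (1 - x) - x * y,
                     y + 2 * y * (1 - y) - x * y])

def check(candidate, h=1e-6):
    v = np.array(candidate, dtype=float)
    assert np.allclose(F(v), v), "not an equilibrium point"
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2)
        e[j] = h
        J[:, j] = (F(v + e) - F(v - e)) / (2 * h)
    return np.linalg.eigvals(J)

print(check([0.0, 0.0]))  # eigenvalues at the trivial equilibrium
```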

 

 

Exercise 5.17

Consider the following two-dimensional difference equation model: 

\[x_{t} =x_{t-1}y_{t-1}\label{(5.75)}\]

\[y_{t}=y_{t-1}(x_{t-1}-1)\label{(5.76)}\]

1. Find all equilibrium points (which you may have done already in Exercise 5.2).

2. Calculate the Jacobian matrix at each of the equilibrium points. 

3. Calculate the eigenvalues of each of the matrices obtained above.

4. Based on the results, discuss the stability of each equilibrium point.