# 14.4: Linear Stability Analysis of Reaction-Diffusion Systems



You may have found that the linear stability analysis of continuous field models isn't as easy as that of non-spatial models. For the latter, we have a very convenient tool, the Jacobian matrix, and the stability analysis amounts to calculating a Jacobian matrix and then investigating its eigenvalues. Everything is mechanistic and automatic, compared to what we went through in the previous section. You may wonder: aren't there any easier shortcuts for analyzing the stability of continuous field models?

Well, if you feel that way, you will become a big fan of the *reaction-diffusion systems* we discussed in Section 13.6. Their linear stability analysis is *much easier*, because of the clear separation of local reaction dynamics and spatial diffusion dynamics. To be more specific, you can bring the Jacobian matrix back into the analysis! Here is how and why it works.

Consider conducting a linear stability analysis of the following standard reaction-diffusion system:

\[\frac{\partial{f_{1}}}{\partial{t}} =R_{1} (f_{1},f_{2}, \cdots, f_{n})+D_{1}\nabla^{2} f_{1} \label{(14.80)}\]

\[\frac{\partial{f_{2}}}{\partial{t}} =R_{2}(f_{1}, f_{2}, \cdots, f_{n})+D_{2}\nabla^{2}f_{2} \label{(14.81)}\]

\[\vdots\]

\[\frac{\partial{f_{n}}}{\partial{t}} =R_{n}(f_{1}, f_{2}, \cdots, f_{n})+D_{n}\nabla^{2}f_{n} \label{(14.82)}\]

The homogeneous equilibrium state of this system, \((f_{1eq},f_{2eq},...,f_{neq})\), is a solution of the following equations:

\[0=R_{1}(f_{1eq}, f_{2eq}, \cdots, f_{neq}) \label{(14.83)}\]

\[0=R_{2}(f_{1eq}, f_{2eq}, \cdots, f_{neq}) \label{(14.84)}\]

\[\vdots\]

\[0=R_{n}(f_{1eq}, f_{2eq}, \cdots, f_{neq}) \label{(14.85)}\]

To conduct a linear stability analysis, we replace the original state variables as follows:

\[f_{i}(x,t) \Rightarrow f_{ieq} +\Delta{f_{i}(x,t)} =f_{ieq} +\sin{(\omega{x} +\phi)}\Delta{f_{i}(t)} \qquad \text{for all } i \label{(14.86)}\]

This replacement turns the dynamical equations into the following form:

\[ S\dfrac{\partial{\Delta}f_{1}}{\partial{t}} =R_{1}(f_{1eq} +S\Delta{f_{1}}, f_{2eq} +S\Delta{f_{2}}, \cdots, f_{neq} +S\Delta{f_{n}}) -D_{1}\omega^{2}S\Delta{f_{1}} \label{(14.87)}\]

\[ S\dfrac{\partial{\Delta}f_{2}}{\partial{t}} =R_{2}(f_{1eq} +S\Delta{f_{1}}, f_{2eq} +S\Delta{f_{2}}, \cdots, f_{neq} +S\Delta{f_{n}}) -D_{2}\omega^{2}S\Delta{f_{2}} \label{(14.88)}\]

\[\vdots\]

\[ S\frac{\partial{\Delta}f_{n}}{\partial{t}} =R_{n}(f_{1eq} +S\Delta{f_{1}}, f_{2eq} +S\Delta{f_{2}}, \cdots, f_{neq} +S\Delta{f_{n}}) -D_{n}\omega^{2}S\Delta{f_{n}} \label{(14.89)}\]

Here I used \(S = \sin{(\omega{x} + \phi)}\) in the expressions above only to shorten them. These equations can be summarized in a single vector form about \(\Delta{f}\),

\[\sin{(\omega{x} +\phi)}\frac{\partial{\Delta{f}}}{\partial{t}} =R(f_{eq} +\sin{(\omega{x} +\phi)}\Delta{f}) -D\omega^{2}\sin{(\omega{x} +\phi)}\Delta{f}, \label{(14.90)} \]

where \(R\) is a vector function that represents all the reaction terms, and \(D\) is a diagonal matrix whose \(i\)-th diagonal component is \(D_i\). Now that all the diffusion terms have been simplified, if we can also linearize the reaction terms, we will complete the linearization task. And this is where the Jacobian matrix is brought back into the spotlight. The reaction terms are all local without any spatial operators involved, and therefore, from the discussion in Section 5.7, we know that the vector function \(R(f_{eq} + \sin{(\omega x + \phi)}\Delta{f})\) can be linearly approximated as follows:

\[R(f_{eq} +\sin{(\omega{x} +\phi)\Delta{f}})\approx R(f_{eq}) + \begin{pmatrix} \dfrac{\partial{R_{1}}}{\partial{f_{1}}} & \dfrac{\partial{R_{1}}}{\partial{f_{2}}} & \cdots & \dfrac{\partial{R_{1}}}{\partial{f_{n}}} \\ \dfrac{\partial{R_{2}}}{\partial{f_{1}}} & \dfrac{\partial{R_{2}}}{\partial{f_{2}}} & \cdots & \dfrac{\partial{R_{2}}}{\partial{f_{n}}} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial{R_{n}}}{\partial{f_{1}}} & \dfrac{\partial{R_{n}}}{\partial{f_{2}}} & \cdots & \dfrac{\partial{R_{n}}}{\partial{f_{n}}} \end{pmatrix}| _{f=f_{eq}} \sin{(\omega{x} +\phi)}\Delta{f} \label{(14.91)}\]

\[=\sin{(\omega{x} +\phi)} \begin{pmatrix} \dfrac{\partial{R_{1}}}{\partial{f_{1}}} &\dfrac{\partial{R_{1}}}{\partial{f_{2}}} &\cdots & \dfrac{\partial{R_{1}}}{\partial{f_{n}}} \\ \dfrac{\partial{R_{2}}}{\partial{f_{1}}} &\dfrac{\partial{R_{2}}}{\partial{f_{2}}} &\cdots & \dfrac{\partial{R_{2}}}{\partial{f_{n}}} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial{R_{n}}}{\partial{f_{1}}} &\dfrac{\partial{R_{n}}}{\partial{f_{2}}} &\cdots & \dfrac{\partial{R_{n}}}{\partial{f_{n}}} \end{pmatrix} |_{f=f_{eq}} \Delta{f}\label{(14.92)}\]

Note that we can eliminate \(R(f_{eq})\) because of Eqs. \ref{(14.83)}, \ref{(14.84)} and \ref{(14.85)}. By plugging this result into Eq. \ref{(14.90)}, we obtain

\[\sin{(\omega{x} +\phi)} \frac{\partial{\Delta{f}}}{\partial{t}} =\sin{(\omega{x} +\phi)}J|_{f=f_{eq}} \Delta{f} -D{\omega^{2}} \sin{(\omega{x} +\phi)}\Delta{f}, \label{(14.93)}\]

\[\frac{\partial{\Delta{f}}}{\partial{t}} =(J-D{\omega^{2}})|_{f=f_{eq}}\Delta{f}, \label{(14.94)}\]

where \(J\) is the Jacobian matrix of the reaction terms (\(R\)). Very simple! Now we just need to calculate the eigenvalues of this coefficient matrix to study the stability of the system.
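If the reaction terms are complicated, the Jacobian \(J\) itself can also be approximated numerically. Below is a minimal Python sketch of mine (not part of the original derivation), which estimates \(J\) by central finite differences; the reaction function `R` is a hypothetical linear Turing-type reaction used purely as a test case:

```python
import numpy as np

# Hypothetical Turing-type reaction terms with homogeneous equilibrium (h, k);
# the parameter values are illustrative, not prescribed by the method.
a, b, c, d = 1.0, -1.0, 2.0, -1.5
h, k = 1.0, 1.0

def R(f):
    u, v = f
    return np.array([a * (u - h) + b * (v - k),
                     c * (u - h) + d * (v - k)])

def jacobian(R, feq, eps=1e-6):
    """Central finite-difference approximation of the Jacobian of R at feq."""
    n = len(feq)
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (R(feq + e) - R(feq - e)) / (2 * eps)  # column j = dR/df_j
    return J

J = jacobian(R, np.array([h, k]))
print(J)  # recovers [[a, b], [c, d]] for this linear reaction
```

For this linear test case the finite-difference result matches the analytical Jacobian exactly (up to round-off), which is a quick sanity check before using the same helper on nonlinear reaction terms.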

The stability of a reaction-diffusion system at its homogeneous equilibrium state \(f_{eq}\) can be studied by calculating the eigenvalues of

\[(J-D\omega^{2})|_{f=f_{eq}}, \label{(14.95)}\]

where \(J\) is the Jacobian matrix of the reaction terms, \(D\) is the diagonal matrix made of diffusion constants, and \(\omega\) is a parameter that determines the spatial frequency of perturbations.

This shortcut in linear stability analysis is made possible thanks to the clear separation of reaction and diffusion terms in reaction-diffusion systems. I hope you now understand part of the reasons why so many researchers are fond of this modeling framework.
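As a concrete illustration of this shortcut, here is a short Python sketch of mine (assuming NumPy) that scans the largest real eigenvalue part of \((J - D\omega^{2})\) over a range of \(\omega\); the matrices below use the Turing-model parameters that appear later in this section:

```python
import numpy as np

# J: Jacobian of the reaction terms at the homogeneous equilibrium.
# D: diagonal matrix of diffusion constants.
J = np.array([[1.0, -1.0],
              [2.0, -1.5]])
D = np.diag([1e-4, 6e-4])

def max_growth_rate(omega):
    """Largest real part among the eigenvalues of J - D*omega^2."""
    return np.linalg.eigvals(J - D * omega**2).real.max()

# The equilibrium is stable only if perturbations of every spatial
# frequency decay, so we sample omega over a range and check the sign.
omegas = np.linspace(0, 200, 2001)
rates = np.array([max_growth_rate(w) for w in omegas])

if (rates < 0).all():
    print("homogeneous equilibrium appears stable over the sampled range")
else:
    print("unstable; fastest-growing omega is about", omegas[rates.argmax()])
```

Note that a grid scan like this is only a numerical check over a finite range of \(\omega\); the analytical conditions derived below cover all real \(\omega\) at once.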

Let’s apply this new knowledge to an example. Here is the Turing model we discussed before:

\[\frac{\partial{u}}{\partial{t}} = a(u-h) +b(v-k)+D_{u}\nabla^{2}{u} \label{(14.96)}\]

\[\frac{\partial{v}}{\partial{t}} =c(u-h) +d(v-k) +D_{v} \nabla^{2}v \label{(14.97)}\]

Using Eq. \ref{(14.95)}, we can immediately calculate its coefficient matrix:

\[\left(\begin{pmatrix} a & b \\ c & d \end{pmatrix} - \begin{pmatrix} D_{u} & 0 \\ 0 & D_{v} \end{pmatrix} \omega^{2}\right)\Bigg|_{(u, v)=(h, k)} = \begin{pmatrix} a-D_{u}\omega^{2} & b \\ c & d-D_{v} \omega^{2} \end{pmatrix} \label{(14.98)}\]

From the discussion in Section 7.4, we already know that, in order for this matrix to show stability, its determinant must be positive and its trace must be negative (see Fig. 7.4.1). Therefore, the condition for the homogeneous equilibrium state of this system to be stable is that both of the following two inequalities must be true for all real values of \(ω\):

\[0 < (a- D_{u} \omega^{2})(d-D_{v}\omega^{2}) -bc \label{(14.99)}\]

\[0 > a- D_{u}\omega^{2} +d-D_{v} \omega^{2} \label{(14.100)}\]

These inequalities can be rewritten using \(\det{(A)}\) and \(Tr{(A)}\) of \(A =\begin{pmatrix}a & b \\ c & d \end{pmatrix}\), as follows:

\[aD_{v}\omega^{2} +dD_{u}\omega^{2}-D_{u}D_{v}\omega^{4} <\det{(A)} \label{(14.101)}\]

\[D_{u}\omega^{2} +D_{v}\omega^{2} > Tr{(A)} \label{(14.102)}\]

Now, imagine that the original non-spatial model without diffusion terms was already stable, i.e., \(\det{(A)} > 0\) and \(Tr{(A)} < 0\). Is there any possibility that the introduction of diffusion could destabilize the system by itself? The second inequality is always true for negative \(Tr{(A)}\), because its left-hand side can’t be negative. But the first inequality can be violated, if

\[g(z) =-D_{u}D_{v}z^{2} +(aD_{v} +dD_{u})z -\det{(A)} \qquad (\text{with } z = \omega^{2}) \label{(14.103)}\]

can take a positive value for some \(z > 0\). \(g(z)\) can be rewritten as

\[g(z) =-D_{u}D_{v} \left(z -\dfrac{aD_{v} +dD_{u}}{2D_{u}D_{v}}\right)^{2} +\frac{(aD_{v} +dD_{u})^{2}}{4D_{u}D_{v}} -\det{(A)}. \label{(14.104)}\]

There are two potential scenarios in which this polynomial can be positive for some \(z > 0\), as shown in Fig. 14.4.1.

*Figure \(\PageIndex{1}\)*: Two possible scenarios in which \(g(z)\) can take a positive value for some \(z > 0\). Left: When the peak exists on the positive side of \(z\). Right: When the peak exists on the negative side of \(z\).

If the peak exists on the positive side of \(z\) (\(aD_{v} + dD_{u} > 0\); Fig. 14.4.1 left), the only condition is that the peak should stick out above the \(z\)-axis, i.e.,

\[\dfrac{(aD_{v} +dD_{u})^{2}}{4D_{u}D_{v}} -\det{(A)} >0. \label{(14.105)}\]

Or, if the peak exists on the negative side of \(z\) (\(aD_{v} +dD_{u} < 0\); Fig. 14.4.1 right), the condition is that the intercept of \(g(z)\) should be positive, i.e.,

\[g(0) =-\det{(A)} >0, \label{(14.106)}\]

but this can’t be true if the original non-spatial model is stable. Therefore, the only possibility for diffusion to destabilize the otherwise stable system is the first case, whose condition can be simplified to

\[aD_{v} +dD_{u} >2 \sqrt{D_{u}D_{v} \det{(A)}}. \label{(14.107)}\]

Let’s test how well this prediction applies to actual dynamics of Turing models. In the previous chapter, we used \((a,b,c,d) = (1,−1,2,−1.5)\) and \((D_{u},D_{v}) = (10^{−4}, 6×10^{−4})\) to generate the simulation result shown in Fig. 13.17. With these parameter settings, \(\det{(A)} = −1.5−(−2) = 0.5 > 0\) and \(Tr{(A)} = −0.5 < 0\), so the system would be stable if there were no diffusion terms. However,

\[aD_{v} +dD_{u} = 6×10^{−4} −1.5×10^{−4} = 4.5×10^{−4}, \label{(14.108)}\]

\[2\sqrt{D_{u}D_{v}\det{(A)}} =2\sqrt{10^{−4} ×6×10^{−4} ×0.5} = 2×10^{−4}\sqrt{3} \approx 3.464×10^{−4}, \label{(14.109)} \]

therefore inequality \ref{(14.107)} holds. This indicates that the homogeneous equilibrium state must be unstable, and non-homogeneous spatial patterns should arise, which you can actually see in Fig. 13.17. As briefly mentioned in Section 13.6, this is called the *diffusion-induced instability*. It is quite a counter-intuitive phenomenon, because diffusion is usually considered a process by which a non-homogeneous structure is destroyed by random motion. But here, the system is stable at its homogeneous state *without diffusion*, yet it can spontaneously create non-homogeneous structures *with diffusion*. This is a really nice example of how mind-boggling the behavior of complex systems can be sometimes.
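The arithmetic above is easy to double-check in a few lines of Python (a sketch of mine, using only the standard library):

```python
import math

# Parameters used for the simulation in Fig. 13.17:
a, b, c, d = 1.0, -1.0, 2.0, -1.5
Du, Dv = 1e-4, 6e-4

detA = a * d - b * c   # 0.5 > 0
trA = a + d            # -0.5 < 0  -> stable without diffusion

lhs = a * Dv + d * Du                # Eq. 14.108: 4.5e-4
rhs = 2 * math.sqrt(Du * Dv * detA)  # Eq. 14.109: ~3.464e-4
print(lhs > rhs)  # True: inequality 14.107 holds, diffusion destabilizes
```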

Exercise \(\PageIndex{1}\)

Conduct a linear stability analysis of the spatially extended predator-prey model, around its non-zero homogeneous equilibrium state, and discuss the results:

\[\frac{\partial{r}}{\partial{t}} =ar-brf +D_{r}\nabla^{2}r \label{(14.110)}\]

\[\frac{\partial{f}}{\partial{t}} =-cf +drf +D_{f} \nabla^{2}f \label{(14.111)}\]

Assume that all model parameters are positive.

Exercise \(\PageIndex{2}\)

Conduct a linear stability analysis of the Gray-Scott model, around its homogeneous equilibrium state \((u_{eq},v_{eq}) = (1,0)\), and discuss the results:

\[\frac{\partial{u}}{\partial{t}} =F(1-u) -uv^{2} +D_{u} \nabla^{2}u \label{(14.112)}\]

\[\frac{\partial{v}}{\partial{t}} =-(F+k) v +uv^{2} +D_{v} \nabla^{2}v \label{(14.113)}\]

Again, assume that all model parameters are positive.

There are a few more useful predictions we can make about spontaneous pattern formation in reaction-diffusion systems. Let’s continue to use the Turing model discussed above as an example. We can calculate the actual eigenvalues of the coefficient matrix, as follows:

\[\begin{align} \begin{vmatrix} 1-10^{-4}\omega^{2} -\lambda & -1 \\ 2 & -1.5-6\times 10^{-4}\omega^{2} -\lambda \end{vmatrix} =0 \label{(14.114)} \\[4pt] (1-10^{-4}\omega^{2} -\lambda)(-1.5-6\times 10^{-4}\omega^{2} -\lambda) -(-2) =0 \label{(14.115)} \\[4pt] \lambda^{2} +(0.5+7\times 10^{-4}\omega^{2})\lambda +(1-10^{-4}\omega^{2})(-1.5-6\times 10^{-4}\omega^{2}) +2 =0 \label{(14.116)} \\[4pt] \lambda =\frac{1}{2}\left(-(0.5+7\times 10^{-4}\omega^{2}) \pm \sqrt{(0.5+7\times 10^{-4}\omega^{2})^{2} -4(1-10^{-4}\omega^{2})(-1.5-6\times 10^{-4}\omega^{2}) -8}\right) \label{(14.117)} \\[4pt] =\frac{1}{2}\left(-(0.5+7\times 10^{-4}\omega^{2}) \pm \sqrt{2.5\times 10^{-7}\omega^{4} +2.5\times 10^{-3}\omega^{2} -1.75}\right) \label{(14.118)} \end{align}\]

Out of these two eigenvalues, the one that could have a positive real part is the one with the “\(+\)” sign (let’s call it \(\lambda_{+}\)).
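The closed-form expression in Eq. \ref{(14.118)} is easy to spot-check numerically, e.g., at \(\omega = 50\) (a Python sketch of mine, assuming NumPy):

```python
import numpy as np

omega = 50.0
J = np.array([[1.0, -1.0],
              [2.0, -1.5]])
D = np.diag([1e-4, 6e-4])

# Direct eigenvalue computation of the coefficient matrix J - D*omega^2:
numeric = np.linalg.eigvals(J - D * omega**2).real.max()

# Closed-form lambda_+ (the term under the square root is positive here):
w2 = omega**2
lam_plus = 0.5 * (-(0.5 + 7e-4 * w2)
                  + np.sqrt(2.5e-7 * w2**2 + 2.5e-3 * w2 - 1.75))
print(np.isclose(numeric, lam_plus))  # True
```

The two values agree to machine precision, which confirms that the characteristic polynomial was expanded correctly above.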

Here, what we are going to do is to calculate the value of \(\omega\) that attains the largest real part of \(\lambda_{+}\). This is a meaningful question, because the largest real part of the eigenvalues corresponds to the dominant eigenfunction \(\sin{(\omega x + \phi)}\) that grows fastest, which should be the most visible spatial pattern arising in the system’s state. If we find such a value of \(\omega\), then \(2\pi/\omega\) gives us the length scale of the dominant eigenfunction.

We can get an answer to this question by analyzing where the extremum of \(\lambda_{+}\) occurs. To make the analysis simpler, we let \(z = \omega^2\) again and use \(z\) as an independent variable, as follows:

\[\begin{align} \frac{d\lambda_{+}}{dz} =\frac{1}{2}\left(-7\times 10^{-4} +\frac{5\times 10^{-7}z +2.5\times 10^{-3}}{2\sqrt{2.5\times 10^{-7}z^{2} +2.5\times 10^{-3}z -1.75}}\right) =0 \label{(14.119)} \\[4pt] 7\times 10^{-4}\left(2\sqrt{2.5\times 10^{-7}z^{2} +2.5\times 10^{-3}z -1.75}\right) =5\times 10^{-7}z +2.5\times 10^{-3} \label{(14.120)} \\[4pt] 1.96\times 10^{-6}(2.5\times 10^{-7}z^{2} +2.5\times 10^{-3}z -1.75) =2.5\times 10^{-13}z^{2} +2.5\times 10^{-9}z +6.25\times 10^{-6} \label{(14.121)} \\[4pt] (... blah \ blah \ blah ...) \\[4pt] 2.4\times 10^{-13}z^{2} +2.4\times 10^{-9}z -9.68\times 10^{-6} =0 \label{(14.122)} \\[4pt] z = 3082.9, \ -13082.9 \label{(14.123)}\end{align}\]

Phew. Exhausted. Anyway, since \(z = ω^2\), the value of \(ω\) that corresponds to the dominant eigenfunction is

\[\omega = \sqrt{3082.9} \approx 55.5239. \label{(14.124)}\]

Now we can calculate the length scale of the corresponding dominant eigenfunction, which is

\[\ell =\dfrac{2\pi}{\omega} \approx 0.113162. \label{(14.125)}\]

This number gives the characteristic distance between the ridges (or between the valleys) in the dominant eigenfunction, which is measured in units of length. A length scale of about 0.11 means that there are about nine ridges and valleys in one unit of length. In fact, the simulation shown in Fig. 13.17 was conducted in a \([0,1]×[0,1]\) unit square domain, so go ahead and count how many waves are lined up along its \(x\)- or \(y\)-axis in its final configuration.

Have you finished counting them? Yes, the simulation result indeed showed about nine waves across each axis! This is an example of how we can predict not only the stability of the homogeneous equilibrium state, but also the characteristic length scale of the spontaneously forming patterns if the equilibrium state turns out to be unstable.
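The end of this calculation, from the quadratic in Eq. \ref{(14.122)} down to the length scale, can be reproduced numerically as well (a short Python sketch of mine, assuming NumPy):

```python
import numpy as np

# Solve 2.4e-13 z^2 + 2.4e-9 z - 9.68e-6 = 0 for z = omega^2 (Eq. 14.122):
roots = np.roots([2.4e-13, 2.4e-9, -9.68e-6])
z = roots[roots > 0][0]      # keep the positive root, ~3082.9

omega = np.sqrt(z)           # dominant spatial frequency, ~55.52
length = 2 * np.pi / omega   # characteristic length scale, ~0.1132
print(z, omega, length)
```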

Exercise \(\PageIndex{3}\)

Estimate the length scale of patterns that form in the above Turing model with \((a,b,c,d) = (0.5,−1,0.75,−1)\) and \((D_u,D_v) = (10^{−4},10^{−3})\). Then confirm your prediction with numerical simulations.

Exercise \(\PageIndex{4}\)

If both diffusion constants are multiplied by the same factor \(\psi\), how does that affect the length scale of the patterns?

Okay, let’s make just one more prediction, and we will be done. Here we predict the critical ratio of the two diffusion constants, at which the system stands right at the threshold between homogenization and pattern formation. Using a new parameter \(\rho = D_{v}/D_{u}\), the condition for instability (inequality \ref{(14.107)}) can be further simplified as follows:

\[a\rho{D_{u}} +dD_{u} > 2\sqrt{\rho{D_{u}^{2}} \det{(A)}} \label{(14.126)}\]

\[a \rho +d > 2\sqrt{\rho\det{(A)}} \label{(14.127)}\]

For the parameter values we used above, this inequality is solved as follows:

\[\rho -1.5 > 2 \sqrt{0.5\rho}\label{(14.128)}\]

\[\rho^{2} -5\rho +2.25 >0 \qquad (\text{with } \rho -1.5 > 0) \label{(14.129)}\]

\[\rho > 4.5 \label{(14.130)}\]

This means that the diffusion of \(v\) must be at least 4.5 times faster than that of \(u\) in order to cause the diffusion instability. In other words, \(u\) acts more locally, while the effects of \(v\) reach over longer spatial ranges. If you look back at the original coefficient matrix \(\begin{pmatrix} 1 & -1 \\ 2 & -1.5 \end{pmatrix}\), you will realize that \(u\) tends to increase both \(u\) and \(v\), while \(v\) tends to suppress \(u\) and \(v\). Therefore, this represents typical “short-range activation and long-range inhibition” dynamics that we discussed in Section 11.5, which is essential in many pattern formation processes.
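The threshold derivation can also be checked numerically (a Python sketch of mine, assuming NumPy):

```python
import numpy as np

# Roots of rho^2 - 5*rho + 2.25 = 0, obtained by squaring the condition
# rho - 1.5 > 2*sqrt(0.5*rho) for this parameter setting:
roots = np.sort(np.roots([1.0, -5.0, 2.25]))
print(roots)  # approximately [0.5, 4.5]

# Only the branch consistent with rho - 1.5 > 0 survives the squaring,
# so the critical ratio is the larger root:
rho_c = roots[roots > 1.5][0]
print(rho_c)  # approximately 4.5
```

Discarding the smaller root is important: squaring an inequality can introduce spurious solutions, and \(\rho = 0.5\) violates the side condition \(\rho - 1.5 > 0\).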

Figure 14.4.2 shows the numerical simulation results with the ratio of the diffusion constants systematically varied. Indeed, a sharp transition of the results across \(ρ = 4.5\) is actually observed! This kind of transition of a reaction-diffusion system’s behavior between homogenization and pattern formation is called a *Turing bifurcation*, which Turing himself showed in his monumental paper in the 1950s [44].

Exercise \(\PageIndex{5}\)

Below is a variant of the Turing pattern formation model:

\[\frac{\partial{u}}{\partial{t}} =u(v-1) -\alpha +D_{u} \nabla^{2}u \label{(14.131)} \]

\[\frac{\partial{v}}{\partial{t}} =\beta -uv +D_{v}\nabla^{2}v\label{(14.132)}\]

Here \(α\) and \(β\) are positive parameters. Let \((α,β) = (12,16)\) throughout this exercise. Do the following:

- Find its homogeneous equilibrium state.
- Examine the stability of the homogeneous equilibrium state without diffusion terms.
- With \((D_{u},D_{v}) = (10^{−4},10^{−3})\), conduct a linear stability analysis of this model around the homogeneous equilibrium state to determine whether non-homogeneous patterns form spontaneously. If they do, estimate the length scale of the patterns.
- Determine the critical ratio of the two diffusion constants.
- Conﬁrm your predictions with numerical simulations.

*Figure \(\PageIndex{2}\)*: Numerical simulation results of the Turing pattern formation model with \((a,b,c,d) = (1,−1,2,−1.5)\), \(D_{u} = 10^{−4}\), and \(D_{v} = ρD_{u}\). Densities of \(u\) are plotted in grayscale (darker = greater). The value of \(ρ\) is given below each result.