# 4.2: Classifications of Model Equations


There are some technical terminologies I need to introduce before moving on to further discussions:

**Linear system** A dynamical equation whose rules involve just a linear combination of state variables (a constant times a variable, a constant, or their sum).

**Nonlinear system** Anything else (e.g., equations involving squares, cubes, radicals, trigonometric functions, etc., of state variables).

**First-order system** A difference equation whose rules involve state variables of the immediate past (at time \(t−1\)) only^{a}.

**Higher-order system** Anything else.


^{a}Note that the meaning of “order” in this context is different from the order of terms in polynomials.

**Autonomous system** A dynamical equation whose rules don’t explicitly include time \(t\) or any other external variables.

**Non-autonomous system** A dynamical equation whose rules do include time \(t\) or other external variables explicitly.

Exercise \(\PageIndex{1}\)

Decide whether each of the following examples is (1) linear or nonlinear, (2) first-order or higher-order, and (3) autonomous or non-autonomous.

- \(x_{t} = ax_{t−1} + b\)
- \( x_{t} = ax_{t−1} + bx_{t−2} + cx_{t−3}\)
- \( x_{t} = ax_{t−1}(1−x_{t−1})\)
- \( x_{t} = ax_{t−1} + bx_{t−2}^{2} + \sqrt[c]{x_{t−1}x_{t−3}}\)
- \( x_{t} = ax_{t−1}x_{t−2} + bx_{t−3} + \sin(t)\)
- \(x_{t} = ax_{t−1} + by_{t−1}, y_{t} = cx_{t−1} + dy_{t−1}\)

Also, there are some useful things that you should know about these classifications:

Non-autonomous, higher-order difference equations can always be converted into autonomous, first-order forms, by introducing additional state variables.

For example, the second-order difference equation

\[x_{t}=x_{t-1}+x_{t-2} \label{(4.5)}\]

(which is called the **Fibonacci sequence**) can be converted into a first-order form by introducing a “memory” variable \(y\) as follows:

\[y_{t} = x_{t-1}\label{(4.6)}\]

Using this, \(x_{t−2}\) can be rewritten as \(y_{t−1}\). Therefore the equation can be rewritten as follows:

\[ \begin{align} x_{t} &= x_{t-1}+y_{t-1}\label{(4.7)} \\[4pt] y_{t} &= x_{t-1}\label{(4.8)} \end{align}\]
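This conversion can be checked numerically. The following Python sketch (function names and initial conditions \(x_0 = x_1 = 1\) are chosen here for illustration) simulates both the original second-order recurrence and its converted first-order form, confirming that they generate the same sequence:

```python
def fibonacci_second_order(n, x0=1, x1=1):
    """Simulate x_t = x_{t-1} + x_{t-2} directly (second-order)."""
    xs = [x0, x1]
    for _ in range(n - 1):
        xs.append(xs[-1] + xs[-2])
    return xs

def fibonacci_first_order(n, x0=1, x1=1):
    """Simulate the converted form: x_t = x_{t-1} + y_{t-1}, y_t = x_{t-1}."""
    x, y = x1, x0          # y is the "memory" variable holding x one step back
    xs = [x0, x1]
    for _ in range(n - 1):
        x, y = x + y, x    # update both state variables simultaneously
        xs.append(x)
    return xs

print(fibonacci_second_order(8))  # [1, 1, 2, 3, 5, 8, 13, 21, 34]
print(fibonacci_first_order(8))   # same sequence
```

Note that the first-order version updates \(x\) and \(y\) simultaneously (via tuple assignment), which matters: \(y_t\) must receive the *old* value of \(x\), not the newly computed one.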

This is now first-order. This conversion technique works for third-order or any higher-order equations as well, as long as the historical dependency is finite. Similarly, a non-autonomous equation

\[x_{t} = x_{t-1} +t\label{(4.9)}\]

can be converted into an autonomous form by introducing a “clock” variable \(z\) as follows:

\[z_{t}= z_{t-1} +1, z_{0} =1\label{(4.10)}\]

This definition guarantees \(z_{t−1} = t\). Using this, the equation can be rewritten as

\[x_{t} = x_{t-1}+ z_{t-1},\label{(4.11)}\]
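The clock-variable trick can also be verified numerically. This short Python sketch (with an arbitrary initial condition \(x_0 = 0\), chosen for illustration) compares the non-autonomous equation \eqref{(4.9)} with its autonomous form \eqref{(4.10)}–\eqref{(4.11)}:

```python
def non_autonomous(n, x0=0):
    """Directly simulate the non-autonomous equation x_t = x_{t-1} + t."""
    xs = [x0]
    for t in range(1, n + 1):
        xs.append(xs[-1] + t)
    return xs

def autonomous(n, x0=0):
    """Simulate x_t = x_{t-1} + z_{t-1} with z_t = z_{t-1} + 1, z_0 = 1."""
    x, z = x0, 1
    xs = [x0]
    for _ in range(n):
        x, z = x + z, z + 1   # z_{t-1} plays the role of t
        xs.append(x)
    return xs

print(non_autonomous(5))  # [0, 1, 3, 6, 10, 15]
print(autonomous(5))      # same trajectory, with no explicit t in the update rule
```

The autonomous version never references the time step \(t\) inside its update rule; all time dependence is carried by the state variable \(z\).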

which is now autonomous. These mathematical tricks might look like some kind of cheating, but they really aren’t. The take-home message is that autonomous first-order equations can cover all the dynamics of any non-autonomous, higher-order equations. This gives us confidence that we can safely focus on autonomous first-order equations without missing anything fundamental. This is probably why autonomous first-order difference equations are given a particular name: *iterative maps*.

Exercise \(\PageIndex{2}\)

Convert the following difference equations into an autonomous, first-order form.

1. \(x_{t} = x_{t-1}(1-x_{t-1})\sin t\)

2. \(x_{t} = x_{t-1} +x_{t-2}-x_{t-3}\)

Another important thing about dynamical equations is the following distinction between linear and nonlinear systems:

Linear equations are always analytically solvable, while nonlinear equations don’t have analytical solutions in general.

Here, an *analytical solution* means a solution written in the form \(x_{t} = f(t)\), without using state variables on the right-hand side. Such a solution is also called a *closed-form solution* because the right-hand side is “closed,” i.e., it needs only \(t\) and doesn’t need \(x\). Obtaining a closed-form solution is helpful because it lets you calculate (i.e., predict) the system’s state directly from \(t\) at any point in the future, without actually simulating the whole history of its behavior. Unfortunately, this is not possible for nonlinear systems in most cases.
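As a concrete illustration, the linear first-order equation \(x_t = ax_{t-1} + b\) (the first example in Exercise 1) has the well-known closed-form solution \(x_t = a^t x_0 + b\,\frac{1 - a^t}{1 - a}\) for \(a \neq 1\). The Python sketch below (parameter values chosen arbitrarily for illustration) confirms that evaluating this formula at any \(t\) matches iterating the recurrence step by step:

```python
def iterate(a, b, x0, t):
    """Simulate x_t = a*x_{t-1} + b step by step for t steps."""
    x = x0
    for _ in range(t):
        x = a * x + b
    return x

def closed_form(a, b, x0, t):
    """Evaluate the closed-form solution directly (requires a != 1)."""
    return a**t * x0 + b * (1 - a**t) / (1 - a)

a, b, x0 = 0.5, 2.0, 1.0
for t in (0, 1, 5, 50):
    # the closed form predicts the state at time t without any iteration
    assert abs(iterate(a, b, x0, t) - closed_form(a, b, x0, t)) < 1e-9

print(closed_form(a, b, x0, 50))  # ≈ 4.0, approaching the fixed point b/(1-a)
```

The closed form reveals the long-term behavior immediately: for \(|a| < 1\), the \(a^t\) terms vanish and \(x_t\) converges to \(b/(1-a)\), something a step-by-step simulation can only suggest, not prove.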