1: Getting Started - The Language of ODEs
This is a course about ordinary differential equations (ODEs). So we begin by defining what we mean by this term.
DEFINITION 1: ORDINARY DIFFERENTIAL EQUATIONS
An ordinary differential equation (ODE) is an equation for a function of one variable that involves (''ordinary'') derivatives of the function (and, possibly, known functions of the same variable).
We give several examples below.
- \(\frac{d^{2}x}{dt^2}+\omega^{2}x = 0\)
- \(\frac{d^{2}x}{dt^2}-\alpha x\frac{dx}{dt}-x+x^3 = \sin(\omega t)\)
- \(\frac{d^{2}x}{dt^2}- \mu(1-x^2)\frac{dx}{dt}+x = 0\)
- \(\frac{d^{3}f}{d\eta^3} +f\frac{d^{2}f}{d\eta^2}+ \beta(1-(\frac{d^{2}f}{d\eta^2})^2) = 0\)
- \(\frac{d^{4}y}{dx^4}+x^2\frac{d^{2}y}{dx^2}+x^5 = 0\)
ODEs can be succinctly written by adopting a more compact notation for the derivatives. We rewrite the examples above with this shorthand notation.
- \(\ddot{x}+ \omega^{2}x = 0\)
- \(\ddot{x}-\alpha x\dot{x}-x+x^3 = \sin(\omega t)\)
- \(\ddot{x}-\mu(1-x^2)\dot{x}+x = 0\)
- \(f'''+ff''+\beta(1-(f'')^2) = 0\)
- \(y''''+x^{2}y''+x^5 = 0\)
Characterizing ODEs
Now that we have defined the notion of an ODE, we develop some additional concepts in order to describe the structure of ODEs more deeply. These notions of ''structure'' are important since, as we will see, they play a key role in how we understand the behavior of solutions of ODEs.
DEFINITION 2: DEPENDENT VARIABLE
The value of the function; e.g., for Example 1, \(x(t)\).
DEFINITION 3: INDEPENDENT VARIABLE
The argument of the function; e.g., for Example 1, \(t\).
We summarize a list of the dependent and independent variables in the five examples of ODEs given above.
| Example | Dependent variable | Independent Variable |
|---|---|---|
| 1 | \(x\) | \(t\) |
| 2 | \(x\) | \(t\) |
| 3 | \(x\) | \(t\) |
| 4 | \(f\) | \(\eta\) |
| 5 | \(y\) | \(x\) |
The notion of ''order'' is an important characteristic of ODEs.
DEFINITION 4: ORDER OF AN ODE
The order of the highest derivative of the dependent variable that appears in the ODE.
We give the order of each of the ODEs in the five examples above.
| Example | Order |
|---|---|
| 1 | Second Order |
| 2 | Second Order |
| 3 | Second Order |
| 4 | Third Order |
| 5 | Fourth Order |
Distinguishing between the independent and dependent variables enables us to define the notion of autonomous and nonautonomous ODEs.
DEFINITION 5: AUTONOMOUS and NONAUTONOMOUS
An ODE is said to be autonomous if none of the coefficients (i.e., functions) multiplying the dependent variable, or any of its derivatives, depend explicitly on the independent variable, and if no terms not depending on the dependent variable or any of its derivatives depend explicitly on the independent variable. Otherwise, it is said to be nonautonomous.
Or, more succinctly, an ODE is autonomous if the independent variable does not explicitly appear in the equation. Otherwise, it is nonautonomous. We apply this definition to the five examples above, and summarize the results in the table below.
| Example | Autonomous or Nonautonomous |
|---|---|
| 1 | autonomous |
| 2 | nonautonomous |
| 3 | autonomous |
| 4 | autonomous |
| 5 | nonautonomous |
Any scalar ODE (i.e., one whose dependent variable takes scalar values) can be written as a first order equation in which the new dependent variable is a vector whose dimension equals the order of the ODE. This is done by constructing a vector whose components are the dependent variable and all of its derivatives below the highest order; this vector is the new dependent variable. We illustrate this for the five examples above.
-
\(\dot{x} = v\),
\(\dot{v} = -\omega^{2}x, (x, v) \in \mathbb{R} \times \mathbb{R}\).
-
\(\dot{x} = v\),
\(\dot{v} = \alpha xv+x-x^3+\sin(\omega t), (x, v) \in \mathbb{R} \times \mathbb{R}\).
-
\(\dot{x} = v\),
\(\dot{v} = \mu(1-x^2)v-x, (x, v) \in \mathbb{R} \times \mathbb{R}\).
-
\(f' = v\),
\(f'' = u\),
\(f''' = -ff''-\beta(1-(f'')^2)\)
or
\(f' = v\),
\(v' = f'' = u\),
\(u' = f''' = -fu-\beta(1-u^2)\)
or
\(\begin{pmatrix} {f'}\\ {v'}\\ {u'} \end{pmatrix} = \begin{pmatrix} {v}\\ {u}\\ {-fu-\beta(1-u^2)} \end{pmatrix}\), \((f, v, u) \in \mathbb{R} \times \mathbb{R} \times \mathbb{R}\).
-
\(y' = w\),
\(y'' = v\),
\(y''' = u\),
\(y'''' = -x^{2}y''-x^5\)
or
\(y' = w\),
\(w' = y'' = v\),
\(v' = y''' = u\),
\(u' = y'''' = -x^{2}v-x^5\)
or
\(\begin{pmatrix} {y'}\\ {w'}\\ {v'}\\ {u'} \end{pmatrix} = \begin{pmatrix} {w}\\ {v}\\ {u}\\ {-x^{2}v-x^5} \end{pmatrix}\), \((y, w, v, u) \in \mathbb{R} \times \mathbb{R} \times \mathbb{R} \times \mathbb{R}\).
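The first-order form above is exactly what a numerical integrator consumes. As an illustrative sketch (using a hand-rolled classical Runge-Kutta step rather than a library routine), here is Example 1, \(\ddot{x}+\omega^{2}x = 0\), integrated as the system \(\dot{x} = v\), \(\dot{v} = -\omega^{2}x\); the numerical result can be compared against the exact solution \(x(t) = \cos(\omega t)\) for \(x(0) = 1\), \(v(0) = 0\):

```python
import math

def rk4_step(f, t, y, h):
    # One classical fourth-order Runge-Kutta step for the vector ODE y' = f(t, y),
    # where y is a list of components.
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,   [yi + h*ki   for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

omega = 2.0  # arbitrary illustrative frequency

def oscillator(t, y):
    # Example 1 in first-order form: x' = v, v' = -omega^2 x.
    x, v = y
    return [v, -omega**2 * x]

# Integrate from t = 0 with x(0) = 1, v(0) = 0.
t, y, h = 0.0, [1.0, 0.0], 0.001
while t < 1.0 - 1e-12:
    y = rk4_step(oscillator, t, y, h)
    t += h

print(y[0], math.cos(omega * t))  # numerical x(t) vs exact cos(omega t)
```

The same `rk4_step` works unchanged for the three- and four-dimensional systems of Examples 4 and 5, which is the practical payoff of the reduction.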
Therefore without loss of generality, the general form of the ODE that we will study can be expressed as a first order vector ODE:
\[\dot{x} = f(x), \quad x(t_{0}) \equiv x_{0}, \quad x \in \mathbb{R}^{n}, \quad \text{autonomous}, \label{1.1}\]
\[\dot{x} = f(x, t), \quad x(t_{0}) \equiv x_{0}, \quad x \in \mathbb{R}^{n}, \quad \text{nonautonomous}, \label{1.2}\]
where \(x(t_{0}) \equiv x_{0}\), is referred to as the initial condition.
This first order vector form of ODEs allows us to discuss many properties of ODEs in a way that is independent of the order of the ODE. It also lends itself to a natural geometrical description of the solutions of ODEs that we will see shortly.
A key characteristic of ODEs is whether they are linear or nonlinear.
DEFINITION 6: LINEAR AND NONLINEAR ODEs
An ODE is said to be linear if it is a linear function of the dependent variable and all of its derivatives. If it is not linear, it is said to be nonlinear.
Note that the independent variable does not play a role in whether or not the ODE is linear or nonlinear.
| Example | Linear or Nonlinear |
|---|---|
| 1 | linear |
| 2 | nonlinear |
| 3 | nonlinear |
| 4 | nonlinear |
| 5 | linear |
When written as a first order vector equation the (vector) space of dependent variables is referred to as the phase space of the ODE. The ODE then has the geometric interpretation as a vector field on phase space. The structure of phase space, e.g. its dimension and geometry, can have a significant influence on the nature of solutions of ODEs. We will encounter ODEs defined on different types of phase space, and of different dimensions. Some examples are given in the following lists.
1-dimension
- \(\mathbb{R}\) –the real line,
- \(\mathbb{I} \subset \mathbb{R}\) –an interval on the real line,
- \(S^{1}\) –the circle.
''Solving'' One Dimensional Autonomous ODEs. Formally (we will explain shortly what that means), an expression for the solution of a one dimensional autonomous ODE can be obtained by integration. We explain how this is done, and what it means. Let \(P\) denote one of the one dimensional phase spaces described above. We consider the autonomous vector field defined on \(P\) as follows:
\[\dot{x} = \frac{dx}{dt} = f(x) , x(t_{0}) = x_{0}, x \in P \label{1.3}\]
This is an example of a one dimensional separable ODE: separating variables gives \(\frac{dx}{f(x)} = dt\), and integrating both sides from the initial condition yields:
\[\int_{x(t_{0})}^{x(t)} \frac{dx'}{f(x')} = \int_{t_{0}}^{t} dt' = t-t_{0}. \label{1.4}\]
If we can compute the integral on the left hand side of (1.4), then it may be possible to solve for \(x(t)\). However, not every function \(\frac{1}{f(x)}\) can be integrated in closed form. This is what we mean by saying we can ''formally'' solve this example: we may not be able to represent the solution in a form that is useful.
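To make the formal procedure concrete, the following sketch evaluates the left hand side of (1.4) by numerical quadrature for the simple illustrative choice \(f(x) = x\) (with \(x_{0} = 1\), \(t_{0} = 0\)), and checks it against the elapsed time \(t - t_{0}\) predicted by the closed-form solution \(x(t) = x_{0}e^{t-t_{0}}\):

```python
import math

def lhs_integral(f, x0, x, n=10_000):
    # Midpoint-rule approximation of the left-hand side of (1.4):
    # the integral from x0 to x of dx'/f(x').
    h = (x - x0) / n
    return sum(h / f(x0 + (i + 0.5) * h) for i in range(n))

# Take f(x) = x with x(t0) = x0 = 1.  Then (1.4) reads ln x = t - t0,
# i.e. the solution is x(t) = e^{t - t0}.
x0, t0, t = 1.0, 0.0, 0.7
x_t = math.exp(t - t0)                    # candidate solution at time t
elapsed = lhs_integral(lambda x: x, x0, x_t)
print(elapsed, t - t0)                    # both close to 0.7
```

For an \(f\) whose reciprocal has no elementary antiderivative, this quadrature is essentially all that remains of the ''formal'' solution: a numerical relation between \(x\) and \(t\).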
The higher dimensional phase spaces that we will consider will be constructed as Cartesian products of these three basic one dimensional phase spaces.
2-dimensions
- \(\mathbb{R}^2 = \mathbb{R} \times \mathbb{R}\) –the plane,
- \(\mathbb{T}^2 = S^{1} \times S^{1}\) –the two torus,
- \(\mathbb{C} = \mathbb{I} \times S^{1}\) –the (finite) cylinder,
- \(\mathbb{C} = \mathbb{R} \times S^{1}\) –the (infinite) cylinder.
In many applications of ODEs the independent variable has the interpretation of time, which is why the variable \(t\) is often used to denote the independent variable. Dynamics is the study of how systems change in time. When written as a first order system, ODEs are often referred to as dynamical systems, and ODEs are said to generate vector fields on phase space. For this reason the phrases ODE and vector field tend to be used synonymously.
Existence of Solutions
Several natural questions arise when analyzing an ODE. ''Does the ODE have a solution?'' ''Are solutions unique?'' (And what does ''unique'' mean?) The standard way of treating this in an ODE course is to ''prove a big theorem'' about existence and uniqueness. Rather than do that (you can find the proof in hundreds of books, as well as on many sites on the internet), we will consider some examples that illustrate the main issues concerning what these questions mean, and afterwards we will describe sufficient conditions for an ODE to have a unique solution (and then consider what ''uniqueness'' means).
First, do ODEs have solutions? Not necessarily, as the following example shows.
Example \(\PageIndex{1}\): An example of an ODE that has no solutions
Consider the following ODE defined on \(\mathbb{R}\):
\[(\dot{x})^2+x^2+t^2 = -1, \quad x \in \mathbb{R}. \nonumber\]
This ODE has no solutions since the left hand side is nonnegative and the right hand side is strictly negative.
Then you can ask the question– ''if the ODE has solutions, are they unique?'' Again, the answer is ''not necessarily'', as the following example shows.
Example \(\PageIndex{2}\): An example illustrating the meaning of uniqueness
\[\dot{x} = ax , x \in \mathbb{R}, \nonumber\]
where a is an arbitrary constant. The solution is given by
\[x(t) = ce^{at}. \label{1.6}\]
So we see that there are infinitely many solutions, depending upon the choice of the constant \(c\). What, then, could uniqueness of solutions mean? If we evaluate the solution in Equation \ref{1.6} at \(t = 0\) we see that
\[x(0) = c \nonumber\]
Substituting this into the solution in Equation \ref{1.6}, the solution has the form:
\[x(t) = x(0)e^{at}. \label{1.8}\]
From the form of Equation \ref{1.8} we can see exactly what "uniqueness of solutions" means. For a given initial condition, there is exactly one solution of the ODE satisfying that initial condition.
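One way to gain confidence in a claimed solution is to substitute it back into the ODE numerically. The sketch below (with arbitrarily chosen illustrative values of \(a\) and \(x(0)\)) compares a centered finite-difference approximation of \(\dot{x}\) against \(ax(t)\) for the solution in Equation \ref{1.8}:

```python
import math

a, x_init = -0.5, 2.0   # arbitrary illustrative choices of a and x(0)

def x(t):
    # Candidate solution from (1.8): x(t) = x(0) * e^{a t}.
    return x_init * math.exp(a * t)

# Centered finite differences approximate dx/dt with O(h^2) error,
# so the residual against a*x(t) should be tiny at every sampled time.
h = 1e-6
errs = []
for t in [0.0, 0.5, 1.3]:
    deriv = (x(t + h) - x(t - h)) / (2 * h)
    errs.append(abs(deriv - a * x(t)))
print(max(errs))   # maximum residual over the sampled times
```

Of course this only spot-checks the solution at a few times; it is not a substitute for the direct algebraic verification, which here takes one line.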
Example \(\PageIndex{8}\): An example of an ODE with non-unique solutions
Consider the following ODE defined on \(\mathbb{R}\):
\[\dot{x} = 3x^{\frac{2}{3}}, x(0) = 0, x \in \mathbb{R}. \nonumber\]
It is easy to see that a solution satisfying \(x(0) = 0\) is \(x = 0\). However, one can verify directly by substituting into the equation that the following is also a solution satisfying \(x(0) = 0\):
\[x(t) = \left\{\begin{array}{ll}{0,} & {t \le a} \\ {(t-a)^3,} & {t>a} \end{array}\right. \nonumber\]
for any \(a > 0\). Hence, in this example, there are infinitely many solutions satisfying the same initial condition. This example illustrates precisely what we mean by uniqueness: given an initial condition, only one solution (hence ''uniqueness'') satisfies the initial condition at the chosen initial time.
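The same substitution check applies here. The sketch below spot-checks numerically (at sample times away from the corner at \(t = a\), and for the arbitrary illustrative choice \(a = 1\)) that both the zero solution and the piecewise solution satisfy \(\dot{x} = 3x^{2/3}\):

```python
# Numerical spot-check (not a proof) that two different functions both
# satisfy x' = 3 x^{2/3} with x(0) = 0.  'a' is the free parameter from
# the piecewise solution; a = 1 is an arbitrary illustrative choice.
a, h = 1.0, 1e-6

def x_zero(t):
    # The trivial solution x(t) = 0.
    return 0.0

def x_piecewise(t):
    # The piecewise solution: 0 up to t = a, then (t - a)^3.
    return 0.0 if t <= a else (t - a)**3

errs = []
for sol in (x_zero, x_piecewise):
    for t in [0.5, 2.0, 3.7]:
        # Centered finite-difference approximation of the derivative.
        deriv = (sol(t + h) - sol(t - h)) / (2 * h)
        errs.append(abs(deriv - 3 * sol(t)**(2/3)))
print(max(errs))   # both candidates leave a negligible residual
```

The algebra behind the check: for \(t > a\), \(\frac{d}{dt}(t-a)^3 = 3(t-a)^2 = 3\left((t-a)^3\right)^{2/3}\), so the piecewise function really does solve the equation.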
There is another question that comes up. If we have a unique solution does it exist for all time? Not necessarily, as the following example shows.
Example \(\PageIndex{9}\): An example of an ODE with unique solutions that exist only for a finite time
Consider the following ODE on \(\mathbb{R}\):
\[\dot{x} = x^2, x(0) = x_{0}, x \in \mathbb{R} \nonumber \]
We can easily integrate this equation (it is separable) to obtain the following solution satisfying the initial condition:
\[x(t) = \frac{x_{0}}{1-x_{0}t}. \nonumber\]
The solution becomes infinite, or ''blows up'', at \(t = \frac{1}{x_{0}}\) (for \(x_{0} > 0\)); this is what ''does not exist'' means. So the solution exists only for a finite time, and this ''time of existence'' depends on the initial condition \(x_{0}\).
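Blow-up is easy to observe numerically. The sketch below integrates \(\dot{x} = x^{2}\) with a crude forward-Euler scheme (an illustrative choice, not a recommended method) and watches the solution grow as \(t\) approaches the blow-up time \(1/x_{0}\):

```python
# Forward-Euler integration of x' = x^2 with x(0) = x0 = 1.
# The exact solution x(t) = x0/(1 - x0 t) blows up at t = 1/x0 = 1,
# so we stop just short of the blow-up time and watch x grow.
x0, h = 1.0, 1e-4
t, x = 0.0, x0
while t < 0.99:
    x += h * x * x   # Euler step: x_{n+1} = x_n + h * x_n^2
    t += h

exact = x0 / (1 - x0 * t)
print(t, x, exact)   # x has grown large, shadowing the exact solution
```

Note that the Euler iterate systematically lags the true solution near blow-up (each step replaces \(x_{n}/(1 - hx_{n})\) by the smaller \(x_{n}(1 + hx_{n})\)), which is one reason fixed-step methods are a poor tool for resolving a singularity.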
Uniqueness of Solutions
These three examples contain the essence of the ''existence issues'' for ODEs that will concern us. They are the ''standard examples'' that can be found in many textbooks. Now we will state the standard ''existence and uniqueness'' theorem for ODEs. The statement is an example of the power and flexibility of expressing a general ODE as a first order vector equation. The statement is valid for any (finite) dimension.
We consider the general vector field on \(\mathbb{R}^n\)
\[\dot{x} = f(x,t), x(t_{0}) = x_{0}, x \in \mathbb{R}^n. \label{1.13}\]
It is important to be aware that, for the general result we are going to state, it does not matter whether the ODE is autonomous or nonautonomous.
We define the domain of the vector field. Let \(\mathbb{U} \subset \mathbb{R}^n\) be an open set and let \(\mathbb{I} \subset \mathbb{R}\) be an interval. Then we express that the n-dimensional vector field is defined on this domain as follows:
\(f: \mathbb{U} \times \mathbb{I} \rightarrow \mathbb{R}^n\),
\((x, t) \mapsto f(x, t)\)
We need a definition to describe the ‘’regularity” of the vector field.
DEFINITION 7: \(C^r\) FUNCTION
We say that \(f(x, t)\) is \(C^r\) on \(\mathbb{U} \times \mathbb{I} \subset \mathbb{R}^n \times \mathbb{R}\) if it is \(r\) times differentiable and each derivative is a continuous function (on the same domain). If \(r = 0\), \(f(x, t)\) is just said to be continuous.
Now we can state sufficient conditions for (1.13) to have a unique solution. Suppose that \(f(x, t)\) is \(C^r\), \(r \ge 1\). Choose any point \((x_{0}, t_{0}) \in \mathbb{U} \times \mathbb{I}\). Then there exists a unique solution of (1.13) satisfying this initial condition. We denote this solution by \(x(t, t_{0}, x_{0})\), and reflect in the notation that it satisfies the initial condition by writing \(x(t_{0}, t_{0}, x_{0}) = x_{0}\). This unique solution exists for a time interval centered at the initial time \(t_{0}\), denoted by \((t_{0}-\epsilon, t_{0}+\epsilon)\), for some \(\epsilon > 0\). Moreover, this solution, \(x(t, t_{0}, x_{0})\), is a \(C^r\) function of \(t\), \(t_{0}\), \(x_{0}\). Note that, as Example 9 shows, \(\epsilon\) may depend on \(x_{0}\). This also explains how a solution ''fails to exist'': it becomes unbounded (''blows up'') in a finite time.
Finally, we remark that existence and uniqueness of solutions of ODEs is the mathematical manifestation of determinism. If the initial condition is specified (with 100% accuracy), then the past and the future are uniquely determined. The key phrase here is ''100% accuracy''. Numbers cannot be specified with 100% accuracy. There will always be some imprecision in the specification of the initial condition. Chaotic dynamical systems are deterministic dynamical systems having the property that imprecisions in the initial conditions may be magnified by the dynamical evolution, leading to seemingly random behavior (even though the system is completely deterministic).