# 2.8: Theory of Existence and Uniqueness


Recall the theorem which says that if a first order differential equation satisfies certain continuity conditions, then the initial value problem has a unique solution in some neighborhood of the initial condition. More precisely:

Theorem: A Result For Nonlinear First Order Differential Equations

Let

\[ y'=f(x,y), \;\;\; y(x_0)=y_0 \]

be a differential equation such that both the function

\[f \;\;\; \text{and its partial derivative} \;\;\; f_y \]

are continuous in some rectangle containing \((x_0,y_0)\).

Then there is a (possibly smaller) rectangle containing \((x_0,y_0)\) on which the initial value problem has a unique solution \(y = \phi(x)\).

Although a rigorous proof of this theorem is outside the scope of the class, we will show how to construct a solution to the initial value problem. First, by translating the origin to \((x_0, y_0)\), we can reduce the initial value problem to one with initial condition

\[y(0) = 0.\]

Next we can rephrase the question as follows: \(\phi(x)\) is a solution to the initial value problem if and only if

\[\phi'(x) = f(x,\phi(x)) \;\;\; \text{and} \;\;\; \phi(0) = 0.\]

Now integrate both sides from \(0\) to \(t\) and use \(\phi(0) = 0\) to get

\[ \phi (t) = \int _0^t f(s,\phi (s)) \, ds .\]

Notice that if such a function exists, then it automatically satisfies \(\phi(0) = 0\).

The equation above is called the *integral equation* associated with the differential equation.

It is easier to prove that the integral equation has a unique solution than it is to show directly that the original differential equation has one. The strategy for finding a solution is the following. First make a guess at a solution and call this first guess \(\phi_0(t)\). Then plug this guess into the right-hand side of the integral equation to get a new function. If the new function is the same as the guess, then we are done. Otherwise call the new function \(\phi_1(t)\). Next plug \(\phi_1(t)\) into the integral to get either the same function or a new function \(\phi_2(t)\). Continue this process to obtain a sequence of functions \(\phi_n(t)\), called *Picard iterates*. Finally, take the limit as \(n\) approaches infinity; when this limit exists, it is the solution to the integral equation. In symbols, define recursively

\[\phi_0(t) = 0\]

\[ \phi_{n+1} (t) = \int _0^t f(s,\phi_n (s)) \, ds .\]
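The recursion above is easy to experiment with on a computer. Here is a minimal numerical sketch in Python (the function and variable names are illustrative, not part of the text): each iterate \(\phi_n\) is stored as samples on a grid, and the integral \(\int_0^t f(s,\phi_n(s))\,ds\) is approximated with a cumulative trapezoid rule.

```python
import math

def picard_iterate(f, t_max, steps, n_iters):
    """Approximate Picard iteration for y' = f(t, y), y(0) = 0 on [0, t_max].

    Each iterate phi_n is stored as samples on a uniform grid; the integral
    phi_{n+1}(t) = integral from 0 to t of f(s, phi_n(s)) ds is computed
    with the cumulative trapezoid rule.
    """
    h = t_max / steps
    ts = [k * h for k in range(steps + 1)]
    phi = [0.0] * len(ts)                      # phi_0(t) = 0
    for _ in range(n_iters):
        vals = [f(t, p) for t, p in zip(ts, phi)]
        new = [0.0]                            # every iterate satisfies phi(0) = 0
        for k in range(1, len(ts)):
            # cumulative trapezoid rule: add the area of one slice
            new.append(new[-1] + h * (vals[k - 1] + vals[k]) / 2)
        phi = new
    return ts, phi

# Try it on y' = y + 2, y(0) = 0, whose exact solution is 2(e^t - 1):
ts, phi = picard_iterate(lambda t, y: y + 2, t_max=1.0, steps=1000, n_iters=20)
print(abs(phi[-1] - 2 * (math.exp(1.0) - 1)))  # small discretization error
```

After 20 iterations the fixed point of the discrete map has essentially been reached, so the remaining error is just the \(O(h^2)\) trapezoid discretization error.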

Example \(\PageIndex{1}\)

Consider the differential equation

\[y' = y + 2, \;\;\; y(0) = 0.\]

We write the corresponding integral equation

\[ y(t) = \int_0^t \left(y(s)+2 \right) \, ds .\]

We choose

\[ \phi_0(t) = 0\]

and calculate

\[ \phi_1(t) = \int_0^t \left(0+2 \right) \, ds = 2t\]

and

\[ \phi_2(t) = \int_0^t \left(2s+2 \right) \, ds = t^2 + 2t\]

and

\[ \phi_3(t) = \int_0^t \left(s^2+2s+2 \right) \, ds = \frac{t^3}{3}+t^2 + 2t\]

and

\[ \phi_4(t) = \int_0^t \left(\frac{s^3}{3}+s^2+2s+2 \right) \, ds = \frac{t^4}{3 \cdot 4}+ \frac{t^3}{3}+t^2 + 2t.\]

Dividing by 2 and adding 1 gives

\[\frac{\phi_4(t)}{2} + 1 = \frac{t^4}{4!}+\frac{t^3}{3!}+\frac{t^2}{2!}+t+1.\]

The pattern indicates that

\[\frac{\phi_n(t)}{2} + 1 = \sum_{k=0}^{n}\frac{t^k}{k!},\]

so letting \(n \rightarrow \infty\) gives

\[\frac{\phi(t)}{2} + 1 = e^t.\]

Solving for \(\phi(t)\), we get

\[\phi(t) = 2\left(e^t - 1\right).\]
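Since \(f(t,y) = y + 2\) is a polynomial in \(y\), each iterate is itself a polynomial and the integrals can be carried out exactly. The following sketch (the helper names are ours, not from the text) reproduces the hand computation above using exact fractions, storing each \(\phi_n\) as a list of coefficients:

```python
from fractions import Fraction

def integrate_poly(coeffs):
    """Integrate a polynomial from 0 to t: the coefficient c on s^k
    becomes c/(k+1) on t^(k+1), with zero constant term."""
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(coeffs)]

def picard_step(phi):
    """One step of phi_{n+1}(t) = integral from 0 to t of (phi_n(s) + 2) ds,
    specialized to f(t, y) = y + 2."""
    integrand = list(phi)
    integrand[0] += 2          # add the constant 2 to the polynomial
    return integrate_poly(integrand)

phi = [Fraction(0)]            # phi_0(t) = 0
for _ in range(4):
    phi = picard_step(phi)

# phi_4(t) = 2t + t^2 + t^3/3 + t^4/12, matching the hand computation
print(phi)  # [Fraction(0, 1), Fraction(2, 1), Fraction(1, 1), Fraction(1, 3), Fraction(1, 12)]
```

Dividing these coefficients by 2 gives \(1, \tfrac{1}{2}, \tfrac{1}{3!}, \tfrac{1}{4!}\) on \(t, t^2, t^3, t^4\), the partial sums of \(e^t\) noted above.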

This may look like a proof of the existence and uniqueness theorem, but a true proof requires verifying several details:

- Does \(\phi_n(t)\) exist for all \(n\)? Although we know that \(f(t,y)\) is continuous near the initial value, the integral could possibly produce a value that lies outside the rectangle of continuity. This is why we may have to pass to a smaller rectangle.
- Does the sequence \(\phi_n(t)\) converge? The limit may not exist.
- If the sequence \(\phi_n(t)\) does converge, is the limit continuous?
- Is \(\phi(t)\) the only solution to the integral equation?

Larry Green (Lake Tahoe Community College)

Integrated by Justin Marshall.