# 2.9: Theory of Linear vs. Nonlinear Differential Equations



In this section we compare the answers to the two main questions in differential equations for linear and nonlinear first order differential equations. Recall that for a first order linear differential equation

\[ y' + p(x)y = g(x) \]

we had the solution

\[\begin{align} y &= e^{-\int p(x)\,dx} \left( \int g(x) \, e^{\int p(x) \, dx} \, dx + C \right) \\ &= \frac{1}{m} \left( \int g(x)\, m \; dx + C \right), \end{align} \]

where \( m = e^{\int p(x)\,dx} \) is the integrating factor.

Recall that if a function is continuous on an interval, then its integral exists on that interval. If we are given an initial value

\[ y(x_0) = y_0 \]

then we can solve uniquely for \(C\). This shows that every first order linear differential equation with continuous coefficients has a solution, and since the derivation shows that every solution must have the form above, the solution is also unique. Notice that if the constant of integration used in computing the exponent \(\int p(x)\,dx\) is chosen to be nonzero, it cancels between the negative exponent outside the integral and the positive exponent inside, so we may always take that constant to be 0. This proves that the answers to both of the key questions are affirmative for first order linear differential equations.
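To see the formula in action, here is a quick numerical sanity check on a concrete example of our own choosing (not from the text): for \(y' + y = x\) with \(y(0) = 1\), the integrating factor is \(e^{x}\) and the formula gives \(y = x - 1 + 2e^{-x}\). A finite-difference estimate of \(y'\) confirms both the equation and the initial condition.

```python
import math

def y(x):
    # explicit solution of y' + y = x, y(0) = 1, obtained from the
    # integrating-factor formula (this example equation is our own choice)
    return x - 1.0 + 2.0 * math.exp(-x)

def residual(x, h=1e-6):
    # central-difference estimate of y'(x), then the ODE residual y' + y - x
    dydx = (y(x + h) - y(x - h)) / (2.0 * h)
    return dydx + y(x) - x

assert abs(y(0.0) - 1.0) < 1e-12                            # initial condition
assert all(abs(residual(x)) < 1e-6 for x in [-1.0, 0.5, 2.0, 5.0])
```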

Theorem: Existence and Uniqueness for First Order Linear Differential Equations

Let

\[ y' + p(x)y = g(x) \]

with

\[ y(x_0) = y_0 \]

be a first order linear differential equation such that \(p(x)\) and \(g(x)\) are both *continuous* for \(a < x < b\), an interval containing \(x_0\). Then there is a unique function \(y = f(x)\) on that interval satisfying both the differential equation and the initial condition.

Example \(\PageIndex{1}\)

Determine where the differential equation

\[ (\cos\, x)\, y' + (\sin\, x)\, y = x^2\]

with

\[ y(0) = 4 \]

has a unique solution.

**Solution**

Dividing by \(\cos x\) to get it into standard form gives

\[ y' + (\tan\, x)\, y = x^2 \, \sec\, x. \]

Since \(\tan x\) and \(x^2 \sec x\) are both continuous for

\[ -\dfrac{\pi}{2} < x < \dfrac{\pi}{2} \]

and this interval contains \(x_0 = 0\), the differential equation is guaranteed to have a unique solution on this interval.
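There is no elementary antiderivative available here, but the conclusion can be checked numerically. The sketch below (step counts and tolerances are our own choices) marches the equation from \(x = 0\) with a classical Runge–Kutta method and compares it against the quadrature form of the integrating-factor solution, \(y(x) = \cos x \left(4 + \int_0^x t^2 \sec^2 t \, dt\right)\); the solution stays finite well inside the interval.

```python
import math

def f(x, y):
    # standard form of (cos x) y' + (sin x) y = x^2 on (-pi/2, pi/2)
    return x * x / math.cos(x) - math.tan(x) * y

def rk4(x0, y0, x1, n=2000):
    # classical 4th-order Runge-Kutta march from x0 to x1
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

def y_quadrature(x1, n=2000):
    # independent check: y(x) = cos(x) * (4 + integral of t^2 sec^2 t),
    # which follows from the integrating factor sec(x); trapezoid rule
    h = x1 / n
    g = lambda t: t * t / math.cos(t) ** 2
    s = (g(0.0) + g(x1)) / 2 + sum(g(i * h) for i in range(1, n))
    return math.cos(x1) * (4.0 + h * s)

y_num = rk4(0.0, 4.0, 1.4)
assert math.isfinite(y_num)                     # solution exists at x = 1.4
assert abs(y_num - y_quadrature(1.4)) < 1e-2    # the two methods agree
```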

Example \(\PageIndex{2}\)

Now consider the differential equation

\[ \dfrac{dy}{dx} = \dfrac{x}{y} \;\;\; \text{with} \;\;\; y(5) = -3.\]

Notice that the theorem does not apply, since the differential equation is nonlinear. We can separate and solve.

\[ y\,dy = x\,dx \]

\[\implies y^2 = x^2 + C. \]

Now plugging in the initial value, we get

\[\begin{align} 9 &= 25 + C \\ \implies C &= -16 \end{align}\]

\[\implies y^2 = x^2 - 16. \]

Taking the square root of both sides would give a plus or minus solution; however, the initial condition states that the value must be negative, so the solution is

\[ y = -(x^2 - 16)^{1/2}. \]

This solution is only valid for

\[ x > 4, \]

since the interval of validity must contain the initial point \(x = 5\) and cannot cross \(x = 4\), where \(y = 0\).

Notice that the right-hand side \(x/y\) of the original equation is not continuous at \(y = 0\), but the interval where the solution is valid could not have been predicted without solving the differential equation.
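The branch through \((5, -3)\) can be verified directly; the small numerical check below (not part of the original text) confirms the initial condition and the equation \(y' = x/y\) at a few sample points.

```python
import math

def y(x):
    # the solution branch through (5, -3): y = -sqrt(x^2 - 16), x > 4
    return -math.sqrt(x * x - 16.0)

def residual(x, h=1e-6):
    # central-difference estimate of y'(x) minus the right-hand side x / y
    dydx = (y(x + h) - y(x - h)) / (2.0 * h)
    return dydx - x / y(x)

assert abs(y(5.0) + 3.0) < 1e-12                   # initial condition y(5) = -3
assert all(abs(residual(x)) < 1e-6 for x in [4.5, 5.0, 7.0, 10.0])
```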

Example \(\PageIndex{3}\): Nonlinear First Order Differential Equation

Consider the nonlinear differential equation

\[ y' = y^{1/5} \;\;\; \text{with} \;\;\; y(0) = 0. \]

**Solution**

Separating and integrating we get

\[\begin{align} y^{ -\frac{1}{5}} \; dy &= dx \\ \implies \dfrac{5}{4} y^{\frac{4}{5}} &= x + C_1 \\ \implies y^{\frac{4}{5}} &= \dfrac{4}{5} x + C. \end{align} \]

Plugging in \(y(0) = 0\) gives \(C = 0\). Solving for \(y\) gives

\[ y = \left(\dfrac{4}{5} x\right)^{\frac{5}{4}}. \]

We can also see that

\[ y = -\left(\dfrac{4}{5}\, x\right)^{\frac{5}{4}} \]

and

\[ y = 0 \]

are both also solutions to the differential equation that satisfy the initial value problem. Hence uniqueness fails miserably here.
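A finite-difference check confirms that all three formulas satisfy \(y' = y^{1/5}\) with \(y(0) = 0\); the sample points and tolerances below are our own choices.

```python
import math

def y_plus(x):
    # y = ((4/5) x)^(5/4), one solution of y' = y^(1/5), y(0) = 0, for x >= 0
    return (0.8 * x) ** 1.25

def y_minus(x):
    # the negative companion solution
    return -(0.8 * x) ** 1.25

def root5(y):
    # real fifth root, keeping the sign of y
    return math.copysign(abs(y) ** 0.2, y)

def residual(y_fn, x, h=1e-6):
    # central-difference estimate of y'(x) minus y^(1/5)
    dydx = (y_fn(x + h) - y_fn(x - h)) / (2.0 * h)
    return dydx - root5(y_fn(x))

for y_fn in (y_plus, y_minus, lambda x: 0.0):
    assert abs(y_fn(0.0)) < 1e-12                     # same initial value
    assert all(abs(residual(y_fn, x)) < 1e-5 for x in [0.5, 1.0, 2.0])
```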

It may seem hopeless to answer the two main questions for nonlinear differential equations. As a consolation, the following theorem holds; we defer its proof until another time.

Theorem: A Result for Nonlinear First Order Differential Equations

Let

\[ y' = f(x,y) \;\;\; \text{and} \;\;\; y(x_0) = y_0 \]

be a differential equation such that \(f\) and the partial derivative

\[ f_y = \dfrac{\partial f}{\partial y} \]

are *continuous* in some rectangle containing \((x_0,y_0)\). Then there is a (possibly smaller) rectangle containing \((x_0,y_0)\) on which there is a unique solution \(y = \phi(x)\) satisfying the initial value problem.

Notice that in the prior example

\[ f_y(x,y) = \frac{1}{5}\, y^{-4/5} \]

is undefined at \((0,0)\), so the theorem does not apply there, which is consistent with the failure of uniqueness. If instead we wanted to solve the differential equation

\[ y' = y^{\frac{1}{5}} \;\;\; \text{with} \;\;\; y(0) = 5, \]

then there would be a unique solution inside some rectangle containing \((0,5)\) but not containing the origin. Notice that the theorem does not tell us how large this rectangle is.
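As a small numerical illustration (our own, not from the text): \(f_y = \frac{1}{5} y^{-4/5}\) is finite near \(y_0 = 5\), so the hypothesis holds in a rectangle around \((0, 5)\), but it grows without bound as \(y \to 0\), which is why no rectangle containing the origin works.

```python
def f_y(y):
    # partial derivative of f(x, y) = y^(1/5):  f_y = (1/5) y^(-4/5), y > 0
    return 0.2 * y ** -0.8

assert f_y(5.0) < 0.1        # small and finite at the initial point y0 = 5
assert f_y(1e-10) > 1e6      # blows up as y -> 0: the hypothesis fails
                             # in any rectangle containing (0, 0)
```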

Larry Green (Lake Tahoe Community College)

Integrated by Justin Marshall.