# 2.1: Linear First Order Equations


- William F. Trench
- Andrew G. Cowles Distinguished Professor Emeritus (Mathematics) at Trinity University

A first order differential equation is said to be *linear* if it can be written as

\[\label{eq:2.1.1} y' + p(x)y = f(x).\]

A first order differential equation that cannot be written like this is *nonlinear*. We say that Equation \ref{eq:2.1.1} is *homogeneous* if \(f \equiv 0\); otherwise it is *nonhomogeneous*. Since \(y \equiv 0\) is obviously a solution of the homogeneous equation

\[y' + p(x)y = 0, \nonumber\]

we call it the *trivial solution*. Any other solution is *nontrivial*.

Example \(\PageIndex{1}\)

The first order equations

\[\begin{aligned} x^2y'+3y&=x^2 \\[4pt] xy'-8x^2y&=\sin x \\[4pt] xy'+(\ln x)y&=0 \\[4pt] y'&=x^2y-2 \end{aligned}\]

are not in the form in Equation \ref{eq:2.1.1}, but they are linear, since they can be rewritten as

\[\begin{aligned} y'+\frac{3}{x^2}y&=1 \\[4pt] y'-8xy&=\frac{\sin x}{x} \\[4pt] y'+\frac{\ln x}{x}y&=0 \\[4pt] y'-x^2y&=-2 \end{aligned}\]

Example \(\PageIndex{2}\)

Here are some nonlinear first order equations:

\[\begin{aligned} xy'+3y^2&=2x &\text{(because } y \text{ is squared)} \\[4pt] yy'&=3 &\text{(because of the product } yy') \\[4pt] y'+xe^y&=12 &\text{(because of } e^y) \end{aligned}\]

## General Solution of a Linear First Order Equation

To motivate a definition that we’ll need, consider the simple linear first order equation

\[\label{eq:2.1.2} y'={1\over x^2}.\]

From calculus we know that \(y\) satisfies this equation if and only if

\[\label{eq:2.1.3} y=-{1\over x}+c,\]

where \(c\) is an arbitrary constant. We call \(c\) a *parameter* and say that Equation \ref{eq:2.1.3} defines a *one–parameter family* of functions. For each real number \(c\), the function defined by Equation \ref{eq:2.1.3} is a solution of Equation \ref{eq:2.1.2} on \((-\infty,0)\) and \((0,\infty)\); moreover, every solution of Equation \ref{eq:2.1.2} on either of these intervals is of the form Equation \ref{eq:2.1.3} for some choice of \(c\). We say that Equation \ref{eq:2.1.3} is *the general solution* of Equation \ref{eq:2.1.2}.
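As a quick sanity check (not part of the text), the claim that every member of this one-parameter family satisfies Equation \ref{eq:2.1.2} can be verified symbolically; this sketch assumes the sympy library:

```python
import sympy as sp

x, c = sp.symbols('x c')
y = -1/x + c                                    # the family (2.1.3)
# y' = 1/x^2 for every value of the parameter c
assert sp.simplify(sp.diff(y, x) - 1/x**2) == 0
```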

We’ll see that a similar situation occurs in connection with any first order linear equation

\[\label{eq:2.1.4} y'+p(x)y=f(x);\]

that is, if \(p\) and \(f\) are continuous on some open interval \((a,b)\) then there’s a unique formula \(y=y(x,c)\) analogous to Equation \ref{eq:2.1.3} that involves \(x\) and a parameter \(c\) and has these properties:

- For each fixed value of \(c\), the resulting function of \(x\) is a solution of Equation \ref{eq:2.1.4} on \((a,b)\).
- If \(y\) is a solution of Equation \ref{eq:2.1.4} on \((a,b)\), then \(y\) can be obtained from the formula by choosing \(c\) appropriately.

We’ll call \(y=y(x,c)\) the *general solution* of Equation \ref{eq:2.1.4}.

When this has been established, it will follow that an equation of the form

\[\label{eq:2.1.5} P_0(x)y'+P_1(x)y=F(x)\]

has a general solution on any open interval \((a,b)\) on which \(P_0\), \(P_1\), and \(F\) are all continuous and \(P_0\) has no zeros, since in this case we can rewrite Equation \ref{eq:2.1.5} in the form Equation \ref{eq:2.1.4} with \(p=P_1/P_0\) and \(f=F/P_0\), which are both continuous on \((a,b)\).

To avoid awkward wording in examples and exercises, we will not specify the interval \((a,b)\) when we ask for the general solution of a specific linear first order equation. Let’s agree that this always means that we want the general solution on every open interval on which \(p\) and \(f\) are continuous if the equation is of the form Equation \ref{eq:2.1.4}, or on which \(P_0\), \(P_1\), and \(F\) are continuous and \(P_0\) has no zeros, if the equation is of the form Equation \ref{eq:2.1.5}. We leave it to you to identify these intervals in specific examples and exercises.

For completeness, we point out that if \(P_0\), \(P_1\), and \(F\) are all continuous on an open interval \((a,b)\), but \(P_0\) *does* have a zero in \((a,b)\), then Equation \ref{eq:2.1.5} may fail to have a general solution on \((a,b)\) in the sense just defined. Since this isn’t a major point that needs to be developed in depth, we will not discuss it further; however, see *Exercise 2.1.44* for an example.

## Homogeneous Linear First Order Equations

We begin with the problem of finding the general solution of a homogeneous linear first order equation. The next example recalls a familiar result from calculus.

Example \(\PageIndex{3}\)

Let \(a\) be a constant.

- Find the general solution of \[y'-ay=0.\label{eq:2.1.6}\]
- Solve the initial value problem \[y'-ay=0,\quad y(x_0)=y_0.\nonumber \]

**Solution a**

You already know from calculus that if \(c\) is any constant, then \(y=ce^{ax}\) satisfies Equation \ref{eq:2.1.6}. However, let’s pretend you’ve forgotten this, and use this problem to illustrate a general method for solving a homogeneous linear first order equation.

We know that Equation \ref{eq:2.1.6} has the trivial solution \(y\equiv0\). Now suppose \(y\) is a nontrivial solution of Equation \ref{eq:2.1.6}. Then, since a differentiable function must be continuous, there must be some open interval \(I\) on which \(y\) has no zeros. We rewrite Equation \ref{eq:2.1.6} as

\[{y'\over y}=a \nonumber\]

for \(x\) in \(I\). Integrating this shows that

\[\ln|y|=ax+k,\quad \text{so} \quad |y|=e^ke^{ax}, \nonumber\]

where \(k\) is an arbitrary constant. Since \(e^{ax}\) can never equal zero, \(y\) has no zeros, so \(y\) is either always positive or always negative. Therefore we can rewrite \(y\) as

\[\label{eq:2.1.7} y=ce^{ax}\]

where

\[c=\left\{\begin{array}{cl}\phantom{-}e^k&\text{if } y>0, \\[4pt] -e^k&\text{if } y<0.\end{array}\right. \nonumber\]

This shows that every nontrivial solution of Equation \ref{eq:2.1.6} is of the form \(y=ce^{ax}\) for some nonzero constant \(c\). Since setting \(c=0\) yields the trivial solution, *all* solutions of Equation \ref{eq:2.1.6} are of the form Equation \ref{eq:2.1.7}. Conversely, Equation \ref{eq:2.1.7} is a solution of Equation \ref{eq:2.1.6} for every choice of \(c\), since differentiating Equation \ref{eq:2.1.7} yields \(y'=ace^{ax}=ay\).

**Solution b**

Imposing the initial condition \(y(x_0)=y_0\) yields \(y_0=ce^{ax_0}\), so \(c=y_0e^{-ax_0}\) and

\[y=y_0e^{-ax_0}e^{ax}=y_0e^{a(x-x_0)}. \nonumber\]

Figure \(\PageIndex{1}\) shows the graphs of this function with \(x_{0}=0\), \(y_{0}=1\), and various values of \(a\).
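If you want to confirm the formula \(y=y_0e^{a(x-x_0)}\) mechanically, a short symbolic check (a sketch assuming sympy, not something the text requires) verifies both the equation and the initial condition:

```python
import sympy as sp

x, a, x0, y0 = sp.symbols('x a x0 y0')
y = y0 * sp.exp(a*(x - x0))      # proposed solution of y' - a y = 0, y(x0) = y0
assert sp.simplify(sp.diff(y, x) - a*y) == 0   # satisfies the differential equation
assert y.subs(x, x0) == y0                     # satisfies the initial condition
```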

Example \(\PageIndex{4}\)

a. Find the general solution of

\[xy'+y=0.\label{eq:2.1.8}\]

b. Solve the initial value problem

\[xy'+y=0,\quad y(1)=3.\label{eq:2.1.9}\]

**Solution a**

We rewrite Equation \ref{eq:2.1.8} as

\[\label{eq:2.1.10} y'+{1\over x}y=0,\]

where \(x\) is restricted to either \((-\infty,0)\) or \((0,\infty)\). If \(y\) is a nontrivial solution of Equation \ref{eq:2.1.10}, there must be some open interval \(I\) on which \(y\) has no zeros. We can rewrite Equation \ref{eq:2.1.10} as

\[{y'\over y}=-{1\over x} \nonumber\]

for \(x\) in \(I\). Integrating shows that

\[\ln|y|=-\ln|x|+k,\quad \text{so} \quad|y|={e^k\over|x|}. \nonumber\]

Since a function that satisfies the last equation can’t change sign on either \((-\infty,0)\) or \((0,\infty)\), we can rewrite this result more simply as

\[\label{eq:2.1.11} y={c\over x}\]

where

\[c=\left\{\begin{array}{cl}\phantom{-}e^k&\text{if } y>0, \\[4pt] -e^k&\text{if } y<0.\end{array}\right. \nonumber\]

We have now shown that every solution of Equation \ref{eq:2.1.10} is given by Equation \ref{eq:2.1.11} for some choice of \(c\). (Even though we assumed that \(y\) was nontrivial to derive Equation \ref{eq:2.1.11}, we can get the trivial solution by setting \(c=0\) in Equation \ref{eq:2.1.11}.) Conversely, any function of the form Equation \ref{eq:2.1.11} is a solution of Equation \ref{eq:2.1.10}, since differentiating Equation \ref{eq:2.1.11} yields

\[y'=-{c\over x^2}, \nonumber\]

and substituting this and Equation \ref{eq:2.1.11} into Equation \ref{eq:2.1.10} yields

\[\begin{aligned} y'+{1\over x}y&=-{c\over x^2}+{1\over x}{c\over x}\\[4pt] &=-{c\over x^2}+{c\over x^2}=0.\end{aligned}\]

Figure \(\PageIndex{2}\) shows the graphs of some solutions corresponding to various values of \(c\).

**Solution b**

Imposing the initial condition \(y(1)=3\) in Equation \ref{eq:2.1.11} yields \(c=3\). Therefore the solution of Equation \ref{eq:2.1.9} is

\[y={3\over x}. \nonumber\]

The interval of validity of this solution is \((0,\infty)\).
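The solution \(y=3/x\) of Equation \ref{eq:2.1.9} can likewise be checked symbolically (a sketch assuming sympy):

```python
import sympy as sp

x = sp.symbols('x', positive=True)   # interval of validity is (0, infinity)
y = 3/x
assert sp.simplify(x*sp.diff(y, x) + y) == 0   # x y' + y = 0
assert y.subs(x, 1) == 3                       # y(1) = 3
```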

The results in Examples \(\PageIndex{3a}\) and \(\PageIndex{4a}\) are special cases of the next theorem.

Theorem \(\PageIndex{1}\)

If \(p\) is continuous on \((a,b),\) then the general solution of the homogeneous equation

\[\label{eq:2.1.12} y'+p(x)y=0\]

on \((a,b)\) is

\[y=ce^{-P(x)}, \nonumber\]

where

\[\label{eq:2.1.13} P(x)=\int p(x)\,dx\]

is any antiderivative of \(p\) on \((a,b)\); that is,

\[\label{eq:2.1.15} P'(x)=p(x), \quad a<x<b\]

Proof

If \(y=ce^{-P(x)}\), differentiating \(y\) and using Equation \ref{eq:2.1.15} shows that

\[y'=-P'(x)ce^{-P(x)}=-p(x)ce^{-P(x)}=-p(x)y, \nonumber\]

so \(y'+p(x)y=0\); that is, \(y\) is a solution of Equation \ref{eq:2.1.12}, for any choice of \(c\).

Now we’ll show that any solution of Equation \ref{eq:2.1.12} can be written as \(y=ce^{-P(x)}\) for some constant \(c\). The trivial solution can be written this way, with \(c=0\). Now suppose \(y\) is a nontrivial solution. Then there’s an open subinterval \(I\) of \((a,b)\) on which \(y\) has no zeros. We can rewrite Equation \ref{eq:2.1.12} as

\[\label{eq:2.1.16} \frac{y'}{y}=-p(x)\]

for \(x\) in \(I\). Integrating Equation \ref{eq:2.1.16} and recalling Equation \ref{eq:2.1.13} yields

\[\ln|y|=-P(x) + k, \nonumber\]

where \(k\) is a constant. This implies that

\[|y|=e^ke^{-P(x)}. \nonumber\]

Since \(P\) is defined for all \(x\) in \((a,b)\) and an exponential can never equal zero, we can take \(I=(a,b)\), so \(y\) has no zeros on \((a,b)\), and we can rewrite the last equation as \(y=ce^{-P(x)}\), where

\[c=\left\{\begin{array}{cl}\phantom{-}e^k&\text{if } y>0\text{ on } (a,b),\\[4pt] -e^k&\text{if } y<0\text{ on }(a,b).\end{array}\right. \nonumber\]

REMARK: Rewriting a first order differential equation so that one side depends only on \(y\) and \(y'\) and the other depends only on \(x\) is called *separation of variables*. We did this in Examples \(\PageIndex{3}\) and \(\PageIndex{4}\), and in rewriting Equation \ref{eq:2.1.12} and Equation \ref{eq:2.1.16}. We will apply this method to nonlinear equations in Section 2.2.
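The theorem’s recipe \(y=ce^{-P(x)}\) is easy to exercise for a concrete coefficient. The sketch below (assuming sympy; the choice \(p(x)=2x\) is ours, not the text’s) builds an antiderivative \(P\), forms \(y\), and checks that \(y'+p(x)y=0\):

```python
import sympy as sp

x, c = sp.symbols('x c')
p = 2*x                       # sample coefficient p(x) (an illustrative choice)
P = sp.integrate(p, x)        # an antiderivative of p; here P(x) = x^2
y = c * sp.exp(-P)            # general solution y = c e^{-P(x)}
assert sp.simplify(sp.diff(y, x) + p*y) == 0
```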

## Linear Nonhomogeneous First Order Equations

We’ll now solve the nonhomogeneous equation

\[\label{eq:2.1.17} y'+p(x)y=f(x).\]

When considering this equation we call

\[y'+p(x)y=0\nonumber \]

the *complementary equation*.

We’ll find solutions of Equation \ref{eq:2.1.17} in the form \(y=uy_1\), where \(y_1\) is a nontrivial solution of the complementary equation and \(u\) is to be determined. This method of using a solution of the complementary equation to obtain solutions of a nonhomogeneous equation is a special case of a method called *variation of parameters*, which you’ll encounter several times in this book. (Obviously, \(u\) can’t be constant, since if it were, the left side of Equation \ref{eq:2.1.17} would be zero. Recognizing this, the early users of this method viewed \(u\) as a “parameter” that varies; hence, the name “variation of parameters.”)

If

\[y=uy_1, \quad \text{then}\quad y'=u'y_1+uy_1'.\nonumber \]

Substituting these expressions for \(y\) and \(y'\) into Equation \ref{eq:2.1.17} yields

\[u'y_1+u(y_1'+p(x)y_1)=f(x),\nonumber \]

which reduces to

\[\label{eq:2.1.18} u'y_1=f(x),\]

since \(y_1\) is a solution of the complementary equation; that is,

\[y_1'+p(x)y_1=0.\nonumber \]

In the proof of Theorem \(\PageIndex{1}\) we saw that \(y_1\) has no zeros on an interval where \(p\) is continuous. Therefore we can divide Equation \ref{eq:2.1.18} through by \(y_1\) to obtain

\[u'=f(x)/y_1(x).\nonumber \]

We can integrate this (introducing a constant of integration), and multiply the result by \(y_1\) to get the general solution of Equation \ref{eq:2.1.17}. Before turning to the formal proof of this claim, let’s consider some examples.

Example \(\PageIndex{5}\)

Find the general solution of

\[\label{eq:2.1.19} y'+2y=x^3e^{-2x}.\]

Applying part (a) of Example \(\PageIndex{3}\) with \(a=-2\), we see that \(y_1=e^{-2x}\) is a solution of the complementary equation \(y'+2y=0\). Therefore we seek solutions of Equation \ref{eq:2.1.19} in the form \(y=ue^{-2x}\), so that

\[\label{eq:2.1.20} y'=u'e^{-2x}-2ue^{-2x}\quad \text{and} \quad y'+2y=u'e^{-2x}-2ue^{-2x}+2ue^{-2x}=u'e^{-2x}.\]

Therefore \(y\) is a solution of Equation \ref{eq:2.1.19} if and only if

\[u'e^{-2x}=x^3e^{-2x}\quad \text{or, equivalently},\quad u'=x^3.\nonumber \]

Therefore

\[u={x^4\over4}+c,\nonumber \]

and

\[y=ue^{-2x}=e^{-2x}\left({x^4\over4}+c\right)\nonumber \]

is the general solution of Equation \ref{eq:2.1.19}.

Figure \(\PageIndex{3}\) shows a direction field and some integral curves for Equation \ref{eq:2.1.19}.
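The general solution found above can be confirmed by substituting it back into Equation \ref{eq:2.1.19}; a symbolic sketch assuming sympy:

```python
import sympy as sp

x, c = sp.symbols('x c')
y = sp.exp(-2*x) * (x**4/4 + c)     # proposed general solution of (2.1.19)
residual = sp.diff(y, x) + 2*y - x**3*sp.exp(-2*x)
assert sp.simplify(residual) == 0
```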

Example \(\PageIndex{6}\)

a. Find the general solution of

\[\label{eq:2.1.29} y'+(\cot x)y=x\csc x.\]

b. Solve the initial value problem

\[\label{eq:2.1.30} y'+(\cot x)y=x\csc x,\quad y(\pi/2)=1.\]

**Solution a**

Here \(p(x)=\cot x\) and \(f(x)= x\csc x\) are both continuous except at the points \(x=r\pi\), where \(r\) is an integer. Therefore we seek solutions of Equation \ref{eq:2.1.29} on the intervals \(\left(r\pi, (r+1)\pi \right)\). We need a nontrivial solution \(y_1\) of the complementary equation; thus, \(y_1\) must satisfy \(y_1'+(\cot x)y_1=0\), which we rewrite as

\[\label{eq:2.1.22} {y_1'\over y_1}=-\cot x=-{\cos x\over\sin x}.\]

Integrating this yields

\[\ln|y_1|=-\ln|\sin x|,\nonumber \]

where we take the constant of integration to be zero since we need only *one* function that satisfies Equation \ref{eq:2.1.22}. Clearly \(y_1=1/\sin x\) is a suitable choice. Therefore we seek solutions of Equation \ref{eq:2.1.29} in the form

\[y={u\over\sin x},\nonumber \]

so that

\[\label{eq:2.1.23} y'={u'\over\sin x}-{u\cos x\over\sin^2x}\]

and

\[\label{eq:2.1.24} \begin{array}{rcl} y'+(\cot x)y&=& {u'\over\sin x}-{u\cos x\over\sin^2x}+{u\cot x\over\sin x}\\[4pt] &=&{u'\over\sin x}-{u\cos x\over\sin^2x}+{u\cos x\over\sin^2 x}\\[4pt] &=&{u'\over\sin x}. \end{array}\]

Therefore \(y\) is a solution of Equation \ref{eq:2.1.29} if and only if

\[u'/\sin x=x\csc x=x/\sin x\quad \text{or, equivalently,}\quad u'=x.\nonumber \]

Integrating this yields \(u=x^2/2+c\). Therefore

\[\label{eq:2.1.25} y={u\over\sin x}={x^2\over 2\sin x}+{c\over\sin x}\]

is the general solution of Equation \ref{eq:2.1.29} on every interval \(\left(r\pi,(r+1)\pi\right)\), where \(r\) is an integer.

**Solution b**

Imposing the initial condition \(y(\pi/2)=1\) in Equation \ref{eq:2.1.25} yields

\[1={\pi^2\over 8}+c \quad\text{or}\quad c=1-{\pi^2\over 8}.\nonumber \]

Thus,

\[y={x^2\over 2\sin x}+{(1-\pi^2/8)\over\sin x}\nonumber \]

is a solution of Equation \ref{eq:2.1.29}. The interval of validity of this solution is \((0,\pi)\); Figure \(\PageIndex{4}\) shows its graph.
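Substituting the solution back into the equation is again a good habit; this symbolic check (a sketch assuming sympy) confirms both the differential equation and the initial condition \(y(\pi/2)=1\):

```python
import sympy as sp

x = sp.symbols('x')
y = x**2/(2*sp.sin(x)) + (1 - sp.pi**2/8)/sp.sin(x)
residual = sp.diff(y, x) + sp.cot(x)*y - x*sp.csc(x)
assert sp.simplify(residual) == 0                 # y' + (cot x) y = x csc x
assert sp.simplify(y.subs(x, sp.pi/2) - 1) == 0   # y(pi/2) = 1
```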

REMARK: It wasn’t necessary to do the computations \ref{eq:2.1.23} and \ref{eq:2.1.24} in Example \(\PageIndex{6}\), since we showed in the discussion preceding Example \(\PageIndex{5}\) that if \(y = uy_{1}\) where \(y'_{1}+ p(x)y_{1}=0\), then \(y'+ p(x)y = u'y_{1}\). We did these computations so you would see this happen in this specific example. We recommend that you include these “unnecessary” computations in doing exercises, until you’re confident that you really understand the method. After that, omit them.

We summarize the method of variation of parameters for solving

\[\label{eq:2.1.26} y'+p(x)y=f(x)\]

as follows:

a. Find a function \(y_1\) such that

\[{y_1'\over y_1}=-p(x).\nonumber \]

For convenience, take the constant of integration to be zero.

b. Write

\[\label{eq:2.1.27} y=uy_1\]

to remind yourself of what you’re doing.

c. Write \(u'y_1=f\) and solve for \(u'\); thus, \(u'=f/y_1\).

d. Integrate \(u'\) to obtain \(u\), with an arbitrary constant of integration.

e. Substitute \(u\) into Equation \ref{eq:2.1.27} to obtain \(y\).

To solve an equation written as

\[P_0(x)y'+P_1(x)y=F(x),\nonumber \]

we recommend that you divide through by \(P_0(x)\) to obtain an equation of the form Equation \ref{eq:2.1.26} and then follow this procedure.
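The five steps translate directly into a small symbolic routine. The sketch below (assuming sympy; the name `solve_linear_first_order` is our illustrative choice) carries out steps (a)-(e) and reproduces the answer to Example \(\PageIndex{5}\):

```python
import sympy as sp

x, c = sp.symbols('x c')

def solve_linear_first_order(p, f):
    """Variation of parameters for y' + p(x) y = f(x), following steps (a)-(e)."""
    P = sp.integrate(p, x)           # (a) y1'/y1 = -p, so take y1 = e^{-P},
    y1 = sp.exp(-P)                  #     with the constant of integration zero
    uprime = sp.simplify(f / y1)     # (c) u' = f / y1
    u = sp.integrate(uprime, x) + c  # (d) integrate, keeping an arbitrary constant
    return sp.expand(u * y1)         # (b), (e) y = u y1

# reproduces Example 5: y' + 2y = x^3 e^{-2x}
y = solve_linear_first_order(2, x**3 * sp.exp(-2*x))
assert sp.simplify(sp.diff(y, x) + 2*y - x**3*sp.exp(-2*x)) == 0
```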

## Solutions in Integral Form

Sometimes the integrals that arise in solving a linear first order equation can’t be evaluated in terms of elementary functions. In this case the solution must be left in terms of an integral.

Example \(\PageIndex{7}\)

a. Find the general solution of

\[y'-2xy=1.\nonumber \]

b. Solve the initial value problem

\[\label{eq:2.1.28} y'-2xy=1,\quad y(0)=y_0.\]

**Solution a**

To apply variation of parameters, we need a nontrivial solution \(y_1\) of the complementary equation; thus, \(y_1'-2xy_1=0\), which we rewrite as

\[{y_1'\over y_1}=2x.\nonumber \]

Integrating this and taking the constant of integration to be zero yields

\[\ln|y_1|=x^2,\quad \text{so} \quad|y_1|=e^{x^2}.\nonumber \]

We choose \(y_1=e^{x^2}\) and seek solutions of Equation \ref{eq:2.1.28} in the form \(y=ue^{x^2}\), where

\[u'e^{x^2}=1,\quad \text{so} \quad u'=e^{-x^2}.\nonumber \]

Therefore

\[u=c+\int e^{-x^2}dx,\nonumber \]

but we can’t simplify the integral on the right because there’s no elementary function with derivative equal to \(e^{-x^2}\). Therefore the best available form for the general solution of Equation \ref{eq:2.1.28} is

\[\label{eq:2.1.49}y=ue^{x^2}= e^{x^2}\left(c+\int e^{-x^2}dx\right).\]

**Solution b**

Since the initial condition in Equation \ref{eq:2.1.28} is imposed at \(x_0=0\), it is convenient to rewrite Equation \ref{eq:2.1.49} as

\[y=e^{x^2}\left(c+\int_{0}^{x}e^{-t^2}dt \right), \quad\text{since}\quad\int_{0}^{0}e^{-t^2}dt=0. \nonumber \]

Setting \(x=0\) and \(y=y_0\) here shows that \(c=y_0\). Therefore the solution of the initial value problem is

\[\label{eq:2.1.51}y=e^{x^2}\left(y_0 +\int^x_0 e^{-t^2}dt\right).\]

For a given value of \(y_0\) and each fixed \(x\), the integral on the right can be evaluated by numerical methods. An alternate procedure is to apply the numerical integration procedures discussed in Chapter 3 directly to the initial value problem Equation \ref{eq:2.1.28}. Figure \(\PageIndex{5}\) shows graphs of Equation \ref{eq:2.1.51} for several values of \(y_0\).
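For numerical work, note that \(\int_0^x e^{-t^2}\,dt=(\sqrt{\pi}/2)\operatorname{erf}(x)\), where \(\operatorname{erf}\) is the error function, so Equation \ref{eq:2.1.51} can be evaluated with standard library routines. A sketch in Python (the function name `y` is ours):

```python
import math

def y(x, y0):
    # e^{x^2} (y0 + integral of e^{-t^2} from 0 to x),
    # with the integral written via the error function
    return math.exp(x**2) * (y0 + math.sqrt(math.pi)/2 * math.erf(x))

# sanity checks: the initial condition, and the equation y' - 2xy = 1
# via a central difference approximation to y'
assert y(0.0, 1.0) == 1.0
h = 1e-6
x0, y0 = 0.7, 1.0
deriv = (y(x0 + h, y0) - y(x0 - h, y0)) / (2*h)
assert abs(deriv - 2*x0*y(x0, y0) - 1.0) < 1e-4
```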

## An Existence and Uniqueness Theorem

The method of variation of parameters leads to this theorem.

Theorem \(\PageIndex{2}\) Existence and Uniqueness Theorem

Suppose \(p\) and \(f\) are continuous on an open interval \((a,b),\) and let \(y_1\) be any nontrivial solution of the complementary equation

\[y'+p(x)y=0 \nonumber\]

on \((a,b)\). Then:

- The general solution of the nonhomogeneous equation \[\label{eq:2.1.31} y'+p(x)y=f(x)\] on \((a,b)\) is \[\label{eq:2.1.32} y=y_1(x)\left(c +\int f(x)/y_1(x)\,dx\right).\]
- If \(x_0\) is an arbitrary point in \((a,b)\) and \(y_0\) is an arbitrary real number \(,\) then the initial value problem \[y'+p(x)y=f(x),\quad y(x_0)=y_0\nonumber \] has the unique solution \[ y=y_1(x)\left({y_0\over y_1(x_0)} +\int^x_{x_0} {f(t)\over y_1(t)}\, dt\right)\nonumber \] on \((a,b).\)

**Proof**

(a) To show that Equation \ref{eq:2.1.32} is the general solution of Equation \ref{eq:2.1.31} on \((a,b)\), we must prove that:

- If \(c\) is any constant, the function \(y\) in Equation \ref{eq:2.1.32} is a solution of Equation \ref{eq:2.1.31} on \((a,b)\).
- If \(y\) is a solution of Equation \ref{eq:2.1.31} on \((a,b)\) then \(y\) is of the form Equation \ref{eq:2.1.32} for some constant \(c\).

To prove (i), we first observe that any function of the form Equation \ref{eq:2.1.32} is defined on \((a,b)\), since \(p\) and \(f\) are continuous on \((a,b)\). Differentiating Equation \ref{eq:2.1.32} yields

\[y'=y_1'(x)\left(c +\int f(x)/y_1(x)\, dx\right)+f(x).\nonumber \]

Since \(y_1'=-p(x)y_1\), this and Equation \ref{eq:2.1.32} imply that

\[\begin{aligned} y'&=-p(x)y_1(x)\left(c +\int f(x)/y_1(x)\, dx\right)+f(x)\\[4pt] &=-p(x)y(x)+f(x),\end{aligned}\nonumber \]

which implies that \(y\) is a solution of Equation \ref{eq:2.1.31}.

To prove (ii), suppose \(y\) is a solution of Equation \ref{eq:2.1.31} on \((a,b)\). From the proof of Theorem 2.1.1, we know that \(y_1\) has no zeros on \((a,b)\), so the function \(u=y/y_1\) is defined on \((a,b)\). Moreover, since

\[y'=-py+f\quad \text{and}\quad y_1'=-py_1,\nonumber \]

\[\begin{aligned} u'&={y_1y'-y_1'y\over y_1^2} \\[4pt] &={y_1(-py+f)-(-py_1)y\over y_1^2}={f\over y_1}.\end{aligned}\nonumber \]

Integrating \(u'=f/y_1\) yields

\[u=\left(c +\int f(x)/y_1(x)\, dx\right),\nonumber \]

which implies Equation \ref{eq:2.1.32}, since \(y=uy_1\).

(b) We’ve proved (a), where \(\int f(x)/y_1(x)\,dx\) in Equation \ref{eq:2.1.32} is an arbitrary antiderivative of \(f/y_1\). Now it is convenient to choose the antiderivative that equals zero when \(x=x_0\), and write the general solution of Equation \ref{eq:2.1.31} as

\[y=y_1(x)\left(c +\int^x_{x_0} {f(t)\over y_1(t)}\, dt\right). \nonumber\]

Since

\[y(x_0)= y_1(x_0)\left(c +\int^{x_0}_{x_0} {f(t)\over y_1(t)}\, dt\right)=cy_1(x_0), \nonumber\]

we see that \(y(x_0)=y_0\) if and only if \(c=y_0/y_1(x_0)\).
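The closed-form solution in the second part of the theorem can be exercised symbolically for sample data. The sketch below (assuming sympy; the choices \(p(x)=1\), \(f(x)=x\), \(x_0=0\), \(y_0=2\) are ours) confirms both the equation and the initial condition:

```python
import sympy as sp

x, t = sp.symbols('x t')
x0, y0 = 0, 2                       # sample initial data (illustrative choices)
p, f = sp.Integer(1), x             # sample equation: y' + y = x
y1 = sp.exp(-sp.integrate(p, x))    # nontrivial solution of the complementary equation
y = y1 * (y0/y1.subs(x, x0) + sp.integrate((f/y1).subs(x, t), (t, x0, x)))
assert sp.simplify(sp.diff(y, x) + p*y - f) == 0   # solves y' + p y = f
assert sp.simplify(y.subs(x, x0) - y0) == 0        # y(x0) = y0
```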