
8.3: Applications of Nonlinear Systems


    In this section we study two very standard examples of nonlinear systems. First, we look at the nonlinear pendulum equation. We saw the pendulum equation’s linearization before, but we noted it was only valid for small angles and short times. Now we find out what happens for large angles. Next, we look at the predator-prey equation, which finds various applications in modeling problems in biology, chemistry, economics, and elsewhere.


    The first example we study is the pendulum equation \(\theta''+\frac{g}{L} \sin \theta = 0\). Here, \(\theta\) is the angular displacement, \(g\) is the gravitational acceleration, and \(L\) is the length of the pendulum. In this equation we disregard friction, so we are talking about an idealized pendulum.

Figure \(\PageIndex{1}\): Pendulum of mass \(m\), length \(L\), and angle \(\theta\).

This equation is a conservative equation, so we can use our analysis of conservative equations from the previous section. Let us change the equation to a two-dimensional system in variables \((\theta,\omega)\) by introducing the new variable \(\omega = \theta'\): \[\begin{bmatrix} \theta \\ \omega \end{bmatrix} ' = \begin{bmatrix} \omega \\ - \frac{g}{L} \sin \theta \end{bmatrix} . \nonumber \]

    The critical points of this system are when \(\omega = 0\) and \(-\frac{g}{L} \sin \theta = 0\), or in other words if \(\sin \theta = 0\). So the critical points are when \(\omega = 0\) and \(\theta\) is a multiple of \(\pi\). That is, the points are \(\ldots (-2\pi,0), (-\pi,0), (0,0), (\pi,0), (2\pi,0) \ldots\). While there are infinitely many critical points, they are all isolated. Let us compute the Jacobian matrix: \[\begin{bmatrix} \frac{\partial}{\partial \theta} \Bigl( \omega \Bigr) & \frac{\partial}{\partial \omega} \Bigl( \omega \Bigr) \\ \frac{\partial}{\partial \theta} \Bigl( - \frac{g}{L} \sin \theta \Bigr) & \frac{\partial}{\partial \omega} \Bigl( - \frac{g}{L} \sin \theta \Bigr) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ - \frac{g}{L} \cos \theta & 0 \end{bmatrix} . \nonumber \]

For conservative equations, there are only two types of critical points: stable centers and saddle points. The eigenvalues of the Jacobian matrix are \(\lambda = \pm \sqrt{-\frac{g}{L}\cos \theta}\).

    The eigenvalues are going to be real when \(\cos \theta < 0\). This happens at the odd multiples of \(\pi\). The eigenvalues are going to be purely imaginary when \(\cos \theta > 0\). This happens at the even multiples of \(\pi\). Therefore the system has a stable center at the points \(\ldots (-2\pi,0), (0,0), (2\pi,0) \ldots\), and it has an unstable saddle at the points \(\ldots (-3\pi,0), (-\pi,0), (\pi,0), (3\pi,0) \ldots\). Look at the phase diagram in Figure \(\PageIndex{2}\), where for simplicity we let \(\frac{g}{L} = 1\).
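This classification is easy to spot-check numerically. The sketch below (Python; the helper name `jacobian_eigs` is ours, and we take \(\frac{g}{L} = 1\) as in the phase diagram) uses the fact that the eigenvalues of \(\left[ \begin{smallmatrix} 0 & 1 \\ -\cos\theta & 0 \end{smallmatrix}\right]\) satisfy \(\lambda^2 = -\cos\theta\):

```python
import math

def jacobian_eigs(theta):
    """Eigenvalues of the pendulum Jacobian [[0, 1], [-cos(theta), 0]]
    at a critical point (theta, 0), with g/L = 1.  They satisfy
    lambda^2 = -cos(theta)."""
    v = -math.cos(theta)
    if v >= 0:
        # real pair of opposite signs: saddle
        return (math.sqrt(v), -math.sqrt(v))
    # purely imaginary pair: (linearized) center
    return (complex(0, math.sqrt(-v)), complex(0, -math.sqrt(-v)))

print(jacobian_eigs(0.0))      # purely imaginary: center at even multiples of pi
print(jacobian_eigs(math.pi))  # real, opposite signs: saddle at odd multiples of pi
```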

    Figure \(\PageIndex{2}\): Phase plane diagram and some trajectories of the nonlinear pendulum equation.

    In the linearized equation we have only a single critical point, the center at \((0,0)\). Now we see more clearly what we meant when we said the linearization is good for small angles. The horizontal axis is the deflection angle. The vertical axis is the angular velocity of the pendulum. Suppose we start at \(\theta = 0\) (no deflection), and we start with a small angular velocity \(\omega\). Then the trajectory keeps going around the critical point \((0,0)\) in an approximate circle. This corresponds to short swings of the pendulum back and forth. When \(\theta\) stays small, the trajectories really look like circles and hence are very close to our linearization.

    When we give the pendulum a big enough push, it goes across the top and keeps spinning about its axis. This behavior corresponds to the wavy curves that do not cross the horizontal axis in the phase diagram. Let us suppose we look at the top curves, when the angular velocity \(\omega\) is large and positive. Then the pendulum is going around and around its axis. The velocity is going to be large when the pendulum is near the bottom, and the velocity is the smallest when the pendulum is close to the top of its loop.

    At each critical point, there is an equilibrium solution. Consider the solution \(\theta = 0\); the pendulum is not moving and is hanging straight down. This is a stable place for the pendulum to be, hence this is a stable equilibrium.

    The other type of equilibrium solution is at the unstable point, for example \(\theta = \pi\). Here the pendulum is upside down. Sure you can balance the pendulum this way and it will stay, but this is an unstable equilibrium. Even the tiniest push will make the pendulum start swinging wildly.

    See Figure \(\PageIndex{3}\) for a diagram. The first picture is the stable equilibrium \(\theta = 0\). The second picture corresponds to those in the phase diagram around \(\theta =0\) when the angular velocity is small. The next picture is the unstable equilibrium \(\theta = \pi\). The last picture corresponds to the wavy lines for large angular velocities.

Figure \(\PageIndex{3}\): Various possibilities for the motion of the pendulum: the stable equilibrium \(\theta = 0\), back-and-forth swinging at small angular velocities, the unstable equilibrium \(\theta = \pi\), and full circles at large angular velocities.

    The quantity

    \[\frac{1}{2} \omega^2 - \frac{g}{L} \cos \theta \nonumber \]

    is conserved by any solution. This is the energy or the Hamiltonian of the system.
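We can watch this conservation numerically. The following sketch (Python; the fourth-order Runge-Kutta integrator is our choice of method, not prescribed by the text) integrates the pendulum system with \(\frac{g}{L} = 1\) and tracks the quantity \(\frac{1}{2}\omega^2 - \frac{g}{L}\cos\theta\) along the trajectory:

```python
import math

def rk4_step(f, state, dt):
    """One classical fourth-order Runge-Kutta step for state' = f(state)."""
    k1 = f(state)
    k2 = f([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = f([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

g_over_L = 1.0

def pendulum(state):
    theta, omega = state
    return [omega, -g_over_L * math.sin(theta)]

def energy(state):
    theta, omega = state
    return 0.5 * omega ** 2 - g_over_L * math.cos(theta)

state = [0.0, 1.0]   # hanging straight down, given a small push
E0 = energy(state)   # = -0.5
dt, drift = 0.01, 0.0
for _ in range(5000):  # integrate up to t = 50
    state = rk4_step(pendulum, state, dt)
    drift = max(drift, abs(energy(state) - E0))
print(drift)  # the drift in the conserved quantity stays at numerical-error level
```

The tiny drift is pure numerical error; the exact flow keeps the energy perfectly constant, which is why the trajectories in Figure \(\PageIndex{2}\) are level curves of this quantity.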

    We have a conservative equation and so (exercise) the trajectories are given by

    \[\omega = \pm \sqrt{ \frac{2g}{L} \cos \theta + C} , \nonumber \]

    for various values of \(C\). Let us look at the initial condition of \((\theta_0,0)\), that is, we take the pendulum to angle \(\theta_0\), and just let it go (initial angular velocity 0). We plug the initial conditions into the above and solve for \(C\) to obtain

    \[C = - \frac{2g}{L} \cos \theta_0 . \nonumber \]

    Thus the expression for the trajectory is

    \[\omega = \pm \sqrt{ \frac{2g}{L}} \sqrt{ \cos \theta - \cos \theta_0 } . \nonumber \]

Let us figure out the period, that is, the time it takes for the pendulum to swing back and forth. We notice that the oscillation about the origin in the phase plane is symmetric about both the \(\theta\) and the \(\omega\) axis. That is, in terms of \(\theta\), the time it takes from \(\theta_0\) to \(-\theta_0\) is the same as the time from \(-\theta_0\) back to \(\theta_0\). Furthermore, the time it takes from \(-\theta_0\) to \(0\) is the same as the time to go from \(0\) to \(\theta_0\). Therefore, let us find how long it takes for the pendulum to go from angle \(0\) to angle \(\theta_0\), which is a quarter of the full oscillation, and then multiply by 4.

We figure out this time by finding \(\frac{dt}{d\theta}\) and integrating from \(0\) to \(\theta_0\). The period is four times this integral. Let us stay in the region where \(\omega\) is positive. Since \(\omega = \frac{d\theta}{dt}\), inverting we get

    \[\frac{dt}{d\theta} = \sqrt{\frac{L}{2g}} \frac{1}{\sqrt{\cos \theta - \cos \theta_0 }} . \nonumber \]

    Therefore the period \(T\) is given by

    \[T = 4 \sqrt{\frac{L}{2g}} \int_0^{\theta_0} \frac{1}{\sqrt{\cos \theta - \cos \theta_0 }}\, d\theta . \nonumber \]

    The integral is an improper integral, and we cannot in general evaluate it symbolically. We must resort to numerical approximation if we want to compute a particular \(T\).

Recall from Section 2.4 that the linearized equation \(\theta''+\frac{g}{L}\theta = 0\) has period

    \[T_{\text{linear}} = 2\pi \sqrt{\frac{L}{g}} . \nonumber \]

We plot \(T\), \(T_{\text{linear}}\), and the relative error \(\frac{T-T_{\text{linear}}}{T}\) in Figure \(\PageIndex{4}\). The relative error says how far our approximation is from the real period, percentage-wise. Note that \(T_{\text{linear}}\) is simply a constant; it does not change with the initial angle \(\theta_0\). The actual period \(T\) gets larger and larger as \(\theta_0\) gets larger. Notice how the relative error is small when \(\theta_0\) is small. It is still only \(15\%\) when \(\theta_0 = \frac{\pi}{2}\), that is, a 90 degree angle. The error is \(3.8\%\) when starting at \(\frac{\pi}{4}\), a 45 degree angle. At a 5 degree initial angle, the error is only \(0.048 \%\).

Figure \(\PageIndex{4}\): The plot of \(T\) and \(T_{\text{linear}}\) with \(\frac{g}{L}=1\) (left), and the plot of the relative error \(\frac{T-T_{\text{linear}}}{T}\) (right), for \(\theta_{0}\) between \(0\) and \(\frac{\pi}{2}\).
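One practical way to carry out the numerical approximation: the standard substitution \(\sin \phi = \frac{\sin(\theta/2)}{\sin(\theta_0/2)}\) (a known reduction, not carried out in the text) turns the period integral into a complete elliptic integral of the first kind, \(T = 4\sqrt{\frac{L}{g}}\, K\bigl(\sin\frac{\theta_0}{2}\bigr)\), and \(K\) can be computed rapidly via the arithmetic-geometric mean. A sketch in Python reproducing the error figures quoted above:

```python
import math

def agm(a, b):
    """Arithmetic-geometric mean of a and b."""
    while abs(a - b) > 1e-14:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def period(theta0, g_over_L=1.0):
    """Pendulum period via T = 4*sqrt(L/g)*K(k) with k = sin(theta0/2),
    using the identity K(k) = pi / (2 * agm(1, sqrt(1 - k^2)))."""
    k = math.sin(theta0 / 2)
    K = math.pi / (2 * agm(1.0, math.sqrt(1.0 - k * k)))
    return 4 * K / math.sqrt(g_over_L)

T_linear = 2 * math.pi  # linearized period, with g/L = 1

for deg in (5, 45, 90):
    T = period(math.radians(deg))
    rel = (T - T_linear) / T
    print(deg, round(T, 4), round(100 * rel, 3))  # errors: about 0.048%, 3.8%, 15%
```

At \(\theta_0 = \frac{\pi}{2}\) this gives \(T \approx 7.416\), matching the top of the left plot in Figure \(\PageIndex{4}\).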

    While it is not immediately obvious from the formula, it is true that

    \[\lim_{\theta_0 \uparrow \pi} T = \infty . \nonumber \]

That is, the period goes to infinity as the initial angle approaches the unstable equilibrium point. So if we put the pendulum almost upside down, it may take a very long time before it swings back down. This is consistent with the limiting behavior: the exactly upside-down pendulum never makes an oscillation, so we could think of that as an infinite period.

    Predator-Prey or Lotka-Volterra Systems

One of the most common simple applications of nonlinear systems is the so-called predator-prey or Lotka-Volterra\(^{1}\) system. For example, these systems arise when two species interact, one as the prey and one as the predator. It is then no surprise that the equations also see applications in economics. The system also arises in chemical reactions. In biology, this system of equations explains the natural periodic variations of populations of different species in nature. Before the application of differential equations, these periodic variations baffled biologists.

We keep with the classical example of hares and foxes in a forest, as it is the easiest to understand. \[\begin{align}\begin{aligned} & x = \# \text{ of hares (the prey),} \\ & y = \# \text{ of foxes (the predator).} \end{aligned}\end{align} \nonumber \] When there are a lot of hares, there is plenty of food for the foxes, so the fox population grows. However, when the fox population grows, the foxes eat more hares, so when there are lots of foxes, the hare population should go down, and vice versa. The Lotka–Volterra model proposes that this behavior is described by the system of equations \[\begin{align}\begin{aligned} & x' = (a-by)x, \\ & y' = (cx-d)y, \end{aligned}\end{align} \nonumber \] where \(a,b,c,d\) are some parameters that describe the interaction of the foxes and hares\(^{2}\). In this model, these are all positive numbers.

Let us analyze the idea behind this model. The model builds on the exponential population model. First expand, \[x' = (a-by)x = ax - byx . \nonumber \] In the absence of foxes, the hares are expected to simply grow exponentially; that is where the \(ax\) term comes in: the growth in population is proportional to the population itself. We are assuming the hares always find enough food and have enough space to reproduce. However, there is another component, \(-byx\): the population also decreases at a rate proportional to the product of the number of foxes and the number of hares. Together we can write the equation as \((a-by)x\), so it is like exponential growth or decay, but the rate constant depends on the number of foxes.

    The equation for foxes is very similar, expand again \[y' = (cx-d)y = cxy-dy . \nonumber \] The foxes need food (hares) to reproduce: the more food, the bigger the rate of growth, hence the \(cxy\) term. On the other hand, there are natural deaths in the fox population, and hence the \(-dy\) term.

    Without further delay, let us start with an explicit example. Suppose the equations are \[x' = (0.4-0.01y)x, \qquad y' = (0.003x-0.3)y . \nonumber \] See Figure \(\PageIndex{5}\) for the phase portrait. In this example it makes sense to also plot \(x\) and \(y\) as graphs with respect to time. Therefore the second graph in Figure \(\PageIndex{5}\) is the graph of \(x\) and \(y\) on the vertical axis (the prey \(x\) is the thinner line with taller peaks), against time on the horizontal axis. The particular solution graphed was with initial conditions of \(20\) foxes and \(50\) hares.

    Figure \(\PageIndex{5}\): The phase portrait (left) and graphs of \(x\) and \(y\) for a sample solution (right).
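The sample solution in Figure \(\PageIndex{5}\) is easy to reproduce. A minimal sketch (Python; the fourth-order Runge-Kutta integrator is our choice of method): starting from 50 hares and 20 foxes, both populations oscillate, crossing back and forth over the values 100 and 40 respectively.

```python
def rk4_step(f, state, dt):
    """One classical fourth-order Runge-Kutta step for state' = f(state)."""
    k1 = f(state)
    k2 = f([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = f([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def lotka_volterra(state):
    x, y = state  # x = hares (prey), y = foxes (predator)
    return [(0.4 - 0.01 * y) * x, (0.003 * x - 0.3) * y]

state = [50.0, 20.0]  # 50 hares and 20 foxes, as in the figure
dt = 0.01
xs, ys = [state[0]], [state[1]]
for _ in range(6000):  # integrate up to t = 60, several full cycles
    state = rk4_step(lotka_volterra, state, dt)
    xs.append(state[0])
    ys.append(state[1])

# both populations stay positive and oscillate around (100, 40)
print(min(xs), max(xs), min(ys), max(ys))
```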

    Let us analyze what we see on the graphs. We work in the general setting rather than putting in specific numbers. We start with finding the critical points. Set \((a-by)x = 0\), and \((cx-d)y = 0\). The first equation is satisfied if either \(x=0\) or \(y=\frac{a}{b}\). If \(x=0\), the second equation implies \(y=0\). If \(y= \frac{a}{b}\), the second equation implies \(x=\frac{d}{c}\). There are two equilibria: at \((0,0)\) when there are no animals at all, and at \((\frac{d}{c},\frac{a}{b})\). In our specific example \(x = \frac{d}{c} = 100\), and \(y = \frac{a}{b} = 40\). This is the point where there are 100 hares and 40 foxes.

    We compute the Jacobian matrix: \[\begin{bmatrix} a-by & -bx \\ cy & cx-d \end{bmatrix} . \nonumber \] At the origin \((0,0)\) we get the matrix \(\left[ \begin{smallmatrix} a & 0 \\ 0 & -d \end{smallmatrix} \right]\), so the eigenvalues are \(a\) and \(-d\), hence real and of opposite signs. So the critical point at the origin is a saddle. This makes sense. If you started with some foxes but no hares, then the foxes would go extinct, that is, you would approach the origin. If you started with no foxes and a few hares, then the hares would keep multiplying without check, and so you would go away from the origin.

OK, how about the other critical point at \((\frac{d}{c},\frac{a}{b})\)? Here the Jacobian matrix becomes \[\begin{bmatrix} 0 & -\frac{bd}{c} \\ \frac{ac}{b} & 0 \end{bmatrix} . \nonumber \] The eigenvalues satisfy \(\lambda^2 + ad = 0\). In other words, \(\lambda = \pm i \sqrt{ad}\). The eigenvalues being purely imaginary, we are in the case where we cannot quite decide using only linearization. We could have a stable center, a spiral sink, or a spiral source. That is, the equilibrium could be stable, asymptotically stable, or unstable. Of course I gave you a picture above that seems to imply it is a stable center. But never trust a picture only. Perhaps the oscillations are getting larger and larger, but only very slowly. Of course this would be bad, as it would imply something will go wrong with our population sooner or later. And I only graphed a very specific example with very specific trajectories.

How can we be sure we are in the stable situation? As we said before, in the case of purely imaginary eigenvalues, we have to do a bit more work. Previously we found that for conservative systems, there was a certain quantity that was conserved on the trajectories, and hence the trajectories had to go in closed loops. We can use a similar technique here. We just have to figure out what the conserved quantity is. After some trial and error we find that the quantity \[C = \frac{y^a x^d}{e^{cx+by}} = y^a x^d e^{-cx-by} \nonumber \] is conserved. Such a quantity is called a constant of motion. Let us check that \(C\) really is a constant of motion. How do we check, you say? Well, a constant is something that does not change with time, so let us compute the derivative with respect to time: \[C' = a y^{a-1}y' x^d e^{-cx-by} + y^a d x^{d-1} x' e^{-cx-by} + y^a x^d e^{-cx-by} (-cx'-by') . \nonumber \] Our equations give us what \(x'\) and \(y'\) are, so let us plug those in:

    \[\begin{align}\begin{aligned} C' & = a y^{a-1} (cx-d)y x^d e^{-cx-by} + y^a d x^{d-1} (a-by)x e^{-cx-by} \\ & \phantom{mm} + y^a x^d e^{-cx-by} \bigl(-c(a-by)x-b(cx-d)y\bigr) \\ & = y^a x^d e^{-cx-by} \Bigl( a (cx-d) + d (a-by) + \bigl(-c(a-by)x-b(cx-d)y\bigr) \Bigr) \\ & = 0 . \end{aligned}\end{align} \nonumber \] So along the trajectories \(C\) is constant. In fact, the expression \(C = \frac{y^a x^d}{e^{cx+by}}\) gives us an implicit equation for the trajectories. In any case, once we have found this constant of motion, it must be true that the trajectories are simple curves, that is, the level curves of \(\frac{y^a x^d}{e^{cx+by}}\). It turns out, the critical point at \((\frac{d}{c},\frac{a}{b})\) is a maximum for \(C\) (left as an exercise). So \((\frac{d}{c},\frac{a}{b})\) is a stable equilibrium point, and we do not have to worry about the foxes and hares going extinct or their populations exploding.
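The computation above can be confirmed numerically: along a simulated trajectory of the explicit example (\(a = 0.4\), \(b = 0.01\), \(c = 0.003\), \(d = 0.3\)), the quantity \(C = y^a x^d e^{-cx-by}\) should not change. A sketch (Python; the Runge-Kutta integrator is our choice of method):

```python
import math

def rk4_step(f, state, dt):
    """One classical fourth-order Runge-Kutta step for state' = f(state)."""
    k1 = f(state)
    k2 = f([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = f([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6 * (p + 2 * q + 2 * r + w)
            for s, p, q, r, w in zip(state, k1, k2, k3, k4)]

a, b, c, d = 0.4, 0.01, 0.003, 0.3  # parameters of the explicit example

def lotka_volterra(state):
    x, y = state
    return [(a - b * y) * x, (c * x - d) * y]

def constant_of_motion(state):
    x, y = state
    return y ** a * x ** d * math.exp(-c * x - b * y)

state = [50.0, 20.0]
C0 = constant_of_motion(state)
dt, worst = 0.01, 0.0
for _ in range(6000):  # integrate up to t = 60
    state = rk4_step(lotka_volterra, state, dt)
    worst = max(worst, abs(constant_of_motion(state) - C0) / C0)
print(worst)  # relative drift in C stays at numerical-error level
```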

One blemish on this wonderful model is that the numbers of foxes and hares are discrete quantities and we are modeling with continuous variables. Our model has no problem with there being \(0.1\) fox in the forest, for example, while in reality that makes no sense. The approximation is a reasonable one as long as the numbers of foxes and hares are large, but it does not make much sense for small numbers. One must be careful in interpreting any results from such a model.

    An interesting consequence (perhaps counterintuitive) of this model is that adding animals to the forest might lead to extinction, because the variations will get too big, and one of the populations will get close to zero. For example, suppose there are \(20\) foxes and \(50\) hares as before, but now we bring in more foxes, bringing their number to \(200\). If we run the computation, we find the number of hares will plummet to just slightly more than \(1\) hare in the whole forest. In reality that most likely means the hares die out, and then the foxes will die out as well as they will have nothing to eat.
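This experiment is easy to repeat. A sketch (Python; the Runge-Kutta integrator is our choice of method): starting from 50 hares but 200 foxes, we track the minimum hare population over the following stretch of time.

```python
def rk4_step(f, state, dt):
    """One classical fourth-order Runge-Kutta step for state' = f(state)."""
    k1 = f(state)
    k2 = f([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = f([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def lotka_volterra(state):
    x, y = state
    return [(0.4 - 0.01 * y) * x, (0.003 * x - 0.3) * y]

state = [50.0, 200.0]  # same 50 hares, but now 200 foxes
dt = 0.005
min_hares = state[0]
for _ in range(8000):  # integrate up to t = 40, well past the hare crash
    state = rk4_step(lotka_volterra, state, dt)
    min_hares = min(min_hares, state[0])
print(min_hares)  # dips to just slightly more than 1 hare
```

The hare population bottoms out near a single hare, consistent with the discussion above: in a real forest that would almost surely mean extinction.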

Showing that a system of equations has a stable solution can be a very difficult problem. When Isaac Newton put forth his laws of planetary motion, he proved that a single planet orbiting a single sun is a stable system. But any solar system with more than one planet proved very difficult indeed. In fact, such a system behaves chaotically (see Section 8.5), meaning that small changes in initial conditions lead to very different long-term outcomes. From numerical experimentation and measurements, we know the earth will not fly off into empty space or crash into the sun for at least some millions of years. But we do not know what happens beyond that.


    [1] Named for the American mathematician, chemist, and statistician Alfred James Lotka (1880–1949) and the Italian mathematician and physicist Vito Volterra (1860–1940).

    [2] This interaction does not end well for the hare.

    This page titled 8.3: Applications of Nonlinear Systems is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Jiří Lebl via source content that was edited to the style and standards of the LibreTexts platform.