3.5: Theorem of Cauchy-Kovalevskaya

Consider the quasilinear system of first order (3.3.1) of Section 3.3. Assume an initial manifold \(\mathcal{S}\) is given by \(\chi(x)=0\), \(\nabla\chi\not=0\), and suppose that \(\chi\) is not characteristic. Then, see Section 3.3, the system (3.3.1) can be written as

    \begin{eqnarray}
    \label{syst2}
    u_{x_n}&=&\sum_{i=1}^{n-1}a^i(x,u)u_{x_i}+b(x,u)\\
    \label{syst2initial}
    u(x_1,\ldots,x_{n-1},0)&=&f(x_1,\ldots,x_{n-1})
    \end{eqnarray}

Here \(u=(u_1,\ldots,u_m)^T\), \(b=(b_1,\ldots,b_m)^T\), and the \(a^i\) are \((m\times m)\)-matrices.

We assume that \(a^i\), \(b\) and \(f\) are in \(C^\infty\) with respect to their arguments. From (\ref{syst2}) and (\ref{syst2initial}) it follows that we can calculate formally all derivatives \(D^\alpha u\) in a neighborhood of the plane \(\{x:\ x_n=0\}\), in particular in a neighborhood of \(0\in\mathbb{R}^n\). Thus we have a formal power series of \(u(x)\) at \(x=0\):

$$u(x)\sim \sum\frac{1}{\alpha !}D^\alpha u(0) x^\alpha.$$

    For notations and definitions used here and in the following see the appendix to this section.
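
    For illustration of the formal calculation, consider the scalar case \(m=1\), \(n=2\), i.e., \(u_{x_2}=a(x,u)u_{x_1}+b(x,u)\) with \(u(x_1,0)=f(x_1)\). On the initial line the tangential derivative comes from the data, \(u_{x_1}(x_1,0)=f'(x_1)\), and the normal derivative comes from the equation,
    $$
    u_{x_2}(x_1,0)=a(x_1,0,f(x_1))f'(x_1)+b(x_1,0,f(x_1)),
    $$
    and all higher derivatives follow by differentiating the differential equation with respect to \(x_1\) and \(x_2\) and inserting the derivatives already known.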

Then, as usual, two questions arise:

1. Does the power series converge in a neighborhood of \(0\in\mathbb{R}^n\)?
    2. Is a convergent power series a solution of the initial value problem (\ref{syst2}), (\ref{syst2initial})?

Remark. Quite different from this power series method is the method of asymptotic expansions. Here one is interested in a good approximation of an unknown solution of an equation by a finite sum \(\sum_{i=0}^N\phi_i(x)\) of functions \(\phi_i\). In general, the infinite sum \(\sum_{i=0}^\infty\phi_i(x)\) does not converge, in contrast to the power series method of this section. See [15] for some asymptotic formulas in capillarity.

Theorem 3.1. (Cauchy-Kovalevskaya). There is a neighborhood of \(0\in\mathbb{R}^n\) such that there is a real analytic solution of the initial value problem (\ref{syst2}), (\ref{syst2initial}). This solution is unique in the class of real analytic functions.

Proof. The proof is taken from F. John \cite{John}. We introduce \(u-f\) as the new solution we are looking for, and we add a new coordinate \(u^\star\) to the solution vector by setting \(u^\star (x)=x_n\). Then

$$u^\star_{x_n}=1,\ u^\star_{x_k}=0,\ k=1,\ldots,n-1,\ u^\star(x_1,\ldots,x_{n-1},0)=0$$

    and the extended system (\ref{syst2}), (\ref{syst2initial}) is

    $$
    \left(\begin{array}{c}
    u_{1,x_n}\\
    \vdots\\
    u_{m,x_n}\\
    u^\star_{x_n}
    \end{array}\right)=
    \sum_{i=1}^{n-1}\left(\begin{array}{cc}
    a^i&0\\
    0&0
    \end{array}\right)
    \left(\begin{array}{c}
    u_{1,x_i}\\
    \vdots\\
    u_{m,x_i}\\
    u^\star_{x_i}
    \end{array}\right)+
    \left(\begin{array}{c}
    b_1\\
    \vdots\\
    b_m\\
    1
    \end{array}\right),
$$

    where the associated initial condition is \(u(x_1,\ldots,x_{n-1},0)=0\).

The new \(u\) is \(u=(u_1,\ldots,u_m,u^\star)^T\), the new \(a^i\) are \(a^i(x_1,\ldots,x_{n-1},u_1,\ldots,u_m,u^\star)\) and the new \(b\) is \(b=(b_1,\ldots,b_m,1)^T\), where the \(b_j\) are functions of \((x_1,\ldots,x_{n-1},u_1,\ldots,u_m,u^\star)\).
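
    A minimal illustration of this extension: for \(m=1\), \(n=2\) and the equation \(u_{x_2}=x_2\,u_{x_1}\) with \(u(x_1,0)=0\), setting \(u^\star=x_2\) gives the system
    $$
    u_{x_2}=u^\star u_{x_1},\qquad u^\star_{x_2}=1,\qquad u(x_1,0)=u^\star(x_1,0)=0,
    $$
    whose right hand side no longer contains \(x_2\) explicitly.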

    Thus we are led to an initial value problem of the type

    \begin{eqnarray}
    \label{syst3}
    u_{j,x_n}&=&\sum_{i=1}^{n-1}\sum_{k=1}^Na_{jk}^i(z)u_{k,x_i}+b_j(z),\ j=1,\ldots,N\\
    \label{syst3initial}
    u_j(x)&=&0\ \ \mbox{if}\ x_n=0,
    \end{eqnarray}
    where \(j=1,\ldots,N\) and \(z=(x_1,\ldots,x_{n-1},u_1,\ldots,u_N)\).

    The point here is that \(a_{jk}^i\) and \(b_j\) are independent of \(x_n\). This fact simplifies the proof of the theorem.

    From (\ref{syst3}) and (\ref{syst3initial}) we can calculate formally all \(D^\beta u_j\). Then we have formal power series for \(u_j\):

$$u_j(x)\sim \sum_\alpha c_\alpha^{(j)}x^\alpha,$$

    where

$$c_\alpha^{(j)}=\frac{1}{\alpha!}D^\alpha u_j(0).$$

We will show that these power series are (absolutely) convergent in a neighborhood of \(0\in\mathbb{R}^n\), i.e., they are real analytic functions; see the appendix for the definition of real analytic functions. Inserting these functions into the left and right hand sides of (\ref{syst3}), we obtain real analytic functions on both sides. This follows since compositions of real analytic functions are real analytic again, see Proposition A7 of the appendix to this section. The resulting power series on the left and on the right have the same coefficients, since the derivatives \(D^\alpha u_j(0)\) were calculated from (\ref{syst3}). It follows that the functions \(u_j(x)\), \(j=1,\ldots,N\), defined by their formal power series, are solutions of the initial value problem (\ref{syst3}), (\ref{syst3initial}).

    Set

$$d=\left(\frac{\partial}{\partial z_1},\ldots,\frac{\partial}{\partial z_{N+n-1}}\right).$$

Lemma A. Assume \(u\in C^\infty\) in a neighborhood of \(0\in\mathbb{R}^n\). Then

    $$D^\alpha u_j(0)=P_\alpha\left(d^\beta a_{jk}^i(0),d^\gamma b_j(0)\right),$$

    where \(|\beta|,\ |\gamma|\le|\alpha|\) and \(P_\alpha\) are polynomials in the indicated arguments with non-negative integers as coefficients which are independent of \(a^i\) and of \(b\).

    Proof. It follows from equation (\ref{syst3}) that

    \begin{equation}
    \label{hilf1}
    D_nD^\alpha u_j(0)=P_\alpha (d^\beta a_{jk}^i(0),d^\gamma b_j(0),D^\delta u_k(0)).
    \end{equation}

Here \(D_n=\partial/\partial x_n\), and \(\alpha,\ \beta,\ \gamma,\ \delta\) satisfy the inequalities

$$|\beta|,\ |\gamma|\le|\alpha|,\ \ |\delta|\le|\alpha|+1,$$

    and, which is essential in the proof, the last coordinates in the multi-indices \(\alpha=(\alpha_1,\ldots,\alpha_n)\), \(\delta=(\delta_1,\ldots,\delta_n)\) satisfy \(\delta_n\le\alpha_n\) since the right hand side of (\ref{syst3}) is independent of \(x_n\).
    Moreover, it follows from (\ref{syst3}) that the polynomials \(P_\alpha\) have integers as coefficients. The initial condition (\ref{syst3initial}) implies

    \begin{equation}
    \label{hilf2}
    D^\alpha u_j(0)=0,
    \end{equation}

where \(\alpha=(\alpha_1,\ldots,\alpha_{n-1},0)\), that is, \(\alpha_n=0\). The proof is by induction with respect to \(\alpha_n\). The induction starts with \(\alpha_n=0\): we replace \(D^\delta u_k(0)\) on the right hand side of (\ref{hilf1}) by (\ref{hilf2}), that is, by zero. Then it follows from (\ref{hilf1}) that

$$D^\alpha u_j(0)=P_\alpha (d^\beta a_{jk}^i(0),d^\gamma b_j(0)),$$

    where \(\alpha=(\alpha_1,\ldots,\alpha_{n-1},1)\).

    \(\Box\)
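
    For illustration, the first step of this induction reads as follows: since \(u_k\) vanishes on the plane \(\{x:\ x_n=0\}\), all tangential derivatives \(u_{k,x_i}(0)\), \(i=1,\ldots,n-1\), vanish, and (\ref{syst3}) gives
    $$
    D_nu_j(0)=\sum_{i=1}^{n-1}\sum_{k=1}^Na_{jk}^i(0)u_{k,x_i}(0)+b_j(0)=b_j(0),
    $$
    which is a polynomial of the required type.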

    Definition. Let \(f=(f_1,\ldots,f_m)\), \(F=(F_1,\ldots,F_m)\), \(f_i=f_i(x)\), \(F_i=F_i(x)\), and \(f,\ F\in C^\infty\). We say \(f\) is majorized by \(F\) if
    $$
    |D^\alpha f_k(0)|\le D^\alpha F_k(0),\ \ k=1,\ldots,m
    $$
for all \(\alpha\). We write \(f<<F\) if \(f\) is majorized by \(F\).
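
    A simple example: \(f(x)=\sin x\) is majorized by \(F(x)=e^x\), since
    $$
    |D^\alpha\sin(0)|\le 1=D^\alpha e^x\big|_{x=0}
    $$
    for all \(\alpha\).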

    Definition. The initial value problem
    \begin{eqnarray}
    \label{syst4}
    U_{j,x_n}&=&\sum_{i=1}^{n-1}\sum_{k=1}^N A_{jk}^i(z)U_{k,x_i}+B_j(z)\\
    \label{syst4initial}
    U_j(x)&=&0 \qquad \mbox{if}\ x_n=0,
    \end{eqnarray}
    \(j=1,\ldots,N\), \(A_{jk}^i,\ B_j\) real analytic, is called majorizing problem to (\ref{syst3}), (\ref{syst3initial}) if
    $$
    a_{jk}^i<<A_{jk}^i\ \ \mbox{and}\ b_j<<B_j.
$$

    Lemma B. The formal power series
    $$
    \sum_\alpha \frac{1}{\alpha!}D^\alpha u_j(0)x^\alpha,
    $$
where \(D^\alpha u_j(0)\) are defined in Lemma A, is convergent in a neighborhood of \(0\in\mathbb{R}^n\) if there exists a majorizing problem which has a real analytic solution \(U\) at \(x=0\), and
    $$
    |D^\alpha u_j(0)|\le D^\alpha U_j(0).
    $$

    Proof. It follows from Lemma A and from the assumption of Lemma B that
    \begin{eqnarray*}
    |D^\alpha u_j(0)|&\le&P_\alpha\left(|d^\beta a_{jk}^i(0)|,|d^\gamma b_j(0)|\right)\\
    &\le&P_\alpha\left(|d^\beta A_{jk}^i(0)|,|d^\gamma B_j(0)|\right)\equiv D^\alpha U_j(0).
    \end{eqnarray*}
    The formal power series
    $$
    \sum_\alpha \frac{1}{\alpha!}D^\alpha u_j(0)x^\alpha,
    $$
    is convergent since
    $$
    \sum_\alpha \frac{1}{\alpha!}|D^\alpha u_j(0)x^\alpha|\le
    \sum_\alpha \frac{1}{\alpha!}D^\alpha U_j(0)|x^\alpha|.
    $$
The right hand side is convergent in a neighborhood of \(x=0\) by assumption.

    \(\Box\)

Lemma C. There is a majorizing problem which has a real analytic solution.

Proof. Since \(a_{jk}^i(z)\), \(b_j(z)\) are real analytic in a neighborhood of \(z=0\), it follows from Proposition A5 of the appendix to this section that there are positive constants \(M\) and \(r\) such that all these functions are majorized by
    $$
    \frac{Mr}{r-z_1-\ldots-z_{N+n-1}}.
    $$
    Thus a majorizing problem is
    \begin{eqnarray*}
    U_{j,x_n}&=&\frac{Mr}{r-x_1-\ldots-x_{n-1}-U_1-\ldots-U_N}\left(1+\sum_{i=1}^{n-1}\sum_{k=1}^NU_{k,x_i}\right)\\
    U_j(x)&=&0\ \ \mbox{if}\ x_n=0,
    \end{eqnarray*}
    \(j=1,\ldots,N\).

    The solution of this problem is
    $$
    U_j(x_1,\ldots,x_{n-1},x_n)=V(x_1+\ldots+x_{n-1},x_n), \ j=1,\ldots,N,
    $$
    where \(V(s,t)\), \(s=x_1+\ldots+x_{n-1}\), \(t=x_n\), is the solution of the Cauchy initial value problem
    \begin{eqnarray*}
    V_t&=&\frac{Mr}{r-s-NV}\left(1+N(n-1)V_s\right),\\
V(s,0)&=&0,
\end{eqnarray*}
which has the solution (see an exercise)
    $$
    V(s,t)=\frac{1}{Nn}\left(r-s-\sqrt{(r-s)^2-2nMNrt}\right).
    $$
    This function is real analytic in \((s,t)\) at \((0,0)\). It follows that \(U_j(x)\) are also real analytic functions. Thus the Cauchy-Kovalevskaya theorem is shown.

    \(\Box\)
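
    The exercise referred to above can also be checked symbolically. The following sympy sketch (not part of the original proof) verifies that \(V\) satisfies the differential equation, and the initial condition on the branch \(r>s\):

    ```python
    import sympy as sp

    # Sketch: verify that V from Lemma C solves
    #   V_t = M*r/(r - s - N*V) * (1 + N*(n-1)*V_s),  V(s, 0) = 0.
    s, t, r, M, n, N = sp.symbols('s t r M n N', positive=True)

    V = (r - s - sp.sqrt((r - s)**2 - 2*n*M*N*r*t)) / (N*n)

    lhs = sp.diff(V, t)
    rhs = M*r / (r - s - N*V) * (1 + N*(n - 1)*sp.diff(V, s))

    print(sp.simplify(lhs - rhs))               # expected: 0
    print(sp.simplify(V.subs({t: 0, s: r/2})))  # expected: 0 (branch r > s)
    ```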

    Example 3.5.1: Ordinary differential equations

    Consider the initial value problem
    \begin{eqnarray*}
    y'(x)&=&f(x,y(x))\\
    y(x_0)&=&y_0,
    \end{eqnarray*}
where \(x_0\in\mathbb{R}^1\) and \(y_0\in\mathbb{R}\) are given. Assume \(f(x,y)\) is real analytic in a neighborhood of \((x_0,y_0)\in\mathbb{R}^1\times\mathbb{R}\). Then it follows from the above theorem that there exists an analytic solution \(y(x)\) of the initial value problem in a neighborhood of \(x_0\). This solution is unique in the class of analytic functions according to the theorem of Cauchy-Kovalevskaya. From the Picard-Lindelöf theorem it follows that this analytic solution is unique even in the class of \(C^1\)-functions.
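
    A short sympy sketch of the formal power series construction for this example, using the hypothetical right hand side \(f(x,y)=x+y\) with \(y(0)=1\); the exact solution here is \(y(x)=2e^x-x-1\):

    ```python
    import sympy as sp

    # Sketch: compute the Taylor coefficients of y from y' = f(x, y), y(0) = 1,
    # by formally differentiating the equation, as in the text.
    x = sp.symbols('x')
    y = sp.Function('y')
    f = x + y(x)              # hypothetical right hand side f(x, y) = x + y
    x0, y0, order = 0, 1, 6

    coeffs = []
    d = y(x)                  # expression for the k-th derivative of y
    for k in range(order):
        coeffs.append(d.subs(y(x), y0).subs(x, x0) / sp.factorial(k))
        # differentiate and eliminate y'(x) using the differential equation
        d = sp.diff(d, x).subs(sp.Derivative(y(x), x), f)

    series = sum(c * (x - x0)**k for k, c in enumerate(coeffs))
    print(sp.expand(series))  # 1 + x + x**2 + x**3/3 + x**4/12 + x**5/60,
                              # the Taylor polynomial of 2*exp(x) - x - 1
    ```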

    Example 3.5.2: Partial differential equations of second order

Consider the initial value problem for two variables
    \begin{eqnarray*}
    u_{yy}&=&f(x,y,u,u_x,u_y,u_{xx},u_{xy})\\
    u(x,0)&=&\phi(x)\\
    u_y(x,0)&=&\psi(x).
    \end{eqnarray*}
We assume that \(\phi,\ \psi\) are analytic in a neighborhood of \(x=0\) and that \(f\) is real analytic in a neighborhood of
$$
(0,0,\phi(0),\phi'(0),\psi(0),\phi''(0),\psi'(0)).
$$
    There exists a real analytic solution in a neighborhood of \(0\in\mathbb{R}^2\) of the above initial value problem.

    In particular, there is a real analytic solution in a neighborhood of \(0\in\mathbb{R}^2\) of the initial value problem
    \begin{eqnarray*}
    \triangle u&=&1\\
    u(x,0)&=&0\\
    u_y(x,0)&=&0.
    \end{eqnarray*}
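
    In fact, an analytic solution of this particular problem can be written down explicitly, namely
    $$
    u(x,y)=\frac{y^2}{2}:\qquad \triangle u=u_{xx}+u_{yy}=0+1=1,\qquad u(x,0)=0,\quad u_y(x,0)=0.
    $$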

    The proof follows by writing the above problem as a system. Set \(p=u_x\), \(q=u_y\), \(r=u_{xx}\),
    \(s=u_{xy}\), \(t=u_{yy}\), then
    $$
    t=f(x,y,u,p,q,r,s).
    $$
    Set \(U=(u,p,q,r,s,t)^T\), \(b=(q,0,t,0,0,f_y+f_uq+f_qt)^T\) and
    $$
    A=\left(\begin{array}{cccccc}
    0&0&0&0&0&0\\
    0&0&1&0&0&0\\
    0&0&0&0&0&0\\
    0&0&0&0&1&0\\
    0&0&0&0&0&1\\
    0&0&f_p&0&f_r&f_s
    \end{array}\right).
    $$
    Then the rewritten differential equation is the system
    \(U_y=AU_x+b\) with the initial condition
    $$
U(x,0)=\left(\phi(x),\phi'(x),\psi(x),\phi''(x),\psi'(x),f_0(x)\right)^T,
    $$
    where \(f_0(x)=f(x,0,\phi(x),\phi'(x),\psi(x),\phi''(x),\psi'(x))\).
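
    As a cross-check of this reduction, the following sympy sketch verifies \(U_y=AU_x+b\) for the instance \(\triangle u=1\), i.e. \(f=1-r\) (so \(f_p=f_s=0\), \(f_r=-1\), and the last component of \(b\) vanishes), with the explicit solution \(u=y^2/2\) from above:

    ```python
    import sympy as sp

    # Sketch: check U_y = A U_x + b for f = 1 - r (triangle u = 1),
    # using the explicit solution u = y**2/2.
    x, y = sp.symbols('x y')
    u = y**2 / 2
    p, q = sp.diff(u, x), sp.diff(u, y)
    r, s, t = sp.diff(u, x, 2), sp.diff(u, x, y), sp.diff(u, y, 2)

    U = sp.Matrix([u, p, q, r, s, t])
    A = sp.Matrix([
        [0, 0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0, 0],
        [0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 1, 0],
        [0, 0, 0, 0, 0, 1],
        [0, 0, 0, 0, -1, 0],  # row (0, 0, f_p, 0, f_r, f_s) with f = 1 - r
    ])
    b = sp.Matrix([q, 0, t, 0, 0, 0])  # f_y + f_u*q + f_q*t = 0 here

    lhs = U.applyfunc(lambda e: sp.diff(e, y))
    rhs = A * U.applyfunc(lambda e: sp.diff(e, x)) + b
    print(sp.simplify(lhs - rhs))  # expected: the zero vector
    ```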

    Contributors and Attributions


    This page titled 3.5: Theorem of Cauchy-Kovalevskaya is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.
