Mathematics LibreTexts

4.4: Appendix- The Fredholm Alternative Theorem


Given that \(L y=f\), when can one expect to find a solution, and is it unique? These questions are answered by the Fredholm Alternative Theorem, which occurs in many forms, from a statement about solutions of systems of algebraic equations to statements about solutions of boundary value problems and integral equations. The theorem comes in two parts, hence the term "alternative": either the equation has exactly one solution for all \(f\), or the equation has many solutions for some \(f\)’s and none for the rest.

The reader is familiar with the statement of the Fredholm Alternative for systems of algebraic equations. One seeks solutions of the system \(A x=b\) for \(A\) an \(n \times m\) matrix. Defining the matrix adjoint \(A^{*}\) through \(\langle A x, y\rangle=\left\langle x, A^{*} y\right\rangle\) for all \(x \in \mathbb{C}^{m}\) and \(y \in \mathbb{C}^{n}\), one has either

    Theorem \(\PageIndex{1}\): First Alternative

    The equation \(A x=b\) has a solution if and only if \(\langle b, v\rangle=0\) for all \(v\) satisfying \(A^{*} v=0\).

    or

    Theorem \(\PageIndex{2}\): Second Alternative

    A solution of \(A x=b\), if it exists, is unique if and only if \(x=0\) is the only solution of \(A x=0\).

    The second alternative is more familiar when given in the form: The solution of a nonhomogeneous system of \(n\) equations and \(n\) unknowns is unique if the only solution to the homogeneous problem is the zero solution. Or, equivalently, \(A\) is invertible, or has nonzero determinant.
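The second alternative is easy to test numerically. The following sketch (plain Python; the singular matrix here is a hypothetical example, not one from the text) shows that once \(A x=0\) has a nonzero solution, solutions of \(A x=b\) come in one-parameter families:

```python
# Illustration of the second alternative with a hypothetical singular matrix:
# if A x = 0 has a nonzero solution, solutions of A x = b are not unique.

def matvec(A, x):
    """Multiply a 2x2 matrix (given as a list of rows) by a 2-vector."""
    return [A[0][0]*x[0] + A[0][1]*x[1],
            A[1][0]*x[0] + A[1][1]*x[1]]

A = [[1.0, 1.0],
     [1.0, 1.0]]              # singular: det A = 0
x_null = [1.0, -1.0]          # nonzero solution of A x = 0
x0 = [1.0, 1.0]               # a particular solution of A x = b
b = matvec(A, x0)             # b = [2.0, 2.0]

assert matvec(A, x_null) == [0.0, 0.0]

# x0 + alpha * x_null solves A x = b for every alpha, so the solution
# is not unique, exactly as in the first half of the proof below.
for alpha in (0.0, 1.0, -3.5):
    x_new = [x0[i] + alpha * x_null[i] for i in range(2)]
    assert matvec(A, x_new) == b
```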

Proof

    We prove the second theorem first. Assume that \(A x=0\) for \(x \neq 0\) and \(A x_{0}=b\). Then \(A\left(x_{0}+\alpha x\right)=b\) for all \(\alpha\). Therefore, the solution is not unique. Conversely, if there are two different solutions, \(x_{1}\) and \(x_{2}\), satisfying \(A x_{1}=b\) and \(A x_{2}=b\), then one has a nonzero solution \(x=x_{1}-x_{2}\) such that \(A x=A\left(x_{1}-x_{2}\right)=0\).

The first theorem is proved in two parts. The forward direction is simple: let \(A^{*} v=0\) and \(A x_{0}=b\). Then we have

    \[\langle b, v\rangle=\left\langle A x_{0}, v\right\rangle=\left\langle x_{0}, A^{*} v\right\rangle=0 .\nonumber \]

For the reverse direction, assume that \(\langle b, v\rangle=0\) for all \(v\) such that \(A^{*} v=0\). Write \(b\) as the sum of a part in the range of \(A\) and a part orthogonal to the range of \(A\), \(b=b_{R}+b_{O}\). Then, \(0=\left\langle b_{O}, A x\right\rangle=\left\langle A^{*} b_{O}, x\right\rangle\) for all \(x\). Thus, \(A^{*} b_{O}=0\); that is, \(b_{O}\) lies in the nullspace of \(A^{*}\). Since \(\langle b, v\rangle=0\) for all \(v\) in the nullspace of \(A^{*}\), we have \(\left\langle b, b_{O}\right\rangle=0\).

    Therefore, \(\langle b, v\rangle=0\) implies that

    \[0=\left\langle b, b_{O}\right\rangle=\left\langle b_{R}+b_{O}, b_{O}\right\rangle=\left\langle b_{O}, b_{O}\right\rangle \text {. }\nonumber \]

    This means that \(b_{O}=0\), giving \(b=b_{R}\) is in the range of \(A\). So, \(A x=b\) has a solution.

    Example \(\PageIndex{1}\)

    Determine the allowed forms of \(\mathbf{b}\) for a solution of \(A \mathbf{x}=\mathbf{b}\) to exist, where

    \[A=\left(\begin{array}{ll} 1 & 2 \\ 3 & 6 \end{array}\right) .\nonumber \]

    Solution

    First note that \(A^{*}=\bar{A}^{T}\). This is seen by looking at

\[\begin{align} \langle A \mathbf{x}, \mathbf{y}\rangle &=\sum_{i=1}^{n} \sum_{j=1}^{n} a_{i j} x_{j} \bar{y}_{i}\nonumber \\ &=\sum_{j=1}^{n} x_{j} \sum_{i=1}^{n} a_{i j} \bar{y}_{i}\nonumber \\ &=\sum_{j=1}^{n} x_{j} \overline{\sum_{i=1}^{n}\left(\bar{a}^{T}\right)_{j i} y_{i}}=\left\langle\mathbf{x}, A^{*} \mathbf{y}\right\rangle .\label{eq:1} \end{align} \]

    For this example,

    \[A^{*}=\left(\begin{array}{ll} 1 & 3 \\ 2 & 6 \end{array}\right) .\nonumber \]

We next solve \(A^{*} \mathbf{v}=0\). Both equations of this system reduce to \(v_{1}+3 v_{2}=0\), so the nullspace of \(A^{*}\) is spanned by \(\mathbf{v}=(3,-1)^{T}\). For a solution of \(A \mathbf{x}=\mathbf{b}\) to exist, \(\mathbf{b}\) must be orthogonal to \(\mathbf{v}\). Therefore, a solution exists when

    \[\mathbf{b}=\alpha\left(\begin{array}{l} 1 \\ 3 \end{array}\right) .\nonumber \]
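The claims in this example can be verified in a few lines of code. The following sketch (plain Python, using the complex inner product \(\langle x, y\rangle=\sum_i x_{i} \bar{y}_{i}\)) checks that \(\mathbf{v}=(3,-1)^{T}\) is in the nullspace of \(A^{*}\), that the adjoint identity \(\langle A\mathbf{x},\mathbf{y}\rangle = \langle\mathbf{x},A^{*}\mathbf{y}\rangle\) holds for sample complex vectors, and that \(\mathbf{b}=\alpha(1,3)^{T}\) is orthogonal to \(\mathbf{v}\):

```python
# Verify Example 1 numerically with the complex inner product <x, y> = sum x_i conj(y_i).

def matvec(A, x):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

def inner(x, y):
    """Complex inner product, conjugate-linear in the second slot."""
    return sum(xi * yi.conjugate() for xi, yi in zip(x, y))

A     = [[1+0j, 2+0j], [3+0j, 6+0j]]
Astar = [[1+0j, 3+0j], [2+0j, 6+0j]]   # conjugate transpose of A (A is real)
v = [3+0j, -1+0j]

assert matvec(Astar, v) == [0j, 0j]    # v spans the nullspace of A*

# The adjoint identity <Ax, y> = <x, A* y> for arbitrary complex vectors:
x, y = [1+2j, -1j], [3-1j, 2+4j]
assert inner(matvec(A, x), y) == inner(x, matvec(Astar, y))

# b = alpha*(1, 3)^T is orthogonal to v, so A x = b is solvable.
alpha = 2.5
b = [alpha * 1, alpha * 3]
assert inner(b, v) == 0
```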

So, what does the Fredholm Alternative say about solutions of boundary value problems? We now extend the Fredholm Alternative to linear operators. A more general statement is the following.

    Theorem \(\PageIndex{3}\)

    If \(L\) is a bounded linear operator on a Hilbert space, then \(L y=f\) has a solution if and only if \(\langle f, v\rangle=0\) for every \(v\) such that \(L^{\dagger} v=0\).

    The statement for boundary value problems is similar. However, we need to be careful to treat the boundary conditions in our statement. As we have seen, after several integrations by parts we have that

\[\langle\mathcal{L} u, v\rangle=S(u, v)+\left\langle u, \mathcal{L}^{\dagger} v\right\rangle,\nonumber \]

    where \(S(u, v)\) involves the boundary conditions on \(u\) and \(v\). Note that for nonhomogeneous boundary conditions, this term may no longer vanish.

    Theorem \(\PageIndex{4}\)

    The solution of the boundary value problem \(\mathcal{L} u=f\) with boundary conditions \(B u=g\) exists if and only if

    \[\langle f, v\rangle-S(u, v)=0\nonumber \]

for all \(v\) satisfying the adjoint problem \(\mathcal{L}^{\dagger} v=0\) and \(B^{\dagger} v=0\).

    Example \(\PageIndex{2}\)

    Consider the problem

    \[u^{\prime \prime}+u=f(x), \quad u(0)-u(2 \pi)=\alpha, u^{\prime}(0)-u^{\prime}(2 \pi)=\beta .\nonumber \]

    Solution

    Only certain values of \(\alpha\) and \(\beta\) will lead to solutions. We first note that

\[\mathcal{L}=\mathcal{L}^{\dagger}=\frac{d^{2}}{d x^{2}}+1 .\nonumber \]

    Solutions of

\[\mathcal{L}^{\dagger} v=0, \quad v(0)-v(2 \pi)=0, \quad v^{\prime}(0)-v^{\prime}(2 \pi)=0\nonumber \]

    are easily found to be linear combinations of \(v=\sin x\) and \(v=\cos x\).
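These nullspace solutions are easy to confirm numerically. The sketch below (plain Python) approximates \(v''\) by a central difference and checks that \(v''+v \approx 0\) and that the periodic boundary condition holds for both functions:

```python
import math

# Check that v = sin x and v = cos x satisfy v'' + v = 0 together with
# the periodic condition v(0) = v(2 pi), using a central-difference
# approximation of the second derivative.

def second_deriv(v, x, h=1e-4):
    """Central-difference approximation of v''(x), accurate to O(h^2)."""
    return (v(x + h) - 2*v(x) + v(x - h)) / h**2

two_pi = 2 * math.pi
for v in (math.sin, math.cos):
    # interior check: v'' + v = 0 at a few sample points
    for x in (0.5, 1.7, 4.2):
        assert abs(second_deriv(v, x) + v(x)) < 1e-6
    # periodic boundary condition: v(0) = v(2 pi)
    assert abs(v(0) - v(two_pi)) < 1e-12
```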

    Next, one computes

    \[\begin{align} S(u, v) &=\left[u^{\prime} v-u v^{\prime}\right]_{0}^{2 \pi}\nonumber \\ &=u^{\prime}(2 \pi) v(2 \pi)-u(2 \pi) v^{\prime}(2 \pi)-u^{\prime}(0) v(0)+u(0) v^{\prime}(0) .\label{eq:2} \end{align} \]

    For \(v(x)=\sin x\), this yields

    \[S(u, \sin x)=-u(2 \pi)+u(0)=\alpha .\nonumber \]

Similarly, for \(v(x)=\cos x\) one finds \(v(0)=v(2 \pi)=1\) and \(v^{\prime}(0)=v^{\prime}(2 \pi)=0\), so

\[S(u, \cos x)=u^{\prime}(2 \pi)-u^{\prime}(0)=-\beta .\nonumber \]

Using \(\langle f, v\rangle-S(u, v)=0\), this leads to the conditions we were seeking:

\[\begin{aligned} &\int_{0}^{2 \pi} f(x) \sin x d x=\alpha, \\ &\int_{0}^{2 \pi} f(x) \cos x d x=-\beta . \end{aligned} \nonumber \]
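As a sanity check, the sketch below (plain Python; the choice \(f(x)=\sin x\) is a hypothetical example) evaluates the two integrals, giving \(\int_0^{2\pi} f \sin x\, dx = \pi\) and \(\int_0^{2\pi} f \cos x\, dx = 0\), and verifies that the particular solution \(u(x)=-(x / 2) \cos x\) of \(u''+u=\sin x\) has exactly the matching jumps \(u(0)-u(2\pi)=\pi\) and \(u'(0)-u'(2\pi)=0\):

```python
import math

# With the hypothetical forcing f(x) = sin x, compute the solvability
# integrals and compare them with the boundary jumps of the particular
# solution u(x) = -(x/2) cos x, which satisfies u'' + u = sin x.

def trapezoid(g, a, b, n=10_000):
    """Composite trapezoidal rule (very accurate for smooth periodic g)."""
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i*h) for i in range(1, n)))

two_pi = 2 * math.pi
f = math.sin

I_sin = trapezoid(lambda x: f(x) * math.sin(x), 0, two_pi)   # = pi
I_cos = trapezoid(lambda x: f(x) * math.cos(x), 0, two_pi)   # = 0

u  = lambda x: -(x / 2) * math.cos(x)                        # u'' + u = sin x
du = lambda x: -0.5 * math.cos(x) + (x / 2) * math.sin(x)    # u'(x)

assert abs(I_sin - math.pi) < 1e-6                 # integral against sin x
assert abs(I_cos) < 1e-9                           # integral against cos x
assert abs((u(0) - u(two_pi)) - math.pi) < 1e-12   # jump u(0) - u(2 pi) = pi
assert abs(du(0) - du(two_pi)) < 1e-9              # jump u'(0) - u'(2 pi) = 0
```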


    This page titled 4.4: Appendix- The Fredholm Alternative Theorem is shared under a CC BY-NC-SA 3.0 license and was authored, remixed, and/or curated by Russell Herman via source content that was edited to the style and standards of the LibreTexts platform.