1.10.1: Homogeneous Equations

    A system of equations in the variables \(x_1, x_2, \dots, x_n\) is called homogeneous if all the constant terms are zero—that is, if each equation of the system has the form

    \[a_1x_1 + a_2x_2 + \dots + a_nx_n = 0 \nonumber \]

    Clearly \(x_1 = 0, x_2 = 0, \dots, x_n = 0\) is a solution to such a system; it is called the trivial solution. Any solution in which at least one variable has a nonzero value is called a nontrivial solution. Our chief goal in this section is to give a useful condition for a homogeneous system to have nontrivial solutions. The following example is instructive.

    Example \(\PageIndex{1}\)

    Show that the following homogeneous system has nontrivial solutions.

    \[ \begin{array}{rlrlrlrcr} x_1 & - & x_2 & + & 2x_3 & - & x_4 & = & 0 \\ 2x_1 & + &2x_2 & & & + & x_4 & = & 0 \\ 3x_1 & + & x_2 & + & 2x_3 & - & x_4 & = & 0 \end{array} \nonumber \]

    Solution

    The reduction of the augmented matrix to reduced row-echelon form is outlined below.

    \[\left[ \begin{array}{rrrr|r} 1 & -1 & 2 & -1 & 0 \\ 2 & 2 & 0 & 1 & 0 \\ 3 & 1 & 2 & -1 & 0 \end{array} \right] \rightarrow \left[ \begin{array}{rrrr|r} 1 & -1 & 2 & -1 & 0 \\ 0 & 4 & -4 & 3 & 0 \\ 0 & 4 & -4 & 2 & 0 \end{array} \right] \rightarrow \left[ \begin{array}{rrrr|r} 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{array} \right] \nonumber \]

    The leading variables are \(x_1\), \(x_2\), and \(x_4\), so \(x_3\) is assigned as a parameter—say \(x_3 = t\). Then the general solution is \(x_1 = -t\), \(x_2 = t\), \(x_3 = t\), \(x_4 = 0\). Hence, taking \(t = 1\) (say), we get a nontrivial solution: \(x_1 = -1\), \(x_2 = 1\), \(x_3 = 1\), \(x_4 = 0\).
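    For readers who want a machine check, here is a minimal sketch using Python with the SymPy library; it reproduces the reduction above and verifies the nontrivial solution.

        # Verify the row reduction and the nontrivial solution of Example 1 (SymPy).
        from sympy import Matrix

        A = Matrix([[1, -1, 2, -1],
                    [2,  2, 0,  1],
                    [3,  1, 2, -1]])

        R, pivots = A.rref()
        print(R)         # the reduced row-echelon form displayed above
        print(pivots)    # (0, 1, 3): x1, x2, x4 are leading, x3 is free

        x = Matrix([-1, 1, 1, 0])   # the nontrivial solution with t = 1
        print(A * x)                # the zero column, so x solves the system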

    The existence of a nontrivial solution in Example \(\PageIndex{1}\) is ensured by the presence of a parameter in the solution. This is due to the fact that there is a nonleading variable (\(x_3\) in this case). But there must be a nonleading variable here because there are four variables and only three equations (and hence at most three leading variables). This discussion generalizes to a proof of the following fundamental theorem.

    Theorem \(\PageIndex{1}\)

    If a homogeneous system of linear equations has more variables than equations, then it has a nontrivial solution (in fact, infinitely many).

    Proof. Suppose there are \(m\) equations in \(n\) variables where \(n > m\), and let \(R\) denote the reduced row-echelon form of the augmented matrix. If there are \(r\) leading variables, there are \(n - r\) nonleading variables, and so \(n - r\) parameters. Hence, it suffices to show that \(r < n\). But \(r \leq m\) because \(R\) has \(r\) leading 1s and \(m\) rows, and \(m < n\) by hypothesis. So \(r \leq m < n\), which gives \(r < n\).

    Note that the converse of Theorem \(\PageIndex{1}\) is not true: if a homogeneous system has nontrivial solutions, it need not have more variables than equations (the system \(x_1 + x_2 = 0\), \(2x_1 + 2x_2 = 0\) has nontrivial solutions but \(m = 2 = n\)).
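    A quick machine check of this example, again a minimal sketch using Python with SymPy: the system \(x_1 + x_2 = 0\), \(2x_1 + 2x_2 = 0\) has rank 1, so there is one parameter even though \(m = n = 2\).

        # The converse of Theorem 1 fails: m = n = 2, yet nontrivial solutions exist.
        from sympy import Matrix

        A = Matrix([[1, 1],
                    [2, 2]])
        print(A.rank())        # 1, so there is 2 - 1 = 1 parameter
        print(A.nullspace())   # [Matrix([[-1], [1]])]: x1 = -t, x2 = t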

    Theorem \(\PageIndex{1}\) is very useful in applications. The next example provides an illustration from geometry.

    Example \(\PageIndex{2}\)

    We call the graph of an equation \(ax^2 + bxy + cy^2 + dx + ey + f = 0\) a conic if the numbers \(a\), \(b\), and \(c\) are not all zero. Show that there is at least one conic through any five points in the plane that are not all on a line.

    Solution

    Let the coordinates of the five points be \((p_1, q_1)\), \((p_2, q_2)\), \((p_3, q_3)\), \((p_4, q_4)\), and \((p_5, q_5)\). The graph of \(ax^2 + bxy + cy^2 + dx + ey + f = 0\) passes through \((p_i, q_i)\) if

    \[ap_i^2 + bp_iq_i + cq_i^2 + dp_i + eq_i + f = 0 \nonumber \]

    This gives five equations, one for each \(i\), linear in the six variables \(a\), \(b\), \(c\), \(d\), \(e\), and \(f\). Hence, there is a nontrivial solution by Theorem \(\PageIndex{1}\). If \(a = b = c = 0\), the five points all lie on the line with equation \(dx + ey + f = 0\), contrary to assumption. Hence, one of \(a\), \(b\), \(c\) is nonzero.
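    The argument is constructive: the five conditions form a \(5 \times 6\) homogeneous system, and any nontrivial solution supplies a conic. A minimal sketch using Python with SymPy, with five illustrative points chosen only for the demonstration:

        # Build the 5 x 6 homogeneous system of Example 2 and find a nontrivial
        # solution (a, b, c, d, e, f).  The five points are illustrative choices.
        from sympy import Matrix

        points = [(1, 0), (0, 1), (-1, 0), (0, -1), (1, 1)]
        rows = [[p**2, p*q, q**2, p, q, 1] for (p, q) in points]
        M = Matrix(rows)

        # Five equations in six unknowns, so Theorem 1 guarantees a nonzero answer.
        for sol in M.nullspace():
            print(sol.T)   # coefficients of a conic through all five points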

    Linear Combinations and Basic Solutions

    As for rows, two columns are regarded as equal if they have the same number of entries and corresponding entries are the same. Let \(\mathbf{x}\) and \(\mathbf{y}\) be columns with the same number of entries. As for elementary row operations, their sum \(\mathbf{x} + \mathbf{y}\) is obtained by adding corresponding entries and, if \(k\) is a number, the scalar product \(k\mathbf{x}\) is defined by multiplying each entry of \(\mathbf{x}\) by \(k\). More precisely:

    \[\mbox{If } \mathbf{x} = \left[ \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array} \right] \mbox{and } \mathbf{y} = \left[ \begin{array}{c} y_1 \\ y_2 \\ \vdots \\ y_n \end{array} \right] \mbox{then } \mathbf{x} + \mathbf{y} = \left[ \begin{array}{c} x_1 + y_1 \\ x_2 + y_2 \\ \vdots \\ x_n + y_n \end{array} \right] \mbox{and } k\mathbf{x} = \left[ \begin{array}{c} kx_1 \\ kx_2 \\ \vdots \\ kx_n \end{array} \right]. \nonumber \]

    A sum of scalar multiples of several columns is called a linear combination of these columns. For example, \(s\mathbf{x} + t\mathbf{y}\) is a linear combination of \(\mathbf{x}\) and \(\mathbf{y}\) for any choice of numbers \(s\) and \(t\).

    Example \(\PageIndex{3}\)

    If \(\mathbf{x} = \left[ \begin{array}{r} 3 \\ -2 \\ \end{array} \right]\) and \(\mathbf{y} = \left[ \begin{array}{r} -1 \\ 1 \\ \end{array} \right]\), then \(2\mathbf{x} + 5\mathbf{y} = \left[ \begin{array}{r} 6 \\ -4 \\ \end{array} \right] + \left[ \begin{array}{r} -5 \\ 5 \\ \end{array} \right] = \left[ \begin{array}{r} 1 \\ 1 \\ \end{array} \right]\).
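    A one-line machine check of this arithmetic, again using Python with SymPy:

        # Check the linear combination in Example 3.
        from sympy import Matrix

        x = Matrix([3, -2])
        y = Matrix([-1, 1])
        print(2*x + 5*y)   # Matrix([[1], [1]])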

    Example \(\PageIndex{4}\)

    Let \(\mathbf{x} = \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right], \mathbf{y} = \left[ \begin{array}{r} 2 \\ 1 \\ 0 \end{array} \right]\) and \(\mathbf{z} = \left[ \begin{array}{r} 3 \\ 1 \\ 1 \end{array} \right]\). If \(\mathbf{v} = \left[ \begin{array}{r} 0 \\ -1 \\ 2 \end{array} \right]\) and \(\mathbf{w} = \left[ \begin{array}{r} 1 \\ 1 \\ 1 \end{array} \right]\), determine whether \(\mathbf{v}\) and \(\mathbf{w}\) are linear combinations of \(\mathbf{x}\), \(\mathbf{y}\) and \(\mathbf{z}\).

    Solution

    For \(\mathbf{v}\), we must determine whether numbers \(r\), \(s\), and \(t\) exist such that \(\mathbf{v} = r\mathbf{x} + s\mathbf{y} + t\mathbf{z}\), that is, whether

    \[\left[ \begin{array}{r} 0 \\ -1 \\ 2 \end{array} \right] = r \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right] + s \left[ \begin{array}{r} 2 \\ 1 \\ 0 \end{array} \right] + t \left[ \begin{array}{r} 3 \\ 1 \\ 1 \end{array} \right] = \left[ \begin{array}{c} r + 2s + 3t \\ s + t \\ r + t \end{array} \right] \nonumber \]

    Equating corresponding entries gives a system of linear equations \(r + 2s + 3t = 0\), \(s + t = -1\), and \(r + t = 2\) for \(r\), \(s\), and \(t\). By gaussian elimination, the solution is \(r = 2 - k\), \(s = -1 - k\), and \(t = k\) where \(k\) is a parameter. Taking \(k = 0\), we see that \(\mathbf{v} = 2\mathbf{x} - \mathbf{y}\) is a linear combination of \(\mathbf{x}\), \(\mathbf{y}\), and \(\mathbf{z}\).

    Turning to \(\mathbf{w}\), we again look for \(r\), \(s\), and \(t\) such that \(\mathbf{w} = r\mathbf{x} + s\mathbf{y} + t\mathbf{z}\); that is,

    \[\left[ \begin{array}{r} 1 \\ 1 \\ 1 \end{array} \right] = r \left[ \begin{array}{r} 1 \\ 0 \\ 1 \end{array} \right] + s \left[ \begin{array}{r} 2 \\ 1 \\ 0 \end{array} \right] + t \left[ \begin{array}{r} 3 \\ 1 \\ 1 \end{array} \right] = \left[ \begin{array}{c} r + 2s + 3t \\ s + t \\ r + t \end{array} \right] \nonumber \]

    leading to equations \(r + 2s + 3t = 1\), \(s + t = 1\), and \(r + t = 1\) for real numbers \(r\), \(s\), and \(t\). But this time there is no solution as the reader can verify, so \(\mathbf{w}\) is not a linear combination of \(\mathbf{x}\), \(\mathbf{y}\), and \(\mathbf{z}\).
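    Both conclusions can be verified by machine; in the minimal sketch below (Python with SymPy), an empty solution set means the column is not a linear combination of \(\mathbf{x}\), \(\mathbf{y}\), and \(\mathbf{z}\).

        # Example 4: is v (or w) a linear combination of x, y, z?
        from sympy import Matrix, linsolve, symbols

        r, s, t = symbols('r s t')
        x = Matrix([1, 0, 1]); y = Matrix([2, 1, 0]); z = Matrix([3, 1, 1])
        M = Matrix.hstack(x, y, z)        # columns x, y, z

        v = Matrix([0, -1, 2])
        w = Matrix([1, 1, 1])

        print(linsolve((M, v), r, s, t))  # {(2 - t, -1 - t, t)}: v = 2x - y at t = 0
        print(linsolve((M, w), r, s, t))  # EmptySet: w is not a linear combination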

    Our interest in linear combinations comes from the fact that they provide one of the best ways to describe the general solution of a homogeneous system of linear equations. When solving such a system with \(n\) variables \(x_1, x_2, \dots, x_n\), write the variables as a column matrix: \(\mathbf{x} = \left[ \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array} \right]\). The trivial solution is denoted \(\mathbf{0} = \left[ \begin{array}{c} 0 \\ 0 \\ \vdots \\ 0 \end{array} \right]\). As an illustration, the general solution in Example \(\PageIndex{1}\) is \(x_1 = -t\), \(x_2 = t\), \(x_3 = t\), and \(x_4 = 0\), where \(t\) is a parameter, and we would now express this by saying that the general solution is \(\mathbf{x} = \left[ \begin{array}{r} -t \\ t \\ t \\ 0 \end{array} \right]\), where \(t\) is arbitrary.

    Note

    The reason for using columns will be apparent later.

    Now let \(\mathbf{x}\) and \(\mathbf{y}\) be two solutions to a homogeneous system with \(n\) variables. Then any linear combination \(s\mathbf{x} + t\mathbf{y}\) of these solutions turns out to be again a solution to the system. More generally:

    \[ \mbox{ \textit{Any linear combination of solutions to a homogeneous system is again a solution.}} \nonumber \]

    In fact, suppose that a typical equation in the system is \(a_1x_1 + a_2x_2 + \dots + a_nx_n = 0\), and suppose that \(\mathbf{x} = \left[ \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array} \right]\), \(\mathbf{y} = \left[ \begin{array}{c} y_1 \\ y_2 \\ \vdots \\ y_n \end{array} \right]\) are solutions. Then \(a_1x_1 + a_2x_2 + \dots + a_nx_n = 0\) and \(a_1y_1 + a_2y_2 + \dots + a_ny_n = 0\). Hence \(s\mathbf{x} + t\mathbf{y} = \left[ \begin{array}{c} sx_1 + ty_1 \\ sx_2 + ty_2 \\ \vdots \\ sx_n + ty_n \end{array} \right]\) is also a solution because

    \[\begin{aligned} a_1(sx_1 + ty_1) &+ a_2(sx_2 + ty_2) + \dots + a_n(sx_n + ty_n) \\ &= [a_1(sx_1) + a_2(sx_2) + \dots + a_n(sx_n)] + [a_1(ty_1) + a_2(ty_2) + \dots + a_n(ty_n)] \\ &= s(a_1x_1 + a_2x_2 + \dots + a_nx_n) + t(a_1y_1 + a_2y_2 + \dots + a_ny_n) \\ &= s(0) + t(0)\\ &= 0\end{aligned} \nonumber \]

    A similar argument shows that the statement above is true for linear combinations of more than two solutions.
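    The verification can also be carried out symbolically. The sketch below (Python with SymPy) uses the system of Example \(\PageIndex{1}\) and two of its solutions, and confirms that \(s\mathbf{x} + t\mathbf{y}\) is a solution for every \(s\) and \(t\).

        # A linear combination of solutions is again a solution (symbolic check).
        from sympy import Matrix, symbols, simplify

        s, t = symbols('s t')
        A = Matrix([[1, -1, 2, -1],
                    [2,  2, 0,  1],
                    [3,  1, 2, -1]])          # coefficient matrix of Example 1
        x = Matrix([-1, 1, 1, 0])             # solution with parameter value 1
        y = Matrix([-2, 2, 2, 0])             # solution with parameter value 2

        combo = s*x + t*y
        print((A * combo).applyfunc(simplify))   # the zero column for all s, t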

    The remarkable thing is that every solution to a homogeneous system is a linear combination of certain particular solutions and, in fact, these solutions are easily computed using the gaussian algorithm. Here is an example.

    Example \(\PageIndex{5}\)

    Solve the homogeneous system with coefficient matrix

    \[A = \left[ \begin{array}{rrrr} 1 & -2 & 3 & -2 \\ -3 & 6 & 1 & 0 \\ -2 & 4 & 4 & -2 \\ \end{array} \right] \nonumber \]

    Solution

    The reduction of the augmented matrix to reduced form is

    \[\left[ \begin{array}{rrrr|r} 1 & -2 & 3 & -2 & 0 \\ -3 & 6 & 1 & 0 & 0 \\ -2 & 4 & 4 & -2 & 0 \\ \end{array} \right] \rightarrow \def\arraystretch{1.5} \left[ \begin{array}{rrrr|r} 1 & -2 & 0 & -\frac{1}{5} & 0 \\ 0 & 0 & 1 & -\frac{3}{5} & 0 \\ 0 & 0 & 0 & 0 & 0 \\ \end{array} \right] \nonumber \]

    so the solutions are \(x_1 = 2s + \frac{1}{5}t\), \(x_2 = s\), \(x_3 = \frac{3}{5}t\), and \(x_4 = t\) by gaussian elimination. Hence we can write the general solution \(\mathbf{x}\) in the matrix form

    \[\mathbf{x} = \left[ \begin{array}{r} x_1 \\ x_2 \\ x_3 \\ x_4 \end{array} \right] = \left[ \begin{array}{c} 2s + \frac{1}{5}t \\ s \\ \frac{3}{5}t \\ t \end{array} \right] = s \left[ \begin{array}{r} 2 \\ 1 \\ 0 \\ 0 \end{array} \right] + t \left[ \begin{array}{r} \frac{1}{5} \\ 0 \\ \frac{3}{5} \\ 1 \end{array} \right] = s\mathbf{x}_1 + t\mathbf{x}_2. \nonumber \]

    Here \(\mathbf{x}_1 = \left[ \begin{array}{r} 2 \\ 1 \\ 0 \\ 0 \end{array} \right]\) and \(\mathbf{x}_2 = \left[ \begin{array}{r} \frac{1}{5} \\ 0 \\ \frac{3}{5} \\ 1 \end{array} \right]\) are particular solutions determined by the gaussian algorithm.
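    These particular solutions can also be produced by machine; SymPy's null-space routine works from the reduced row-echelon form, so the minimal sketch below returns exactly \(\mathbf{x}_1\) and \(\mathbf{x}_2\).

        # Example 5: compute the particular solutions from the coefficient matrix.
        from sympy import Matrix

        A = Matrix([[ 1, -2, 3, -2],
                    [-3,  6, 1,  0],
                    [-2,  4, 4, -2]])

        print(A.rref()[0])       # the reduced form displayed above
        for b in A.nullspace():  # one solution per parameter
            print(b.T)           # [2, 1, 0, 0] and [1/5, 0, 3/5, 1]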


    The solutions \(\mathbf{x}_1\) and \(\mathbf{x}_2\) in Example \(\PageIndex{5}\) are denoted as follows:

    Definition \(\PageIndex{1}\): Basic Solutions

    The gaussian algorithm systematically produces solutions to any homogeneous linear system, called basic solutions, one for every parameter.

    Moreover, the algorithm gives a routine way to express every solution as a linear combination of basic solutions as in Example \(\PageIndex{5}\), where the general solution \(\mathbf{x}\) becomes

    \[\mathbf{x} = s \left[ \begin{array}{r} 2 \\ 1 \\ 0 \\ 0 \end{array} \right] + t \left[ \begin{array}{r} \frac{1}{5} \\ 0 \\ \frac{3}{5} \\ 1 \end{array} \right] = s \left[ \begin{array}{r} 2 \\ 1 \\ 0 \\ 0 \end{array} \right] + \frac{1}{5}t \left[ \begin{array}{r} 1 \\ 0 \\ 3 \\ 5 \end{array} \right] \nonumber \]

    Hence by introducing a new parameter \(r = t/5\) we can multiply the original basic solution \(\mathbf{x}_2\) by 5 and so eliminate fractions. For this reason:

    Convention:
    Any nonzero scalar multiple of a basic solution will still be called a basic solution.
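    For instance, multiplying \(\mathbf{x}_2\) from Example \(\PageIndex{5}\) by 5 clears the fractions and still gives a solution, as a quick machine check (Python with SymPy) confirms:

        # 5 * x2 = [1, 0, 3, 5] is still a solution of the system in Example 5.
        from sympy import Matrix

        A = Matrix([[ 1, -2, 3, -2],
                    [-3,  6, 1,  0],
                    [-2,  4, 4, -2]])
        print(A * Matrix([1, 0, 3, 5]))   # the zero column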

    In the same way, the gaussian algorithm produces basic solutions to every homogeneous system, one for each parameter (there are no basic solutions if the system has only the trivial solution). Moreover, every solution is given by the algorithm as a linear combination of these basic solutions (as in Example \(\PageIndex{5}\)). If \(A\) has rank \(r\), an earlier theorem on rank shows that there are exactly \(n - r\) parameters, and so \(n - r\) basic solutions. This proves:

    Theorem \(\PageIndex{2}\)

    Let \(A\) be an \(m \times n\) matrix of rank \(r\), and consider the homogeneous system in \(n\) variables with \(A\) as coefficient matrix. Then:

    1. The system has exactly \(n-r\) basic solutions, one for each parameter.
    2. Every solution is a linear combination of these basic solutions.
    Example \(\PageIndex{6}\)

    Find basic solutions of the homogeneous system with coefficient matrix \(A\), and express every solution as a linear combination of the basic solutions, where

    \[A = \left[ \begin{array}{rrrrr} 1 & -3 & 0 & 2 & 2 \\ -2 & 6 & 1 & 2 & -5 \\ 3 & -9 & -1 & 0 & 7 \\ -3 & 9 & 2 & 6 & -8 \end{array} \right] \nonumber \]

    Solution

    The reduction of the augmented matrix to reduced row-echelon form is

    \[\left[ \begin{array}{rrrrr|r} 1 & -3 & 0 & 2 & 2 & 0 \\ -2 & 6 & 1 & 2 & -5 & 0 \\ 3 & -9 & -1 & 0 & 7 & 0 \\ -3 & 9 & 2 & 6 & -8 & 0 \end{array} \right] \rightarrow \left[ \begin{array}{rrrrr|r} 1 & -3 & 0 & 2 & 2 & 0 \\ 0 & 0 & 1 & 6 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right] \nonumber \]

    so the general solution is \(x_1 = 3r - 2s - 2t\), \(x_2 = r\), \(x_3 = -6s + t\), \(x_4 = s\), and \(x_5 = t\) where \(r\), \(s\), and \(t\) are parameters. In matrix form this is

    \[\mathbf{x} = \left[ \begin{array}{r} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{array} \right] = \left[ \begin{array}{c} 3r - 2s - 2t \\ r \\ -6s + t \\ s \\ t \end{array} \right] = r \left[ \begin{array}{r} 3 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right] + s \left[ \begin{array}{r} -2 \\ 0 \\ -6 \\ 1 \\ 0 \end{array} \right] + t \left[ \begin{array}{r} -2 \\ 0 \\ 1 \\ 0 \\ 1 \end{array} \right] \nonumber \]

    Hence basic solutions are

    \[\mathbf{x}_1 = \left[ \begin{array}{r} 3 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right], \ \mathbf{x}_2 = \left[ \begin{array}{r} -2 \\ 0 \\ -6 \\ 1 \\ 0 \end{array} \right],\ \mathbf{x}_3 = \left[ \begin{array}{r} -2 \\ 0 \\ 1 \\ 0 \\ 1 \end{array} \right] \nonumber \]
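    Theorem \(\PageIndex{2}\) can be checked directly on this matrix: \(A\) is \(4 \times 5\) of rank 2 (two nonzero rows in the reduced form), so there should be \(5 - 2 = 3\) basic solutions. A minimal sketch using Python with SymPy:

        # Example 6: rank 2 and n - r = 5 - 2 = 3 basic solutions, as Theorem 2 predicts.
        from sympy import Matrix

        A = Matrix([[ 1, -3,  0, 2,  2],
                    [-2,  6,  1, 2, -5],
                    [ 3, -9, -1, 0,  7],
                    [-3,  9,  2, 6, -8]])

        print(A.rank())          # 2
        basis = A.nullspace()
        print(len(basis))        # 3 basic solutions
        for b in basis:
            print(b.T)           # [3,1,0,0,0], [-2,0,-6,1,0], [-2,0,1,0,1]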

