
2.5: Solution Sets for Systems of Linear Equations


    Algebra problems can have multiple solutions. For example, \(x(x-1)=0\) has two solutions: \(0\) and \(1\). By contrast, equations of the form \(Ax=b\) with \(A\) a linear operator have the following property.

    If \(A\) is a linear operator and \(b\) is known, then \(Ax=b\) has either

    [1.] One solution

    [2.] No solutions

    [3.] Infinitely many solutions

    The Geometry of Solution Sets: Hyperplanes

    Consider the following algebra problems and their solutions:

    [1.] \(6x=12\), one solution: \(2\)

    [2a.] \(0x=12\), no solution

    [2b.] \(0x=0\), one solution for each number \(x\): \(x\)

    In each case the linear operator is a \(1\times 1\) matrix. In the first case, the linear operator is invertible; in the other two cases it is not. In the first case, the solution set is a single point on the number line, in the second case it is empty, and in the third case it is the whole number line.
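    To make the invertibility distinction concrete, here is a minimal numerical sketch (using NumPy, an illustrative choice rather than part of the text): a linear solver succeeds exactly when the \(1\times 1\) operator is invertible.

```python
import numpy as np

# Case 1: the operator (6) is invertible, so 6x = 12 has exactly one solution.
print(np.linalg.solve(np.array([[6.0]]), np.array([12.0])))  # [2.]

# Cases 2a and 2b: the operator (0) is not invertible; the solver cannot answer,
# since 0x = 12 has no solution and 0x = 0 has infinitely many.
try:
    np.linalg.solve(np.array([[0.0]]), np.array([12.0]))
except np.linalg.LinAlgError as err:
    print(err)  # reports a singular matrix
```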

    Let's examine similar situations with larger matrices.

    [1.] \(\begin{pmatrix}6 &0 \\0 &2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} =\begin{pmatrix}12 \\ 6\end{pmatrix}\), one solution: \(\begin{pmatrix}2 \\ 3\end{pmatrix}\)

    [2a.] \(\begin{pmatrix}1 &3 \\0 &0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} =\begin{pmatrix}4 \\ 1 \end{pmatrix}\), no solutions

    [2bi.] \(\begin{pmatrix}1 &3 \\0 &0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} =\begin{pmatrix}4 \\ 0\end{pmatrix} \), one solution for each number \(y\): \(\begin{pmatrix}4-3y \\ y\end{pmatrix} \)

    [2bii.] \(\begin{pmatrix}0 &0 \\0 &0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} =\begin{pmatrix}0 \\ 0\end{pmatrix} \), one solution for each pair of numbers \(x,y\):\(\begin{pmatrix}x\\ y\end{pmatrix}\)

    Again, in the first case the linear operator is invertible, while in the other cases it is not. When the operator is not invertible, the solution set can be empty, a line in the plane, or the plane itself.
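    The same three outcomes can be checked symbolically. The sketch below (using SymPy's linsolve, an assumption of this illustration rather than anything from the text) returns the full solution set in each case: a single point, the empty set, or a line's worth of solutions.

```python
from sympy import Matrix, linsolve, symbols

x, y = symbols('x y')

# Invertible operator: exactly one solution, the point (2, 3).
print(linsolve((Matrix([[6, 0], [0, 2]]), Matrix([12, 6])), x, y))

# Singular operator, inconsistent right-hand side: no solutions.
print(linsolve((Matrix([[1, 3], [0, 0]]), Matrix([4, 1])), x, y))

# Singular operator, consistent right-hand side: a line of solutions (4 - 3y, y).
print(linsolve((Matrix([[1, 3], [0, 0]]), Matrix([4, 0])), x, y))
```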

    For a system of \(r\) equations in \(k\) variables, a number of different outcomes are possible. For example, consider the case of \(r\) equations in three variables. Each of these equations is the equation of a plane in three-dimensional space. To find solutions to the system of equations, we look for the common intersection of the planes (if an intersection exists). Here we have five different possibilities:

    [1.] \(\textbf{Unique Solution.}\) The planes have a unique point of intersection.

    [2a.] \(\textbf{No solutions.}\) Some of the equations are contradictory, so no solutions exist.

    [2bi.] \(\textbf{Line.}\) The planes intersect in a common line; any point on that line then gives a solution to the system of equations.

    [2bii.] \(\textbf{Plane.}\) Perhaps you only had one equation to begin with, or else all of the equations coincide geometrically. In this case, you have a plane of solutions, with two free parameters.

    [2biii.] \(\textbf{All of } \mathbb{R}^3.\) If you start with no information, then any point in \(\mathbb{R}^3\) is a solution. There are three free parameters.

    In general, for systems of equations with \(k\) unknowns, there are \(k+2\) possible outcomes, corresponding to the possible numbers (i.e. \(0,1,2,\dots,k\)) of free parameters in the solution set, plus the possibility of no solutions. These types of "solution sets" are "hyperplanes", generalizations of planes that behave like planes in \(\mathbb{R}^3\) in many ways.
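    As a computational companion to this count (a sketch only; the matrices below are illustrative choices, not taken from the text), the number of free parameters of a consistent system equals the number of variables minus the number of pivot columns of the coefficient matrix.

```python
from sympy import Matrix

def free_parameters(A):
    """Number of free parameters of a consistent system with coefficient matrix A."""
    _, pivots = A.rref()        # pivot column indices of the reduced row echelon form
    return A.cols - len(pivots)

# Two planes in R^3 meeting in a line: one free parameter.
print(free_parameters(Matrix([[1, 1, 1],
                              [0, 1, 2]])))   # 1

# A single plane in R^3: two free parameters.
print(free_parameters(Matrix([[1, 1, 1]])))   # 2
```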

    Particular Solution \(+\) Homogeneous Solutions

    In the standard approach, variables corresponding to columns that do not contain a pivot (after going to reduced row echelon form) are \(\textit{free}\). We call them non-pivot variables. They index elements of the solution set by acting as coefficients of vectors.

    Non-pivot columns determine terms of the solutions:

    \[\begin{pmatrix}
    1 & 0 & 1 & -1 \\
    0 & 1 & -1& 1 \\
    0 &0 & 0 & 0 \\
    \end{pmatrix}
    \begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix}
    =
    \begin{pmatrix}1\\1\\0\end{pmatrix}
    \Leftrightarrow
    \left\{
    \begin{array}{rcl}
    1x_1 +0x_2+ 1x_3 - 1x_4 &=& 1 \\
    0x_1 +1x_2 - 1x_3 + 1x_4 &=& 1 \\
    0x_1 +0x_2 + 0x_3 + 0x_4 &=& 0
    \end{array}
    \right.
    \]
    Following the standard approach, express the pivot variables in terms of the non-pivot variables and add "freebie equations". Here \(x_3\) and \(x_4\) are the non-pivot variables.
    $$\begin{eqnarray*}
    \left.
    \begin{array}{rcl}
    x_1 & = &1 -x_3+x_4 \\
    x_2 & = &1 +x_3-x_4 \\
    x_3 & = &\phantom{1+~\,}x_3\\
    x_4 & =&\phantom{1+x_3+~}x_4
    \end{array}
    \right\}
    \Leftrightarrow
    \begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix}
    = \begin{pmatrix}1\\1\\0\\0\end{pmatrix} + x_3\begin{pmatrix}-1\\1\\1\\0\end{pmatrix} + x_4\begin{pmatrix}1\\-1\\0\\1\end{pmatrix}
    \end{eqnarray*}$$
    The preferred way to write a solution set is with set notation. $$S = \left\{\begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix} = \begin{pmatrix}1\\1\\ 0\\0\end{pmatrix} + \mu_1 \begin{pmatrix}-1\\1\\1\\0\end{pmatrix} + \mu_2 \begin{pmatrix}1\\-1\\ 0 \\1\end{pmatrix} : \mu_1,\mu_2\in {\mathbb R} \right\}$$
    Notice that the first two components of the second two terms come from the non-pivot columns (with their signs reversed).
    Another way to write the solution set is
    \[S= \left\{ X_0 + \mu_1 Y_1 + \mu_2 Y_2 : \mu_1,\mu_2 \in {\mathbb R} \right\} \]
    where
    \[X_0= \begin{pmatrix}1\\1\\0 \\0\end{pmatrix}, Y_1=\begin{pmatrix}-1\\1\\1\\0\end{pmatrix} , Y_2= \begin{pmatrix}1\\-1\\0 \\1\end{pmatrix}
    \]

    Here \(X_0\) is called a particular solution while \(Y_1\) and \(Y_2\) are called homogeneous solutions.
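    The same decomposition can be read off by machine. The following sketch (using SymPy; an illustrative choice, not part of the text) recovers the homogeneous solutions as a basis of the null space of the coefficient matrix, and a particular solution by setting the free variables to zero.

```python
from sympy import Matrix, linsolve, symbols

M = Matrix([[1, 0, 1, -1],
            [0, 1, -1, 1],
            [0, 0, 0, 0]])
V = Matrix([1, 1, 0])

# Homogeneous solutions: a basis of the null space of M.
Y1, Y2 = M.nullspace()            # (-1, 1, 1, 0) and (1, -1, 0, 1)

# A particular solution: solve MX = V, then set the free variables x3, x4 to zero.
x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
(general,) = linsolve((M, V), x1, x2, x3, x4)
X0 = Matrix(list(general)).subs({x3: 0, x4: 0})   # (1, 1, 0, 0)

print(X0.T, Y1.T, Y2.T)
```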

    Linearity and These Parts

    With the previous example in mind, let's say that the matrix equation \(MX=V\) has solution set \(\{ X_0 + \mu_1 Y_1 + \mu_2 Y_2 : \mu_1,\mu_2 \in {\mathbb R} \}\). Recall that matrices are linear operators. Thus
    $$M( X_0 + \mu_1 Y_1 + \mu_2 Y_2) = MX_0 + \mu_1MY_1 + \mu_2MY_2 =V$$
    for \(\textit{any}\) \(\mu_1, \mu_2 \in \mathbb{R}\). Choosing \(\mu_1=\mu_2=0\), we obtain
    $$MX_0=V\, .$$
    This is why \(X_0\) is an example of a \(\textit{particular solution}\).

    Setting \(\mu_1=1, \mu_2=0\) gives \(M(X_0+Y_1)=V\); subtracting the particular-solution equation \(MX_0=V\), we obtain
    $$MY_1=0\, .$$
    Likewise, setting \(\mu_1=0, \mu_2=1\), we obtain $$MY_2=0\, .$$
    Here \(Y_1\) and \(Y_2\) are examples of what are called \(\textit{homogeneous}\) solutions to the system. They \(\textit {do not}\) solve the original equation \(MX=V\), but instead its associated \(\textit {homogeneous equation}\) \(M Y =0\).

    One of the fundamental lessons of linear algebra: the solution set to \(Ax=b\) with \(A\) a linear operator consists of a particular solution plus homogeneous solutions.

    \[\textit{general solution} = \textit{particular solution} + \textit{homogeneous solutions}.\]

    Example \(\PageIndex{1}\):

    Consider the matrix equation of the previous example. It has solution set
    \[S = \left\{\begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix} = \begin{pmatrix}1\\1\\0 \\0\end{pmatrix} + \mu_1 \begin{pmatrix}-1\\1\\1\\0\end{pmatrix} + \mu_2 \begin{pmatrix}1\\-1\\ 0\\1\end{pmatrix} : \mu_1,\mu_2 \in {\mathbb R} \right\} \]
    Then \(MX_0=V\) says that \(\begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix} =
    \begin{pmatrix}1\\1\\0 \\ 0\end{pmatrix}\) solves the original matrix equation, which is certainly true, but this is not the only solution.

    \(MY_1=0\) says that \(\begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix} = \begin{pmatrix}-1\\1\\1\\ 0\end{pmatrix}\) solves the homogeneous equation.

    \(MY_2=0\) says that \(\begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix} =
    \begin{pmatrix}1\\-1\\0 \\1\end{pmatrix}\) solves the homogeneous equation.

    Notice how adding any multiple of a homogeneous solution to the particular solution yields another solution of the original equation \(MX=V\).
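    As a final numerical check (a sketch with NumPy, not part of the original example), one can verify directly that \(X_0\) solves \(MX=V\), that \(Y_1\) and \(Y_2\) solve the homogeneous equation, and that any combination \(X_0+\mu_1Y_1+\mu_2Y_2\) again solves \(MX=V\).

```python
import numpy as np

M = np.array([[1, 0, 1, -1],
              [0, 1, -1, 1],
              [0, 0, 0, 0]], dtype=float)
V = np.array([1.0, 1.0, 0.0])

X0 = np.array([1.0, 1.0, 0.0, 0.0])    # particular solution
Y1 = np.array([-1.0, 1.0, 1.0, 0.0])   # homogeneous solutions
Y2 = np.array([1.0, -1.0, 0.0, 1.0])

assert np.allclose(M @ X0, V)          # M X0 = V
assert np.allclose(M @ Y1, 0)          # M Y1 = 0
assert np.allclose(M @ Y2, 0)          # M Y2 = 0

# Adding any multiples of the homogeneous solutions to X0 gives another solution.
for mu1, mu2 in [(1.0, 0.0), (0.0, 1.0), (2.5, -3.0)]:
    assert np.allclose(M @ (X0 + mu1 * Y1 + mu2 * Y2), V)
print("all checks passed")
```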
    

    Contributor


    This page titled 2.5: Solution Sets for Systems of Linear Equations is shared under a not declared license and was authored, remixed, and/or curated by David Cherney, Tom Denton, & Andrew Waldron.
