# 1.2: Systems of Equations, Algebraic Procedures

## Algebraic Procedures

- Use elementary operations to find the solution to a linear system of equations.
- Find the row-echelon form and reduced row-echelon form of a matrix.
- Determine whether a system of linear equations has no solution, a unique solution, or an infinite number of solutions from its row-echelon form.
- Solve a system of equations using Gaussian Elimination and Gauss-Jordan Elimination.
- Model a physical system with linear equations and then solve.

We have taken an in-depth look at graphical representations of systems of equations, as well as how to find possible solutions graphically. Our attention now turns to working with systems algebraically.

A **system of linear equations** is a list of equations, \[\begin{array}{c} a_{11}x_{1}+a_{12}x_{2}+\cdots +a_{1n}x_{n}=b_{1} \\ a_{21}x_{1}+a_{22}x_{2}+\cdots +a_{2n}x_{n}=b_{2} \\ \vdots \\ a_{m1}x_{1}+a_{m2}x_{2}+\cdots +a_{mn}x_{n}=b_{m} \end{array}\nonumber\] where \(a_{ij}\) and \(b_{i}\) are real numbers. The above is a system of \(m\) equations in the \(n\) variables, \(x_{1},x_{2},\cdots ,x_{n}\). Written more simply in terms of summation notation, the above can be written in the form \[\sum_{j=1}^{n}a_{ij}x_{j}=b_{i}, \text{ }i=1,2,3,\cdots ,m\nonumber\]

The relative size of \(m\) and \(n\) is not important here. Notice that we have allowed \(a_{ij}\) and \(b_{i}\) to be any real number. We can also call these numbers **scalars**. We will use this term throughout the text, so keep in mind that the term **scalar** just means that we are working with real numbers.

Now, suppose we have a system where \(b_{i} = 0\) for all \(i\). In other words every equation equals \(0\). This is a special type of system.

A system of equations is called **homogeneous** if each equation in the system is equal to \(0\). A homogeneous system has the form \[\begin{array}{c} a_{11}x_{1}+a_{12}x_{2}+\cdots +a_{1n}x_{n}= 0 \\ a_{21}x_{1}+a_{22}x_{2}+\cdots +a_{2n}x_{n}= 0 \\ \vdots \\ a_{m1}x_{1}+a_{m2}x_{2}+\cdots +a_{mn}x_{n}= 0 \end{array}\nonumber \] where \(a_{ij}\) are scalars and \(x_{j}\) are variables.
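One immediate consequence is that a homogeneous system always has at least the trivial solution \(x_{1}=\cdots =x_{n}=0\), since every left-hand side then sums to zero. A minimal illustration in code, using made-up coefficients (nothing here comes from the text):

```python
# Any coefficients will do; these are invented purely for illustration.
coeffs = [[1, -2, 3],
          [4, 0, -1]]

trivial = [0, 0, 0]  # x1 = x2 = x3 = 0

# Evaluate each left-hand side at the trivial solution.
lhs = [sum(a * x for a, x in zip(row, trivial)) for row in coeffs]
print(lhs)  # [0, 0] -- both equations reduce to 0 = 0
```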

Recall from the previous section that our goal when working with systems of linear equations was to find the point of intersection of the equations when graphed. In other words, we looked for the solutions to the system. We now wish to find these solutions algebraically. We want to find values for \(x_{1},\cdots ,x_{n}\) which solve all of the equations. If such a set of values exists, we call \(\left( x_{1},\cdots ,x_{n}\right)\) the **solution set**.

Recall the above discussions about the types of solutions possible. We will see that systems of linear equations will have one unique solution, infinitely many solutions, or no solution. Consider the following definition.

A system of linear equations is called **consistent** if there exists at least one solution. It is called **inconsistent** if there is no solution.

If you think of each equation as a condition which must be satisfied by the variables, consistent would mean there is some choice of variables which can satisfy **all** the conditions. Inconsistent would mean there is no choice of the variables which can satisfy all of the conditions.

The following sections provide methods for determining if a system is consistent or inconsistent, and finding solutions if they exist.

### Elementary Operations

We begin this section with an example. Recall from Example 1.1.1 that the solution to the given system was \(\left(x, y \right) = \left( -1, 4 \right)\).

Algebraically verify that \(\left(x, y \right) = \left( -1, 4 \right)\) is a solution to the following system of equations.

\[\begin{array}{c} x+y=3 \\ y-x=5 \end{array}\nonumber \]

###### Solution

By graphing these two equations and identifying the point of intersection, we previously found that \(\left(x, y \right) = \left( -1, 4 \right)\) is the unique solution.

We can verify algebraically by substituting these values into the original equations, and ensuring that the equations hold. First, we substitute the values into the first equation and check that it equals \(3\). \[x + y = (-1)+(4) = 3\nonumber \] This equals \(3\) as needed, so we see that \(\left( -1,4 \right)\) is a solution to the first equation. Substituting the values into the second equation yields \[y -x = (4) - (-1) = 4 + 1 = 5\nonumber \] which is true. For \(\left( x,y\right) =\left( -1,4\right)\) each equation is true and therefore, this is a solution to the system.
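This substitution check is easy to mechanize. The sketch below (the function name and data layout are illustrative choices, not from the text) encodes each equation as a pair of coefficients plus a right-hand side and tests a candidate point:

```python
def is_solution(equations, point):
    """Each equation is ((a, b), rhs), representing a*x + b*y = rhs."""
    x, y = point
    return all(a * x + b * y == rhs for (a, b), rhs in equations)

# The system x + y = 3 and y - x = 5, the latter written as -x + y = 5.
system = [((1, 1), 3), ((-1, 1), 5)]

print(is_solution(system, (-1, 4)))  # True
print(is_solution(system, (0, 0)))   # False
```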

Now, the interesting question is this: If you were not given these numbers to verify, how could you algebraically determine the solution? Linear algebra gives us the tools needed to answer this question. The following basic operations are important tools that we will utilize.

**Elementary operations** are those operations consisting of the following.

- Interchange the order in which the equations are listed.
- Multiply any equation by a nonzero number.
- Replace any equation with itself added to a multiple of another equation.
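The three operations above can be sketched directly in code. In this hedged illustration (all names are our own), each equation is stored as an augmented row \([a_{1},\cdots ,a_{n},b]\), and exact rational arithmetic keeps the rows clean:

```python
from fractions import Fraction

def swap(rows, i, j):
    # interchange the order in which two equations are listed
    rows[i], rows[j] = rows[j], rows[i]

def scale(rows, i, k):
    # multiply an equation by a nonzero number
    assert k != 0
    rows[i] = [k * v for v in rows[i]]

def add_multiple(rows, i, j, k):
    # replace equation i with itself plus k times equation j
    rows[i] = [v + k * w for v, w in zip(rows[i], rows[j])]

rows = [[1, 1, 7], [2, -1, 8]]   # x + y = 7 and 2x - y = 8
add_multiple(rows, 1, 0, -2)     # R2 <- R2 - 2 R1, giving -3y = -6
scale(rows, 1, Fraction(-1, 3))  # -3y = -6 becomes y = 2
swap(rows, 0, 1)                 # list y = 2 first
```

Because each step is reversible (swap again, scale by \(1/k\), add \(-k\) times the row back), no solutions are gained or lost.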

It is important to note that none of these operations will change the set of solutions of the system of equations. In fact, elementary operations are the *key tool* we use in linear algebra to find solutions to systems of equations.

Consider the following example.

Show that the system \[\begin{array}{c} x+y=7 \\ 2x-y=8 \end{array}\nonumber \] has the same solution as the system \[\begin{array}{c} x+y=7 \\ -3y=-6 \end{array}\nonumber \]

###### Solution

Notice that the second system has been obtained by taking the second equation of the first system and adding -2 times the first equation, as follows: \[2x-y + (-2)(x+y) = 8 + (-2)(7)\nonumber \] By simplifying, we obtain \[-3y=-6\nonumber \] which is the second equation in the second system. Now, from here we can solve for \(y\) and see that \(y=2\). Next, we substitute this value into the first equation as follows \[x+y=x+2=7\nonumber \] Hence \(x=5\) and so \(\left( x,y\right) = \left(5,2 \right)\) is a solution to the second system. We want to check if \(\left(5,2 \right)\) is also a solution to the first system. We check this by substituting \(\left(x, y \right) = \left(5,2 \right)\) into the system and ensuring the equations are true. \[\begin{array}{c} x+y = \left(5 \right)+ \left( 2 \right) = 7 \\ 2x-y= 2 \left(5 \right) - \left( 2 \right) = 8 \end{array}\nonumber \] Hence, \(\left(5,2 \right)\) is also a solution to the first system.
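As a numerical cross-check outside the text's derivation, a linear solver should return the same point for both systems. This sketch assumes NumPy is available:

```python
import numpy as np

# First system: x + y = 7, 2x - y = 8
A1, b1 = np.array([[1.0, 1.0], [2.0, -1.0]]), np.array([7.0, 8.0])
# Second system: x + y = 7, -3y = -6
A2, b2 = np.array([[1.0, 1.0], [0.0, -3.0]]), np.array([7.0, -6.0])

print(np.linalg.solve(A1, b1))  # [5. 2.]
print(np.linalg.solve(A2, b2))  # [5. 2.]
```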

This example illustrates how an elementary operation applied to a system of two equations in two variables does not affect the solution set. However, a linear system may involve many equations and many variables and there is no reason to limit our study to small systems. For any size of system in any number of variables, the solution set is still the collection of solutions to the equations. In every case, the above operations of Definition \(\PageIndex{4}\) do not change the set of solutions to the system of linear equations.

In the following theorem, we use the notation \(E_i\) to represent an equation, while \(b_i\) denotes a constant.

Suppose you have a system of two linear equations \[\begin{array}{c} E_{1}=b_{1}\\ E_{2}=b_{2} \end{array} \label{system}\] Then the following systems have the same solution set as \(\eqref{system}\):

- \[\begin{array}{c} E_{2}=b_{2}\\ E_{1}=b_{1} \end{array} \label{thm1.9.1}\]
- \[\begin{array}{c} E_{1}=b_{1} \\ kE_{2}=kb_{2}\\ \end{array} \label{thm1.9.2}\] for any scalar \(k\), provided \(k\neq0\).
- \[\begin{array}{c} E_{1}=b_{1} \\ E_{2}+kE_{1}=b_{2}+kb_{1} \end{array} \label{thm1.9.3}\] for any scalar \(k\) (including \(k=0\)).

Before we proceed with the proof of Theorem \(\PageIndex{1}\), let us consider this theorem in context of Example \(\PageIndex{2}\). Then, \[\begin{array}{cc} E_{1} = x+y, & b_{1} = 7 \\ E_{2} = 2x-y, & b_{2} = 8 \end{array}\nonumber \] Recall the elementary operations that we used to modify the system in the solution to the example. First, we added \(\left( -2 \right)\) times the first equation to the second equation. In terms of Theorem \(\PageIndex{1}\), this action is given by \[E_{2} + \left( -2 \right) E_{1} = b_{2} + \left( -2 \right)b_{1}\nonumber \] or \[2x-y + \left( -2 \right) \left(x+y \right) = 8 + \left( -2 \right) 7\nonumber \] This gave us the second system in Example \(\PageIndex{2}\), given by \[\begin{array}{c} E_{1} = b_{1} \\ E_{2} + \left( -2 \right) E_{1} = b_{2} + \left( -2 \right) b_{1} \end{array}\nonumber \]

From this point, we were able to find the solution to the system. Theorem \(\PageIndex{1}\) tells us that the solution we found is in fact a solution to the original system.

We will now prove Theorem \(\PageIndex{1}\).

**Proof**

- The proof that the systems \(\eqref{system}\) and \(\eqref{thm1.9.1}\) have the same solution set is as follows. Suppose that \(\left( x_{1},\cdots ,x_{n}\right)\) is a solution to \(E_{1}=b_{1},E_{2}=b_{2}\). We want to show that this is a solution to the system in \(\eqref{thm1.9.1}\) above. This is clear, because the system in \(\eqref{thm1.9.1}\) is the original system, but listed in a different order. Changing the order does not affect the solution set, so \(\left( x_{1},\cdots ,x_{n}\right)\) is a solution to \(\eqref{thm1.9.1}\).
- Next we want to prove that the systems \(\eqref{system}\) and \(\eqref{thm1.9.2}\) have the same solution set. That is, \(E_{1}=b_{1},E_{2}=b_{2}\) has the same solution set as the system \(E_{1}=b_{1},kE_{2}=kb_{2}\) provided \(k\neq 0\). Let \(\left( x_{1},\cdots ,x_{n}\right)\) be a solution of \(E_{1}=b_{1},E_{2}=b_{2}\). We want to show that it is a solution to \(E_{1}=b_{1},kE_{2}=kb_{2}\). Notice that the only difference between these two systems is that the second involves multiplying the equation \(E_{2}=b_{2}\) by the scalar \(k\). Recall that when you multiply both sides of an equation by the same number, the sides are still equal to each other. Hence if \(\left( x_{1},\cdots ,x_{n}\right)\) is a solution to \(E_{2}=b_{2}\), then it will also be a solution to \(kE_{2}=kb_{2}\). Hence, \(\left( x_{1},\cdots ,x_{n}\right)\) is also a solution to \(\eqref{thm1.9.2}\). Similarly, let \(\left( x_{1},\cdots ,x_{n}\right)\) be a solution of \(E_{1}=b_{1},kE_{2}=kb_{2}\). Then we can multiply the equation \(kE_{2}=kb_{2}\) by the scalar \(1/k\), which is possible only because we have required that \(k\neq 0\). Just as above, this action preserves equality and we obtain the equation \(E_{2}=b_{2}\). Hence \(\left( x_{1},\cdots ,x_{n}\right)\) is also a solution to \(E_{1}=b_{1},E_{2}=b_{2}\).
- Finally, we will prove that the systems \(\eqref{system}\) and \(\eqref{thm1.9.3}\) have the same solution set. We will show that any solution of \(E_{1}=b_{1},E_{2}=b_{2}\) is also a solution of \(\eqref{thm1.9.3}\), and conversely. Let \(\left( x_{1},\cdots ,x_{n}\right)\) be a solution to \(E_{1}=b_{1},E_{2}=b_{2}\). Then in particular it solves \(E_{1}=b_{1}\). Hence, it solves the first equation in \(\eqref{thm1.9.3}\). Similarly, it also solves \(E_{2}=b_{2}\). By our proof of \(\eqref{thm1.9.2}\), it also solves \(kE_{1}=kb_{1}\). Notice that if we add \(E_{2}\) and \(kE_{1}\), this sum is equal to \(b_{2}+kb_{1}\). Therefore, if \(\left( x_{1},\cdots ,x_{n}\right)\) solves \(E_{1}=b_{1},E_{2}=b_{2}\), it must also solve \(E_{2}+kE_{1}=b_{2}+kb_{1}\). Now suppose \(\left( x_{1},\cdots ,x_{n}\right)\) solves the system \(E_{1}=b_{1},\ E_{2}+kE_{1}=b_{2}+kb_{1}\). Then in particular it is a solution of \(E_{1}=b_{1}\). Again by our proof of \(\eqref{thm1.9.2}\), it is also a solution to \(kE_{1}=kb_{1}\). Now if we subtract these equal quantities from both sides of \(E_{2}+kE_{1}=b_{2}+kb_{1}\), we obtain \(E_{2}=b_{2}\), which shows that the solution also satisfies \(E_{1}=b_{1},E_{2}=b_{2}\).

Stated simply, the above theorem shows that the elementary operations do not change the solution set of a system of equations.

We will now look at an example of a system of three equations and three variables. Similarly to the previous examples, the goal is to find values for \(x,y,z\) such that each of the given equations is satisfied when these values are substituted in.

Find the solutions to the system,

\[\begin{array}{c} x+3y+6z=25 \\ 2x+7y+14z=58 \\ 2y+5z=19 \end{array} \label{solvingasystem1}\]

###### Solution

We can relate this system to Theorem \(\PageIndex{1}\) above. In this case, we have \[\begin{array}{c c} E_{1} = x + 3y + 6z, & b_{1} = 25\\ E_{2} = 2x+7y+14z, & b_{2} = 58 \\ E_{3} = 2y+5z, & b_{3} = 19 \end{array}\nonumber \] Theorem \(\PageIndex{1}\) claims that if we do elementary operations on this system, we will not change the solution set. Therefore, we can solve this system using the elementary operations given in Definition \(\PageIndex{4}\). First, replace the second equation by \(\left( -2\right)\) times the first equation added to the second. This yields the system \[\begin{array}{c} x+3y+6z=25 \\ y+2z=8 \\ 2y+5z=19 \end{array} \label{solvingasystem2}\] Now, replace the third equation with \(\left( -2\right)\) times the second added to the third. This yields the system \[\begin{array}{c} x+3y+6z=25 \\ y+2z=8 \\ z=3 \end{array} \label{solvingasystem3}\] At this point, we can easily find the solution. Simply take \(z=3\) and substitute this back into the previous equation to solve for \(y\), and similarly to solve for \(x\). \[\begin{array}{c} x + 3y + 6 \left(3 \right) = x + 3y + 18 = 25\\ y + 2 \left(3 \right) = y + 6 = 8 \\ z = 3 \end{array}\nonumber \] The second equation is now \[y+6=8\nonumber \] You can see from this equation that \(y = 2\). Therefore, we can substitute this value into the first equation as follows: \[x + 3 \left(2 \right) + 18 = 25\nonumber \] By simplifying this equation, we find that \(x=1\). Hence, the solution to this system is \(\left( x,y,z \right) = \left( 1,2,3 \right)\). This process is called **back substitution**.
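The two elimination steps and the back substitution above can be traced in code. This is an illustrative reconstruction of the example's arithmetic (not a general elimination algorithm), with the rows hard-coded:

```python
# Augmented rows [a, b, c, d] for the equations ax + by + cz = d.
rows = [[1, 3, 6, 25],   # x + 3y + 6z = 25
        [2, 7, 14, 58],  # 2x + 7y + 14z = 58
        [0, 2, 5, 19]]   # 2y + 5z = 19

def add_multiple(rows, i, j, k):
    # replace row i with (row i) + k * (row j)
    rows[i] = [v + k * w for v, w in zip(rows[i], rows[j])]

add_multiple(rows, 1, 0, -2)  # R2 <- R2 - 2 R1  ->  y + 2z = 8
add_multiple(rows, 2, 1, -2)  # R3 <- R3 - 2 R2  ->  z = 3

# Back substitution, working from the last equation upward.
z = rows[2][3]                  # z = 3
y = rows[1][3] - 2 * z          # y + 2z = 8       ->  y = 2
x = rows[0][3] - 3 * y - 6 * z  # x + 3y + 6z = 25 ->  x = 1
print((x, y, z))                # (1, 2, 3)
```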

Alternatively, in \(\eqref{solvingasystem3}\) you could have continued as follows. Add \(\left( -2\right)\) times the third equation to the second and then add \(\left( -6\right)\) times the third to the first. This yields \[ \begin{array}{c} x+3y=7 \\ y=2 \\ z=3 \end{array}\nonumber \] Now add \(\left( -3\right)\) times the second to the first. This yields \[ \begin{array}{c} x=1 \\ y=2 \\ z=3 \end{array}\nonumber \] a system which has the same solution set as the original system. This avoided back substitution and led to the same solution set. You may use whichever approach you prefer, as both methods lead to the correct solution, \(\left( x,y,z \right) = \left(1,2,3\right)\).