
2.1: Systems of Linear Equations


    Introduction. Consistent and Inconsistent Linear Systems

     

In Chapter 1 we touched upon the question of whether two lines or two planes intersect. For two planes the question can be resolved by finding equations for the planes and checking whether there are points that satisfy both equations simultaneously. We can write this in the form \[ \left\{ \begin{array} {rr} a_1x_1 &+& a_2x_2 &+& a_3x_3 &=& k_1 \\ b_1x_1 &+& b_2x_2 &+& b_3x_3 &=& k_2 \end{array} \right. \nonumber\] which we will call a system of two linear equations in three unknowns.

    Definition \(\PageIndex{1}\)
     
    A linear equation is an equation that can be written in the form \[ a_1x_1 + a_2x_2 + \ldots + a_nx_n = k. \nonumber\] The numbers \(a_1,a_2,\ldots,a_n\) are called the coefficients and the variables \(x_1,x_2, \ldots, x_n\) the unknowns. The term \(k\) is referred to as the constant term.
    Example \(\PageIndex{2}\)
     
    The equation \[ 3x_1 + 5x_2 = 4x_3 - 2x_4 + 10 \nonumber\] is a linear equation, as it can be rewritten as \[ 3x_1 + 5x_2 - 4x_3 + 2x_4 = 10. \nonumber\] By contrast, the equation \[ x_1 + 2x_1x_2 - 3x_2 = 5 \nonumber\] is not linear because of the nonlinear term \(2x_1x_2\).
    Definition \(\PageIndex{3}\)
     
    A set of one or more linear equations is called a system of linear equations (or linear system, for short). In the case of \(m\) linear equations in the variables \(x_1,x_2,\ldots, x_n\) we speak of a system of \(m\) equations in \(n\) unknowns. The most general system then looks as follows: \[ \left\{\begin{array}{ccccccccc} a_{11}x_1\! & \!+\!&\!a_{12}x_2\! & \!+\!&\! \ldots\! & \!+\!&\!a_{1n}x_n \! & \!=\!&\! b_1 \\ a_{21}x_1 \quad \! & \!+\!&\!a_{22}x_2\! & \!+\!&\!\ldots\! & \!+\!&\!a_{2n}x_n \! & \!=\!&\! b_2 \\ \vdots \! & \! \!&\! \vdots\! & \! \!&\!\cdots\! & \! \!&\! \vdots \! & \! \!&\! \vdots \\ a_{m1}x_1 \quad \! & \!+\!&\!a_{m2}x_2\! & \!+\!&\! \ldots\! & \!+\!&\!a_{mn}x_n \! & \!=\!&\! b_m \\ \end{array} \right. \nonumber\]

    Of course, if we have equations we want to solve them. Here is what we mean by that in a precise way.

    Definition \(\PageIndex{4}\)
     
    A solution of a linear system is an ordered list of \(n\) values \((c_1, c_2, \ldots, c_n)\), or, depending on the context, a vector \( \left[\begin{array}{r}c_1 \\ c_2 \\ \vdots \\ c_n \end{array}\right]\) such that substitution of \[ x_1 = c_1, x_2 = c_2, \ldots, x_n = c_n \nonumber\] into each of the equations yields a true identity. The solution set or general solution of a system is the set of all solutions.
    Example \(\PageIndex{5}\)
     
    Two solutions for the system of equations \[ \left\{\begin{array} {rr} 2x_1&+&3x_2&+& x_3&=&0\\ 3x_1&+& x_2&+& 5x_3&=&7\\ \end{array}\right. \nonumber\] are given by \[ (1, -1, 1) \quad \text{and} \quad (5, -3, -1) \nonumber\] or, equivalently, by \[ \left[\begin{array}{r} 1\\-1\\1 \end{array}\right] \quad \text{and} \quad \left[\begin{array}{r} 5\\-3\\-1 \end{array}\right]. \nonumber\] For instance, substitution of the second proposed solution yields \[ \left\{\begin{array} {rr} 2\cdot 5&+&3\cdot(-3)&+& (-1)&=&0\\ 3\cdot 5&+& (-3)&+& 5\cdot(-1)&=&7.\\ \end{array}\right. \nonumber\] which are both true identities.
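Substitution checks like the one above are easy to automate. The following is a minimal Python sketch (our own illustration, not part of the text; the function name is ours) that tests whether a candidate vector satisfies every equation of a system, given the coefficient rows and the constant terms:

```python
# Verify a candidate solution by substituting it into every equation.
# Exact integer arithmetic; for floating-point data compare with a tolerance.
def is_solution(coefficients, rhs, candidate):
    for row, b in zip(coefficients, rhs):
        if sum(a * c for a, c in zip(row, candidate)) != b:
            return False
    return True

A = [[2, 3, 1],
     [3, 1, 5]]
b = [0, 7]
print(is_solution(A, b, (1, -1, 1)))   # True
print(is_solution(A, b, (5, -3, -1)))  # True
print(is_solution(A, b, (1, 1, 1)))    # False: 2 + 3 + 1 = 6, not 0
```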

The solution set may be empty, as in the following example.

    Example \(\PageIndex{6}\)
     
    The system \[ \left\{\begin{array} {rr} 2x_1&+&3x_2&+& x_3&=&5\\ 3x_1&+& x_2&+& 4x_3&=&7\\ 4x_1&+& 6x_2&+& 2x_3&=&8\\ \end{array}\right. \nonumber\] has no solutions because the first equation conflicts with the third equation: if \((c_1,c_2,c_3)\) is a triple that satisfies \[ 2c_1 + 3c_2 + c_3 = 5 \nonumber\] then automatically \[ 4c_1 + 6c_2 + 2c_3 = 2(2c_1 + 3c_2 + c_3) = 10 \neq 8 \nonumber\] so \((c_1,c_2,c_3)\) cannot also be a solution of the third equation.

If the solution set of a system is empty, the system is said to be inconsistent. This concept and its opposite are sufficiently important to be properly defined.

    Definition \(\PageIndex{7}\)
     
    A system of linear equations is consistent if it has at least one solution. Otherwise it is called inconsistent.
    Example \(\PageIndex{8}\)
     
The simplest inconsistent system may well be the system consisting of the single equation \[ 0x_1 + 0x_2 + \ldots + 0x_n = 1. \nonumber\] As we will see later, this conflicting equation in a sense pops up in any inconsistent system.

    A consistent system of one equation in \(n\) unknowns is easily solved. Any unknown with a nonzero coefficient can be expressed in the other unknowns, and the other unknowns can be chosen freely. In words this may look more complicated than it is, as the following example illustrates.

    Example \(\PageIndex{9}\)
     
Find all solutions of the following equation in the variables \(x_1,\ldots,x_5\): \[ x_1 + 4x_2 + 5x_3 - x_5 = 7 \nonumber\] One way to denote the set of solutions: \[ \left\{\begin{array}{l} x_1 = 7 - 4x_2 -5x_3 + x_5 \\ x_2,\, x_3,\, x_4, \text{ and } x_5 \text{ are free} \end{array} \right. \nonumber\] By this we mean: if we assign arbitrary values to the variables \(x_2, x_3, x_4\) and \(x_5\), say \[ x_2 = c_1, \quad x_3 = c_2, \quad x_4 = c_3, \quad x_5 = c_4, \nonumber\] and put \[ x_1 = 7 - 4c_1 -5c_2 + c_4, \nonumber\] then \[ x_1 + 4x_2 + 5x_3 - x_5 = (7 - 4c_1 -5c_2 + c_4) + 4c_1 +5c_2 - c_4 = 7, \nonumber\] so \[ (7 - 4c_1 -5c_2 + c_4,\, c_1,\, c_2,\, c_3,\, c_4) \nonumber\] is indeed a solution of the given equation. However, this is not the only way to write down the general solution: in this example almost any set of four variables can act as free variables. The descriptions \[ \left\{\begin{array}{l} x_5 = -7 +x_1 +4x_2 +5x_3 \\ x_1,\, x_2,\, x_3, \text{ and } x_4 \text{ are free} \end{array} \right. \nonumber\] and \[ \left\{\begin{array}{l} x_2 = \frac74 - \frac14x_1 -\frac54x_3+\frac14x_5 \\ x_1,\, x_3,\, x_4, \text{ and } x_5 \text{ are free} \end{array} \right. \nonumber\] are just as good, though you might want to avoid fractions. The only set of four variables that cannot act as free variables is \(\{x_1,x_2,x_3,x_5\}\): the remaining variable \(x_4\) does not occur in the equation, so it cannot be solved for.
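The claim that every choice of values for the free variables produces a solution can be spot-checked numerically. A small sketch (an illustration only; the sampling range is an arbitrary choice of ours):

```python
import random

# Assign arbitrary values to the free variables x2, x3, x4, x5, compute
# x1 = 7 - 4*x2 - 5*x3 + x5, and verify the original equation.
for _ in range(1000):
    x2, x3, x4, x5 = (random.randint(-10, 10) for _ in range(4))
    x1 = 7 - 4*x2 - 5*x3 + x5
    # x4 does not occur in the equation, so it is completely unconstrained
    assert x1 + 4*x2 + 5*x3 - x5 == 7
print("every sampled choice of the free variables yields a solution")
```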

The idea behind all methods for finding the general solution of a linear system is the same: rewrite the system in successively simpler forms, essentially by eliminating variables from equations. We illustrate this with an example.

    Example \(\PageIndex{10}\)
     
    We solve the system \[ \left\{\begin{array} {rr} 2x_1&-&5x_2&=&-2\\ 4x_1&-&7x_2&=& 2 \end{array}\right. \nonumber\] First method: from the first equation it follows that \[ 2x_1 = -2 + 5x_2 \quad \iff x_1 = -1 + \tfrac52 x_2 \nonumber\] Substitution of the expression for \(x_1\) into the second equation yields \[ 4(-1 + \tfrac52 x_2) - 7x_2 = 2, \nonumber\] an equation with \(x_2\) as single unknown (the jargon: \(x_1\) has been eliminated), and then \[ -4 +10x_2 - 7x_2 = 2 \quad \iff 3x_2 = 6 \quad \iff x_2 = 2, \nonumber\] and finally \[ x_1 = -1 + \tfrac52 x_2 = -1 + \tfrac52\cdot2 = 4. \nonumber\] Thus we have found that there is a unique solution: \[ \left\{\begin{array}{l} x_1 = 4 \\ x_2 = 2 \end{array} \right. \nonumber\] There is nothing wrong with this method, but with more than two equations it has the tendency to become messy. Second method: take clever combinations of the equations to eliminate variables. For the above example we may for instance subtract the first equation twice from the second. Think a moment why this is okay. It is the crucial step in the elimination method we will explain in the next subsection. \[ \left\{\begin{array} {rr} 2x_1&-&5x_2&=&-2\\ 4x_1&-&7x_2&=& 2 \end{array}\right. \quad\Rightarrow\quad \left\{\begin{array} {rr} 2x_1&-&5x_2&=&-2\\ & &3x_2&=& 6 \end{array}\right. \nonumber\] Again we see that \(x_2 = 2\), in fact the equation \(3x_2 = 6\) is the same equation that we arrived at with the substitution method above, and substitution of this into the first equation again yields \[ 2x_1 - 5\cdot2 = -2 \quad \quad \iff \quad x_1 = 4. \nonumber\]
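Hand computations like the one above can be cross-checked with a library routine. A minimal sketch, assuming NumPy is available (its solver performs elimination internally):

```python
import numpy as np

# The 2 x 2 system of the example: coefficient matrix and right-hand side.
A = np.array([[2.0, -5.0],
              [4.0, -7.0]])
b = np.array([-2.0, 2.0])
print(np.linalg.solve(A, b))  # [4. 2.], matching x1 = 4, x2 = 2
```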

    Solving a linear system by elimination

     

    We start with an example of three equations in three unknowns:

    Example \(\PageIndex{11}\)
     
\[ \left\{\begin{array} {rr} x_1 &+& 3x_2 &-& 2x_3 &=& 4 \\ 3x_1 &+& 7x_2 &-& 2x_3 &=& 8 \\ 2x_1 &+& 10x_2 &-& 9x_3 &=& 4 \end{array} \right. \nonumber\] We can simplify this system by successively eliminating unknowns from equations by combining equations in a clever way. We can for instance eliminate the variable \(x_1\) from the second equation by subtracting the first equation three times from the second equation, and likewise we can subtract the first equation twice from the third equation: \[ \left\{\begin{array} {rr} x_1 &+& 3x_2 &-& 2x_3 &=& 4 \\ 3x_1 &+& 7x_2 &-& 2x_3 &=& 8 \\ 2x_1 &+& 10x_2 &-& 9x_3 &=& 4 \end{array} \right. \quad \Longrightarrow \quad \left\{\begin{array} {rr} x_1 &+& 3x_2 &-& 2x_3 &=& 4 \\ & & -2x_2 &+& 4x_3 &=& -4 \\ & & 4x_2 &-& 5x_3 &=& -4 \end{array} \right. \nonumber\] With the arrow we express that any solution of the system on the left is also a solution of the system on the right. Now, why is this okay; why is it allowed to 'subtract equations'? Let's introduce the shorthand notation \[ L_1 = x_1 +3x_2 - 2x_3, \quad L_2 = 3x_1 +7 x_2 -2x_3 \nonumber\] for the expressions on the left-hand sides of the first two equations. Then the given equations are \[ L_1 = 4, \quad L_2 = 8. \nonumber\] It then follows that we must have \[ L_2 - 3L_1 = 8 - 3\cdot4 = -4, \nonumber\] which yields \[ 3x_1 +7 x_2 -2x_3 - 3(x_1 +3x_2 - 2x_3) = -4 \quad \iff \quad -2 x_2 +4x_3 = -4. \nonumber\] The last equation is exactly the second equation of the second system. The crucial thing to note is that these operations can be undone: if in the second system the first equation is added three times to the second equation, and twice to the third equation, we end up with the original system. So in fact we have \[ \left\{\begin{array} {rr} x_1 &+& 3x_2 &-& 2x_3 &=& 4 \\ & & -2x_2 &+& 4x_3 &=& -4 \\ & & 4x_2 &-& 5x_3 &=& -4 \end{array} \right. \quad \Longrightarrow \quad \left\{\begin{array} {rr} x_1 &+& 3x_2 &-& 2x_3 &=& 4 \\ 3x_1 &+& 7x_2 &-& 2x_3 &=& 8 \\ 2x_1 &+& 10x_2 &-& 9x_3 &=& 4 \end{array} \right. \nonumber\] The implication works both ways, which we can write as follows: \[ \left\{\begin{array} {rr} x_1 &+& 3x_2 &-& 2x_3 &=& 4 \\ 3x_1 &+& 7x_2 &-& 2x_3 &=& 8 \\ 2x_1 &+& 10x_2 &-& 9x_3 &=& 4 \end{array} \right. \quad \Longleftrightarrow \quad \left\{\begin{array} {rr} x_1 &+& 3x_2 &-& 2x_3 &=& 4 \\ & & -2x_2 &+& 4x_3 &=& -4 \\ & & 4x_2 &-& 5x_3 &=& -4 \end{array} \right. \nonumber\] So any triple of values \((c_1,c_2,c_3)\) that satisfies the first system satisfies the second system, and vice versa. Systems that have the same set of solutions are called equivalent.
    Definition \(\PageIndex{12}\)
     
    Two systems of linear equations are called equivalent if they have the same set of solutions.

By the same line of reasoning as in the above example we can deduce that adding an arbitrary multiple of any equation to another equation does not change the solution set of the system. Of course, if we multiply an equation by some nonzero constant, the solution set also remains invariant. This operation is called scaling. For the system at hand we could, as a next step, scale the second equation by a factor of \(-\frac12\). The following proposition summarizes the operations that can be used to rewrite a system of equations.

    Proposition \(\PageIndex{13}\)
     
The following operations applied to a linear system always yield an equivalent system:
    • Adding a multiple of an equation to another equation.
• Scaling an equation by a nonzero factor \(c\).
    • Changing the order of the equations.
    Proof
The correctness of the first operation is illustrated in Example \(\PageIndex{11}\). One example is of course not a proof, but the explanation given there can be generalized and formalized. The other two statements are rather obvious.
    Example \(\PageIndex{14}\)
     
Let's take up the example at the point where we left it and work our way to its solution. We also introduce a notation that makes it easier to see what is going on, both for the reader and for yourself when you later check or revisit your computations. The '\(E\)' stands for 'Equation'. We scale the second equation by a factor of \(-\frac12\) \[ \left\{\begin{array} {rr} x_1 &+& 3x_2 &-& 2x_3 &=& 4 & \quad[E_1]\\ & & -2x_2 &+& 4x_3 &=& -4 & \quad[-\frac12E_2]\\ & & 4x_2 &-& 5x_3 &=& -4 &\quad[E_3] \end{array} \right. \quad \Longleftrightarrow \quad \left\{\begin{array} {rr} x_1 &+& 3x_2 &-& 2x_3 &=& 4 \\ & & x_2 &-& 2x_3 &=& 2 \\ & & 4x_2 &-& 5x_3 &=& -4 \end{array} \right. \nonumber\] and then subtract the second equation four times from the third: \[ \left\{\begin{array} {rr} x_1 &+& 3x_2 &-& 2x_3 &=& 4 & \quad[E_1]\\ & & x_2 &-& 2x_3 &=& 2 & \quad[E_2]\\ & & 4x_2 &-& 5x_3 &=& -4 & \quad[E_3-4 E_2] \end{array} \right. \quad \Longleftrightarrow \quad \left\{\begin{array} {rr} x_1 &+& 3x_2 &-& 2x_3 &=& 4 \\ & & x_2 &-& 2x_3 &=& 2 \\ & & & & 3x_3 &=& -12 \end{array} \right. \nonumber\] From the third equation it follows that \[ x_3 = \dfrac{-12}{3} = -4, \nonumber\] and then we can work upwards to find that \[ x_2 = 2 + 2x_3 = 2+2\cdot(-4) = -6 \nonumber\] and finally from the first equation it follows that \[ x_1 = 4 - 3x_2 + 2x_3 = 4 - 3\cdot(-6) + 2\cdot(-4) = 14. \nonumber\] Conclusion: the system has the unique solution \[ (x_1,x_2,x_3) = (14,-6,-4). \nonumber\] Problem solved. The last part, where we work our way up from the last equation to successively find \(x_3\), \(x_2\) and \(x_1\), is referred to as back substitution.
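Back substitution itself is a short loop. Below is a minimal sketch (our own helper, assuming a square triangular system with nonzero diagonal coefficients), applied to the triangular system just obtained:

```python
# Back substitution for a system already in triangular form; each row is
# [coefficients..., constant term].  Assumes nonzero diagonal coefficients.
def back_substitute(rows):
    n = len(rows)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # last equation first
        *coeffs, b = rows[i]
        s = sum(coeffs[j] * x[j] for j in range(i + 1, n))
        x[i] = (b - s) / coeffs[i]
    return x

# x1 + 3x2 - 2x3 = 4,  x2 - 2x3 = 2,  3x3 = -12
print(back_substitute([[1, 3, -2, 4],
                       [0, 1, -2, 2],
                       [0, 0, 3, -12]]))    # [14.0, -6.0, -4.0]
```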
    Example \(\PageIndex{15}\)
     
    Consider the following system of equations \[ \left\{\begin{array} {rr} x_1 &+& 4x_2 &-& 5x_3 &=& 4 \\ 2x_1 &+& 7x_2 &-& 2x_3 &=& 9 \\ x_1 &+& 3x_2 &+& 3x_3 &=& 6 \end{array} \right. \nonumber\] Making use of the notation introduced in the previous example we simplify the system: \[ \left\{\begin{array} {rr} x_1 &+& 4x_2 &-& 5x_3 &=& 4 &\quad [E_1]\\ 2x_1 &+& 7x_2 &-& 2x_3 &=& 9 &\quad [E_2-2E_1]\\ x_1 &+& 3x_2 &+& 3x_3 &=& 6 &\quad [E_3-E_1] \end{array} \right. \nonumber\] \[ \iff \left\{\begin{array} {rr} x_1 &+& 4x_2 &-& 5x_3 &=& 4 &\quad [E_1] \\ & & -x_2 &+& 8x_3 &=& 1 &\quad [E_2]\\ & & -x_2 &+& 8x_3 &=& 2 &\quad [E_3-E_2] \end{array} \right. \nonumber\] \[ \iff \left\{\begin{array} {rr} x_1 &+& 4x_2 &-& 5x_3 &=& 4 \\ & & -x_2 &+& 8x_3 &=& 1 \\ & & & & 0 &=& 1 \end{array} \right. \nonumber\] and from the last equation it immediately follows that there are no solutions, in other words: the system is inconsistent.

Let's look at one more example. Here we will see how to find the general solution when it contains a free variable.

    Example \(\PageIndex{16}\)
     
    We find the general solution of the linear system \[ \left\{\begin{array} {rr} 4x_1&-&2x_2&-& 3x_3&+&7x_4&=&5\\ 3x_1&-&x_2&-& 2x_3&+&5x_4&=&7\\ x_1&-&x_2&-& 2x_3&+&3x_4&=&3. \end{array}\right. \nonumber\] Using the shorthand notation just introduced the system can be simplified as follows: we first interchange the first and the third equation to have a first equation where the coefficient of \(x_1\) is equal to 1. That way we avoid fractions in at least the first elimination step. \[ \begin{array}{cl} & \left\{\begin{array} {rr} 4x_1&-&2x_2&-& 3x_3&+&7x_4&=&5&\quad[E_3]\\ 3x_1&-&x_2&-& 2x_3&+&5x_4&=&7&\quad[E_2]\\ x_1&-&x_2&-& 2x_3&+&3x_4&=&3&\quad[E_1] \end{array}\right.\\ \iff & \left\{\begin{array} {rr} x_1&-&x_2&-& 2x_3&+&3x_4&=&3&\quad[E_1]\\ 3x_1&-&x_2&-& 2x_3&+&5x_4&=&7&\quad[E_2-3E_1]\\ 4x_1&-&2x_2&-& 3x_3&+&7x_4&=&5&\quad[E_3-4E_1] \end{array}\right. \\ \iff & \left\{\begin{array} {rr} x_1&-&x_2&-& 2x_3&+&3x_4&=&3&\quad[E_1]\\ &&2x_2&+& 4x_3&-&4x_4&=&-2&\quad[\frac12E_2]\\ &&2x_2&+& 5x_3&-&5x_4&=&-7&\quad[E_3] \end{array}\right. \\ \iff & \left\{\begin{array} {rr} x_1&-&x_2&-& 2x_3&+&3x_4&=&3&\quad[E_1]\\ &&x_2&+& 2x_3&-&2x_4&=&-1&\quad[E_2]\\ &&2x_2&+& 5x_3&-&5x_4&=&-7&\quad[E_3-2E_2] \end{array}\right. \\ \iff & \left\{\begin{array} {rr} x_1&-&x_2&-& 2x_3&+&3x_4&=&3\\ &&x_2&+& 2x_3&-&2x_4&=&-1\\ && && x_3&-&x_4&=&-5. \end{array}\right. \end{array} \nonumber\] Again we can find the solution by back-substitution: the third equation can be rewritten as \[ x_3 = -5 + x_4. \nonumber\] Via the second equation we can express \(x_2\) as a function of \(x_4\) \[ x_2 + 2(-5+x_4) -2x_4 = -1 \quad \quad \iff \quad x_2 = 9 \nonumber\] And then it follows from the first equation that \[ x_1 - 9 - 2\cdot(-5+x_4) + 3x_4 = 3 \quad \quad \iff \quad x_1 = 2 -x_4 \nonumber\] So the solution can be written as \[ \left\{\begin{array}{l} x_1 = 2 -x_4 \\ x_2 = 9 \\ x_3 = -5 + x_4\\ x_4 \quad \text{ is free} \end{array}\right. \nonumber\] Note that the row swap that we used as a first step is not really necessary. However, this way we avoided having to work with non-integer multiples of rows.
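A computer algebra system can produce the same parametrized solution. A sketch using SymPy's linsolve (assuming SymPy is installed); the general solution is reported in terms of the free variable \(x_4\), exactly as above:

```python
from sympy import Matrix, linsolve, symbols

x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
A = Matrix([[4, -2, -3, 7],
            [3, -1, -2, 5],
            [1, -1, -2, 3]])
b = Matrix([5, 7, 3])
# linsolve returns the general solution as a set of tuples.
print(linsolve((A, b), x1, x2, x3, x4))
# {(2 - x4, 9, x4 - 5, x4)}: x1 = 2 - x4, x2 = 9, x3 = -5 + x4, x4 free
```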

Let's summarize the elimination method.

    Summary \(\PageIndex{17}\)
     
    Any linear system in the variables \(x_1,\ldots, x_n\) can be solved as follows:
• Using the operations of Proposition \(\PageIndex{13}\) the system can be simplified to an equivalent linear system with the following property: in each equation, at least one more of the leading unknowns has coefficient 0 than in the equation before it. If an unknown has coefficient 0 we say that the unknown has been eliminated.
    • If an equation \[ 0x_1 + 0x_2 + \ldots + 0x_n = b, \nonumber\] with \(b\neq 0\) pops up, the system is inconsistent.
    • If no such equation appears, the general solution can be found by back substitution: starting from the last equation, we work our way upwards.

In theory the method works for any linear system, however large, though with pen and paper it soon becomes cumbersome. In the next subsection we will use a more appropriate representation of a linear system to solve it more efficiently. We will also see how the procedure of back substitution can be incorporated into the elimination process.

    Augmented matrices

     

We will introduce a convenient shorthand notation for linear systems, one that contains the essentials of the system in a structured way. Before that, we define one of the most basic building blocks of linear algebra: the matrix.

    Definition \(\PageIndex{18}\)
     
An \(m \times n\) matrix \(A\) is a rectangular array of numbers \(a_{ij}\), \(1\leq i \leq m\), \(1 \leq j \leq n\): \[ A = \left[\begin{array}{cccc} a_{11} & a_{12}& \ldots& a_{1n} \\ a_{21} & a_{22}& \ldots& a_{2n} \\ \vdots & \vdots& \ddots& \vdots \\ a_{m1} & a_{m2}& \ldots& a_{mn} \end{array} \right]. \nonumber\] It consists of \(m\) horizontal rows of size \(n\), or, equivalently, of \(n\) vertical columns of size \(m\).

    In a statement about a matrix the first index always refers to the row(s), the second index to the column(s). E.g., \(a_{ij}\) is the number in the \(i\)-th row and the \(j\)-th column, and an \(m \times n\) matrix has \(m\) rows and \(n\) columns. A matrix is usually surrounded by parentheses or (square) brackets. We opt for brackets.

    Example \(\PageIndex{19}\)
     
    The matrix \[ B = \left[\begin{array}{ccccc} 1 & 2 & 3 & 4 & 5 \\ 2 & 7 & -1 & 0 & 8 \\ 5 & 5 & 5 & 0 & 4 \end{array}\right] \nonumber\] is a \(3\times 5\) matrix. Its second row is \( \left[\begin{array}{rrrrr} 2 & 7 & -1 & 0 & 8 \end{array}\right]\), and its third column: \[ \left[\begin{array}{c} 3 \\ -1 \\ 5 \end{array}\right]\nonumber\]
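In code, rows and columns of a matrix are extracted by indexing. A small sketch with NumPy (note the 0-based indices, so the second row is index 1):

```python
import numpy as np

B = np.array([[1, 2, 3, 4, 5],
              [2, 7, -1, 0, 8],
              [5, 5, 5, 0, 4]])
print(B.shape)   # (3, 5): m = 3 rows, n = 5 columns
print(B[1])      # second row:   [ 2  7 -1  0  8]
print(B[:, 2])   # third column: [ 3 -1  5]
```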

    Matrices play an important role in linear algebra. In this section we will use them as concise representations of linear systems, in which the computations involved to solve a system can be done quite systematically.

    Definition \(\PageIndex{20}\)
     
    The augmented matrix for a system of equations \[ \left\{\begin{array}{ccccccccc} a_{11}x_1\! & \!+\!&\!a_{12}x_2\! & \!+\!&\! \ldots\! & \!+\!&\!a_{1n}x_n \! & \!=\!&\! b_1 \\ a_{21}x_1 \quad \! & \!+\!&\!a_{22}x_2\! & \!+\!&\!\ldots\! & \!+\!&\!a_{2n}x_n \! & \!=\!&\! b_2 \\ \vdots \! & \! \!&\! \vdots\! & \! \!&\!\cdots\! & \! \!&\! \vdots \! & \! \!&\! \vdots \\ a_{m1}x_1 \quad \! & \!+\!&\!a_{m2}x_2\! & \!+\!&\! \ldots\! & \!+\!&\!a_{mn}x_n \! & \!=\!&\! b_m \\ \end{array} \right. \nonumber\] is the matrix \[ \left[\begin{array}{cccc | c} a_{11} & a_{12}& \ldots& a_{1n} & b_1 \\ a_{21} & a_{22}& \ldots& a_{2n} & b_2 \\ \vdots & \vdots& \ldots& \vdots & \vdots \\ a_{m1} & a_{m2}& \ldots& a_{mn} & b_m \end{array} \right]. \nonumber\] The part before the vertical bar, i.e. \[ A = \left[\begin{array}{cccc} a_{11} & a_{12}& \ldots& a_{1n} \\ a_{21} & a_{22}& \ldots& a_{2n} \\ \vdots & \vdots& \ldots& \vdots \\ a_{m1} & a_{m2}& \ldots& a_{mn} \end{array} \right] \nonumber\] is called the coefficient matrix of the system. The column behind the bar contains the constant terms.

The augmented matrix is nothing more than an abbreviation for a system of equations. With the vertical bar we want to indicate that the last column plays a special role, namely, it contains the constants on the right-hand sides of the equations. If we denote these terms by the vector \[ \mathbf{b} = \left[\begin{array}{c} b_1\\b_2\\\vdots\\b_m \end{array} \right] \nonumber\] the augmented matrix can be written as \[ [\,A \mid \mathbf{b}\,]. \nonumber\] To conclude this subsection we will reconsider the earlier example of a system of three equations in three unknowns \[ \left\{\begin{array} {rr} x_1 & + & 3x_2 & -&2x_3 &=& 4 \\ 3x_1 & + & 7x_2 & -&2x_3 &=& 8 \\ 2x_1 & + &10x_2 & -&9x_3 &=& 4. \end{array} \right. \nonumber\] We will apply the same simplifications to the system as before. Parallel to this we adapt the augmented matrix accordingly, using a notation that speaks for itself. \[ \begin{array}{lcl} \left\{\begin{array} {rr} x_1 & + & 3x_2 & -&2x_3 &=& 4 \\ 3x_1 & + & 7x_2 & -&2x_3 &=& 8 \\ 2x_1 & + &10x_2 & -&9x_3 &=& 4 \end{array} \right.&\qquad& \left[\begin{array} {rrr | r} 1 & 3 & -2& 4 \\ 3 & 7 & -2& 8 \\ 2 & 10 & -9 & 4 \end{array}\right] \quad \begin{array}{r} [R_{1}] \\ [R_{2} -3R_{1}] \\ [R_{3} -2R_{1}] \end{array} \\ &{\Big\Updownarrow} & \\ \left\{\begin{array} {rr} x_1 & + & 3x_2 & -&2x_3 &=& 4 \\ & - & 2x_2 & +&4x_3 &=& -4 \\ & &4x_2 & -&5x_3 &=& -4 \end{array} \right.&\qquad& \left[\begin{array} {rrr | r} 1 & 3 & -2& 4 \\ 0& -2 & 4 & -4 \\ 0 & 4 & -5 & -4 \end{array}\right]\quad \begin{array}{r} [R_{1}] \\ [ -\frac12 R_{2}] \\ [R_{3}] \end{array} \\ &{\Big\Updownarrow}& \\ \left\{\begin{array} {rr} x_1 & + & 3x_2 & -&2x_3 &=& 4 \\ & & x_2 & -&2x_3 &=& 2 \\ & &4x_2 & -&5x_3 &=& -4 \end{array} \right.&\qquad& \left[\begin{array} {rrr | r} 1 & 3 & -2& 4 \\ 0& 1 & -2 & 2 \\ 0 & 4 & -5 & -4 \end{array}\right]\quad \begin{array}{r} [R_{1}] \\ [R_{2}] \\ [R_{3} -4R_{2}] \end{array} \\ &{\Big\Updownarrow}& \\ \left\{\begin{array} {rr} x_1 & + & 3x_2 & -&2x_3 &=& 4 \\ & & x_2 & -&2x_3 &=& 2 \\ & & & &3x_3 &=& -12 \end{array} \right.&\qquad& \left[\begin{array} {rrr | r} 1 & 3 & -2& 4 \\ 0& 1 & -2 & 2 \\ 0 & 0 & 3 & -12 \end{array}\right] \end{array} \nonumber\] As we have seen before, the solution can now be found by back substitution. The right moment to start this back substitution is when the augmented matrix has been simplified to so-called echelon form.

    Row reduction and echelon forms

     

In Subsection \(\PageIndex{2}\) we solved linear systems by eliminating variables from the equations. It would be nice to have a clear mark at which we can stop rewriting the given system, to avoid ending up in a never-ending loop. When we use the notation of an augmented matrix we can identify such a mark. We first need a few more definitions.

    Definition \(\PageIndex{21}\)
     
    A matrix is in row echelon form if it has the following two properties:
    1. All non-zero rows are above all rows that contain only zeros.
    2. Each non-zero row that is not the last row starts with fewer zeros than the rows below it.
    Such a matrix is also called a row echelon matrix.
    Example \(\PageIndex{22}\)
     
    The following three matrices are meant to visualize the structure of an echelon matrix. The symbol \(\blacksquare\) denotes an arbitrary nonzero number, and \(\ast\) just any real number. \[ E_1 = \left[\begin{array}{rrrr} \blacksquare & \ast & \ast & \ast \\ 0 & \blacksquare & \ast & \ast \\ 0 & 0 &\blacksquare & \ast \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right] , \quad E_2 = \left[\begin{array}{rrrrr} \blacksquare & \ast & \ast & \ast & \ast \\ 0 & 0 & \blacksquare & \ast & \ast \\ 0 & 0 & 0 &0 & \blacksquare \end{array}\right], \quad E_3 = \left[\begin{array}{rrrr} \blacksquare & \ast & \ast & \ast \\ 0 & \blacksquare & \ast & \ast \\ 0 & 0 & \blacksquare & \ast \\ 0 & 0 & 0 & \blacksquare \\ 0 & 0 &0 & 0 \end{array}\right] \nonumber\]

    In a similar manner we can define the concept of a column echelon matrix. However, since we will only consider row echelon matrices we will not do this. In the sequel we will drop the epithet 'row' and simply speak of echelon form and echelon matrix.

    Definition \(\PageIndex{23}\)
     
    A pivot of a row in an echelon matrix is the first nonzero element of the row. Sometimes we also refer to it as the leading entry.
    Example \(\PageIndex{24}\)
     
    The following three matrices are in echelon form: \[ \left[\begin{array}{rrr}1 & 2 & 3 \\ 0 & 3 & 2 \\ 0 & 0 & 0 \end{array} \right], \quad \left[\begin{array}{rr}1 & 0 \\ 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{array} \right], \quad \left[\begin{array}{rrrrr}1 & 1 & 0 & 2 & 0\\ 0 & 0 & 1 & 4 & 0\\ 0 & 0 & 0 & 0 & 1\end{array} \right] \nonumber\] The following two matrices are not in echelon form \[ \left[\begin{array}{rrr}0 & 0 & 0 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{array} \right], \quad \left[\begin{array}{rrr}1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 &1 & 0 \end{array} \right]. \nonumber\]
    Exercise \(\PageIndex{25}\)
     
    Explain why the last two matrices in the example are not echelon matrices.
    Example \(\PageIndex{26}\)
     
    Here are the three echelon matrices again, with boxes around their pivots: \[ \left[\begin{array}{rrr}\fbox{1} & 2 & 3 \\ 0 & \fbox{3} & 2 \\ 0 & 0 & 0 \end{array} \right], \quad \left[\begin{array}{rr}\fbox{1} & 0 \\ 0 & \fbox{1} \\ 0 & 0 \\ 0 & 0 \end{array} \right], \quad \left[\begin{array}{rrrrr}\fbox{1} & 1 & 0 & 2 & 0\\ 0 & 0 & \fbox{1} & 4 & 0\\ 0 & 0 & 0 & 0 & \fbox{1}\end{array} \right]. \nonumber\] The third and the fourth row of the second matrix do not have pivots.
    Note \(\PageIndex{27}\)
In practice the pivots are the coefficients in the equations of a system that are used to eliminate variables from other equations. In the context of augmented matrices, they are the entries used to create zeros in the column below them. Note that from the second condition in Definition \(\PageIndex{21}\) it follows automatically that all entries in the column below a pivot must be 0.
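The two conditions of Definition \(\PageIndex{21}\) can be tested mechanically by counting the leading zeros of each row. A minimal Python sketch (the function names are ours):

```python
# Number of leading zeros of a row.
def leading_zeros(row):
    k = 0
    while k < len(row) and row[k] == 0:
        k += 1
    return k

# Echelon form: zero rows at the bottom, and the leading-zero counts of the
# nonzero rows strictly increase from top to bottom.
def is_row_echelon(M):
    n = len(M[0])
    counts = [leading_zeros(row) for row in M]
    for above, below in zip(counts, counts[1:]):
        if above == n and below != n:      # zero row above a nonzero row
            return False
        if above != n and below != n and below <= above:
            return False
    return True

print(is_row_echelon([[1, 2, 3], [0, 3, 2], [0, 0, 0]]))  # True
print(is_row_echelon([[0, 0, 0], [0, 1, 2], [0, 0, 1]]))  # False
```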

    Now have a look again at the derivation at the end of the previous subsection. We worked our way downwards through the rows to create zeros in the first columns, while keeping in mind that we did not change the solution set of the corresponding linear system. The process is called row reduction. The end point, from which we could start building the solution by back substitution, was an augmented matrix in echelon form!

    Definition \(\PageIndex{28}\)
     
    The elementary row operations that one can apply to a matrix are
    1. Adding a multiple of a row to another row.
    2. Multiplying a row by a non-zero number. This is also referred to as scaling.
    3. Interchanging (or: swapping) two rows.
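The three elementary row operations translate directly into array updates. A minimal sketch with NumPy (0-based row indices; the helper names are ours):

```python
import numpy as np

def add_multiple(M, i, j, c):   # R_i <- R_i + c * R_j
    M[i] += c * M[j]

def scale(M, i, c):             # R_i <- c * R_i, with c nonzero
    assert c != 0
    M[i] *= c

def swap(M, i, j):              # R_i <-> R_j
    M[[i, j]] = M[[j, i]]

M = np.array([[1.0, 3.0, -2.0, 4.0],
              [3.0, 7.0, -2.0, 8.0],
              [2.0, 10.0, -9.0, 4.0]])
add_multiple(M, 1, 0, -3.0)     # [R2 - 3R1]
add_multiple(M, 2, 0, -2.0)     # [R3 - 2R1]
print(M)                        # the first elimination step of the earlier example
```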
    Definition \(\PageIndex{29}\)
     
    Matrices that can be transformed into each other via row operations are called row equivalent. If two matrices \(A\) and \(B\) are row equivalent we denote this by \(A \sim B\).
    Note \(\PageIndex{30}\)
If two augmented matrices are row equivalent, the linear systems they represent are equivalent (i.e., have the same solution set).

Above we applied row operations to an augmented matrix to work our way to the solution of a system of equations. In fact we simplified the system and the matrix along parallel paths. From now on we will almost always simplify a system by working with the corresponding augmented matrix. In the future (chapter or section . . . .) we will also apply row reduction to matrices in other contexts, i.e., for other purposes.

    Example \(\PageIndex{31}\)
     
We will row reduce the matrix \[ M = \left[\begin{array} {rrrrr} 4 & -4 & -4 & 8 & 12 \\ -2 & 2 & 2 & -4 & -2 \\ 3 & -3 & -1 & 5 & 4 \end{array}\right] \nonumber\] to a matrix \(E\) in echelon form: \[ \begin{array}{ccl} M&=& \left[\begin{array} {rrrrr} 4 & -4 & -4 & 8 & 12 \\ -2 & 2 & 2 & -4 & -2 \\ 3 & -3 & -1 & 5 & 4 \end{array}\right] \quad \begin{array}{r} [ \frac14 R_{1}] \\ [R_{2}] \\ [R_{3}] \end{array} \\ &\sim& \left[\begin{array} {rrrrr} 1 & -1 & -1 & 2 & 3 \\ -2 & 2 & 2 & -4 & -2 \\ 3 & -3 & -1 & 5 & 4 \end{array}\right] \quad \begin{array}{r} [R_{1}] \\ [R_{2}+ 2R_{1}] \\ [R_{3}] \end{array} \\ &\sim& \left[\begin{array} {rrrrr} 1 & -1 & -1 & 2 & 3 \\ 0 & 0 & 0 & 0 & 4 \\ 3 & -3 & -1 & 5 & 4 \end{array}\right] \quad \begin{array}{r} [R_{1}] \\ [R_{2}] \\ [R_{3} -3R_{1}] \end{array} \\ &\sim& \left[\begin{array} {rrrrr} 1 & -1 & -1 & 2 & 3 \\ 0 & 0 & 0 & 0 & 4 \\ 0 & 0 & 2 & -1 & -5 \end{array}\right] \quad \begin{array}{r} [R_{1}] \\ [R_{2}\leftrightarrow R_{3}] \\ [R_{3}\leftrightarrow R_{2}] \end{array} \\ &\sim& \left[\begin{array} {rrrrr} 1 & -1 & -1 & 2 & 3 \\ 0 & 0 & 2 & -1 & -5 \\ 0 & 0 & 0 & 0 & 4 \end{array}\right] = E \end{array} \nonumber\] Here a row swap was essential to bring the matrix into echelon form. Sometimes a row swap may just be convenient to simplify the computations. Note that we have also introduced a notation for a row swap. It is good practice to use a notation like this when you do a row reduction yourself. To speed up the process it may be preferable to combine row operations that do not interfere with each other. In this example the second and the third step both involved adding a multiple of the first row to another row. This can be done simultaneously: \[ \begin{array}{ccl} \left[\begin{array} {rrrrr} 1 & -1 & -1 & 2 & 3 \\ -2 & 2 & 2 & -4 & -2 \\ 3 & -3 & -1 & 5 & 4 \end{array}\right]\quad \begin{array}{r} [R_{1}] \\ [R_{2}+ 2R_{1}] \\ [R_{3} -3R_{1}] \end{array} &\sim& \left[\begin{array} {rrrrr} 1 & -1 & -1 & 2 & 3 \\ 0 & 0 & 0 & 0 & 4 \\ 0 & 0 & 2 & -1 & -5 \end{array}\right]\quad \begin{array}{r} [R_{1}] \\ [R_{2}] \\ [R_{3}] \end{array} \end{array} \nonumber\]
    Proposition \(\PageIndex{32}\)
    Any matrix is row equivalent to an echelon matrix.
    Note \(\PageIndex{33}\)
We will not give a formal proof. The idea of how a matrix can be reduced to an echelon matrix is as follows: just start from the top left and work downwards. If \(a_{11}\) is not 0, that will be the first pivot. We can use it to make all the other elements in the first column 0. We then get \[ A = \left[\begin{array}{cccc} a_{11} & a_{12}& \ldots& a_{1n} \\ a_{21} & a_{22}& \ldots& a_{2n} \\ \vdots & \vdots& \ldots& \vdots \\ a_{m1} & a_{m2}& \ldots& a_{mn} \end{array} \right] \sim \left[\begin{array}{cccc} a_{11} & a_{12}& \ldots& a_{1n} \\ 0 & \tilde{a}_{22}& \ldots& \tilde{a}_{2n} \\ \vdots & \vdots& \ldots& \vdots \\ 0 & \tilde{a}_{m2}& \ldots& \tilde{a}_{mn} \end{array} \right]. \nonumber\] From then on we will leave the first row as it is. If \(a_{11} = 0\), then we try to find a non-zero element in the first column. If this is the element \(a_{i1}\), then we can start by interchanging the first and the \(i\)-th row. After this row swap we use the new first row to create zeros in the first column. If the first column happens to consist of zeros only, we skip it and start from the first non-zero column. We continue with the part of the matrix below and to the right of the first pivot, i.e., \[ \left[\begin{array}{cccc} \tilde{a}_{22}& \tilde{a}_{23}& \ldots& \tilde{a}_{2n} \\ \tilde{a}_{32}& \tilde{a}_{33}& \ldots& \tilde{a}_{3n} \\ \vdots & \vdots & \ldots& \vdots \\ \tilde{a}_{m2}& \tilde{a}_{m3}& \ldots& \tilde{a}_{mn} \end{array} \right] \nonumber\] And so on, until we get to the last row, or until we get to a row below which all rows only contain zeros.
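The procedure sketched in this note can be transcribed almost literally into code. A minimal sketch (our own implementation; the exact zero tests suit the integer examples here, whereas numerical software would compare against a tolerance and prefer large pivots):

```python
import numpy as np

def row_echelon(A):
    M = A.astype(float).copy()
    m, n = M.shape
    r = 0                                        # row where the next pivot goes
    for col in range(n):
        nonzero = [i for i in range(r, m) if M[i, col] != 0]
        if not nonzero:
            continue                             # column of zeros: skip it
        M[[r, nonzero[0]]] = M[[nonzero[0], r]]  # swap a pivot into place
        for i in range(r + 1, m):                # clear below the pivot
            M[i] -= (M[i, col] / M[r, col]) * M[r]
        r += 1
        if r == m:
            break
    return M

# The matrix of Example 31; the result agrees with E there, except that we
# never scaled the first row.
M = np.array([[4, -4, -4, 8, 12],
              [-2, 2, 2, -4, -2],
              [3, -3, -1, 5, 4]])
print(row_echelon(M))
```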

The echelon matrix to which a matrix can be reduced is in no way unique. For instance, scaling a row of an echelon matrix preserves the echelon form. We can go a bit further: we can also create zeros in the columns above the pivots. The following example shows how. First we work downwards to the echelon form, and then we work upwards to create the extra zeros.

    Example \(\PageIndex{34}\)
     
The matrix \[ M = \left[\begin{array}{rrrr} 1 & 2 & 3 & 1\\ 1 & 4 & 7 & 3\\ 3 & 6 & 11 & 9 \end{array}\right] \nonumber\] is row equivalent to all of the following echelon matrices: \[ \left[\begin{array}{rrrr} 1 & 2 & 3 & 1\\ 0 & 2 & 4 & 2\\ 0 & 0 & 2 & 6 \end{array}\right] \sim \left[\begin{array}{rrrr} 1 & 2 & 3 & 1\\ 0 & 1 & 2 & 1\\ 0 & 0 & 1 & 3 \end{array}\right] \sim \left[\begin{array}{rrrr} 1 & 2 & 0 & -8\\ 0 & 1 & 0 & -5\\ 0 & 0 & 1 & 3 \end{array}\right] \sim \left[\begin{array}{rrrr} 1 & 0 & 0 & 2\\ 0 & 1 & 0 & -5\\ 0 & 0 & 1 & 3 \end{array}\right]. \nonumber\] Or, using the notation for the row operations: \[ \left[\begin{array}{rrrr} 1 & 2 & 3 & 1\\ 1 & 4 & 7 & 3\\ 3 & 6 & 11 & 9 \end{array}\right] \quad \begin{array}{r} [R_{1}] \\ [R_{2} -R_{1}] \\ [R_{3} -3R_{1}] \end{array} \sim \left[\begin{array}{rrrr} 1 & 2 & 3 & 1\\ 0 & 2 & 4 & 2\\ 0 & 0 & 2 & 6 \end{array}\right]\quad \begin{array}{r} [R_{1}] \\ [ \frac12 R_{2}] \\ [ \frac12 R_{3}] \end{array} \quad \sim \left[\begin{array}{rrrr} 1 & 2 & 3 & 1\\ 0 & 1 & 2 & 1\\ 0 & 0 & 1 & 3 \end{array}\right]\quad \begin{array}{r} [R_{1} -3R_{3}] \\ [R_{2} -2R_{3}] \\ [R_{3}] \end{array} \nonumber\] \[ \sim \left[\begin{array}{rrrr} 1 & 2 & 0 & -8\\ 0 & 1 & 0 & -5\\ 0 & 0 & 1 & 3 \end{array}\right]\quad \begin{array}{r} [R_{1} -2R_{2}] \\ [R_{2}] \\ [R_{3}] \end{array} \sim \left[\begin{array}{rrrr} 1 &0 & 0 & 2\\ 0 & 1 & 0 & -5\\ 0 & 0 & 1 & 3 \end{array}\right] \nonumber\]

    There are three important observations regarding this example.

    Note \(\PageIndex{35}\)
     
Apart from the second step, where two rows were scaled, in each step one pivot was used to make all elements directly above and below it equal to 0. In this way we keep moving forward, in a structured way, to a matrix with more and more zeros.
    Note \(\PageIndex{36}\)
     
    The last matrix can really be seen as a natural end point of the reduction process:
    • the pivots are all 1, the simplest non-zero number
• if we try to create more zeros, we can only do so in the fourth column. But then we will lose one or more of the zeros in the first three columns.

    The third remark is the most important one, keeping in mind the goal of this section: solving linear systems.

    Note \(\PageIndex{37}\)
     
If the matrix \(M\) were actually the augmented matrix of a system, which we would then rather have written as \[ M = \left[\begin{array}{rrr | r} 1 & 2 & 3 & 1\\ 1 & 4 & 7 & 3\\ 3 & 6 & 11 & 9 \end{array}\right], \nonumber\] then the linear system corresponding to the final echelon matrix \[ \left[\begin{array}{rrr | r} 1 &0 & 0 & 2\\ 0 & 1 & 0 & -5\\ 0 & 0 & 1 & 3 \end{array}\right] \nonumber\] is given by \[ \left\{ \begin{array} {rr} x_1 & & &=& 2\\ & x_2 & &=&-5\\ & & x_3 &=& 3 \end{array} \right. \nonumber\] which is in fact the solution!

This natural end point of the row reduction process has its own name.

    Definition \(\PageIndex{38}\)
     
    A reduced echelon matrix or matrix in reduced echelon form is an echelon matrix with the extra properties
    1. All pivots are 1.
    2. In a column with a pivot all other elements are 0.
    Example \(\PageIndex{39}\)
     
    Of the matrices \[ \left[\begin{array}{rrrr} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array}\right], \quad \left[\begin{array}{rrr} 1 & 0 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{array}\right], \quad \left[\begin{array}{rrrr} 1 & 0 & 1 & 0 \\ 0 & 1 & 3 & 0\\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 &0 \end{array}\right], \nonumber\] the first and the third are echelon matrices, and only the third is a reduced echelon matrix.
    Exercise \(\PageIndex{40}\)
     
    Explain why.

The big advantage of reduced echelon matrices, already hinted at in Note \(\PageIndex{37}\), is the following:

    Proposition \(\PageIndex{41}\)
     
If the augmented matrix of a linear system is in reduced echelon form, the solution of the system is found by expressing the variables corresponding to the pivots in terms of the other variables. These other variables can be assigned arbitrary values.
    Definition \(\PageIndex{42}\)
     
    In the solution as constructed according to the previous proposition the pivot variables are called the basic variables. The other variables are called free variables.
    Example \(\PageIndex{43}\)
     
We find the solution of the linear system with the following augmented matrix, which is already in reduced echelon form: \[ \left[\begin{array}{rrrrr | r} 1 & 0 & 2 & 0 & 3 & 6\\ 0 & 1 & -3 & 0 &-4 & 7\\ 0 & 0 & 0 & 1 & 5 & 8 \end{array}\right] \nonumber\] We go back to the corresponding system and bring the non-pivot variables \(x_3\) and \(x_5\) to the right: \[ \left\{ \begin{array} {rr} x_1 & & & +& 2x_3& & &+&3x_5& =& 6\\ & &x_2 & -& 3x_3& & &-& 4x_5&=&7 \\ & & & & & & x_4 &+&5x_5 &=& 8 \end{array} \right. \quad \iff \quad \left\{ \begin{array} {rr} x_1 & = &6&-&2x_3 & - &3x_5\\ x_2 & = &7&+&3x_3 & + &4x_5 \\ x_4 & = &8& & & - &5x_5 \end{array} \right. \nonumber\] and we add: '\(x_3\) and \(x_5\) are free'.

The row reduction of the augmented matrix to echelon form corresponds to the forward elimination in Examples \(\PageIndex{10}\) and \(\PageIndex{16}\). There we found the solution by back substitution. When the augmented matrix is reduced all the way to reduced echelon form, this back substitution has in effect been incorporated, and we can write down the general solution directly.

    Theorem \(\PageIndex{44}\)
     
    Any matrix is row equivalent to a reduced echelon matrix. Moreover, this last matrix is unique.
    Note \(\PageIndex{45}\)
     
Again we give no formal proof. In the previous section we showed, also informally, that any matrix can be reduced to a matrix in echelon form. In this echelon matrix we may divide each row by its pivot (its first nonzero element). And lastly, working upwards step by step, we use each pivot -- which we made equal to 1 -- to create zeros in all positions above it. (The last two simplifications may be done in reverse order: first use the pivots to create zeros in the positions above them, and then scale the rows.) This reasoning supports the validity of the first statement. The uniqueness is harder to show in an intuitive way, and it is definitely harder to prove rigorously.
    Example \(\PageIndex{46}\)
     
We further simplify the echelon matrix \[ \left[\begin{array}{rrrrr} 3 & 2 &1 &6&-2\\ 0 & 2 & -2 &-3 & 1\\ 0 & 0 & 0 &3 & 2 \end{array}\right] \nonumber\] to reduced echelon form: step 1: use the pivot in the third row to create zeros above it; step 2: use the pivot in the second row to create a zero above it; step 3: scale all rows: \[ \left[\begin{array}{rrrrr} 3 & 2 &1 &6&-2\\ 0 & 2 & -2 &-3 & 1\\ 0 & 0 & 0 &3 & 2 \end{array}\right] \quad \begin{array}{r} [R_{1} -2R_{3}] \\ [R_{2}+ 1R_{3}] \\ [R_{3}] \end{array} \sim \quad \left[\begin{array}{rrrrr} 3 & 2 &1 &0&-6\\ 0 & 2 & -2 &0 & 3\\ 0 & 0 & 0 &3 & 2 \end{array}\right]\quad \begin{array}{r} [R_{1} -1R_{2}] \\ [R_{2}] \\ [R_{3}] \end{array} \quad \nonumber\] \[ \sim \quad \left[\begin{array}{rrrrr} 3 & 0 &3 &0&-9\\ 0 & 2 & -2 &0 & 3\\ 0 & 0 & 0 &3 & 2 \end{array}\right] \quad \begin{array}{r} [ \frac13 R_{1}] \\ [ \frac{1}{2} R_{2}] \\ [ \frac13 R_{3}] \end{array} \quad \sim \quad \left[\begin{array}{rrrrr} 1 & 0 &1 &0&-3\\ 0 & 1 & -1 &0 & 3/2\\ 0 & 0 & 0 &1 & 2/3 \end{array}\right] \nonumber\]
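SymPy carries out this entire reduction with a single call; its rref() method returns the reduced echelon form together with the pivot columns (0-based). A quick cross-check of the hand computation above:

```python
from sympy import Matrix

M = Matrix([[3, 2, 1, 6, -2],
            [0, 2, -2, -3, 1],
            [0, 0, 0, 3, 2]])
R, pivots = M.rref()
print(R)       # Matrix([[1, 0, 1, 0, -3], [0, 1, -1, 0, 3/2], [0, 0, 0, 1, 2/3]])
print(pivots)  # (0, 1, 3)
```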

    Instead of a formal proof of the uniqueness of the row reduced echelon form of a matrix, we illustrate this uniqueness with one example.

    Example \(\PageIndex{47}\)
     
We will find the reduced echelon form of the matrix \[ M = \left[\begin{array}{rrrr} 2 & -1 & -1 & 2\\ 1 & 2 & 4 & 4\\ 4 & -2 & -4 & 6 \end{array}\right] \nonumber\] via two different routes. Route 1: Use the top left entry \(a_{11} = 2\) as a first pivot. An auxiliary step, to avoid fractions, is to scale the second row by a factor of 2: \[ \left[\begin{array}{rrrr} 2 & -1 & -1 & 2\\ 1 & 2 & 4 & 4\\ 4 & -2 & -4 & 6 \end{array}\right] \quad \begin{array}{r} [R_{1}] \\ [ 2 R_{2}] \\ [R_{3}] \end{array} \quad\sim \left[\begin{array}{rrrr} 2 & -1 & -1 & 2\\ 2 & 4 & 8 & 8\\ 4 & -2 & -4 & 6 \end{array}\right] \quad \begin{array}{r} [R_{1}] \\ [R_{2} -1R_{1}] \\ [R_{3} -2R_{1}] \end{array} \nonumber\] \[ \sim \left[\begin{array}{rrrr} 2 & -1 & -1 & 2\\ 0 & 5 & 9 & 6\\ 0 & 0 & -2 & 2 \end{array}\right] \quad \begin{array}{r} [R_{1}] \\ [R_{2}] \\ [ -\frac12 R_{3}] \end{array} \quad\sim \left[\begin{array}{rrrr} 2 & -1 & -1 & 2\\ 0 & 5 & 9 & 6\\ 0 & 0 & 1 & -1 \end{array}\right] \quad \begin{array}{r} [R_{1}+ 1R_{3}] \\ [R_{2} -9R_{3}] \\ [R_{3}] \end{array} \nonumber\] \[ \sim \left[\begin{array}{rrrr} 2 & -1 & 0 & 1\\ 0 & 5 & 0 & 15\\ 0 & 0 & 1 & -1 \end{array}\right] \quad \begin{array}{r} [R_{1}] \\ [ \frac15 R_{2}] \\ [R_{3}] \end{array} \quad\sim \left[\begin{array}{rrrr} 2 & -1 & 0 & 1\\ 0 & 1 & 0 & 3\\ 0 & 0 & 1 & -1 \end{array}\right] \quad \begin{array}{r} [R_{1}+ 1R_{2}] \\ [R_{2}] \\ [R_{3}] \end{array} \nonumber\] \[ \sim \left[\begin{array}{rrrr} 2 & 0 & 0 & 4\\ 0 & 1 & 0 & 3\\ 0 & 0 & 1 & -1 \end{array}\right] \quad \begin{array}{r} [ \frac12 R_{1}] \\ [R_{2}] \\ [R_{3}] \end{array} \quad \sim \left[\begin{array}{rrrr} 1 & 0 & 0 & 2\\ 0 & 1 & 0 & 3\\ 0 & 0 & 1 & -1 \end{array}\right]. \nonumber\] Alternatively, we may start with a row swap: \[ \left[\begin{array}{rrrr} 2 & -1 & -1 & 2\\ 1 & 2 & 4 & 4\\ 4 & -2 & -4 & 6 \end{array}\right] \quad \begin{array}{r} [R_{1}\leftrightarrow R_{2}] \\ [R_{2}\leftrightarrow R_{1}] \\ [R_{3}] \end{array} \sim \left[\begin{array}{rrrr} 1 & 2 & 4 & 4\\2 & -1 & -1 & 2\\4 & -2 & -4 & 6 \end{array}\right] \quad \begin{array}{r} [R_{1}] \\ [R_{2} -2R_{1}] \\ [R_{3} -4R_{1}] \end{array} \nonumber\] \[ \sim\left[\begin{array}{rrrr} 1 & 2 & 4 & 4\\ 0 & -5 & -9 & -6\\ 0 & -10 & -20 & -10 \end{array}\right] \quad \begin{array}{r} [R_{1}] \\ [R_{2}] \\ [ -\frac{1}{10} R_{3}] \end{array} \sim \left[\begin{array}{rrrr} 1 & 2 & 4 & 4\\ 0 & -5 & -9 & -6\\ 0 & 1 & 2 & 1 \end{array}\right] \quad \begin{array}{r} [R_{1}] \\ [R_{2}\leftrightarrow R_{3}] \\ [R_{3}\leftrightarrow R_{2}] \end{array} \nonumber\] \[ \sim\left[\begin{array}{rrrr} 1 & 2 & 4 & 4\\ 0 & 1 & 2 & 1 \\ 0 & -5 & -9 & -6 \end{array}\right] \quad \begin{array}{r} [R_{1} -2R_{2}] \\ [R_{2}] \\ [R_{3}+ 5R_{2}] \end{array} \sim \left[\begin{array}{rrrr} 1 & 0 & 0 & 2\\ 0 & 1 & 2 & 1 \\ 0 & 0 & 1 & -1 \end{array}\right] \quad \begin{array}{r} [R_{1}] \\ [R_{2} -2R_{3}] \\ [R_{3}] \end{array} \nonumber\] \[ \sim\left[\begin{array}{rrrr} 1 & 0 & 0 & 2\\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & -1 \end{array}\right] , \text{ the same outcome as before.} \nonumber\]

    The following algorithm summarizes the solution method for a linear system.

    Algorithm \(\PageIndex{48}\)
    Any system of linear equations can be solved as follows.
    1. Write down the augmented matrix corresponding to the system.
    2. Row reduce the augmented matrix to reduced echelon form.
    3. If there is a pivot in the last column (the column 'behind the bar'), the system is inconsistent.
4. If the last column does not contain a pivot: write down the corresponding system of equations and express the variables in the pivot columns in terms of the other variables (if any). These other variables are free variables.
    The word 'elimination' refers to the fact that the zeros that are created in the augmented matrix correspond to the elimination of variables from the equations.
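The whole algorithm fits in a few lines when step 2 is delegated to a library. A sketch (our own wrapper around SymPy; solve_system is a hypothetical helper name), run on two systems from earlier examples:

```python
from sympy import Matrix, linsolve, symbols

def solve_system(A, b, unknowns):
    aug = Matrix(A).row_join(Matrix(b))  # step 1: augmented matrix
    R, pivots = aug.rref()               # step 2: reduced echelon form
    if aug.cols - 1 in pivots:           # step 3: pivot behind the bar
        return "inconsistent"
    return linsolve(aug, unknowns)       # step 4: read off the solution

x1, x2, x3 = symbols('x1 x2 x3')
# The inconsistent system of Example 15:
print(solve_system([[1, 4, -5], [2, 7, -2], [1, 3, 3]], [4, 9, 6], (x1, x2, x3)))
# The system of Example 14, with unique solution (14, -6, -4):
print(solve_system([[1, 3, -2], [3, 7, -2], [2, 10, -9]], [4, 8, 4], (x1, x2, x3)))
```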

The following important general statement about the solution set of linear systems has been lurking in the background all along.

    Theorem \(\PageIndex{49}\)
     
    A system of linear equations has either zero, or one, or infinitely many solutions. In the case when there is exactly one solution, we speak of a unique solution.
    Proof
This depends on the outcome of the elimination method of Algorithm \(\PageIndex{48}\). If case 3 occurs, the number of solutions is zero. If case 4 occurs and there are no free variables, there is exactly one solution. If there is at least one free variable, every choice of values for the free variables yields a different solution, so the solution set contains infinitely many solutions.

Note that to answer the question of which of the three cases -- zero solutions, a unique solution or infinitely many solutions -- holds, it suffices to reduce the augmented matrix to just any echelon form. From this echelon form it can already be decided whether the system is consistent, and if it is, whether there are free variables.

    Example \(\PageIndex{50}\)
     
We want to find out whether the linear system \[ \left\{\begin{array} {rr} x_1 & + & 3x_2 & +&x_3 &=& 5 \\ 2x_1 & + & x_2 & -&x_3 &=& 4 \\ 3x_1 & - & x_2 & -&3x_3 &=& 3 \end{array} \right. \nonumber\] has zero, exactly one, or infinitely many solutions. We row reduce the augmented matrix just as far as necessary: \[ \left[\begin{array}{rrr | r} 1 & 3 & 1 & 5\\ 2 & 1 & -1 & 4\\ 3 & -1 & -3 & 3 \end{array}\right] \quad \begin{array}{r} [R_{1}] \\ [R_{2} -2R_{1}] \\ [R_{3} -3R_{1}] \end{array} \sim \left[\begin{array}{rrr | r} 1 & 3 & 1 & 5\\ 0 & -5& -3 & -6\\ 0 & -10 & -6 & -12 \end{array}\right] \quad \begin{array}{r} [R_{1}] \\ [R_{2}] \\ [R_{3} -2R_{2}] \end{array} \sim \left[\begin{array}{rrr | r} 1 & 3 & 1 & 5\\ 0 & -5& -3 & -6\\ 0 & 0 & 0 & 0 \end{array}\right] \nonumber\] As the system is clearly consistent and there will be a free variable, we can conclude that the system has infinitely many solutions.
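The same conclusion can be reached by comparing ranks (the number of pivots in an echelon form): the system is consistent exactly when the coefficient matrix and the augmented matrix have the same rank, and the solution is unique exactly when that common rank also equals the number of unknowns. This criterion is not used in the text, but a quick SymPy check illustrates it:

```python
from sympy import Matrix

A = Matrix([[1, 3, 1],
            [2, 1, -1],
            [3, -1, -3]])
b = Matrix([5, 4, 3])
aug = A.row_join(b)
# rank(A) == rank([A|b]) == 2 < 3 unknowns: consistent, with a free variable.
print(A.rank(), aug.rank(), A.cols)  # 2 2 3 -> infinitely many solutions
```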
    Example \(\PageIndex{51}\)
     
This example stresses that the conclusion whether a linear system has zero, one or infinitely many solutions is basically a matter of the structure of an echelon matrix that is equivalent to the augmented matrix of the system. Suppose the augmented matrices of three linear systems can be row reduced to the following matrices \[ E_1 = \left[\begin{array}{rrr | r} \blacksquare & \ast & \ast & \ast \\ 0 & \blacksquare & \ast & \ast \\ 0 & 0 &\blacksquare & \ast \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right] , \quad E_2 = \left[\begin{array}{rrrr | r} \blacksquare & \ast & \ast & \ast & \ast \\ 0 & \blacksquare & \ast & \ast & \ast \\ 0 & 0 & 0 &\blacksquare & \ast \\ 0 & 0 & 0 &0 & 0 \end{array}\right], \quad E_3 = \left[\begin{array}{rrr | r} \blacksquare & \ast & \ast & \ast \\ 0 & \blacksquare & \ast & \ast \\ 0 & 0 & \blacksquare & \ast \\ 0 & 0 & 0 & \blacksquare \\ 0 & 0 &0 & 0 \end{array}\right] \nonumber\] where \(\blacksquare\) denotes an arbitrary nonzero number, and \(\ast\) just any real number. Then the first system has a unique solution, the second system has infinitely many solutions, and the third system is inconsistent.
    Exercise \(\PageIndex{52}\)
     
    Explain why.
    Proposition \(\PageIndex{53}\)
     
    A linear system of \(m\) equations in \(n\) unknowns can only have a unique solution if \(m \geq n\), i.e. if the number of unknowns is at most equal to the number of equations.
    Proof
Let \[ [\,A \mid \mathbf{b}\,] \nonumber\] be the augmented matrix of the system, and \[ [\,E \mid \mathbf{c}\,] \nonumber\] an equivalent echelon matrix. Here \(E\) is an \(m\times n\) echelon matrix. Since the pivots are in different rows, there are at most \(m\) pivots. If \(m < n\), there must be at least one of the first \(n\) columns without a pivot. This implies that either the system is inconsistent (zero solutions) or the system has a solution with at least one free variable (infinitely many solutions). Thus a unique solution is impossible for a system of \(m\) equations in \(n\) unknowns with \(m < n\).
    Note \(\PageIndex{54}\)
     
    A geometric interpretation of the last proposition: suppose \(n = 3\). The solution set of a linear equation \[ a_1x_1 + a_2x_2 + a_3x_3 = b \nonumber\] can be seen as a plane in \(\mathbb{R}^3\). The previous proposition tells us: the intersection of \(m\) planes in \(\mathbb{R}^3\), where \(m < 3\), cannot be a single point.

    2.1: Systems of Linear Equations is shared under a CC BY license and was authored, remixed, and/or curated by LibreTexts.
