2.3: Elementary Row Operations

    Elementary row operations (EROs) are systems of linear equations relating the old and new rows in Gaussian elimination.

    Example \(\PageIndex{1}\): Keeping track of EROs with equations between rows

    We will refer to the new \(k\)th row as \(R'_k\) and the old \(k\)th row as \(R_k\).

    \[\left(\begin{array}{rrr|r} 0 & 1 & 1 & 7 \\ 2 & 0 & 0 & 4 \\ 0 & 0 & 1 & 4\end{array}\right) \stackrel{R'_1 = 0R_1 + R_2 + 0R_3}{\stackrel{R'_2 = R_1 + 0R_2 + 0R_3}{\stackrel{R'_3 = 0R_1 + 0R_2 + R_3}{\sim}}} \left(\begin{array}{rrr|r} 2 & 0 & 0 & 4 \\ 0 & 1 & 1 & 7 \\ 0 & 0 & 1 & 4 \end{array}\right)\]

    \[\stackrel{R'_1 = \frac{1}{2}R_1 + 0R_2 + 0R_3}{\stackrel{R'_2 = 0R_1 + R_2 + 0R_3}{\stackrel{R'_3 = 0R_1 + 0R_2 + R_3}{\sim}}} \left(\begin{array}{rrr|r} 1 & 0 & 0 & 2 \\ 0 & 1 & 1 & 7 \\ 0 & 0 & 1 & 4 \end{array}\right)\]

    \[\stackrel{R'_1 = R_1 + 0R_2 + 0R_3}{\stackrel{R'_2 = 0R_1 + R_2 - R_3}{\stackrel{R'_3 = 0R_1 + 0R_2 + R_3}{\sim}}} \left(\begin{array}{rrr|r} 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 4 \end{array}\right)\]

    Elementary Row Operations as Matrices

    The matrix describing the system of equations between rows performs the corresponding ERO on the augmented matrix:

    Example \(\PageIndex{2}\): Performing EROs with matrices

    \[\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1\end{pmatrix}\left(\begin{array}{rrr|r} 0 & 1 & 1 & 7 \\ 2 & 0 & 0 & 4 \\ 0 & 0 & 1 & 4 \end{array}\right) = \left(\begin{array}{rrr|r} 2 & 0 & 0 & 4 \\ 0 & 1 & 1 & 7 \\ 0 & 0 & 1 & 4 \end{array}\right)\]

    \[\begin{pmatrix} \frac{1}{2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{pmatrix}\left(\begin{array}{rrr|r} 2 & 0 & 0 & 4 \\ 0 & 1 & 1 & 7 \\ 0 & 0 & 1 & 4 \end{array}\right) = \left(\begin{array}{rrr|r} 1 & 0 & 0 & 2 \\ 0 & 1 & 1 & 7 \\ 0 & 0 & 1 & 4 \end{array}\right)\]

    \[\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1\end{pmatrix}\left(\begin{array}{rrr|r} 1 & 0 & 0 & 2 \\ 0 & 1 & 1 & 7 \\ 0 & 0 & 1 & 4 \end{array}\right) = \left(\begin{array}{rrr|r} 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 4 \end{array}\right)\]

    This obviously involved more writing than Gaussian elimination.
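    It did, however, put each step into a form a machine can check. The following sketch is ours, not the text's: it assumes numpy is available and verifies the first product above.

        import numpy as np

        # Row-swap ERO matrix from the first step of Example 2.
        E = np.array([[0., 1., 0.],
                      [1., 0., 0.],
                      [0., 0., 1.]])

        # The augmented matrix (M | b) from Example 1.
        Mb = np.array([[0., 1., 1., 7.],
                       [2., 0., 0., 4.],
                       [0., 0., 1., 4.]])

        print(E @ Mb)   # first two rows swapped, as claimed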

    The point here is that realizing EROs as matrices allows us to make concrete the notion of "dividing by a matrix" alluded to in Chapter 1; we can now perform manipulations on both sides of an equation in a familiar way.

    Example \(\PageIndex{3}\): Undoing \(A\) in \(Ax=b\) slowly, using \(A=6=3\cdot 2\)

    \[6x = 12\]

    \[\Leftrightarrow 3^{-1}6x = 3^{-1}12\]

    \[\Leftrightarrow 2x = 4\]

    \[\Leftrightarrow 2^{-1}2x = 2^{-1}4\]

    \[\Leftrightarrow 1x = 2\]

    In particular, matrices corresponding to EROs undo a matrix step by step.

    Example \(\PageIndex{4}\): Undoing \(A\) in \(Ax=b\) slowly, using \(A=M=...\)

    \[\begin{pmatrix} 0 &1 &1 \\ 2 &0 &0 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 7 \\ 4 \\ 4 \end{pmatrix}\]

    \[\Leftrightarrow\begin{pmatrix} 0 &1 &0 \\ 1 &0 &0 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} 0 &1 &1 \\ 2 &0 &0 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 &1 &0 \\ 1 &0 &0 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} 7 \\ 4 \\ 4 \end{pmatrix}\]

    \[\Leftrightarrow\begin{pmatrix} 2 &0 &0 \\ 0 &1 &1 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 4 \\ 7 \\ 4 \end{pmatrix}\]

    \[\Leftrightarrow\begin{pmatrix}\frac{1}{2} &0 &0 \\ 0 &1 &0 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} 2 &0 &0 \\ 0 &1 &1 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix}\frac{1}{2} &0 &0 \\ 0 &1 &0 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} 4 \\ 7 \\ 4 \end{pmatrix}\]

    \[\Leftrightarrow\begin{pmatrix} 1 &0 &0 \\ 0 &1 &1 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 2 \\ 7 \\ 4 \end{pmatrix}\]

    \[\Leftrightarrow\begin{pmatrix} 1 &0 &0 \\ 0 &1 &-1 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} 1 &0 &0 \\ 0 &1 &1 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 1 &0 &0 \\ 0 &1 &-1 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} 2 \\ 7 \\ 4 \end{pmatrix}\]

    \[\Leftrightarrow\begin{pmatrix} 1 &0 &0 \\ 0 &1 &0 \\ 0 &0 &1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 2 \\ 3 \\ 4 \end{pmatrix}\]

    This is another way of thinking about what happens during elimination, one that feels more like elementary algebra: you "do something to both sides of an equation" until you have a solution.
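    This step-by-step undoing is easy to carry out by machine. Here is a minimal sketch of ours (assuming numpy), replaying Example \(\PageIndex{4}\) by hitting both sides of \(Mx=b\) with each ERO matrix in turn.

        import numpy as np

        M = np.array([[0., 1., 1.],
                      [2., 0., 0.],
                      [0., 0., 1.]])
        b = np.array([7., 4., 4.])

        E1 = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 1.]])   # swap rows 1 and 2
        E2 = np.array([[.5, 0., 0.], [0., 1., 0.], [0., 0., 1.]])   # halve row 1
        E3 = np.array([[1., 0., 0.], [0., 1., -1.], [0., 0., 1.]])  # subtract row 3 from row 2

        for E in (E1, E2, E3):
            M, b = E @ M, E @ b   # the same operation on both sides

        print(M)   # the identity matrix
        print(b)   # the solution (2, 3, 4)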

    Recording EROs in \([M | I ]\)

    Just as we put together \(3^{-1}2^{-1}=6^{-1}\) to get a single thing to apply to both sides of \(6x=12\) to undo \(6\), we should put together multiple EROs to get a single thing that undoes our matrix. To do this, augment by the identity matrix (not just a single column) and then perform Gaussian elimination. There is no need to write the EROs as systems of equations or as matrices while doing this.

    Example \(\PageIndex{5}\): Collecting EROs that undo a matrix

    \begin{eqnarray*}
    \left(\begin{array}{ccc|ccc}
    0 & 1 & 1 &1 &0 &0\\
    2 & 0 & 0 &0&1&0\\
    0& 0 & 1 &0 &0 &1\\
    \end{array} \right)
    \sim
    &\left(\begin{array}{ccc|ccc}
    2 & 0 & 0 &0&1&0\\
    0 & 1 & 1 &1 &0 &0\\
    0& 0 & 1 &0 &0 &1\\
    \end{array} \right)&
    \\
    \sim
    &\left(\begin{array}{ccc|ccc}
    1 & 0 & 0 &0&\frac12&0\\
    0 & 1 & 1 &1 &0 &0\\
    0& 0 & 1 &0 &0 &1\\
    \end{array} \right)&
    \sim
    \left(\begin{array}{ccc|ccc}
    1 & 0 & 0 &0&\frac12&0\\
    0 & 1 & 0 &1 &0 &-1\\
    0& 0 & 1 &0 &0 &1\\
    \end{array} \right)
    \end{eqnarray*}

    As we changed the left slot from the matrix \(M\) to the identity matrix, the right slot changed from the identity matrix to the matrix which undoes \(M\).
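    This recipe is mechanical enough to code directly. The function below is a sketch of ours (numpy assumed, and \(M\) assumed invertible): it row reduces the block \((M|I)\) using the three kinds of EROs and reads the answer off the right slot.

        import numpy as np

        def undo_matrix(M):
            n = M.shape[0]
            A = np.hstack([M.astype(float), np.eye(n)])   # the block (M | I)
            for c in range(n):
                p = c + np.argmax(np.abs(A[c:, c]))       # find a usable pivot
                A[[c, p]] = A[[p, c]]                     # row swap
                A[c] /= A[c, c]                           # scalar multiplication
                for r in range(n):
                    if r != c:
                        A[r] -= A[r, c] * A[c]            # row sum
            return A[:, n:]                               # the block is now (I | M^{-1})

        M = np.array([[0., 1., 1.],
                      [2., 0., 0.],
                      [0., 0., 1.]])
        print(undo_matrix(M))   # rows (0, 1/2, 0), (1, 0, -1), (0, 0, 1), as above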

    Example \(\PageIndex{6}\): Checking that one matrix undoes another

    \begin{eqnarray*}
    \left(\begin{array}{ccc}
    0&\frac12&0\\
    1 &0 &-1\\
    0 &0 &1\\
    \end{array} \right)
    \left(\begin{array}{ccc}
    0&1&1\\
    2 &0 &0\\
    0 &0 &1\\
    \end{array} \right)
    =
    \left(\begin{array}{ccc}
    1 &0 &0\\
    0 &1 &0\\
    0 &0 &1\\
    \end{array} \right) \, .
    \end{eqnarray*}

    If the matrices are composed in the opposite order, the result is the same.

    \begin{eqnarray*}
    \left(\begin{array}{ccc}
    0&1&1\\
    2 &0 &0\\
    0 &0 &1\\
    \end{array} \right)
    \left(\begin{array}{ccc}
    0&\frac12&0\\
    1 &0 &-1\\
    0 &0 &1\\
    \end{array} \right)
    =
    \left(\begin{array}{ccc}
    1 &0 &0\\
    0 &1 &0\\
    0 &0 &1\\
    \end{array} \right) \, .
    \end{eqnarray*}

    In abstract generality, let \(M\) be some matrix and, as always, let \(I\) stand for the identity matrix. Imagine the process of performing elementary row operations to bring \(M\) to the identity matrix.

    \[(M | I) \sim ( E_1M| E_1)\sim (E_2E_1 M | E_2 E_1) \sim \cdots \sim (I | \cdots E_2E_1 )\]

    The ellipses stand for additional EROs. The result is a product of ERO matrices which undoes \(M\):

    \[\cdots E_2 E_1 M = I \]

    This is only possible if the RREF of \(M\) is the identity matrix. In that case, we say \(M\) is invertible.

    Much use is made of the fact that invertible matrices can be undone with EROs. To begin with, since each elementary row operation has an inverse,

    \[M= E_1^{-1} E_2^{-1} \cdots\]

    while the inverse of \(M\) is

    \[M^{-1}=\cdots E_2 E_1 \]

    This is symbolically verified as

    \[M^{-1}M=\cdots E_2 E_1\, E_1^{-1} E_2^{-1} \cdots=\cdots E_2 \, E_2^{-1} \cdots = \cdots = I\]

    Thus, if \(M\) is invertible, then \(M\) can be expressed as a product of EROs. (The same is true for its inverse.) This has the feel of the fundamental theorem of arithmetic (integers can be expressed as products of primes) or the fundamental theorem of algebra (polynomials can be expressed as products of first order polynomials); EROs are the building blocks of invertible matrices.

    Three Kinds of EROs, Three Kinds of Matrices

    We now work toward concrete examples and applications. It is surprisingly easy to translate between EROs as descriptions of rows and EROs as matrices. The matrices corresponding to the three kinds of EROs are close in form to the identity matrix:

    • Row Swap: Identity matrix with two rows swapped.
    • Scalar Multiplication: Identity matrix with one diagonal entry not 1.
    • Row Sum: Identity matrix with one off-diagonal entry not 0.

    Example \(\PageIndex{7}\): Correspondences between EROs and their matrices

    The row swap matrix that swaps the 2nd and 4th rows is the identity matrix with the 2nd and 4th rows swapped:

    \[\begin{pmatrix} 1 &0 &0 &0 &0 \\ 0 &0 &0 &1 &0 \\ 0 &0 &1 &0 &0 \\ 0 &1 &0 &0 &0 \\ 0 &0 &0 &0 &1 \end{pmatrix}\]

    The scalar multiplication matrix that replaces the 3rd row with 7 times the 3rd row is the identity matrix with a 7 in place of the 1 in the 3rd row:

    \[
    \begin{pmatrix}
    1&0&0&0\\
    0&1&0&0\\
    0&0&7&0\\
    0&0&0&1\\
    \end{pmatrix}
    \]

    The row sum matrix that replaces the 4th row with the 4th row plus 9 times the 2nd row is the identity matrix with a 9 in the 4th row, 2nd column:

    \[
    \begin{pmatrix}
    1&0&0&0&0&0&0\\
    0&1&0&0&0&0&0\\
    0&0&1&0&0&0&0\\
    0&9&0&1&0&0&0\\
    0&0&0&0&1&0&0\\
    0&0&0&0&0&1&0\\
    0&0&0&0&0&0&1\\
    \end{pmatrix}
    \]
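    These correspondences are easy to automate. The helpers below are a sketch of ours (numpy assumed; rows are indexed from zero in code, unlike the 1-based counting in the prose): each builds an ERO matrix by doing the corresponding thing to the identity matrix.

        import numpy as np

        def row_swap(n, i, j):
            E = np.eye(n)
            E[[i, j]] = E[[j, i]]   # identity with rows i and j swapped
            return E

        def row_scale(n, i, c):
            E = np.eye(n)
            E[i, i] = c             # identity with one diagonal entry c
            return E

        def row_sum(n, i, j, c):
            E = np.eye(n)
            E[i, j] = c             # identity with c in row i, column j
            return E

        # The three matrices of Example 7 (remember the zero-based indexing):
        print(row_swap(5, 1, 3))
        print(row_scale(4, 2, 7))
        print(row_sum(7, 3, 1, 9))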

    We can write an explicit factorization of a matrix into EROs by keeping track of the EROs used in getting to RREF.

    Example \(\PageIndex{8}\): Express \(M\) from Example \(\PageIndex{5}\) as a product of EROs

    Note that in that example one ERO of each of the three kinds is used, in the order just given. The elimination looked like

    \begin{eqnarray*}
    M=
    \left(\begin{array}{ccc}
    0 & 1 & 1 \\
    2 & 0 & 0 \\
    0& 0 & 1
    \end{array} \right)
    \stackrel{E_1}{\sim}
    \left(\begin{array}{ccc}
    2 & 0 & 0 \\
    0 & 1 & 1 \\
    0& 0 & 1
    \end{array} \right)
    \stackrel{E_2}{\sim}
    \left(\begin{array}{ccc}
    1 & 0 & 0 \\
    0 & 1 & 1 \\
    0& 0 & 1
    \end{array} \right)
    \stackrel{E_3}{\sim}
    \left(\begin{array}{ccc}
    1 & 0 & 0 \\
    0 & 1 & 0 \\
    0& 0 & 1
    \end{array} \right)
    =I
    \end{eqnarray*}

    where the ERO matrices are

    \begin{eqnarray*}
    E_1
    = \left(\begin{array}{ccc}
    0 &1 &0\\
    1 &0 &0\\
    0 &0 &1\\
    \end{array} \right)
    ,
    E_2
    = \left(\begin{array}{ccc}
    \frac12 &0 &0\\
    0 &1 &0\\
    0 &0 &1\\
    \end{array} \right) ,
    E_3
    = \left(\begin{array}{ccc}
    1 &0 &0\\
    0 &1 & -1\\
    0 &0 &1\\
    \end{array} \right) \,.
    \end{eqnarray*}

    The inverses of the ERO matrices (corresponding to the reverse row manipulations) are

    \begin{eqnarray*}
    E_1^{-1}
    = \left(\begin{array}{ccc}
    0 &1 &0\\
    1 &0 &0\\
    0 &0 &1\\
    \end{array} \right)
    ,
    E_2^{-1}
    = \left(\begin{array}{ccc}
    2 &0 &0\\
    0 &1 &0\\
    0 &0 &1\\
    \end{array} \right) ,
    E_3^{-1}
    = \left(\begin{array}{ccc}
    1 &0 &0\\
    0 &1 & 1\\
    0 &0 &1\\
    \end{array} \right) \,.
    \end{eqnarray*}

    Multiplying these gives

    \begin{eqnarray*}
    E_1^{-1}E_2^{-1}E_3^{-1}
    &=&
    \left(\begin{array}{ccc}
    0 &1 &0\\
    1 &0 &0\\
    0 &0 &1\\
    \end{array}\right)
    \left(\begin{array}{ccc}
    2 &0 &0\\
    0 &1 &0\\
    0 &0 &1\\
    \end{array}\right)
    \left(\begin{array}{ccc}
    1 &0 &0\\
    0 &1 & 1\\
    0 &0 &1\\
    \end{array}\right)
    \\&=&
    \left(\begin{array}{ccc}
    0 &1 &0\\
    1 &0 &0\\
    0 &0 &1\\
    \end{array}\right)
    \left(\begin{array}{ccc}
    2 &0 &0\\
    0 &1 &1\\
    0 &0 &1\\
    \end{array}\right)
    =
    \left(\begin{array}{ccc}
    0 &1 &1\\
    2 &0 &0\\
    0 &0 &1\\
    \end{array}\right) = M.
    \end{eqnarray*}

    \(LU\), \(LDU\), and \(PLDU\) Factorizations

    This process of elimination can be stopped halfway to obtain decompositions frequently used in large computations in science and engineering. The first half of the elimination process eliminates the entries below the diagonal, leaving a matrix called upper triangular. The ERO matrices that perform this part of the elimination are lower triangular, as are their inverses. Putting together the upper triangular and lower triangular parts, one obtains the so-called \(LU\) factorization.

    Example \(\PageIndex{9}\): \(LU\) factorization

    \begin{eqnarray*}
    M=
    \begin{pmatrix}
    2&0&-3&1\\
    0&1&2&2\\
    -4&0&9&2\\
    0&-1&1&-1\\
    \end{pmatrix}
    &\stackrel{E_1}{\sim}&
    \begin{pmatrix}
    2&0&-3&1\\
    0&1&2&2\\
    0&0&3&4\\
    0&-1&1&-1\\
    \end{pmatrix}
    \\
    &\stackrel{E_2}{\sim}&
    ~~\begin{pmatrix}
    2&0&-3&1\\
    0&1&2&2\\
    0&0&3&4\\
    0&0&3&1\\
    \end{pmatrix}
    ~~\stackrel{E_3}{\sim}~
    \begin{pmatrix}
    2&0&-3&1\\
    0&1&2&2\\
    0&0&3&4\\
    0&0&0&-3\\
    \end{pmatrix}
    =:U
    \end{eqnarray*}

    where the EROs and their inverses are

    \begin{eqnarray*}
    E_1=
    \begin{pmatrix}
    1&0&0&0\\
    0&1&0&0\\
    2&0&1&0\\
    0&0&0&1\\
    \end{pmatrix} \, ,~~~~
    E_2=
    \begin{pmatrix}
    1&0&0&0\\
    0&1&0&0\\
    0&0&1&0\\
    0&1&0&1\\
    \end{pmatrix} \, ,~~
    E_3=
    \begin{pmatrix}
    1&0&0&0\\
    0&1&0&0\\
    0&0&1&0\\
    0&0&-1&1\\
    \end{pmatrix} \,
    \\
    E_1^{-1}=
    \begin{pmatrix}
    1&0&0&0\\
    0&1&0&0\\
    -2&0&1&0\\
    0&0&0&1\\
    \end{pmatrix} , \,
    E_2^{-1}=
    \begin{pmatrix}
    1&0&0&0\\
    0&1&0&0\\
    0&0&1&0\\
    0&-1&0&1\\
    \end{pmatrix} , \,
    E_3^{-1}=
    \begin{pmatrix}
    1&0&0&0\\
    0&1&0&0\\
    0&0&1&0\\
    0&0&1&1\\
    \end{pmatrix} \, .
    \end{eqnarray*}

    Applying the inverses of the EROs to both sides of the equality \(U=E_3E_2E_1M\) gives \(M=E_1^{-1}E_2^{-1}E_3^{-1}U\), or

    \begin{eqnarray*}
    \begin{pmatrix}
    2&0&-3&1\\
    0&1&2&2\\
    -4&0&9&2\\
    0&-1&1&-1\\
    \end{pmatrix}
    &=&
    \begin{pmatrix}
    1&0&0&0\\
    0&1&0&0\\
    -2&0&1&0\\
    0&0&0&1\\
    \end{pmatrix}
    \begin{pmatrix}
    1&0&0&0\\
    0&1&0&0\\
    0&0&1&0\\
    0&-1&0&1\\
    \end{pmatrix}
    \begin{pmatrix}
    1&0&0&0\\
    0&1&0&0\\
    0&0&1&0\\
    0&0&1&1\\
    \end{pmatrix}
    \begin{pmatrix}
    2&0&-3&1\\
    0&1&2&2\\
    0&0&3&4\\
    0&0&0&-3\\
    \end{pmatrix}
    \\
    &=&
    \begin{pmatrix}
    1&0&0&0\\
    0&1&0&0\\
    -2&0&1&0\\
    0&0&0&1\\
    \end{pmatrix}
    \begin{pmatrix}
    1&0&0&0\\
    0&1&0&0\\
    0&0&1&0\\
    0&-1&1&1\\
    \end{pmatrix}
    \begin{pmatrix}
    2&0&-3&1\\
    0&1&2&2\\
    0&0&3&4\\
    0&0&0&-3\\
    \end{pmatrix}
    \\
    &=&
    \begin{pmatrix}
    1&0&0&0\\
    0&1&0&0\\
    -2&0&1&0\\
    0&-1&1&1\\
    \end{pmatrix}
    \begin{pmatrix}
    2&0&-3&1\\
    0&1&2&2\\
    0&0&3&4\\
    0&0&0&-3\\
    \end{pmatrix} \, .
    \end{eqnarray*}

    This is a lower triangular matrix times an upper triangular matrix.
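    In practice this factorization is rarely computed by hand. As a sketch (ours, assuming numpy and scipy are available): scipy.linalg.lu performs this elimination with partial pivoting, so it also returns a permutation \(P\) with \(M = PLU\), and its row choices need not match the hand computation above.

        import numpy as np
        from scipy.linalg import lu

        M = np.array([[ 2., 0., -3.,  1.],
                      [ 0., 1.,  2.,  2.],
                      [-4., 0.,  9.,  2.],
                      [ 0., -1., 1., -1.]])

        P, L, U = lu(M)
        print(np.allclose(P @ L @ U, M))   # True
        print(L)   # lower triangular with ones on the diagonal
        print(U)   # upper triangular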

    What if we stop at a different point in elimination? Next, we could multiply rows so that the entries on the diagonal are 1. Note that the EROs that do this are diagonal. This gives a slightly different factorization.

    Example \(\PageIndex{10}\): \(LDU\) factorization building from previous example

    \begin{eqnarray*}
    M=
    \begin{pmatrix}
    2&0&-3&1\\
    0&1&2&2\\
    -4&0&9&2\\
    0&-1&1&-1\\
    \end{pmatrix}
    \stackrel{E_3E_2E_1}{\sim}
    \begin{pmatrix}
    2&0&-3&1\\
    0&1&2&2\\
    0&0&3&4\\
    0&0&0&-3\\
    \end{pmatrix}
    \stackrel{E_4}{\sim}
    \begin{pmatrix}
    1&0&\frac{-3}{2}&\frac{1}{2}\\
    0&1&2&2\\
    0&0&3&4\\
    0&0&0&-3\\
    \end{pmatrix}
    \\
    \stackrel{E_5}{\sim}
    \begin{pmatrix}
    1&0&\frac{-3}{2}&\frac{1}{2}\\
    0&1&2&2\\
    0&0&1&\frac43\\
    0&0&0&-3\\
    \end{pmatrix}
    \stackrel{E_6}{\sim}
    \begin{pmatrix}
    1&0&\frac{-3}{2}&\frac{1}{2}\\
    0&1&2&2\\
    0&0&1&\frac43\\
    0&0&0&1\\
    \end{pmatrix}
    =:U
    \end{eqnarray*}

    The corresponding elementary matrices are

    \begin{eqnarray*}E_4=
    \begin{pmatrix}
    \frac12&0&0&0\\
    0&1&0&0\\
    0&0&1&0\\
    0&0&0&1\\
    \end{pmatrix} , \, ~~
    E_5=
    \begin{pmatrix}
    1&0&0&0\\
    0&1&0&0\\
    0&0&\frac13&0\\
    0&0&0&1\\
    \end{pmatrix} , \, ~~
    E_6=
    \begin{pmatrix}
    1&0&0&0\\
    0&1&0&0\\
    0&0&1&0\\
    0&0&0&-\frac13\\
    \end{pmatrix} , \,
    \\
    E_4^{-1}=
    \begin{pmatrix}
    2&0&0&0\\
    0&1&0&0\\
    0&0&1&0\\
    0&0&0&1\\
    \end{pmatrix} , \,
    E_5^{-1}=
    \begin{pmatrix}
    1&0&0&0\\
    0&1&0&0\\
    0&0&3&0\\
    0&0&0&1\\
    \end{pmatrix} , \,
    E_6^{-1}=
    \begin{pmatrix}
    1&0&0&0\\
    0&1&0&0\\
    0&0&1&0\\
    0&0&0&-3\\
    \end{pmatrix} , \,
    \end{eqnarray*}

    The equation \(U=E_6E_5E_4E_3E_2E_1 M\) can be rearranged into

    \[M=(E_1^{-1}E_2^{-1}E_3^{-1})(E_4^{-1}E_5^{-1}E_6^{-1})U.\]

    We calculated the product of the first three factors in the previous example; it was named \(L\) there, and we will reuse that name here. The product of the next three factors is diagonal, and we will name it \(D\). The last factor we again name \(U\) (the name means something different in this example than in the last example). The \(LDU\) factorization of our matrix is

    \begin{eqnarray*}
    \begin{pmatrix}
    2&0&-3&1\\
    0&1&2&2\\
    -4&0&9&2\\
    0&-1&1&-1\\
    \end{pmatrix}
    =
    \begin{pmatrix}
    1&0&0&0\\
    0&1&0&0\\
    -2&0&1&0\\
    0&-1&1&1\\
    \end{pmatrix}
    \begin{pmatrix}
    2&0&0&0\\
    0&1&0&0\\
    0&0&3&0\\
    0&0&0&-3\\
    \end{pmatrix}
    \begin{pmatrix}
    1&0&\frac{-3}{2}&\frac{1}{2}\\
    0&1&2&2\\
    0&0&1&\frac43\\
    0&0&0&1\\
    \end{pmatrix}
    \end{eqnarray*}

    The \(LDU\) factorization of a matrix is a factorization into blocks of EROs of various types: \(L\) is the product of the inverses of EROs which eliminate below the diagonal by row addition, \(D\) is the product of inverses of EROs which set the diagonal elements to 1 by row multiplication, and \(U\) is the product of inverses of EROs which eliminate above the diagonal by row addition. You may notice that one of the three kinds of row operation is missing from this story. Row exchange may very well be necessary to obtain RREF. Indeed, so far in this chapter we have been working under the tacit assumption that \(M\) can be brought to the identity by just row multiplication and row addition. If row exchange is necessary, the resulting factorization is \(PLDU\), where \(P\) is the product of inverses of EROs that perform row exchange; \(P\) appears on the far left because the rows must be swapped before the rest of the elimination can proceed. A numerical sketch follows, and then a worked example.
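    Here is a sketch of ours (numpy and scipy assumed) that pulls the blocks apart numerically: factor the diagonal out of scipy's \(U\) to get the \(PLDU\) shape just described. (scipy pivots for numerical stability, so its permutation need not match the hand computation below.)

        import numpy as np
        from scipy.linalg import lu

        M = np.array([[ 0., 1.,  2.,  2.],
                      [ 2., 0., -3.,  1.],
                      [-4., 0.,  9.,  2.],
                      [ 0., -1., 1., -1.]])

        P, L, U = lu(M)            # M = P L U, with U not yet unit triangular
        d = np.diag(U)             # the pivots; all nonzero since M is invertible
        D = np.diag(d)
        U_unit = U / d[:, None]    # unit upper triangular
        print(np.allclose(P @ L @ D @ U_unit, M))   # True: M = P L D U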

    Example \(\PageIndex{11}\): \(PLDU\) factorization, building from previous examples

    \begin{eqnarray*}
    M=
    \begin{pmatrix}
    0&1&2&2\\
    2&0&-3&1\\
    -4&0&9&2\\
    0&-1&1&-1\\
    \end{pmatrix}
    \stackrel{E_7}{\sim}
    \begin{pmatrix}
    2&0&-3&1\\
    0&1&2&2\\
    -4&0&9&2\\
    0&-1&1&-1\\
    \end{pmatrix}
    \stackrel{E_6E_5E_4E_3E_2E_1}{\sim} U
    \end{eqnarray*}
    \begin{eqnarray*}
    E_7=
    \begin{pmatrix}
    0&1&0&0\\
    1&0&0&0\\
    0&0&1&0\\
    0&0&0&1\\
    \end{pmatrix}
    =E_7^{-1}
    \end{eqnarray*}
    \begin{eqnarray*}
    M=(E_7^{-1})(E_1^{-1}E_2^{-1}E_3^{-1})(E_4^{-1}E_5^{-1}E_6^{-1})U=PLDU\\
    \end{eqnarray*}

    \begin{eqnarray*}
    \begin{pmatrix}
    0&1&2&2\\
    2&0&-3&1\\
    -4&0&9&2\\
    0&-1&1&-1\\
    \end{pmatrix}
    =
    \begin{pmatrix}
    0&1&0&0\\
    1&0&0&0\\
    0&0&1&0\\
    0&0&0&1\\
    \end{pmatrix}
    \begin{pmatrix}
    1&0&0&0\\
    0&1&0&0\\
    -2&0&1&0\\
    0&-1&1&1\\
    \end{pmatrix}
    \begin{pmatrix}
    2&0&0&0\\
    0&1&0&0\\
    0&0&3&0\\
    0&0&0&-3\\
    \end{pmatrix}
    \begin{pmatrix}
    1&0&\frac{-3}{2}&\frac{1}{2}\\
    0&1&2&2\\
    0&0&1&\frac43\\
    0&0&0&1\\
    \end{pmatrix}
    \end{eqnarray*}

    Contributor


    This page titled 2.3: Elementary Row Operations is shared under a not declared license and was authored, remixed, and/or curated by David Cherney, Tom Denton, & Andrew Waldron.