
3.2: LU Decomposition


    The process of Gaussian elimination also results in the factoring of the matrix \(\mathrm{A}\) as

    \[\mathrm{A}=\mathrm{LU}, \nonumber \]

    where \(\mathrm{L}\) is a lower triangular matrix and \(\mathrm{U}\) is an upper triangular matrix. Using the same matrix \(\mathrm{A}\) as in the last section, we show how this factorization is realized. We have

    \[\left(\begin{array}{rrr} -3 & 2 & -1 \\ 6 & -6 & 7 \\ 3 & -4 & 4 \end{array}\right) \rightarrow\left(\begin{array}{rrr} -3 & 2 & -1 \\ 0 & -2 & 5 \\ 0 & -2 & 3 \end{array}\right)=\mathrm{M}_{1} \mathrm{~A} \nonumber \]

    where

    \[\mathrm{M}_{1} \mathrm{~A}=\left(\begin{array}{rrr} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 1 & 0 & 1 \end{array}\right)\left(\begin{array}{rrr} -3 & 2 & -1 \\ 6 & -6 & 7 \\ 3 & -4 & 4 \end{array}\right)=\left(\begin{array}{rrr} -3 & 2 & -1 \\ 0 & -2 & 5 \\ 0 & -2 & 3 \end{array}\right) \nonumber \]

    Note that the matrix \(\mathrm{M}_{1}\) performs row elimination on the first column. Two times the first row is added to the second row, and one times the first row is added to the third row. The entries in the first column of \(\mathrm{M}_{1}\) come from \(2=-(6 /-3)\) and \(1=-(3 /-3)\), as required for row elimination. The number \(-3\) is called the pivot.
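    As a quick numerical check of this step (a sketch assuming numpy; the text itself presents no code, and the variable names simply follow the text), we can form \(\mathrm{M}_{1} \mathrm{A}\) directly:

```python
# Minimal check of the first elimination step (numpy assumed).
import numpy as np

A = np.array([[-3.0,  2.0, -1.0],
              [ 6.0, -6.0,  7.0],
              [ 3.0, -4.0,  4.0]])

# M1 adds 2 times row 1 to row 2 and 1 times row 1 to row 3;
# the multipliers sit below the diagonal in the first column.
M1 = np.array([[1.0, 0.0, 0.0],
               [2.0, 1.0, 0.0],
               [1.0, 0.0, 1.0]])

print(M1 @ A)
# [[-3.  2. -1.]
#  [ 0. -2.  5.]
#  [ 0. -2.  3.]]
```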

    The next step is

    \[\left(\begin{array}{rrr} -3 & 2 & -1 \\ 0 & -2 & 5 \\ 0 & -2 & 3 \end{array}\right) \rightarrow\left(\begin{array}{rrr} -3 & 2 & -1 \\ 0 & -2 & 5 \\ 0 & 0 & -2 \end{array}\right)=\mathrm{M}_{2}\left(\mathrm{M}_{1} \mathrm{~A}\right) \nonumber \]

    where

    \[\mathrm{M}_{2}\left(\mathrm{M}_{1} \mathrm{~A}\right)=\left(\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1 \end{array}\right)\left(\begin{array}{rrr} -3 & 2 & -1 \\ 0 & -2 & 5 \\ 0 & -2 & 3 \end{array}\right)=\left(\begin{array}{rrr} -3 & 2 & -1 \\ 0 & -2 & 5 \\ 0 & 0 & -2 \end{array}\right) \nonumber \]

    Here, \(\mathrm{M}_{2}\) multiplies the second row by \(-1=-(-2 /-2)\) and adds it to the third row. The pivot is \(-2\).
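    Continuing the same numerical sketch, \(\mathrm{M}_{2}\) completes the reduction to the upper triangular matrix \(\mathrm{U}\):

```python
# M2 adds -1 times row 2 to row 3, zeroing the last below-diagonal entry.
M2 = np.array([[1.0,  0.0, 0.0],
               [0.0,  1.0, 0.0],
               [0.0, -1.0, 1.0]])

U = M2 @ (M1 @ A)
print(U)
# [[-3.  2. -1.]
#  [ 0. -2.  5.]
#  [ 0.  0. -2.]]
```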

    We now have

    \[\mathrm{M}_{2} \mathrm{M}_{1} \mathrm{~A}=\mathrm{U} \nonumber \]

    or

    \[\mathrm{A}=\mathrm{M}_{1}^{-1} \mathrm{M}_{2}^{-1} \mathrm{U} \nonumber \]

    The inverse matrices are easy to find. The matrix \(\mathrm{M}_{1}\) multiplies the first row by 2 and adds it to the second row, and multiplies the first row by 1 and adds it to the third row. To invert these operations, we need to multiply the first row by \(-2\) and add it to the second row, and multiply the first row by \(-1\) and add it to the third row. To check, with

    \[\mathrm{M}_{1} \mathrm{M}_{1}^{-1}=\mathrm{I} \nonumber \]

    we have

    \[\left(\begin{array}{lll} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 1 & 0 & 1 \end{array}\right)\left(\begin{array}{rrr} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -1 & 0 & 1 \end{array}\right)=\left(\begin{array}{lll} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right) \nonumber \]
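    This sign-flip rule (negate the below-diagonal multipliers) is easy to confirm numerically, continuing the sketch above:

```python
# The inverse of M1 negates the below-diagonal multipliers.
M1inv = np.array([[ 1.0, 0.0, 0.0],
                  [-2.0, 1.0, 0.0],
                  [-1.0, 0.0, 1.0]])

print(M1 @ M1inv)   # prints the 3x3 identity matrix
```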

    Similarly,

    \[\mathrm{M}_{2}^{-1}=\left(\begin{array}{lll} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{array}\right) \nonumber \]

    Therefore,

    \[\mathrm{L}=\mathrm{M}_{1}^{-1} \mathrm{M}_{2}^{-1} \nonumber \]

    is given by

    \[\mathrm{L}=\left(\begin{array}{rll} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -1 & 0 & 1 \end{array}\right)\left(\begin{array}{lll} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{array}\right)=\left(\begin{array}{rrr} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -1 & 1 & 1 \end{array}\right) \nonumber \]

    which is lower triangular. The off-diagonal elements of \(\mathrm{M}_{1}^{-1}\) and \(\mathrm{M}_{2}^{-1}\) are simply combined to form L. Our LU decomposition is therefore

    \[\left(\begin{array}{rrr} -3 & 2 & -1 \\ 6 & -6 & 7 \\ 3 & -4 & 4 \end{array}\right)=\left(\begin{array}{rrr} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -1 & 1 & 1 \end{array}\right)\left(\begin{array}{rrr} -3 & 2 & -1 \\ 0 & -2 & 5 \\ 0 & 0 & -2 \end{array}\right) . \nonumber \]
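    Both the combination rule for the off-diagonal elements and the factorization itself can be verified by continuing the sketch:

```python
M2inv = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0]])

L = M1inv @ M2inv
print(L)                      # [[ 1.  0.  0.]
                              #  [-2.  1.  0.]
                              #  [-1.  1.  1.]]
print(np.allclose(A, L @ U))  # True: A = LU
```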

    Another nice feature of the LU decomposition is that it can be done by overwriting \(\mathrm{A}\), thereby saving memory if the matrix \(\mathrm{A}\) is very large.
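    Here is a minimal sketch of such an in-place factorization, assuming numpy and no pivoting, so every pivot must be nonzero (the function name lu_inplace is our own). The multipliers of \(\mathrm{L}\) are stored in the slots below the diagonal that elimination would otherwise fill with zeros:

```python
# In-place LU without pivoting: a sketch, not production code.
# On return, the upper triangle of A holds U and the strict lower
# triangle holds the multipliers of L (L's unit diagonal is not stored).
import numpy as np

def lu_inplace(A):
    n = A.shape[0]
    for k in range(n - 1):                     # pivot column k
        for i in range(k + 1, n):              # rows below the pivot
            mult = A[i, k] / A[k, k]           # the multiplier l_ik
            A[i, k] = mult                     # store it in the zeroed slot
            A[i, k+1:] -= mult * A[k, k+1:]    # eliminate the rest of row i
    return A

A = np.array([[-3.0,  2.0, -1.0],
              [ 6.0, -6.0,  7.0],
              [ 3.0, -4.0,  4.0]])
print(lu_inplace(A))
# [[-3.  2. -1.]      upper triangle: U
#  [-2. -2.  5.]      below the diagonal: the entries of L
#  [-1.  1. -2.]]
```

    Library routines such as scipy.linalg.lu add partial pivoting for numerical stability, so they also return a permutation, and their factors generally differ from the unpivoted ones above.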

    The LU decomposition is useful when one needs to solve \(\mathrm{A} \mathbf{x}=\mathbf{b}\) for \(\mathbf{x}\) with \(\mathrm{A}\) fixed and many different \(\mathbf{b}\)'s. First one determines \(\mathrm{L}\) and \(\mathrm{U}\) using Gaussian elimination. Then one writes

    \[(\mathrm{LU}) \mathbf{x}=\mathrm{L}(\mathrm{U} \mathbf{x})=\mathbf{b} . \nonumber \]

    We let

    \[\mathbf{y}=\mathrm{U} \mathbf{x} \nonumber \]

    and first solve

    \[\mathrm{L} \mathbf{y}=\mathbf{b} \nonumber \]

    for \(\mathbf{y}\) by forward substitution. We then solve

    \[\mathrm{U} \mathbf{x}=\mathbf{y} \nonumber \]

    for \(\mathbf{x}\) by backward substitution. When we count operations, we will see that, once \(\mathrm{L}\) and \(\mathrm{U}\) are in hand, solving \((\mathrm{LU}) \mathbf{x}=\mathbf{b}\) in this way is significantly faster than solving \(\mathrm{A} \mathbf{x}=\mathbf{b}\) directly by Gaussian elimination.
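    Here is a minimal sketch of the two triangular solves, again assuming numpy (the function names forward_sub and back_sub are our own):

```python
# Triangular solves for the LU method.
import numpy as np

def forward_sub(L, b):
    """Solve the lower triangular system L y = b by forward substitution."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        # subtract the already-computed terms, then divide by the diagonal
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    """Solve the upper triangular system U x = y by backward substitution."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x
```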

    We now illustrate the solution of \(\mathrm{LU} \mathbf{x}=\mathbf{b}\) using our previous example, where

    \[\mathrm{L}=\left(\begin{array}{rrr} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -1 & 1 & 1 \end{array}\right), \quad \mathrm{U}=\left(\begin{array}{rrr} -3 & 2 & -1 \\ 0 & -2 & 5 \\ 0 & 0 & -2 \end{array}\right), \quad \mathbf{b}=\left(\begin{array}{l} -1 \\ -7 \\ -6 \end{array}\right) \nonumber \]

    With \(\mathbf{y}=\mathrm{U} \mathbf{x}\), we first solve \(\mathrm{L} \mathbf{y}=\mathbf{b}\), that is,

    \[\left(\begin{array}{rrr} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -1 & 1 & 1 \end{array}\right)\left(\begin{array}{l} y_{1} \\ y_{2} \\ y_{3} \end{array}\right)=\left(\begin{array}{l} -1 \\ -7 \\ -6 \end{array}\right) \nonumber \]

    Using forward substitution,

    \[\begin{aligned} &y_{1}=-1 \\ &y_{2}=-7+2 y_{1}=-9 \\ &y_{3}=-6+y_{1}-y_{2}=2 \end{aligned} \nonumber \]

    We now solve \(\mathrm{U} \mathbf{x}=\mathbf{y}\), that is,

    \[\left(\begin{array}{rrr} -3 & 2 & -1 \\ 0 & -2 & 5 \\ 0 & 0 & -2 \end{array}\right)\left(\begin{array}{l} x_{1} \\ x_{2} \\ x_{3} \end{array}\right)=\left(\begin{array}{r} -1 \\ -9 \\ 2 \end{array}\right) \nonumber \]

    Using backward substitution,

    \[\begin{aligned} &-2 x_{3}=2 \rightarrow x_{3}=-1 \\ &-2 x_{2}=-9-5 x_{3}=-4 \rightarrow x_{2}=2 \\ &-3 x_{1}=-1-2 x_{2}+x_{3}=-6 \rightarrow x_{1}=2 \end{aligned} \nonumber \]

    and we have once again determined

    \[\left(\begin{array}{l} x_{1} \\ x_{2} \\ x_{3} \end{array}\right)=\left(\begin{array}{r} 2 \\ 2 \\ -1 \end{array}\right) \nonumber \]
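    Feeding the worked example through the forward_sub and back_sub sketches above reproduces this result:

```python
L = np.array([[ 1.0, 0.0, 0.0],
              [-2.0, 1.0, 0.0],
              [-1.0, 1.0, 1.0]])
U = np.array([[-3.0,  2.0, -1.0],
              [ 0.0, -2.0,  5.0],
              [ 0.0,  0.0, -2.0]])
b = np.array([-1.0, -7.0, -6.0])

y = forward_sub(L, b)   # [-1. -9.  2.]
x = back_sub(U, y)      # [ 2.  2. -1.]
print(x)
```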

