4.07: LU Decomposition for Solving Simultaneous Linear Equations
Lesson 1: Theory of LU Decomposition Method
Learning Objectives
After successful completion of this lesson, you should be able to:
1) solve a set of simultaneous linear equations using the LU decomposition method
2) decompose a nonsingular square matrix into the LU form.
I hear about LU decomposition being used as a method to solve a set of simultaneous linear equations. What is it?
We already studied two numerical methods of finding the solution to simultaneous linear equations – Naive Gauss elimination and Gaussian elimination with partial pivoting. Then, why do we need to learn yet another method? To appreciate why LU decomposition could be a better choice than the Gauss elimination techniques in some cases, let us first discuss what LU decomposition is about.
For a nonsingular square matrix \(\left\lbrack A \right\rbrack\) on which the forward elimination part of Naive Gauss elimination can be conducted successfully, one can write
\[\left\lbrack A \right\rbrack = \left\lbrack L \right\rbrack\left\lbrack U \right\rbrack \nonumber\]
where
\[\left\lbrack L \right\rbrack = \text{Lower triangular matrix} \nonumber\]
\[\left\lbrack U \right\rbrack = \text{Upper triangular matrix} \nonumber\]
Then if one is solving a set of equations
\[\left\lbrack A \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack C \right\rbrack, \nonumber\]
then
\[\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack C \right\rbrack\ \text{as }\left( \lbrack A\rbrack = \left\lbrack L \right\rbrack\left\lbrack U \right\rbrack \right) \nonumber\]
Multiplying both sides by \(\left\lbrack L \right\rbrack^{- 1}\),
\[\left\lbrack L \right\rbrack^{- 1}\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack L \right\rbrack^{- 1}\left\lbrack C \right\rbrack \nonumber\]
\[\left\lbrack I \right\rbrack \left\lbrack U \right\rbrack \left\lbrack X \right\rbrack = \left\lbrack L \right\rbrack^{- 1}\left\lbrack C \right\rbrack\ \text{as }\left( \left\lbrack L \right\rbrack^{- 1}\left\lbrack L \right\rbrack = \lbrack I\rbrack \right) \nonumber\]
\[\left\lbrack U \right\rbrack \left\lbrack X \right\rbrack = \left\lbrack L \right\rbrack^{- 1}\left\lbrack C \right\rbrack\ \text{as }\left( \left\lbrack I \right\rbrack\ \left\lbrack U \right\rbrack = \lbrack U\rbrack \right) \nonumber\]
Let
\[\left\lbrack L \right\rbrack^{- 1}\left\lbrack C \right\rbrack = \left\lbrack Z \right\rbrack \nonumber\]
then
\[\left\lbrack L \right\rbrack\left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack\;\;\;\;\;\;\;\;\;\;\;\;(\PageIndex{1.1}) \nonumber\]
and
\[\left\lbrack U \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack Z \right\rbrack\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{1.2}) \nonumber\]
So we can solve Equation \((\PageIndex{1.1})\) first for \(\lbrack Z\rbrack\) by using forward substitution and then use Equation \((\PageIndex{1.2})\) to calculate the solution vector \(\left\lbrack X \right\rbrack\) by back substitution.
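These two stages can be sketched in Python with NumPy (a minimal illustration; the function names are ours, not from any library). Forward substitution walks down the rows of \([L]\), and back substitution walks up the rows of \([U]\):

```python
import numpy as np

def forward_substitution(L, C):
    # Solve [L][Z] = [C], where [L] is lower triangular.
    n = len(C)
    Z = np.zeros(n)
    for i in range(n):
        # Subtract the already-known terms, then divide by the diagonal
        # (the division is by 1 when [L] has a unit diagonal).
        Z[i] = (C[i] - L[i, :i] @ Z[:i]) / L[i, i]
    return Z

def back_substitution(U, Z):
    # Solve [U][X] = [Z], where [U] is upper triangular.
    n = len(Z)
    X = np.zeros(n)
    for i in range(n - 1, -1, -1):
        X[i] = (Z[i] - U[i, i + 1:] @ X[i + 1:]) / U[i, i]
    return X
```

Once \([L]\) and \([U]\) are known, solving \([A][X]=[C]\) is then just `back_substitution(U, forward_substitution(L, C))`.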
So how do I decompose a nonsingular matrix [A], that is, how do I find [A] = [L][U]?
If the forward elimination part of the Naive Gauss elimination method can be applied to a nonsingular matrix \(\left\lbrack A \right\rbrack\), then \(\left\lbrack A \right\rbrack\) can be decomposed into LU form as
\[\begin{split} \lbrack A\rbrack &= \begin{bmatrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \cdots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \\ \end{bmatrix}\\ &= \begin{bmatrix} 1 & 0 & \ldots & 0 \\ {l}_{21} & 1 & \cdots & 0 \\ \vdots & \vdots & \cdots & \vdots \\ {l}_{n1} & {l}_{n2} & \cdots & 1 \\ \end{bmatrix}\ \ \begin{bmatrix} u_{11} & u_{12} & \ldots & u_{1n} \\ 0 & u_{22} & \cdots & u_{2n} \\ \vdots & \vdots & \cdots & \vdots \\ 0 & 0 & \cdots & u_{nn} \\ \end{bmatrix} \end{split} \nonumber\]
The elements of the \(\left\lbrack U \right\rbrack\) matrix are exactly the same as the coefficient matrix one obtains at the end of the forward elimination part in Naive Gauss elimination.
The lower triangular matrix \(\left\lbrack L \right\rbrack\) has \(1\)s on its diagonal. Its non-zero off-diagonal entries are the multipliers that were used to make the corresponding entries of the coefficient matrix zero during the forward elimination that produced \(\left\lbrack U \right\rbrack\).
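This observation gives a direct way to compute the decomposition: run the forward elimination and record each multiplier as it is used. A minimal sketch (assuming no zero pivot is ever encountered; `lu_decompose` is our own name, not a library routine):

```python
import numpy as np

def lu_decompose(A):
    # Forward elimination of Naive Gauss elimination, saving each
    # multiplier l_ik into [L] as entry (i, k) of [U] is zeroed out.
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    for k in range(n - 1):            # elimination step k
        for i in range(k + 1, n):     # rows below the pivot row
            L[i, k] = U[i, k] / U[k, k]       # the multiplier
            U[i, k:] -= L[i, k] * U[k, k:]    # zero out entry (i, k)
    return L, U
```

Applied to the \(3 \times 3\) matrix worked by hand in the next lesson, this reproduces the same \([L]\) and \([U]\).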
Audiovisual Lecture
Title: LU Decomposition Method - Theory
Summary: This video shows the basis of the LU decomposition method of solving simultaneous linear equations.
Lesson 2: Application of LU Decomposition Method
Learning Objectives
After successful completion of this lesson, you should be able to:
1) decompose a nonsingular matrix into the LU form.
2) solve a set of simultaneous linear equations using the LU decomposition method
Applications
From the previous lesson, you know how the LU decomposition method is used to solve simultaneous linear equations: the coefficient matrix is decomposed into \(\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\) form, and the solution vector is then found by forward substitution followed by back substitution.
In this lesson, we apply what we learned in the previous lesson.
Example \(\PageIndex{2.1}\)
Find the LU decomposition of the matrix
\[\left\lbrack A \right\rbrack = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix} \nonumber\]
Solution
\[\begin{split} \left\lbrack A \right\rbrack &= \left\lbrack L \right\rbrack \left\lbrack U \right\rbrack\\ &= \begin{bmatrix} 1 & 0 & 0 \\ {l}_{21} & 1 & 0 \\ {l}_{31} & {l}_{32} & 1 \\ \end{bmatrix}\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \\ \end{bmatrix} \end{split} \nonumber\]
The \(\left\lbrack U \right\rbrack\) matrix is the same as found at the end of the forward elimination part of the Naive Gauss elimination method, that is
\[\left\lbrack U \right\rbrack = \begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix} \nonumber\]
To find \({l}_{21}\) and \({l}_{31}\), find the multipliers that were used to make the \(a_{21}\) and \(a_{31}\) elements zero in the first step of the forward elimination part of the Naive Gauss elimination method. They were
\[\begin{split} {l}_{21} &= \frac{64}{25}\\ &= 2.56 \end{split} \nonumber\]
\[\begin{split} {l}_{31} &= \frac{144}{25}\\ &= 5.76 \end{split} \nonumber\]
To find \({l}_{32}\), what multiplier was used to make \(a_{32}\) element zero? Remember that \(a_{32}\) element was made zero in the second step of the forward elimination part. The \(\left\lbrack A \right\rbrack\) matrix at the beginning of the second step of the forward elimination part was
\[\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & - 16.8 & - 4.76 \\ \end{bmatrix} \nonumber\]
So
\[\begin{split} {l}_{32} &= \frac{- 16.8}{- 4.8}\\ &= 3.5 \end{split} \nonumber\]
Hence
\[\left\lbrack L \right\rbrack = \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix} \nonumber\]
Confirm \(\left\lbrack L \right\rbrack \left\lbrack U \right\rbrack = \left\lbrack A \right\rbrack\).
\[\begin{split} \left\lbrack L \right\rbrack\left\lbrack U \right\rbrack &= \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix}\\ &= \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix} \end{split} \nonumber\]
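The same check can be run numerically (a quick NumPy verification of the result above):

```python
import numpy as np

L = np.array([[1, 0, 0], [2.56, 1, 0], [5.76, 3.5, 1]])
U = np.array([[25, 5, 1], [0, -4.8, -1.56], [0, 0, 0.7]])
A = np.array([[25, 5, 1], [64, 8, 1], [144, 12, 1]])

# [L][U] should reproduce [A] (up to floating-point roundoff)
assert np.allclose(L @ U, A)
```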
Example \(\PageIndex{2.2}\)
Use the LU decomposition method to solve the following simultaneous linear equations.
\[\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} a_{1} \\ a_{2} \\ a_{3} \\ \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \\ \end{bmatrix} \nonumber\]
Solution
Recall that
\[\left\lbrack A \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack C \right\rbrack \nonumber\]
and if
\[\left\lbrack A \right\rbrack = \left\lbrack L \right\rbrack\left\lbrack U \right\rbrack \nonumber\]
then first solving
\[\left\lbrack L \right\rbrack\left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack \nonumber\]
and then
\[\left\lbrack U \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack Z \right\rbrack \nonumber\]
gives the solution vector \(\left\lbrack X \right\rbrack\).
Now in Example \(\PageIndex{2.1}\), we showed
\[\begin{split} \left\lbrack A \right\rbrack &= \left\lbrack L \right\rbrack \left\lbrack U \right\rbrack\\ &=\begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix} \end{split} \nonumber\]
First, solve
\[\left\lbrack L \right\rbrack\ \left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack \nonumber\]
\[\begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}\begin{bmatrix} z_{1} \\ z_{2} \\ z_{3} \\ \end{bmatrix} = \begin{bmatrix} 106.8 \\ 177.2 \\ 279.2 \\ \end{bmatrix} \nonumber\]
to give
\[\begin{split} &z_{1} = 106.8\\ &2.56z_{1} + z_{2} = 177.2\\ &5.76z_{1} + 3.5z_{2} + z_{3} = 279.2\end{split} \nonumber\]
Forward substitution starting from the first equation gives
\[z_{1} = 106.8 \nonumber\]
\[\begin{split} z_{2} &= 177.2 - 2.56z_{1}\\ &= 177.2 - 2.56 \times 106.8\\ &= - 96.208 \end{split} \nonumber\]
\[\begin{split} z_{3} &= 279.2 - 5.76z_{1} - 3.5z_{2}\\ &= 279.2 - 5.76 \times 106.8 - 3.5 \times \left( - 96.208 \right)\\ &= 0.76 \end{split} \nonumber\]
Hence
\[\begin{split} \left\lbrack Z \right\rbrack &= \begin{bmatrix} z_{1} \\ z_{2} \\ z_{3} \\ \end{bmatrix}\\ &= \begin{bmatrix} 106.8 \\ - 96.208 \\ 0.76 \\ \end{bmatrix} \end{split} \nonumber\]
This vector is the same as the right-hand-side vector obtained at the end of the forward elimination part of the Naive Gauss elimination method. Is this a coincidence?
Now solve
\[\left\lbrack U \right\rbrack \left\lbrack X \right\rbrack = \left\lbrack Z \right\rbrack \nonumber\]
\[\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix}\begin{bmatrix} a_{1} \\ a_{2} \\ a_{3} \\ \end{bmatrix} = \begin{bmatrix} 106.8 \\ - 96.208 \\ 0.76 \\ \end{bmatrix} \nonumber\]
\[\begin{split} 25a_{1} + 5a_{2} + a_{3} &= 106.8\\ - 4.8a_{2} - 1.56a_{3} &= - 96.208\\ 0.7a_{3} &= 0.76 \end{split} \nonumber\]
From the third equation
\[0.7a_{3} = 0.76 \nonumber\]
\[\begin{split} a_{3} &= \frac{0.76}{0.7}\\ &= 1.0857\end{split} \nonumber\]
Substituting the value of \(a_{3}\) in the second equation,
\[- 4.8a_{2} - 1.56a_{3} = - 96.208 \nonumber\]
\[\begin{split} a_{2} &= \frac{- 96.208 + 1.56a_{3}}{- 4.8}\\ &= \frac{- 96.208 + 1.56 \times 1.0857}{- 4.8}\\ &= 19.691 \end{split} \nonumber\]
Substituting the value of \(a_{2}\) and \(a_{3}\) in the first equation,
\[25a_{1} + 5a_{2} + a_{3} = 106.8 \nonumber\]
\[\begin{split} a_{1} &= \frac{106.8 - 5a_{2} - a_{3}}{25}\\ &= \frac{106.8 - 5 \times 19.691 - 1.0857}{25}\\ &= 0.29048 \end{split} \nonumber\]
Hence the solution vector is
\[\begin{bmatrix} a_{1} \\ a_{2} \\ a_{3} \\ \end{bmatrix} = \begin{bmatrix} 0.29048 \\ 19.691 \\ 1.0857 \\ \end{bmatrix} \nonumber\]
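The two triangular solves above can be reproduced end to end with NumPy (here `np.linalg.solve` stands in for the hand forward and back substitution; it is a general solver, not a triangular-specific one):

```python
import numpy as np

L = np.array([[1, 0, 0], [2.56, 1, 0], [5.76, 3.5, 1]])
U = np.array([[25, 5, 1], [0, -4.8, -1.56], [0, 0, 0.7]])
C = np.array([106.8, 177.2, 279.2])

Z = np.linalg.solve(L, C)  # stage 1: [L][Z] = [C]
X = np.linalg.solve(U, Z)  # stage 2: [U][X] = [Z]

# matches the hand computation: Z ~ [106.8, -96.208, 0.76]
# and X ~ [0.29048, 19.691, 1.0857]
assert np.allclose(Z, [106.8, -96.208, 0.76])
assert np.allclose(X, [0.29048, 19.691, 1.0857], atol=1e-3)
```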
Audiovisual Lecture
Title: LU Decomposition Example
Summary: Learn the LU decomposition method of solving simultaneous linear equations via an example.
Lesson 3: Finding Inverse of a Matrix Using LU Decomposition Method
Learning Objectives
After successful completion of this lesson, you should be able to:
1) find the inverse of a matrix using the LU decomposition method.
How do I find the inverse of a square matrix using LU decomposition?
A matrix \(\left\lbrack B \right\rbrack\) is the inverse of \(\left\lbrack A \right\rbrack\) if
\[\left\lbrack A \right\rbrack\left\lbrack B \right\rbrack = \left\lbrack I \right\rbrack = \left\lbrack B \right\rbrack\left\lbrack A \right\rbrack \nonumber\]
How can we use LU decomposition to find the inverse of the matrix? Assume the first column of \(\left\lbrack B \right\rbrack\) (the inverse of \(\left\lbrack A \right\rbrack\)) is
\[\lbrack b_{11}\ \ b_{21}\ \cdots\ \ b_{n1}\rbrack^{T} \nonumber\]
Then from the above definition of an inverse and the definition of matrix multiplication
\[\left\lbrack A \right\rbrack\begin{bmatrix} b_{11} \\ b_{21} \\ \vdots \\ b_{n1} \\ \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \\ \end{bmatrix} \nonumber\]
Similarly, the second column of \(\left\lbrack B \right\rbrack\) is given by
\[\left\lbrack A \right\rbrack\begin{bmatrix} b_{12} \\ b_{22} \\ \vdots \\ b_{n2} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \\ \end{bmatrix} \nonumber\]
In this manner, all \(n\) columns of \(\left\lbrack B \right\rbrack\) can be found by solving \(n\) different sets of equations, with the right-hand-side vectors being the \(n\) columns of the identity matrix.
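A sketch of this column-by-column procedure using SciPy's LU routines (`lu_factor`/`lu_solve`; note that these apply partial pivoting rather than the Naive elimination used in this chapter, but the idea of decomposing once and reusing the factorization is the same):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_via_lu(A):
    # Decompose [A] once, then solve [A][b_j] = [e_j] for each column
    # e_j of the identity matrix; the solutions b_j are the columns
    # of the inverse [B].
    n = A.shape[0]
    lu_piv = lu_factor(A)      # the expensive step, done only once
    B = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        # each column costs one forward and one back substitution
        B[:, j] = lu_solve(lu_piv, e)
    return B
```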
In the previous lessons, we learned how to solve simultaneous linear equations using the LU decomposition method.
Example \(\PageIndex{3.1}\)
Use LU decomposition to find the inverse of
\[\left\lbrack A \right\rbrack = \begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix} \nonumber\]
Solution
From the previous lesson, we know that
\[\begin{split} \left\lbrack A \right\rbrack &= \left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\\ &= \begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix} \end{split} \nonumber\]
We can find the first column of \(\lbrack B\rbrack = \left\lbrack A \right\rbrack^{- 1}\) by solving
\[\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \\ \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix} \nonumber\]
First, solve
\[\left\lbrack L \right\rbrack\left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack, \nonumber\]
that is,
\[\begin{bmatrix} 1 & 0 & 0 \\ 2.56 & 1 & 0 \\ 5.76 & 3.5 & 1 \\ \end{bmatrix}\begin{bmatrix} z_{1} \\ z_{2} \\ z_{3} \\ \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \end{bmatrix} \nonumber\]
to give
\[\begin{split} &z_{1} = 1\\ &2.56z_{1} + z_{2} = 0\\ &5.76z_{1} + 3.5z_{2} + z_{3} = 0 \end{split} \nonumber\]
Forward substitution starting from the first equation gives
\[z_{1} = 1 \nonumber\]
\[\begin{split} z_{2} &= 0 - 2.56z_{1}\\ &= 0 - 2.56\left( 1 \right)\\ &= - 2.56 \end{split} \nonumber\]
\[\begin{split} z_{3} &= 0 - 5.76z_{1} - 3.5z_{2}\\ &= 0 - 5.76\left( 1 \right) - 3.5\left( - 2.56 \right)\\ &= 3.2 \end{split} \nonumber\]
Hence
\[\begin{split} \left\lbrack Z \right\rbrack &= \begin{bmatrix} z_{1} \\ z_{2} \\ z_{3} \\ \end{bmatrix}\\ &= \begin{bmatrix} 1 \\ - 2.56 \\ 3.2 \\ \end{bmatrix} \end{split} \nonumber\]
Now solve
\[\left\lbrack U \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack Z \right\rbrack \nonumber\]
that is
\[\begin{bmatrix} 25 & 5 & 1 \\ 0 & - 4.8 & - 1.56 \\ 0 & 0 & 0.7 \\ \end{bmatrix}\begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \\ \end{bmatrix} = \begin{bmatrix} 1 \\ - 2.56 \\ 3.2 \\ \end{bmatrix} \nonumber\]
\[\begin{split} 25b_{11} + 5b_{21} + b_{31} &= 1\\ - 4.8b_{21} - 1.56b_{31} &= - 2.56\\ 0.7b_{31} &= 3.2 \end{split} \nonumber\]
Backward substitution starting from the third equation gives
\[\begin{split} b_{31} &= \frac{3.2}{0.7}\\ &= 4.571 \end{split} \nonumber\]
\[\begin{split} b_{21} &= \frac{- 2.56 + 1.56b_{31}}{- 4.8}\\ &= \frac{- 2.56 + 1.56(4.571)}{- 4.8}\\ &= - 0.9524 \end{split} \nonumber\]
\[\begin{split} b_{11} &= \frac{1 - 5b_{21} - b_{31}}{25}\\ &= \frac{1 - 5( - 0.9524) - 4.571}{25}\\ &= 0.04762 \end{split} \nonumber\]
Hence the first column of the inverse of \(\left\lbrack A \right\rbrack\) is
\[\begin{bmatrix} b_{11} \\ b_{21} \\ b_{31} \\ \end{bmatrix} = \begin{bmatrix} 0.04762 \\ - 0.9524 \\ 4.571 \\ \end{bmatrix} \nonumber\]
Similarly, solving
\[\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} b_{12} \\ b_{22} \\ b_{32} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ \end{bmatrix}\ \text{gives }\begin{bmatrix} b_{12} \\ b_{22} \\ b_{32} \\ \end{bmatrix} = \begin{bmatrix} - 0.08333 \\ 1.417 \\ - 5.000 \\ \end{bmatrix} \nonumber\]
and solving
\[\begin{bmatrix} 25 & 5 & 1 \\ 64 & 8 & 1 \\ 144 & 12 & 1 \\ \end{bmatrix}\begin{bmatrix} b_{13} \\ b_{23} \\ b_{33} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \\ \end{bmatrix}\ \text{gives }\begin{bmatrix} b_{13} \\ b_{23} \\ b_{33} \\ \end{bmatrix} = \begin{bmatrix} 0.03571 \\ - 0.4643 \\ 1.429 \\ \end{bmatrix} \nonumber\]
Hence
\[\left\lbrack A \right\rbrack^{- 1} = \begin{bmatrix} 0.04762 & - 0.08333 & 0.03571 \\ - 0.9524 & 1.417 & - 0.4643 \\ 4.571 & - 5.000 & 1.429 \\ \end{bmatrix} \nonumber\]
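As a quick numerical cross-check of the hand computation (using NumPy's general-purpose inverse):

```python
import numpy as np

A = np.array([[25, 5, 1], [64, 8, 1], [144, 12, 1]])
hand = np.array([[0.04762, -0.08333, 0.03571],
                 [-0.9524, 1.417, -0.4643],
                 [4.571, -5.000, 1.429]])

# agrees with the four-significant-digit values found above
assert np.allclose(np.linalg.inv(A), hand, atol=5e-4)
```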
Can you confirm the following for the above example?
\[\left\lbrack A \right\rbrack\ \left\lbrack A \right\rbrack^{- 1} = \left\lbrack I \right\rbrack = \left\lbrack A \right\rbrack^{- 1}\left\lbrack A \right\rbrack \nonumber\]
Did you also note that the decomposition of \([A]\) to \([L][U]\) needed to be done only once?
Audiovisual Lecture
Title: LU Decomposition Method: Finding Inverse of a Matrix - Example
Summary: This video shows an example of how the LU decomposition method can be used to find the inverse of a matrix.
Lesson 4: Computational Efficiency
Learning Objectives
After successful completion of this lesson, you should be able to:
1) justify why using the LU decomposition method is more efficient than Gaussian elimination in some cases.
LU decomposition looks more complicated than Gaussian elimination. Do we use LU decomposition because it is computationally more efficient than Gaussian elimination to solve a set of n equations given by \([A][X]=[C]\)?
Note: The formulas for the computational time in this lesson assume it takes 4 clock cycles each for an add, subtract, or multiply operation, and 16 clock cycles for a divide operation as is the case for a typical AMD®-K7 chip. <http://www.isi.edu/~draper/papers/mwscas07_kwon.pdf>. These clock cycle times taken by an arithmetic operation and the number of such operations allow us to find the computational time for each of the processes.
For a square matrix \(\lbrack A\rbrack\) of \(n \times n\) size, the computational time \({CT}|_{DE}\) to decompose the \(\lbrack A\rbrack\) matrix to \(\lbrack L\rbrack\lbrack U\rbrack\) form is given by
\[{CT}|_{DE} = T\left( \frac{8n^{3}}{3} + 4n^{2} - \frac{20n}{3} \right),\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{4.1}) \nonumber\]
where
\[T = \text{clock cycle time} \nonumber\]
The computational time \({CT}|_{FS}\) for forward substitution: \(\left\lbrack L \right\rbrack\left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack\) is given by
\[{CT}|_{FS} = T\left( 4n^{2} - 4n \right)\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{4.2}) \nonumber\]
The computational time \({CT}|_{BS}\) for back substitution: \(\left\lbrack U \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack Z \right\rbrack\) is given by
\[{CT}|_{BS} = T\left( 4n^{2} + 12n \right)\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{4.3}) \nonumber\]
So, the total computational time to solve a set of simultaneous linear equations by LU decomposition is
\[\begin{split} {CT}|_{LU} &= {CT}|_{DE} + {CT}|_{FS} + {CT}|_{BS}\\ &= T\left( \frac{8n^{3}}{3} + 4n^{2} - \frac{20n}{3} \right) + T\left( 4n^{2} - 4n \right) + T\left( 4n^{2} + 12n \right)\\ &= T\left( \frac{8n^{3}}{3} + 12n^{2} + \frac{4n}{3} \right)\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{4.4}) \end{split} \nonumber\]
Now let us look at the computational time taken by Naive Gaussian elimination. The computational time \({CT}|_{FE}\) for the forward elimination part is
\[{CT}|_{FE} = T\left( \frac{8n^{3}}{3} + 8n^{2} - \frac{32n}{3} \right),\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{4.5}) \nonumber\]
and the computational time \({CT}|_{BS}\) for the back substitution part is
\[{CT}|_{BS} = T\left( 4n^{2} + 12n \right)\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{4.6}) \nonumber\]
So, the total computational time \({CT}|_{GE}\) to solve a set of simultaneous linear equations by Gaussian Elimination is
\[\begin{split} {CT}|_{GE} &= {CT}|_{FE} + {CT}|_{BS}\\ &= T\left( \frac{8n^{3}}{3} + 8n^{2} - \frac{32n}{3} \right) + T\left( 4n^{2} + 12n \right)\\ &= T\left( \frac{8n^{3}}{3} + 12n^{2} + \frac{4n}{3} \right)\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{4.7}) \end{split} \nonumber\]
The total computational times for Naive Gaussian elimination and LU decomposition are identical. The advantage of LU decomposition shows up when several sets of equations share the same coefficient matrix \([A]\) but have different right-hand-side vectors: the expensive decomposition is done only once, and each additional right-hand-side vector costs only one forward substitution and one back substitution. Finding the inverse of a matrix, as in Lesson 3, is exactly such a case.
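That the totals in Equations \((\PageIndex{4.4})\) and \((\PageIndex{4.7})\) coincide for every \(n\) can be confirmed with exact rational arithmetic:

```python
from fractions import Fraction

def ct_lu(n):
    # Eq. (4.4), in units of the clock cycle time T:
    # decomposition + forward substitution + back substitution
    return (Fraction(8, 3) * n**3 + 4 * n**2 - Fraction(20, 3) * n) \
        + (4 * n**2 - 4 * n) + (4 * n**2 + 12 * n)

def ct_ge(n):
    # Eq. (4.7), in units of T: forward elimination + back substitution
    return (Fraction(8, 3) * n**3 + 8 * n**2 - Fraction(32, 3) * n) \
        + (4 * n**2 + 12 * n)

# identical for every matrix size tried
assert all(ct_lu(n) == ct_ge(n) for n in range(1, 200))
```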
Appendix
How much computational time does it take to conduct back substitution in the LU Decomposition method?
Solution
At the beginning of back substitution, the set of equations is of the form
\[\left\lbrack U \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack B \right\rbrack \nonumber\]
where
\[\left\lbrack U \right\rbrack = \text{upper triangular matrix},\ n \times n, \nonumber\] \[\left\lbrack X \right\rbrack = \text{unknown vector},\ n \times 1,\ \text{and} \nonumber\]
\[\left\lbrack B \right\rbrack = \text{right-hand side vector},\ n \times 1. \nonumber\]
The algorithm for finding the unknown vector is
\[x_{n} = \frac{b_{n}}{u_{nn}} \;\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{A.1}) \nonumber\]
\[x_{i} = \frac{b_{i} - \displaystyle \sum_{j = i + 1}^{n} u_{ij}x_{j}}{u_{ii}},\ i = n-1,\ n-2,\ ...,\ 2,\ 1\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; (\PageIndex{A.2}) \nonumber\]
Hint 1
For an arithmetic progression, the sum of the series
\[S = a_{1} + a_{2} + ... + a_{m - 1} + a_{m} \nonumber\]
is given by
\[S = \frac{m}{2}(a_{1} + a_{m}) \nonumber\]
or
\[S = \frac{m}{2}(2a_{1} + (m-1)d), \nonumber\]
where
\[d = a_{i + 1} - a_{i},\ i = 1,2,\ldots,m - 1 \nonumber\]
Hint 2
When you add \(n\) terms together, there are \(n-1\) additions.
Now let us see how many arithmetic operations are needed to complete the algorithm.
Number of Divisions
See equation \((\PageIndex{A.1})\). We have one operation of division for the calculation of \(x_{n}\).
See equation \((\PageIndex{A.2})\). For each \(x_{i}\), \(i = n - 1,n - 2,...,2,1\), the result is divided once by \(u_{ii}\).
So, the total number of divisions is \(1 + (n - 1) = n\).
Number of Subtractions
See equation \((\PageIndex{A.2})\). For each \(x_{i}\), \(i = n - 1,n - 2,...,2,1\), we have 1 subtraction in the numerator.
So, the total number of subtractions is \(n-1\).
Number of Multiplications
See equation \((\PageIndex{A.2})\). For a particular \(i\), there are \(\left( n - i \right)\) multiplications — just count the \(u_{ij}x_{j}\) terms within the summation in the numerator. Since \(i\) takes the values of \(i = n - 1,n - 2,...,2,1\), you can see that
when \(i=n-1\), there is \((n-(n-1))=1\) multiplication,
when \(i=n-2\), there are \((n-(n-2))=2\) multiplications,
\(\vdots\)
when \(i=1\), there are \((n-1)=n-1\) multiplications.
So, the total number of multiplications is
\[\begin{split} &1 + 2 + ... + (n - 1)\\ &= \frac{n - 1}{2}(1 + (n - 1))\\ &= \frac{n(n - 1)}{2} \end{split} \nonumber\]
Number of Additions
For a particular \(i\), there are \(\left( n - i - 1 \right)\) additions, as summing the \((n - i)\) \(u_{ij}x_{j}\) terms within the summation in the numerator of equation \((\PageIndex{A.2})\) requires one fewer addition than the number of terms (see Hint 2).
Since \(i\) takes the values of \(i = n - 1,n - 2,...,2,1\), you can see that
when \(i=n-1\), there are \((n-(n-1)-1)=0\) additions,
when \(i=n-2\), there is \((n-(n-2)-1)=1\) addition,
\(\vdots\)
when \(i=1\), there are \((n-(1)-1)=n-2\) additions.
So, the total number of additions is
\[\begin{split} &0 + 1 + 2 + ... + (n - 2)\\ &= \frac{n - 1}{2}(0 + (n - 2))\\ &= \frac{(n - 1)(n - 2)}{2} \end{split} \nonumber\]
Computational Time
Assuming it takes 4 clock cycles for each addition, subtraction, and multiplication, and 16 clock cycles for each division, then if \(C\) is the clock cycle time,
computational time spent on divisions \(= n \times (16C) = 16Cn\)
computational time spent on subtractions \(= \left( n - 1 \right) \times (4C) = 4C(n - 1)\)
computational time spent on multiplications \(\displaystyle = \frac{n(n - 1)}{2} \times (4C) = 2Cn(n - 1)\)
computational time spent on additions \(\displaystyle = \frac{(n - 1)(n - 2)}{2} \times (4C) = 2C(n - 1)(n - 2)\)
The total computational time spent on back substitution then is \[\begin{split} &=\text{Total time for divisions} + \text{Total time for subtractions} \\ &+ \text{Total time for multiplications} + \text{Total time for additions}\\ &= (16Cn) + (4Cn - 4C) + (2Cn^{2} - 2Cn) + (2Cn^{2} - 6Cn + 4C)\\ &= 4Cn^{2} + 12Cn\\ &= C(4n^{2} + 12n) \end{split} \nonumber\]
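The operation counts above can also be tallied programmatically, by walking the loop structure of Equations \((\PageIndex{A.1})\) and \((\PageIndex{A.2})\) without doing any actual arithmetic:

```python
def back_substitution_ops(n):
    # Tally the arithmetic operations of back substitution
    # for an n x n upper triangular system.
    div = 1                        # Eq. (A.1): x_n = b_n / u_nn
    sub = mul = add = 0
    for i in range(n - 1, 0, -1):  # Eq. (A.2): i = n-1, ..., 1
        terms = n - i              # u_ij * x_j products in the summation
        mul += terms
        add += terms - 1           # summing 'terms' products: terms-1 additions
        sub += 1                   # b_i minus the summation
        div += 1                   # divide by u_ii
    return div, sub, mul, add

div, sub, mul, add = back_substitution_ops(10)
# matches the closed forms n, n-1, n(n-1)/2, (n-1)(n-2)/2
assert (div, sub, mul, add) == (10, 9, 45, 36)
```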
Questions Related to the Appendix
1. How much computational time does it take to conduct forward substitution?
2. How much computational time does it take to conduct forward elimination?
3. How much computational time does it take to conduct back substitutions?
4. How much computational time does it take to conduct Naive Gaussian elimination?
5. How much computational time does it take to conduct LU decomposition?
6. How much computational time does it take to find the inverse of a square matrix using LU decomposition?
7. How much computational time does it take to find the inverse of a square matrix using Gaussian elimination?
Multiple Choice Test
(1). The \(\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\) decomposition method is computationally more efficient than Naive Gauss elimination for solving
(A). a single set of simultaneous linear equations.
(B). multiple sets of simultaneous linear equations with different coefficient matrices and the same right-hand side vectors.
(C). multiple sets of simultaneous linear equations with the same coefficient matrix and different right-hand side vectors.
(D). less than ten simultaneous linear equations.
(2). The lower triangular matrix \(\left\lbrack L \right\rbrack\) in the \(\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\) decomposition of the matrix given below
\[\begin{bmatrix} 25 & 5 & 4 \\ 10 & 8 & 16 \\ 8 & 12 & 22 \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ {l}_{21} & 1 & 0 \\ {l}_{31} & {l}_{32} & 1 \\ \end{bmatrix}\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \\ \end{bmatrix} \nonumber\] is
(A) \(\begin{bmatrix} 1 & 0 & 0 \\ 0.40000 & 1 & 0 \\ 0.32000 & 1.7333 & 1 \\ \end{bmatrix}\)
(B) \(\begin{bmatrix} 25 & 5 & 4 \\ 0 & 6 & 14.400 \\ 0 & 0 & - 4.2400 \\ \end{bmatrix}\)
(C) \(\begin{bmatrix} 1 & 0 & 0 \\ 10 & 1 & 0 \\ 8 & 12 & 0 \\ \end{bmatrix}\)
(D) \(\begin{bmatrix} 1 & 0 & 0 \\ 0.40000 & 1 & 0 \\ 0.32000 & 1.5000 & 1 \\ \end{bmatrix}\)
(3). The upper triangular matrix \(\left\lbrack U \right\rbrack\) in the \(\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\) decomposition of the matrix given below
\[\begin{bmatrix} 25 & 5 & 4 \\ 0 & 8 & 16 \\ 0 & 12 & 22 \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ {l}_{21} & 1 & 0 \\ {l}_{31} & {l}_{32} & 1 \\ \end{bmatrix}\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \\ \end{bmatrix} \nonumber\]
is
(A) \(\begin{bmatrix} 1 & 0 & 0 \\ 0.40000 & 1 & 0 \\ 0.32000 & 1.7333 & 1 \\ \end{bmatrix}\)
(B) \(\begin{bmatrix} 25 & 5 & 4 \\ 0 & 6 & 14.400 \\ 0 & 0 & - 4.2400 \\ \end{bmatrix}\)
(C) \(\begin{bmatrix} 25 & 5 & 4 \\ 0 & 8 & 16 \\ 0 & 0 & - 2 \\ \end{bmatrix}\)
(D) \(\begin{bmatrix} 1 & 0.2000 & 0.16000 \\ 0 & 1 & 2.4000 \\ 0 & 0 & - 4.240 \\ \end{bmatrix}\)
(4). For a given \(2000 \times 2000\) matrix \(\left\lbrack A \right\rbrack\), assume that it takes about \(15\) seconds to find the inverse of \(\left\lbrack A \right\rbrack\) by the use of the \(\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\) decomposition method, that is, finding the \(\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\) once, and then doing forward substitution and back substitution \(2000\) times using the \(2000\) columns of the identity matrix as the right-hand side vector. The approximate time in seconds that it will take to find the inverse if found by repeated use of the Naive Gauss elimination method, that is, doing forward elimination and back substitution \(2000\) times by using the \(2000\) columns of the identity matrix as the right-hand side vector, is most nearly
(A) \(300\)
(B) \(1500\)
(C) \(7500\)
(D) \(30000\)
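A back-of-the-envelope check using standard leading-order flop counts (a sketch of the reasoning, not the textbook's exact clock-cycle model): one Naive Gauss solve costs about \(2n^3/3\) flops, while with LU decomposition the factorization is paid for once (\(\approx 2n^3/3\)) and each forward-plus-back substitution costs only \(\approx 2n^2\).

```python
# Compare the work of finding an n x n inverse by repeated Naive Gauss
# elimination versus one LU factorization plus n substitutions.
# Leading-order flop counts only (an assumption of this sketch).

def gauss_repeated_flops(n):
    # ~2n^3/3 flops per solve, repeated for all n identity columns
    return 2 * n**4 / 3

def lu_inverse_flops(n):
    # factor once (~2n^3/3), then forward + back substitution
    # (~2n^2 flops together) for each of the n identity columns
    return 2 * n**3 / 3 + n * 2 * n**2

n = 2000
time_lu = 15                                   # seconds, given in the question
ratio = gauss_repeated_flops(n) / lu_inverse_flops(n)   # about n/4 = 500
print(ratio, ratio * time_lu)                  # about 500 and 7500 seconds
```

The ratio simplifies to \(n/4\), so for \(n = 2000\) the repeated-elimination approach takes roughly \(500 \times 15 = 7500\) seconds.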
(5). The algorithm for solving a set of \(n\) equations \(\left\lbrack A \right\rbrack\left\lbrack X \right\rbrack = \left\lbrack C \right\rbrack\), where \(\left\lbrack A \right\rbrack = \left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\) involves solving \(\left\lbrack L \right\rbrack\left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack\) by forward substitution. The algorithm to solve \(\left\lbrack L \right\rbrack\left\lbrack Z \right\rbrack = \left\lbrack C \right\rbrack\) is given by
(A) \(z_{1} = c_{1}/l_{11}\)
for i from 2 to n do
  sum = 0
  for j from 1 to i do
    sum = sum + l_ij * z_j
  end do
  z_i = (c_i - sum)/l_i
end do
(B) \(z_{1} = c_{1}/l_{11}\)
for i from 2 to n do
  sum = 0
  for j from 1 to (i - 1) do
    sum = sum + l_ij * z_j
  end do
  z_i = (c_i - sum)/l_ij
end do
(C) \(z_{1} = c_{1}/l_{11}\)
for i from 2 to n do
  for j from 1 to (i - 1) do
    sum = sum + l_ij * z_j
  end do
  z_i = (c_i - sum)/l_ii
end do
(D) for i from 2 to n do
  sum = 0
  for j from 1 to (i - 1) do
    sum = sum + l_ij * z_j
  end do
  z_i = (c_i - sum)/l_ii
end do
(6). To solve boundary value problems, a numerical method based on the finite difference method is used. This results in simultaneous linear equations with tridiagonal coefficient matrices. These are solved using a specialized \(\left\lbrack L \right\rbrack\left\lbrack U \right\rbrack\) decomposition method.
Choose the set of equations that approximately solves the boundary value problem
\[\frac{d^{2}y}{dx^{2}} = 6x - 0.5x^{2},\ \ y\left( 0 \right) = 0,\ \ y\left( 12 \right) = 0,\ \ 0 \leq x \leq 12 \nonumber\]
The second derivative in the above equation is approximated by the second-order accurate central divided difference approximation as learned in the numerical differentiation lessons (Chapter 02.02). A step size of \(h = 4\) is used, so the value of \(y\) is found approximately at the 4 equidistantly placed nodes \(x = 0, 4, 8, 12\).
(A) \(\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0.0625 & 0.125 & 0.0625 & 0 \\ 0 & 0.0625 & 0.125 & 0.0625 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \\ y_{4} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 16.0 \\ 16.0 \\ 0 \\ \end{bmatrix}\)
(B) \(\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0.0625 & - 0.125 & 0.0625 & 0 \\ 0 & 0.0625 & - 0.125 & 0.0625 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \\ y_{4} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 16.0 \\ 16.0 \\ 0 \\ \end{bmatrix}\)
(C) \(\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0.0625 & - 0.125 & 0.0625 & 0 \\ 0 & 0.0625 & - 0.125 & 0.0625 \\ \end{bmatrix}\begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \\ y_{4} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 16.0 \\ 16.0 \\ 0 \\ \end{bmatrix}\)
(D) \(\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0.0625 & 0.125 & 0.0625 & 0 \\ 0 & 0.0625 & 0.125 & 0.0625 \\ \end{bmatrix}\begin{bmatrix} y_{1} \\ y_{2} \\ y_{3} \\ y_{4} \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 16.0 \\ 16.0 \\ \end{bmatrix}\)
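To see where the entries in the options come from: with \(h = 4\), the central-difference approximation \((y_{i-1} - 2y_i + y_{i+1})/h^{2}\) gives coefficients \(1/16 = 0.0625\) and \(-2/16 = -0.125\), and the right-hand side \(6x_i - 0.5x_i^{2}\) equals \(16\) at both \(x = 4\) and \(x = 8\). A NumPy sketch that assembles one such system and solves it (an illustration of the setup, not the official quiz key):

```python
import numpy as np

h = 4.0
x = np.array([0.0, 4.0, 8.0, 12.0])     # the 4 equidistant nodes

A = np.zeros((4, 4))
b = np.zeros(4)
A[0, 0] = 1.0                           # boundary condition y(0) = 0
A[3, 3] = 1.0                           # boundary condition y(12) = 0
for i in (1, 2):                        # interior nodes x = 4 and x = 8
    A[i, i - 1] = 1.0 / h**2            #  0.0625
    A[i, i]     = -2.0 / h**2           # -0.125
    A[i, i + 1] = 1.0 / h**2            #  0.0625
    b[i] = 6.0 * x[i] - 0.5 * x[i]**2   #  16.0 at both interior nodes

y = np.linalg.solve(A, b)
print(y)                                # approximate y at x = 0, 4, 8, 12
```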
For complete solution, go to
http://nm.mathforcollege.com/mcquizzes/04sle/quiz_04sle_ludecomposition_solution.pdf
Problem Set
(1). Show that the LU decomposition method is a computationally more efficient way of finding the inverse of a square matrix than Gaussian elimination. See book for answer.
(2). Use LU decomposition to find \([L]\) and \([U]\) for the coefficient matrix of the following system of equations.
\[4x_{1} + x_{2} - x_{3} = - 2 \nonumber\] \[5x_{1} + x_{2} + 2x_{3} = 4 \nonumber\] \[6x_{1} + x_{2} + x_{3} = 6 \nonumber\]
- Answer
-
\(\left[L\right]\left[U\right]=\left[\begin{matrix}1&0&0\\1.25&1&0\\1.5&2&1\\\end{matrix}\right]\left[\begin{matrix}4&1&-1\\0&-0.25&3.25\\0&0&-4\\\end{matrix}\right]\)
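These factors can be checked with a short Doolittle-style decomposition in Python (no pivoting, 1s on the diagonal of \([L]\)); this is a sketch for small systems, not a production routine:

```python
def lu_doolittle(A):
    """Doolittle LU decomposition without pivoting: A = L U,
    with 1s on the diagonal of L. Assumes no zero pivot arises."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]              # work on a copy of A
    for k in range(n - 1):                 # eliminate below pivot column k
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]    # multiplier l_ik
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

# Coefficient matrix of the system in problem (2)
A = [[4.0, 1.0, -1.0],
     [5.0, 1.0, 2.0],
     [6.0, 1.0, 1.0]]
L, U = lu_doolittle(A)
print(L)   # expected multipliers: 1.25, 1.5, 2
print(U)   # expected pivots: 4, -0.25, -4
```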
(3). Find the inverse of
\[\lbrack A\rbrack = \begin{bmatrix} 3 & 4 & 1 \\ 2 & - 7 & - 1 \\ 8 & 1 & 5 \\ \end{bmatrix} \nonumber\]
using the LU decomposition method.
- Answer
-
\(\left[A\right]^{-1}=\left[\begin{matrix}0.29310&0.16379&-0.025862\\0.15517&-0.060345&-0.043103\\-0.5&-0.25&0.25\\\end{matrix}\right]\)
(4). Fill in the blanks for the unknowns in the LU decomposition of the matrix given below
\[\begin{bmatrix} 25 & 5 & 4 \\ 75 & 7 & 16 \\ 12.5 & 12 & 22 \\ \end{bmatrix} = \begin{bmatrix} \ell_{11} & 0 & 0 \\ \ell_{21} & \ell_{22} & 0 \\ \ell_{31} & \ell_{32} & \ell_{33} \\ \end{bmatrix}\begin{bmatrix} 25 & 5 & 4 \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \\ \end{bmatrix} \nonumber\]
- Answer
-
\(\left[L\right]=\left[\begin{matrix}1&0&0\\3&1&0\\0.5&-1.1875&1\\\end{matrix}\right];\ \left[U\right]=\left[\begin{matrix}25&5&4\\0&-8&4\\0&0&24.75\\\end{matrix}\right]\)
(5). Show that the nonsingular matrix
\[\left\lbrack A \right\rbrack = \begin{bmatrix} 0 & 2 \\ 2 & 0 \\ \end{bmatrix} \nonumber\]
cannot be decomposed into the LU form.
- Answer
-
Since \(a_{11} = 0\), the first step of the forward elimination part of Gaussian elimination would involve a division by zero.
(6). The LU decomposition of
\[\lbrack A\rbrack = \begin{bmatrix} 4 & 1 & - 1 \\ 5 & 1 & 2 \\ 6 & 1 & 1 \\ \end{bmatrix} \nonumber\]
is given by
\[\begin{bmatrix} 4 & 1 & - 1 \\ 5 & 1 & 2 \\ 6 & 1 & 1 \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 1.25 & 1 & 0 \\ 1.5 & 2 & 1 \\ \end{bmatrix}\begin{bmatrix} ?? & ?? & ?? \\ 0 & ?? & ?? \\ 0 & 0 & ?? \\ \end{bmatrix} \nonumber\]
Find the upper triangular matrix in the above decomposition.
- Answer
-
\(\left[U\right]=\left[\begin{matrix}4&1&-1\\0&-0.25&3.25\\0&0&-4\\\end{matrix}\right]\)