
34.2: Decompositions


    [Figure: Diagonalization as rotation. A matrix diagonalization can be visualized as a rotation of the axes to align them with the matrix's eigenvectors. (Public Domain; Jacopo Bertolotti via Wikipedia)]

    In numerical linear algebra, we factorize matrices to facilitate efficient and/or accurate computations (they are also helpful in proofs). There are many possible matrix decompositions. Some, e.g., the eigendecomposition, require the matrix to be square, while others, e.g., the \(QR\) factorization, exist for arbitrary matrices. Among all possible decompositions (also called factorizations), some common examples include the following (a short code sketch after the list illustrates each one):

    • QR Factorization from Gram-Schmidt orthogonalization:
      • \(A = QR\)
      • \(Q\) has orthonormal columns and \(R\) is an upper-triangular matrix
      • If \(R\) has zero rows, the corresponding columns of \(Q\) can be dropped, yielding the reduced (thin) factorization
      • Exists for arbitrary matrices
    • LU / LDU Decomposition from Gaussian elimination:
      • \(A=LU\) or \(A=LDU\)
      • \(L\) is lower-triangular, \(U\) is upper-triangular, and \(D\) is diagonal
      • Exists for all square matrices, provided row exchanges are allowed (i.e., \(PA=LU\) for some permutation matrix \(P\))
      • The factors come directly from Gaussian elimination: \(U\) is the eliminated matrix and \(L\) records the multipliers
    • Cholesky Decomposition:
      • \(A=R^{T}R \quad (=LDL^{T})\)
      • \(R\) is upper-triangular
      • Factorization of \(A\) into \(R^{T}R\) requires that \(A\) be symmetric and positive-definite. The latter simply requires \(x^{T}Ax>0\) for every nonzero \(x \in \mathbb{R}^n\). Note that \(x^{T}Ax\) is always a scalar value (e.g., note that \(x^TA = y^T\) for some vector \(y\in\mathbb{R}^n\), and \(y^Tx\) is the dot product between \(x\) and \(y\) and, hence, a real scalar).
    • Schur Decomposition:
      • \(A = UTU^{T}\)
      • \(U\) is orthogonal and \(T\) is upper-triangular
      • Exists for every square matrix and says every such matrix, \(A\), is unitarily equivalent to an upper-triangular matrix, \(T\) (i.e., there exists an orthonormal basis with respect to which \(A\) is upper-triangular)
      • Eigenvalues on diagonal of \(T\)
    • Singular Value Decomposition:
      • \(A = U\Sigma V^{T}\)
      • \(U\) is orthogonal, \(V\) is orthogonal, and \(\Sigma\) is diagonal with nonnegative entries (the singular values)
      • Exists for arbitrary matrices
    • Eigenvalue Decomposition:
      • \(A = X\Lambda X^{-1}\)
      • \(X\) is invertible and \(\Lambda\) is diagonal
      • Exists for square matrices with \(n\) linearly independent eigenvectors (i.e., diagonalizable matrices); the columns of \(X\) are those eigenvectors
      • Also called the eigendecomposition
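
    As a concrete illustration, the sketch below (not part of the original list) uses NumPy and SciPy to compute each factorization and verify that the factors reconstruct the matrix. The test matrix \(A\) is an arbitrary choice, and \(S = A^{T}A\) is formed only so that the Cholesky example has a symmetric positive-definite input.

```python
# A minimal sketch: compute each decomposition above with NumPy/SciPy and
# check the reconstruction. A is an arbitrary example matrix; S = A^T A is
# built so that the Cholesky example has a symmetric positive-definite input.
import numpy as np
import scipy.linalg as la

A = np.array([[ 2.0,  1.0, 1.0],
              [ 4.0, -6.0, 0.0],
              [-2.0,  7.0, 2.0]])

# QR: Q has orthonormal columns, R is upper-triangular
Q, R = np.linalg.qr(A)
assert np.allclose(Q @ R, A)

# LU, with the row permutation P that Gaussian elimination may require
P, L, U = la.lu(A)          # A = P L U
assert np.allclose(P @ L @ U, A)

# Cholesky: S = R^T R with R upper-triangular (NumPy returns L = R^T)
S = A.T @ A
R_chol = np.linalg.cholesky(S).T
assert np.allclose(R_chol.T @ R_chol, S)

# Schur: A = U T U^T; for a real matrix with complex eigenvalues the real
# Schur form makes T only block ("quasi") upper-triangular
T, U_schur = la.schur(A)
assert np.allclose(U_schur @ T @ U_schur.T, A)

# SVD: the singular values are returned as a 1-D array
U_svd, sigma, Vt = np.linalg.svd(A)
assert np.allclose(U_svd @ np.diag(sigma) @ Vt, A)

# Eigendecomposition: valid because this A has n independent eigenvectors
lam, X = np.linalg.eig(A)
assert np.allclose(X @ np.diag(lam) @ np.linalg.inv(X), A)
```

    Note that np.linalg.eig may return complex eigenvalues and eigenvectors for a general real matrix; the reconstruction check still holds whenever the matrix is diagonalizable.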

