8.3: Positive Definite Matrices


    All the eigenvalues of any symmetric matrix are real; this section is about the case in which the eigenvalues are positive. These matrices, which arise whenever optimization (maximum and minimum) problems are encountered, have countless applications throughout science and engineering. They also arise in statistics (for example, in factor analysis used in the social sciences) and in geometry (see Section [sec:8_8]). We will encounter them again in Chapter [chap:10] when describing all inner products in \(\mathbb{R}^n\).

Definition [def:024811] (Positive Definite Matrices). A square matrix is called positive definite if it is symmetric and all its eigenvalues \(\lambda\) are positive, that is, \(\lambda > 0\).
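This definition is easy to test numerically. Here is a minimal sketch in Python, assuming NumPy is available (the function name and tolerance are illustrative, not from the text):

```python
import numpy as np

def is_positive_definite(A, tol=0.0):
    """Direct check of the definition: symmetric, with all eigenvalues > 0."""
    A = np.asarray(A, dtype=float)
    if not np.allclose(A, A.T):
        return False                                  # not symmetric
    return bool(np.all(np.linalg.eigvalsh(A) > tol))  # all eigenvalues positive

print(is_positive_definite([[10, 5, 2], [5, 3, 2], [2, 2, 3]]))  # True
print(is_positive_definite([[1, 2], [2, 1]]))  # False: eigenvalues are 3 and -1
```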

    Because these matrices are symmetric, the principal axes theorem plays a central role in the theory.

Theorem [thm:024815]. If \(A\) is positive definite, then it is invertible and \(\det A > 0\).

Proof. If \(A\) is \(n \times n\) with eigenvalues \(\lambda_{1}, \lambda_{2}, \dots, \lambda_{n}\), then \(\det A = \lambda_{1}\lambda_{2} \cdots \lambda_{n}\) by the principal axes theorem (or the corollary to Theorem [thm:024503]). Since each \(\lambda_{i} > 0\) by the definition, \(\det A > 0\), and hence \(A\) is invertible.

    If \(\mathbf{x}\) is a column in \(\mathbb{R}^n\) and \(A\) is any real \(n \times n\) matrix, we view the \(1 \times 1\) matrix \(\mathbf{x}^{T}A\mathbf{x}\) as a real number. With this convention, we have the following characterization of positive definite matrices.

Theorem [thm:024830]. A symmetric matrix \(A\) is positive definite if and only if \(\mathbf{x}^{T} A \mathbf{x} > 0\) for every column \(\mathbf{x} \neq \mathbf{0}\) in \(\mathbb{R}^n\).

Proof. Since \(A\) is symmetric, the principal axes theorem gives \(P^{T}AP = D = \func{diag}(\lambda_{1}, \lambda_{2}, \dots, \lambda_{n})\) where \(P^{-1} = P^{T}\) and the \(\lambda_{i}\) are the eigenvalues of \(A\). Given a column \(\mathbf{x}\) in \(\mathbb{R}^n\), write \(\mathbf{y} = P^{T}\mathbf{x} = \left[ \begin{array}{cccc} y_{1} & y_{2} & \dots & y_{n} \end{array}\right]^T\). Then

    \[\label{symmetricEq} \mathbf{x}^TA\mathbf{x} = \mathbf{x}^T(PDP^T)\mathbf{x} = \mathbf{y}^TD\mathbf{y} = \lambda_{1}y_{1}^2 + \lambda_{2}y_{2}^2 + \dots + \lambda_{n}y_{n}^2 \nonumber \]

    If \(A\) is positive definite and \(\mathbf{x} \neq \mathbf{0}\), then \(\mathbf{x}^{T}A\mathbf{x} > 0\) by ([symmetricEq]) because some \(y_{j} \neq 0\) and every \(\lambda_{i} > 0\). Conversely, if \(\mathbf{x}^{T}A\mathbf{x} > 0\) whenever \(\mathbf{x} \neq \mathbf{0}\), let \(\mathbf{x} = P\mathbf{e}_{j} \neq \mathbf{0}\) where \(\mathbf{e}_{j}\) is column \(j\) of \(I_{n}\). Then \(\mathbf{y} = \mathbf{e}_{j}\), so ([symmetricEq]) reads \(\lambda_{j} = \mathbf{x}^{T}A\mathbf{x} > 0\).

    Note that Theorem [thm:024830] shows that the positive definite matrices are exactly the symmetric matrices \(A\) for which the quadratic form \(q = \mathbf{x}^{T}A\mathbf{x}\) takes only positive values.
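The change of variables in the proof can be checked numerically. In the sketch below (assuming NumPy), `np.linalg.eigh` diagonalizes the symmetric matrix \(A\), returning the eigenvalues and an orthogonal \(P\); the quadratic form \(\mathbf{x}^TA\mathbf{x}\) then equals \(\lambda_{1}y_{1}^2 + \dots + \lambda_{n}y_{n}^2\) with \(\mathbf{y} = P^T\mathbf{x}\):

```python
import numpy as np

A = np.array([[10., 5., 2.],
              [5., 3., 2.],
              [2., 2., 3.]])

lam, P = np.linalg.eigh(A)       # A = P D P^T with P orthogonal
x = np.random.default_rng(1).standard_normal(3)
y = P.T @ x                      # change of variables y = P^T x

print(x @ A @ x)                 # the quadratic form x^T A x
print(np.sum(lam * y ** 2))      # the same number: sum of lambda_i * y_i^2
```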

Example [exa:024865]. If \(U\) is any invertible \(n \times n\) matrix, show that \(A = U^{T}U\) is positive definite.

Solution. If \(\mathbf{x}\) is in \(\mathbb{R}^n\) and \(\mathbf{x} \neq \mathbf{0}\), then

    \[\mathbf{x}^TA\mathbf{x} = \mathbf{x}^T(U^TU)\mathbf{x} = (U\mathbf{x})^T(U\mathbf{x})= \| U\mathbf{x} \|^2 > 0 \nonumber \]

    because \(U\mathbf{x} \neq \mathbf{0}\) (\(U\) is invertible). Hence Theorem [thm:024830] applies.
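A quick numerical illustration of this example (a sketch assuming NumPy; a random square matrix is invertible with probability 1, so the determinant check is just a safeguard):

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.standard_normal((4, 4))
assert abs(np.linalg.det(U)) > 1e-10   # U is invertible

A = U.T @ U                            # symmetric by construction
x = rng.standard_normal(4)

# x^T A x = ||Ux||^2 > 0 for x != 0, so A is positive definite
print(np.isclose(x @ A @ x, np.linalg.norm(U @ x) ** 2))  # True
print(np.all(np.linalg.eigvalsh(A) > 0))                  # True
```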

    It is remarkable that the converse to Example [exa:024865] is also true. In fact every positive definite matrix \(A\) can be factored as \(A = U^{T}U\) where \(U\) is an upper triangular matrix with positive elements on the main diagonal. However, before verifying this, we introduce another concept that is central to any discussion of positive definite matrices.

    If \(A\) is any \(n \times n\) matrix, let \(^{(r)}A\) denote the \(r \times r\) submatrix in the upper left corner of \(A\); that is, \(^{(r)}A\) is the matrix obtained from \(A\) by deleting the last \(n - r\) rows and columns. The matrices \(^{(1)}A, ^{(2)}A, ^{(3)}A, \dots\), \(^{(n)}A = A\) are called the principal submatrices of \(A\).

Example [exa:024883]. If \(A = \left[ \begin{array}{rrr} 10 & 5 & 2 \\ 5 & 3 & 2 \\ 2 & 2 & 3 \end{array}\right]\), then \(^{(1)}A = \left[ 10 \right]\), \(^{(2)}A = \left[ \begin{array}{rr} 10 & 5 \\ 5 & 3 \end{array}\right]\), and \(^{(3)}A = A\).
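In code, \(^{(r)}A\) is simply the leading \(r \times r\) block of the array, so NumPy slicing produces each principal submatrix directly:

```python
import numpy as np

A = np.array([[10, 5, 2],
              [5, 3, 2],
              [2, 2, 3]])

for r in range(1, 4):
    print(f"({r})A =")
    print(A[:r, :r])   # delete the last n - r rows and columns
```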

Lemma [lem:024890]. If \(A\) is positive definite, so is each principal submatrix \(^{(r)}A\) for \(r = 1, 2, \dots, n\).

Proof. Write \(A = \left[ \begin{array}{rr} ^{(r)}A & P \\ Q & R \end{array}\right]\) in block form. If \(\mathbf{y} \neq \mathbf{0}\) in \(\mathbb{R}^r\), write \(\mathbf{x} = \left[ \begin{array}{r} \mathbf{y} \\ \mathbf{0} \end{array}\right]\) in \(\mathbb{R}^n\).

    Then \(\mathbf{x} \neq \mathbf{0}\), so the fact that \(A\) is positive definite gives

    \[0 < \mathbf{x}^TA\mathbf{x} = \left[ \begin{array}{rr} \mathbf{y}^T & \mathbf{0} \end{array}\right] \left[ \begin{array}{rr} ^{(r)}A & P \\ Q & R \end{array}\right] \left[ \begin{array}{r} \mathbf{y} \\ \mathbf{0} \end{array}\right] = \mathbf{y}^T(^{(r)}A)\mathbf{y} \nonumber \]

This shows that \(^{(r)}A\) is positive definite by Theorem [thm:024830].¹

    If \(A\) is positive definite, Lemma [lem:024890] and Theorem [thm:024815] show that \(\det (^{(r)}A) > 0\) for every \(r\). This proves part of the following theorem which contains the converse to Example [exa:024865], and characterizes the positive definite matrices among the symmetric ones.

Theorem [thm:024907]. The following conditions are equivalent for a symmetric \(n \times n\) matrix \(A\):

    1. \(A\) is positive definite.
    2. \(\det (^{(r)}A) > 0\) for each \(r = 1, 2, \dots, n\).
    3. \(A = U^{T}U\) where \(U\) is an upper triangular matrix with positive entries on the main diagonal.

Furthermore, the factorization in (3) is unique (called the Cholesky factorization of \(A\)).
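Condition (2) gives a determinant test that avoids computing eigenvalues. A minimal sketch, assuming NumPy (the function name is ours, and \(A\) is assumed symmetric as in the theorem):

```python
import numpy as np

def positive_definite_by_minors(A):
    """Condition (2): det((r)A) > 0 for r = 1, ..., n."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    return all(np.linalg.det(A[:r, :r]) > 0 for r in range(1, n + 1))

A = np.array([[10, 5, 2], [5, 3, 2], [2, 2, 3]])
print(positive_definite_by_minors(A))  # True: the minors are 10, 5, and 3
```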

Proof. First, (3) \(\Rightarrow\) (1) by Example [exa:024865], and (1) \(\Rightarrow\) (2) by Lemma [lem:024890] and Theorem [thm:024815].

(2) \(\Rightarrow\) (3). Assume (2) and proceed by induction on \(n\). If \(n = 1\), then \(A = \left[ a \right]\) where \(a > 0\) by (2), so take \(U = \left[ \sqrt{a} \right]\). If \(n > 1\), write \(B = {}^{(n-1)}A\). Then \(B\) is symmetric and satisfies (2) so, by induction, we have \(B = U^{T}U\) as in (3) where \(U\) is of size \((n - 1) \times (n - 1)\). Then, as \(A\) is symmetric, it has block form \(A = \left[ \begin{array}{cc} B & \mathbf{p} \\ \mathbf{p}^T & b \end{array}\right]\) where \(\mathbf{p}\) is a column in \(\mathbb{R}^{n-1}\) and \(b\) is in \(\mathbb{R}\). If we write \(\mathbf{x} = (U^{T})^{-1}\mathbf{p}\) and \(c = b - \mathbf{x}^{T}\mathbf{x}\), block multiplication gives

    \[A = \left[ \begin{array}{cc} U^TU & \mathbf{p} \\ \mathbf{p}^T & b \end{array}\right] = \left[ \begin{array}{cc} U^T & 0 \\ \mathbf{x}^T & 1 \end{array}\right] \left[ \begin{array}{cc} U & \mathbf{x} \\ 0 & c \end{array}\right] \nonumber \]

    as the reader can verify. Taking determinants and applying Theorem [thm:007890] gives \(\det A = \det (U^{T}) \det U \cdot c = c(\det U)^{2}\). Hence \(c > 0\) because \(\det A > 0\) by (2), so the above factorization can be written

    \[A = \left[ \begin{array}{cc} U^T & 0 \\ \mathbf{x}^T & \sqrt{c} \end{array}\right] \left[ \begin{array}{cc} U & \mathbf{x} \\ 0 & \sqrt{c} \end{array}\right] \nonumber \]

The right-hand factor is upper triangular with positive diagonal entries (those of \(U\), together with \(\sqrt{c}\)), so this proves (3).

    As to the uniqueness, suppose that \(A = U^TU = U_{1}^TU_{1}\) are two Cholesky factorizations. Now write \(D = UU_{1}^{-1} = (U^T)^{-1}U_{1}^T\). Then \(D\) is upper triangular, because \(D = UU_{1}^{-1}\), and lower triangular, because \(D = (U^T)^{-1}U_{1}^T\), and so it is a diagonal matrix. Thus \(U = DU_{1}\) and \(U_{1} = DU\), so it suffices to show that \(D = I\). But eliminating \(U_{1}\) gives \(U = D^{2}U\), so \(D^{2} = I\) because \(U\) is invertible. Since the diagonal entries of \(D\) are positive (this is true of \(U\) and \(U_{1}\)), it follows that \(D = I\).
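Uniqueness means every correct routine must produce the same factor. For instance, NumPy's `np.linalg.cholesky` returns the lower triangular factor \(L\) with \(A = LL^T\), so \(U = L^T\) is exactly the upper triangular factor of the theorem:

```python
import numpy as np

A = np.array([[10., 5., 2.],
              [5., 3., 2.],
              [2., 2., 3.]])

L = np.linalg.cholesky(A)       # lower triangular, A = L L^T
U = L.T                         # the unique upper triangular Cholesky factor
print(np.allclose(U.T @ U, A))  # True
```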

    The remarkable thing is that the matrix \(U\) in the Cholesky factorization is easy to obtain from \(A\) using row operations. The key is that Step 1 of the following algorithm is possible for any positive definite matrix \(A\). A proof of the algorithm is given following Example [exa:024959].

Algorithm for the Cholesky Factorization [alg:024947]. If \(A\) is a positive definite matrix, the Cholesky factorization \(A = U^{T}U\) can be obtained by the following two steps (a code sketch follows the steps):

Step 1. Carry \(A\) to an upper triangular matrix \(U_{1}\) with positive diagonal entries using row operations, each of which adds a multiple of one row to a lower row.
Step 2. Obtain \(U\) from \(U_{1}\) by dividing each row of \(U_{1}\) by the square root of the diagonal entry in that row.
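Here is a minimal sketch of the two steps in code, assuming NumPy (the function name is ours; Step 1 is Gaussian elimination below the diagonal with no row scaling and no row swaps):

```python
import numpy as np

def cholesky_by_row_ops(A):
    """Cholesky factorization by the two-step algorithm above."""
    U1 = np.array(A, dtype=float)   # work on a copy of A
    n = U1.shape[0]
    # Step 1: add multiples of each row to the rows below it,
    # creating zeros below the diagonal.
    for j in range(n):
        for i in range(j + 1, n):
            U1[i] -= (U1[i, j] / U1[j, j]) * U1[j]
    # Step 2: divide each row by the square root of its diagonal entry.
    return U1 / np.sqrt(np.diag(U1))[:, None]
```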

Example [exa:024959]. Find the Cholesky factorization of \(A = \left[ \begin{array}{rrr} 10 & 5 & 2 \\ 5 & 3 & 2 \\ 2 & 2 & 3 \end{array}\right]\).

Solution. The matrix \(A\) is positive definite by Theorem [thm:024907] because \(\det {^{(1)}A} = 10 > 0\), \(\det {^{(2)}A} = 5 > 0\), and \(\det {^{(3)}A} = \det A = 3 > 0\). Hence Step 1 of the algorithm is carried out as follows:

    \[A = \left[ \begin{array}{rrr} 10 & 5 & 2 \\ 5 & 3 & 2 \\ 2 & 2 & 3 \end{array}\right] \rightarrow \left[ \begin{array}{rrc} 10 & 5 & 2 \\ 0 & \frac{1}{2} & 1 \\ 0 & 1 & \frac{13}{5} \end{array}\right] \rightarrow \left[ \begin{array}{rrr} 10 & 5 & 2 \\ 0 & \frac{1}{2} & 1 \\ 0 & 0 & \frac{3}{5} \end{array}\right] = U_{1} \nonumber \]

    Now carry out Step 2 on \(U_{1}\) to obtain \(U = \left[ \def\arraystretch{1.5}\begin{array}{ccc} \sqrt{10} & \frac{5}{\sqrt{10}} & \frac{2}{\sqrt{10}} \\ 0 & \frac{1}{\sqrt{2}} & \sqrt{2} \\ 0 & 0 & \frac{\sqrt{3}}{\sqrt{5}} \end{array}\right]\).

    The reader can verify that \(U^{T}U = A\).
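Running the sketch given after the algorithm above on this \(A\) reproduces \(U\) and confirms the factorization numerically:

```python
import numpy as np

A = np.array([[10., 5., 2.],
              [5., 3., 2.],
              [2., 2., 3.]])

U = cholesky_by_row_ops(A)      # sketch defined after the algorithm above
print(np.round(U, 4))           # rows: sqrt(10)..., 1/sqrt(2)..., sqrt(3/5)
print(np.allclose(U.T @ U, A))  # True: U^T U = A
```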

Proof of the Algorithm. If \(A\) is positive definite, let \(A = U^{T}U\) be the Cholesky factorization, and let \(D = \func{diag}(d_{1}, \dots, d_{n})\) be the common diagonal of \(U\) and \(U^{T}\). Then \(U^{T}D^{-1}\) is lower triangular with ones on the diagonal (call such matrices LT-1). Hence \(L = (U^{T}D^{-1})^{-1}\) is also LT-1, and so \(I_{n} \to L\) by a sequence of row operations each of which adds a multiple of a row to a lower row (verify; modify columns right to left). But then \(A \to LA\) by the same sequence of row operations (see the discussion preceding Theorem [thm:005294]). Since \(LA = [D(U^{T})^{-1}][U^{T}U] = DU\) is upper triangular with positive entries on the diagonal, this shows that Step 1 of the algorithm is possible.

Turning to Step 2, let \(A \to U_{1}\) as in Step 1, so that \(U_{1} = L_{1}A\) where \(L_{1}\) is LT-1. Since \(A\) is symmetric, we get

    \[\label{symmetricEq2} L_{1}U_{1}^T = L_{1}(L_{1}A)^T = L_{1}A^TL_{1}^T = L_{1}AL_{1}^T = U_{1}L_{1}^T \nonumber \]

    Let \(D_{1} = \func{diag}(e_{1}, \dots, e_{n})\) denote the diagonal of \(U_{1}\). Then ([symmetricEq2]) gives \(L_{1}(U_{1}^TD_{1}^{-1}) = U_{1}L_{1}^TD_{1}^{-1}\). This is both upper triangular (right side) and LT-1 (left side), and so must equal \(I_{n}\). In particular, \(U_{1}^TD_{1}^{-1} = L_{1}^{-1}\). Now let \(D_{2} = \func{diag}(\sqrt{e_{1}}, \dots, \sqrt{e_{n}})\), so that \(D_{2}^2 = D_{1}\). If we write \(U = D_{2}^{-1}U_{1}\) we have

    \[U^TU = (U_{1}^TD_{2}^{-1})(D_{2}^{-1}U_{1}) = U_{1}^T(D_{2}^2)^{-1}U_{1} = (U_{1}^TD_{1}^{-1})U_{1} = (L_{1}^{-1})U_{1} = A \nonumber \]

    This proves Step 2 because \(U = D_{2}^{-1}U_{1}\) is formed by dividing each row of \(U_{1}\) by the square root of its diagonal entry (verify).


¹ A similar argument shows that, if \(B\) is any matrix obtained from a positive definite matrix \(A\) by deleting certain rows and deleting the same columns, then \(B\) is also positive definite.

    This page titled 8.3: Positive Definite Matrices is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by W. Keith Nicholson (Lyryx Learning Inc.) via source content that was edited to the style and standards of the LibreTexts platform.