8: Orthogonality
In Section 5.3 we introduced the dot product in \(\mathbb{R}^n\) and extended the basic geometric notions of length and distance. A set \(\{ \mathbf{f}_1, \mathbf{f}_2, \dots, \mathbf{f}_m\}\) of nonzero vectors in \(\mathbb{R}^n\) was called an orthogonal set if \(\mathbf{f}_i\bullet \mathbf{f}_j =0\) for all \(i \neq j\), and it was proved that every orthogonal set is linearly independent. In particular, the expansion of a vector as a linear combination of orthogonal basis vectors is easy to obtain because explicit formulas exist for the coefficients. Hence orthogonal bases are the “nice” bases, and much of this chapter is devoted to extending results about bases to orthogonal bases. This leads to some very powerful methods and theorems. Our first task is to show that every subspace of \(\mathbb{R}^n\) has an orthogonal basis.
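The coefficient formulas referred to above are the usual expansion formulas for an orthogonal basis: if \(\{ \mathbf{f}_1, \dots, \mathbf{f}_m\}\) is an orthogonal basis of a subspace \(U\) and \(\mathbf{x}\) lies in \(U\), then
\[
\mathbf{x} \;=\; \frac{\mathbf{x}\bullet \mathbf{f}_1}{\|\mathbf{f}_1\|^2}\,\mathbf{f}_1 \;+\; \frac{\mathbf{x}\bullet \mathbf{f}_2}{\|\mathbf{f}_2\|^2}\,\mathbf{f}_2 \;+\; \cdots \;+\; \frac{\mathbf{x}\bullet \mathbf{f}_m}{\|\mathbf{f}_m\|^2}\,\mathbf{f}_m,
\]
so each coefficient comes directly from a dot product and no linear system needs to be solved.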
- 8.1: Orthogonal Complements and Projections
- This page explores orthogonal complements and projections in vector spaces, focusing on orthogonal sets in \(\mathbb{R}^n\). It presents the Orthogonal Lemma and the Gram-Schmidt algorithm for constructing orthogonal bases, and defines the orthogonal complement \(U^\perp\). Projection of a vector onto a subspace is introduced, with emphasis on the linearity of projection operators (a short Gram-Schmidt sketch follows this list).
- 8.2: Orthogonal Diagonalization
- This page covers the diagonalizability of \(n \times n\) matrices, focusing on symmetric matrices, which are orthogonally diagonalizable with orthonormal eigenvectors. Key concepts include the spectral theorem, properties of orthogonal matrices, and methods for obtaining orthonormal eigenvectors using Gaussian elimination and the Gram-Schmidt process (an orthogonal-diagonalization sketch follows this list).
- 8.3: Positive Definite Matrices
- This page covers positive definite matrices, defined as symmetric matrices whose eigenvalues are all positive; they play a central role in optimization, statistics, and geometry. Key properties include invertibility and a positive determinant. The Cholesky factorization expresses a positive definite matrix as \(A = U^TU\) with \(U\) upper triangular with positive diagonal entries (a Cholesky sketch follows this list).
- 8.4: QR-Factorization
- This page covers the properties and significance of QR-factorization in linear algebra, highlighting how a matrix \(A\) with independent columns factors as \(A = QR\), where \(Q\) has orthonormal columns and \(R\) is an invertible upper triangular matrix. The page discusses how QR-factorization simplifies the computation of least squares approximations and shows that the factorization is unique (a QR least-squares sketch follows this list).
- 8.5: Computing Eigenvalues
- This page presents two iterative methods for approximating the eigenvalues of large matrices: the Power Method and the QR Algorithm. The Power Method approximates the dominant eigenvalue and a corresponding eigenvector through repeated matrix multiplication, though its convergence can be slow. The QR Algorithm improves on this by repeatedly factoring the matrix into orthogonal and upper triangular parts, producing iterates that converge to an upper triangular matrix with the eigenvalues on the diagonal (a power-method sketch follows this list).
- 8.6: The Singular Value Decomposition
- This page covers the diagonalization of square matrices and the Singular Value Decomposition (SVD) of real matrices. It explains the construction, properties, and applications of the SVD, emphasizing orthonormal bases, rank, and the relationships between the fundamental subspaces. The text also discusses the polar decomposition and the pseudoinverse of a matrix, detailing how the SVD is used to obtain both (an SVD sketch follows this list).
- 8.7: Complex Matrices
- This page covers the essentials of linear algebra involving complex matrices, eigenvalues, and their properties. It begins with matrices and complex numbers, defining inner products and norms in \(\mathbb{C}^n\). The treatment of hermitian matrices reveals their real eigenvalues and orthogonality of eigenvectors. Unitary diagonalization is introduced along with Schur's Theorem, while the Cayley-Hamilton theorem links a matrix to its characteristic polynomial.
- 8.8: An Application to Linear Codes over Finite Fields
- This page introduces finite fields and uses them to study linear codes. It covers error detection and correction, Hamming weight and distance, generator and parity-check matrices, and how the minimum distance of a code determines how many errors it can detect and correct.
- 8.9: An Application to Quadratic Forms
- This page covers quadratic forms and their properties, emphasizing the diagonalization of symmetric matrices through orthogonal transformations to simplify expressions. It explains concepts like principal axes, index, and rank, outlining the relationship between eigenvalues and matrix congruence. Transformations in 2D space for conic sections and standard forms of quadratic equations are discussed, along with conditions for positive definiteness.
- 8.10: An Application to Constrained Optimization
- This page covers the optimization of quadratic objective functions subject to budget constraints and the transformations, via the principal axes theorem, that simplify those constraints. It shows how to maximize and minimize quadratic forms using eigenvalues, with applications in fields such as aerodynamics and particle physics; practical examples include a politician's spending plan and a manufacturer maximizing profit under production constraints (a sketch of the eigenvalue computation follows this list).
- 8.11: An Application to Statistical Principal Component Analysis
- This page explores statistical principal component analysis (PCA) in multivariate analysis, covering essential concepts like random variables, mean, variance, standard deviation, and covariance. It explains the covariance matrix for multiple random variables, which can be diagonalized to identify uncorrelated principal components that summarize the variance of the original data.
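The short sketches below are illustrations added to this summary page, not part of the text; they use numpy, and every matrix, vector, and function name in them is a made-up example. First, a minimal version of the Gram-Schmidt algorithm from 8.1, which orthogonalizes a list of independent vectors by subtracting projections:

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a list of linearly independent vectors into an orthogonal set."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        # subtract the projection of v onto each previously constructed vector
        for f in basis:
            w -= (np.dot(v, f) / np.dot(f, f)) * f
        basis.append(w)
    return basis

# Example: orthogonalize two vectors in R^3
f1, f2 = gram_schmidt([[1, 1, 0], [1, 0, 1]])
print(np.dot(f1, f2))  # approximately 0, so f1 and f2 are orthogonal
```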
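Next, a sketch of the orthogonal diagonalization described in 8.2. Rather than the hand computation by Gaussian elimination and Gram-Schmidt that the section works through, this uses numpy's eigh, which returns orthonormal eigenvectors for a symmetric matrix; the matrix \(A\) is an arbitrary example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                 # a symmetric matrix
eigenvalues, Q = np.linalg.eigh(A)         # columns of Q are orthonormal eigenvectors
print(np.round(Q.T @ Q, 10))               # identity matrix: Q is orthogonal
print(np.round(Q.T @ A @ Q, 10))           # diagonal matrix diag(1, 3) of eigenvalues
```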
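A sketch of the Cholesky factorization from 8.3. numpy's cholesky returns a lower triangular factor \(L\) with \(A = LL^T\), so the upper triangular factor in the text's notation is \(U = L^T\); the matrix is again an arbitrary positive definite example:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])                 # symmetric with positive eigenvalues
L = np.linalg.cholesky(A)                  # lower triangular, A = L L^T
U = L.T                                    # upper triangular factor, A = U^T U
print(np.allclose(A, U.T @ U))             # True
```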
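A sketch of using the QR-factorization from 8.4 to solve a least squares problem: with \(A = QR\), the normal equations reduce to the triangular system \(R\mathbf{x} = Q^T\mathbf{b}\). The overdetermined system below is an invented example:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                 # independent columns, more equations than unknowns
b = np.array([1.0, 2.0, 2.0])

Q, R = np.linalg.qr(A)                     # Q has orthonormal columns, R is upper triangular
x = np.linalg.solve(R, Q.T @ b)            # least squares solution from R x = Q^T b
print(x)
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # agrees with numpy's least squares
```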
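A sketch of the Power Method from 8.5: repeatedly multiplying by \(A\) and renormalizing pulls a starting vector toward the dominant eigenvector, and a Rayleigh quotient then estimates the dominant eigenvalue. The matrix and iteration count are arbitrary:

```python
import numpy as np

def power_method(A, iterations=50):
    """Estimate the dominant eigenvalue and eigenvector of A by repeated multiplication."""
    x = np.ones(A.shape[0])
    for _ in range(iterations):
        x = A @ x
        x = x / np.linalg.norm(x)          # renormalize so the entries do not overflow
    return x @ A @ x, x                    # Rayleigh quotient estimate, approximate eigenvector

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_method(A)
print(lam)                                 # approximately 3.618, the dominant eigenvalue of A
```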
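A sketch of the Singular Value Decomposition from 8.6, together with two of the uses mentioned in the summary: the rank equals the number of nonzero singular values, and the pseudoinverse is computed from the SVD. The matrix is an arbitrary example:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
U, s, Vt = np.linalg.svd(A)                # A = U * diag(s) * Vt with U, V orthogonal
print(s)                                   # singular values in decreasing order
print(np.linalg.matrix_rank(A) == np.count_nonzero(s > 1e-12))  # rank = number of nonzero singular values

A_plus = np.linalg.pinv(A)                 # pseudoinverse, computed internally from the SVD
print(np.allclose(A @ A_plus @ A, A))      # one of the defining pseudoinverse identities
```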
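Finally, a sketch of the constrained optimization idea from 8.10: for a symmetric matrix \(A\), the maximum and minimum of the quadratic form \(q(\mathbf{x}) = \mathbf{x}^TA\mathbf{x}\) over the unit sphere \(\|\mathbf{x}\| = 1\) are the largest and smallest eigenvalues of \(A\), attained at the corresponding eigenvectors. The matrix is an invented example:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])                 # symmetric matrix of the quadratic form q(x) = x^T A x
eigenvalues, Q = np.linalg.eigh(A)         # eigenvalues in increasing order, orthonormal eigenvectors

x_max = Q[:, -1]                           # unit eigenvector for the largest eigenvalue
print(eigenvalues[-1], x_max @ A @ x_max)  # both equal 4, the maximum of q on the unit sphere
print(eigenvalues[0])                      # 2, the minimum of q on the unit sphere
```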


