2: Matrix Algebra
In the study of systems of linear equations in Chapter 1, we found it convenient to manipulate the augmented matrix of the system. Our aim was to reduce it to row-echelon form (using elementary row operations) and hence to write down all solutions to the system. In the present chapter we consider matrices for their own sake. While some of the motivation comes from linear equations, it turns out that matrices can be multiplied and added and so form an algebraic system somewhat analogous to the real numbers. This “matrix algebra” is useful in ways that are quite different from the study of linear equations. For example, the geometrical transformations obtained by rotating the Euclidean plane about the origin can be viewed as multiplications by certain \(2 \times 2\) matrices.
These “matrix transformations” are an important tool in geometry and, in turn, the geometry provides a “picture” of the matrices. Furthermore, matrix algebra has many other applications, some of which will be explored in this chapter. This subject is quite old and was first studied systematically in 1858 by Arthur Cayley.\(^{1}\)
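For instance, counterclockwise rotation of the plane through an angle \(\theta\) about the origin is multiplication by the standard rotation matrix (a preview of Section 2.6):
\[ \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}. \]
Taking \(\theta = \frac{\pi}{2}\) carries \((1, 0)\) to \((0, 1)\), as a quarter turn should.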
- 2.1: Matrix Addition, Scalar Multiplication, and Transposition
- This page provides an overview of matrix theory, covering definitions, properties, and operations including addition, subtraction, scalar multiplication, and the transpose. It emphasizes the requirements for matrix equality and the role of the zero matrix. Key properties such as commutativity, associativity, and the existence of additive inverses are discussed, along with examples. (A worked instance appears below the section list.)
- 2.2: Equations, Matrices and Transformations
- This page covers the fundamentals of vectors, linear equations, and matrix algebra. It defines vectors in \(\mathbb{R}^n\), discusses the matrix representation of linear systems, and shows how a matrix-vector product is a linear combination of the columns of the matrix. The consistency of systems is explored through examples and properties, including solutions and the identity matrix. (Worked instance below the list.)
- 2.3: Matrix Multiplication
- This page delves into matrix products, expanding on concepts from systems of linear equations to matrix multiplication and its properties. It outlines how to compute matrix products through dot products and highlights essential characteristics, such as non-commutativity and the associative and distributive laws. The text covers block matrix multiplication and its applications, connects matrix operations to graph theory, and demonstrates how adjacency matrices represent paths between vertices. (Worked instance below the list.)
- 2.4: Matrix Inverses
- This page explores matrix inverses, explaining their mathematical foundation, conditions for invertibility, and the importance of determinants. It shows that a matrix \(A\) is invertible if \(\det A \neq 0\) and discusses methods for finding inverses, particularly through row reduction. Key properties of invertible matrices are examined, such as their relationship to the cancellation laws and to matrix transformations. (Worked instance below the list.)
- 2.5: Elementary Matrices
- This page covers elementary matrices, obtained from the identity matrix by a single row operation, which are essential for solving linear systems and for matrix inversion. It explains that left-multiplication by an elementary matrix performs the corresponding row operation, and that a matrix is invertible exactly when it is a product of elementary matrices. The page also introduces the Smith normal form, detailing the process for its computation and the uniqueness of reduced row-echelon forms. (Worked instance below the list.)
- 2.6: Linear Transformations
- This page covers essential concepts of linear algebra related to matrix transformations and linear transformations. It outlines the definitions and properties, illustrating how linear transformations can be represented as matrix transformations. Key examples include reflections and rotations in \(\mathbb{R}^2\), where the matrix representations are derived. (Worked instance below the list.)
- 2.7: LU-Factorization
- This page covers the LU-factorization of matrices, where a matrix \(A\) is expressed as \(A = LU\) with \(L\) lower triangular and \(U\) upper triangular. It describes how Gaussian elimination yields \(L\) and \(U\), and the role of permutation matrices in handling row interchanges. Uniqueness of \(L\) and \(U\) is established when \(A\) has full rank, and techniques for transforming matrices into row-echelon form are detailed. (Worked instance below the list.)
- 2.8: An Application to Input-Output Economic Models
- This page covers Wassily Leontief's Nobel Prize-winning work on mathematical models of economic systems, using an input-output matrix to describe how the industries of a (simplified) society interact. It explains the equilibrium condition, in which each industry's expenditures equal its revenues, and presents a model for equilibrium price structures. (Worked instance below the list.)
- 2.9: An Application to Markov Chains
- This page covers Markov chains, emphasizing transitions between states determined solely by the current state. It explores transition probabilities, state vectors, and steady-state distributions through various examples, including weather patterns and animal behaviors. Using transition matrices, the text details how to compute future states, showing convergence to long-term probabilities. (Worked instance below the list.)
- 2.E: Supplementary Exercises for Chapter 2
- This page discusses supplementary exercises and concepts in linear algebra, focusing on solving matrix equations and understanding linear transformations. It covers properties of matrices, including invertibility conditions for matrices \(P\) and \(Q\) and for their sum \(P + Q\), and the implications thereof. Key topics include scalar matrices, nilpotent and idempotent matrices, and matrix ranks and inverses, promoting a deeper comprehension of matrix theory and its applications. (Worked instances below the list.)
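To make these summaries concrete, small worked instances of the main computations follow. The specific matrices are illustrative choices made here, not examples drawn from the sections themselves.

For Section 2.1, addition acts entrywise and transposition interchanges rows and columns, with \((A + B)^{T} = A^{T} + B^{T}\); for example,
\[ \left( \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 0 & 5 \\ 1 & 2 \end{bmatrix} \right)^{T} = \begin{bmatrix} 1 & 7 \\ 4 & 6 \end{bmatrix}^{T} = \begin{bmatrix} 1 & 4 \\ 7 & 6 \end{bmatrix}. \]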
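For Section 2.2, a matrix-vector product is a linear combination of the columns of the matrix:
\[ \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = x_1 \begin{bmatrix} 1 \\ 3 \end{bmatrix} + x_2 \begin{bmatrix} 2 \\ 4 \end{bmatrix}, \]
so the system \(A\mathbf{x} = \mathbf{b}\) is consistent exactly when \(\mathbf{b}\) is such a combination.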
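For Section 2.3, each entry of a product is the dot product of a row of the first factor with a column of the second, and order matters:
\[ \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 5 & 6 \end{bmatrix} = \begin{bmatrix} 10 & 13 \\ 20 & 27 \end{bmatrix}, \qquad \begin{bmatrix} 0 & 1 \\ 5 & 6 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 3 & 4 \\ 23 & 34 \end{bmatrix}, \]
so \(AB \neq BA\) in general.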
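For Section 2.4, a \(2 \times 2\) matrix is invertible exactly when \(\det A = ad - bc \neq 0\), in which case
\[ \begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}; \]
for example, \(\begin{bmatrix} 2 & 1 \\ 5 & 3 \end{bmatrix}^{-1} = \begin{bmatrix} 3 & -1 \\ -5 & 2 \end{bmatrix}\) since the determinant is \(1\).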
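For Section 2.5, performing a row operation on the identity matrix gives an elementary matrix, and left-multiplying by it performs that same operation. Adding \(3\) times row 1 to row 2 of \(I_2\) gives
\[ E = \begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix}, \qquad E \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} a & b \\ 3a + c & 3b + d \end{bmatrix}. \]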
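For Section 2.6, reflection in the \(x\) axis is the matrix transformation
\[ \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x \\ -y \end{bmatrix}, \]
and the rotation matrix displayed in the introduction above arises in the same way.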
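For Section 2.7, one step of Gaussian elimination on the matrix below yields its LU-factorization:
\[ \begin{bmatrix} 2 & 1 \\ 6 & 8 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix} \begin{bmatrix} 2 & 1 \\ 0 & 5 \end{bmatrix}, \]
with \(L\) recording the multiplier \(3\) used to clear the \((2,1)\) entry.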
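For Section 2.8, writing \(E\) for the input-output matrix and \(\mathbf{p}\) for the price vector (the notation assumed here), the condition that each industry's expenditures equal its revenues takes the form of the matrix equation
\[ E\mathbf{p} = \mathbf{p}, \]
so equilibrium price structures are found by solving a linear system.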
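For Section 2.9, with an illustrative transition matrix \(P\) (columns summing to \(1\)), state vectors evolve by \(\mathbf{x}_{k+1} = P\mathbf{x}_{k}\), and a steady-state vector satisfies \(P\mathbf{x} = \mathbf{x}\):
\[ P = \begin{bmatrix} 0.9 & 0.5 \\ 0.1 & 0.5 \end{bmatrix}, \qquad P \begin{bmatrix} 5/6 \\ 1/6 \end{bmatrix} = \begin{bmatrix} 5/6 \\ 1/6 \end{bmatrix}. \]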
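For the supplementary exercises, a matrix \(N\) is nilpotent when some power of it is zero, and \(P\) is idempotent when \(P^{2} = P\); for instance,
\[ \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}^{2} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, \qquad \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}^{2} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}. \]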
1. Arthur Cayley (1821–1895) showed his mathematical talent early and graduated from Cambridge in 1842 as senior wrangler. With no employment in mathematics in view, he took legal training and worked as a lawyer while continuing to do mathematics, publishing nearly 300 papers in fourteen years. Finally, in 1863, he accepted the Sadlerian professorship at Cambridge and remained there for the rest of his life, valued for his administrative and teaching skills as well as for his scholarship. His mathematical achievements were of the first rank. In addition to originating matrix theory and the theory of determinants, he did fundamental work in group theory, in higher-dimensional geometry, and in the theory of invariants. He was one of the most prolific mathematicians of all time and produced 966 papers.


