
2: Matrix Algebra


    After a traditional look at matrix addition, scalar multiplication, and transposition in Section 2.1, matrix-vector multiplication is introduced in Section 2.2 by viewing the left side of a system of linear equations as the product \(A \mathbf{x}\) of the coefficient matrix \(A\) with the column \(\mathbf{x}\) of variables. The usual dot-product definition of matrix-vector multiplication follows. Section 2.2 ends by viewing an \(m \times n\) matrix \(A\) as a transformation \(\mathbb{R}^n \rightarrow \mathbb{R}^m\). This is illustrated for \(\mathbb{R}^2 \rightarrow \mathbb{R}^2\) by describing reflection in the \(x\) axis, rotation of \(\mathbb{R}^2\) through \(\frac{\pi}{2}\), shears, and so on.
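
    Both viewpoints are easy to check numerically. The following is a minimal NumPy sketch (not part of the text; the matrices and vectors are made up for illustration): it computes a system's left side as \(A \mathbf{x}\) and applies two of the plane transformations mentioned above.

```python
import numpy as np

# The left side of a linear system is the matrix-vector product A @ x,
# one entry per equation.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
x = np.array([5.0, 6.0])
print(A @ x)                         # [17. 39.]

# A 2x2 matrix viewed as a transformation R^2 -> R^2:
reflect_x = np.array([[1.0,  0.0],
                      [0.0, -1.0]])  # reflection in the x axis
theta = np.pi / 2
rotate = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])  # rotation through pi/2

v = np.array([1.0, 2.0])
print(reflect_x @ v)                 # [ 1. -2.]
print(rotate @ v)                    # ~[-2.  1.]
```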

    In Section 2.3, the product of matrices \(A\) and \(B\) is defined by \(A B=\left[\begin{array}{llll}A \mathbf{b}_1 & A \mathbf{b}_2 & \cdots & A \mathbf{b}_n\end{array}\right]\), where the \(\mathbf{b}_i\) are the columns of \(B\). A routine computation shows that this is the matrix of the transformation \(B\) followed by \(A\). This observation is used frequently throughout the book, and it leads to simple, conceptual proofs of the basic axioms of matrix algebra. Note that linearity is not required; all that is needed is a few basic properties of matrix-vector multiplication developed in Section 2.2. Thus the usual arcane definition of matrix multiplication is split into two well-motivated parts, each an important aspect of matrix algebra. Of course, this has the pedagogical advantage that the conceptual power of geometry can be invoked to illuminate and clarify algebraic techniques and definitions.
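
    Both the column description of \(AB\) and the composition property can be verified directly. Here is a short NumPy sketch (illustrative matrices, not taken from the text):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# Column i of AB is A applied to column i of B, exactly as in the definition.
AB = np.column_stack([A @ B[:, i] for i in range(B.shape[1])])
print(np.allclose(AB, A @ B))                  # True

# AB is the matrix of "apply B, then A": (AB)x = A(Bx) for every x.
x = np.array([1.0, -2.0])
print(np.allclose(AB @ x, A @ (B @ x)))        # True
```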

    In Sections 2.4 and 2.5, matrix inverses are characterized, their geometrical meaning is explored, and block multiplication is introduced, emphasizing those cases needed later in the book. Elementary matrices are discussed, and the Smith normal form is derived. Then in Section 2.6, linear transformations \(\mathbb{R}^n \rightarrow \mathbb{R}^m\) are defined and shown to be matrix transformations. The matrices of reflections, rotations, and projections in the plane are determined. Finally, matrix multiplication is related to directed graphs, matrix LU-factorization is introduced, and applications to economic models and Markov chains are presented.
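
    As a quick numerical illustration of invertibility (a NumPy sketch with a made-up matrix, not an example from the text): since \(\det A = 1 \neq 0\) below, \(A^{-1}\) exists, and it solves \(A \mathbf{x} = \mathbf{b}\) with a single matrix-vector product.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])          # det A = 1, so A is invertible

A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))   # True: A A^(-1) = I

# With the inverse in hand, A x = b is solved by a single product.
b = np.array([1.0, 2.0])
x = A_inv @ b
print(np.allclose(A @ x, b))               # True
```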

    • 2.0: Prelude to Matrix Algebra
    • 2.1: Matrix Addition, Scalar Multiplication, and Transposition
      This page provides an overview of matrix theory, covering definitions, properties, and operations including addition, subtraction, scalar multiplication, and the transpose. It emphasizes the requirements for matrix equality and the zero matrix's role. Key properties such as commutativity, associativity, and the existence of inverses in matrix addition are discussed, along with examples.
    • 2.2: Equations, Matrices and Transformations
      This page covers the fundamentals of vectors, linear equations, and matrix algebra. It defines vectors in \(\mathbb{R}^n\), discusses matrix representation of linear systems, and highlights matrix-vector multiplication's role in linear combinations. The consistency of systems is explored through examples and properties, including solutions and the identity matrix.
    • 2.3: Matrix Multiplication
      This page delves into matrix-vector products, expanding on concepts from systems of linear equations to matrix multiplication and its properties. It outlines how to compute matrix products through dot products and highlights essential characteristics, such as non-commutativity and the associative/distributive laws. The text covers block matrix multiplication and its applications, connects matrix operations to graph theory, and demonstrates how adjacency matrices represent paths between vertices.
    • 2.4: Matrix Inverses
      This page explores matrix inverses, explaining their mathematical foundation, conditions for invertibility, and the importance of determinants. It outlines that a matrix \(A\) is invertible if \(\det A \neq 0\) and discusses methods for finding inverses, particularly through row reduction. Key properties of invertible matrices are examined, such as relationships to cancellation laws and transformations.
    • 2.5: Elementary Matrices
      This page covers the concept of elementary matrices, derived from the identity matrix through row operations, essential for solving linear systems and matrix inversion. It explains the transformation of a matrix using these elementary matrices, highlighting that a matrix is invertible if it can be represented this way. The page also introduces the Smith normal form, detailing the process for its computation and the uniqueness of reduced row-echelon forms.
    • 2.6: Linear Transformations
      This page covers essential concepts of linear algebra related to matrix transformations and linear transformations. It outlines the definitions and properties, illustrating how linear transformations can be represented as matrix transformations. Key examples include reflections and rotations in \(\mathbb{R}^2\), where the matrix representations are derived.
    • 2.7: LU-Factorization
      This page covers the LU factorization of matrices, where a matrix \(A\) can be expressed as \(A = LU\) with \(L\) lower triangular and \(U\) upper triangular. It describes how Gaussian elimination produces \(L\) and \(U\), and the role of permutation matrices in handling the row interchanges needed for the factorization. The uniqueness of \(L\) and \(U\) is confirmed when \(A\) has full rank, and techniques for transforming matrices into row-echelon form are detailed. (A short numerical check of such a factorization appears after this list.)
    • 2.8: An Application to Input-Output Economic Models
      This page covers Wassily Leontief's Nobel Prize-winning work on mathematical models analyzing economic systems, using an input-output matrix to illustrate industry interactions in a primitive society. It explains the equilibrium condition where expenditures equal revenues and presents a model for equilibrium price structures.
    • 2.9: An Application to Markov Chains
      This page covers Markov chains, emphasizing that transitions between states depend only on the current state. It explores transition probabilities, state vectors, and steady-state distributions through various examples, including weather patterns and animal behaviors. Using transition matrices, the text details how to compute future states, showing convergence to long-term probabilities. (A short numerical sketch of this convergence appears after this list.)
    • 2.E: Supplementary Exercises for Chapter 2
      This page discusses supplementary exercises and concepts in linear algebra, focusing on solving matrix equations and understanding linear transformations. It covers properties of matrices including invertibility conditions for matrices \(P\) and \(Q\), their sum \(P + Q\), and the implications thereof. Key topics include scalar matrices, nilpotent and idempotent matrices, and matrix ranks and inverses, promoting a deeper comprehension of matrix theory and its applications.
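
    As referenced in the Section 2.7 summary above, here is a short check of an LU-style factorization using SciPy (a sketch with an illustrative matrix; SciPy's lu returns the permuted form \(A = PLU\), where the permutation matrix \(P\) plays the role of the text's row interchanges).

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[ 2.0,  4.0, -2.0],
              [ 4.0,  9.0, -3.0],
              [-2.0, -3.0,  7.0]])

# SciPy returns P, L, U with A = P @ L @ U: P is a permutation matrix
# recording row interchanges, L is lower triangular, U is upper triangular.
P, L, U = lu(A)
print(np.allclose(A, P @ L @ U))   # True
print(np.allclose(L, np.tril(L)))  # True: L is lower triangular
print(np.allclose(U, np.triu(U)))  # True: U is upper triangular
```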
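
    And as referenced in the Section 2.9 summary, here is a NumPy sketch of convergence to a steady state (the transition matrix is made up; the column-stochastic convention is used, so each step has the form \(\mathbf{s}_{k+1} = P \mathbf{s}_k\)).

```python
import numpy as np

# Column-stochastic transition matrix: entry (i, j) is the probability of
# moving to state i from state j, so each column sums to 1.
P = np.array([[0.9, 0.5],
              [0.1, 0.5]])

s = np.array([1.0, 0.0])          # start in state 0 with certainty
for _ in range(50):
    s = P @ s                     # one step: s_{k+1} = P s_k
print(s)                          # ~[0.833 0.167]

# The limit is the steady-state vector, which satisfies P s = s.
print(np.allclose(P @ s, s))      # True
```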

    Thumbnail: Matrix multiplication. (CC BY-SA 4.0 International; Svjo via Wikipedia)


    This page titled 2: Matrix Algebra is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by W. Keith Nicholson via source content that was edited to the style and standards of the LibreTexts platform.