3.2: Matrices and linear systems

    Matrices and Vectors

    Before we can start talking about linear systems of ODEs, we will need to talk about matrices, so let us review these briefly. A matrix is an \(m \times n \) array of numbers (\(m\) rows and \(n\) columns). For example, we denote a \( 3 \times 5\) matrix as follows

    \[ A = \begin {bmatrix} a_{11} & a_{12} & a_{13} & a_{14} & a_{15} \\ a_{21} & a_{22} & a_{23} & a_{24} & a_{25} \\ a_{31} & a_{32} & a_{33} & a_{34} & a_{35} \end {bmatrix} \nonumber \]

    The numbers \(a_{ij}\) are called elements or entries.

    By a vector we will usually mean a column vector, that is an \( m \times 1 \) matrix. If we mean a row vector we will explicitly say so (a row vector is a \( 1 \times n\) matrix). We will usually denote matrices by upper case letters and vectors by lower case letters with an arrow such as \( \vec {\text {x}}\) or \( \vec {b} \). By \( \vec {0} \) we will mean the vector of all zeros.

    It is easy to define some operations on matrices. Note that we will want \( 1 \times 1 \) matrices to really act like numbers, so our operations will have to be compatible with this viewpoint.

    First, we can multiply by a scalar (a number). This means just multiplying each entry by the same number. For example,

    \[ 2 {\begin {bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end {bmatrix}} = \begin {bmatrix} 2 & 4 & 6 \\ 8 & 10 & 12 \end {bmatrix} \nonumber \]

    Matrix addition is also easy. We add matrices element by element. For example,

    \[ \begin {bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end {bmatrix} + \begin {bmatrix} 1 & 1 & -1 \\ 0 & 2 & 4 \end {bmatrix} = \begin {bmatrix} 2 & 3 & 2 \\ 4 & 7 & 10 \end {bmatrix} \nonumber \]

    If the sizes do not match, then addition is not defined.

    If we denote by \(0\) the matrix with all zero entries, by \( c, d \) scalars, and by \( A, B, C\) matrices, we have the following familiar rules.

    \[\begin{align}\begin{aligned} A + 0 &= A = 0 + A \\ A + B &= B + A \\ (A + B) + C &= A + ( B + C) \\ c( A + B) &= cA + cB \\ ( c + d) A &= cA + dA \end{aligned}\end{align} \nonumber \]

    Another useful operation for matrices is the so-called transpose. This operation just swaps rows and columns of a matrix. The transpose of \( A\) is denoted by \(A^T\). Example:

    \[ { \begin {bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end {bmatrix}}^T = \begin {bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end {bmatrix} \nonumber \]
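
    These operations are easy to check numerically. Here is a minimal sketch, assuming NumPy is available (any linear algebra library would do), reproducing the scalar multiple, the sum, and the transpose computed above.

    ```python
    # Entrywise operations on matrices -- a sketch assuming NumPy.
    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])
    B = np.array([[1, 1, -1],
                  [0, 2, 4]])

    print(2 * A)    # scalar multiplication: [[2 4 6], [8 10 12]]
    print(A + B)    # element-by-element addition: [[2 3 2], [4 7 10]]
    print(A.T)      # transpose: [[1 4], [2 5], [3 6]]
    ```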

    Matrix Multiplication

    Let us now define matrix multiplication. First we define the so-called dot product (or inner product) of two vectors. Usually this will be a row vector multiplied with a column vector of the same size. For the dot product we multiply each pair of entries from the first and the second vector and we sum these products. The result is a single number. For example,

    \[ \begin {bmatrix} a_1 & a_2 & a_3 \end {bmatrix} \cdot \begin {bmatrix} b_1 \\ b_2 \\ b_3 \end {bmatrix} = a_1b_1 + a_2b_2 + a_3b_3 \nonumber \]

    And similarly for larger (or smaller) vectors.

    Armed with the dot product we can define the product of matrices. First let us denote by \( \text {row}_i (A) \) the \( i^{th}\) row of \(A\) and by \( \text {column}_j (A) \) the \(j^{th} \) column of \(A\). For an \(m \times n \) matrix \(A\) and an \( n \times p \) matrix \(B\) we can define the product \(AB\). We let \(AB\) be an \(m \times p \) matrix whose \( ij^{th} \) entry is

    \[ \text {row}_i (A) \cdot \text {column}_j (B) \nonumber \]

    Do note how the sizes match up. Example:

    \[\begin{gathered} \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} \begin{bmatrix} 1 & 0 & -1 \\ 1 & 1 & 1 \\ 1 & 0 & 0 \end{bmatrix} = \\ = \begin{bmatrix} 1\cdot 1 + 2\cdot 1 + 3 \cdot 1 & & 1\cdot 0 + 2\cdot 1 + 3 \cdot 0 & & 1\cdot (-1) + 2\cdot 1 + 3 \cdot 0 \\ 4\cdot 1 + 5\cdot 1 + 6 \cdot 1 & & 4\cdot 0 + 5\cdot 1 + 6 \cdot 0 & & 4\cdot (-1) + 5\cdot 1 + 6 \cdot 0 \end{bmatrix} = \begin{bmatrix} 6 & 2 & 1 \\ 15 & 5 & 1 \end{bmatrix}\end{gathered} \nonumber \]
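
    The product above can be verified with a short sketch (NumPy assumed); the `@` operator performs matrix multiplication, and slicing exposes the row-times-column structure of each entry.

    ```python
    # Matrix multiplication: entry (i, j) of A @ B is row_i(A) . column_j(B).
    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])
    B = np.array([[1, 0, -1],
                  [1, 1,  1],
                  [1, 0,  0]])

    print(A @ B)              # [[ 6  2  1], [15  5  1]], as computed above
    print(A[0, :] @ B[:, 0])  # dot product of row 1 of A with column 1 of B: 6
    ```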

    For multiplication we want an analog of a 1. This analog is the so-called identity matrix. The identity matrix is a square matrix with 1s on the main diagonal and zeros everywhere else. It is usually denoted by \(I\). For each size we have a different identity matrix and so sometimes we may denote the size as a subscript. For example, \(I_3\) is the \( 3 \times 3\) identity matrix

    \[ I = I_3 = \begin {bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end {bmatrix} \nonumber \]

    We have the following rules for matrix multiplication. Suppose that \( A, B, C\) are matrices of the correct sizes so that the following make sense. Let \( \alpha\) denote a scalar (number).

    \[\begin{align}\begin{aligned} A (BC) &= (AB) C\\ A (B + C) &= AB + AC \\ (B + C) A &= BA + CA \\ \alpha (AB) &= ( \alpha A )B = A ( \alpha B) \\ IA &= A = AI \end{aligned}\end{align} \nonumber \]
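
    As a quick check of the last rule, here is a sketch assuming NumPy (`np.eye(n)` builds \(I_n\)); note how the identity's size must match on each side.

    ```python
    # I A = A = A I, with identities of the appropriate sizes.
    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])             # a 2 x 3 matrix

    print(np.allclose(np.eye(2) @ A, A))  # I_2 A = A
    print(np.allclose(A @ np.eye(3), A))  # A I_3 = A
    ```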

    A few warnings are in order.

    1. \( AB \ne BA \) in general (it may be true by fluke sometimes). That is, matrices do not commute. For example take \( A = \begin {bmatrix} 1 & 1 \\ 1 & 1 \end {bmatrix} \) and \( B = \begin {bmatrix} 1 & 0 \\ 0 & 2 \end {bmatrix} \).
    2. \( AB = AC \) does not necessarily imply \(B = C\), even if \(A\) is not 0.
    3. \(AB = 0\) does not necessarily mean that \(A = 0\) or \(B = 0\). For example take \(A = B = \begin {bmatrix} 0 & 1 \\ 0 & 0 \end {bmatrix} \).
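
    Both the noncommutativity example and the zero-product example are quick to verify numerically (NumPy assumed):

    ```python
    # Checking warnings 1 and 3 with the matrices suggested above.
    import numpy as np

    A = np.array([[1, 1], [1, 1]])
    B = np.array([[1, 0], [0, 2]])
    print(A @ B)    # [[1 2], [1 2]]
    print(B @ A)    # [[1 1], [2 2]] -- different, so AB != BA

    N = np.array([[0, 1], [0, 0]])
    print(N @ N)    # [[0 0], [0 0]] -- a zero product with nonzero factors
    ```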

    For the last two items to hold we would need to “divide” by a matrix. This is where the matrix inverse comes in. Suppose that \(A\) and \(B\) are \( n \times n \) matrices such that

    \[ AB = I = BA \nonumber \]

    Then we call \(B\) the inverse of \(A\) and we denote \(B\) by \(A^{-1}\). If the inverse of \(A\) exists, then we call \(A\) invertible. If \(A\) is not invertible we sometimes say \(A\) is singular.

    If \(A\) is invertible, then \(AB = AC\) does imply that \(B = C\) (in particular the inverse of \(A\) is unique). We just multiply both sides by \(A^{-1} \) to get \(A^{-1} AB = A^{-1} AC\) or \(IB = IC\) or \(B = C\). It is also not hard to see that \( {(A^{-1})}^{-1} = A\).

    Determinant

    We can now talk about determinants of square matrices. We define the determinant of a \( 1 \times 1\) matrix as the value of its only entry. For a \( 2 \times 2\) matrix we define

    \[ \text {det} \left ( \begin {bmatrix} a & b \\ c & d \end {bmatrix} \right ) \overset {\text {def}}{=} ad - bc \nonumber \]

    Before trying to compute the determinant for larger matrices, let us first note the meaning of the determinant. Consider an \(n \times n\) matrix as a mapping of the \(n\)-dimensional Euclidean space \( \mathbb {R}^n\) to \( \mathbb {R}^n\). In particular, a \( 2 \times 2\) matrix \(A\) is a mapping of the plane to itself, where \( \vec {x} \) gets sent to \( A \vec {x} \). Then the determinant of \(A\) is the factor by which the area of objects gets changed. If we take the unit square (square of side 1) in the plane, then \(A\) takes the square to a parallelogram of area \( \left| \text {det} (A) \right| \). The sign of \( \text {det} (A) \) records orientation: it is negative if the axes get flipped. For example, let

    \[ A = \begin {bmatrix} 1 & 1 \\ -1 & 1 \end {bmatrix} \nonumber \]

    Then \( \text {det} (A) = 1 + 1 = 2 \). Let us see where the square with vertices \( (0, 0), (1, 0), (0, 1)\) and \( (1, 1) \) gets sent. Clearly \( (0, 0 ) \) gets sent to \( (0, 0)\).

    \[ \begin {bmatrix} 1 & 1 \\ -1 & 1 \end {bmatrix} \begin {bmatrix} 1 \\ 0 \end {bmatrix} = \begin {bmatrix} 1 \\ -1 \end {bmatrix}, \quad \begin {bmatrix} 1 & 1 \\ -1 & 1 \end {bmatrix} \begin {bmatrix} 0 \\ 1 \end {bmatrix} = \begin {bmatrix} 1 \\ 1 \end {bmatrix},\quad \begin {bmatrix} 1 & 1 \\ -1 & 1 \end {bmatrix} \begin {bmatrix} 1 \\ 1 \end {bmatrix} = \begin {bmatrix} 2 \\ 0 \end {bmatrix} \nonumber \]

    So the image of the square is another square. The image square has a side of length \( \sqrt {2} \) and is therefore of area 2.
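
    A short computation (NumPy assumed) confirms both the image vertices and the area factor:

    ```python
    # Map the unit square's vertices by A and compare areas via det(A).
    import numpy as np

    A = np.array([[1, 1],
                  [-1, 1]])
    square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]]).T  # columns = vertices

    print(A @ square)        # image vertices (0,0), (1,-1), (2,0), (1,1)
    print(np.linalg.det(A))  # 2.0 -- the unit square's area is scaled by 2
    ```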

    If you think back to high school geometry, you may have seen a formula for computing the area of a parallelogram with vertices \( (0, 0), (a, c), (b, d)\) and \( (a + b, c + d ) \). The area is precisely

    \[ \left| \text {det} \left ( \begin {bmatrix} a & b \\ c & d \end {bmatrix} \right ) \right| \nonumber \]

    The vertical lines above mean absolute value. The matrix \( \begin {bmatrix} a & b \\ c & d \end {bmatrix} \) carries the unit square to the given parallelogram.

    Now we can define the determinant for larger matrices. We define \( A_{ij} \) as the matrix \(A\) with the \( i^{th}\) row and the \(j^{th} \) column deleted. To compute the determinant of a matrix, pick one row, say the \(i^{th} \) row, and compute

    \[ \text {det} (A) = \sum _ {j=1}^n (-1)^{i+j} a_{ij} \text {det} (A_{ij}) \nonumber \]

    For the first row we get

    \[ \text {det} (A) = a_{11} \text {det} (A_{11}) - a_{12} \text {det} (A_{12}) + a_{13} \text {det} (A_{13}) - \cdots \begin {cases} +a_{1n} \text {det} (A_{1n}) & \text {if } n \text { is odd} \\ -a_{1n} \text {det} (A_{1n}) & \text {if } n \text { is even} \end {cases} \nonumber \]

    We alternately add and subtract the determinants of the submatrices \(A_{ij}\) for a fixed \(i\) and all \(j\). For a \(3 \times 3\) matrix, picking the first row, we would get \( \text {det} (A) = a_{11} \text {det} (A_{11}) - a_{12} \text {det} (A_{12}) + a_{13} \text {det} (A_{13})\). For example,

    \[\begin{align}\begin{aligned} \text {det} \left ( \begin {bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end {bmatrix} \right ) &= 1 \cdot \text {det} \left ( \begin {bmatrix} 5 & 6 \\ 8 & 9 \end {bmatrix} \right ) - 2 \cdot \text {det} \left ( \begin {bmatrix} 4 & 6 \\ 7 & 9 \end {bmatrix} \right ) + 3 \cdot \text {det} \left ( \begin {bmatrix} 4 & 5 \\ 7 & 8 \end {bmatrix} \right ) \\ &= 1(5 \cdot 9 - 6 \cdot 8) - 2 ( 4 \cdot 9 - 6 \cdot 7) + 3 ( 4 \cdot 8 - 5 \cdot 7 ) = 0 \end{aligned}\end{align} \nonumber \]
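
    This expansion translates directly into a short recursive function. Below is a sketch for illustration (NumPy assumed); in practice `np.linalg.det` is far more efficient.

    ```python
    # Determinant via cofactor expansion along the first row.
    import numpy as np

    def det(A):
        n = A.shape[0]
        if n == 1:
            return A[0, 0]
        total = 0
        for j in range(n):
            # A_{1j}: delete row 1 (index 0) and column j+1 (index j)
            sub = np.delete(np.delete(A, 0, axis=0), j, axis=1)
            total += (-1) ** j * A[0, j] * det(sub)  # sign (-1)^(1+(j+1)) = (-1)^j
        return total

    M = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
    print(det(M))            # 0, matching the expansion above
    print(np.linalg.det(M))  # approximately 0 (floating-point arithmetic)
    ```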

    The numbers \( (-1)^{i+j} \text {det} (A_{ij}) \) are called cofactors of the matrix and this way of computing the determinant is called the cofactor expansion. It is also possible to compute the determinant by expanding along columns (picking a column instead of a row above).

    Note that a common notation for the determinant is a pair of vertical lines:

    \[ \begin {vmatrix} a & b \\ c & d \end {vmatrix} = \text {det} \left ( \begin {bmatrix} a & b \\ c & d \end {bmatrix} \right ) \nonumber \]

    I personally find this notation confusing, as vertical lines usually mean a positive quantity, while determinants can be negative. I will not use this notation in this book.

    Think of the determinant as telling you the scaling of a mapping. If \(B\) doubles the sizes of geometric objects and \(A\) triples them, then \(AB\) (which applies \(B\) to an object and then \(A\)) should make sizes go up by a factor of \(6\). This is true in general: \[\det(AB) = \det(A)\det(B) . \nonumber \] This property is one of the most useful, and it is employed often to actually compute determinants. A particularly interesting consequence concerns the existence of inverses. Take \(A\) and \(B\) to be inverses of each other, that is, \(AB=I\). Then \[\det(A)\det(B) = \det(AB) = \det(I) = 1 . \nonumber \] Thus neither \(\det(A)\) nor \(\det(B)\) can be zero. Let us state this as a theorem, as it will be very important in the context of this course.

    Theorem \(\PageIndex{1}\)

    An \( n \times n\) matrix \(A\) is invertible if and only if \( \text {det} (A) \ne 0 \).
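
    Both the product rule and the theorem are easy to spot-check numerically (NumPy assumed):

    ```python
    # det(AB) = det(A)det(B), and a singular matrix has determinant 0.
    import numpy as np

    rng = np.random.default_rng(0)
    A, B = rng.random((3, 3)), rng.random((3, 3))
    print(np.isclose(np.linalg.det(A @ B),
                     np.linalg.det(A) * np.linalg.det(B)))  # True

    M = np.array([[1., 2, 3], [4, 5, 6], [7, 8, 9]])  # the example with det 0
    print(np.isclose(np.linalg.det(M), 0))            # True: M is not invertible
    ```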

    In fact, there is a formula for the inverse of a \(2 \times 2 \) matrix

    \[ {\begin {bmatrix} a & b \\ c & d \end {bmatrix}}^ {-1} = \frac {1}{ad - bc} \begin {bmatrix} d & -b \\ -c & a \end {bmatrix} \nonumber \]

    Notice the determinant of the matrix in the denominator of the fraction. The formula only works if the determinant is nonzero, otherwise we are dividing by zero.
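
    The formula agrees with a direct numerical inverse. A quick sketch (NumPy assumed), with sample values chosen for \(a, b, c, d\):

    ```python
    # The 2 x 2 inverse formula checked against np.linalg.inv.
    import numpy as np

    a, b, c, d = 1.0, 1.0, -1.0, 1.0             # ad - bc = 2, nonzero
    A = np.array([[a, b], [c, d]])
    A_inv = np.array([[d, -b], [-c, a]]) / (a * d - b * c)

    print(np.allclose(A_inv, np.linalg.inv(A)))  # True
    print(A @ A_inv)                             # the identity, up to rounding
    ```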

    Solving Linear Systems

    One application of matrices we will need is to solve systems of linear equations. This is best shown by example. Suppose that we have the following system of linear equations

    \[\begin{align}\begin{aligned} 2x_1 + 2x_2 + 2x_3 &= 2 \\ x_1 + x_2 + 3x_3 &= 5\\ x_1 + 4x_2 + x_3 &= 10\end{aligned}\end{align} \nonumber \]

    Without changing the solution, we could swap equations in this system, we could multiply any of the equations by a nonzero number, and we could add a multiple of one equation to another equation. It turns out these operations always suffice to find a solution.

    It is easier to write the system as a matrix equation. Note that the system can be written as

    \[ \begin {bmatrix} 2 & 2 & 2 \\ 1 & 1 & 3 \\ 1 & 4 & 1 \end {bmatrix} \begin {bmatrix} x_1 \\ x_2 \\ x_3 \end {bmatrix} = \begin {bmatrix} 2 \\ 5 \\ 10 \end {bmatrix} \nonumber \]

    To solve the system we put the coefficient matrix (the matrix on the left-hand side of the equation) together with the vector on the right-hand side and get the so-called augmented matrix

    \[ \left [ \begin {array}{ccc|c} 2 & 2 & 2 & 2 \\ 1 & 1 & 3 & 5 \\ 1 & 4 & 1&10 \end {array} \right ] \nonumber \]

    We apply the following three elementary operations.

    1. Swap two rows.
    2. Add a multiple of one row to another row.
    3. Multiply a row by a nonzero number.

    We will keep doing these operations until we get into a state where it is easy to read off the answer, or until we get into a contradiction indicating no solution, for example if we come up with an equation such as \( 0 = 1\).

    Let us work through the example. First multiply the first row by \( \frac {1}{2} \) to obtain

    \[ \left [ \begin {array}{ccc|c} 1 & 1 & 1 & 1 \\ 1 & 1 & 3 & 5 \\ 1 & 4 & 1&10 \end {array} \right ] \nonumber \]

    Now subtract the first row from the second and third row.

    \[ \left [ \begin {array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 0 & 2 & 4 \\ 0 & 3 & 0 & 9 \end {array} \right ] \nonumber \]

    Multiply the last row by \(\frac {1}{3} \) and the second row by \(\frac {1}{2} \).

    \[ \left [ \begin {array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 2 \\ 0 & 1 & 0 & 3 \end {array} \right ] \nonumber \]

    Swap rows 2 and 3.

    \[ \left [ \begin {array}{ccc|c} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 2 \end {array} \right ] \nonumber \]

    Subtract the last row from the first, then subtract the second row from the first.

    \[ \left [ \begin {array}{ccc|c} 1 & 0 & 0 & -4 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 2 \end {array} \right ] \nonumber \]

    If we think about what equations this augmented matrix represents, we see that \( x_1 = -4, x_2 = 3 \) and \( x_3 = 2 \). We try this solution in the original system and, voilà, it works!

    Exercise \(\PageIndex{1}\)

    Check that the solution above really solves the given equations.

    In matrix notation, the system we solved is \[A \vec{x} = \vec{b} , \nonumber \] where \(A\) is the matrix \(\left[ \begin{smallmatrix} 2 & 2 & 2 \\ 1 & 1 & 3 \\ 1 & 4 & 1 \end{smallmatrix} \right]\) and \(\vec{b}\) is the vector \(\left[ \begin{smallmatrix} 2 \\ 5 \\ 10 \end{smallmatrix} \right]\). The solution can also be computed via the inverse: \[\vec{x} = A^{-1} A \vec{x} = A^{-1} \vec{b} . \nonumber \]
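
    In code, a library solver does the elimination for you. A sketch assuming NumPy; note that `np.linalg.solve` performs elimination directly rather than forming \(A^{-1}\), which is both faster and numerically safer.

    ```python
    # Solve A x = b for the system worked out above.
    import numpy as np

    A = np.array([[2, 2, 2],
                  [1, 1, 3],
                  [1, 4, 1]])
    b = np.array([2, 5, 10])

    x = np.linalg.solve(A, b)
    print(x)                      # [-4.  3.  2.]
    print(np.allclose(A @ x, b))  # True -- the check asked for in the exercise
    ```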

    It is possible that the solution is not unique, or that no solution exists. It is easy to tell if a solution does not exist. If during the row reduction you come up with a row where all the entries except the last one are zero (the last entry in a row corresponds to the right-hand side of the equation), but the last entry is not zero, then the system is inconsistent and has no solution. For example, for a system of 3 equations and 3 unknowns, if you find a row such as \([\,0 \quad 0 \quad 0 ~\,|\,~ 1\,]\) in the augmented matrix, you know the system is inconsistent. That row corresponds to \(0=1\).

    You generally try to use row operations until the following conditions are satisfied. The first (from the left) nonzero entry in each row is called the leading entry.

    1. The leading entry in any row is strictly to the right of the leading entry of the row above.
    2. Any zero rows are below all the nonzero rows.
    3. All leading entries are \(1\).
    4. All the entries above and below a leading entry are zero.

    Such a matrix is said to be in reduced row echelon form. The variables corresponding to columns with no leading entries are said to be free variables. Free variables mean that we can pick those variables to be anything we want and then solve for the rest of the unknowns.

    Example \(\PageIndex{1}\)

    The following augmented matrix is in reduced row echelon form.

    \[ \left [ \begin {array}{ccc|c} 1 & 2 & 0 & 3 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end {array} \right ] \nonumber \]

    Suppose the variables are \( x_1, x_2\) and \(x_3\). Then \(x_2\) is the free variable, \( x_1 = 3 - 2x_2\), and \( x_3 = 1\).
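
    Computer algebra systems compute the reduced row echelon form directly. A sketch assuming SymPy is available; its `Matrix.rref` returns the reduced form together with the pivot columns, so the free variables can be read off.

    ```python
    # Reduced row echelon form of the augmented matrix above.
    from sympy import Matrix

    M = Matrix([[1, 2, 0, 3],
                [0, 0, 1, 1],
                [0, 0, 0, 0]])

    rref, pivots = M.rref()
    print(rref)    # unchanged: already in reduced row echelon form
    print(pivots)  # (0, 2) -- columns 1 and 3 lead, so x_2 is the free variable
    ```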

    On the other hand if during the row reduction process you come up with the matrix

    \[ \left [ \begin {array}{ccc|c} 1 & 2 & 13 & 3 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 3 \end {array} \right ] \nonumber \]

    there is no need to go further. The last row corresponds to the equation \( 0x_1 + 0x_2 + 0x_3 = 3 \), which is preposterous. Hence, no solution exists.

    Computing the Inverse

    If the coefficient matrix is square and there exists a unique solution \( \vec {x} \) to \( A \vec {x} = \vec {b} \) for any \( \vec {b} \), then \( A\) is invertible. In fact, by multiplying both sides by \( A^{-1} \) you can see that \( \vec {x} = A^{-1} \vec {b} \). So it is useful to compute the inverse if you want to solve the equation for many different right-hand sides \( \vec {b}\).

    The \( 2 \times 2 \) inverse can be given by a formula, but it is also not hard to compute inverses of larger matrices. While we will not have much occasion to compute inverses of matrices larger than \( 2 \times 2\) by hand, let us touch on how to do it. Finding the inverse of \(A\) is actually just solving a bunch of linear equations. If we can solve \( A \vec {x}_k = \vec {e}_k\), where \( \vec {e}_k \) is the vector with all zeros except a 1 at the \( k^{th} \) position, then the inverse is the matrix with the columns \( \vec {x}_k \) for \( k = 1, \dots , n \) (exercise: why?). Therefore, to find the inverse we can write a larger \(n \times 2n \) augmented matrix \( [ A \mid I ] \), where \(I\) is the identity. We then perform row reduction. The reduced row echelon form of \( [ A \mid I ] \) will be of the form \( [ I \mid A^{-1} ] \) if and only if \(A\) is invertible, and we can then just read off the inverse \( A^{-1}\).
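
    The \( [ A \mid I ] \) procedure is short enough to code directly. Below is a minimal Gauss-Jordan sketch (NumPy assumed, the helper `inverse` is ours), for illustration only: it picks the largest available pivot but has no special handling for singular matrices.

    ```python
    # Invert A by row-reducing the augmented matrix [A | I] to [I | A^{-1}].
    import numpy as np

    def inverse(A):
        n = A.shape[0]
        M = np.hstack([A.astype(float), np.eye(n)])  # build [A | I]
        for k in range(n):
            p = k + np.argmax(np.abs(M[k:, k]))      # row with largest pivot
            M[[k, p]] = M[[p, k]]                    # swap two rows
            M[k] /= M[k, k]                          # scale the pivot row to 1
            for i in range(n):
                if i != k:
                    M[i] -= M[i, k] * M[k]           # clear column k elsewhere
        return M[:, n:]                              # read off A^{-1}

    A = np.array([[2, 2, 2], [1, 1, 3], [1, 4, 1]])
    print(np.allclose(inverse(A) @ A, np.eye(3)))    # True
    ```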

