
2.1: Matrix Arithmetic

    Outcomes
    1. Perform the matrix operations of matrix addition, scalar multiplication, transposition, and matrix multiplication. Identify when these operations are not defined. Represent these operations in terms of the entries of a matrix.
    2. Prove algebraic properties for matrix addition, scalar multiplication, transposition, and matrix multiplication. Apply these properties to manipulate an algebraic expression involving matrices.
    3. Compute the inverse of a matrix using row operations, and prove identities involving matrix inverses.
    4. Solve a linear system using matrix algebra.
    5. Use multiplication by an elementary matrix to apply row operations.
    6. Write a matrix as a product of elementary matrices.

    You have now solved systems of equations by writing them in terms of an augmented matrix and then doing row operations on this augmented matrix. It turns out that matrices are important not only for systems of equations but also in many applications.

    Recall that a matrix is a rectangular array of numbers; the plural of matrix is matrices. For example, here is a matrix.

    \[\left[ \begin{array}{rrrr} 1 & 2 & 3 & 4 \\ 5 & 2 & 8 & 7 \\ 6 & -9 & 1 & 2 \end{array} \right] \label{matrix}\]

    Recall that the size or dimension of a matrix is defined as \(m\times n\) where \(m\) is the number of rows and \(n\) is the number of columns. The above matrix is a \(3\times 4\) matrix because there are three rows and four columns. You can remember that the columns are like columns in a Greek temple: they stand upright, while the rows lie flat like rows made by a tractor in a plowed field.

    When specifying the size of a matrix, you always list the number of rows before the number of columns. You might remember that you always list the rows before the columns by using the phrase Rowman Catholic.
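
    As an informal aside, the rows-before-columns convention is also the one used by numerical software. The short NumPy sketch below (an illustration, not part of the text's development) reports the size of the matrix above as a (rows, columns) pair.

```python
import numpy as np

# The 3 x 4 matrix displayed above.
A = np.array([[1,  2, 3, 4],
              [5,  2, 8, 7],
              [6, -9, 1, 2]])

print(A.shape)  # (3, 4): three rows listed first, then four columns
```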

    Consider the following definition.

    Definition \(\PageIndex{1}\): Square Matrix

    A matrix \(A\) which has size \(n \times n\) is called a square matrix. In other words, \(A\) is a square matrix if it has the same number of rows and columns.

    There is some notation specific to matrices which we now introduce. We denote the columns of a matrix \(A\) by \(A_{j}\) as follows

    \[A = \left[ \begin{array}{rrrr} A_{1} & A_{2} & \cdots & A_{n} \end{array} \right]\nonumber \] Therefore, \(A_{j}\) is the \(j^{th}\) column of \(A\), when counted from left to right.

    The individual elements of the matrix are called entries or components of \(A\). Elements of the matrix are identified according to their position. The \(\mathbf{\left( i, j \right)}\)-entry of a matrix is the entry in the \(i^{th}\) row and \(j^{th}\) column. For example, in the matrix \(\eqref{matrix}\) above, \(8\) is in position \(\left(2,3 \right)\) (and is called the \(\left(2,3 \right)\)-entry) because it is in the second row and the third column.

    In order to remember which matrix we are speaking of, we will denote the entry in the \(i^{th}\) row and the \(j^{th}\) column of matrix \(A\) by \(a_{ij}\). Then, we can write \(A\) in terms of its entries, as \(A= \left[ a_{ij} \right]\). Using this notation on the matrix in \(\eqref{matrix}\), \(a_{23}=8, a_{32}=-9, a_{12}=2,\) etc.
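
    For readers who like to experiment, this indexing convention can be checked numerically. Note that NumPy counts rows and columns starting from \(0\), so the text's \(\left(2,3\right)\)-entry corresponds to index \([1, 2]\) in the sketch below (again, an illustration rather than part of the text).

```python
import numpy as np

A = np.array([[1,  2, 3, 4],
              [5,  2, 8, 7],
              [6, -9, 1, 2]])

# NumPy indexes from 0, so the (i, j)-entry of the text is A[i-1, j-1] here.
print(A[1, 2])  # 8  = a_23, second row, third column
print(A[2, 1])  # -9 = a_32
print(A[0, 1])  # 2  = a_12
```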

    There are various operations which are done on matrices of appropriate sizes. Matrices can be added to and subtracted from other matrices, multiplied by a scalar, and multiplied by other matrices. We will never divide a matrix by another matrix, but we will see later how matrix inverses play a similar role.

    In doing arithmetic with matrices, we often define the action by what happens in terms of the entries (or components) of the matrices. Before looking at these operations in depth, consider a few general definitions.

    Definition \(\PageIndex{2}\): The Zero Matrix

    The \(m\times n\) zero matrix is the \(m\times n\) matrix having every entry equal to zero. It is denoted by \(0.\)

    One possible zero matrix is shown in the following example.

    Example \(\PageIndex{1}\): The Zero Matrix

    The \(2\times 3\) zero matrix is \(0= \left[ \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right]\).

    Note there is a \(2\times 3\) zero matrix, a \(3\times 4\) zero matrix, etc. In fact there is a zero matrix for every size!

    Definition \(\PageIndex{3}\): Equality of Matrices

    Let \(A\) and \(B\) be two \(m\times n\) matrices. Then \(A=B\) means that for \(A=[a_{ij}]\) and \(B=[b_{ij}]\), \(a_{ij}=b_{ij}\) for all \(1\leq i\leq m\) and \(1\leq j\leq n\).

    In other words, two matrices are equal exactly when they are the same size and the corresponding entries are identical. Thus \[\left[ \begin{array}{rr} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{array} \right] \neq \left[ \begin{array}{rr} 0 & 0 \\ 0 & 0 \end{array} \right]\nonumber \] because they are different sizes. Also, \[\left[ \begin{array}{rr} 0 & 1 \\ 3 & 2 \end{array} \right] \neq \left[ \begin{array}{rr} 1 & 0 \\ 2 & 3 \end{array} \right]\nonumber \] because, although they are the same size, their corresponding entries are not identical.
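
    As an informal check, a routine such as NumPy's `np.array_equal` tests exactly this notion of equality: it returns `True` only when the two arrays have the same size and identical corresponding entries (a sketch, not part of the formal development).

```python
import numpy as np

# Different sizes: a 3 x 2 zero matrix is not equal to a 2 x 2 zero matrix.
print(np.array_equal(np.zeros((3, 2)), np.zeros((2, 2))))  # False

A = np.array([[0, 1],
              [3, 2]])
B = np.array([[1, 0],
              [2, 3]])

# Same size, but corresponding entries differ.
print(np.array_equal(A, B))  # False
print(np.array_equal(A, A))  # True
```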

    In the following section, we explore addition of matrices.

    Addition of Matrices

    When adding matrices, all matrices in the sum must have the same size. For example, \[\left[ \begin{array}{rr} 1 & 2 \\ 3 & 4 \\ 5 & 2 \end{array} \right]\nonumber \] and \[\left[ \begin{array}{rrr} -1 & 4 & 8\\ 2 & 8 & 5 \end{array} \right]\nonumber \] cannot be added, as one has size \(3 \times 2\) while the other has size \(2 \times 3\).

    However, the addition \[\left[ \begin{array}{rrr} 4 & 6 & 3\\ 5 & 0 & 4\\ 11 & -2 & 3 \end{array} \right] + \left[ \begin{array}{rrr} 0 & 5 & 0 \\ 4 & -4 & 14 \\ 1 & 2 & 6 \end{array} \right]\nonumber \] is possible.

    The formal definition is as follows.

    Definition \(\PageIndex{4}\): Addition of Matrices

    Let \(A=\left[ a_{ij}\right]\) and \(B=\left[ b_{ij}\right]\) be two \(m\times n\) matrices. Then \(A+B=C\) where \(C\) is the \(m \times n\) matrix \(C=\left[ c_{ij}\right]\) defined by \[c_{ij}=a_{ij}+b_{ij}\nonumber \]

    This definition tells us that when adding matrices, we simply add corresponding entries of the matrices. This is demonstrated in the next example.

    Example \(\PageIndex{2}\): Addition of Matrices of Same Size

    Add the following matrices, if possible. \[A = \left[ \begin{array}{ccc} 1 & 2 & 3 \\ 1 & 0 & 4 \end{array} \right], B = \left[ \begin{array}{rrr} 5 & 2 & 3 \\ -6 & 2 & 1 \end{array} \right]\nonumber \]

    Solution

    Notice that both \(A\) and \(B\) are of size \(2 \times 3\). Since \(A\) and \(B\) are of the same size, the addition is possible. Using Definition \(\PageIndex{4}\), the addition is done as follows. \[A + B = \left[ \begin{array}{rrr} 1 & 2 & 3 \\ 1 & 0 & 4 \end{array} \right] + \left[ \begin{array}{rrr} 5 & 2 & 3 \\ -6 & 2 & 1 \end{array} \right] = \left[ \begin{array}{rrr} 1+5 & 2+2 & 3+3 \\ 1+(-6) & 0+2 & 4+1 \end{array} \right] = \left[ \begin{array}{rrr} 6 & 4 & 6 \\ -5 & 2 & 5 \end{array} \right]\nonumber \]
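
    The same computation can be reproduced numerically; in NumPy, the `+` operator on two arrays of the same size performs exactly this entrywise addition (an informal check of the example above).

```python
import numpy as np

A = np.array([[1, 2, 3],
              [1, 0, 4]])
B = np.array([[ 5, 2, 3],
              [-6, 2, 1]])

# Entrywise addition, as in the definition above.
print(A + B)
# [[ 6  4  6]
#  [-5  2  5]]
```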

    Addition of matrices obeys many of the same properties as ordinary addition of numbers. Note that when we write, for example, \(A+B\), we assume that both matrices are of the same size so that the operation is indeed possible.

    Proposition \(\PageIndex{1}\): Properties of Matrix Addition

    Let \(A,B\) and \(C\) be matrices of the same size. Then, the following properties hold.

    • Commutative Law of Addition \[A+B=B+A \label{mat1}\]
    • Associative Law of Addition \[\left( A+B\right) +C=A+\left( B+C\right) \label{mat2}\]
    • Existence of an Additive Identity \[\begin{array}{c} \mbox{There exists a zero matrix 0 such that}\\ A+0=A \label{mat3} \end{array}\]
    • Existence of an Additive Inverse \[\begin{array}{c} \mbox{There exists a matrix $-A$ such that} \\ A+\left( -A\right) =0 \label{mat4} \end{array}\]
    Proof

    Consider the Commutative Law of Addition given in \(\eqref{mat1}\). Let \(A,B,C,\) and \(D\) be matrices such that \(A+B=C\) and \(B+A=D.\) We want to show that \(D=C\). To do so, we use the definition of matrix addition given in Definition \(\PageIndex{4}\). Now, \[c_{ij}=a_{ij}+b_{ij}=b_{ij}+a_{ij}=d_{ij}\nonumber \] Therefore, \(C=D\) because the \(ij^{th}\) entries are the same for all \(i\) and \(j\). Note that the conclusion follows from the commutative law of addition of numbers, which says that if \(a\) and \(b\) are two numbers, then \(a+b = b+a\). The proofs of the other results are similar and are left as an exercise.

    We call the zero matrix in \(\eqref{mat3}\) the additive identity. Similarly, we call the matrix \(-A\) in \(\eqref{mat4}\) the additive inverse. Here, \(-A\) is defined to equal \(\left( -1\right) A = [-a_{ij}]\); in other words, every entry of \(A\) is multiplied by \(-1\).
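
    These properties can also be spot-checked numerically. The sketch below verifies, for one particular pair of matrices, that \(A+B=B+A\), that \(-A=\left(-1\right)A\), and that \(A+\left(-A\right)\) is the zero matrix; of course, checking a single example is an illustration, not a proof.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [1, 0, 4]])
B = np.array([[ 5, 2, 3],
              [-6, 2, 1]])

print(np.array_equal(A + B, B + A))                 # commutativity: True
print(np.array_equal(-A, (-1) * A))                 # -A equals (-1)A: True
print(np.array_equal(A + (-A), np.zeros((2, 3))))   # additive inverse: True
```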

    In the next section we will study scalar multiplication in more depth to understand what is meant by \(\left( -1\right) A.\)

    Scalar Multiplication of Matrices

    Recall that we use the word scalar when referring to numbers. Therefore, scalar multiplication of a matrix is the multiplication of a matrix by a number. To illustrate this concept, consider the following example in which a matrix is multiplied by the scalar \(3\). \[3\left[ \begin{array}{rrrr} 1 & 2 & 3 & 4 \\ 5 & 2 & 8 & 7 \\ 6 & -9 & 1 & 2 \end{array} \right] = \left[ \begin{array}{rrrr} 3 & 6 & 9 & 12 \\ 15 & 6 & 24 & 21 \\ 18 & -27 & 3 & 6 \end{array} \right]\nonumber \]

    The new matrix is obtained by multiplying every entry of the original matrix by the given scalar.

    The formal definition of scalar multiplication is as follows.

    Definition \(\PageIndex{5}\): Scalar Multiplication of Matrices

    If \(A=\left[ a_{ij}\right]\) and \(k\) is a scalar, then \(kA=\left[ ka_{ij}\right] .\)

    Consider the following example.

    Example \(\PageIndex{3}\): Effect of Multiplication by a Scalar

    Find the result of multiplying the following matrix \(A\) by \(7\). \[A=\left[ \begin{array}{rr} 2 & 0 \\ 1 & -4 \end{array} \right]\nonumber \]

    Solution

    By Definition \(\PageIndex{5}\), we multiply each element of \(A\) by \(7\). Therefore, \[7A = 7\left[ \begin{array}{rr} 2 & 0 \\ 1 & -4 \end{array} \right] = \left[ \begin{array}{rr} 7(2) & 7(0) \\ 7(1) & 7(-4) \end{array} \right] = \left[ \begin{array}{rr} 14 & 0 \\ 7 & -28 \end{array} \right]\nonumber \]
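
    Numerically, scalar multiplication is just as direct: multiplying a NumPy array by a number scales every entry, matching the computation above (an informal illustration, not part of the text).

```python
import numpy as np

A = np.array([[2,  0],
              [1, -4]])

# Every entry of A is multiplied by the scalar 7.
print(7 * A)
# [[ 14   0]
#  [  7 -28]]
```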

    As with addition of matrices, scalar multiplication satisfies several useful properties.

    Proposition \(\PageIndex{2}\): Properties of Scalar Multiplication

    Let \(A, B\) be matrices of the same size, and \(k, p\) be scalars. Then, the following properties hold.

    • Distributive Law over Matrix Addition \[k \left( A+B\right) =k A+ kB\nonumber \]
    • Distributive Law over Scalar Addition \[\left( k +p \right) A= k A+p A\nonumber \]
    • Associative Law for Scalar Multiplication \[k \left( p A\right) = \left( k p \right) A\nonumber \]
    • Rule for Multiplication by \(1\) \[1A=A\nonumber \]
    Proof

    The proof of this proposition is similar to the proof of Proposition \(\PageIndex{1}\) and is left as an exercise to the reader.
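
    Although the proof is left as an exercise, the properties are easy to spot-check on a particular example; the NumPy sketch below does so for one choice of matrices and scalars (a sanity check, not a substitute for the proof).

```python
import numpy as np

A = np.array([[1, 2, 3],
              [1, 0, 4]])
B = np.array([[ 5, 2, 3],
              [-6, 2, 1]])
k, p = 3, -2

print(np.array_equal(k * (A + B), k * A + k * B))  # distributes over matrix addition
print(np.array_equal((k + p) * A, k * A + p * A))  # distributes over scalar addition
print(np.array_equal(k * (p * A), (k * p) * A))    # associative law
print(np.array_equal(1 * A, A))                    # multiplication by 1
```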


    This page titled 2.1: Matrix Arithmetic is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Ken Kuttler (Lyryx) via source content that was edited to the style and standards of the LibreTexts platform.