2.1: Matrix Arithmetic
Outcomes
- Perform the matrix operations of matrix addition, scalar multiplication, transposition and matrix multiplication. Identify when these operations are not defined. Represent these operations in terms of the entries of a matrix.
- Prove algebraic properties for matrix addition, scalar multiplication, transposition, and matrix multiplication. Apply these properties to manipulate an algebraic expression involving matrices.
- Compute the inverse of a matrix using row operations, and prove identities involving matrix inverses.
- Solve a linear system using matrix algebra.
- Use multiplication by an elementary matrix to apply row operations.
- Write a matrix as a product of elementary matrices.
You have now solved systems of equations by writing them in terms of an augmented matrix and then doing row operations on this augmented matrix. It turns out that matrices are important not only for systems of equations but also in many applications.
Recall that a matrix is a rectangular array of numbers; the plural of matrix is matrices. For example, here is a matrix.
\[\left[ \begin{array}{rrrr} 1 & 2 & 3 & 4 \\ 5 & 2 & 8 & 7 \\ 6 & -9 & 1 & 2 \end{array} \right] \label{matrix}\]
Recall that the size or dimension of a matrix is defined as \(m\times n\) where \(m\) is the number of rows and \(n\) is the number of columns. The above matrix is a \(3\times 4\) matrix because there are three rows and four columns. You can remember the columns are like columns in a Greek temple. They stand upright while the rows lay flat like rows made by a tractor in a plowed field.
When specifying the size of a matrix, you always list the number of rows before the number of columns. You might remember that the rows come before the columns by using the phrase Roman Catholic: Rows before Columns.
Consider the following definition.
There is some notation specific to matrices which we now introduce. We denote the columns of a matrix \(A\) by \(A_{j}\) as follows
\[A = \left[ \begin{array}{rrrr} A_{1} & A_{2} & \cdots & A_{n} \end{array} \right]\nonumber \] Therefore, \(A_{j}\) is the \(j^{th}\) column of \(A\) , when counted from left to right.
The individual elements of the matrix are called entries or components of \(A\) . Elements of the matrix are identified according to their position. The \(\mathbf{\left( i, j \right)}\) -entry of a matrix is the entry in the \(i^{th}\) row and \(j^{th}\) column. For example, in the matrix \(\eqref{matrix}\) above, \(8\) is in position \(\left(2,3 \right)\) (and is called the \(\left(2,3 \right)\) -entry) because it is in the second row and the third column.
In order to remember which matrix we are speaking of, we will denote the entry in the \(i^{th}\) row and the \(j^{th}\) column of matrix \(A\) by \(a_{ij}\) . Then, we can write \(A\) in terms of its entries, as \(A= \left[ a_{ij} \right]\) . Using this notation on the matrix in \(\eqref{matrix}\), \(a_{23}=8, a_{32}=-9, a_{12}=2,\) etc.
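This indexing convention is easy to try out in code. The following Python sketch (code is not part of this text and is included only as an illustration) stores the matrix from \(\eqref{matrix}\) as a list of rows and reads off entries using the 1-based \(\left( i,j\right)\) convention above; the helper name `entry` is hypothetical.

```python
# The 3 x 4 matrix from the text, stored as a list of rows.
A = [[1, 2, 3, 4],
     [5, 2, 8, 7],
     [6, -9, 1, 2]]

def entry(A, i, j):
    """Return the (i, j)-entry a_ij, using the 1-based indices of the text.
    Python lists are 0-based, so we subtract 1 from each index."""
    return A[i - 1][j - 1]

rows, cols = len(A), len(A[0])   # the size of A: rows x columns
print(rows, cols)                # 3 4
print(entry(A, 2, 3))            # a_23 = 8  (second row, third column)
print(entry(A, 3, 2))            # a_32 = -9
print(entry(A, 1, 2))            # a_12 = 2
```

Note that the size comes out as rows first, then columns, matching the \(3 \times 4\) description of \(\eqref{matrix}\).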
There are various operations which are done on matrices of appropriate sizes. Matrices can be added to and subtracted from other matrices, multiplied by a scalar, and multiplied by other matrices. We will never divide a matrix by another matrix, but we will see later how matrix inverses play a similar role.
In doing arithmetic with matrices, we often define the action by what happens in terms of the entries (or components) of the matrices. Before looking at these operations in depth, consider a few general definitions.
The zero matrix is the matrix in which every entry equals \(0\); it is denoted by \(0\). One possible zero matrix is the \(2\times 3\) zero matrix \[\left[ \begin{array}{rrr} 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right]\nonumber \]
Note there is a \(2\times 3\) zero matrix, a \(3\times 4\) zero matrix, etc. In fact there is a zero matrix for every size!
Two matrices \(A = \left[ a_{ij} \right]\) and \(B = \left[ b_{ij} \right]\) are equal when they have the same size and \(a_{ij} = b_{ij}\) for all \(i\) and \(j\). In other words, two matrices are equal exactly when they are the same size and the corresponding entries are identical. Thus \[\left[ \begin{array}{rr} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{array} \right] \neq \left[ \begin{array}{rr} 0 & 0 \\ 0 & 0 \end{array} \right]\nonumber \] because they are different sizes. Also, \[\left[ \begin{array}{rr} 0 & 1 \\ 3 & 2 \end{array} \right] \neq \left[ \begin{array}{rr} 1 & 0 \\ 2 & 3 \end{array} \right]\nonumber \] because, although they are the same size, their corresponding entries are not identical.
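The equality test above can be sketched in a few lines of Python (included only as an illustration; the helper name `matrices_equal` is hypothetical): first compare sizes, then compare corresponding entries.

```python
def matrices_equal(A, B):
    """Matrices are equal exactly when they have the same size and
    all corresponding entries are identical."""
    if len(A) != len(B) or len(A[0]) != len(B[0]):
        return False  # matrices of different sizes are never equal
    return all(A[i][j] == B[i][j]
               for i in range(len(A)) for j in range(len(A[0])))

# The two comparisons from the text:
print(matrices_equal([[0, 0], [0, 0], [0, 0]], [[0, 0], [0, 0]]))  # False (sizes differ)
print(matrices_equal([[0, 1], [3, 2]], [[1, 0], [2, 3]]))          # False (entries differ)
print(matrices_equal([[0, 1], [3, 2]], [[0, 1], [3, 2]]))          # True
```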
In the following section, we explore addition of matrices.
Addition of Matrices
When adding matrices, all matrices in the sum must have the same size. For example, \[\left[ \begin{array}{rr} 1 & 2 \\ 3 & 4 \\ 5 & 2 \end{array} \right]\nonumber \] and \[\left[ \begin{array}{rrr} -1 & 4 & 8\\ 2 & 8 & 5 \end{array} \right]\nonumber \] cannot be added, as one has size \(3 \times 2\) while the other has size \(2 \times 3\).
However, the addition \[\left[ \begin{array}{rrr} 4 & 6 & 3\\ 5 & 0 & 4\\ 11 & -2 & 3 \end{array} \right] + \left[ \begin{array}{rrr} 0 & 5 & 0 \\ 4 & -4 & 14 \\ 1 & 2 & 6 \end{array} \right]\nonumber \] is possible.
The formal definition is as follows: if \(A=\left[ a_{ij} \right]\) and \(B=\left[ b_{ij} \right]\) are two \(m \times n\) matrices, then their sum is the \(m \times n\) matrix given by \[A+B = \left[ a_{ij} + b_{ij} \right]\nonumber \]
This definition tells us that when adding matrices, we simply add corresponding entries of the matrices. This is demonstrated in the next example.
Example \(\PageIndex{2}\): Addition of Matrices of Same Size
Add the following matrices, if possible. \[A = \left[ \begin{array}{ccc} 1 & 2 & 3 \\ 1 & 0 & 4 \end{array} \right], B = \left[ \begin{array}{rrr} 5 & 2 & 3 \\ -6 & 2 & 1 \end{array} \right]\nonumber \]
Solution
Notice that both \(A\) and \(B\) are of size \(2 \times 3\). Since \(A\) and \(B\) are of the same size, the addition is possible. Using Definition \(\PageIndex{4}\), the addition is done as follows. \[A + B = \left[ \begin{array}{rrr} 1 & 2 & 3 \\ 1 & 0 & 4 \end{array} \right] + \left[ \begin{array}{rrr} 5 & 2 & 3 \\ -6 & 2 & 1 \end{array} \right] = \left[ \begin{array}{rrr} 1+5 & 2+2 & 3+3 \\ 1+\left( -6\right) & 0+2 & 4+1 \end{array} \right] = \left[ \begin{array}{rrr} 6 & 4 & 6 \\ -5 & 2 & 5 \end{array} \right]\nonumber \]
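The entrywise rule in this example translates directly into code. Here is a small Python sketch (an illustration only; the helper name `add` is hypothetical) that checks the sizes and then adds corresponding entries, reproducing the computation above.

```python
def add(A, B):
    """Entrywise sum of two matrices; A and B must have the same size."""
    if len(A) != len(B) or len(A[0]) != len(B[0]):
        raise ValueError("matrices must have the same size")
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

A = [[1, 2, 3], [1, 0, 4]]
B = [[5, 2, 3], [-6, 2, 1]]
print(add(A, B))  # [[6, 4, 6], [-5, 2, 5]]
```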
Addition of matrices obeys many of the same properties as ordinary addition of numbers. Note that when we write, for example, \(A+B\), we assume that both matrices are of the same size so that the operation is indeed defined.
Proposition \(\PageIndex{1}\): Properties of Matrix Addition
Let \(A,B\) and \(C\) be matrices. Then, the following properties hold.
- Commutative Law of Addition \[A+B=B+A \label{mat1}\]
- Associative Law of Addition \[\left( A+B\right) +C=A+\left( B+C\right) \label{mat2}\]
- Existence of an Additive Identity \[\begin{array}{c} \mbox{There exists a zero matrix 0 such that}\\ A+0=A \label{mat3} \end{array}\]
- Existence of an Additive Inverse \[\begin{array}{c} \mbox{There exists a matrix $-A$ such that} \\ A+\left( -A\right) =0 \label{mat4} \end{array}\]
- Proof
Consider the Commutative Law of Addition given in \(\eqref{mat1}\). Let \(A,B,C,\) and \(D\) be matrices such that \(A+B=C\) and \(B+A=D.\) We want to show that \(D=C\) . To do so, we will use the definition of matrix addition given in Definition \(\PageIndex{4}\) . Now, \[c_{ij}=a_{ij}+b_{ij}=b_{ij}+a_{ij}=d_{ij}\nonumber \] Therefore, \(C=D\) because the \(ij^{th}\) entries are the same for all \(i\) and \(j\) . Note that the conclusion follows from the commutative law of addition of numbers, which says that if \(a\) and \(b\) are two numbers, then \(a+b = b+a\) . The proofs of the other results are similar and are left as an exercise.
We call the zero matrix in \(\eqref{mat3}\) the additive identity . Similarly, we call the matrix \(-A\) in \(\eqref{mat4}\) the additive inverse . \(-A\) is defined to equal \(\left( -1\right) A = [-a_{ij}].\) In other words, every entry of \(A\) is multiplied by \(-1\) .
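The additive inverse can also be checked entrywise in code. This Python sketch (illustration only; the helper names `add` and `neg` are hypothetical) negates every entry of \(A\) and confirms that \(A + \left( -A \right)\) is the zero matrix.

```python
def add(A, B):
    """Entrywise sum of two same-size matrices."""
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

def neg(A):
    """The additive inverse -A = (-1)A: negate every entry of A."""
    return [[-a for a in row] for row in A]

A = [[1, 2], [3, -4]]
zero = [[0, 0], [0, 0]]
print(add(A, neg(A)) == zero)  # True: A + (-A) is the zero matrix
```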
In the next section we will study scalar multiplication in more depth to understand what is meant by \(\left( -1\right) A.\)
Scalar Multiplication of Matrices
Recall that we use the word scalar when referring to numbers. Therefore, scalar multiplication of a matrix is the multiplication of a matrix by a number. To illustrate this concept, consider the following example in which a matrix is multiplied by the scalar \(3\) . \[3\left[ \begin{array}{rrrr} 1 & 2 & 3 & 4 \\ 5 & 2 & 8 & 7 \\ 6 & -9 & 1 & 2 \end{array} \right] = \left[ \begin{array}{rrrr} 3 & 6 & 9 & 12 \\ 15 & 6 & 24 & 21 \\ 18 & -27 & 3 & 6 \end{array} \right]\nonumber \]
The new matrix is obtained by multiplying every entry of the original matrix by the given scalar.
The formal definition of scalar multiplication is as follows: if \(A = \left[ a_{ij} \right]\) is a matrix and \(k\) is a scalar, then \(kA = \left[ k a_{ij} \right]\).
Consider the following example.
Example \(\PageIndex{3}\): Effect of Multiplication by a Scalar
Find the result of multiplying the following matrix \(A\) by \(7\) . \[A=\left[ \begin{array}{rr} 2 & 0 \\ 1 & -4 \end{array} \right]\nonumber \]
Solution
By Definition \(\PageIndex{5}\) , we multiply each element of \(A\) by \(7\) . Therefore, \[7A = 7\left[ \begin{array}{rr} 2 & 0 \\ 1 & -4 \end{array} \right] = \left[ \begin{array}{rr} 7(2) & 7(0) \\ 7(1) & 7(-4) \end{array} \right] = \left[ \begin{array}{rr} 14 & 0 \\ 7 & -28 \end{array} \right]\nonumber \]
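This computation is one line of code once we apply the scalar to every entry. A Python sketch (illustration only; the helper name `scalar_mult` is hypothetical) reproducing the example:

```python
def scalar_mult(k, A):
    """Multiply every entry of the matrix A by the scalar k."""
    return [[k * a for a in row] for row in A]

A = [[2, 0], [1, -4]]
print(scalar_mult(7, A))  # [[14, 0], [7, -28]]
```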
As with addition of matrices, scalar multiplication satisfies several useful properties.
Proposition \(\PageIndex{2}\): Properties of Scalar Multiplication
Let \(A, B\) be matrices, and \(k, p\) be scalars. Then, the following properties hold.
- Distributive Law over Matrix Addition \[k \left( A+B\right) =k A+ kB\nonumber \]
- Distributive Law over Scalar Addition \[\left( k +p \right) A= k A+p A\nonumber \]
- Associative Law for Scalar Multiplication \[k \left( p A\right) = \left( k p \right) A\nonumber \]
- Rule for Multiplication by \(1\) \[1A=A\nonumber \]
- Proof
The proof of this proposition is similar to the proof of Proposition \(\PageIndex{1}\) and is left as an exercise for the reader.
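While a numerical check is not a proof, it can build confidence in the four properties above. The following Python sketch (illustration only; the helper names `add` and `smul` are hypothetical) verifies each property of Proposition \(\PageIndex{2}\) on one sample choice of matrices and scalars.

```python
def add(A, B):
    """Entrywise sum of two same-size matrices."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def smul(k, A):
    """Scalar multiplication: multiply every entry of A by k."""
    return [[k * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[0, -5], [7, 1]]
k, p = 3, -2

print(smul(k, add(A, B)) == add(smul(k, A), smul(k, B)))  # True: distributive over matrix addition
print(smul(k + p, A) == add(smul(k, A), smul(p, A)))      # True: distributive over scalar addition
print(smul(k, smul(p, A)) == smul(k * p, A))              # True: associative law
print(smul(1, A) == A)                                    # True: multiplication by 1
```

Checking a single instance like this does not prove the laws for all matrices and scalars, which is why the proposition still requires the entrywise argument sketched in the proof of Proposition \(\PageIndex{1}\).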