2.1: Matrix Addition and Scalar Multiplication
- When are two matrices equal?
- Write an explanation of how to add matrices as though writing to someone who knows what a matrix is but not much more.
- T/F: There is only 1 zero matrix.
- T/F: To multiply a matrix by 2 means to multiply each entry in the matrix by 2.
In the past, when we dealt with expressions that used “\(x\),” we didn’t just add and multiply \(x\)’s together for the fun of it, but rather because we were usually given some sort of equation that had \(x\) in it and we had to “solve for \(x\).”
This begs the question, “What does it mean to be equal?” Two numbers are equal, when, \(\ldots\), uh, \(\ldots\), nevermind. What does it mean for two matrices to be equal? We say that matrices \(A\) and \(B\) are equal when their corresponding entries are equal. This seems like a very simple definition, but it is rather important, so we give it a box.
Two \(m\times n\) matrices \(A\) and \(B\) are equal if their corresponding entries are equal.
Notice that our more formal definition specifies that if matrices are equal, they have the same dimensions. This should make sense.
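To see the definition in action, here is a small Python sketch (the language and the helper name `matrices_equal` are our own illustration, not part of the text) that checks equality exactly as the definition states: first the dimensions, then the corresponding entries.

```python
def matrices_equal(A, B):
    """Return True when A and B have the same dimensions and equal corresponding entries."""
    # Matrices are represented as lists of rows.
    if len(A) != len(B) or any(len(ra) != len(rb) for ra, rb in zip(A, B)):
        return False  # different dimensions: the matrices cannot be equal
    return all(a == b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

print(matrices_equal([[1, 2], [3, 4]], [[1, 2], [3, 4]]))  # True
print(matrices_equal([[1, 2], [3, 4]], [[1, 2]]))          # False: different dimensions
```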
Now we move on to describing how to add two matrices together. To start off, take a wild stab: what do you think the following sum is equal to?
\[\left[\begin{array}{cc}{1}&{2}\\{3}&{4}\end{array}\right]\: +\: \left[\begin{array}{cc}{2}&{-1}\\{5}&{7}\end{array}\right]\: =\: ? \nonumber \]
If you guessed
\[\left[\begin{array}{cc}{3}&{1}\\{8}&{11}\end{array}\right]\: , \nonumber \]
you guessed correctly. That wasn’t so hard, was it?
Let’s keep going, hoping that we are starting to get on a roll. Make another wild guess: what do you think the following expression is equal to?
\[3\cdot\:\left[\begin{array}{cc}{1}&{2}\\{3}&{4}\end{array}\right]\: = \: ? \nonumber \]
If you guessed
\[\left[\begin{array}{cc}{3}&{6}\\{9}&{12}\end{array}\right]\: , \nonumber \]
you guessed correctly!
Even if you guessed wrong both times, you probably have seen enough in these two examples to have a fair idea now what matrix addition and scalar multiplication are all about.
Before we formally define how to perform the above operations, let us first recall that if \(A\) is an \(m\times n\) matrix, then we can write \(A\) as
\[A=\left[\begin{array}{cccc}{a_{11}}&{a_{12}}&{\cdots}&{a_{1n}}\\{a_{21}}&{a_{22}}&{\cdots}&{a_{2n}}\\{\vdots}&{\vdots}&{\ddots}&{\vdots}\\{a_{m1}}&{a_{m2}}&{\cdots}&{a_{mn}}\end{array}\right] . \nonumber \]
Secondly, we should define what we mean by the word scalar. A scalar is any number that we multiply a matrix by. (In some sense, we use that number to scale the matrix.) We are now ready to define our first arithmetic operations.
Let \(A\) and \(B\) be \(m\times n\) matrices. The sum of \(A\) and \(B\), denoted \(A + B\), is
\[\left[\begin{array}{cccc}{a_{11}+b_{11}}&{a_{12}+b_{12}}&{\cdots}&{a_{1n}+b_{1n}} \\ {a_{21}+b_{21}}&{a_{22}+b_{22}}&{\cdots}&{a_{2n}+b_{2n}} \\ {\vdots}&{\vdots}&{\ddots}&{\vdots} \\ {a_{m1}+b_{m1}}&{a_{m2}+b_{m2}}&{\cdots}&{a_{mn}+b_{mn}}\end{array}\right] . \nonumber \]
Let \(A\) be an \(m\times n\) matrix and let \(k\) be a scalar. The scalar multiplication of \(k\) and \(A\), denoted \(kA\), is
\[\left[\begin{array}{cccc}{ka_{11}}&{ka_{12}}&{\cdots}&{ka_{1n}} \\ {ka_{21}}&{ka_{22}}&{\cdots}&{ka_{2n}} \\ {\vdots}&{\vdots}&{\ddots}&{\vdots} \\ {ka_{m1}}&{ka_{m2}}&{\cdots}&{ka_{mn}}\end{array}\right] \nonumber \]
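These definitions translate directly into code. The following Python sketch (an illustration of ours; the function names `matrix_add` and `scalar_multiply` do not come from the text) builds the sum and the scalar multiple entry by entry, exactly as the formulas above prescribe.

```python
def matrix_add(A, B):
    """Entrywise sum of two m x n matrices, each given as a list of rows."""
    assert len(A) == len(B) and all(len(ra) == len(rb) for ra, rb in zip(A, B)), \
        "A + B is only defined when A and B have the same dimensions"
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_multiply(k, A):
    """Multiply every entry of A by the scalar k."""
    return [[k * a for a in row] for row in A]

print(matrix_add([[1, 2], [3, 4]], [[2, -1], [5, 7]]))  # [[3, 1], [8, 11]]
print(scalar_multiply(3, [[1, 2], [3, 4]]))             # [[3, 6], [9, 12]]
```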
We are now ready for an example.
Let
\[A=\left[\begin{array}{ccc}{1}&{2}&{3}\\{-1}&{2}&{1}\\{5}&{5}&{5}\end{array}\right] \:, \qquad B=\left[\begin{array}{ccc}{2}&{4}&{6}\\{1}&{2}&{2}\\{-1}&{0}&{4}\end{array}\right] \:, \qquad C=\left[\begin{array}{ccc}{1}&{2}&{3}\\{9}&{8}&{7}\end{array}\right] \:. \nonumber \]
Simplify the following matrix expressions.
- \(A+B\)
- \(B+A\)
- \(A-B\)
- \(A+C\)
- \(-3A+2B\)
- \(A-A\)
- \(5A+5B\)
- \(5(A+B)\)
Solution
- \(A+B=\left[\begin{array}{ccc}{3}&{6}&{9}\\{0}&{4}&{3}\\{4}&{5}&{9}\end{array}\right]\)
- \(B+A=\left[\begin{array}{ccc}{3}&{6}&{9}\\{0}&{4}&{3}\\{4}&{5}&{9}\end{array}\right]\)
- \(A-B=\left[\begin{array}{ccc}{-1}&{-2}&{-3}\\{-2}&{0}&{-1}\\{6}&{5}&{1}\end{array}\right]\)
- \(A+C\) is not defined. If we look at our definition of matrix addition, we see that the two matrices need to be the same size. Since \(A\) and \(C\) have different dimensions, we don’t even attempt the addition; we simply say that the sum is not defined.
- \(-3A+2B=\left[\begin{array}{ccc}{1}&{2}&{3}\\{5}&{-2}&{1}\\{-17}&{-15}&{-7}\end{array}\right]\)
- \(A-A=\left[\begin{array}{ccc}{0}&{0}&{0}\\{0}&{0}&{0}\\{0}&{0}&{0}\end{array}\right]\)
- Strictly speaking, this is \(\left[\begin{array}{ccc}{5}&{10}&{15}\\{-5}&{10}&{5}\\{25}&{25}&{25}\end{array}\right]\: + \:\left[ \begin{array}{ccc}{10}&{20}&{30}\\{5}&{10}&{10}\\{-5}&{0}&{20}\end{array}\right]\: = \:\left[ \begin{array}{ccc}{15}&{30}&{45}\\{0}&{20}&{15}\\{20}&{25}&{45}\end{array}\right]\).
- Strictly speaking, this is
\(\begin{aligned}5\left(\left[\begin{array}{ccc}{1}&{2}&{3}\\{-1}&{2}&{1}\\{5}&{5}&{5}\end{array}\right]\:+ \:\left[\begin{array}{ccc}{2}&{4}&{6}\\{1}&{2}&{2}\\{-1}&{0}&{4}\end{array}\right]\right)\: &=\:5\cdot\left[\begin{array}{ccc}{3}&{6}&{9}\\{0}&{4}&{3}\\{4}&{5}&{9}\end{array}\right] \\ &=\left[\begin{array}{ccc}{15}&{30}&{45}\\{0}&{20}&{15}\\{20}&{25}&{45}\end{array}\right]\end{aligned}\)
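If a tool such as NumPy happens to be available (an assumption on our part; the text itself does not rely on software), the computations in this example can be spot-checked in a few lines, since `+` and `*` act entrywise on arrays of the same shape.

```python
import numpy as np

A = np.array([[1, 2, 3], [-1, 2, 1], [5, 5, 5]])
B = np.array([[2, 4, 6], [1, 2, 2], [-1, 0, 4]])

print(A + B)                                        # same result as B + A
print(-3 * A + 2 * B)                               # [[1, 2, 3], [5, -2, 1], [-17, -15, -7]]
print(np.array_equal(5 * A + 5 * B, 5 * (A + B)))   # True: the two expressions agree
```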
Our example raised a few interesting points. Notice how \(A+B = B+A\). We probably aren’t surprised by this, since we know that when dealing with numbers, \(a+b = b+a\). Also, notice that \(5A+5B=5(A+B)\). In our example, we were careful to compute each of these expressions following the proper order of operations; knowing these are equal allows us to compute similar expressions in the most convenient way.
Another interesting thing that came from our previous example is that
\[A-A=\left[\begin{array}{ccc}{0}&{0}&{0}\\{0}&{0}&{0}\\{0}&{0}&{0}\end{array}\right] \: . \nonumber \]
It seems like this should be a special matrix; after all, every entry is 0 and 0 is a special number.
In fact, this is a special matrix. We define \(\mathbf{0}\), which we read as “the zero matrix,” to be the matrix of all zeros.\(^{1}\) We should be careful; this previous “definition” is a bit ambiguous, for we have not stated what size the zero matrix should be. Is \(\left[\begin{array}{cc}{0}&{0}\\{0}&{0}\end{array}\right]\) the zero matrix? How about \(\left[\begin{array}{cc}{0}&{0}\end{array}\right]\)?
Let’s not get bogged down in semantics. If we ever see \(\mathbf{0}\) in an expression, we will usually know right away what size \(\mathbf{0}\) should be; it will be the size that allows the expression to make sense. If \(A\) is a \(3\times 5\) matrix, and we write \(A+\mathbf{0}\), we’ll simply assume that \(\mathbf{0}\) is also a \(3\times 5\) matrix. If we are ever in doubt, we can add a subscript; for instance, \(\mathbf{0}_{2\times 7}\) is the \(2\times7\) matrix of all zeros.
Since the zero matrix is an important concept, we give it its own definition box.
The \(m\times n\) matrix of all zeros, denoted \(\mathbf{0}_{m\times n}\), is the zero matrix. When the dimensions of the zero matrix are clear from the context, the subscript is generally omitted.
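As a quick aside for readers who like to experiment, NumPy’s `np.zeros` (our choice of tool, not the text’s) produces a zero matrix of any stated size, mirroring the subscript notation \(\mathbf{0}_{m\times n}\).

```python
import numpy as np

Z = np.zeros((2, 7))                            # the 2 x 7 zero matrix, 0_{2x7}
A = np.arange(15).reshape(3, 5)                 # some 3 x 5 matrix
print(np.array_equal(A + np.zeros((3, 5)), A))  # True: A + 0 = A
```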
The following presents some of the properties of matrix addition and scalar multiplication that we discovered above, plus a few more.
The following equalities hold for all \(m\times n\) matrices \(A\), \(B\) and \(C\) and scalars \(k\).
- \(A+B=B+A\) (Commutative Property)
- \((A+B)+C=A+(B+C)\) (Associative Property)
- \(k(A+B)=kA+kB\) (Scalar Multiplication Distributive Property)
- \(kA=Ak\)
- \(A+\mathbf{0}=\mathbf{0}+A=A\) (Additive Identity)
- \(0A=\mathbf{0}\)
Be sure that this last property makes sense; it says that if we multiply any matrix by the number 0, the result is the zero matrix, or \(\mathbf{0}\).
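These properties are general facts, not observations about a single example, but a quick numerical spot-check can build confidence in them. The sketch below assumes NumPy is available and is purely illustrative.

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
C = np.array([[0, 1], [-2, 3]])
k = 7

print(np.array_equal(A + B, B + A))                 # Commutative Property
print(np.array_equal((A + B) + C, A + (B + C)))     # Associative Property
print(np.array_equal(k * (A + B), k * A + k * B))   # Distributive Property
print(np.array_equal(0 * A, np.zeros((2, 2))))      # 0A is the zero matrix
```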
We began this section with the concept of matrix equality. Let’s put our matrix addition properties to use and solve a matrix equation.
Let
\[A=\left[\begin{array}{cc}{2}&{-1}\\{3}&{6}\end{array}\right] . \nonumber \]
Find the matrix \(X\) such that
\[2A+3X=-4A. \nonumber \]
Solution
We can use basic algebra techniques to manipulate this equation for \(X\); first, let’s subtract \(2A\) from both sides. This gives us \[3X = -6A. \nonumber \] Now divide both sides by 3 to get \[X = -2A. \nonumber \] Now we just need to compute \(-2A\); we find that \[X=\left[\begin{array}{cc}{-4}&{2}\\{-6}&{-12}\end{array}\right] . \nonumber \]
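For readers following along with software, the short sketch below (our own illustration, again assuming NumPy) computes \(X=-2A\) and confirms that it satisfies the original equation \(2A+3X=-4A\).

```python
import numpy as np

A = np.array([[2, -1], [3, 6]])
X = -2 * A                                     # from 3X = -6A, so X = -2A
print(X)                                       # [[-4,  2], [-6, -12]]
print(np.array_equal(2 * A + 3 * X, -4 * A))   # True: X solves the equation
```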
Our matrix properties identified \(\mathbf{0}\) as the Additive Identity; i.e., if you add \(\mathbf{0}\) to any matrix \(A\), you simply get \(A\). This is similar in notion to the fact that for all numbers \(a\), \(a+0 = a\). A Multiplicative Identity would be a matrix \(I\) where \(I\times A=A\) for all matrices \(A\). (What would such a matrix look like? A matrix of all 1s, perhaps?) However, in order for this to make sense, we’ll need to learn to multiply matrices together, which we’ll do in the next section.
Footnotes
[1] We use the bold face to distinguish the zero matrix, \(\mathbf{0}\), from the number zero, 0.


