
2.3: The matrix-vector product


    \( \def\Span#1{\text{Span}\left\lbrace #1\right\rbrace} \def\vect#1{\mathbf{#1}} \def\ip{\boldsymbol{\cdot}} \def\iff{\Longleftrightarrow} \def\cp{\times} \)

    The matrix-vector product \(A\vect{x}\)

     

    In this section we introduce another way to represent and interpret a system of linear equations. We define the product of an \(m\times n\) matrix \(A\) with a vector \(\vect{x}\) in \(\mathbb{R}^n\). In the next chapter this product will also be the stepping stone to the general matrix-matrix product.

    Definition \(\PageIndex{1}\)
     
    The product \(A\vect{x}\) of an \(m\times n\) matrix \[ A = [\vect{a}_1 \quad \vect{a}_2 \quad \ldots \quad \vect{a}_n] \nonumber \nonumber\] with a vector \[ \vect{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}\in \mathbb{R}^n \nonumber \nonumber\] is defined as \[ A\vect{x} = x_1\vect{a}_1 + x_2\vect{a}_2 + \ldots + x_n\vect{a}_n. \nonumber \nonumber\] So: \(A\vect{x}\) is the linear combination of the columns of the matrix \(A\) with the entries of the vector \(\vect{x}\) as coefficients. If the size \(n\) of the vector \(\vect{x}\) is not equal to the number of columns of the matrix \(A\), then the product \(A\vect{x}\) is not defined.
    Example \(\PageIndex{2}\)
     
    \[ \begin{bmatrix} 2 & 3 \\ 4 & 1 \\ 3 & 5 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 5 \\ -1 \end{bmatrix} = 5 \begin{bmatrix} 2 \\ 4 \\ 3 \\ 0 \end{bmatrix} + (-1) \begin{bmatrix} 3 \\ 1 \\ 5 \\ 1 \end{bmatrix} = \begin{bmatrix} 10 \\ 20 \\ 15 \\ 0 \end{bmatrix} + \begin{bmatrix} -3 \\ -1 \\ -5 \\ -1 \end{bmatrix} = \begin{bmatrix} 7 \\ 19 \\ 10 \\ -1 \end{bmatrix}. \nonumber \nonumber\] and \[ \begin{bmatrix}1 & 2 & 3 & 5 \end{bmatrix} \begin{bmatrix} 4 \\ -2 \\ -1 \\ 3\end{bmatrix} = 1\cdot4 +2\cdot(-2)+3\cdot(-1) + 5\cdot 3 = 12. \nonumber \nonumber\]
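    The definition can also be carried out numerically. The following sketch in plain Python (an illustration added here, not part of the original text) builds \(A\vect{x}\) column by column, exactly as the linear-combination definition prescribes; the matrix is stored as a list of rows, and the numbers are those of the first product above.

```python
def mat_vec(A, x):
    """Compute A x as the linear combination x_1 a_1 + ... + x_n a_n
    of the columns a_j of A. A is given as a list of rows."""
    m, n = len(A), len(A[0])
    if len(x) != n:
        raise ValueError("A x is undefined: x needs one entry per column of A")
    result = [0] * m
    for j in range(n):          # add in column j of A, scaled by x_j
        for i in range(m):
            result[i] += x[j] * A[i][j]
    return result

# the first product of the example above
print(mat_vec([[2, 3], [4, 1], [3, 5], [0, 1]], [5, -1]))  # [7, 19, 10, -1]
```

    Note that a \(1\times n\) matrix times a vector, as in the second product above, comes out as a list with a single entry.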

    The interpretation of \(A\vect{x}\) as a linear combination of the columns of \(A\) is important to keep in mind, even after you have seen the slightly easier way to compute the matrix-vector product that follows.

    Proposition \(\PageIndex{3}\)
     
    \[ \left[\begin{array}{ccccc} a_{11} & a_{12}& \ldots& \ldots& a_{1n} \\ a_{21} & a_{22}& \ldots& \ldots& a_{2n} \\ \vdots & \vdots& \ldots& \ldots& \vdots \\ a_{m1} & a_{m2}& \ldots& \ldots& a_{mn} \end{array} \right] \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} a_{11}x_1 + a_{12}x_2 + \ldots+ a_{1n}x_n \\ a_{21}x_1 + a_{22}x_2 + \ldots+ a_{2n}x_n \\ \vdots\\ a_{m1}x_1 + a_{m2}x_2 + \ldots+ a_{mn}x_n \end{bmatrix}. \nonumber \nonumber\]
    Proof
    The vector on the right-hand side of the identity is equal to the linear combination \[ x_1 \begin{bmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{bmatrix} + x_2 \begin{bmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{bmatrix} + \ldots + x_n \begin{bmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{bmatrix}. \nonumber\] Note that the entry in the \(i\)-th position of the product, \[ a_{i1}x_1 + a_{i2}x_2 + \ldots+ a_{in}x_n, \nonumber\] is the 'row-column product' \[ \begin{bmatrix} a_{i1} & a_{i2} & \ldots & a_{in} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ \vdots \\ x_n \end{bmatrix} \nonumber\] of the \(i\)-th row of \(A\) with the vector \(\vect{x}\).
    Example \(\PageIndex{4}\)
     
    We find the product using the row-column rule: \[ \begin{bmatrix} 3 & 4 & 5 \\ 1 & 0 & -1 \\ 2 & 2 & 4 \\ 5 & -5 & 2\end{bmatrix} \begin{bmatrix} 3 \\ 1 \\ -4 \end{bmatrix} = \begin{bmatrix} 3\cdot3\!\! &+&\!\! 4\cdot1\!\! &+&\!\! 5\cdot(-4) \\ 1\cdot3\!\! &+& \!\!0\cdot1\!\! &+&\!\! (-1)\cdot(-4) \\ 2\cdot3 \quad \!\!&+&\!\! 2\cdot1 \quad \!\! &+&\!\! 4\cdot(-4)\\ 5\cdot3 \quad \!\!&+& \!\!\!(-5)\cdot1 \quad \!\!\! &+&\!\! 2\cdot(-4) \end{bmatrix} = \begin{bmatrix} -7 \\ 7 \\ -8\\ 2\end{bmatrix}. \nonumber \nonumber\]
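    The row-column rule is a one-liner in code. The sketch below (an added illustration, assuming the same list-of-rows representation as before) computes each entry of \(A\vect{x}\) as the product of a row of \(A\) with \(\vect{x}\), and reproduces the example.

```python
def mat_vec_rows(A, x):
    """Row-column rule: entry i of A x is the product of row i of A with x."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[3, 4, 5], [1, 0, -1], [2, 2, 4], [5, -5, 2]]
print(mat_vec_rows(A, [3, 1, -4]))  # [-7, 7, -8, 2], as in the example
```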

    From the above it follows that the `matrix-vector equation' \[ \left[\begin{array}{ccccc} a_{11} & a_{12}& \ldots& \ldots& a_{1n} \\ a_{21} & a_{22}& \ldots& \ldots& a_{2n} \\ \vdots & \vdots& \cdots& \cdots& \vdots \\ a_{m1} & a_{m2}& \ldots& \ldots& a_{mn} \end{array} \right] \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots\\ b_m\end{bmatrix} \nonumber \nonumber\] and the linear system \[ \left\{\begin{array}{ccccccccc} a_{11}x_1\! & \!+\!&\!a_{12}x_2\! & \!+\!&\! \ldots\! & \!+\!&\!a_{1n}x_n \! & \!=\!&\! b_1 \\ a_{21}x_1 \quad \! & \!+\!&\!a_{22}x_2\! & \!+\!&\!\ldots\! & \!+\!&\!a_{2n}x_n \! & \!=\!&\! b_2 \\ \vdots \! & \! \!&\! \vdots\! & \! \!&\!\cdots\! & \! \!&\! \vdots \! & \! \!&\! \vdots \\ a_{m1}x_1 \quad \! & \!+\!&\!a_{m2}x_2\! & \!+\!&\! \ldots\! & \!+\!&\!a_{mn}x_n \! & \!=\!&\! b_m \\ \end{array} \right. \nonumber \nonumber\] are one and the same thing! So, we can see this linear system as

    • a vector equation: \[ x_1\vect{a}_1 + x_2\vect{a}_2 + \ldots + x_n\vect{a}_n = \vect{b} \nonumber \nonumber\] or
    • a matrix equation: \[ A \vect{x} = \vect{b}. \nonumber \nonumber\]

    As we will see later, these different interpretations may lead to different insights.

    Example \(\PageIndex{5}\)
     
    We want to write the system of equations \[ \left\{\begin{array} {rr} 5x_1 & - & 3x_2 & -&2x_3 &=& 4 \\ 3x_1 & + & 7x_2 & -&2x_3 &=& 5 \\ 2x_1 & - &6x_2 & +&5x_3 &=& 6 \\ x_1 & & & +& x_3 &=& 8 \end{array} \right. \nonumber \nonumber\] in these two different forms. The corresponding vector equation is \[ x_1 \begin{bmatrix} 5 \\ 3 \\ 2 \\ 1 \end{bmatrix} + x_2 \begin{bmatrix} -3 \\ 7 \\ -6 \\ 0 \end{bmatrix} + x_3 \begin{bmatrix} -2 \\ -2 \\ 5 \\ 1 \end{bmatrix} = \begin{bmatrix} 4 \\ 5 \\ 6 \\ 8 \end{bmatrix}, \nonumber \nonumber\] and the corresponding matrix equation becomes \[ \begin{bmatrix} 5 & -3 & -2\\ 3 &7 & -2\\ 2&-6&5 \\ 1 &0&1 \end{bmatrix} \begin{bmatrix} x_1 \\x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 4 \\ 5 \\ 6 \\ 8 \end{bmatrix}. \nonumber \nonumber\]
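    That the matrix equation and the system really are "one and the same thing" can be checked numerically. In the sketch below (an added illustration; the test vector \((1,2,3)\) is arbitrary), evaluating the four left-hand sides of the system of the example gives the same result as the matrix-vector product with the coefficient matrix.

```python
def mat_vec(A, x):
    # row-column rule for the matrix-vector product
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# coefficient matrix of the example above
A = [[5, -3, -2], [3, 7, -2], [2, -6, 5], [1, 0, 1]]

def lhs(x1, x2, x3):
    # the four left-hand sides of the system, written out in full
    return [5*x1 - 3*x2 - 2*x3,
            3*x1 + 7*x2 - 2*x3,
            2*x1 - 6*x2 + 5*x3,
            x1 + x3]

assert mat_vec(A, [1, 2, 3]) == lhs(1, 2, 3)  # both give [-7, 11, 5, 4]
```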
    Proposition \(\PageIndex{6}\)
     
    For each \(m\times n\) matrix \(A\), for all vectors \(\vect{x},\vect{y}\) in \(\mathbb{R}^n\), and for all scalars \(c\):
    1. \(A (\vect{x}+\vect{y} ) = A\vect{x} + A\vect{y}\);
    2. \(A (c\vect{x}) = c A\vect{x}\).
    Proof
    Let's prove the first statement; the proof of the second goes along the same lines. There are several ways to derive the formula; using the interpretation of the product as a linear combination of the columns may be the easiest. So assume \[ A = [ \vect{a}_1 \quad \vect{a}_2 \quad \ldots \quad \vect{a}_n ], \quad \vect{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ \vdots \\ x_n \end{bmatrix}, \quad \text{and}\quad \vect{y} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ \vdots \\ y_n \end{bmatrix}. \nonumber\] Then \[ A (\vect{x}+\vect{y} ) = A \begin{bmatrix} x_1+y_1 \\ x_2+y_2 \\ \vdots \\ \vdots \\ x_n+y_n \end{bmatrix} = (x_1+y_1 )\vect{a}_1 + (x_2+y_2 )\vect{a}_2 + \ldots + (x_n+y_n )\vect{a}_n, \nonumber\] which is equal to \[ \big(x_1\vect{a}_1 + x_2\vect{a}_2 + \ldots + x_n\vect{a}_n\big) + \big(y_1\vect{a}_1 + y_2\vect{a}_2 + \ldots + y_n\vect{a}_n\big) = A\vect{x} + A\vect{y}. \nonumber\]
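    Both rules are easy to check on concrete numbers. The sketch below (an added illustration; the matrix, vectors, and scalar are made-up sample values) verifies the two identities for one \(2\times 3\) matrix.

```python
def mat_vec(A, x):
    # row-column rule for the matrix-vector product
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def vec_add(u, v):
    return [ui + vi for ui, vi in zip(u, v)]

A = [[2, -1, 0], [1, 3, 4]]          # an arbitrary 2x3 sample matrix
x, y, c = [1, 2, 3], [-2, 0, 5], 7   # arbitrary vectors and scalar

# rule 1: A(x + y) = Ax + Ay
assert mat_vec(A, vec_add(x, y)) == vec_add(mat_vec(A, x), mat_vec(A, y))
# rule 2: A(cx) = c(Ax)
assert mat_vec(A, [c * xi for xi in x]) == [c * v for v in mat_vec(A, x)]
```

    A numerical check like this is not a proof, of course, but it is a quick sanity test of the algebra.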
    Exercise \(\PageIndex{7}\)
     
    Prove statement 2 of the previous proposition.

    Using the above rules we can give shorter proofs of statements concerning linear systems. We illustrate this by having a second look at Proposition 2.4.6:

    Example \(\PageIndex{8}\)
     
    The contents of that proposition: suppose \((c_{1},\ldots,c_{n})\) is a solution of a linear system. Then \((c_{1}',\ldots,c_{n}')\) is also a solution of the linear system if and only if there exists a solution \((d_{1},\ldots,d_{n})\) of the associated homogeneous system such that \(c'_{i}=c_{i}+d_{i}\) for all \(i\). Using the matrix-vector product we can derive this property as follows. Consider the solutions in vector form \[ \vect{c} = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ \vdots \\ c_n \end{bmatrix}, \quad \vect{c'} = \begin{bmatrix} c'_1 \\ c'_2 \\ \vdots \\ \vdots \\ c'_n \end{bmatrix} \nonumber\] and let \(A\) and \(\vect{b}\) have the obvious meanings. It is then given that both \[ A\vect{c} = \vect{b} \quad \text{and} \quad A\vect{c'} = \vect{b}. \nonumber\] From the rules just found it follows that \[ A(\vect{c'} - \vect{c}) = A\vect{c'} - A\vect{c} = \vect{b} - \vect{b} = \vect{0}, \nonumber\] which shows that the vector \[ \vect{d} = \vect{c'} - \vect{c} \nonumber\] is a solution of the homogeneous system. Of course \[ \vect{d} = \vect{c'} - \vect{c} \iff \vect{c'} = \vect{c} + \vect{d} \iff c'_i = c_i + d_i, \quad i=1,\ldots, n. \nonumber\] On the other hand, if \(\vect{c}\) is a solution of the linear system \(A\vect{x} = \vect{b}\) and \(\vect{d}\) is a solution of the homogeneous system \(A\vect{x} = \vect{0}\), then \[ A(\vect{c}+\vect{d}) = A\vect{c}+A\vect{d} = \vect{b} + \vect{0} = \vect{b}, \nonumber\] so \[ \vect{c'} = \vect{c} + \vect{d} \nonumber\] is also a solution of the system \(A\vect{x} = \vect{b}\). The proof is basically the same as before, but using the matrix-vector product it can be written more concisely.
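    A small numerical illustration of this structure of the solution set (the matrix and vectors below are made-up sample values, added here for the sketch): given a particular solution \(\vect{c}\) of \(A\vect{x}=\vect{b}\) and a homogeneous solution \(\vect{d}\), the sum \(\vect{c}+\vect{d}\) again solves the system.

```python
def mat_vec(A, x):
    # row-column rule for the matrix-vector product
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 1], [2, 2]]   # sample coefficient matrix
b = [3, 6]
c = [1, 2]             # a particular solution: A c = b
d = [1, -1]            # a homogeneous solution: A d = 0

assert mat_vec(A, c) == b
assert mat_vec(A, d) == [0, 0]
# A(c + d) = Ac + Ad = b + 0 = b, so c + d solves the system as well
assert mat_vec(A, [ci + di for ci, di in zip(c, d)]) == b
```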
    Exercise \(\PageIndex{9}\)
     
    (To practice with a proof as in the previous example.) Suppose that the linear system \[ \left\{\begin{array}{ccccccccc} a_{11}x_1\! & \!+\!&\!a_{12}x_2\! & \!+\!&\! \ldots\! & \!+\!&\!a_{1n}x_n \! & \!=\!&\! p_1 \\ a_{21}x_1 \quad \! & \!+\!&\!a_{22}x_2\! & \!+\!&\!\ldots\! & \!+\!&\!a_{2n}x_n \! & \!=\!&\! p_2 \\ \vdots \! & \! \!&\! \vdots\! & \! \!&\!\cdots\! & \! \!&\! \vdots \! & \! \!&\! \vdots \\ a_{m1}x_1 \quad \! & \!+\!&\!a_{m2}x_2\! & \!+\!&\! \ldots\! & \!+\!&\!a_{mn}x_n \! & \!=\!&\! p_m \\ \end{array} \right. \nonumber\] is consistent and that the linear system \[ \left\{\begin{array}{ccccccccc} a_{11}x_1\! & \!+\!&\!a_{12}x_2\! & \!+\!&\! \ldots\! & \!+\!&\!a_{1n}x_n \! & \!=\!&\! q_1 \\ a_{21}x_1 \quad \! & \!+\!&\!a_{22}x_2\! & \!+\!&\!\ldots\! & \!+\!&\!a_{2n}x_n \! & \!=\!&\! q_2 \\ \vdots \! & \! \!&\! \vdots\! & \! \!&\!\cdots\! & \! \!&\! \vdots \! & \! \!&\! \vdots \\ a_{m1}x_1 \quad \! & \!+\!&\!a_{m2}x_2\! & \!+\!&\! \ldots\! & \!+\!&\!a_{mn}x_n \! & \!=\!&\! q_m \\ \end{array} \right. \nonumber\] is inconsistent. Show that the system \[ \left\{\begin{array}{ccccccccc} a_{11}x_1\! & \!+\!&\!a_{12}x_2\! & \!+\!&\! \ldots\! & \!+\!&\!a_{1n}x_n \! & \!=\!&\! r_1 \\ a_{21}x_1 \quad \! & \!+\!&\!a_{22}x_2\! & \!+\!&\!\ldots\! & \!+\!&\!a_{2n}x_n \! & \!=\!&\! r_2 \\ \vdots \! & \! \!&\! \vdots\! & \! \!&\!\cdots\! & \! \!&\! \vdots \! & \! \!&\! \vdots \\ a_{m1}x_1 \quad \! & \!+\!&\!a_{m2}x_2\! & \!+\!&\! \ldots\! & \!+\!&\!a_{mn}x_n \! & \!=\!&\! r_m \\ \end{array} \right. \nonumber\] where \(r_i = p_i - q_i\), \(i=1,\ldots, m\), is inconsistent.

    2.3: The matrix-vector product is shared under a CC BY license and was authored, remixed, and/or curated by LibreTexts.
