4.2: Vector Algebra
- Understand vector addition and scalar multiplication, algebraically.
- Introduce the notion of linear combination of vectors.
Addition and scalar multiplication are two important algebraic operations performed on vectors. Notice that these operations apply to vectors in \(\mathbb{R}^{n}\) for any value of \(n\). We will explore each operation in more detail below.
Addition of Vectors in \(\mathbb{R}^n\)
Addition of vectors in \(\mathbb{R}^n\) is defined as follows.
If \(\vec{u}=\left [ \begin{array}{c} u_{1} \\ \vdots \\ u_{n} \end{array} \right ],\; \vec{v}= \left [ \begin{array}{c} v_{1} \\ \vdots \\ v_{n} \end{array} \right ] \in \mathbb{R}^{n}\) then \(\vec{u}+\vec{v}\in \mathbb{R}^{n}\) and is defined by
\[\begin{aligned} \vec{u}+\vec{v} &= \left [ \begin{array}{c} u_{1} \\ \vdots \\ u_{n} \end{array} \right ] + \left [ \begin{array}{c} v_{1} \\ \vdots \\ v_{n} \end{array} \right ]\\ & = \left [ \begin{array}{c} u_{1}+v_{1} \\ \vdots \\ u_{n}+v_{n} \end{array} \right ]\end{aligned}\]
To add vectors, we simply add corresponding components. In particular, two vectors can be added only if they are the same size.
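Componentwise addition can be sketched in plain Python; this is only an illustration, and the function name `vec_add` is ours, not from the text:

```python
def vec_add(u, v):
    """Add two vectors in R^n componentwise, represented as Python lists."""
    if len(u) != len(v):
        raise ValueError("vectors must be the same size")
    return [ui + vi for ui, vi in zip(u, v)]

print(vec_add([1, 2, 3], [4, -1, 0]))  # [5, 1, 3]
```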
Addition of vectors satisfies some important properties which are outlined in the following theorem.
The following properties hold for vectors \(\vec{u},\vec{v}, \vec{w} \in \mathbb{R}^{n}\).
- The Commutative Law of Addition \[\vec{u}+\vec{v}=\vec{v}+\vec{u}\nonumber \]
- The Associative Law of Addition \[\left( \vec{u}+\vec{v}\right) +\vec{w}=\vec{u}+\left( \vec{v}+\vec{w}\right)\nonumber \]
- The Existence of an Additive Identity \[\vec{u}+\vec{0}=\vec{u} \label{vectoridentity}\]
- The Existence of an Additive Inverse \[\vec{u}+\left( -\vec{u}\right) =\vec{0}\nonumber \]
The additive identity shown in Equation \(\eqref{vectoridentity}\) is also called the zero vector, the \(n \times 1\) vector whose components are all equal to \(0\). Further, \(-\vec{u}\) is simply the vector whose components have the same value as those of \(\vec{u}\) but the opposite sign; this is just \((-1)\vec{u}\). This will be made more explicit in the next section when we explore scalar multiplication of vectors. Note that subtraction is defined as \(\vec{u}-\vec{v} = \vec{u}+\left( -\vec{v} \right)\).
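The four properties above can be checked numerically on sample vectors. This is a quick sanity check, not a proof; the helper names `vec_add` and `vec_neg` are illustrative:

```python
def vec_add(u, v):
    # Componentwise addition of two same-size vectors.
    return [ui + vi for ui, vi in zip(u, v)]

def vec_neg(u):
    # Additive inverse: flip the sign of every component.
    return [-ui for ui in u]

u, v, w = [1, 2], [3, -4], [0, 5]
zero = [0, 0]

assert vec_add(u, v) == vec_add(v, u)                           # commutative law
assert vec_add(vec_add(u, v), w) == vec_add(u, vec_add(v, w))   # associative law
assert vec_add(u, zero) == u                                    # additive identity
assert vec_add(u, vec_neg(u)) == zero                           # additive inverse
print("all four properties hold on this sample")
```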
Scalar Multiplication of Vectors in \(\mathbb{R}^n\)
Scalar multiplication of vectors in \(\mathbb{R}^n\) is defined as follows.
If \(\vec{u}\in \mathbb{R}^{n}\) and \(k\in \mathbb{R}\) is a scalar, then \(k\vec{u}\in \mathbb{R}^{n}\) is defined by \[k\vec{u}=k\left [ \begin{array}{c} u_{1} \\ \vdots \\ u_{n} \end{array} \right ] = \left [ \begin{array}{c} ku_{1} \\ \vdots \\ ku_{n} \end{array} \right ]\nonumber \]
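In code, scalar multiplication scales each component by \(k\). A minimal sketch using plain Python lists (the name `scalar_mul` is ours):

```python
def scalar_mul(k, u):
    """Multiply each component of vector u by the scalar k."""
    return [k * ui for ui in u]

print(scalar_mul(2, [1, -3, 4]))  # [2, -6, 8]
```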
Just as with addition, scalar multiplication of vectors satisfies several important properties. These are outlined in the following theorem.
The following properties hold for vectors \(\vec{u},\vec{v}\in \mathbb{R}^{n}\) and \(k,p\) scalars.
- The Distributive Law over Vector Addition \[k \left( \vec{u}+\vec{v}\right) = k\vec{u}+ k\vec{v}\nonumber\]
- The Distributive Law over Scalar Addition \[\left( k + p \right)\vec{u} = k \vec{u}+p \vec{u}\nonumber\]
- The Associative Law for Scalar Multiplication \[k \left( p \vec{u}\right) = \left(k p \right)\vec{u}\nonumber\]
- Rule for Multiplication by \(1\) \[1\vec{u}=\vec{u}\nonumber\]
Proof

We will show the proof of the Distributive Law over Vector Addition: \[k \left( \vec{u}+\vec{v}\right) = k \vec{u}+ k \vec{v}\nonumber\] Using the definitions of vector addition and scalar multiplication: \[\begin{aligned} k \left( \vec{u}+\vec{v}\right) &= k \left [ \begin{array}{c} u_{1}+v_{1} \\ \vdots \\ u_{n}+v_{n} \end{array} \right ] \\ &= \left [ \begin{array}{c} k \left( u_{1}+v_{1}\right) \\ \vdots \\ k \left( u_{n}+v_{n}\right) \end{array} \right ] \\ &= \left [ \begin{array}{c} k u_{1}+ k v_{1} \\ \vdots \\ k u_{n}+ k v_{n} \end{array} \right ] \\ &= \left [ \begin{array}{c} k u_{1} \\ \vdots \\ k u_{n} \end{array} \right ] + \left [ \begin{array}{c} k v_{1} \\ \vdots \\ k v_{n} \end{array} \right ] \\ &= k \vec{u}+k \vec{v}\end{aligned}\nonumber\] The remaining properties are proved similarly.
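The distributive law just proved can also be verified numerically on sample vectors. This check is illustrative only; the helper names are ours:

```python
def vec_add(u, v):
    # Componentwise vector addition.
    return [ui + vi for ui, vi in zip(u, v)]

def scalar_mul(k, u):
    # Multiply each component of u by the scalar k.
    return [k * ui for ui in u]

k = 3
u, v = [1, 2], [-4, 5]

# k(u + v) should equal ku + kv
lhs = scalar_mul(k, vec_add(u, v))
rhs = vec_add(scalar_mul(k, u), scalar_mul(k, v))
print(lhs, rhs)  # [-9, 21] [-9, 21]
assert lhs == rhs
```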
We now present a useful notion, which you may have seen earlier, that combines vector addition and scalar multiplication.
A vector \(\vec{v}\) is said to be a linear combination of the vectors \(\vec{u}_1,\cdots , \vec{u}_n\) if there exist scalars \(a_{1},\cdots ,a_{n}\) such that \[\vec{v} = a_1 \vec{u}_1 + \cdots + a_n \vec{u}_n\nonumber \]
For example, \[3 \left [ \begin{array}{r} -4 \\ 1 \\ 0 \end{array} \right ] + 2 \left [ \begin{array}{r} -3 \\ 0\\ 1 \end{array} \right ] = \left [ \begin{array}{r} -18 \\ 3 \\ 2 \end{array} \right ].\nonumber \] Thus we can say that \[\vec{v}= \left [ \begin{array}{r} -18 \\ 3 \\ 2 \end{array} \right ]\nonumber \] is a linear combination of the vectors \[\vec{u}_1 = \left [ \begin{array}{r} -4 \\ 1 \\ 0 \end{array} \right ] \mbox{ and } \vec{u}_2 = \left [ \begin{array}{r} -3 \\ 0\\ 1 \end{array} \right ]\nonumber \]
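The computation above can be sketched as a small Python function that forms a linear combination from a list of coefficients and a list of vectors (the name `linear_combination` is ours):

```python
def linear_combination(coeffs, vectors):
    """Return a1*u1 + ... + an*un for scalars coeffs and vectors in R^n."""
    n = len(vectors[0])
    result = [0] * n
    for a, u in zip(coeffs, vectors):
        # Accumulate a * u into the running sum, componentwise.
        result = [r + a * ui for r, ui in zip(result, u)]
    return result

u1 = [-4, 1, 0]
u2 = [-3, 0, 1]
print(linear_combination([3, 2], [u1, u2]))  # [-18, 3, 2]
```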