6.1: The Dot Product
\(\newcommand{\twovec}[2]{\begin{pmatrix} #1 \\ #2 \end{pmatrix} } \)
\(\newcommand{\threevec}[3]{\begin{pmatrix} #1 \\ #2 \\ #3 \end{pmatrix} } \)
\(\newcommand{\fourvec}[4]{\begin{pmatrix} #1 \\ #2 \\ #3 \\ #4 \end{pmatrix} } \)
\(\newcommand{\fivevec}[5]{\begin{pmatrix} #1 \\ #2 \\ #3 \\ #4 \\ #5 \end{pmatrix} } \)
In this section, we introduce a simple algebraic operation, known as the dot product, that helps us measure the length of vectors and the angle formed by a pair of vectors. For two-dimensional vectors \(\mathbf v=\twovec{v_1}{v_2}\) and \(\mathbf w=\twovec{w_1}{w_2}\text{,}\) their dot product \(\mathbf v\cdot\mathbf w\) is the scalar defined to be
\begin{equation*} \mathbf v\cdot\mathbf w = v_1w_1 + v_2w_2\text{.} \end{equation*}
For instance,
\begin{equation*} \twovec12\cdot\twovec3{-1} = 1\cdot 3 + 2\cdot(-1) = 1\text{.} \end{equation*}
Preview Activity 6.1.1.
-
Compute the dot product
\begin{equation*} \twovec{3}{4}\cdot\twovec{2}{-2}\text{.} \end{equation*}
- Sketch the vector \(\mathbf v=\twovec{3}{4}\) below. Then use the Pythagorean theorem to find the length of \(\mathbf v\text{.}\)
- Compute the dot product \(\mathbf v\cdot\mathbf v\text{.}\) How is the dot product related to the length of \(\mathbf v\text{?}\)
- Remember that the matrix \(\left[\begin{array}{rr}0 & -1 \\ 1 & 0\end{array}\right]\) represents the matrix transformation that rotates vectors counterclockwise by \(90^\circ\text{.}\) Beginning with the vector \(\mathbf v = \twovec34\text{,}\) find \(\mathbf w\text{,}\) the result of rotating \(\mathbf v\) by \(90^\circ\text{,}\) and sketch it above.
- What is the dot product \(\mathbf v\cdot\mathbf w\text{?}\)
- Suppose that \(\mathbf v=\twovec ab\text{.}\) Find the vector \(\mathbf w\) that results from rotating \(\mathbf v\) by \(90^\circ\) and find the dot product \(\mathbf v\cdot\mathbf w\text{.}\)
- Suppose that \(\mathbf v\) and \(\mathbf w\) are two perpendicular vectors. What do you think their dot product \(\mathbf v\cdot\mathbf w\) is?
The geometry of the dot product
The dot product is defined, more generally, for any two \(m\)-dimensional vectors:
\begin{equation*} \mathbf v\cdot\mathbf w = v_1w_1 + v_2w_2 + \cdots + v_mw_m\text{.} \end{equation*}
The important thing to remember is that the dot product will produce a scalar. In other words, the two vectors are combined in such a way as to create a number, and, as we'll see, this number conveys important geometric information.
We compute the dot product between two four-dimensional vectors as
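As a quick computational sketch in plain Python (rather than the book's Sage cells; the two four-dimensional vectors here are chosen for illustration):

```python
def dot(v, w):
    """Dot product of two vectors of the same dimension."""
    assert len(v) == len(w)
    return sum(vi * wi for vi, wi in zip(v, w))

# Two illustrative four-dimensional vectors
v = [1, 2, 3, 4]
w = [4, 3, 2, 1]
print(dot(v, w))  # 1*4 + 2*3 + 3*2 + 4*1 = 20
```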
Properties of dot products.
As with ordinary multiplication, the dot product enjoys some familiar algebraic properties, such as commutativity and distributivity. More specifically, it doesn't matter in which order we compute the dot product of two vectors:
\begin{equation*} \mathbf v\cdot\mathbf w = \mathbf w\cdot\mathbf v\text{.} \end{equation*}
If \(s\) is a scalar, we have
\begin{equation*} (s\mathbf v)\cdot\mathbf w = s(\mathbf v\cdot\mathbf w)\text{.} \end{equation*}
We may also distribute the dot product across linear combinations:
\begin{equation*} (a\mathbf v_1 + b\mathbf v_2)\cdot\mathbf w = a(\mathbf v_1\cdot\mathbf w) + b(\mathbf v_2\cdot\mathbf w)\text{.} \end{equation*}
Suppose that \(\mathbf v_1\cdot\mathbf w = 4\) and \(\mathbf v_2\cdot\mathbf w = -7\text{.}\) Then
The most important property of the dot product, and the real reason for our interest in it, is that it gives us geometric information about vectors and their relationship to one another. Let's first think about the length of a vector by looking at the vector \(\mathbf v = \twovec32\) as shown in Figure 6.1.4.
We may find the length of this vector using the Pythagorean theorem as the vector forms the hypotenuse of a right triangle having a horizontal leg of length 3 and a vertical leg of length 2. The length of \(\mathbf v\text{,}\) which we denote as \(|{\mathbf v}|\text{,}\) is therefore \(|{\mathbf v}| = \sqrt{3^2 + 2^2} = \sqrt{13}\text{.}\) Now notice that the dot product of \(\mathbf v\) with itself is
\begin{equation*} \mathbf v\cdot\mathbf v = 3\cdot 3 + 2\cdot 2 = 13 = |{\mathbf v}|^2\text{.} \end{equation*}
This is true in general; that is, we have
\begin{equation*} \mathbf v\cdot\mathbf v = |{\mathbf v}|^2\text{.} \end{equation*}
More than that, the dot product of two vectors records information about the angle between them. Consider Figure 6.1.5.
To see this, we will apply the Law of Cosines, which says that
\begin{equation*}
\begin{aligned}
|\mathbf{w}-\mathbf{v}|^2 & =|\mathbf{v}|^2+|\mathbf{w}|^2-2|\mathbf{v}||\mathbf{w}| \cos \theta \\
(\mathbf{w}-\mathbf{v}) \cdot(\mathbf{w}-\mathbf{v}) & =\mathbf{v} \cdot \mathbf{v}+\mathbf{w} \cdot \mathbf{w}-2|\mathbf{v}||\mathbf{w}| \cos \theta \\
\mathbf{w} \cdot \mathbf{w}+\mathbf{v} \cdot \mathbf{v}-2 \mathbf{v} \cdot \mathbf{w} & =\mathbf{v} \cdot \mathbf{v}+\mathbf{w} \cdot \mathbf{w}-2|\mathbf{v}||\mathbf{w}| \cos \theta \\
-2 \mathbf{v} \cdot \mathbf{w} & =-2|\mathbf{v}||\mathbf{w}| \cos \theta \\
\mathbf{v} \cdot \mathbf{w} & =|\mathbf{v}||\mathbf{w}| \cos \theta\text{.}
\end{aligned}
\end{equation*}
The upshot of this reasoning is that
\begin{equation*} \cos\theta = \frac{\mathbf v\cdot\mathbf w}{|{\mathbf v}|\,|{\mathbf w}|}\text{,} \end{equation*}
which allows us to find the angle between two vectors from their dot product and their lengths.
To summarize:
Geometric properties of the dot product.
The dot product gives us the following geometric information:
\begin{equation*} \begin{array}{rcl} \mathbf v\cdot\mathbf v & {}={} & |{\mathbf v}|^2 \\ \mathbf v\cdot\mathbf w & {}={} & |{\mathbf v}| |{\mathbf w}| \cos\theta \\ \end{array} \end{equation*}
where \(\theta\) is the angle between \(\mathbf v\) and \(\mathbf w\text{.}\)
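These two facts can be sketched in a few lines of plain Python (not the book's Sage code; the vectors are \(\mathbf v=\twovec32\) from Figure 6.1.4 and its \(90^\circ\) rotation):

```python
import math

def dot(v, w):
    return sum(vi * wi for vi, wi in zip(v, w))

def length(v):
    # |v| = sqrt(v . v)
    return math.sqrt(dot(v, v))

def angle_between(v, w):
    # cos(theta) = (v . w) / (|v| |w|), for nonzero v and w
    return math.acos(dot(v, w) / (length(v) * length(w)))

v = [3, 2]
w = [-2, 3]                  # v rotated by 90 degrees
print(length(v))             # sqrt(13), approximately 3.6056
print(angle_between(v, w))   # pi/2, since v . w = 0
```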
Activity 6.1.2.
- Sketch the vectors \(\mathbf v=\twovec32\) and \(\mathbf w=\twovec{-1}3\) using Figure 6.1.6.
- Find the lengths \(|{\mathbf v}| \) and \(|{\mathbf w}| \) using the dot product.
- Find the dot product \(\mathbf v\cdot\mathbf w\) and use it to find the angle between \(\mathbf v\) and \(\mathbf w\text{.}\)
- Consider the vector \(\mathbf x = \twovec{-2}{3}\text{.}\) Include it in your sketch in Figure 6.1.6 and find the angle between \(\mathbf v\) and \(\mathbf x\text{.}\)
- If two vectors are perpendicular, what can you say about their dot product? Explain your thinking.
- For what value of \(k\) is the vector \(\twovec6k\) perpendicular to \(\mathbf w\text{?}\)
-
Sage can be used to find lengths of vectors and their dot products. For instance, if v and w are vectors, then v.norm() gives the length of v and v * w gives \(\mathbf v\cdot\mathbf w\text{.}\) Suppose that
\begin{equation*} \mathbf v=\fourvec203{-2}, \hspace{24pt} \mathbf w=\fourvec1{-3}41\text{.} \end{equation*}
Use the Sage cell below to find \(|{\mathbf v}|\text{,}\) \(|{\mathbf w}|\text{,}\) \(\mathbf v\cdot\mathbf w\text{,}\) and the angle between \(\mathbf v\) and \(\mathbf w\text{.}\) You may use arccos to find the angle's measure expressed in radians.
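For readers working outside Sage, the same computation can be done in plain Python with the math module (where arccos is spelled math.acos):

```python
import math

v = [2, 0, 3, -2]
w = [1, -3, 4, 1]

vw = sum(vi * wi for vi, wi in zip(v, w))        # v . w
norm_v = math.sqrt(sum(vi * vi for vi in v))     # |v| = sqrt(17)
norm_w = math.sqrt(sum(wi * wi for wi in w))     # |w| = sqrt(27)
theta = math.acos(vw / (norm_v * norm_w))        # angle in radians
print(vw, norm_v, norm_w, math.degrees(theta))
```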
As we move forward, it will be important for us to recognize when vectors are perpendicular to one another. For instance, when vectors \(\mathbf v\) and \(\mathbf w\) are perpendicular, the angle between them is \(\theta=90^\circ\) and we have
\begin{equation*} \mathbf v\cdot\mathbf w = |{\mathbf v}|\,|{\mathbf w}|\cos(90^\circ) = 0\text{.} \end{equation*}
Therefore, the dot product between perpendicular vectors must be zero. This leads to the following definition.
We say that vectors \(\mathbf v\) and \(\mathbf w\) are orthogonal if \(\mathbf v\cdot\mathbf w=0\text{.}\)
In practical terms, two perpendicular vectors are orthogonal. However, the concept of orthogonality is somewhat more general because it allows one or both of the vectors to be the zero vector \(\mathbf 0\text{.}\)
We've now seen that the dot product gives us geometric information about vectors. It also provides a way to compare vectors. For example, consider the vectors \(\mathbf u\text{,}\) \(\mathbf v\text{,}\) and \(\mathbf w\text{,}\) shown in Figure 6.1.8. The vectors \(\mathbf v\) and \(\mathbf w\) seem somewhat similar as the directions they define are nearly the same. By comparison, \(\mathbf u\) appears rather dissimilar to both \(\mathbf v\) and \(\mathbf w\text{.}\) We will measure the similarity of vectors by finding the angle between them; the smaller the angle, the more similar the vectors.
Activity 6.1.3.
This activity explores two further uses of the dot product beginning with the similarity of vectors.
-
Our first task is to assess the similarity between various Wikipedia articles by forming vectors from each of five articles. In particular, one may download the text from a Wikipedia article, remove common words, such as “the” and “then,” count the number of times the remaining words appear in the article, and represent these counts in a vector.
For example, evaluate the following cell that loads in some special commands along with the vectors constructed from the Wikipedia articles on Veteran's Day, Memorial Day, Labor Day, the Golden Globe Awards, and the Super Bowl. For each of the five articles, you will see a list of the number of times 10 words appear in these articles. For instance, the word “act” appears 3 times in the Veteran's Day article and 0 times in the Labor Day article.
For each of the five articles, we obtain 604-dimensional vectors, which are named veterans, memorial, labor, golden, and super.
- Suppose that two articles have no words in common. What is the value of the dot product between their corresponding vectors? What does this say about the angle between these vectors?
- Suppose there are two articles on the same subject, yet one article is twice as long. What approximate relationship would you expect to hold between the two vectors? What does this say about the angle between them?
-
Use the Sage cell below to find the angle between the vector veterans and the other four vectors. To express the angle in degrees, use the degrees(x) command, which gives the number of degrees in x radians.
- Compare the four angles you have found and discuss what they mean about the similarity between the Veteran's Day article and the other four. Does your result reflect the nature of these five events?
-
Vectors are often used to represent how a quantity changes over time. For instance, the vector \(\mathbf s=\fourvec{78.3}{81.2}{82.1}{79.0}\) might represent the value of a company's stock on four consecutive days. When interpreted in this way, we call a vector a time series.
Evaluate the Sage cell below to see a representation of two time series \(\mathbf s_1\text{,}\) in blue, and \(\mathbf s_2\text{,}\) in orange, which we imagine represent the value of two stocks over a period of time. (This cell relies on some data loaded by the first cell in this activity.)
Even though one stock has a higher value than the other, the two appear to be related since they seem to rise and fall in roughly similar ways. We often say that they are correlated, and we would like to measure the degree to which they are correlated.
-
In order to compare the ways in which they rise and fall, we will first demean the time series; that is, for each time series, we will subtract its average value to obtain a new time series. There is a command demean(s) that returns the demeaned time series of s. Use the Sage cell below to demean the series \(\mathbf s_1\) and \(\mathbf s_2\) and plot them.
-
If the demeaned series are \(\tilde{\mathbf s}_1\) and \(\tilde{\mathbf s}_2\text{,}\) then the correlation between \(\mathbf s_1\) and \(\mathbf s_2\) is defined to be
\begin{equation*} corr(\mathbf s_1, \mathbf s_2) = \frac{\tilde{\mathbf s}_1\cdot\tilde{\mathbf s}_2} {|{\tilde{\mathbf s}_1}|\,|{\tilde{\mathbf s}_2}|}\text{.} \end{equation*}
Given the geometric interpretation of the dot product, the correlation equals the cosine of the angle between the demeaned time series, and therefore \(corr(\mathbf s_1,\mathbf s_2)\) is between -1 and 1.
Find the correlation between \(\mathbf s_1\) and \(\mathbf s_2\text{.}\)
-
Suppose that two time series are such that their demeaned time series are scalar multiples of one another, as in Figure 6.1.9.
Suppose we have time series \(\mathbf t_1\) and \(\mathbf t_2\) whose demeaned time series \(\tilde{\mathbf t}_1\) and \(\tilde{\mathbf t}_2\) are positive scalar multiples of one another. What is the angle between the demeaned vectors? What does this say about the correlation \(corr(\mathbf t_1, \mathbf t_2)\text{?}\)
- Suppose the demeaned time series \(\tilde{\mathbf t}_1\) and \(\tilde{\mathbf t}_2\) are negative scalar multiples of one another, what is the angle between the demeaned vectors? What does this say about the correlation \(corr(\mathbf t_1, \mathbf t_2)\text{?}\)
-
Use the Sage cell below to plot the time series \(\mathbf s_1\) and \(\mathbf s_3\) and find their correlation.
-
Use the Sage cell below to plot the time series \(\mathbf s_1\) and \(\mathbf s_4\) and find their correlation.
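The demean-and-correlate steps above can be sketched in plain Python (this demean is a stand-in for the command provided in the Sage cells; the values for s1 are the four from the earlier stock example, while s2 is made up):

```python
import math

def demean(s):
    """Subtract the series' average value from each entry."""
    mean = sum(s) / len(s)
    return [x - mean for x in s]

def corr(s1, s2):
    """Correlation: cosine of the angle between the demeaned series."""
    t1, t2 = demean(s1), demean(s2)
    dot = sum(a * b for a, b in zip(t1, t2))
    n1 = math.sqrt(sum(a * a for a in t1))
    n2 = math.sqrt(sum(b * b for b in t2))
    return dot / (n1 * n2)

s1 = [78.3, 81.2, 82.1, 79.0]
s2 = [90.1, 93.0, 94.2, 90.8]   # higher value, but rises and falls with s1
print(corr(s1, s2))             # close to 1
```

When one demeaned series is a positive multiple of the other, the correlation is exactly 1; a negative multiple gives exactly -1.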
\(k\)-means clustering
A typical problem in data science is to find some underlying patterns in a dataset. Suppose, for instance, that we have the set of 177 data points plotted in Figure 6.1.10. Notice that the points are not scattered around haphazardly; instead, they seem to form clusters. Our goal here is to develop a strategy for detecting the clusters.
To see how this could be useful, suppose we have medical data describing a group of patients, some of whom have been diagnosed with a specific condition, such as diabetes. Perhaps we have a record of age, weight, blood sugar, cholesterol, and other attributes for each patient. It could be that the data points for the group diagnosed as having the condition form a cluster that is somewhat distinct from the rest of the data. Suppose that we are able to identify that cluster and that we are then presented with a new patient that has not been tested for the condition. If the attributes for that patient place them in that cluster, we might identify them as being at risk for the condition and prioritize them for appropriate screenings.
If there are many attributes for each patient, the data may be high-dimensional and not easily visualized. We would therefore like to develop an algorithm that separates the data points into clusters without human intervention. We call the result a clustering.
The next activity introduces a technique, called \(k\)-means clustering, that helps us find clusterings. To do so, we will view the data points as vectors so that the distance between two data points equals the length of the vector joining them.
Activity 6.1.4.
To begin, we identify the centroid, or the average, of a set of vectors \(\mathbf v_1, \mathbf v_2, \ldots,\mathbf v_n\) as
\begin{equation*} \frac{\mathbf v_1 + \mathbf v_2 + \cdots + \mathbf v_n}{n}\text{.} \end{equation*}
-
Find the centroid of the vectors
\begin{equation*} \mathbf v_1=\twovec11, \mathbf v_2=\twovec41, \mathbf v_3=\twovec44. \end{equation*}
and sketch the vectors and the centroid using Figure 6.1.11. You may wish to simply plot the points represented by the tips of the vectors rather than drawing the vectors themselves.
Notice that the centroid lies in the center of the points defined by the vectors.
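A centroid is just a componentwise average, so a sketch in plain Python is short (the points here are hypothetical, so as not to answer the activity):

```python
def centroid(vectors):
    """Componentwise average of a list of vectors."""
    n = len(vectors)
    return [sum(coords) / n for coords in zip(*vectors)]

# Three hypothetical points in the plane
print(centroid([[0, 0], [2, 0], [1, 3]]))  # [1.0, 1.0]
```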
-
Now we'll illustrate an algorithm that forms clusterings. To begin, consider the following points, represented as vectors,
\begin{equation*} \mathbf v_1=\twovec{-2}{1}, \mathbf v_2=\twovec11, \mathbf v_3=\twovec12, \mathbf v_4=\twovec32, \end{equation*}
which are shown in Figure 6.1.12.
Suppose that we would like to group these points into \(k=2\) clusters. (Later on, we'll see how to choose an appropriate value for \(k\text{,}\) the number of clusters.) We begin by choosing two points \(c_1\) and \(c_2\) at random and declaring them to be the “centers” of the two clusters.
For example, suppose we randomly choose \(c_1=\mathbf v_2\) and \(c_2=\mathbf v_3\) as the center of two clusters. The cluster centered on \(c_1=\mathbf v_2\) will be the set of points that are closer to \(c_1=\mathbf v_2\) than to \(c_2=\mathbf v_3\text{.}\) Determine which of the four data points are in this cluster, which we denote by \(C_1\text{,}\) and circle them in Figure 6.1.12.
- The second cluster will consist of the data points that are closer to \(c_2=\mathbf v_3\) than \(c_1=\mathbf v_2\text{.}\) Determine which of the four points are in this cluster, which we denote by \(C_2\text{,}\) and circle them in Figure 6.1.12.
-
We now have a clustering with two clusters, but we will try to improve upon it in the following way. First, find the centroids of the two clusters; that is, redefine \(c_1\) to be the centroid of cluster \(C_1\) and \(c_2\) to be the centroid of \(C_2\text{.}\) Find those centroids and indicate them in Figure 6.1.13.
Now update the cluster \(C_1\) to be the set of points closer to \(c_1\) than \(c_2\text{.}\) Update the cluster \(C_2\) in a similar way and indicate the clusters in Figure 6.1.13.
-
Let's perform this last step again. That is, update the centroids \(c_1\) and \(c_2\) from the new clusters and then update the clusters \(C_1\) and \(C_2\text{.}\) Indicate your centroids and clusters in Figure 6.1.14.
Notice that this last step produces the same set of clusters so there is no point in repeating it. We declare this to be our final clustering.
This activity demonstrates our algorithm for finding a clustering. We first choose a value \(k\) and seek to break the data points into \(k\) clusters. The algorithm proceeds in the following way:
- Choose \(k\) points \(c_1, c_2, \ldots, c_k\) at random from our data set.
- Construct the cluster \(C_1\) as the set of data points closest to \(c_1\text{,}\) \(C_2\) as the set of data points closest to \(c_2\text{,}\) and so forth.
-
Repeat the following until the clusters no longer change:
- Find the centroids \(c_1, c_2,\ldots,c_k\) of the current clusters.
- Update the clusters \(C_1,C_2,\ldots,C_k\text{.}\)
The clusterings we find depend on the initial random choice of points \(c_1, c_2,\ldots, c_k\text{.}\) For instance, in the previous activity, we arrived, with the initial choice \(c_1= \mathbf v_2\) and \(c_2=\mathbf v_3\text{,}\) at the clustering:
\begin{equation*} \begin{array}{rcl} C_1 & {}={} & \{\mathbf v_1\} \\ C_2 & {}={} & \{\mathbf v_2, \mathbf v_3,\mathbf v_4\} \end{array} \end{equation*}
If we instead choose the initial points to be \(c_1 = \mathbf v_3\) and \(c_2=\mathbf v_4\text{,}\) we eventually find the clustering:
\begin{equation*} \begin{array}{rcl} C_1 & {}={} & \{\mathbf v_1, \mathbf v_2, \mathbf v_3\} \\ C_2 & {}={} & \{\mathbf v_4\} \end{array} \end{equation*}
Is there a way that we can determine which clustering is the better of the two? It seems like a better clustering will be one for which the points in a cluster are, on average, closer to the centroid of their cluster. If we have a clustering, we therefore define a function, called the objective, which measures the average of the square of the distance from each point to the centroid of the cluster to which that point belongs. A clustering with a smaller objective will have clusters more tightly centered around their centroids, which should result in a better clustering.
For example, when we obtain the clustering:
\begin{equation*} \begin{array}{rcl} C_1 & {}={} & \{\mathbf v_1, \mathbf v_2, \mathbf v_3\} \\ C_2 & {}={} & \{\mathbf v_4\} \end{array} \end{equation*}
with centroids \(c_1=\twovec{0}{4/3}\) and \(c_2=\mathbf v_4=\twovec32\text{,}\) we find the objective to be
\begin{equation*} \frac14\left(|{\mathbf v_1 - c_1}|^2 + |{\mathbf v_2 - c_1}|^2 + |{\mathbf v_3 - c_1}|^2 + |{\mathbf v_4 - c_2}|^2\right) = \frac14\left(\frac{37}{9} + \frac{10}{9} + \frac{13}{9} + 0\right) = \frac53\text{.} \end{equation*}
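The three-step algorithm and the objective can be sketched together in plain Python (a minimal illustration, not the book's Sage implementation; the four points are the ones from the activity):

```python
import random

def centroid(points):
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

def dist2(p, q):
    """Squared distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def assign(points, centers):
    """Group each point with its nearest center."""
    clusters = [[] for _ in centers]
    for p in points:
        nearest = min(range(len(centers)), key=lambda i: dist2(p, centers[i]))
        clusters[nearest].append(p)
    return clusters

def kmeans(points, k, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)          # step 1: random initial centers
    while True:
        clusters = assign(points, centers)      # step 2: form clusters
        new_centers = [centroid(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:              # step 3: repeat until stable
            return clusters, centers
        centers = new_centers

def objective(points, clusters, centers):
    """Average squared distance from each point to its cluster's centroid."""
    total = sum(dist2(p, c) for cl, c in zip(clusters, centers) for p in cl)
    return total / len(points)

points = [(-2, 1), (1, 1), (1, 2), (3, 2)]
clusters, centers = kmeans(points, k=2)
print(clusters, centers, objective(points, clusters, centers))
```

Since the initial centers are chosen at random, different seeds can converge to different clusterings; running the sketch several times and keeping the clustering with the smallest objective mirrors the strategy used in this section.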
Activity 6.1.5.
We'll now use the objective to compare clusterings and to choose an appropriate value of \(k\text{.}\)
-
In the previous activity, one initial choice of \(c_1\) and \(c_2\) led to the clustering:
\begin{equation*} \begin{array}{rcl} C_1 & {}={} & \{\mathbf v_1\} \\ C_2 & {}={} & \{\mathbf v_2, \mathbf v_3,\mathbf v_4\} \end{array} \end{equation*}
with centroids \(c_1=\mathbf v_1\) and \(c_2=\twovec{5/3}{5/3}\text{.}\) Find the objective of this clustering.
- We have now seen two clusterings and computed their objectives. Recall that our data set is shown in Figure 6.1.12. Which of the two clusterings feels like the better fit? How is this fit reflected in the values of the objectives?
-
Evaluating the following cell will load and display a data set consisting of 177 data points. This data set has the name
data
.Given this plot of the data, what would seem like a reasonable number of clusters? -
In the following cell, you may choose a value of \(k\) and then run the algorithm to determine and display a clustering and its objective. If you run the algorithm a few times with the same value of \(k\text{,}\) you will likely see different clusterings having different objectives. This is natural since our algorithm starts by making a random choice of points \(c_1,c_2,\ldots,c_k\text{,}\) and different choices may lead to different clusterings. Choose a value of \(k\) and run the algorithm a few times. Notice that clusterings having lower objectives seem to fit the data better. Repeat this experiment with a few different values of \(k\text{.}\)
-
For a given value of \(k\text{,}\) our strategy is to run the algorithm several times and choose the clustering with the smallest objective. After choosing a value of \(k\text{,}\) the following cell will run the algorithm 10 times and display the clustering having the smallest objective.
For each value of \(k\) between 2 and 9, find the clustering having the smallest objective and plot your findings in Figure 6.1.15.
This plot is called an elbow plot due to its shape. Notice how the objective decreases sharply when \(k\) is small, but then flattens out. This leads to a location, called the elbow, where the objective transitions from being sharply decreasing to relatively flat. This means that increasing \(k\) beyond the elbow does not significantly decrease the objective, which makes the elbow a good choice for \(k\text{.}\)
Where does the elbow occur in your plot above? How does this compare to the best value of \(k\) that you estimated by simply looking at the data in Item c?
Of course, we could increase \(k\) until each data point is its own cluster. However, this defeats the point of the technique, which is to group together nearby data points in the hope that they share common features, thus providing insight into the structure of the data.
We have now seen how our algorithm and the objective identify a reasonable value for \(k\text{,}\) the number of the clusters, and produce a good clustering having \(k\) clusters. Notice that we don't claim to have found the best clustering as the true test of any clustering will be in how it helps us understand the dataset and helps us make predictions for any new data that we may encounter.
Summary
This section introduced the dot product, which gives us the ability to investigate geometric relationships between vectors.
-
The dot product of two vectors \(\mathbf v\) and \(\mathbf w\) satisfies these properties:
\begin{equation*} \begin{array}{rcl} \mathbf v\cdot\mathbf v & {}={} & |{\mathbf v}|^2 \\ \mathbf v\cdot\mathbf w & {}={} & |{\mathbf v}| |{\mathbf w}| \cos\theta \\ \end{array} \end{equation*}
where \(\theta\) is the angle between \(\mathbf v\) and \(\mathbf w\text{.}\)
- The vectors \(\mathbf v\) and \(\mathbf w\) are orthogonal when \(\mathbf v\cdot\mathbf w= 0\text{.}\)
- We explored some applications of the dot product to the similarity of vectors, correlation of time series, and \(k\)-means clustering.
Exercises 6.1.4
Consider the vectors
- Find the lengths of the vectors, \(|{\mathbf v}| \) and \(|{\mathbf w}|\text{.}\)
- Find the dot product \(\mathbf v\cdot\mathbf w\) and use it to find the angle \(\theta\) between \(\mathbf v\) and \(\mathbf w\text{.}\)
Consider the three vectors
- Find the dot products \(\mathbf u\cdot\mathbf u\text{,}\) \(\mathbf u\cdot\mathbf v\text{,}\) and \(\mathbf u\cdot\mathbf w\text{.}\)
-
Use the dot products you just found to evaluate:
- \(|{\mathbf u}| \text{.}\)
- \((-5\mathbf u)\cdot\mathbf v\text{.}\)
- \(\mathbf u\cdot(-3\mathbf v+2\mathbf w)\text{.}\)
- \(|{\frac1{|{\mathbf u}|} \mathbf u}|\text{.}\)
- For what value of \(k\) is \(\mathbf u\) orthogonal to \(k\mathbf v+5\mathbf w\text{?}\)
Suppose that \(\mathbf v\) and \(\mathbf w\) are vectors where
- What is \(|{\mathbf v}| \text{?}\)
- What is the angle between \(\mathbf v\) and \(\mathbf w\text{?}\)
- Suppose that \(t\) is a scalar. Find the value of \(t\) for which \(\mathbf v\) is orthogonal to \(\mathbf w+t\mathbf v\text{?}\)
Suppose that \(\mathbf v=3\mathbf w\text{.}\)
- What is the relationship between \(\mathbf v\cdot\mathbf v\) and \(\mathbf w\cdot\mathbf w\text{?}\)
- What is the relationship between \(|{\mathbf v}| \) and \(|{\mathbf w}| \text{?}\)
- If \(\mathbf v=s\mathbf w\) for some scalar \(s\text{,}\) what is the relationship between \(\mathbf v\cdot\mathbf v\) and \(\mathbf w\cdot\mathbf w\text{?}\) What is the relationship between \(|{\mathbf v}|\) and \(|{\mathbf w}|\text{?}\)
- Suppose that \(\mathbf v=\threevec{3}{-2}2\text{.}\) Find a scalar \(s\) so that \(s\mathbf v\) has length 1.
Given vectors \(\mathbf v\) and \(\mathbf w\text{,}\) explain why
\begin{equation*} |{\mathbf v + \mathbf w}|^2 + |{\mathbf v - \mathbf w}|^2 = 2|{\mathbf v}|^2 + 2|{\mathbf w}|^2\text{.} \end{equation*}
Sketch two vectors \(\mathbf v\) and \(\mathbf w\) and explain why this fact is called the parallelogram law.
Consider the vectors
and a general vector \(\mathbf x=\threevec xyz\text{.}\)
- Write an equation in terms of \(x\text{,}\) \(y\text{,}\) and \(z\) that describes all the vectors \(\mathbf x\) orthogonal to \(\mathbf v_1\text{.}\)
- Write a linear system that describes all the vectors \(\mathbf x\) orthogonal to both \(\mathbf v_1\) and \(\mathbf v_2\text{.}\)
- Write the solution set to this linear system in parametric form. What type of geometric object does this solution set represent? Indicate with a rough sketch why this makes sense.
- Give a parametric description of all vectors orthogonal to \(\mathbf v_1\text{.}\) What type of geometric object does this represent? Indicate with a rough sketch why this makes sense.
Explain your responses to these questions.
- Suppose that \(\mathbf v\) is orthogonal to both \(\mathbf w_1\) and \(\mathbf w_2\text{.}\) Can you guarantee that \(\mathbf v\) is also orthogonal to any linear combination \(c_1\mathbf w_1+c_2\mathbf w_2\text{?}\)
- Suppose that \(\mathbf v\) is orthogonal to itself. What can you say about \(\mathbf v\text{?}\)
Suppose that \(\mathbf v_1\text{,}\) \(\mathbf v_2\text{,}\) and \(\mathbf v_3\) form a basis for \(\mathbb R^3\) and that each vector is orthogonal to the other two. Suppose also that \(\mathbf v\) is another vector in \(\mathbb R^3\text{.}\)
- Explain why \(\mathbf v=c_1\mathbf v_1+c_2\mathbf v_2+c_3\mathbf v_3\) for some scalars \(c_1\text{,}\) \(c_2\text{,}\) and \(c_3\text{.}\)
-
Beginning with the expression
\begin{equation*} \mathbf v\cdot\mathbf v_1 = (c_1\mathbf v_1+c_2\mathbf v_2+c_3\mathbf v_3)\cdot\mathbf v_1, \end{equation*}
apply the distributive property of dot products to explain why
\begin{equation*} c_1=\frac{\mathbf v\cdot\mathbf v_1}{\mathbf v_1\cdot\mathbf v_1}. \end{equation*}Find similar expressions for \(c_2\) and \(c_3\text{.}\)
-
Verify that
\begin{equation*} \mathbf v_1=\threevec121,\hspace{24pt} \mathbf v_2=\threevec1{-1}1,\hspace{24pt} \mathbf v_3=\threevec10{-1} \end{equation*}
form a basis for \(\mathbb R^3\) and that each vector is orthogonal to the other two. Use what you've discovered in this problem to write the vector \(\mathbf v=\threevec35{-1}\) as a linear combination of \(\mathbf v_1\text{,}\) \(\mathbf v_2\text{,}\) and \(\mathbf v_3\text{.}\)
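The coefficient formula from this problem is easy to check numerically. The sketch below uses the orthogonal basis above but a hypothetical vector \(\mathbf v\) (deliberately not the one in the exercise):

```python
def dot(v, w):
    return sum(a * b for a, b in zip(v, w))

# The orthogonal basis from the problem
v1, v2, v3 = [1, 2, 1], [1, -1, 1], [1, 0, -1]

# A hypothetical vector to expand in this basis
v = [1, 1, 1]

# c_i = (v . v_i) / (v_i . v_i)
coeffs = [dot(v, vi) / dot(vi, vi) for vi in (v1, v2, v3)]

# Verify that c_1 v_1 + c_2 v_2 + c_3 v_3 reconstructs v
recon = [sum(c * vi[j] for c, vi in zip(coeffs, (v1, v2, v3)))
         for j in range(3)]
print(coeffs, recon)
```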
Suppose that \(\mathbf v_1\text{,}\) \(\mathbf v_2\text{,}\) and \(\mathbf v_3\) are three nonzero vectors that are pairwise orthogonal; that is, each vector is orthogonal to the other two.
- Explain why \(\mathbf v_3\) cannot be a linear combination of \(\mathbf v_1\) and \(\mathbf v_2\text{.}\)
- Explain why this set of three vectors is linearly independent.
In the next chapter, we will consider certain \(n\times n\) matrices \(A\) and define a function
\begin{equation*} q(\mathbf x) = \mathbf x\cdot(A\mathbf x)\text{,} \end{equation*}
where \(\mathbf x\) is a vector in \(\mathbb R^n\text{.}\)
- Suppose that \(A=\begin{bmatrix} 1 & 2 \\ 2 & 1 \\ \end{bmatrix}\) and \(\mathbf x=\twovec21\text{.}\) Evaluate \(q(\mathbf x) = \mathbf x\cdot(A\mathbf x)\text{.}\)
- For a general vector \(\mathbf x=\twovec xy\text{,}\) evaluate \(q(\mathbf x) = \mathbf x\cdot(A\mathbf x)\) as an expression involving \(x\) and \(y\text{.}\)
- Suppose that \(\mathbf v\) is an eigenvector of a matrix \(A\) with associated eigenvalue \(\lambda\) and that \(\mathbf v\) has length 1. What is the value of the function \(q(\mathbf x)\text{?}\)
Back in Section 1.1, we saw that equations of the form \(Ax+By = C\) represent lines in the plane. In this exercise, we will see how this expression arises geometrically.
- Find the slope and vertical intercept of the line shown in Figure 6.1.16. Then write an equation for the line in the form \(y=mx+b\text{.}\)
- Suppose that \(\mathbf p\) is a point on the line, that \(\mathbf n\) is a vector perpendicular to the line, and that \(\mathbf x=\twovec xy\) is a general point on the line. Sketch the vector \(\mathbf x-\mathbf p\) and describe the angle between this vector and the vector \(\mathbf n\text{.}\)
- What is the value of the dot product \(\mathbf n\cdot(\mathbf x - \mathbf p)\text{?}\)
- Explain why the equation of the line can be written in the form \(\mathbf n\cdot\mathbf x = \mathbf n\cdot\mathbf p\text{.}\)
- Identify the vectors \(\mathbf p\) and \(\mathbf n\) for the line illustrated in Figure 6.1.16 and use them to write the equation of the line in terms of \(x\) and \(y\text{.}\) Verify that this expression is algebraically equivalent to the equation \(y=mx+b\) that you earlier found for this line.
- Explain why any line in the plane can be described by an equation having the form \(Ax+By = C\text{.}\) What is the significance of the vector \(\twovec AB\text{?}\)