3.5: Subspaces of \(\mathbb R^p\)
\(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\twovec}[2]{\begin{pmatrix} #1 \\ #2 \end{pmatrix} } \) \(\newcommand{\threevec}[3]{\begin{pmatrix} #1 \\ #2 \\ #3 \end{pmatrix} } \) \(\newcommand{\fourvec}[4]{\begin{pmatrix} #1 \\ #2 \\ #3 \\ #4 \end{pmatrix} } \) \(\newcommand{\fivevec}[5]{\begin{pmatrix} #1 \\ #2 \\ #3 \\ #4 \\ #5 \end{pmatrix} } \)
We saw that vectors in a basis for \(\mathbb R^p\) form the columns of an invertible matrix, which is necessarily a square matrix.
A basis for \(\mathbb R^p\) is useful because it creates a coordinate system that helps us navigate effectively in \(\mathbb R^p\text{.}\) Sometimes, however, we find ourselves dealing with only a subset of \(\mathbb R^p\text{.}\) In particular, if we are given an \(m\times n\) matrix \(A\text{,}\) we have been interested in both the span of the columns of \(A\) and the solution space to the homogeneous equation \(A\mathbf x = \mathbf 0\text{.}\) In this section, we will expand the concept of basis to describe sets like these.
Preview Activity 3.5.1.
Let's consider the following matrix \(A\) and its reduced row echelon form.
- Are the columns of \(A\) linearly independent? Do they span \(\mathbb R^3\text{?}\)
- Give a parametric description of the solution space to the homogeneous equation \(A\mathbf x = \mathbf 0\text{.}\)
- Explain how this parametric description produces two vectors \(\mathbf w_1\) and \(\mathbf w_2\) whose span is the solution space to the equation \(A\mathbf x = \mathbf 0\text{.}\)
- What can you say about the linear independence of the set of vectors \(\mathbf w_1\) and \(\mathbf w_2\text{?}\)
- Let's denote the columns of \(A\) as \(\mathbf v_1\text{,}\) \(\mathbf v_2\text{,}\) \(\mathbf v_3\text{,}\) and \(\mathbf v_4\text{.}\) Explain why \(\mathbf v_3\) and \(\mathbf v_4\) can be written as linear combinations of \(\mathbf v_1\) and \(\mathbf v_2\text{.}\)
- Explain why \(\mathbf v_1\) and \(\mathbf v_2\) are linearly independent and \(Span\{{\mathbf v_1,\mathbf v_2}\} = Span\{{\mathbf v_1, \mathbf v_2, \mathbf v_3, \mathbf v_4}\}\text{.}\)
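Since the matrix from the preview activity is not reproduced above, the relationships in the last two parts can be illustrated with a sketch in Python. The \(3\times4\) matrix below is a hypothetical stand-in with the same structure (two pivot columns); its last two columns are built as combinations of the first two.

```python
# Hypothetical columns of a 3x4 matrix standing in for the preview matrix.
v1 = [1, 0, 1]
v2 = [0, 1, 1]
v3 = [2, 1, 3]    # constructed as 2*v1 + 1*v2
v4 = [-1, 2, 1]   # constructed as -1*v1 + 2*v2

def comb(c1, c2, a, b):
    # the linear combination c1*a + c2*b, computed entrywise
    return [c1 * x + c2 * y for x, y in zip(a, b)]

# v3 and v4 are linear combinations of v1 and v2 ...
assert comb(2, 1, v1, v2) == v3
assert comb(-1, 2, v1, v2) == v4

# ... while v1 and v2 are linearly independent: neither is a scalar
# multiple of the other, since their nonzero entries sit in different
# positions.
print("v3 = 2*v1 + v2 and v4 = -v1 + 2*v2")
```

So \(Span\{{\mathbf v_1,\mathbf v_2}\}\) already contains every linear combination of all four columns, which is the point of the final part.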
Subspaces of \(\mathbb R^p\)
In the preview activity, we considered a \(3\times4\) matrix \(A\) and described two familiar sets of vectors. First, we described the solution space to the homogeneous equation \(A\mathbf x = \mathbf 0\text{,}\) which is a set of vectors in \(\mathbb R^4\text{.}\) Next, we described the span of the columns of \(A\text{,}\) which is a set of vectors in \(\mathbb R^3\text{.}\) As we will see shortly, each of these sets has a common feature that we would like to study further: if we choose some vectors in one of these sets, any linear combination of those vectors is also in the set. This observation motivates the following definition.
A subspace of \(\mathbb R^p\) is a nonempty subset of \(\mathbb R^p\) such that any linear combination of vectors in that set is also in the set.
Without mentioning it explicitly, we have frequently encountered and worked with subspaces earlier in our investigations. Let's look at some examples to get comfortable with this concept.
It will be helpful to first look at some examples of subsets of \(\mathbb R^2\) that are not subspaces. First, consider the set of vectors in the first quadrant of \(\mathbb R^2\text{;}\) that is, vectors of the form \(\twovec{x}{y}\) where both \(x,y \geq 0\text{.}\) This subset is illustrated on the left of Figure 3.5.3.
If this subset were a subspace of \(\mathbb R^2\text{,}\) any linear combination of vectors in the first quadrant must also be in the first quadrant. If we consider the vector \(\mathbf v=\twovec{3}{2}\text{,}\) however, we can form the linear combination \(-\mathbf v=\twovec{-3}{-2}\text{,}\) which is not in the first quadrant, as seen on the right of Figure 3.5.3. Therefore, the set of vectors in the first quadrant is not a subspace.
This example reveals something important, however. Suppose that \(S\) is a subspace and \(\mathbf v\) is a vector in \(S\text{.}\) Any scalar multiple of \(\mathbf v\) is a linear combination of \(\mathbf v\) and so must be in \(S\) as well. This means that the line containing \(\mathbf v\) must be in \(S\text{.}\)
With this in mind, let's consider another example where we look at vectors that are in either the first or third quadrant; that is, we will consider vectors of the form \(\twovec{x}{y}\) where either \(x,y\geq 0\) or \(x,y\leq 0\text{,}\) as seen on the left of Figure 3.5.4.
If \(\mathbf v\) is a vector in this set, then the line containing \(\mathbf v\) is in the set. However, if we consider the vectors \(\mathbf v = \twovec{0}{3}\) and \(\mathbf w=\twovec{-2}{0}\text{,}\) then their sum \(\mathbf v+\mathbf w = \twovec{-2}{3}\) is not in the subset, as seen on the right of Figure 3.5.4. This subset is also not a subspace.
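These two failures of closure are easy to check numerically. The sketch below, in plain Python, encodes each subset as a membership test and confirms that the specific vectors above leave the set.

```python
# Membership tests for the two subsets of R^2 discussed above.
def in_first_quadrant(v):
    x, y = v
    return x >= 0 and y >= 0

def in_first_or_third(v):
    x, y = v
    return (x >= 0 and y >= 0) or (x <= 0 and y <= 0)

# First quadrant: v is in the set, but the scalar multiple -v is not.
v = (3, 2)
assert in_first_quadrant(v)
assert not in_first_quadrant((-v[0], -v[1]))

# First or third quadrant: v and w are in the set, but v + w is not.
v, w = (0, 3), (-2, 0)
s = (v[0] + w[0], v[1] + w[1])
assert in_first_or_third(v) and in_first_or_third(w)
assert not in_first_or_third(s)
print("both subsets fail to be closed under linear combinations")
```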
Let's look in \(\mathbb R^2\) and consider \(S\text{,}\) the set of vectors lying on the \(x\) axis; that is, vectors having the form \(\twovec{x}{0}\text{,}\) as shown on the left of Figure 3.5.6. Any scalar multiple of a vector lying on the \(x\) axis also lies on the \(x\) axis. Also, any sum of vectors lying on the \(x\) axis also lies on the \(x\) axis. Therefore, \(S\) is a subspace of \(\mathbb R^2\text{.}\) Notice that \(S\) is the span of the vector \(\twovec{1}{0}\text{.}\)
In fact, any line through the origin forms a subspace, as seen on the right of Figure 3.5.6. Indeed, any such line is the span of a nonzero vector on the line.
Activity 3.5.2.
We will look at some more subspaces of \(\mathbb R^2\text{.}\)
- Explain why a line that does not pass through the origin, as seen to the right, is not a subspace of \(\mathbb R^2\text{.}\)
- Explain why any subspace of \(\mathbb R^2\) must contain the zero vector \(\mathbf 0\text{.}\)
- Explain why the subset \(S\) of \(\mathbb R^2\) that consists of only the zero vector \(\mathbf 0\) is a subspace of \(\mathbb R^2\text{.}\)
- Explain why the subspace \(S=\mathbb R^2\) is itself a subspace of \(\mathbb R^2\text{.}\)
- If \(\mathbf v\) and \(\mathbf w\) are two vectors in a subspace \(S\text{,}\) explain why \(Span\{{\mathbf v,\mathbf w}\}\) is contained in the subspace \(S\) as well.
- Suppose that \(S\) is a subspace of \(\mathbb R^2\) containing two vectors \(\mathbf v\) and \(\mathbf w\) that are not scalar multiples of one another. What is the subspace \(S\) in this case?
This activity introduces an important idea. Suppose that we have a subspace \(S\) of \(\mathbb R^p\) and that vectors \(\mathbf v_1, \mathbf v_2, \ldots, \mathbf v_n\) are in \(S\text{.}\) We know that any linear combination of these vectors must also be in the subspace \(S\text{.}\) Since the span of these vectors is the set of all linear combinations of the vectors, it must be the case that \(Span\{{\mathbf v_1,\mathbf v_2,\ldots,\mathbf v_n}\}\) is in the subspace \(S\) as well.
With this in mind, we can list all the subspaces of \(\mathbb R^2\text{.}\) If a subspace \(S\) contains a nonzero vector, then it must contain the line containing that vector. If \(S\) contains two vectors \(\mathbf v\) and \(\mathbf w\) that are not scalar multiples of one another, then \(Span\{{\mathbf v,\mathbf w}\} = \mathbb R^2\) so the subspace \(S\) must be all of \(\mathbb R^2\text{.}\) These are the only possibilities:
- The subspace \(S=\{\mathbf 0\}\) consisting of only the zero vector.
- A line through the origin.
- The subspace \(S=\mathbb R^2\text{.}\)
Subspaces are the simplest subsets of \(\mathbb R^p\text{;}\) they are subsets in which we can perform the usual operations of scalar multiplication and vector addition without leaving the subset. Just as we can create bases for \(\mathbb R^p\text{,}\) we can create bases for subspaces as well.
A basis for a subspace \(S\) of \(\mathbb R^p\) is a set of vectors in \(S\) that are linearly independent and span \(S\text{.}\) It can be seen that any two bases have the same number of vectors. Therefore, we say that the dimension of the subspace \(S\text{,}\) denoted \(\dim S\text{,}\) is the number of vectors in any basis.
With this in mind, we can describe the possible subspaces of \(\mathbb R^3\text{.}\)
- The subspace \(S=\{\mathbf 0\}\) is a subspace whose dimension is 0.
- A line through the origin is a subspace whose dimension is 1. Any nonzero vector on the line forms a basis.
- A plane through the origin is a subspace whose dimension is 2. For instance, the vectors \(\mathbf v_1\) and \(\mathbf v_2\) form a basis for the subspace shown here.
- Finally, the subspace \(S=\mathbb R^3\) is a subspace of \(\mathbb R^3\) whose dimension is 3.
Of course, there cannot be a subspace of \(\mathbb R^3\) whose dimension is four or higher since any set of four vectors in \(\mathbb R^3\) cannot be linearly independent.
We are most interested in two subspaces that are naturally associated with a matrix. With this background, we are now ready to introduce them.
The null space of \(A\)
When we looked at the linear independence of the columns of a matrix \(A\) in Section 2.4, we were led to consider the homogeneous equation \(A\mathbf x = \mathbf 0\text{.}\) We will see that this solution space forms a subspace, which we call the null space of \(A\text{.}\)
If \(A\) is an \(m\times n\) matrix, we call the subset of vectors \(\mathbf x\) in \(\mathbb R^n\) satisfying \(A\mathbf x = \mathbf 0\) the null space of \(A\text{.}\) We denote it as \(Nul(A)\text{.}\)
The linearity of matrix multiplication, expressed in Proposition 2.2.3, tells us that \(Nul(A)\) is a subspace of \(\mathbb R^n\text{.}\) If \(\mathbf x_1\) and \(\mathbf x_2\) are both vectors in \(Nul(A)\text{,}\) we know that \(A\mathbf x_1 = \mathbf 0\) and \(A\mathbf x_2 = \mathbf 0\text{.}\) A linear combination of \(\mathbf x_1\) and \(\mathbf x_2\) can be written as \(c_1\mathbf x_1 + c_2\mathbf x_2\text{.}\) This linear combination is in \(Nul(A)\) because
\begin{equation*} A(c_1\mathbf x_1 + c_2\mathbf x_2) = c_1A\mathbf x_1 + c_2A\mathbf x_2 = c_1\mathbf 0 + c_2\mathbf 0 = \mathbf 0\text{.} \end{equation*}
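This closure property can also be checked numerically. In the sketch below, the \(3\times4\) matrix and the two null vectors are hypothetical examples constructed for illustration, not taken from the text.

```python
# Hypothetical 3x4 matrix with two known vectors in its null space.
A = [[1, 0, 2, -1],
     [0, 1, 1,  2],
     [1, 1, 3,  1]]
x1 = [-2, -1, 1, 0]   # satisfies A x1 = 0
x2 = [ 1, -2, 0, 1]   # satisfies A x2 = 0

def matvec(M, v):
    # matrix-vector product, computed row by row
    return [sum(m * x for m, x in zip(row, v)) for row in M]

zero = [0, 0, 0]
assert matvec(A, x1) == zero and matvec(A, x2) == zero

# Any linear combination c1*x1 + c2*x2 also lands in Nul(A),
# exactly as the linearity argument predicts.
for c1, c2 in [(2, -3), (1, 1), (-5, 4)]:
    combo = [c1 * a + c2 * b for a, b in zip(x1, x2)]
    assert matvec(A, combo) == zero
print("linear combinations of null vectors stay in the null space")
```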
Activity 3.5.3.
We will explore some null spaces in this activity.
-
Consider the matrix
\begin{equation*} A=\left[\begin{array}{rrr} 1 & 3 & -1 \\ -2 & 0 & -4 \\ 1 & 2 & 0 \\ \end{array}\right] \end{equation*}
and give a parametric description of the null space \(Nul(A)\text{.}\)
- Give a basis for and state the dimension of \(Nul(A)\text{.}\)
- The null space \(Nul(A)\) is a subspace of \(\mathbb R^p\) for which \(p\text{?}\)
-
Now consider the matrix \(A\) whose reduced row echelon form is given:
\begin{equation*} A \sim \left[\begin{array}{rrrr} 1 & 2 & 0 & -3 \\ 0 & 0 & 1 & 2 \\ \end{array}\right]\text{.} \end{equation*}
Give a parametric description of \(Nul(A)\text{.}\)
- Notice that the parametric description gives a set of vectors that span \(Nul(A)\text{.}\) Explain why this set of vectors is linearly independent and hence forms a basis. What is the dimension of \(Nul(A)\text{?}\)
- For this matrix, \(Nul(A)\) is a subspace of \(\mathbb R^p\) for what \(p\text{?}\)
- What is the relationship between the dimensions of the matrix \(A\text{,}\) the number of pivot positions of \(A\) and the dimension of \(Nul(A)\text{?}\)
- Suppose that the columns of a matrix \(A\) are linearly independent. What can you say about \(Nul(A)\text{?}\)
- If \(A\) is an invertible \(n\times n\) matrix, what can you say about \(Nul(A)\text{?}\)
- Suppose that \(A\) is a \(5\times 10\) matrix and that \(Nul(A) = \mathbb R^{10}\text{.}\) What can you say about the matrix \(A\text{?}\)
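One way to check part a with software is sketched below in plain Python, using exact rational arithmetic so no external libraries are assumed; the helper names `rref` and `null_basis` are our own, not from any particular library.

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form over exact rationals; returns (R, pivot_cols)."""
    R = [[Fraction(x) for x in row] for row in M]
    rows, cols, pivots, r = len(R), len(R[0]), [], 0
    for c in range(cols):
        pr = next((i for i in range(r, rows) if R[i][c] != 0), None)
        if pr is None:
            continue                      # no pivot in this column
        R[r], R[pr] = R[pr], R[r]         # move pivot row into place
        piv = R[r][c]
        R[r] = [x / piv for x in R[r]]    # scale pivot to 1
        for i in range(rows):             # clear the rest of the column
            if i != r and R[i][c] != 0:
                f = R[i][c]
                R[i] = [a - f * b for a, b in zip(R[i], R[r])]
        pivots.append(c)
        r += 1
        if r == rows:
            break
    return R, pivots

def null_basis(M):
    """One basis vector of Nul(M) per free column, read off from the rref."""
    R, pivots = rref(M)
    n = len(M[0])
    basis = []
    for f in [c for c in range(n) if c not in pivots]:
        v = [Fraction(0)] * n
        v[f] = Fraction(1)                # set the free variable to 1
        for row, p in enumerate(pivots):
            v[p] = -R[row][f]             # solve for the basic variables
        basis.append(v)
    return basis

# The matrix from part a of the activity.
A = [[1, 3, -1], [-2, 0, -4], [1, 2, 0]]
basis = null_basis(A)
print(basis)   # one free variable, so Nul(A) is 1-dimensional
```

Here the null space turns out to be the span of the single vector \(\threevec{-2}{1}{1}\text{,}\) so \(\dim~Nul(A)=1\text{.}\)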
Let's consider an example of our own. Suppose we have a matrix \(A\) and its reduced row echelon form:
To find a parametric description of the solution space to \(A\mathbf x=\mathbf 0\text{,}\) imagine that we augment both \(A\) and its reduced row echelon form by a column of zeroes, which leads to the equations
Notice that \(x_3\text{,}\) \(x_4\text{,}\) and \(x_5\) are free variables so we rewrite these equations as
Writing this as a vector, we have
This expression says that any vector \(\mathbf x\) satisfying \(A\mathbf x= \mathbf 0\) is a linear combination of the vectors
It is easy to see that these vectors are linearly independent. Remember that we saw in Section 2.4 that this set of vectors is linearly independent if the equation \(c_1\mathbf v_1 + c_2\mathbf v_2 + c_3\mathbf v_3 = \mathbf 0\) implies that \(c_1=c_2=c_3 = 0\text{.}\) But this linear combination would be
This expression shows that \(c_1=c_2=c_3=0\) so the vectors are linearly independent.
Therefore, we see that the vectors
form a basis for \(Nul(A)\) showing that \(Nul(A)\) is a three-dimensional subspace of \(\mathbb R^5\text{.}\)
Notice that the dimension of \(Nul(A)\) is equal to the number of free variables, which equals the number of columns of \(A\) minus the number of pivot positions. This example illustrates a general principle that motivates the following definition.
The \(rank\) of a matrix \(A\text{,}\) denoted \(Rank(A)\text{,}\) is the number of pivot positions of \(A\text{.}\)
As illustrated by the previous example, if \(A\) is an \(m\times n\) matrix, then \(Nul(A)\) is a subspace of \(\mathbb R^n\) and
\begin{equation*} \dim~Nul(A) = n - Rank(A)\text{,} \end{equation*}
or, equivalently,
\begin{equation*} Rank(A) + \dim~Nul(A) = n\text{.} \end{equation*}
We may consider two extreme cases. If \(Nul(A)=\{\mathbf 0\}\text{,}\) then \(\dim~Nul(A) = 0\) so that \(Rank(A) = n\text{.}\) This means that the number of pivot positions is equal to the number of columns. In this case, there are no free variables in the description of the solutions to the homogeneous equation \(A\mathbf x = \mathbf 0\) so there is only the trivial solution. This is exactly what we are saying when we say that \(Nul(A) = \{\mathbf 0\}\text{.}\)
Similarly, if \(Nul(A) = \mathbb R^n\text{,}\) then \(\dim~Nul(A) = n\text{,}\) which implies that \(Rank(A) = 0\text{.}\) This means that \(A\) does not have any pivot positions and so \(A\) must be the zero matrix \(0\text{.}\) This is also consistent with what we already know: if \(Nul(A)=\mathbb R^n\text{,}\) then \(A\mathbf x = \mathbf 0\) for any vector \(\mathbf x\text{.}\) This can only be true if \(A = 0\text{.}\)
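The relationship between the rank and the dimension of the null space can be read directly off a reduced row echelon form by counting pivot columns and free columns. As a quick sketch in Python, using the echelon form that appeared in Activity 3.5.3:

```python
# The reduced row echelon form given in Activity 3.5.3.
R = [[1, 2, 0, -3],
     [0, 0, 1,  2]]
n = len(R[0])                   # number of columns of A

# Each nonzero row contributes one pivot: the column of its
# leading nonzero entry.
pivot_cols = [next(j for j, x in enumerate(row) if x != 0)
              for row in R if any(row)]
rank = len(pivot_cols)          # number of pivot positions
nullity = n - rank              # one null-space basis vector per free column
print(rank, nullity, rank + nullity)   # prints: 2 2 4
```

Here \(Rank(A)=2\) and \(\dim~Nul(A)=2\text{,}\) and they sum to \(n=4\text{,}\) the number of columns.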
The column space of \(A\)
Besides the null space, the other subspace that is naturally associated to a matrix \(A\) is its column space.
If \(A\) is an \(m\times n\) matrix, we call the span of its columns the column space of \(A\) and denote it as \(Col(A)\text{.}\)
Notice that the columns of \(A\) are vectors in \(\mathbb R^m\text{,}\) which means that any linear combination of the columns is also in \(\mathbb R^m\text{.}\) The column space is therefore a subset of \(\mathbb R^m\text{.}\)
We can also see \(Col(A)\) is a subspace of \(\mathbb R^m\text{.}\) First, notice that a vector is in \(Col(A)\) if it is a linear combination of the columns of \(A\text{.}\) This means that \(\mathbf b\) is in \(Col(A)\) if there is a vector \(\mathbf x\) such that \(A\mathbf x = \mathbf b\text{.}\) To see that \(Col(A)\) is a subspace of \(\mathbb R^m\text{,}\) we need to check that any linear combination of vectors in \(Col(A)\) is also in \(Col(A)\text{.}\) This follows, once again, from the linearity of matrix multiplication expressed in Proposition 2.2.3.
If vectors \(\mathbf b_1\) and \(\mathbf b_2\) are in \(Col(A)\text{,}\) then there are vectors \(\mathbf x_1\) and \(\mathbf x_2\) such that \(A\mathbf x_1 = \mathbf b_1\) and \(A\mathbf x_2 = \mathbf b_2\text{.}\) Therefore, if we have a linear combination of \(\mathbf b_1\) and \(\mathbf b_2\text{,}\) then
\begin{equation*} c_1\mathbf b_1 + c_2\mathbf b_2 = c_1A\mathbf x_1 + c_2A\mathbf x_2 = A(c_1\mathbf x_1 + c_2\mathbf x_2)\text{,} \end{equation*}
which shows that the linear combination is itself in the column space of \(A\text{.}\) Therefore, \(Col(A)\) is a subspace of \(\mathbb R^m\text{.}\)
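As with the null space, this closure argument can be verified numerically. The sketch below, in plain Python, reuses the \(3\times3\) matrix from Activity 3.5.3; the vectors \(\mathbf x_1\text{,}\) \(\mathbf x_2\) and the weights are arbitrary choices.

```python
A = [[1, 3, -1], [-2, 0, -4], [1, 2, 0]]

def matvec(M, v):
    # matrix-vector product, computed row by row
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# b1 and b2 are in Col(A) because each is A times some vector.
x1, x2 = [1, 0, 0], [0, 1, 1]
b1, b2 = matvec(A, x1), matvec(A, x2)

# A linear combination c1*b1 + c2*b2 equals A(c1*x1 + c2*x2),
# so it is again in the column space.
c1, c2 = 3, -2
lhs = [c1 * p + c2 * q for p, q in zip(b1, b2)]
x = [c1 * p + c2 * q for p, q in zip(x1, x2)]
assert lhs == matvec(A, x)
print("c1*b1 + c2*b2 = A(c1*x1 + c2*x2)")
```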
Activity 3.5.4.
We will explore some column spaces in this activity.
-
Consider the matrix
\begin{equation*} A= \left[\begin{array}{rrr} \mathbf v_1 & \mathbf v_2 & \mathbf v_3 \end{array}\right] = \left[\begin{array}{rrr} 1 & 3 & -1 \\ -2 & 0 & -4 \\ 1 & 2 & 0 \\ \end{array}\right]\text{.} \end{equation*}
Since \(Col(A)\) is the span of the columns, the vectors \(\mathbf v_1\text{,}\) \(\mathbf v_2\text{,}\) and \(\mathbf v_3\) naturally span \(Col(A)\text{.}\) Are these vectors linearly independent?
- Show that \(\mathbf v_3\) can be written as a linear combination of \(\mathbf v_1\) and \(\mathbf v_2\text{.}\) Then explain why \(Col(A)=Span\{{\mathbf v_1,\mathbf v_2}\}\text{.}\)
- Explain why the vectors \(\mathbf v_1\) and \(\mathbf v_2\) form a basis for \(Col(A)\text{.}\) This shows that \(Col(A)\) is a 2-dimensional subspace of \(\mathbb R^3\) and is therefore a plane.
-
Now consider the matrix \(A\) and its reduced row echelon form:
\begin{equation*} A = \left[\begin{array}{rrrr} -2 & -4 & 0 & 6 \\ 1 & 2 & 0 & -3 \\ \end{array}\right] \sim \left[\begin{array}{rrrr} 1 & 2 & 0 & -3 \\ 0 & 0 & 0 & 0 \\ \end{array}\right]\text{.} \end{equation*}
We will call the columns \(\mathbf v_1\text{,}\) \(\mathbf v_2\text{,}\) \(\mathbf v_3\text{,}\) and \(\mathbf v_4\text{.}\) Explain why \(\mathbf v_2\text{,}\) \(\mathbf v_3\text{,}\) and \(\mathbf v_4\) can each be written as a scalar multiple of \(\mathbf v_1\text{.}\)
- Explain why \(Col(A)\) is a 1-dimensional subspace of \(\mathbb R^2\) and is therefore a line.
- What is the relationship between the dimension \(\dim~Col(A)\) and the rank \(Rank(A)\text{?}\)
- What is the relationship between the dimensions of the column space \(Col(A)\) and the null space \(Nul(A)\text{?}\)
- If \(A\) is an invertible \(9\times9\) matrix, what can you say about the column space \(Col(A)\text{?}\)
- If \(Col(A)=\{\mathbf 0\}\text{,}\) what can you say about the matrix \(A\text{?}\)
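For the \(2\times4\) matrix in the activity, the single pivot column alone spans the column space, and this is easy to verify directly. A sketch in plain Python:

```python
# The 2x4 matrix from the activity; its rref has one pivot, in column 0.
A = [[-2, -4, 0,  6],
     [ 1,  2, 0, -3]]
cols = list(zip(*A))          # the columns as tuples: v1, v2, v3, v4
v1, v2, v3, v4 = cols

def scale(c, v):
    # the scalar multiple c*v
    return tuple(c * x for x in v)

# Every column is a multiple of the pivot column v1 ...
assert v2 == scale(2, v1)
assert v3 == scale(0, v1)
assert v4 == scale(-3, v1)
# ... so {v1} is a basis for Col(A) and dim Col(A) = Rank(A) = 1.
print("Col(A) = Span{v1}, a line in R^2")
```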
Once again, we will consider the matrix \(A\) and its reduced row echelon form:
We will denote the columns as \(\mathbf v_1,\mathbf v_2,\ldots,\mathbf v_5\text{.}\)
It is certainly true that \(Col(A) = Span\{{\mathbf v_1,\mathbf v_2,\ldots,\mathbf v_5}\}\) by the definition of the column space. However, the reduced row echelon form of the matrix shows us that the vectors are not linearly independent so \(\mathbf v_1,\mathbf v_2,\ldots,\mathbf v_5\) do not form a basis for \(Col(A)\text{.}\)
From the reduced row echelon form, however, we can see that
This means that any linear combination of \(\mathbf v_1,\mathbf v_2,\ldots,\mathbf v_5\) can be written as a linear combination of just \(\mathbf v_1\) and \(\mathbf v_2\text{.}\) Therefore, we see that \(Col(A) = Span\{{\mathbf v_1,\mathbf v_2}\}\text{.}\) Moreover, the reduced row echelon form shows that \(\mathbf v_1\) and \(\mathbf v_2\) are linearly independent, which implies that they form a basis for \(Col(A)\text{.}\) Therefore, \(Col(A)\) is a 2-dimensional subspace of \(\mathbb R^3\text{,}\) which is a plane in \(\mathbb R^3\text{,}\) having basis
In general, a column without a pivot position can be written as a linear combination of the columns that have pivot positions. This means that a basis for \(Col(A)\) will always be given by the columns of \(A\) having pivot positions. Therefore, the dimension of the column space \(Col(A)\) equals the rank \(Rank(A)\text{:}\)
\begin{equation*} \dim~Col(A) = Rank(A)\text{.} \end{equation*}
If \(A\) is an \(m\times n\) matrix, this also says that
\begin{equation*} \dim~Col(A) + \dim~Nul(A) = n\text{.} \end{equation*}
If \(A\) has a pivot position in every row, then \(\dim~Col(A) = Rank(A) = m\text{.}\) This implies that \(Col(A)\) is an \(m\)-dimensional subspace of \(\mathbb R^m\) and therefore, \(Col(A) = \mathbb R^m\text{.}\) This agrees with our earlier explorations in which we found that the columns of a matrix span \(\mathbb R^m\) if there is a pivot in every row.
At the other extreme, suppose that \(\dim~Col(A) = 0\text{.}\) The matrix \(A\) then has no pivots, which means that \(A\) must be the zero matrix \(0\text{.}\)
Summary
Once again, we find ourselves revisiting our two fundamental questions, expressed in Question 1.4.2, concerning the existence and uniqueness of solutions to linear systems. The column space \(Col(A)\) contains all the vectors \(\mathbf b\) for which the equation \(A\mathbf x = \mathbf b\) is consistent. The null space \(Nul(A)\) describes the solution space to the equation \(A\mathbf x = \mathbf 0\text{,}\) and its dimension tells us whether this equation has a unique solution.
- A subset \(S\) of \(\mathbb R^p\) is a subspace of \(\mathbb R^p\) if any linear combination of vectors in \(S\) is also in \(S\text{.}\) This essentially means that we can perform the usual vector operations of scalar multiplication and vector addition without leaving \(S\text{.}\) A basis of a subspace \(S\) is a linearly independent set of vectors in \(S\) whose span is \(S\text{.}\)
- If \(A\) is an \(m\times n\) matrix, then its null space \(Nul(A)\) is the solution space to the homogeneous equation \(A\mathbf x = \mathbf 0\text{.}\) It is a subspace of \(\mathbb R^n\text{.}\)
- A basis for \(Nul(A)\) is found through a parametric description of the solution space of \(A\mathbf x = \mathbf 0\text{.}\) We see that \(\dim~Nul(A) = n - Rank(A)\text{.}\)
- The column space \(Col(A)\) is the span of the columns of \(A\) and forms a subspace of \(\mathbb R^m\text{.}\)
- A basis for \(Col(A)\) is found from the columns of \(A\) that have pivot positions. The dimension is therefore \(\dim~Col(A) = Rank(A)\text{.}\)
Exercises
Suppose that \(A\) and its reduced row echelon form are
- The null space \(Nul(A)\) is a subspace of \(\mathbb R^p\) for what \(p\text{?}\) The column space \(Col(A)\) is a subspace of \(\mathbb R^p\) for what \(p\text{?}\)
- What are the dimensions \(\dim~Nul(A)\) and \(\dim~Col(A)\text{?}\)
- Find a basis for the column space \(Col(A)\text{.}\)
- Find a basis for the null space \(Nul(A)\text{.}\)
Suppose that
- Is the vector \(\threevec{0}{-1}{-1}\) in \(Col(A)\text{?}\)
- Is the vector \(\fourvec{2}{1}{0}{2}\) in \(Col(A)\text{?}\)
- Is the vector \(\threevec{2}{-2}{0}\) in \(Nul(A)\text{?}\)
- Is the vector \(\fourvec{1}{-1}{3}{-1}\) in \(Nul(A)\text{?}\)
- Is the vector \(\fourvec{1}{0}{1}{-1}\) in \(Nul(A)\text{?}\)
Determine whether the following statements are true or false and provide a justification for your response. Unless otherwise stated, assume that \(A\) is an \(m\times n\) matrix.
- If \(A\) is a \(127\times 341\) matrix, then \(Nul(A)\) is a subspace of \(\mathbb R^{127}\text{.}\)
- If \(\dim~Nul(A) = 0\text{,}\) then the columns of \(A\) are linearly independent.
- If \(Col(A) = \mathbb R^m\text{,}\) then \(A\) is invertible.
- If \(A\) has a pivot position in every column, then \(Nul(A) = \mathbb R^m\text{.}\)
- If \(Col(A) = \mathbb R^m\) and \(Nul(A) = \{\mathbf 0\}\text{,}\) then \(A\) is invertible.
Explain why the following statements are true.
- If \(B\) is invertible, then \(Nul(BA) = Nul(A)\text{.}\)
- If \(B\) is invertible, then \(Col(AB) = Col(A)\text{.}\)
- If \(A\sim A'\text{,}\) then \(Nul(A) = Nul(A')\text{.}\)
For each of the following conditions, construct a \(3\times 3\) matrix having the given properties.
- \(\dim~Nul(A) = 0\text{.}\)
- \(\dim~Nul(A) = 1\text{.}\)
- \(\dim~Nul(A) = 2\text{.}\)
- \(\dim~Nul(A) = 3\text{.}\)
Suppose that \(A\) is a \(3\times 4\) matrix.
- Is it possible that \(\dim~Nul(A) = 0\text{?}\)
- If \(\dim~Nul(A) = 1\text{,}\) what can you say about \(Col(A)\text{?}\)
- If \(\dim~Nul(A) = 2\text{,}\) what can you say about \(Col(A)\text{?}\)
- If \(\dim~Nul(A) = 3\text{,}\) what can you say about \(Col(A)\text{?}\)
- If \(\dim~Nul(A) = 4\text{,}\) what can you say about \(Col(A)\text{?}\)
Consider the vectors
and suppose that \(A\) is a matrix such that \(Col(A)=Span\{{\mathbf v_1,\mathbf v_2}\}\) and \(Nul(A) = Span\{{\mathbf w_1,\mathbf w_2}\}\text{.}\)
- What are the dimensions of \(A\text{?}\)
- Find such a matrix \(A\text{.}\)
Suppose that \(A\) is an \(8\times 8\) matrix and that \(\det A = 14\text{.}\)
- What can you conclude about \(Nul(A)\text{?}\)
- What can you conclude about \(Col(A)\text{?}\)
Suppose that \(A\) is a matrix and there is an invertible matrix \(P\) such that
- What can you conclude about \(Nul(A)\text{?}\)
- What can you conclude about \(Col(A)\text{?}\)
In this section, we saw that the solution space to the homogeneous equation \(A\mathbf x = \mathbf 0\) is a subspace of \(\mathbb R^p\) for some \(p\text{.}\) In this exercise, we will investigate whether the solution space to another equation \(A\mathbf x = \mathbf b\) can form a subspace.
Let's consider the matrix
- Find a parametric description of the solution space to the homogeneous equation \(A\mathbf x = \mathbf 0\text{.}\)
-
Graph the solution space to the homogeneous equation to the right.
- Find a parametric description of the solution space to the equation \(A\mathbf x = \twovec{4}{-2}\) and graph it above.
- Is the solution space to the equation \(A\mathbf x = \twovec{4}{-2}\) a subspace of \(\mathbb R^2\text{?}\)
- Find a parametric description of the solution space to the equation \(A\mathbf x=\twovec{-8}{4}\) and graph it above.
- What can you say about all the solution spaces to equations of the form \(A\mathbf x = \mathbf b\) when \(\mathbf b\) is a vector in \(Col(A)\text{?}\)
- Suppose that the solution space to the equation \(A\mathbf x = \mathbf b\) forms a subspace. Explain why it must be true that \(\mathbf b = \mathbf 0\text{.}\)