
# D: Sample First Midterm


Here are some worked problems typical for what you might expect on a first midterm examination.

1. Solve the following linear system. Write the solution set in vector form. Check your solution. Write one particular solution and one homogeneous solution, if they exist. What does the solution set look like geometrically?
$$\begin{array}{rrrrrr} x &+&3y & &&= 4\\ x &-& 2y &+& z &= 1\\ 2x &+&y &+& z &= 5\\ \end{array}$$

2.
Consider the system
$$\left\{ \begin{array}{rrrrrrrrr} x&&&-&z&+&2w&=&-1\\ x&+&y&+&z&-&w&=&2\\ &-&y&-&2z&+&3w&=&-3\\ 5x&+&2y&-&z&+&4w&=&1\\ \end{array} \right.$$

a) Write an augmented matrix for this system.

b) Use elementary row operations to find its reduced row echelon form.

c) Write the solution set for the system in the form $$S=\{X_0+\sum_i \mu_i Y_i:\mu_i\in \mathbb R\} .$$

d) What are the vectors $$X_{0}$$ and $$Y_{i}$$ called $$\textit{and}$$ which matrix equations do they solve?

e) Check separately that $$X_{0}$$ and each $$Y_{i}$$ solve the matrix systems you claimed they solved in part (d).

3. Use row operations to invert the matrix
$$\begin{pmatrix} 1&2&3&4\\ 2&4&7&11\\ 3&7&14&25\\ 4&11&25&50 \end{pmatrix}$$

4. Let $$M = \left ( \begin{array}{cc} 2 & 1 \\ 3 & -1 \end{array} \right )$$. Calculate $$M^{T}M^{-1}$$. Is $$M$$ symmetric? What is the trace of the transpose of $$f(M)$$, where $$f(x) = x^{2} -1$$?

5. In this problem $$M$$ is the matrix
$$M=\begin{pmatrix}\cos\theta & \sin\theta \\ -\sin\theta&\cos\theta\end{pmatrix}$$
and $$X$$ is the vector
$$X=\begin{pmatrix}x\\y \end{pmatrix}\, .$$
Calculate all possible dot products between the vectors $$X$$ and $$MX$$. Compute the lengths of $$X$$ and $$MX$$. What is the angle between the vectors $$MX$$ and $$X$$? Draw a picture of these vectors in the plane. For what values of $$\theta$$ do you expect equality in the triangle and Cauchy-Schwarz inequalities?

6. Let $$M$$ be the matrix
$$\begin{pmatrix} 1&0&0&1&0&0\\ 0&1&0&0&1&0\\ 0&0&1&0&0&1\\ 0&0&0&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1 \end{pmatrix}$$
Find a formula for $$M^{k}$$ for any positive integer power $$k$$. Try some simple examples like $$k=2,3$$ if confused.

7.
$$\textit{Determinants:}$$ The determinant $${\rm det}\, M$$ of a $$2\times 2$$ matrix $$M=\begin{pmatrix}a&b\\c&d\end{pmatrix}$$ is defined by
$${\rm det}\, M =ad -bc\, .$$

a) For which values of $${\rm det}\, M$$ does $$M$$ have an inverse?
b) Write down all $$2\times 2$$ bit matrices with determinant 1. (Remember bits are either 0 or 1 and $$1+1=0$$.)
c) Write down all $$2\times 2$$ bit matrices with determinant 0.
d) Use one of the above examples to show why the following statement is FALSE.

$$\textit{Square matrices with the same determinant are always row equivalent.}$$

8. What does it mean for a function to be linear? Check that integration is a linear function from $$V$$ to $$\mathbb{R}$$, where $$V = \{ f: \mathbb{R} \to \mathbb{R} \mid f \textrm{ is integrable}\}$$ is a vector space over $$\mathbb{R}$$ with usual addition and scalar multiplication.

9. What are the four main things we need to define for a vector space? Which of the following is a vector space over $$\mathbb{R}$$? For those that are not vector spaces, modify one part of the definition to make it into a vector space.

a) $$V = \{ 2 \times 2 \textrm{ matrices with entries in } \mathbb{R} \}$$, usual matrix addition, and $$k \cdot \left ( \begin{array}{cc} a & b \\ c & d \end{array} \right ) = \left ( \begin{array}{cc} ka & b \\ kc & d \end{array} \right )$$ for $$k \in \mathbb{R}$$.

b) $$V = \{ \textrm{polynomials with complex coefficients of degree } \leq 3 \}$$, with usual addition and scalar multiplication of polynomials.

c) $$V = \{ \textrm{vectors in } \mathbb{R}^{3} \textrm{ with at least one entry containing a 1} \}$$, with usual addition and scalar multiplication.

10.
$$\textit{Subspaces:}$$ If $$V$$ is a vector space, we say that $$U$$ is a $$\textit{subspace}$$ of $$V$$ when the set $$U$$ is also a vector space, using the vector addition and scalar multiplication rules of the vector space $$V$$. (Remember that $$U\subset V$$ says that "$$U$$ is a subset of $$V$$", $$\textit{i.e.}$$, all elements of $$U$$ are also elements of $$V$$. The symbol $$\forall$$ means "for all" and $$\in$$ means "is an element of".)

Explain why additive closure ($$u+v\in U$$ $$\forall$$ $$u,v\in U$$) and multiplicative closure ($$r\cdot u\in U$$ $$\forall$$ $$r\in \mathbb{R}$$, $$u\in U$$) ensure that (i) the zero vector $$0\in U$$ and (ii) every $$u\in U$$ has an additive inverse.

In fact it suffices to check closure under addition and scalar multiplication to verify that $$U$$ is a vector space. Check whether the following choices of $$U$$ are vector spaces:

a) $$U=\left\{\begin{pmatrix}x\\y\\0\end{pmatrix}:x,y\in \mathbb{R}\right\}$$

b) $$U=\left\{\begin{pmatrix}1\\0\\z\end{pmatrix}:z\in \mathbb{R}\right\}$$

## Solutions

1.
$$\textit{As an additional exercise, write out the row operations above the }\sim\textit{ signs below:}$$
$$\left(\begin{array}{rrr|r} 1&3&0&4\\1&-2&1&1\\2&1&1&5 \end{array}\right) \sim \left(\begin{array}{rrr|r} 1&3&0&4\\0&-5&1&-3\\0&-5&1&-3 \end{array}\right) \sim \left(\begin{array}{rrr|r} 1&0&\frac{3}{5}&\frac{11}{5}\\0&1&-\frac{1}{5}&\frac{3}{5}\\0&0&0&0 \end{array}\right)$$
Solution set
$$\left\{\begin{pmatrix}x\\y\\ z\end{pmatrix}=\begin{pmatrix}\frac{11}{5}\\ \frac{3}{5}\\ 0\end{pmatrix}+\mu \begin{pmatrix}-\frac{3}{5}\\\frac{1}{5}\\1\end{pmatrix}\colon \mu \in \mathbb{R} \right\}$$
Geometrically this represents a line in $$\mathbb{R}^{3}$$ through the point $$\begin{pmatrix}\frac{11}{5}\\ \frac{3}{5}\\ 0\end{pmatrix}$$ and running parallel to the vector $$\begin{pmatrix}-\frac{3}{5}\\\frac{1}{5}\\1\end{pmatrix}$$.

$$\textit{A}$$ particular solution is $$\begin{pmatrix}\frac{11}{5}\\ \frac{3}{5}\\ 0\end{pmatrix}$$ and a homogeneous solution is $$\begin{pmatrix}-\frac{3}{5}\\\frac{1}{5}\\1\end{pmatrix}$$.

As a double check note that
$$\left(\begin{array}{rrr} 1&3&0\\1&-2&1\\2&1&1 \end{array}\right)\ \begin{pmatrix}\frac{11}{5}\\ \frac{3}{5}\\ 0\end{pmatrix}=\begin{pmatrix}4\\ 1\\ 5\end{pmatrix} \mbox{ and } \left(\begin{array}{rrr} 1&3&0\\1&-2&1\\2&1&1 \end{array}\right)\ \begin{pmatrix}-\frac{3}{5}\\\frac{1}{5}\\1\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix}\, .$$
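The double check above is easy to reproduce numerically; here is a minimal NumPy sketch (all array values copied from the solution above):

```python
import numpy as np

# Coefficient matrix and right-hand side of the system in problem 1
A = np.array([[1, 3, 0],
              [1, -2, 1],
              [2, 1, 1]], dtype=float)
b = np.array([4, 1, 5], dtype=float)

x0 = np.array([11/5, 3/5, 0])   # particular solution
y = np.array([-3/5, 1/5, 1])    # homogeneous solution

print(np.allclose(A @ x0, b))   # → True
print(np.allclose(A @ y, 0))    # → True
```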

2.

a) $$\textit{Again, write out the row operations as an additional exercise.}$$
$$\left(\begin{array}{rrrr|r} 1&0&-1&2&-1\\ 1&1&1&-1&2\\ 0&-1&-2&3&-3\\ 5&2&-1&4&1 \end{array}\right)$$

b)
$$\sim \left(\begin{array}{rrrr|r} 1&0&-1&2&-1\\ 0&1&2&-3&3\\ 0&-1&-2&3&-3\\ 0&2&4&-6&6 \end{array}\right) \sim \left(\begin{array}{rrrr|r} 1&0&-1&2&-1\\ 0&1&2&-3&3\\ 0&0&0&0&0\\ 0&0&0&0&0 \end{array}\right)$$

c)
Solution set
$$\left\{X=\begin{pmatrix}-1\\3 \\ 0\\0\end{pmatrix} +\mu_{1} \begin{pmatrix} 1 \\-2\\1\\0\end{pmatrix} +\mu_{2} \begin{pmatrix}-2\\3\\0\\1\end{pmatrix} \colon \mu_{1},\mu_{2} \in \mathbb{R} \right\}\, .$$

d)
The vector $$X_{0}=\begin{pmatrix}-1\\3 \\ 0\\0\end{pmatrix}$$ is $$\textit{a}$$ particular solution and the vectors $$Y_{1}=\begin{pmatrix} 1 \\-2\\1\\0\end{pmatrix}$$ and $$Y_{2}= \begin{pmatrix}-2\\3\\0\\1\end{pmatrix}$$ are homogeneous solutions. Calling $$M=\left(\begin{array}{rrrr} 1&0&-1&2\\ 1&1&1&-1\\ 0&-1&-2&3\\ 5&2&-1&4 \end{array}\right)$$ and $$V=\begin{pmatrix}-1\\2\\-3\\1\end{pmatrix}$$, they obey
$$MX=V\, ,\qquad M Y_{1}=0=MY_{2}\, .$$
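Part (e) can also be carried out by machine; a short NumPy check of the claims in part (d), with all values copied from above:

```python
import numpy as np

# Coefficient matrix and right-hand side from problem 2
M = np.array([[1, 0, -1, 2],
              [1, 1, 1, -1],
              [0, -1, -2, 3],
              [5, 2, -1, 4]], dtype=float)
V = np.array([-1, 2, -3, 1], dtype=float)

X0 = np.array([-1, 3, 0, 0], dtype=float)   # particular solution
Y1 = np.array([1, -2, 1, 0], dtype=float)   # homogeneous solution
Y2 = np.array([-2, 3, 0, 1], dtype=float)   # homogeneous solution

print(np.allclose(M @ X0, V))   # → True
print(np.allclose(M @ Y1, 0))   # → True
print(np.allclose(M @ Y2, 0))   # → True
```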

e) This amounts to performing explicitly the matrix manipulations $$MX_{0}-V$$, $$MY_{1}$$, $$MY_{2}$$ and checking that they all return the zero vector.

3.
$$\textit{As usual, be sure to write out the row operations above the }\sim\textit{'s so your work can be easily checked.}$$
$$\phantom{\sim} \left( \begin{array}{rrrr|rrrr} 1&2&3&4&1&0&0&0\\ 2&4&7&11&0&1&0&0\\ 3&7&14&25&0&0&1&0\\ 4&11&25&50&0&0&0&1 \end{array}\right)$$
$$\sim \left( \begin{array}{rrrr|rrrr} 1&2&3&4&1&0&0&0\\ 0&0&1&3&-2&1&0&0\\ 0&1&5&13&-3&0&1&0\\ 0&3&13&34&-4&0&0&1 \end{array}\right)$$
$$\sim \left( \begin{array}{rrrr|rrrr} 1&0&-7&-22&7&0&-2&0\\ 0&1&5&13&-3&0&1&0\\ 0&0&1&3&-2&1&0&0\\ 0&0&-2&-5&5&0&-3&1 \end{array}\right)$$
$$\sim \left( \begin{array}{rrrr|rrrr} 1&0&0&-1&-7&7&-2&0\\ 0&1&0&-2&7&-5&1&0\\ 0&0&1&3&-2&1&0&0\\ 0&0&0&1&1&2&-3&1 \end{array}\right)$$
$$\sim \left( \begin{array}{rrrr|rrrr} 1&0&0&0&-6&9&-5&1\\ 0&1&0&0&9&-1&-5&2\\ 0&0&1&0&-5&-5&9&-3\\ 0&0&0&1&1&2&-3&1 \end{array}\right)\, .$$
Check
$$\begin{pmatrix} 1&2&3&4\\2&4&7&11\\3&7&14&25\\4&11&25&50 \end{pmatrix} \begin{pmatrix} -6&9&-5&1\\9&-1&-5&2\\-5&-5&9&-3\\1&2&-3&1 \end{pmatrix} = \begin{pmatrix} 1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1 \end{pmatrix}\, .$$
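The same check runs in a few lines of NumPy, comparing the claimed inverse against `np.linalg.inv`:

```python
import numpy as np

# Matrix from problem 3 and the inverse found by row reduction
A = np.array([[1, 2, 3, 4],
              [2, 4, 7, 11],
              [3, 7, 14, 25],
              [4, 11, 25, 50]], dtype=float)
A_inv = np.array([[-6, 9, -5, 1],
                  [9, -1, -5, 2],
                  [-5, -5, 9, -3],
                  [1, 2, -3, 1]], dtype=float)

print(np.allclose(A @ A_inv, np.eye(4)))     # → True
print(np.allclose(A_inv, np.linalg.inv(A)))  # → True
```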

4.
$$M^{T} M^{-1}= \begin{pmatrix}2&3\\1&-1\end{pmatrix}\begin{pmatrix}\frac{1}{5}&\frac{1}{5}\\\frac{3}{5}&-\frac{2}{5}\end{pmatrix} =\begin{pmatrix}\frac{11}{5}&-\frac{4}{5}\\-\frac{2}{5}&\frac{3}{5}\end{pmatrix}\, .$$
Since $$M^{T}M^{-1}\neq I$$, it follows $$M^{T}\neq M$$ so $$M$$ is $$\textit{not}$$ symmetric.
Finally
$${\rm tr} f(M)^{T}= {\rm tr} f(M) = {\rm tr}(M^{2}-I)={\rm tr}\begin{pmatrix}2&1\\3&-1\end{pmatrix}\begin{pmatrix}2&1\\3&-1\end{pmatrix}-{\rm tr} I$$
$$=(2\cdot 2+1\cdot 3)+(3\cdot 1+(-1)\cdot(-1))-2=9\, .$$
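A quick numeric confirmation of the trace computation (using the fact that the trace is invariant under transposition):

```python
import numpy as np

M = np.array([[2, 1],
              [3, -1]], dtype=float)
fM = M @ M - np.eye(2)   # f(M) = M^2 - I

print(np.trace(fM.T))    # → 9.0
print(np.trace(fM))      # → 9.0
```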

5. First $$X\cdot (MX)=X^{T} M X = \begin{pmatrix}x & y\end{pmatrix} \begin{pmatrix}\cos\theta &\sin\theta \\ -\sin\theta & \cos\theta\end{pmatrix} \begin{pmatrix}x \\ y\end{pmatrix}$$
$$= \begin{pmatrix}x & y\end{pmatrix}\begin{pmatrix}x \cos\theta + y\sin\theta \\ -x\sin\theta + y\cos\theta\end{pmatrix} =(x^2+y^2)\cos\theta\, .$$
Now $$||X||=\sqrt{X\cdot X}=\sqrt{x^{2} + y^{2}}$$ and
$$(MX)\cdot (MX)= X^{T} M^{T} M X$$. But
$$M^{T} M = \begin{pmatrix}\cos\theta &-\sin\theta \\ \sin\theta & \cos\theta\end{pmatrix} \begin{pmatrix}\cos\theta &\sin\theta \\ -\sin\theta & \cos\theta\end{pmatrix}$$ $$= \begin{pmatrix}\cos^{2}\theta +\sin^2\theta& 0 \\ 0 & \cos^{2}\theta +\sin^{2}\theta\end{pmatrix}=I\, .$$
Hence $$||MX||=||X||=\sqrt{x^{2}+y^{2}}$$. Thus the cosine of the angle between $$X$$ and $$MX$$ is given by
$$\frac{X\cdot (MX)}{||X|| \ ||MX||}= \frac{(x^{2}+y^{2})\cos\theta}{\sqrt{x^{2}+y^{2}}\, \sqrt{x^{2}+y^{2}}} = \cos \theta\, .$$
In other words, the angle is $$\theta$$ OR $$-\theta$$. You should draw two pictures, one where the angle between $$X$$ and $$MX$$ is $$\theta$$, the other where it is $$-\theta$$.

For Cauchy-Schwarz, $$\frac{|X\cdot (MX)|}{||X|| \ ||MX||}=|\cos\theta|=1$$ when $$\theta=0,\pi$$. For equality in the triangle inequality, $$||X+MX||=||X||+||MX||$$ requires $$MX$$ to be a non-negative multiple of $$X$$; here that means $$MX=X$$, which requires $$\theta=0$$.
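These identities are easy to spot-check numerically; in this sketch the angle and vector are arbitrary choices:

```python
import numpy as np

theta = 0.7                       # arbitrary angle
X = np.array([2.0, -3.0])         # arbitrary vector
M = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

MX = M @ X
# X . (MX) = (x^2 + y^2) cos(theta)
print(np.isclose(X @ MX, (X @ X) * np.cos(theta)))        # → True
# ||MX|| = ||X||
print(np.isclose(np.linalg.norm(MX), np.linalg.norm(X)))  # → True
# cosine of the angle between X and MX is cos(theta)
print(np.isclose((X @ MX) / (X @ X), np.cos(theta)))      # → True
```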

6. This is a block matrix problem. Notice that the matrix $$M$$ is really just $$M= \begin{pmatrix} I&I\\0&I \end{pmatrix}$$, where $$I$$ and $$0$$ are the $$3\times3$$ identity and zero matrices, respectively. But
$$M^{2}=\begin{pmatrix} I&I\\0&I \end{pmatrix} \begin{pmatrix} I&I\\0&I \end{pmatrix} = \begin{pmatrix} I&2I\\0&I \end{pmatrix}$$
and
$$M^{3}=\begin{pmatrix} I&I\\0&I \end{pmatrix} \begin{pmatrix} I&2I\\0&I \end{pmatrix} = \begin{pmatrix} I&3I\\0&I \end{pmatrix}$$
so, $$M^{k}=\begin{pmatrix} I&kI\\0&I \end{pmatrix}$$, or explicitly
$$M^{k}= \begin{pmatrix} 1&0&0&k&0&0\\ 0&1&0&0&k&0\\ 0&0&1&0&0&k\\ 0&0&0&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1 \end{pmatrix}\, .$$
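The block formula can be verified directly with `np.block` and `np.linalg.matrix_power` (here $$k=5$$ is an arbitrary choice):

```python
import numpy as np

I3 = np.eye(3, dtype=int)
Z3 = np.zeros((3, 3), dtype=int)
M = np.block([[I3, I3],
              [Z3, I3]])            # the 6x6 matrix from problem 6

k = 5                               # any positive integer
Mk = np.linalg.matrix_power(M, k)
expected = np.block([[I3, k * I3],
                     [Z3, I3]])
print((Mk == expected).all())       # → True
```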

7.
a) Whenever $${\rm det} M=ad-bc\neq 0$$.

b) Unit determinant bit matrices:
$$\begin{pmatrix} 1&0\\0&1 \end{pmatrix}, \begin{pmatrix} 1&1\\0&1 \end{pmatrix}\, , \begin{pmatrix} 1&0\\1&1 \end{pmatrix}, \begin{pmatrix} 0&1\\1&0 \end{pmatrix}\, , \begin{pmatrix} 1&1\\1&0 \end{pmatrix}, \begin{pmatrix} 0&1\\1&1 \end{pmatrix}\,.$$

c) Bit matrices with vanishing determinant:
$$\begin{pmatrix} 0&0\\0&0 \end{pmatrix}, \begin{pmatrix} 1&0\\0&0 \end{pmatrix}\, , \begin{pmatrix} 0&1\\0&0 \end{pmatrix}\, , \begin{pmatrix} 0&0\\1&0 \end{pmatrix}, \begin{pmatrix} 0&0\\0&1 \end{pmatrix}\, ,$$ $$\begin{pmatrix} 1&1\\0&0 \end{pmatrix}, \begin{pmatrix} 0&0\\1&1 \end{pmatrix}\,, \begin{pmatrix} 1&0\\1&0 \end{pmatrix}, \begin{pmatrix} 0&1\\0&1 \end{pmatrix}\, , \begin{pmatrix} 1&1\\1&1 \end{pmatrix}\,.$$
$$\textit{As a check, count that the total number of }2\times 2\textit{ bit matrices is }2^{(\textrm{number of entries})}=2^{4}=16.$$
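Both lists can be generated by brute force over all 16 bit matrices; this sketch counts them, computing the determinant mod 2 since $$1+1=0$$ for bits:

```python
from itertools import product

det1, det0 = [], []
for a, b, c, d in product((0, 1), repeat=4):
    det = (a * d - b * c) % 2        # determinant over the bits {0, 1}
    (det1 if det == 1 else det0).append(((a, b), (c, d)))

# 6 matrices with determinant 1, 10 with determinant 0, 16 total
print(len(det1), len(det0))          # → 6 10
```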

d) To disprove this statement, we just need to find a single counterexample. All the unit determinant examples above are actually row equivalent to the identity matrix, so focus on the bit matrices with vanishing determinant. Then notice (for example) that
$$\begin{pmatrix} 1&1\\0&0 \end{pmatrix}\not\sim \begin{pmatrix} 0&0\\0&0 \end{pmatrix}\, .$$
So we have found a pair of matrices that are not row equivalent but do have the same determinant. It follows that the statement is false.

8. We can call a function $$f\colon V\longrightarrow W$$ $$\textit{linear}$$ if the sets $$V$$ and $$W$$ are vector spaces
and $$f$$ obeys
$$f(\alpha u + \beta v)=\alpha f(u)+\beta f(v)\, ,$$
for all $$u,v\in V$$ and $$\alpha,\beta\in \mathbb{R}$$.

Now, integration is a linear transformation from the space $$V$$ of all integrable functions to the real numbers $$\mathbb{R}$$ (be careful not to confuse the definition of a linear function above with the integrable functions $$f(x)$$, which here are the vectors in $$V$$), because
$$\int_{-\infty}^{\infty} (\alpha f(x) +\beta g(x))dx = \alpha \int_{-\infty}^{\infty} f(x) dx + \beta \int_{-\infty}^{\infty} g(x) dx$$.
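Linearity can be illustrated numerically with any fixed quadrature rule; here a crude Riemann sum on a grid stands in for integration, and the functions and scalars are arbitrary choices:

```python
import numpy as np

x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]
integral = lambda h: float(np.sum(h) * dx)   # crude Riemann-sum "integration"

f = np.exp(-x**2)        # sample integrable function
g = np.cos(x)            # another sample function
alpha, beta = 2.0, -3.0  # arbitrary scalars

lhs = integral(alpha * f + beta * g)
rhs = alpha * integral(f) + beta * integral(g)
print(np.isclose(lhs, rhs))   # → True
```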

9. The four main ingredients are (i) a set $$V$$ of vectors, (ii) a number field $$K$$ (usually $$K=\mathbb{R})$$, (iii) a rule for adding vectors (vector addition) and (iv) a way to multiply vectors by a number to produce a new vector (scalar multiplication). There are, of course, ten rules that these four ingredients must obey.

a) This is not a vector space. Notice that distributivity of scalar multiplication requires $$2 u = (1+1) u = u + u$$ for any vector $$u$$ but
$$2\cdot \begin{pmatrix} a& b\\ c& d\end{pmatrix} = \begin{pmatrix} 2a& b\\ 2c& d\end{pmatrix}$$
which does $$\textit{not}$$ equal
$$\begin{pmatrix} a& b\\ c& d\end{pmatrix}+\begin{pmatrix} a& b\\ c& d\end{pmatrix}= \begin{pmatrix} 2a& 2b\\ 2c& 2d\end{pmatrix}\, .$$
This could be repaired by taking $$k\cdot \begin{pmatrix} a& b\\ c& d\end{pmatrix}= \begin{pmatrix} ka& kb\\ kc& kd\end{pmatrix}\, .$$
b) This is a vector space. $$\textit{Although the question does not ask you to, it is a useful exercise to verify that all ten vector space rules are satisfied.}$$

c) This is not a vector space for many reasons. An easy one is that $$(1,-1,0)$$ and $$(-1,1,0)$$ are both in the space, but their sum $$(0,0,0)$$ is not ($$\textit{i.e.}$$, additive closure fails). The easiest way to repair this would be to drop the requirement that there be at least one entry equaling 1.

10.
(i) Thanks to multiplicative closure, if $$u\in U$$, so is $$(-1)\cdot u$$. But $$(-1) \cdot u + u = (-1)\cdot u + 1\cdot u = (-1+1)\cdot u= 0\cdot u = 0$$ (at each step in this chain of equalities we have used the fact that $$V$$ is a vector space and therefore can use its vector space rules). In particular, this means that the zero vector of $$V$$ is in $$U$$ and is its zero vector too. (ii) Also, in $$V$$, for each $$u$$ there is an element $$-u$$ such that $$u+(-u)=0$$. But by multiplicative closure, $$(-u)=(-1)\cdot u$$ must also be in $$U$$; thus every $$u\in U$$ has an additive inverse.

a) This is a vector space. First we check additive closure: let $$\begin{pmatrix}x \\ y\\0\end{pmatrix}$$ and $$\begin{pmatrix}z \\ w\\0\end{pmatrix}$$ be arbitrary vectors in $$U$$. Their sum $$\begin{pmatrix}x \\ y\\0\end{pmatrix} + \begin{pmatrix}z \\ w\\0\end{pmatrix} = \begin{pmatrix}x+z \\ y+w\\0\end{pmatrix}$$ also has vanishing third component, so it lies in $$U$$ (vectors in $$U$$ are exactly those whose third component vanishes). Multiplicative closure is similar: for any $$\alpha\in \mathbb{R}$$, $$\alpha \begin{pmatrix}x \\ y\\0\end{pmatrix}= \begin{pmatrix}\alpha x \\ \alpha y\\0\end{pmatrix}$$, whose third component is also zero, so it is in $$U$$.

b) This is not a vector space for various reasons. A simple one is that $$u=\begin{pmatrix}1\\0\\z\end{pmatrix}$$ is in $$U$$ but the vector $$u+u=\begin{pmatrix}2\\0\\2z\end{pmatrix}$$ is not in $$U$$ (it has a 2 in the first component, but vectors in $$U$$ always have a 1 there).