Mathematics LibreTexts

F: Sample Final Exam

Here are some worked problems typical for what you might expect on a final examination.

1.    Define the following terms:

  1. An \(\textit{orthogonal matrix}.\)
  2. A \(\textit{basis}\) for a vector space.
  3. The \(\textit{span}\) of a set of vectors.
  4. The \(\textit{dimension}\) of a vector space.
  5. An \(\textit{eigenvector}.\)
  6. A \(\textit{subspace}\) of a vector space.
  7. The \(\textit{kernel}\) of a linear transformation.
  8. The \(\textit{nullity}\) of a linear transformation.
  9. The \(\textit{image}\) of a linear transformation.
  10. The \(\textit{rank}\) of a linear transformation.
  11. The \(\textit{characteristic polynomial}\) of a square matrix.
  12. An \(\textit{equivalence relation}.\)
  13. A \(\textit{homogeneous solution}\) to a linear system of equations.
  14. A \(\textit{particular solution}\) to a linear system of equations.
  15. The \(\textit{general solution}\) to a linear system of equations.
  16. The \(\textit{direct sum}\) of a pair of subspaces of a vector space.
  17. The \(\textit{orthogonal complement}\) to a subspace of a vector space.


2.    \(\textit{Kirchhoff's laws}\): Electrical circuits are easy to analyze using systems of equations.  The change in voltage (measured in Volts) around any loop due to batteries \(|\big|\) and resistors \(/\!\backslash\!/\!\backslash\!/\!\backslash\!/\!\backslash\) (given by the product of the current measured in Amps and resistance measured in Ohms) equals zero.  Also, the sum of currents entering any junction vanishes.  Consider the circuit

Find all possible equations for the unknowns \(I\), \(J\) and \(V\) and then solve for \(I\), \(J\) and \(V\).  Give your answers with correct units.



3.    Suppose \(M\) is the matrix of a linear transformation

$$L: U\to V  $$

and the vector spaces \(U\) and \(V\) have dimensions
$$\mbox{dim}\, U= n\,, \qquad \mbox{dim}\, V= m\,, \qquad m\neq n\, .$$
Also assume
$$\mbox{ker}\, L = \{0_U\}\, .$$

  1. How many rows does \(M\) have? 
  2. How many columns does \(M\) have?
  3. Are the columns of \(M\) linearly independent?
  4. What size matrix is \(M^{T}M\)?
  5. What size matrix is \(M M^{T}\)?
  6. Is \(M^{T} M\) invertible?
  7. Is \(M^{T} M\) symmetric? 
  8. Is \(M^{T}M\) diagonalizable?
  9. Does \(M^{T}M\) have a zero eigenvalue?
  10. Suppose \(U=V\) and \({\rm ker}\, L\neq\{0_{U}\}\).  Find an eigenvalue of \(M\).
  11. Suppose \(U=V\) and \({\rm ker}\, L\neq\{0_{U}\}\).  Find \(\det M\).



4.    Consider the system of equations
$$x+y+z+w=1\, ,\qquad x+2y+2z+2w=1\, ,\qquad x+2y+3z+3w=1\, .$$
Express this system as a matrix equation \(MX=V\) and then find the solution set by computing an \(LU\) decomposition for the matrix \(M\) (be sure to use back and forward substitution).



5.    Compute the following determinants
$$\det\begin{pmatrix}1&2\\3&4\end{pmatrix}\, ,\quad
\det\begin{pmatrix}1&2&3\\4&5&6\\7&8&9\end{pmatrix}\, ,\quad
\det\begin{pmatrix}1&2&3&4\\5&6&7&8\\9&10&11&12\\13&14&15&16\end{pmatrix}\, .$$
Now test your skills on
$$\det\left(\begin{array}{ccccc}1&2&3&\cdots&n\\n+1&n+2&n+3&\cdots&2n\\2n+1&2n+2&2n+3&\cdots&3n \\
\vdots&&&&\vdots\\
n^{2}-n+1&n^{2}-n+2&n^{2}-n+3&\cdots&n^{2}\end{array}\right)\, .$$
\(\textit{Make sure to jot down a few brief notes explaining any clever tricks you use.}\)



6.    For which values of \(a\) does $$U={\rm span} \left\{
\begin{pmatrix}1\\0\\1\end{pmatrix},\begin{pmatrix}1\\2\\-3\end{pmatrix},\begin{pmatrix}a\\1\\0\end{pmatrix}\right\}={\mathbb R}^{3}\ ?$$  For any special values of \(a\) at which \(U\neq{\mathbb R}^{3}\), express the subspace \(U\) as the span of the least number of vectors possible.  Give the dimension of \(U\) for these cases and draw a picture showing \(U\) inside \(\mathbb{R}^{3}\).



7.    \(\textit{Vandermonde determinant:}\) Calculate the following determinants
$$\det \begin{pmatrix}1 & x\\ 1 & y\end{pmatrix}\, ,\quad
\det \begin{pmatrix}1 & x & x^{2}\\ 1 & y&y^{2}\\ 1& z&z^{2}\end{pmatrix}\, ,\quad
\det \begin{pmatrix}1 & x & x^{2} & x^{3}\\ 1 & y& y^{2} & y^{3}\\ 1 & z & z^{2} & z^{3}\\ 1 & w & w^{2} & w^{3}\end{pmatrix}\, .$$
Be sure to factorize your answers, if possible.

\(\textit{Challenging:}\) Compute the determinant
$$\det \begin{pmatrix}1 & x_{1} & (x_{1})^{2} & \cdots &(x_{1})^{n-1}\\ 
1 & x_{2}& (x_{2})^{2} & \cdots &  (x_{2})^{n-1}\\ 
1 & x_{3}& (x_{3})^{2} & \cdots &  (x_{3})^{n-1}\\ 
\vdots &\vdots &\vdots &\ddots & \vdots \\ 
1 & x_{n}& (x_{n})^{2} & \cdots &  (x_{n})^{n-1}\end{pmatrix}\, .$$




8.

  1. Do the vectors \(\left\{\begin{pmatrix}1\\2\\3\end{pmatrix},\begin{pmatrix}3\\2\\1\end{pmatrix},\begin{pmatrix}1\\0\\0\end{pmatrix},\begin{pmatrix}0\\1\\0\end{pmatrix},\begin{pmatrix}0\\0\\1\end{pmatrix}\right\}\) form a basis for \(\mathbb{R}^{3}\)?  \(\textit{Be sure to justify your answer.}\)
  2. Find a basis for \(\mathbb{R}^{4}\) that includes the vectors \(\begin{pmatrix}1\\2\\3\\4\end{pmatrix}\) and \(\begin{pmatrix}4\\3\\2\\1\end{pmatrix}\).
  3. Explain in words how to generalize your computation in the second part to obtain a basis for \(\mathbb{R}^{n}\) that includes a given pair of (linearly independent) vectors \(u\) and \(v\). 



9.    Elite NASA engineers determine that if a satellite is placed in orbit starting at a point \(\cal{O}\), it will return exactly to that same point after one orbit of the earth.  Unfortunately, if there is a small mistake in the original location of the satellite, which the engineers label by a vector \(X\) in \(\mathbb{R}^{3}\) with origin at \(\cal{O}\), after one orbit the satellite will instead return to some other point \(Y\in \mathbb{R}^{3}\).  The engineers' computations show that \(Y\) is related to \(X\) by a matrix
$$Y = \begin{pmatrix} 0&\frac{1}{2} & 1 \\ \frac{1}{2} &\frac{1}{2} &\frac{1}{2} \\ 1 & \frac{1}{2} &0\end{pmatrix} X\, .$$


  1. Find all eigenvalues of the above matrix.
  2. Determine \(\textit{all}\) possible eigenvectors associated with each eigenvalue.


Let us assume that the rule found by the engineers applies to all subsequent orbits.  Discuss case by case, what will happen to the satellite if the initial mistake in its location is in a direction given by an eigenvector. 



10.    In this problem the scalars in the vector spaces are bits (\(0,1\) with \(1+1=0\)).  The space \(B^{k}\) is the vector space of bit-valued, \(k\)-component column vectors.

  1. Find a basis for \(B^{3}\). 
  2. Your answer to part 1 should be a list of vectors \(v_{1}\), \(v_{2},\ldots, v_{n}\).  What number did you find for \(n\)? 
  3. How many elements are there in the \(\textit{set}\) \(B^{3}\)? 
  4. What is the dimension of the vector space \(B^{3}\)? 
  5. Suppose \(L:B^{3}\to B=\{0,1\}\) is a linear transformation.  Explain why specifying \(L(v_{1})\), \(L(v_{2}),\ldots,L(v_{n})\) completely determines \(L\). 
  6. Use the notation of part 5 to list \(\textit{all}\) linear transformations $$L:B^{3}\to B\, .$$ How many different linear transformations did you find?  Compare your answer to part 3. 
  7. Suppose \(L_{1}:B^{3}\to B\) and \(L_{2}: B^{3}\to B\) are linear transformations, and \(\alpha\) and \(\beta\) are bits.  Define a new map \((\alpha L_{1}+\beta L_{2}):B^{3}\to B\) by $$(\alpha L_{1}+\beta L_{2})(v)=\alpha L_{1}(v)+\beta L_{2}(v).$$ Is this map a linear transformation?  Explain.
  8. Do you think the set of all linear transformations from \(B^{3}\) to \(B\) is a vector space using the addition rule above?  If you answer yes, give a basis for this vector space and state its dimension.



11.    A team of distinguished post-doctoral engineers analyzes the design for a bridge across the English Channel. They notice that the force on the center of the bridge when it is displaced by an amount \(X=\begin{pmatrix}x\\y\\z\end{pmatrix}\) is given by 
$$F=\begin{pmatrix}-x-y\\-x-2y-z\\-y-z\end{pmatrix}\, .$$
Moreover, having read Newton's \(\textit{Principia}\), they know that force is proportional to acceleration, so that
$$F=\frac{d^{2} X}{dt^{2}}\, .$$
Since the engineers are worried the bridge might start swaying in the heavy channel winds, they search for an oscillatory solution to this equation of the form
$$X=\cos(\omega t) \begin{pmatrix} a \\ b \\ c\end{pmatrix}\, .$$


  1. By plugging their proposed solution in the above equations the engineers find an eigenvalue problem $$M \begin{pmatrix} a \\ b \\ c\end{pmatrix} = -\omega^{2} \begin{pmatrix} a \\ b \\ c\end{pmatrix}\, .$$  Here \(M\) is a \(3\times 3\) matrix.  Which \(3\times 3\) matrix \(M\) did the engineers find?  \(\textit{Justify your answer.}\)
  2. Find the eigenvalues and eigenvectors of the matrix \(M\).
  3. The number \(|\omega|\) is often called a \(\textit{characteristic frequency}\).  What characteristic frequencies do you find for the proposed bridge?
  4. Find an orthogonal matrix \(P\) such that \(MP=PD\) where \(D\) is a diagonal matrix.  \(\textit{Be sure to also state your result for \(D\).}\)
  5. Is there a direction in which displacing the bridge yields no force?  If so give a vector in that direction.  \(\textit{Briefly}\) evaluate the quality of this bridge design.

12.    \(\textit{Conic Sections:}\) The equation for the most general conic section is given by
$$ax^{2} + 2bxy+dy^{2} + 2cx+2ey + f=0\, .$$
Our aim is to analyze the solutions to this equation using matrices.


  1. Rewrite the above quadratic equation as one of the form $$X^{T} M X +  X^{T} C + C^{T} X+ f=0$$ relating an unknown column vector \(X=\begin{pmatrix}x \\ y\end{pmatrix}\), its transpose \(X^{T}\), a \(2\times 2\) matrix \(M\), a constant column vector \(C\) and the constant \(f\).
  2. Does your matrix \(M\) obey any special properties? Find its eigenvalues. You may call your answers \(\lambda\) and \(\mu\) for the rest of the problem to save writing.  \(\textit{For the rest of this problem we will focus on central conics for which the matrix \(M\) is invertible.}\)
  3. Your equation in part 1 above should be quadratic in \(X\).  Recall that if \(m\neq 0\), the quadratic equation \(mx^{2} + 2cx+f=0\) can be rewritten by \(\textit{completing the square}\) $$m\Big(x+\frac{c}{m}\Big)^{2} = \frac{c^{2}}{m}-f\, .$$  Being very careful that you are now dealing with matrices, use the same trick to rewrite your answer to part 1 in the form $$Y^{T} M Y = g.$$ Make sure you give formulas for the new unknown column vector \(Y\) and constant \(g\) in terms of \(X\), \(M\), \(C\) and \(f\).  You need not multiply out any of the matrix expressions you find.  If all has gone well, you have found a way to shift coordinates for the original conic equation to a new coordinate system with its origin at the center of symmetry.  Our next aim is to rotate the coordinate axes to produce a readily recognizable equation.
  4. Why is the angle between vectors \(V\) and \(W\) not changed when you replace them by \(PV\) and \(PW\) for \(P\) any orthogonal matrix?
  5. Explain how to choose an orthogonal matrix \(P\) such that \(MP=PD\) where \(D\) is a diagonal matrix.
  6. For the choice of \(P\) above, define our final unknown vector \(Z\) by \(Y=PZ\).  Find an expression for \(Y^{T} MY\) in terms of \(Z\) and the eigenvalues of \(M\).
  7. Call \(Z=\begin{pmatrix}z\\w\end{pmatrix}\).  What equation do \(z\) and \(w\) obey?  \(\textit{(Hint, write your answer using \(\lambda\), \(\mu\) and \(g\).)}\)
  8. Central conics are circles, ellipses, hyperbolae or a pair of straight lines.  Give examples of values of \((\lambda,\mu,g)\) which produce each of these cases.



13.    Let \(L \colon V \to W\) be a linear transformation between finite-dimensional vector spaces \(V\) and \(W\), and let \(M\) be a matrix for \(L\) (with respect to some basis for \(V\) and some basis for \(W\)).  We know that \(L\) has an inverse if and only if it is bijective, and we know a lot of ways to tell whether \(M\) has an inverse.  In fact, \(L\) has an inverse if and only if \(M\) has an inverse:

Suppose that \(L\) is bijective (i.e., one-to-one and onto).

  1. Show that \(\dim V={\rm rank}\, L=\dim W\). 
  2. Show that \(0\) is not an eigenvalue of \(M\). 
  3. Show that \(M\) is an invertible matrix. 


Now, suppose that \(M\) is an invertible matrix.

  1. Show that \(0\) is not an eigenvalue of \(M\).
  2. Show that \(L\) is injective.
  3. Show that \(L\) is surjective.



14.    Captain Conundrum gives Queen Quandary a pair of newborn doves, male and female, for her birthday.  After one year, this pair of doves breed and produce a pair of dove eggs.  One year later these eggs hatch yielding a new pair of doves while the original pair of doves breed again and an additional pair of eggs are laid.  Captain Conundrum is very happy because now he will never need to buy the Queen a present ever again! 

Let us say that in year zero, the Queen has no doves.  In year one she has one pair of doves, in year two she has two pairs of doves \(\textit{etc...}\)

Call \(F_{n}\) the number of pairs of doves in year \(n\).  For example, \(F_{0}=0\), \(F_{1}=1\) and \(F_{2}=1\).  Assume no doves die and that the same breeding pattern continues well into the future.  Then \(F_{3}=2\) because the eggs laid by the first pair of doves in year two hatch.  Notice also that in year three, two pairs of eggs are laid (by the first and second pair of doves).  Thus \(F_{4}=3\).

  1. Compute \(F_{5}\) and \(F_{6}\).
  2. Explain why (for any \(n\geq 2\)) the following \(\textit{recursion relation}\) holds $$F_{n}=F_{n-1}+F_{n-2}\, .$$
  3. Let us introduce a column vector \(X_{n}=\begin{pmatrix}F_{n}\\F_{n-1}\end{pmatrix}\).  Compute \(X_{1}\) and \(X_{2}\).  Verify that these vectors obey the relationship $$X_{2}=M X_{1} \mbox{ where } M=\begin{pmatrix}1 & 1\\1 & 0\end{pmatrix}\, . $$
  4. Show that \(X_{n+1}=M X_{n}\).
  5. Diagonalize \(M\).  (\(\textit{I.e.}\), write \(M\) as a product \(M=PDP^{-1}\) where \(D\) is diagonal.)
  6. Find a simple expression for \(M^{n}\) in terms of \(P\), \(D\) and \(P^{-1}\).
  7. Show that \(X_{n+1}=M^{n} X_{1}\).
  8. The number $$\varphi=\frac{1+\sqrt{5}}{2} $$is called the \(\textit{golden ratio}\).  Write the eigenvalues of \(M\) in terms of \(\varphi\).
  9. Put your results from parts 3, 6 and 7 together (along with a short matrix computation) to find the formula for the number of doves \(F_{n}\) in year \(n\) expressed in terms of \(\varphi\), \(1-\varphi\) and \(n\).
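The recursion, the diagonalization, and the golden-ratio eigenvalues above are easy to check numerically; a minimal sketch (assuming numpy is available), where the Binet-style closed form is the formula that part 9 aims at:

```python
import numpy as np

# Transition matrix from part 3: X_{n+1} = M X_n.
M = np.array([[1.0, 1.0],
              [1.0, 0.0]])

# Its eigenvalues are the golden ratio phi and 1 - phi (part 8).
phi = (1 + np.sqrt(5)) / 2
eigs = np.linalg.eigvalsh(M)     # ascending: [1 - phi, phi]

def F(n):
    # F_n read off from X_n = M^(n-1) X_1 with X_1 = (F_1, F_0) = (1, 0).
    return (np.linalg.matrix_power(M, n - 1) @ np.array([1.0, 0.0]))[0]

def binet(n):
    # Closed form in terms of phi, 1 - phi and n, as in part 9.
    return (phi**n - (1 - phi)**n) / np.sqrt(5)
```

Comparing `F(n)` with `binet(n)` for a few `n` confirms the matrix-power route and the closed form agree.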



15.    Use Gram--Schmidt to find an orthonormal basis for
\end{pmatrix}\, ,\,
\end{pmatrix}\, ,\,
\right\}\, .



16.    Let \(M\) be the matrix of a linear transformation \(L:V\to W\) in given bases for \(V\) and \(W\).  Fill in the blanks below with one of the following six vector spaces: \(V\), \(W\), \({\rm ker} L\), \(\big({\rm ker} L\big)^{\perp}\), \({\rm im} L\), \(\big({\rm im} L\big)^{\perp}\).

  1. The columns of \(M\) span \(\underline{answer}\) in the basis given for \(\underline{answer}\).
  2. The rows of \(M\) span \(\underline{answer}\) in the basis given for \(\underline{answer}\).

Suppose $$M=\begin{pmatrix}1&2&1&3\\2&1&\!-1&2\\1&0&0&\!-1\\4&1&\!-1&0\end{pmatrix}$$ is the matrix of \(L\) in the bases \(\{v_{1},v_{2},v_{3},v_{4}\}\) for \(V\) and \(\{w_{1},w_{2},w_{3},w_{4}\}\) for \(W\).
  3. Find bases for \({\rm ker} L\) and \({\rm im} L\).  Use the dimension formula to check your result.



17.    Captain Conundrum collects the following data set
which he believes to be well-approximated by a parabola
$$y=ax^{2}+bx+c\, .$$


  1. Write down a system of four linear equations for the unknown coefficients \(a\), \(b\) and \(c\).
  2. Write the augmented matrix for this system of equations.
  3. Find the reduced row echelon form for this augmented matrix.
  4. Are there any solutions to this system?
  5. Find the least squares solution to the system.
  6. What value does Captain Conundrum predict for \(y\) when \(x=2\)?


18.    Suppose you have collected the following data for an experiment
and believe that the result is well modeled by a straight line
$$y=mx+b\, .$$

  1. Write down a linear system of equations you could use to find the slope \(m\) and constant term \(b\). 
  2. Arrange the unknowns \((m,b)\) in a column vector \(X\) and write your answer to part 1 as a matrix equation $$M X= V\, .$$  Be sure to give explicit expressions for the matrix \(M\) and column vector \(V\).
  3. For a generic data set, would you expect your system of equations to have a solution?  \(\textit{Briefly}\) explain your answer.
  4. Calculate \(M^{T} M\) and \((M^{T} M)^{-1}\) (for the latter computation, state the condition required for the inverse to exist).
  5. Compute the least squares solution for \(m\) and \(b\).
  6. The least squares method determines a vector \(X\) that minimizes the length of the vector \(V-MX\).  Draw a rough sketch of the three data points in the \((x,y)\)-plane as well as their least squares fit.  Indicate how the components of \(V-MX\) could be obtained from your picture.
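The normal-equation recipe in parts 4 and 5 can be sketched in code.  The data points below are placeholders (the problem's actual data set is not reproduced above), so only the method matters, not the numbers:

```python
import numpy as np

# Placeholder data points (x_i, y_i); hypothetical, for illustration only.
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 2.0, 2.5])

# MX = V with X = (m, b): each row of M is (x_i, 1).
M = np.column_stack([xs, np.ones_like(xs)])
V = ys

# Least squares: X = (M^T M)^{-1} M^T V; the inverse exists provided
# the x_i are not all equal.
X = np.linalg.solve(M.T @ M, M.T @ V)
m, b = X

# The residual V - MX is orthogonal to the columns of M.
residual = V - M @ X
```

The components of the residual are exactly the vertical gaps between the data points and the fitted line, which is what part 6 asks you to indicate on the sketch.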





1.    You can find the definitions for all these terms by consulting the index of this book.



2.    Both junctions give the same equation for the currents
$$I+J+13=0\, .$$
There are three voltage loops (one on the left, one on the right and one going around the outside of the circuit).  The outside loop, for example, gives the equation
$$60-I+2J-V+3J-3I=0\, ,$$
and the left and right loops give two more.
The above equations are easily solved (either using an augmented matrix and row reducing, or by substitution).  The result is \(I=-5\) Amps, \(J=-8\) Amps, \(V=40\) Volts.
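The quoted answer is quick to sanity-check by substituting it back into the junction equation and the outside voltage loop (a numerical check, not part of the exam):

```python
# Check I = -5 Amps, J = -8 Amps, V = 40 Volts against the equations above.
I, J, V = -5, -8, 40

junction = I + J + 13                        # sum of currents at a junction
outer_loop = 60 - I + 2*J - V + 3*J - 3*I    # voltage changes around the outside loop
```

Both expressions evaluate to zero at the stated values.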




3.

  1. \(m\).
  2. \(n\).
  3. Yes.
  4. \(n\times n\).
  5. \(m\times m\).
  6. Yes.  This relies on \({\rm ker} M=0\) because if \(M^{T} M\) had a non-trivial kernel, then there would be a non-zero solution \(X\) to \(M^{T} M X=0\).  But then by multiplying on the left by \(X^{T}\) we see that \(||MX||=0\).  This in turn implies \(MX=0\) which contradicts the triviality of the kernel of \(M\). 
  7. Yes because \(\big(M^{T} M\big)^{T}=M^{T} (M^{T})^{T}=M^{T} M\).
  8. Yes, all symmetric matrices have a basis of eigenvectors.
  9. No, because otherwise it would not be invertible.
  10. Since the kernel of \(L\) is non-trivial, \(M\) must have \(0\) as an eigenvalue.
  11. Since \(M\) has a zero eigenvalue in this case, its determinant must vanish. \(\textit{I.e.}\), \(\det M=0\).
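The claims about \(M^{T}M\) are easy to confirm numerically for any matrix with linearly independent columns; a sketch with an arbitrary \(3\times 2\) example (the matrix here is my own illustration, not from the problem):

```python
import numpy as np

# An arbitrary m x n matrix (m = 3, n = 2) whose columns are
# linearly independent, standing in for the M of the problem.
M = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 1.0]])

MTM = M.T @ M          # n x n, as in part 4
MMT = M @ M.T          # m x m, as in part 5

symmetric = np.allclose(MTM, MTM.T)
invertible = abs(np.linalg.det(MTM)) > 1e-12
eigs = np.linalg.eigvalsh(MTM)   # real, since MTM is symmetric
```

With trivial kernel, every eigenvalue of \(M^{T}M\) comes out strictly positive, matching parts 6 and 9.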


4.    To begin with the system becomes
$$\begin{pmatrix}1&1&1&1\\1&2&2&2\\1&2&3&3\end{pmatrix}\begin{pmatrix}x\\y\\z\\w\end{pmatrix}=\begin{pmatrix}1\\1\\1\end{pmatrix}$$ $$
M=\begin{pmatrix}1&1&1&1\\1&2&2&2\\1&2&3&3\end{pmatrix}=\begin{pmatrix}1&0&0\\1&1&0\\1&1&1\end{pmatrix}\begin{pmatrix}1&1&1&1\\0&1&1&1\\0&0&1&1\end{pmatrix}=LU\, .$$
So now \(MX=V\) becomes \(LW=V\) where \(W=UX=\begin{pmatrix}a\\b\\c\end{pmatrix}\) (say).  Thus we solve \(LW=V\) by forward substitution
$$a=1,\ a+b=1, \ a+b+c=1 \quad\Rightarrow\quad a=1,\ b=0,\ c=0\, .$$
Now solve \(UX=W\) by back substitution
$$x+y+z+w=1, \ y+z+w=0, \ z + w =0$$ $$\Rightarrow\quad
w=\mu \mbox{ (arbitrary)}\, ,\ z=-\mu\, ,\ y=0\, ,\ x=1\, .$$
The solution set is \(\left\{\begin{pmatrix}x\\y\\z\\w\end{pmatrix}=\begin{pmatrix}1\\0\\-\mu\\\mu\end{pmatrix}: \mu\in {\mathbb R}\right\}\, .\)
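The triangular factors can be read off directly from the substitution steps just shown; a numerical check (assuming numpy) that they reproduce a consistent system and that every member of the claimed solution set solves it:

```python
import numpy as np

# L, U and V read off from the substitution steps:
# LW = V is a = 1, a + b = 1, a + b + c = 1, and
# UX = W is x+y+z+w = 1, y+z+w = 0, z+w = 0.
L = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
U = np.array([[1.0, 1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0, 1.0],
              [0.0, 0.0, 1.0, 1.0]])
V = np.array([1.0, 1.0, 1.0])

M = L @ U                      # the system matrix this factorization encodes
W = np.linalg.solve(L, V)      # forward substitution gives (1, 0, 0)

# Every member (1, 0, -mu, mu) of the solution set solves MX = V.
solves = all(np.allclose(M @ np.array([1.0, 0.0, -mu, mu]), V)
             for mu in (-2.0, 0.0, 3.5))
```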



5.    First 
$$\det\begin{pmatrix}1&2\\3&4\end{pmatrix}=-2\, .$$
All the other determinants vanish because the first three rows of each matrix are not independent.  Indeed, \(2R_{2}-R_{1}=R_{3}\) in each case, so we can make row operations to get a row of zeros and thus a zero determinant. 
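The row-dependence trick \(2R_{2}-R_{1}=R_{3}\) is easy to verify numerically; a sketch (assuming numpy) using the \(n=3\) instance of the consecutive-entry pattern:

```python
import numpy as np

d2 = np.linalg.det(np.array([[1.0, 2.0],
                             [3.0, 4.0]]))

# The n = 3 matrix of the pattern 1..n, n+1..2n, 2n+1..3n; its rows
# obey 2*R2 - R1 = R3, so its determinant must vanish.
A3 = np.arange(1.0, 10.0).reshape(3, 3)
dependent = np.allclose(2*A3[1] - A3[0], A3[2])
d3 = np.linalg.det(A3)
```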



6.    If \(U\) spans \(\mathbb{R}^{3}\), then we must be able to express any vector \(X=\begin{pmatrix}x\\y\\z\end{pmatrix}\in \mathbb{R}^{3}\) as
$$X=c^{1}\begin{pmatrix}1\\0\\1\end{pmatrix}+c^{2}\begin{pmatrix}1\\2\\-3\end{pmatrix}+c^{3}\begin{pmatrix}a\\1\\0\end{pmatrix}
=\begin{pmatrix}1&1&a\\0&2&1\\1&-3&0\end{pmatrix}\begin{pmatrix}c^{1}\\c^{2}\\c^{3}\end{pmatrix}\, ,$$
for some coefficients \(c^{1}\), \(c^{2}\) and \(c^{3}\).  This is a linear system.  We could solve for \(c^{1}\), \(c^{2}\) and \(c^{3}\) using an augmented matrix and row operations.  However, since we know that \({\rm dim}\, \mathbb{R}^{3}=3\), if \(U\) spans \(\mathbb{R}^{3}\), it will also be a basis.  Then the solution for \(c^{1}\), \(c^{2}\) and \(c^{3}\) would be unique.  Hence, the \(3\times 3\) matrix above must be invertible, so we examine its determinant
$${\rm det} \begin{pmatrix}1&1&a\\0&2&1\\1&-3&0\end{pmatrix}
=1\cdot\big(2\cdot 0-1\cdot(-3)\big)+1\cdot\big(1\cdot 1-a\cdot 2\big)=4-2a\, .$$
Thus \(U\) spans \(\mathbb{R}^{3}\) whenever \(a\neq 2\).  When \(a=2\) we can write the third vector in \(U\) in terms of the preceding ones as
$$\begin{pmatrix}2\\1\\0\end{pmatrix}=\frac{3}{2}\begin{pmatrix}1\\0\\1\end{pmatrix}+\frac{1}{2}\begin{pmatrix}1\\2\\-3\end{pmatrix}\, .$$
You can obtain this result, or an equivalent one, by studying the above linear system with \(X=0\), \(\textit{i.e.}\), the associated homogeneous system.  The two vectors \(\begin{pmatrix}1\\2\\-3\end{pmatrix}\) and \(\begin{pmatrix}2\\1\\0\end{pmatrix}\) are clearly linearly independent, so this is the least number of vectors spanning \(U\) for this value of \(a\).  Also we see that \({\rm dim}\, U=2\) in this case.  Your picture should be a plane in \(\mathbb{R}^{3}\) through the origin containing the vectors \(\begin{pmatrix}1\\2\\-3\end{pmatrix}\) and \(\begin{pmatrix}2\\1\\0\end{pmatrix}\).
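The determinant formula \(4-2a\), and the special value \(a=2\), can be spot-checked numerically (assuming numpy):

```python
import numpy as np

def det_for(a):
    # Matrix whose columns are the three spanning vectors of U.
    return np.linalg.det(np.array([[1.0, 1.0, a],
                                   [0.0, 2.0, 1.0],
                                   [1.0, -3.0, 0.0]]))

# det = 4 - 2a, so U = R^3 exactly when a != 2.
vals = {a: det_for(a) for a in (0, 1, 2, 5)}
```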



7.    First
$$\det \begin{pmatrix}1 & x\\ 1 & y\end{pmatrix}=y-x\, .$$
Next
$$\det \begin{pmatrix}1 & x & x^{2}\\ 1 & y&y^{2}\\ 1& z&z^{2}\end{pmatrix}=
\det \begin{pmatrix}1 & x & x^{2}\\ 0 & y-x&y^{2}-x^{2}\\ 0& z-x&z^{2}-x^{2}\end{pmatrix}$$ $$
=(y-x)(z^{2}-x^{2})-(y^{2}-x^{2})(z-x)=(y-x)(z-x)(z-y)\, .$$
Finally
$$\det \begin{pmatrix}1 & x & x^{2} & x^{3}\\ 1 & y& y^{2} & y^{3}\\ 1 & z & z^{2} & z^{3}\\ 1 & w & w^{2} & w^{3}\end{pmatrix}
=\det \begin{pmatrix}1 & x & x^{2} & x^{3}\\ 0 & y-x& y^{2}-x^{2} & y^{3}-x^{3}\\ 0 & z-x & z^{2} -x^{2}& z^{3}-x^{3}\\ 0 & w-x & w^{2}-x^{2} & w^{3}-x^{3}\end{pmatrix}$$ $$
=\det \begin{pmatrix}1 & 0 & 0 & 0\\ 0 & y-x& y(y-x)& y^{2}(y-x)\\ 0 & z-x & z(z-x) & z^{2}(z-x)\\ 0 & w-x & w(w-x) & w^{2}(w-x)\end{pmatrix}$$ $$
=(y-x)(z-x)(w-x)\det \begin{pmatrix}1 & 0 & 0 & 0\\ 0 & 1& y& y^{2}\\ 0 & 1 & z & z^{2}\\ 0 & 1 & w & w^{2}\end{pmatrix}$$ $$
=(y-x)(z-x)(w-x)\det \begin{pmatrix}1 & y & y^{2}\\ 1 & z&z^{2}\\ 1& w&w^{2}\end{pmatrix}$$ $$
=(y-x)(z-x)(w-x)(z-y)(w-y)(w-z)\, .$$
From the \(4\times 4\) case above, you can see all the tricks required for a general Vandermonde matrix.  First zero out the first column by subtracting the first row from all other rows (which leaves the determinant unchanged).  Now zero out the top row by subtracting \(x_{1}\) times the first column from the second column, \(x_{1}\) times the second column from the third column \(\textit{etc}\).  Again these column operations do not change the determinant.  Now factor out \(x_{2}-x_{1}\) from the second row, \(x_{3}-x_{1}\) from the third row, \(\textit{etc}\).  This does change the determinant so we write these factors outside the remaining determinant, which is just the same problem but for the \((n-1)\times(n-1)\) case.  Iterating the same procedure gives the result
$$\det \begin{pmatrix}1 & x_{1} & (x_{1})^{2} & \cdots &(x_{1})^{n-1}\\ 
1 & x_{2}& (x_{2})^{2} & \cdots &  (x_{2})^{n-1}\\ 
1 & x_{3}& (x_{3})^{2} & \cdots &  (x_{3})^{n-1}\\ 
\vdots & \vdots & \vdots &\ddots & \vdots \\ 
1 & x_{n}& (x_{n})^{2} & \cdots &  (x_{n})^{n-1}\end{pmatrix}
=\prod_{i>j}(x_{i}-x_{j})\, .$$
(Here \(\prod\) stands for a multiple product, just like \(\Sigma\) stands for a multiple sum.)
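The product formula can be spot-checked against the determinant for small \(n\) (a numerical sketch, assuming numpy):

```python
import numpy as np
from itertools import combinations

def vandermonde_det(xs):
    # Build the n x n Vandermonde matrix and take its determinant.
    n = len(xs)
    V = np.array([[x**j for j in range(n)] for x in xs], dtype=float)
    return np.linalg.det(V)

def product_formula(xs):
    # prod over i > j of (x_i - x_j).
    return float(np.prod([xs[i] - xs[j]
                          for j, i in combinations(range(len(xs)), 2)]))

xs = [2.0, 3.0, 5.0, 7.0]
```

For `xs = [2, 3, 5, 7]` both routes give \(1\cdot3\cdot5\cdot2\cdot4\cdot2=240\).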




8.

  1. No, a basis for \(\mathbb{R}^{3}\) must have exactly three vectors.
  2. We first extend the original vectors by the standard basis for \(\mathbb{R}^{4}\) and then try to eliminate two of them by considering $$\alpha \begin{pmatrix}1\\2\\3\\4\end{pmatrix}+\beta\begin{pmatrix}4\\3\\2\\1\end{pmatrix} +\gamma \begin{pmatrix}1\\0\\0\\0\end{pmatrix}+\delta\begin{pmatrix}0\\1\\0\\0\end{pmatrix}+\varepsilon\begin{pmatrix}0\\0\\1\\0\end{pmatrix}+\eta\begin{pmatrix}0\\0\\0\\1\end{pmatrix}=0\, .$$ So we study$$\begin{pmatrix}1&4&1&0&0&0\\2&3&0&1&0&0\\3&2&0&0&1&0\\4&1&0&0&0&1\end{pmatrix}\sim\begin{pmatrix}1&4&1&0&0&0\\0&-5&-2&1&0&0\\0&-10&-3&0&1&0\\0&-15&-4&0&0&1\end{pmatrix}$$ $$\sim\begin{pmatrix}1&0&-\frac{3}{5}&-4&0&0\\0&1&\frac{2}{5}&\frac{1}{5}&0&0\\0&0&1&10&1&0\\0&0&2&15&0&1\end{pmatrix}\sim\begin{pmatrix}1&0&0&2&\frac{3}{5}&0\\0&1&0&-\frac{19}{5}&-\frac{2}{5}&0\\0&0&1&10&1&0\\0&0&0&-\frac{5}{2}&-10&\frac{1}{2}\end{pmatrix}$$ From here we can keep row reducing to achieve RREF, but we can already see that the non-pivot variables will be \(\varepsilon\) and \(\eta\).  Hence we can eject the last two vectors and obtain as our basis $$\left\{\begin{pmatrix}1\\2\\3\\4\end{pmatrix},\begin{pmatrix}4\\3\\2\\1\end{pmatrix},\begin{pmatrix}1\\0\\0\\0\end{pmatrix},\begin{pmatrix}0\\1\\0\\0\end{pmatrix}\right\}\, .$$ Of course, this answer is far from unique!
  3. The method is the same as above.  Add the standard basis to \(\{u,v\}\) to obtain the linearly dependent set \(\{u,v,e_{1},\ldots , e_{n}\}\).  Then put these vectors as the columns of a matrix and row reduce. The standard basis vectors in columns corresponding to the non-pivot variables can be removed.
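The recipe in the last part, put the vectors as columns, row reduce, and keep the pivot columns, can be automated; a minimal sketch in exact arithmetic (plain Python, no external libraries):

```python
from fractions import Fraction

def pivot_columns(rows):
    # Gaussian elimination over the rationals; returns the pivot columns.
    A = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    return pivots

# Columns: u, v, then the standard basis of R^4.
u, v = [1, 2, 3, 4], [4, 3, 2, 1]
cols = [u, v, [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
A = [list(row) for row in zip(*cols)]   # transpose: vectors become columns

pivots = pivot_columns(A)
basis = [cols[i] for i in pivots]
```

The pivot columns turn out to be the first four, reproducing the basis \(\{u,v,e_{1},e_{2}\}\) found above.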


9.

  1. $${\rm det} \begin{pmatrix} \lambda&-\frac{1}{2} & -1 \\ -\frac{1}{2} &\lambda-\frac{1}{2} &-\frac{1}{2} \\ -1 & -\frac{1}{2} &\lambda\end{pmatrix}=\lambda\Big(\big(\lambda-\frac{1}{2}\big)\lambda-\frac{1}{4}\Big)+\frac{1}{2}\Big(-\frac{\lambda}{2}-\frac{1}{2}\Big)-\Big(-\frac{1}{4}+\lambda\Big)$$ $$=\lambda^{3}-\frac{1}{2}\lambda^{2}-\frac{3}{2} \lambda=\lambda(\lambda+1)\Big(\lambda-\frac{3}{2}\Big)\, .$$ Hence the eigenvalues are \(0,-1,\frac{3}{2}\).
  2. When \(\lambda=0\) we must solve the homogeneous system$$\left(\begin{array}{ccc|c}0&\frac{1}{2}&1&0\\\frac{1}{2}&\frac{1}{2}&\frac{1}{2}&0\\1&\frac{1}{2}&0&0\end{array}\right)\sim\left(\begin{array}{ccc|c}1&\frac{1}{2}&0&0\\0&\frac{1}{4}&\frac{1}{2}&0\\0&\frac{1}{2}&1&0\end{array}\right)\sim\left(\begin{array}{ccc|c}1&0&-1&0\\0&1&2&0\\0&0&0&0\end{array}\right)\, .$$ So we find the eigenvector \(\begin{pmatrix}s\\-2s\\s\end{pmatrix}\) where \(s\neq 0\) is arbitrary.  For \(\lambda=-1\) $$\left(\begin{array}{ccc|c}1&\frac{1}{2}&1&0\\\frac{1}{2}&\frac{3}{2}&\frac{1}{2}&0\\1&\frac{1}{2}&1&0\end{array}\right)\sim\left(\begin{array}{ccc|c}1&0&1&0\\0&1&0&0\\0&0&0&0\end{array}\right)\, .$$ So we find the eigenvector \(\begin{pmatrix}-s\\0\\s\end{pmatrix}\) where \(s\neq 0\) is arbitrary.  Finally, for \(\lambda=\frac{3}{2}\)$$\left(\begin{array}{ccc|c}-\frac{3}{2}&\frac{1}{2}&1&0\\\frac{1}{2}&-1&\frac{1}{2}&0\\1&\frac{1}{2}&-\frac{3}{2}&0\end{array}\right)\sim\left(\begin{array}{ccc|c}1&\frac{1}{2}&-\frac{3}{2}&0\\0&-\frac{5}{4}&\frac{5}{4}&0\\0&\frac{5}{4}&-\frac{5}{4}&0\end{array}\right)\sim\left(\begin{array}{ccc|c}1&0&-1&0\\0&1&-1&0\\0&0&0&0\end{array}\right)\, .$$ So we find the eigenvector \(\begin{pmatrix}s\\s\\s\end{pmatrix}\) where \(s\neq 0\) is arbitrary.  If the mistake \(X\) is in the direction of the eigenvector \(\begin{pmatrix}1\\-2\\1\end{pmatrix}\), then \(Y=0\).  \(\textit{I.e.}\), the satellite returns to the origin \({\cal O}\).  For all subsequent orbits it will again return to the origin.  NASA would be very pleased in this case.  If the mistake \(X\) is in the direction \(\begin{pmatrix}-1\\0\\1\end{pmatrix}\), then \(Y=-X\).  Hence the satellite will move to the point opposite to \(X\).  After the next orbit it will move back to \(X\).  It will continue this wobbling motion indefinitely.  Since this is a stable situation, again, the elite engineers will pat themselves on the back.  
Finally, if the mistake \(X\) is in the direction \(\begin{pmatrix}1\\1\\1\end{pmatrix}\), the satellite will move to a point \(Y=\frac{3}{2} X\) which is further away from the origin.  The same will happen for all subsequent orbits, with the satellite moving a factor \(3/2\) further away from \({\cal O}\) each orbit (in reality, after several orbits, the approximations used by the engineers in their calculations probably fail and a new computation  will be needed).  In this case, the satellite will be lost in outer space and the engineers will likely lose their jobs!
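The eigenvalue analysis is easy to double-check numerically (assuming numpy):

```python
import numpy as np

M = np.array([[0.0, 0.5, 1.0],
              [0.5, 0.5, 0.5],
              [1.0, 0.5, 0.0]])

evals = np.linalg.eigvalsh(M)   # ascending; M is symmetric

# The claimed eigenvector directions, keyed by eigenvalue.
claimed = {
    0.0:  np.array([1.0, -2.0, 1.0]),
    -1.0: np.array([-1.0, 0.0, 1.0]),
    1.5:  np.array([1.0, 1.0, 1.0]),
}
ok = all(np.allclose(M @ v, lam * v) for lam, v in claimed.items())
```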




10.

  1. A basis for \(B^{3}\) is \(\left\{\begin{pmatrix}1\\0\\0\end{pmatrix},\begin{pmatrix}0\\1\\0\end{pmatrix},\begin{pmatrix}0\\0\\1\end{pmatrix}\right\}\, .\)
  2. \(3\).
  3. \(2^{3}=8\).
  4. \({\rm dim}\, B^{3}=3\).
  5. Because the vectors \(\{v_{1},v_{2},v_{3}\}\) are a basis any element \(v\in B^{3}\) can be written uniquely as \(v=b^{1} v_{1} +b^{2} v_{2}+b^{3} v_{3}\) for some triplet of bits \(\begin{pmatrix}b^{1}\\b^{2}\\b^{3}\end{pmatrix}\).  Hence, to compute \(L(v)\) we use linearity of \(L\) $$L(v) = L(b^{1} v_{1} +b^{2} v_{2}+b^{3} v_{3}) = b^{1} L(v_{1})  + b^{2} L(v_{2}) + b^{3} L(v_{3}) $$ $$=\begin{pmatrix}L(v_{1}) & L(v_{2}) & L(v_{3})\end{pmatrix}\begin{pmatrix}b^{1}\\b^{2}\\b^{3}\end{pmatrix}\, .$$
  6. From the notation of the previous part, we see that we can list  linear transformations \(L:B^{3}\to B\) by writing out all possible bit-valued row vectors\begin{eqnarray*}&\begin{pmatrix}0 & 0 & 0\end{pmatrix} ,\\&\begin{pmatrix}1 & 0 & 0\end{pmatrix} ,\\&\begin{pmatrix}0 & 1 & 0\end{pmatrix} ,\\&\begin{pmatrix}0 & 0 & 1\end{pmatrix} ,\\&\begin{pmatrix}1 & 1 & 0\end{pmatrix} ,\\&\begin{pmatrix}1 & 0 & 1\end{pmatrix} ,\\&\begin{pmatrix}0 & 1 & 1\end{pmatrix} ,\\&\begin{pmatrix}1 & 1  & 1\end{pmatrix} .\end{eqnarray*} There are \(2^{3}=8\) different linear transformations \(L:B^{3}\to B\), exactly the same as the number of elements in \(B^{3}\).
  7. Yes, essentially just because \(L_{1}\) and \(L_{2}\) are linear transformations.  In detail, for any bits \((a,b)\) and vectors \((u,v)\) in \(B^{3}\) it is easy to check the linearity property for \((\alpha L_{1}+\beta L_{2})\) $$(\alpha L_{1}+\beta L_{2})(a u + b v)=\alpha L_{1}(a u + b v)+\beta L_{2}(au + bv) $$ $$=\alpha a L_{1}(u) + \alpha b L_{1}(v)+ \beta a L_{2}(u) + \beta b L_{2}(v) $$ $$= a \big(\alpha L_{1}(u) + \beta L_{2}(u)\big)+ b \big(\alpha L_{1}(v) + \beta L_{2}(v)\big)$$ $$=a (\alpha L_{1} + \beta L_{2})(u) + b  (\alpha L_{1} +\beta L_{2})(v)\, .$$Here the first line used the definition of \((\alpha L_{1} + \beta L_{2})\), the second line depended on the linearity of \(L_{1}\) and \(L_{2}\), the third line was just algebra and the fourth used the definition of \((\alpha L_{1}+ \beta L_{2})\) again.
  8. Yes. The easiest way to see this is the identification above of these maps with bit-valued row vectors.  In that notation, a basis is $$\Big\{\begin{pmatrix}1&0&0\end{pmatrix},\begin{pmatrix}0&1&0\end{pmatrix},\begin{pmatrix}0&0&1\end{pmatrix}\Big\}\, .$$ Since this (spanning) set has three (linearly independent) elements, the vector space of linear maps \(B^{3}\to B\) has dimension 3.  This is an example of a general notion called the \(\textit{dual vector space}\).
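The enumeration of all eight maps, and closure under the addition rule, can be spot-checked with bit arithmetic (a sketch; all sums are mod 2):

```python
from itertools import product

# Each linear map L: B^3 -> B is determined by the bits (L(v1), L(v2), L(v3)),
# i.e. by a bit-valued row vector r = (r1, r2, r3) acting mod 2.
maps = list(product((0, 1), repeat=3))

def apply(r, b):
    return sum(ri * bi for ri, bi in zip(r, b)) % 2

# Distinct row vectors give distinct maps (distinct value tables) ...
tables = {r: tuple(apply(r, b) for b in product((0, 1), repeat=3))
          for r in maps}

# ... and the sum of two maps (alpha = beta = 1) is again one of the eight.
def add(r, s):
    return tuple((ri + si) % 2 for ri, si in zip(r, s))

closed = all(add(r, s) in maps for r in maps for s in maps)
```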




11.

  1. \(\frac{d^{2} X}{dt^{2}}=\frac{d^{2}\cos(\omega t )}{dt^{2}}\begin{pmatrix}a\\b\\c\end{pmatrix}=-\omega^{2}\cos(\omega t) \begin{pmatrix}a\\b\\c\end{pmatrix}\, .\) Hence \begin{eqnarray*}F=\cos(\omega t) \begin{pmatrix}-a-b\\-a-2b-c\\-b-c\end{pmatrix}&=&\cos(\omega t) \begin{pmatrix}-1&-1&0\\-1&-2&-1\\0&-1&-1\end{pmatrix}\begin{pmatrix}a\\b\\c\end{pmatrix}\\&=&-\omega^{2}\cos(\omega t) \begin{pmatrix}a\\b\\c\end{pmatrix}\, ,\end{eqnarray*}so $$M=\begin{pmatrix}-1&-1&0\\-1&-2&-1\\0&-1&-1\end{pmatrix}\, .$$
  2. \begin{eqnarray*}\det \begin{pmatrix}\lambda+1&1&0\\1&\lambda+2&1\\0&1&\lambda+1\end{pmatrix}&=&(\lambda+1)\big((\lambda+2)(\lambda+1)-1\big)-(\lambda+1)\\&=&(\lambda+1)\big((\lambda+2)(\lambda+1)-2\big)\\&=&(\lambda+1)\big(\lambda^{2}+3\lambda\big)=\lambda(\lambda+1)(\lambda+3)\end{eqnarray*}so the eigenvalues are \(\lambda=0,-1 ,-3 \).  For the eigenvectors, when \(\lambda=0\) we study:$$M-0\cdot I=\begin{pmatrix}-1&-1&0\\-1&-2&-1\\0&-1&-1\end{pmatrix}\sim\begin{pmatrix}1&1&0\\0&-1&-1\\0&-1&-1\end{pmatrix}\sim\begin{pmatrix}1&0&\!-1\\0&1&1\\0&0&0\end{pmatrix}\, ,$$ so \(\begin{pmatrix}1\\-1\\1\end{pmatrix}\) is an eigenvector.  For \(\lambda=-1\)$$M-(-1)\cdot I=\begin{pmatrix}0&-1&0\\-1&-1&-1\\0&-1&0\end{pmatrix}\sim\begin{pmatrix}1&0&1\\0&1&0\\0&0&0\end{pmatrix}\, ,$$ so \(\begin{pmatrix}-1\\0\\1\end{pmatrix}\) is an eigenvector.  For \(\lambda=-3\) $$M-(-3)\cdot I=\begin{pmatrix}2&-1&0\\-1&1&-1\\0&-1&2\end{pmatrix}\sim\begin{pmatrix}1&-1&1\\0&1&-2\\0&-1&2\end{pmatrix}\sim\begin{pmatrix}1&0&-1\\0&1&-2\\0&0&0\end{pmatrix}\, ,$$so \(\begin{pmatrix}1\\2\\1\end{pmatrix}\) is an eigenvector.
  3. The characteristic frequencies are \(0,1,\sqrt{3}\).
  4. The orthogonal change of basis matrix is $$P=\begin{pmatrix}\frac{1}{\sqrt{3}}&-\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{6}}\\-\frac{1}{\sqrt{3}}& 0&\frac{2}{\sqrt{6}}\\\frac{1}{\sqrt{3}}&\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{6}}\end{pmatrix}\, .$$ It obeys \(MP=PD\) where $$D=\begin{pmatrix}0&0&0\\0&-1&0\\0&0&-3\end{pmatrix}\, .$$
  5. Yes, the direction given by the eigenvector \(\begin{pmatrix}1\\-1\\1\end{pmatrix}\) because its eigenvalue is zero.  This is probably a bad design for a bridge because it can be displaced in this direction with no force!
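The eigen-analysis in parts 1 through 4 can be double-checked numerically; a minimal sketch using numpy (the library choice is ours, not the text's):

```python
import numpy as np

# The matrix M found in part 1
M = np.array([[-1., -1., 0.],
              [-1., -2., -1.],
              [0., -1., -1.]])

# M is symmetric, so eigh applies; eigenvalues come back in ascending order
evals, P = np.linalg.eigh(M)
assert np.allclose(evals, [-3., -1., 0.])  # matches part 2

# The eigenvector matrix is orthogonal and satisfies MP = PD
# (eigh orders the columns by eigenvalue, so D here is diag(-3, -1, 0))
D = np.diag(evals)
assert np.allclose(P.T @ P, np.eye(3))
assert np.allclose(M @ P, P @ D)

# Characteristic frequencies omega = sqrt(-lambda), as in part 3
omegas = np.sqrt(np.abs(evals))
assert np.allclose(omegas, [np.sqrt(3.), 1., 0.])
```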



  1. If we call \(M=\begin{pmatrix}a&b\\b&d\end{pmatrix}\), then \(X^{T}MX=ax^{2} + 2bxy + dy^{2}\).  Similarly putting \(C=\begin{pmatrix}c\\e\end{pmatrix}\) yields \(X^{T} C + C^{T} X = 2 X \cdot C = 2 cx + 2 e y\). Thus $$0=ax^{2} + 2bxy + dy^{2} + 2cx + 2ey + f$$ $$=\begin{pmatrix}x & y\end{pmatrix} \begin{pmatrix}a & b\\b & d\end{pmatrix} \begin{pmatrix}x\\y\end{pmatrix} + \begin{pmatrix}x & y\end{pmatrix} \begin{pmatrix}c\\e\end{pmatrix} + \begin{pmatrix}c & e\end{pmatrix} \begin{pmatrix}x\\y\end{pmatrix} + f.$$
  2. Yes, the matrix \(M\) is symmetric, so it will have a basis of eigenvectors and is similar to a diagonal matrix of real eigenvalues.  To find the eigenvalues notice that \(\det\begin{pmatrix}a-\lambda&b\\b&d-\lambda\end{pmatrix}=(a-\lambda)(d-\lambda)-b^{2}=\big(\lambda-\frac{a+d}{2}\big)^{2}-b^{2}-\big(\frac{a-d}{2}\big)^{2}\).  So the eigenvalues are $$\lambda=\frac{a+d}{2}+\sqrt{b^{2}+\big(\frac{a-d}{2}\big)^{2}}\mbox{ and } \mu=\frac{a+d}{2}-\sqrt{b^{2}+\big(\frac{a-d}{2}\big)^{2}}\, .$$
  3. The trick is to write $$X^{T} M X + C^{T} X + X^{T} C =(X^{T}+C^{T} M^{-1}) M (X+M^{-1} C) - C^{T} M^{-1} C\, ,$$ so that $$(X^{T}+C^{T} M^{-1}) M (X+M^{-1} C) = C^{T} M^{-1} C -f\, .$$ Hence \(Y=X+M^{-1} C\) and \(g=C^{T} M^{-1} C -f\).
  4. The cosine of the angle between vectors \(V\) and \(W\) is given by $$\frac{V\cdot W}{\sqrt{(V\cdot V)(W\cdot W)}}=\frac{V^{T}W}{\sqrt{(V^{T} V)(W^{T} W)}}\, .$$ So replacing \(V\to PV\) and \(W\to PW\) will always give a factor \(P^{T} P\) inside all the products, but \(P^{T}P=I\) for orthogonal matrices.  Hence none of the dot products in the above formula changes, so neither does the angle between \(V\) and \(W\).
  5. If we take the eigenvectors of \(M\), normalize them (\(\textit{i.e.}\) divide them by their lengths), and put them in a matrix \(P\) (as columns) then \(P\) will be an orthogonal matrix.  (If it happens that \(\lambda=\mu\), then we also need to make sure the eigenvectors spanning the two dimensional eigenspace corresponding to \(\lambda\) are orthogonal.)  Then, since \(M\) times the eigenvectors yields just the eigenvectors back again multiplied by their eigenvalues, it follows that \(MP=PD\) where \(D\) is the diagonal matrix made from eigenvalues.
  6. If \(Y=PZ\), then \(Y^{T}MY=Z^{T}P^{T}MPZ=Z^{T}P^{T}PDZ=Z^{T}DZ\) where \(D=\begin{pmatrix}\lambda & 0 \\ 0&\mu \end{pmatrix}\).
  7. Using parts 6 and 3 we have $$\lambda z^{2} + \mu w^{2} = g\, .$$
  8. When \(\lambda=\mu\) and \(g/\lambda=R^{2}\), we get the equation for a circle of radius \(R\) in the \((z,w)\)-plane.  When \(\lambda\), \(\mu\) and \(g\) are positive, we have the equation for an ellipse.  Vanishing \(g\) along with \(\lambda\) and \(\mu\) of opposite signs gives a pair of straight lines.  When \(g\) is non-vanishing, but \(\lambda\) and \(\mu\) have opposite signs, the result is a hyperbola.  These shapes all come from cutting a cone with a plane, and are therefore called conic sections.
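The closed-form eigenvalues from part 2 can be spot-checked numerically; a small sketch with assumed sample coefficients \(a=3\), \(b=1\), \(d=1\) (our own choice, for illustration only):

```python
import numpy as np

# Hypothetical sample coefficients for the symmetric matrix M
a, b, d = 3., 1., 1.
M = np.array([[a, b], [b, d]])

# Closed-form eigenvalues obtained by completing the square (part 2)
s = np.sqrt(b**2 + ((a - d) / 2)**2)
lam = (a + d) / 2 + s
mu = (a + d) / 2 - s

# numpy's symmetric eigensolver returns eigenvalues in ascending order: mu, lam
assert np.allclose(np.linalg.eigvalsh(M), [mu, lam])

# Their sum and product agree with the trace and determinant, as they must
assert np.isclose(lam + mu, a + d)
assert np.isclose(lam * mu, a * d - b**2)
```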


13.    We  show  that  \(L\)  is  bijective  if  and  only  if  \(M\)  is  invertible.
a)    We  suppose  that  \(L\)  is  bijective.

  1. Since \(L\) is injective, its kernel consists of the zero vector alone.  Hence \[{\rm null}\, L=\dim \ker L=0.\] So by the Dimension Formula, \[\dim V={\rm null}\, L+{\rm rank}\, L={\rm rank}\, L.\] Since \(L\) is surjective, \(L(V)=W.\)  Thus \[{\rm rank}\, L=\dim L(V)=\dim W.\] Therefore \[\dim V={\rm rank}\, L=\dim W.\]
  2. Since  \(\dim  V=\dim  W\),  the matrix  \(M\) is square so we can talk about its eigenvalues.  Since \(L\) is injective, its kernel is  the zero vector  alone. That is, the only solution to \(LX=0\) is \(X=0_V\).  But \(LX\) is the same as \(MX\), so the only solution to \(MX=0\) is \(X=0_V\).  So \(M\) does not have zero as an eigenvalue.
  3. Since \(MX=0\) has no non-zero solutions, the matrix \(M\) is invertible.

b)    Now  we  suppose  that  \(M\)  is  an  invertible  matrix.

  1. Since \(M\) is invertible, the system \(MX=0\) has no non-zero solutions. But \(LX\) is the same as \(MX\), so the only solution to  \(LX=0\) is \(X=0_V\).  So \(L\) does not have zero as an eigenvalue.
  2. Since \(LX=0\) has no non-zero solutions, the kernel of \(L\) is the zero vector alone.  So \(L\) is injective.
  3. Since \(M\) is invertible, we must have that \(\dim V=\dim W\).  By the Dimension Formula, we have \[\dim V={\rm null}\, L+{\rm rank}\, L\] and since \(\ker L=\{0_V\}\) we have \({\rm null}\, L=\dim \ker L=0\), so \[\dim W=\dim V={\rm rank}\, L=\dim L(V).\] Since \(L(V)\) is a subspace of \(W\) with the same dimension as \(W\), it must be equal to \(W\).  To see why, pick a basis \(B\) of \(L(V)\).  Each element of \(B\) is a vector in \(W\), so the elements of \(B\) form a linearly independent set in \(W\).  Therefore \(B\) is a basis of \(W\), since the size of \(B\) is equal to \(\dim W\).  So \(L(V)={\rm span}\, B=W.\)  So \(L\) is surjective.
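The chain of equivalences (invertible \(\Leftrightarrow\) trivial kernel \(\Leftrightarrow\) full rank) can be illustrated on a concrete matrix; a sketch with an assumed \(3\times 3\) example of our own choosing:

```python
import numpy as np

# An assumed invertible 3x3 matrix standing in for M
M = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])

# Invertible: nonzero determinant
assert abs(np.linalg.det(M)) > 1e-12

# Full rank, so nullity = dim V - rank = 0 by the Dimension Formula
rank = np.linalg.matrix_rank(M)
assert rank == 3

# Hence MX = 0 has only the trivial solution
X = np.linalg.solve(M, np.zeros(3))
assert np.allclose(X, 0)
```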




  1. \(F_{4}=F_{3}+F_{2}=2+1=3\).
  2. The number of pairs of doves in any given year equals the number of pairs in the previous year plus the number of newly hatched pairs, and there are as many newly hatched pairs as there were pairs of doves in the year before the previous year: \(F_{n+1}=F_{n}+F_{n-1}\).
  3. \(X_{1}=\begin{pmatrix}F_{1}\\F_{0}\end{pmatrix}=\begin{pmatrix}1\\0\end{pmatrix}\) and \(X_{2}=\begin{pmatrix}F_{2}\\F_{1}\end{pmatrix}=\begin{pmatrix}1\\1\end{pmatrix}\). $$MX_{1}=\begin{pmatrix}1 & 1\\ 1& 0\end{pmatrix}\begin{pmatrix}1\\ 0\end{pmatrix} =\begin{pmatrix}1\\1\end{pmatrix}=X_{2}\, .$$
  4. We just need to use the recursion relationship of part (b) in the top slot of \(X_{n+1}\): $$X_{n+1}=\begin{pmatrix}F_{n+1}\\F_{n}\end{pmatrix}=\begin{pmatrix}F_{n}+F_{n-1}\\F_n\end{pmatrix}=\begin{pmatrix}1 & 1\\ 1& 0\end{pmatrix}\begin{pmatrix}F_{n}\\F_{n-1}\end{pmatrix}=M X_{n}\, .$$
  5. Notice \(M\) is symmetric so this is guaranteed to work. $$\det \begin{pmatrix}1-\lambda&1\\1&-\lambda\end{pmatrix}=\lambda(\lambda-1)-1=\big(\lambda-\frac{1}{2}\big)^{2}-\frac{5}{4}\, ,$$ so the eigenvalues are \(\frac{1\pm\sqrt{5}}{2}\). Hence the eigenvectors are \(\begin{pmatrix}\frac{1\pm\sqrt{5}}{2}\\1\end{pmatrix}\), respectively (notice that \(\frac{1+\sqrt{5}}{2}+1=\frac{1+\sqrt{5}}{2}\cdot\frac{1+\sqrt{5}}{2}\) and \(\frac{1-\sqrt{5}}{2}+1=\frac{1-\sqrt{5}}{2}\cdot\frac{1-\sqrt{5}}{2}\)). Thus \(M=PDP^{-1}\) with $$D=\begin{pmatrix}\frac{1+\sqrt{5}}{2}&0\\0&\frac{1-\sqrt{5}}{2}\end{pmatrix} \mbox{ and } P =\begin{pmatrix}\frac{1+\sqrt{5}}{2}&\frac{1-\sqrt{5}}{2}\\1&1\end{pmatrix}\, .$$
  6. \(M^{n}=(P D P^{-1})^{n} = P D P^{-1} P D P^{-1} \ldots P D P^{-1} = P D^{n} P^{-1}\).
  7. Just use the matrix recursion relation of part 4 repeatedly: $$X_{n+1}= M X_{n} = M^{2} X_{n-1}=\cdots = M^{n} X_{1}\, .$$
  8. The eigenvalues are \(\varphi = \frac{1+\sqrt{5}}{2}\) and \(1-\varphi = \frac{1-\sqrt{5}}{2}\).
  9. $$X_{n+1}=\begin{pmatrix}F_{n+1}\\F_{n}\end{pmatrix}=M^{n} X_{1} = P D^{n} P^{-1} X_{1}$$ $$=P \begin{pmatrix}\varphi&0\\0&1-\varphi\end{pmatrix}^{\!n} \begin{pmatrix}\frac{1}{\sqrt{5}}&\star \\-\frac{1}{\sqrt{5}}& \star\end{pmatrix} \begin{pmatrix}1\\ 0\end{pmatrix}=P \begin{pmatrix}\varphi^{n}&0\\0&(1-\varphi)^{n}\end{pmatrix} \begin{pmatrix}\frac{1}{\sqrt{5}}\\-\frac{1}{\sqrt{5}}\end{pmatrix}$$ $$=\begin{pmatrix}\frac{1+\sqrt{5}}{2}&\frac{1-\sqrt{5}}{2}\\1&1\end{pmatrix}\begin{pmatrix}{\frac{\varphi^n}{\sqrt{5}}}\\-\frac{(1-\varphi)^{n}}{\sqrt{5}}\end{pmatrix}=\begin{pmatrix}\star \\\frac{\varphi^{n}-(1-\varphi)^{n}}{\sqrt{5}}\end{pmatrix}.$$ Hence $$F_{n}=\frac{\varphi^{n}-(1-\varphi)^{n}}{\sqrt{5}}\, .$$ These are the famous Fibonacci numbers.
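The closed form for \(F_{n}\) can be checked against the recursion directly; a short sketch (using the convention \(F_{0}=0\), \(F_{1}=1\) from part 3):

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2  # the golden ratio

def fib_closed(n):
    # F_n = (phi^n - (1 - phi)^n) / sqrt(5); rounding removes float error
    return round((phi**n - (1 - phi)**n) / sqrt(5))

# Build the same sequence from the recursion F_{n+1} = F_n + F_{n-1}
fibs = [0, 1]
for _ in range(20):
    fibs.append(fibs[-1] + fibs[-2])

# The two definitions agree term by term
assert [fib_closed(n) for n in range(len(fibs))] == fibs
```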


15.    Call the three vectors \(u,v\) and \(w\), respectively. Then
$$v^{\perp}=v-\frac{u\cdot v}{u\cdot u}\, u =v-\frac{3}{4}\, u = \begin{pmatrix}\frac{1}{4} \\ -\frac{3}{4} \\ \frac{1}{4}\\ \frac{1}{4}\end{pmatrix}\, ,$$ and $$w^{\perp} = w - \frac{u\cdot w}{u\cdot u}\, u-\frac{v^{\perp}\cdot w}{v^{\perp}\cdot v^{\perp}}\, v^{\perp} =w-\frac{3}{4}\, u -\frac{\frac{3}{4}}{\frac{3}{4}}\, v^{\perp} =\begin{pmatrix}-1\\0\\0\\1\end{pmatrix}\, .$$ Dividing by lengths, an orthonormal basis for \({\rm span}\{u,v,w\}\) is $$\left\{\frac{1}{2}\begin{pmatrix}1\\1\\1\\1\end{pmatrix},\ \frac{1}{2\sqrt{3}}\begin{pmatrix}1\\-3\\1\\1\end{pmatrix},\ \frac{1}{\sqrt{2}}\begin{pmatrix}-1\\0\\0\\1\end{pmatrix}\right\}\, .$$
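The Gram-Schmidt computation can be replayed numerically. The vectors below are assumed values consistent with the dot products used above (\(u\cdot u=4\), \(u\cdot v=u\cdot w=3\), and so on); the actual \(u\), \(v\), \(w\) come from the problem statement.

```python
import numpy as np

# Assumed vectors, reconstructed to match the dot products in the text
u = np.array([1., 1., 1., 1.])
v = np.array([1., 0., 1., 1.])
w = np.array([0., 0., 1., 2.])

# Gram-Schmidt, exactly as in the worked solution
v_perp = v - (u @ v) / (u @ u) * u
w_perp = w - (u @ w) / (u @ u) * u - (v_perp @ w) / (v_perp @ v_perp) * v_perp

assert np.allclose(v_perp, [0.25, -0.75, 0.25, 0.25])
assert np.allclose(w_perp, [-1., 0., 0., 1.])

# Dividing by lengths gives an orthonormal basis (the columns of Q)
B = np.column_stack([u, v_perp, w_perp])
Q = B / np.linalg.norm(B, axis=0)
assert np.allclose(Q.T @ Q, np.eye(3))
```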




  1. The columns of \(M\) span \({\rm im} L\) in the basis given for \(W\).
  2. The rows of \(M\) span \(({\rm ker}L)^{\perp}\) in the basis given for \(V\).
  3. First we put \(M\) in RREF: $$M =  \begin{pmatrix}1&2&1&3\\2&1&\!-1&2\\1&0&0&\!-1\\4&1&\!-1&0\end{pmatrix} \sim \begin{pmatrix}1&2&1&3\\0&-3&-3&-4\\0&-2&-1&-4\\0&-7&-5&\!-12\end{pmatrix}$$ $$\sim \begin{pmatrix}1&0&-1&\frac{1}{3}\\0&1&1&\frac{4}{3}\\0&0&1&-\frac{4}{3}\\0&0&2&-\frac{8}{3}\end{pmatrix} \sim \begin{pmatrix}1&0&0&-1\\0&1&0&\frac{8}{3}\\0&0&1&-\frac{4}{3}\\0&0&0&0\end{pmatrix}\,.$$ Hence $${\rm ker}\, L={\rm span}\Big\{v_{1}-\frac{8}{3}v_{2}+\frac{4}{3} v_{3}+v_{4}\Big\}$$ and $${\rm im}\, L={\rm span}\{v_{1}+2v_{2}+v_{3}+4v_{4},\ 2v_{1}+v_{2}+v_{4},\ v_{1}-v_{2}-v_{4}\}\, .$$ Thus \(\dim {\rm ker}\, L=1\) and \(\dim{\rm im}\, L=3\) so $$\dim {\rm ker}\, L+\dim{\rm im}\, L=1+3=4=\dim V\, .$$
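The rank-nullity bookkeeping can be verified numerically; a quick sketch:

```python
import numpy as np

# The matrix of L in the given bases
M = np.array([[1., 2., 1., 3.],
              [2., 1., -1., 2.],
              [1., 0., 0., -1.],
              [4., 1., -1., 0.]])

rank = np.linalg.matrix_rank(M)
nullity = M.shape[1] - rank
assert (rank, nullity) == (3, 1)  # dim im L + dim ker L = 4 = dim V

# The kernel vector found from the RREF is indeed killed by M
k = np.array([1., -8/3, 4/3, 1.])
assert np.allclose(M @ k, 0)
```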




1. $$\left\{\begin{array}{l}5=4a-2b+c\\2=a-b+c\\0=a+b+c\\3=4a+2b+c\, .\end{array}\right.$$

2. (Also 3, 4 and 5) $$\left(\begin{array}{rrr|r}4&-2&1&5\\1&-1&1&2\\1&1&1&0\\4&2&1&3\end{array}\right) \sim \left(\begin{array}{rrr|r}1&1&1&0\\0&-6&-3&5\\0&-2&0&2\\0&-2&-3&3\end{array}\right) \sim \left(\begin{array}{rrr|r}1&0&1&1\\0&1&0&-1\\0&0&-3&-1\\0&0&-3&1\end{array}\right)$$ The system has no solutions because the last two rows would require both \(c=\frac{1}{3}\) and \(c=-\frac{1}{3}\), which is impossible.

6. Let $$M=\begin{pmatrix}4&-2&1\\1&-1&1\\1&1&1\\4&2&1\end{pmatrix}\mbox{ and }V=\begin{pmatrix}5\\2\\0\\3\end{pmatrix}\, .$$ Then $$M^{T} M=\begin{pmatrix}34&0&10\\0&10&0\\10&0&4\end{pmatrix} \mbox{ and } M^{T}V=\begin{pmatrix}34\\-6\\10\end{pmatrix}\, .$$ So $$\begin{amatrix}{3}34&0&10&34\\0&10&0&-6\\10&0&4&10\end{amatrix} \sim \begin{amatrix}{3}1&0&\frac{5}{17}&1\\0&10&0&-6\\0&0&\frac{18}{17}&0\end{amatrix} \sim \begin{amatrix}{3}1&0&0&1\\0&1&0&-\frac{3}{5}\\0&0&1&0\end{amatrix}\, .$$ The least squares solution is \(a=1\), \(b=-\frac{3}{5}\) and \(c=0\).

7. The Captain predicts \(y(2)=1\cdot 2^{2}-\frac{3}{5}\cdot 2+0=\frac{14}{5}\).
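The least squares answer and the prediction can be confirmed with numpy's built-in solver; a minimal sketch:

```python
import numpy as np

# Data matrix and observations from part 6
M = np.array([[4., -2., 1.],
              [1., -1., 1.],
              [1., 1., 1.],
              [4., 2., 1.]])
V = np.array([5., 2., 0., 3.])

# lstsq minimizes ||Mx - V||, i.e. it solves the normal equations M^T M x = M^T V
sol, *_ = np.linalg.lstsq(M, V, rcond=None)
a, b, c = sol
assert np.allclose(sol, [1., -3/5, 0.])

# The prediction y(2) = a*2^2 + b*2 + c
pred = a * 4 + b * 2 + c
assert np.isclose(pred, 14/5)
```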

