
18.5: Sample Second Midterm


    Here are some worked problems typical of what you might expect on a second midterm examination.

    1.
    Find an LU decomposition for the matrix
    $$
    \begin{pmatrix}
    1&1&-1&2\\
    1&3&2&2\\
    -1&-3&-4&6\\
    0&4&7&-2
    \end{pmatrix}
    $$
    Use your result to solve the system
    $$
    \left\{
    \begin{array}{cccccccc}
    x&+&y&-&z&+&2w&=7\\
    x&+&3y&+&2z&+&2w&=6\\
    -x&-&3y&-&4z&+&6w&=12\\
    &&4y&+&7z&-&2w&=-7
    \end{array}
    \right.
    $$

    2.
    Let
    $$
    A=\left(\begin{array}{ccc}1&1&1\\2&2&3\\4&5&6\end{array}\right)\, .
    $$
    Compute \(\det A\).
    Find all solutions to (i) \(A X = 0\) and (ii) \(A X=\left(
    \begin{array}{c}1\\2\\3\end{array}\right)\) for the vector \(X\in \mathbb{R}^{3}\). Find, but do not solve, the characteristic polynomial of \(A\).

    3.
    Let \(M\) be any \(2\times 2\) matrix. Show
    $$
    \det M = -\frac{1}{2} {\rm tr} M^{2} + \frac{1}{2} ({\rm tr} M)^{2}\, .
    $$

    4.
    \(\textit{The permanent:}\) Let \(M=(M^{i}_{j})\) be an \(n\times n\) matrix. An operation producing a single number from \(M\) similar
    to the determinant is the "permanent''
    $$
    {\rm perm} \, M =\sum_{\sigma} M^{1}_{\sigma(1)} M^{2}_{\sigma(2)}\cdots M^{n}_{\sigma(n)}\, .
    $$
    For example
    $$
    {\rm perm} \begin{pmatrix}a & b \\ c & d\end{pmatrix}=ad+bc\, .
    $$
    Calculate
    $$
    {\rm perm} \begin{pmatrix}1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9\end{pmatrix}\, .
    $$

    What do you think would happen to the permanent of an \(n\times n\) matrix \(M\) if (include a \(\textit{brief}\) explanation with each answer):

    1. You multiplied \(M\) by a number \(\lambda\).
    2. You multiplied a row of \(M\) by a number \(\lambda\).
    3. You took the transpose of \(M\).
    4. You swapped two rows of \(M\).


    5.
    Let \(X\) be an \(n\times 1\) matrix subject to
    $$
    {X}^{T} X=(1)\, ,
    $$
    and define
    $$
    H=I - 2 X \,\!X^{T}\, ,
    $$
    (where \(I\) is the \(n\times n\) identity matrix).
    Show
    $$
    H=H^{T}=H^{-1}.
    $$

    6. Suppose \(\lambda\) is an eigenvalue of the matrix \(M\) with associated eigenvector \(v\). Is \(v\) an eigenvector of \(M^{k}\) (where \(k\) is any positive integer)? If so, what would the associated
    eigenvalue be?

    Now suppose that the matrix \(N\) is \(\textit{nilpotent}\), \(\textit{i.e.}\)
    $$
    N^{k}=0
    $$
    for some integer \(k\geq 2\). Show that \(0\) is the only eigenvalue of \(N\).

    7.
    Let \(M=\begin{pmatrix}3&-5\\1&-3\end{pmatrix}\). Compute \(M^{12}\). (Hint: \(2^{12}=4096\).)

    8. \(\textit{The Cayley-Hamilton Theorem}\):
    Calculate the characteristic polynomial \(P_{M}(\lambda)\) of the matrix \(M=\begin{pmatrix}a & b\\c & d\end{pmatrix}\). Now compute the matrix polynomial \(P_{M}(M)\). What do you observe? Now suppose the \(n\times n\) matrix \(A\) is "similar'' to a diagonal matrix \(D\), in other words $$A=P^{-1}DP$$ for some invertible matrix \(P\), where \(D\) is a matrix with values \(\lambda_{1}\), \(\lambda_{2}, \ldots, \lambda_{n}\) along its diagonal. Show that the two matrix polynomials \(P_{A}(A)\) and \(P_{A}(D)\) are similar (\(\textit{i.e.}\) \(P_{A}(A)=P^{-1} P_{A}(D) P\)). Finally, compute \(P_{A}(D)\); what can you say about \(P_{A}(A)\)?

    9.
    \(\textit{Define}\) what it means for a set \(U\) to be a subspace of a vector space \(V\). Now let \(U\) and \(W\) be non-trivial subspaces of \(V\). Are the following also subspaces? (Remember that \(\cup\) means "union'' and \(\cap\) means "intersection''.)

    1. \(U \cup W\)
    2. \(U \cap W\)


    In each case \(\textit{draw}\) examples in \(\mathbb{R}^{3}\) that justify your answers. If you answered "yes'' to either part also give a general explanation why this is the case.

    10.
    \(\textit{Define}\) what it means for a set of vectors \(\{v_{1},v_{2},\ldots,v_{n}\}\) to (i) be linearly independent, (ii) span a vector space \(V\) and (iii) be a basis for a vector space \(V\).

    Consider the following vectors in \(\mathbb{R}^{3}\)
    $$ u =\begin{pmatrix} -1\\ -4\\ 3 \end{pmatrix}\, ,\qquad
    v =\begin{pmatrix} 4\\ 5\\ 0 \end{pmatrix}\, ,\qquad
    w =\begin{pmatrix} 10\\ 7\\ h+3 \end{pmatrix}\, .
    $$
    For which values of \(h\) is \(\{u,v,w\}\) a basis for \(\mathbb{R}^{3}\)?

    Solutions

    1.
    $$
    \begin{pmatrix}
    1&1&-1&2\\
    1&3&2&2\\
    -1&-3&-4&6\\
    0&4&7&-2
    \end{pmatrix}
    =
    \begin{pmatrix}
    1&0&0&0\\
    1&1&0&0\\
    -1&0&1&0\\
    0&0&0&1
    \end{pmatrix}
    \begin{pmatrix}
    1&1&-1&2\\
    0&2&3&0\\
    0&-2&-5&8\\
    0&4&7&-2
    \end{pmatrix}
    $$
    $$
    =
    \begin{pmatrix}
    1&0&0&0\\
    1&1&0&0\\
    -1&-1&1&0\\
    0&2&0&1
    \end{pmatrix}
    \begin{pmatrix}
    1&1&-1&2\\
    0&2&3&0\\
    0&0&-2&8\\
    0&0&1&-2
    \end{pmatrix}
    $$ $$=
    \begin{pmatrix}
    1&0&0&0\\
    1&1&0&0\\
    -1&-1&1&0\\
    0&2&-\frac{1}{2}&1
    \end{pmatrix}
    \begin{pmatrix}
    1&1&-1&2\\
    0&2&3&0\\
    0&0&-2&8\\
    0&0&0&2
    \end{pmatrix}\, .
    $$
    To solve \(MX=V\) using \(M=LU\) we first solve \(LW=V\) whose augmented matrix reads
    $$
    \left(
    \begin{array}{cccc|c}
    1&0&0&0&7\\
    1&1&0&0&6\\
    -1&-1&1&0&12\\
    0&2&-\frac12&1&-7
    \end{array}\right)
    \sim
    \left(
    \begin{array}{cccc|c}
    1&0&0&0&7\\
    0&1&0&0&-1\\
    0&0&1&0&18\\
    0&2&-\frac{1}{2}&1&-7
    \end{array}\right)$$ $$\sim
    \left(
    \begin{array}{cccc|c}
    1&0&0&0&7\\
    0&1&0&0&-1\\
    0&0&1&0&18\\
    0&0&0&1&4
    \end{array}\right)\, ,
    $$
    from which we can read off \(W\). Now we compute \(X\) by solving \(UX=W\) with the augmented matrix
    $$
    \left(
    \begin{array}{cccc|c}
    1&1&-1&2&7\\
    0&2&3&0&-1\\
    0&0&-2&8&18\\
    0&0&0&2&4
    \end{array}\right)
    \sim
    \left(
    \begin{array}{cccc|c}
    1&1&-1&2&7\\
    0&2&3&0&-1\\
    0&0&-2&0&2\\
    0&0&0&1&2
    \end{array}\right)
    $$
    $$
    \sim
    \left(
    \begin{array}{cccc|c}
    1&1&-1&2&7\\
    0&2&0&0&2\\
    0&0&1&0&-1\\
    0&0&0&1&2
    \end{array}\right)
    \sim
    \left(
    \begin{array}{cccc|c}
    1&0&0&0&1\\
    0&1&0&0&1\\
    0&0&1&0&-1\\
    0&0&0&1&2
    \end{array}\right)
    $$
    So \(x=1\), \(y=1\), \(z=-1\) and \(w=2\).
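
    As a quick sanity check (not something you would write on the exam), the factorization and the solution can be verified numerically. The sketch below assumes NumPy and simply reuses the \(L\) and \(U\) found above.

    ```python
    import numpy as np

    # The matrix M, the factors L and U found above, and the right hand side V.
    M = np.array([[ 1,  1, -1,  2],
                  [ 1,  3,  2,  2],
                  [-1, -3, -4,  6],
                  [ 0,  4,  7, -2]], dtype=float)
    L = np.array([[ 1,  0,    0, 0],
                  [ 1,  1,    0, 0],
                  [-1, -1,    1, 0],
                  [ 0,  2, -0.5, 1]], dtype=float)
    U = np.array([[1, 1, -1, 2],
                  [0, 2,  3, 0],
                  [0, 0, -2, 8],
                  [0, 0,  0, 2]], dtype=float)
    V = np.array([7, 6, 12, -7], dtype=float)

    assert np.allclose(L @ U, M)   # the factorization really reproduces M
    W = np.linalg.solve(L, V)      # forward substitution: LW = V
    X = np.linalg.solve(U, W)      # back substitution:    UX = W
    print(X)                       # [ 1.  1. -1.  2.]
    ```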

    2.
    $$
    \det A= 1\cdot(2\cdot 6-3\cdot 5)-1\cdot(2\cdot 6-3\cdot 4)+1\cdot(2\cdot 5-2\cdot 4)=-1\, .
    $$
    (i) Since \(\det A\neq 0\), the homogeneous system \(AX=0\) only has the solution \(X=0\).
    (ii) It is efficient to compute the adjoint
    $$
    {\rm adj}\ A= \begin{pmatrix}-3&0& 2\\ -1&2& -1 \\1&-1 & 0 \end{pmatrix}^{\!T}
    = \begin{pmatrix}-3&-1& 1\\ 0&2& -1 \\2&-1 & 0 \end{pmatrix}
    $$
    Hence, since \(A^{-1}=\frac{1}{\det A}\,{\rm adj}\ A\) and \(\det A=-1\),
    $$A^{-1}=\begin{pmatrix}3&1& -1\\ 0&-2& 1 \\-2&1 & 0 \end{pmatrix}\, .$$

    Thus
    $$
    X=\begin{pmatrix}3&1& -1\\ 0&-2& 1 \\-2&1 & 0 \end{pmatrix}
    \begin{pmatrix}1\\2\\3
    \end{pmatrix}=
    \begin{pmatrix}2\\-1\\0
    \end{pmatrix}\, .
    $$
    Finally,
    $$
    P_{A}(\lambda)=-\det \begin{pmatrix}1-\lambda&1&1\\2&2-\lambda&3\\4&5&6-\lambda\end{pmatrix}$$
    $$
    =-\Big[(1-\lambda)[(2-\lambda)(6-\lambda)-15]-[2\cdot(6-\lambda)-12]+[10-4\cdot(2-\lambda)]\Big]
    $$
    $$
    =\lambda^{3}-9\lambda^{2}-\lambda+1\, .
    $$
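
    Each of these answers is easy to double-check by machine; here is a minimal NumPy sketch (np.poly applied to a square matrix returns the coefficients of its characteristic polynomial \(\det(\lambda I - A)\), highest power first):

    ```python
    import numpy as np

    A = np.array([[1, 1, 1],
                  [2, 2, 3],
                  [4, 5, 6]], dtype=float)

    print(np.linalg.det(A))               # -1.0 (up to rounding)
    print(np.linalg.solve(A, [1, 2, 3]))  # [ 2. -1.  0.]
    print(np.poly(A))                     # [ 1. -9. -1.  1.]  i.e. λ³ - 9λ² - λ + 1
    ```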

    3.
    Call \(M=\begin{pmatrix}a&b\\c&d\end{pmatrix}\). Then \({\rm det}\, M= ad-bc\), while
    $$
    -\frac{1}{2} {\rm tr}\, M^{2} + \frac{1}{2} ({\rm tr}\, M)^{2} = -\frac{1}{2} {\rm tr} \begin{pmatrix}a^{2} + bc & * \\ * & bc + d^{2}\end{pmatrix} +\frac{1}{2} (a+d)^{2}$$ $$
    =-\frac{1}{2} (a^{2} + 2bc + d^{2}) + \frac{1}{2} (a^{2} + 2ad + d^{2}) = ad - bc\, ,
    $$
    which is what we were asked to show.
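
    Since the identity holds for every \(2\times 2\) matrix, it can also be spot-checked numerically on random matrices, as in this short sketch (assuming NumPy):

    ```python
    import numpy as np

    # Spot-check det M = -(1/2) tr(M²) + (1/2) (tr M)² on random 2x2 matrices.
    rng = np.random.default_rng(0)
    for _ in range(5):
        M = rng.standard_normal((2, 2))
        lhs = np.linalg.det(M)
        rhs = -0.5 * np.trace(M @ M) + 0.5 * np.trace(M) ** 2
        assert np.isclose(lhs, rhs)
    ```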

    4.

    $$
    {\rm perm} \begin{pmatrix}1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9\end{pmatrix}
    =1\cdot(5\cdot 9+6\cdot 8)+2\cdot(4\cdot 9+6\cdot 7)+3\cdot(4\cdot 8+5\cdot 7)=450\, .
    $$

    a) Multiplying \(M\) by \(\lambda\) replaces every matrix element \(M^{i}_{\sigma(i)}\) in the formula for the permanent by \(\lambda M^{i}_{\sigma(i)}\), and therefore produces an overall factor \(\lambda^{n}\).

    b) Multiplying the \(i^{\rm th}\) row by \(\lambda\) replaces \(M^{i}_{\sigma(i)}\) in the formula for the permanent by \(\lambda M^{i}_{\sigma(i)}\). Therefore the permanent is multiplied by an overall factor \(\lambda\).

    c) The permanent of a matrix transposed equals the permanent of the original matrix, because in the formula for the permanent this amounts to summing over permutations of rows rather than columns. Each product \(M^{\sigma(1)}_{1} M^{\sigma(2)}_{2}\ldots M^{\sigma(n)}_{n}\) can then be sorted back into its original order using the inverse permutation \(\sigma^{-1}\), and since summing over permutations is equivalent to summing over inverse permutations, the permanent is unchanged.

    d) Swapping two rows also leaves the permanent unchanged. The argument is almost the same as in the previous part, except
    that we need only reshuffle the two matrix elements \(M^{j}_{\sigma(i)}\) and \(M^{i}_{\sigma(j)}\) (in the case where rows \(i\) and \(j\) were swapped). Then we use the fact that summing over all permutations \(\sigma\) is equivalent to summing over all permutations \(\widetilde \sigma\) obtained
    by swapping a pair of outputs of \(\sigma\).
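
    All four claims (and the value 450) can be tested directly from the defining sum; here is a brute-force sketch using itertools.permutations:

    ```python
    from itertools import permutations
    import numpy as np

    def perm(M):
        """The permanent: the determinant's sum over permutations, without signs."""
        n = len(M)
        return sum(np.prod([M[i][s[i]] for i in range(n)])
                   for s in permutations(range(n)))

    M = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
    print(perm(M))                            # 450
    assert perm(2 * M) == 2 ** 3 * perm(M)    # (a) an overall factor λⁿ
    M2 = M.copy(); M2[0] *= 5
    assert perm(M2) == 5 * perm(M)            # (b) scaling one row: a factor λ
    assert perm(M.T) == perm(M)               # (c) transpose: unchanged
    assert perm(M[[1, 0, 2]]) == perm(M)      # (d) row swap: unchanged
    ```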

    5. First, let us write \((1)=1\) (the \(1\times 1\) identity matrix). Then we calculate
    $$
    H^{T}=(I-2 X X^{T})^{T} = I^{T} -2 (X X^{T})^{T} = I -2 (X^{T})^{T} X^{T} = I - 2 X X^{T} = H\, ,
    $$
    which demonstrates the first equality. Now we compute
    $$
    H^{2} = (I-2 X X^{T}) (I - 2 X X^{T}) = I - 4 X X^{T} + 4 X X^{T} X X^{T} $$ $$= I - 4 X X^{T} + 4 X (X^{T} X) X^{T} = I - 4 X X^{T} + 4 X\cdot 1\cdot X^{T} = I\, .
    $$
    So, since \(HH=I\), we have \(H^{-1}=H\).
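
    (Readers who have met Householder reflections will recognize \(H\). In any case, both identities are easy to confirm numerically for a random unit column \(X\); a minimal sketch, assuming NumPy:)

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((5, 1))
    X /= np.linalg.norm(X)                 # enforce XᵀX = (1)

    H = np.eye(5) - 2 * X @ X.T
    assert np.allclose(H, H.T)             # H = Hᵀ
    assert np.allclose(H @ H, np.eye(5))   # H² = I, hence H⁻¹ = H
    ```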

    6. We know \(Mv=\lambda v\). Hence
    $$
    M^{2} v = M M v = M \lambda v = \lambda M v = \lambda^{2} v\, ,
    $$
    and similarly
    $$
    M^{k} v = \lambda M^{k-1} v = \ldots = \lambda^{k} v \, .
    $$
    So \(v\) is an eigenvector of \(M^{k}\) with eigenvalue \(\lambda^{k}\).

    Now let us assume \(v\) is an eigenvector of the nilpotent matrix \(N\) with eigenvalue \(\lambda\). Then from above
    $$
    N^{k} v = \lambda^{k} v
    $$
    but by nilpotence, we also have
    $$
    N^{k} v = 0\, .
    $$
    Hence \(\lambda^{k} v = 0\), but \(v\) (being an eigenvector) cannot vanish. Thus \(\lambda^{k}=0\) and in turn \(\lambda=0\).
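
    Here is a small numerical illustration of both facts (the eigenvalue \(\lambda^{k}\) of \(M^{k}\), and the vanishing spectrum of a nilpotent matrix), using an example matrix chosen only for this sketch:

    ```python
    import numpy as np

    M = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    w, v = np.linalg.eig(M)                # columns of v are eigenvectors of M
    for lam, vec in zip(w, v.T):
        # M³ v = λ³ v for every eigenpair (λ, v)
        assert np.allclose(np.linalg.matrix_power(M, 3) @ vec, lam ** 3 * vec)

    N = np.array([[0.0, 1.0],
                  [0.0, 0.0]])             # N² = 0, so N is nilpotent
    print(np.linalg.eig(N)[0])             # [0. 0.]
    ```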

    7. Let us think about the eigenvalue problem \(Mv=\lambda v\). This has solutions when
    $$
    0={\rm det} \begin{pmatrix}3-\lambda & -5 \\ 1 & -3-\lambda\end{pmatrix}=\lambda^{2}-4\Rightarrow \lambda = \pm 2\, .
    $$
    The associated eigenvectors solve the homogeneous systems (in augmented matrix form)
    $$
    \left(\begin{array}{cc|c}1 & -5 & 0\\ 1 & -5 & 0\end{array}\right)\sim
    \left(\begin{array}{cc|c} 1 & -5 & 0\\ 0 & 0 & 0\end{array}\right)
    \mbox{ and }
    \left(\begin{array}{cc|c} 5 & -5 & 0\\ 1 & -1 & 0\end{array}\right)\sim
    \left(\begin{array}{cc|c} 1 & -1 & 0\\ 0 & 0 & 0\end{array}\right)\, ,$$
    respectively, so they are \(v_{2}=\begin{pmatrix} 5 \\ 1 \end{pmatrix}\) and \(v_{-2} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}\). Hence \(M^{12} v_{2} = 2^{12} v_{2}\) and \(M^{12}v_{-2} = (-2)^{12} v_{-2}\). Now, \(\begin{pmatrix} x \\ y \end{pmatrix}=\frac{x-y}{4}\begin{pmatrix} 5 \\ 1 \end{pmatrix} -\frac{x-5y}{4} \begin{pmatrix} 1 \\ 1 \end{pmatrix}\) (this was obtained by solving the linear system \(a v_{2} + b v_{-2} = \begin{pmatrix} x \\ y \end{pmatrix}\) for \(a\) and \(b\)).
    Thus
    $$
    M^{12} \begin{pmatrix} x \\ y \end{pmatrix} = \frac{x-y}{4} M^{12} v_{2} -\frac{x-5y}{4} M^{12} v_{-2}$$ $$ = 2^{12} \Big(\frac{x-y}{4} v_{2} -\frac{x-5y}{4} v_{-2}\Big)
    = 2^{12} \begin{pmatrix} x \\ y \end{pmatrix}\, .
    $$
    Thus $$M^{12}=\begin{pmatrix} 4096 & 0 \\ 0 & 4096\end{pmatrix}\, .$$
    \(\textit{If you understand the above explanation, then you have a good understanding of diagonalization. A quicker route}\) \(\textit{is simply to observe that}\) \(M^{2} = \begin{pmatrix}4 & 0 \\ 0 & 4\end{pmatrix}\).
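
    Either route is quickly confirmed by machine, e.g. with NumPy's matrix_power:

    ```python
    import numpy as np

    M = np.array([[3, -5],
                  [1, -3]])
    print(np.linalg.matrix_power(M, 2))    # [[4 0], [0 4]]
    print(np.linalg.matrix_power(M, 12))   # 4096 times the identity
    ```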

    8.
    $$
    P_{M}(\lambda) = (-1)^{2} {\rm det}\begin{pmatrix} a-\lambda & b \\ c &d-\lambda\end{pmatrix}
    =(\lambda-a)(\lambda-d) - bc\, .
    $$
    Thus
    $$
    P_{M}(M)=(M-a I )(M- d I) - bc I $$ $$=
    \left(\begin{pmatrix}a&b\\c&d\end{pmatrix}-\begin{pmatrix}a&0\\0&a\end{pmatrix}\right)
    \left(\begin{pmatrix}a&b\\c&d\end{pmatrix}-\begin{pmatrix}d&0\\0&d\end{pmatrix}\right)-\begin{pmatrix}bc&0\\0&bc\end{pmatrix}
    $$
    $$
    =\begin{pmatrix}0& b\\c&d-a\end{pmatrix}\begin{pmatrix}a-d&b\\ c&0\end{pmatrix}-\begin{pmatrix}bc&0\\0&bc\end{pmatrix}=0\, .
    $$
    Observe that any \(2\times 2\) matrix is a zero of its own characteristic polynomial (\(\textit{in fact this holds for square matrices of any size}\)).

    Now if \(A=P^{-1}DP\) then \(A^{2}=P^{-1}DPP^{-1}DP=P^{-1}D^{2}P\). Similarly \(A^{k}=P^{-1} D^{k} P\). So for \(\textit{any}\) matrix polynomial we have
    \begin{eqnarray}
    && A^{n} + c_{1} A^{n-1} + \cdots + c_{n-1} A + c_{n} I \nonumber \\ &=& P^{-1}D^{n}P + c_{1} P^{-1}D^{n-1}P + \cdots + c_{n-1} P^{-1}DP + c_{n} P^{-1}P \nonumber \\ &=&
    P^{-1}( D^{n} + c_{1} D^{n-1} + \cdots + c_{n-1} D + c_{n} I)P\, .\nonumber
    \end{eqnarray}
    Thus we may conclude \(P_{A}(A)=P^{-1} P_{A}(D) P\).

    Now suppose
    \(D=\begin{pmatrix}\lambda_{1} & 0 &\cdots & 0 \\ 0 &\lambda_{2} & & 0\\ \vdots& & \ddots &\vdots \\ 0 &&\cdots &\lambda_{n} \end{pmatrix}\). Then
    $$P_{A}(\lambda) = {\rm det} (\lambda I - A) = {\rm det} (\lambda P^{-1} I P - P^{-1} D P) = {\rm det}\, P^{-1} \cdot {\rm det} (\lambda I - D)\cdot {\rm det}\, P$$
    $$= {\rm det} (\lambda I - D)={\rm det}
    \begin{pmatrix}\lambda-\lambda_{1} & 0 &\cdots & 0 \\ 0 &\lambda-\lambda_{2} & & 0 \\ \vdots& & \ddots &\vdots \\ 0 & 0&\cdots &\lambda-\lambda_{n} \end{pmatrix}$$ $$=(\lambda-\lambda_{1})(\lambda-\lambda_{2})\ldots (\lambda-\lambda_{n})\, .
    $$
    Thus we see that \(\lambda_{1}\), \(\lambda_{2}, \ldots , \lambda_{n}\) are the eigenvalues of \(A\). Finally we compute
    $$
    P_{A}(D) = (D-\lambda_{1} I)(D-\lambda_{2} I)\ldots (D-\lambda_{n} I)
    $$
    $$
    =\begin{pmatrix} 0 & 0 &\cdots & 0 \\ 0 &\lambda_{2} & & 0 \\ \vdots& & \ddots &\vdots \\ 0 & 0&\cdots &\lambda_{n} \end{pmatrix}
    \begin{pmatrix}\lambda_{1} & 0 &\cdots & 0 \\ 0 & 0 & & 0 \\ \vdots& & \ddots &\vdots \\ 0 & 0&\cdots &\lambda_{n} \end{pmatrix}
    \ldots
    \begin{pmatrix}\lambda_{1} & 0 &\cdots & 0 \\ 0 &\lambda_{2} & & 0 \\ \vdots& & \ddots & \vdots\\ 0 & 0&\cdots & 0\end{pmatrix}
    =0\, .
    $$
    We conclude that \(P_{A}(A)=P^{-1}P_{A}(D)P=0\).
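
    The Cayley-Hamilton theorem itself is easy to test numerically: build the characteristic polynomial of a random matrix and evaluate it at the matrix, as in this sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    M = rng.standard_normal((4, 4))
    coeffs = np.poly(M)                    # coefficients of P_M, highest power first
    P_of_M = sum(c * np.linalg.matrix_power(M, len(coeffs) - 1 - k)
                 for k, c in enumerate(coeffs))
    assert np.allclose(P_of_M, 0)          # P_M(M) = 0 (up to rounding)
    ```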

    9. A subset of a vector space is called a subspace if it itself is a vector space, using the rules for vector addition and scalar
    multiplication inherited from the original vector space.


    a) So long as \(U\neq U\cup W\neq W\) the answer is \(\textit{no}\). Take, for example, \(U\) to be the \(x\)-axis in \(\mathbb{R}^{2}\)
    and \(W\) to be the \(y\)-axis. Then \((1,0)\in U\) and \((0,1)\in W\), but
    \((1,0)+(0,1)=(1,1)\notin U\cup W\). So \(U\cup W\) is not additively closed and is not a vector space (and thus not a subspace). It is easy to draw the example described.

    b) Here the answer is always \(\textit{yes}\). The proof is not difficult. Take vectors \(u\) and \(w\) such that \(u\in U\cap W\ni w\). This means that \(\textit{both}\) \(u\) and \(w\) are in \(\textit{both}\) \(U\) and \(W\). But, since \(U\) is a vector space, any linear combination \(\alpha u + \beta w\) is also in \(U\). Similarly, \(\alpha u + \beta w \in W\). Hence \(\alpha u + \beta w\in U\cap W\). So closure holds in \(U\cap W\) and this set is a subspace by the subspace theorem. Here, a good picture to draw is two planes through the origin in \(\mathbb{R}^{3}\)
    intersecting at a line (also through the origin).
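
    Part (a)'s counterexample can even be phrased as a three-line check, where \(U\) and \(W\) are the coordinate axes of \(\mathbb{R}^{2}\) as above:

    ```python
    import numpy as np

    on_x_axis = lambda p: np.isclose(p[1], 0.0)          # U: the x-axis
    on_y_axis = lambda p: np.isclose(p[0], 0.0)          # W: the y-axis
    in_union  = lambda p: on_x_axis(p) or on_y_axis(p)   # U ∪ W

    u, w = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    print(in_union(u), in_union(w), in_union(u + w))     # True True False
    ```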

    10.

    (i) We say that the vectors \(\{v_{1},v_{2},\ldots v_{n}\}\) are linearly independent if there exist \(\textit{no}\) constants \(c^{1}\), \(c^{2},\ldots c^{n}\) (not all vanishing) such that \(c^{1} v_{1} + c^{2} v_{2} +\cdots + c^{n} v_{n}=0\). Alternatively, we can require that there is no non-trivial solution for scalars \(c^{1}\), \(c^{2},\ldots, c^{n}\) to the linear system \(c^{1} v_{1} + c^{2} v_{2} +\cdots + c^{n} v_{n}=0\).

    (ii) We say that these vectors span a vector space \(V\) if the set span\(\{v_{1},v_{2},\ldots v_{n}\}=\{c^{1} v_{1} + c^{2} v_{2} +\cdots + c^{n} v_{n}:c^{1},c^{2},\ldots c^{n}\in \mathbb{R}\}=V\).

    (iii) We call \(\{v_{1},v_{2},\ldots v_{n}\}\) a basis for \(V\) if \(\{v_{1},v_{2},\ldots v_{n}\}\) are linearly independent \(\textit{and}\) span\(\{v_{1},v_{2},\ldots v_{n}\}=V\).

    For \(u,v,w\) to be a basis for \(\mathbb{R}^{3}\), we firstly need (the spanning requirement) that any vector \(\begin{pmatrix}x \\ y \\ z\end{pmatrix}\) can be written as a linear combination of \(u\), \(v\) and \(w\)
    $$
    c^{1} \begin{pmatrix}-1 \\ -4 \\ 3\end{pmatrix} + c^{2} \begin{pmatrix}4 \\ 5 \\ 0\end{pmatrix} + c^{3} \begin{pmatrix}10 \\ 7 \\ h+3\end{pmatrix} = \begin{pmatrix}x \\ y \\ z\end{pmatrix}\, .
    $$
    The linear independence requirement implies that when \(x=y=z=0\), the only solution to the above system is \(c^{1}=c^{2}=c^{3}=0\).
    But the above system in matrix language reads
    $$
    \begin{pmatrix}
    -1 &4 & 10 \\ -4 & 5 & 7 \\ 3 & 0 & h+3
    \end{pmatrix}
    \begin{pmatrix}c^{1} \\ c^{2} \\ c^{3}\end{pmatrix}=\begin{pmatrix}x \\ y \\ z\end{pmatrix}\, .
    $$
    Both requirements mean that the matrix on the left hand side must be invertible, so we examine its determinant
    $$
    {\rm det} \begin{pmatrix}
    -1 &4 & 10 \\ -4 & 5 & 7 \\ 3 & 0 & h+3
    \end{pmatrix}
    = -4\cdot(-4\cdot(h+3)-7\cdot 3)+ 5\cdot(-1\cdot(h+3)-10\cdot 3)$$ $$=11(h-3)\, .
    $$
    Hence we obtain a basis whenever \(h\neq 3\).
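
    As a check on the algebra, the determinant can be evaluated for a few sample values of \(h\):

    ```python
    import numpy as np

    def det_uvw(h):
        return np.linalg.det(np.array([[-1, 4, 10],
                                       [-4, 5, 7],
                                       [ 3, 0, h + 3]], dtype=float))

    print(det_uvw(3))                # ≈ 0: at h = 3, {u, v, w} fails to be a basis
    print(det_uvw(5), 11 * (5 - 3))  # both 22, matching 11(h - 3)
    ```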

    Contributor


    This page titled 18.5: Sample Second Midterm is shared under a not declared license and was authored, remixed, and/or curated by David Cherney, Tom Denton, & Andrew Waldron.
