
18.11: Movie Scripts 9-10


    G.9 Linear Independence

    Worked Example

    This video gives some more details behind the example. Consider the following four vectors in \(\mathbb{R}^{3}\):
    \[
    v_{1}=\begin{pmatrix}4\\-1\\3\end{pmatrix}, \qquad
    v_{2}=\begin{pmatrix}-3\\7\\4\end{pmatrix}, \qquad
    v_{3}=\begin{pmatrix}5\\12\\17\end{pmatrix}, \qquad
    v_{4}=\begin{pmatrix}-1\\1\\0\end{pmatrix}.
    \]
    The example asks whether they are linearly independent, and the answer is immediate: NO, four vectors can never be linearly independent in \(\mathbb{R}^{3}\). This vector space is simply not big enough for that, but you need the notion of the dimension of a vector space to see why. So we expect the vectors \(v_{1}\), \(v_{2}\), \(v_{3}\) and \(v_{4}\) to be linearly dependent, which means we need to show that there is a solution to
    $$
    \alpha_{1} v_{1} + \alpha_{2} v_{2} + \alpha_{3} v_{3} + \alpha_{4} v_{4} = 0
    $$
    with the numbers \(\alpha_{1}\), \(\alpha_{2}\), \(\alpha_{3}\) and \(\alpha_{4}\) not all vanishing.

    To find this solution we need to set up a linear system. Writing out the above linear combination gives
    $$
    \begin{array}{cccccc}
    4\alpha_{1}&-3\alpha_{2}&+5\alpha_{3}&-\alpha_{4} &=&0\, ,\\
    -\alpha_{1}&+7\alpha_{2}&+12\alpha_{3}&+\alpha_{4} &=&0\, ,\\
    3\alpha_{1}&+4\alpha_{2}&+17\alpha_{3}& &=&0\, .\\
    \end{array}
    $$
    This is easily handled using an augmented matrix whose columns are just the vectors we started with:
    $$
    \left(
    \begin{array}{cccc|c}
    4&-3&5&-1&0\\
    -1&7&12&1&0\\
    3&4&17&0&0\\
    \end{array}\right)\, .
    $$
    Since the right-hand column contains only zeros, we can drop it. Now we perform row operations to achieve RREF:
    $$
    \begin{pmatrix}
    4&-3&5&-1 \\
    -1&7&12&1 \\
    3&4&17& 0\\
    \end{pmatrix}\sim
    \begin{pmatrix}
    1 & 0 & \frac{71}{25}& -\frac{4}{25}\\
    0&1&\frac{53}{25}&\frac{3}{25}\\
    0&0&0&0
    \end{pmatrix}\, .
    $$
    This says that \(\alpha_{3}\) and \(\alpha_{4}\) are not pivot variables, so they are arbitrary; we set them to \(\mu\) and \(\nu\), respectively.
    Thus
    $$
    \alpha_{1}=\Big(-\frac{71}{25}\, \mu+\frac{4}{25}\, \nu\Big)\, ,\qquad \alpha_{2}=\Big(-\frac{53}{25}\, \mu-\frac{3}{25}\, \nu\Big)\, ,\qquad
    \alpha_{3}=\mu\, ,\qquad \alpha_{4}= \nu\, .
    $$
    So we have found a relationship among our four vectors:
    $$
    \Big(-\frac{71}{25}\, \mu+\frac{4}{25}\, \nu\Big)\, v_{1}+\Big(-\frac{53}{25}\, \mu-\frac{3}{25}\, \nu\Big)\, v_{2}
    +\mu\, v_{3}+ \nu\, v_{4}=0\, .
    $$
    In fact this is not just one relation, but infinitely many, for any choice of \(\mu,\nu\). The relationship quoted in the notes is just one of those choices.
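
    If you want to check the row reduction and one instance of this relation by machine, here is a minimal sketch in Python, assuming the sympy library is available (its Matrix.rref method returns the RREF together with the pivot columns):

    ```python
    # Minimal check of the RREF and of one instance of the relation (sympy assumed).
    from sympy import Matrix, Rational

    M = Matrix([[ 4, -3,  5, -1],
                [-1,  7, 12,  1],
                [ 3,  4, 17,  0]])
    R, pivots = M.rref()
    print(pivots)   # (0, 1): pivots sit in the first two columns only
    print(R)        # rows (1, 0, 71/25, -4/25), (0, 1, 53/25, 3/25), (0, 0, 0, 0)

    # One instance of the relation, choosing mu = nu = 1:
    mu, nu = Rational(1), Rational(1)
    a1 = -Rational(71, 25)*mu + Rational(4, 25)*nu
    a2 = -Rational(53, 25)*mu - Rational(3, 25)*nu
    v1, v2, v3, v4 = (M.col(j) for j in range(4))
    print(a1*v1 + a2*v2 + mu*v3 + nu*v4)   # the zero vector
    ```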

    Finally, since the vectors \(v_{1}\), \(v_{2}\), \(v_{3}\) and \(v_{4}\) are linearly dependent, we can try to eliminate some of them. The pattern here is to keep the vectors that correspond to columns with pivots. For example, setting \(\mu=-1\) (say) and \(\nu=0\) in the above allows us to solve for \(v_{3}\), while \(\mu=0\) and \(\nu=-1\) (say) gives \(v_{4}\); explicitly we get
    $$
    v_{3}=\frac{71}{25}\, v_{1} + \frac{53}{25}\, v_{2}\, ,\qquad
    v_{4}=-\frac{4}{25}\, v_{1} + \frac{3}{25} \, v_{2}\, .
    $$
    This eliminates \(v_{3}\) and \(v_{4}\) and leaves a pair of linearly independent vectors \(v_{1}\) and \(v_{2}\).
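
    These two formulas are easy to verify componentwise; a short check using only the Python standard library:

    ```python
    # Componentwise check of v3 and v4 as combinations of v1 and v2.
    from fractions import Fraction as F

    v1, v2 = [4, -1, 3], [-3, 7, 4]
    print([F(71, 25)*a + F(53, 25)*b for a, b in zip(v1, v2)])   # [5, 12, 17] = v3
    print([-F(4, 25)*a + F(3, 25)*b for a, b in zip(v1, v2)])    # [-1, 1, 0] = v4
    ```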

    Worked Proof

    Here we will work through a quick version of the proof of Theorem 10.1.1. Let \(\{ v_{i} \}\) denote a set of linearly dependent vectors, so \(\sum_{i} c^{i} v_{i} = 0\) where some \(c^{k} \neq 0\). Without loss of generality we may reorder our vectors so that \(c^{1} \neq 0\), and we can do so since addition is commutative (i.e. \(a + b = b + a\)). Therefore we have
    \begin{align*}
    c^{1} v_{1} & = -\sum_{i=2}^{n} c^{i} v_{i}
    \\ v_{1} & = -\sum_{i=2}^{n} \frac{c^i}{c^1} v_{i}
    \end{align*}
    and we note that this argument is completely reversible, since every nonzero \(c^{i}\) is invertible and \(0 / c^{i} = 0\).
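
    If you like, here is a symbolic echo of that rearrangement, assuming sympy and treating the \(v\)'s as ordinary commuting symbols; this is just a sketch of the algebra, not a computation with actual vectors:

    ```python
    # Solving a dependence relation for v1 when c1 is nonzero (sympy assumed).
    from sympy import symbols, solve, Eq

    c1, c2, c3, v1, v2, v3 = symbols('c1 c2 c3 v1 v2 v3', nonzero=True)
    print(solve(Eq(c1*v1 + c2*v2 + c3*v3, 0), v1))   # [(-c2*v2 - c3*v3)/c1]
    ```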

    Hint for Review Problem 1

    Let's first remember how \(\mathbb{Z}_{2}\) works. The only two elements are \(1\) and \(0\), which means that when you add \(1+1\) you get \(0\). It also means that when you have a vector \(\vec{v} \in B^{n}\) and you want to multiply it by a scalar, your only choices are \(1\) and \(0\). This is kind of neat because it means that the possibilities are finite, so we can look at an entire vector space.
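
    If it helps, the entire arithmetic of \(\mathbb{Z}_{2}\) fits in a four-line table; a quick sketch in plain Python:

    ```python
    # The full addition and multiplication tables of Z_2.
    for a in (0, 1):
        for b in (0, 1):
            print(f"{a}+{b}={(a + b) % 2}   {a}*{b}={(a * b) % 2}")
    ```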

    Now let's think about \(B^{3}\): there is a choice you have to make for each coordinate, either a \(1\) or a \(0\), so there are three places where you have to decide between two things. This means that you have \(2^{3}= 8\) possibilities for vectors in \(B^{3}\).
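
    Since the space is finite you can simply list it; a two-line enumeration in plain Python:

    ```python
    # Enumerate every vector in B^3; there are 2**3 = 8 of them.
    from itertools import product

    B3 = list(product((0, 1), repeat=3))
    print(len(B3), B3)   # 8 vectors, from (0, 0, 0) up to (1, 1, 1)
    ```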

    When you want to find a set \(S\) that spans \(B^{3}\) and is linearly independent, think about how many vectors you need. You need enough of them that you can make every vector in \(B^{3}\) using linear combinations of elements of \(S\), but not so many that some of them are linear combinations of each other. I suggest trying something really simple, perhaps something that looks like the columns of the identity matrix.

    For part (c) you have to show that you can write every one of the elements as a linear combination of the elements in \(S\); this checks that \(S\) actually spans \(B^{3}\).

    For part (d), if you have two vectors that you think will span the space, you can prove that they do by repeating what you did in part (c): check that every vector can be written using only copies of these two vectors. If you don't think it will work, you should show why, perhaps using an argument that counts the number of possible vectors in the span of two vectors.
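
    For that counting argument, note that the span of two vectors \(u\) and \(w\) over \(\mathbb{Z}_{2}\) contains only the four combinations \(0\), \(u\), \(w\) and \(u+w\). A minimal sketch, where the particular pair is just a sample choice:

    ```python
    # The span of two vectors over Z_2 has at most 4 elements, but |B^3| = 8.
    u, w = (1, 0, 1), (0, 1, 1)   # sample pair
    span = {tuple((a*x + b*y) % 2 for x, y in zip(u, w))
            for a in (0, 1) for b in (0, 1)}
    print(len(span))   # at most 4 < 8, so two vectors cannot span B^3
    ```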

    G.10 Basis and Dimension

    Proof Explanation

    Let's walk through the proof of Theorem 11.0.1. We want to show that if \(S=\{v_{1}, \ldots, v_{n} \}\) is a basis for a vector space \(V\), then every vector \(w \in V\) can be written \(\textit{uniquely}\) as a linear combination of vectors in the basis \(S\):

    \[w=c^{1}v_{1}+\cdots + c^{n}v_{n}.\]

    We should remember that since \(S\) is a basis for \(V\), we know two things

    1. \(V = \mathrm{span}\, S\)
    2. \(v_{1}, \ldots , v_{n}\) are linearly independent, which means that whenever we have \(a^{1}v_{1}+ \cdots + a^{n} v_{n} = 0\) this implies that \(a^{i} =0\) for all \(i=1, \ldots, n\).

    The first fact makes it easy to say that there exist constants \(c^{i}\) such that \(w=c^{1}v_{1}+\cdots + c^{n}v_{n}\). What we don't yet know is that these \(c^{1}, \ldots, c^{n}\) are unique.

    In order to show that these are unique, we will suppose that they are not, and show that this causes a contradiction. So suppose there exists a second set of constants \(d^{i}\) such that

    \[w=d^{1}v_{1}+\cdots + d^{n}v_{n}\, .\]

    For this to be a contradiction we need to have \(c^{i} \neq d^{i}\) for some \(i\). Then look what happens when we take the difference of these two versions of \(w\):

    \begin{eqnarray*}0_{V}&=&w-w\\&=&(c^{1}v_{1}+\cdots + c^{n}v_{n})-(d^{1}v_{1}+\cdots + d^{n}v_{n} )\\&=&(c^{1}-d^{1})v_{1}+\cdots + (c^{n}-d^{n})v_{n}. \\\end{eqnarray*}

    Since the \(v_{i}\)'s are linearly independent, this implies that \(c^{i} - d^{i} = 0\) for all \(i\). This means that we cannot have \(c^{i} \neq d^{i}\), which is a contradiction.
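
    To see this uniqueness concretely, here is a small numerical illustration, assuming sympy; the basis and the vector \(w\) below are sample choices, not taken from the notes:

    ```python
    # Coordinates of w in the basis {(1,0), (1,1)} of R^2: a unique solution.
    from sympy import Matrix, linsolve, symbols

    c1, c2 = symbols('c1 c2')
    A = Matrix([[1, 1],
                [0, 1]])          # columns are the basis vectors
    w = Matrix([3, 5])
    print(linsolve((A, w), c1, c2))   # {(-2, 5)}: one and only one coordinate pair
    ```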

    Worked Example

    In this video we will work through an example of how to extend a set of linearly independent vectors to a basis. For fun, we will take the vector space

    \[V=\{(x,y,z,w)\,|\,x,y,z,w\in \mathbb{Z}_{5}\}\, .\]

    This is like four-dimensional space \(\mathbb{R}^{4}\) except that the numbers can only be \(\{0,1,2,3,4\}\). This is like bits, but now the rule is

    \[0=5\, .\]

    Thus, for example, \(\frac{1}{4}=4\) because \(4\times 4=16=1+3\times 5=1\). Don't get too caught up on this aspect; it's a choice of base field designed to make computations go quicker!
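
    You can let Python compute these inverses for you; since version 3.8 the built-in pow accepts a negative exponent together with a modulus:

    ```python
    # Inverses in Z_5: pow(a, -1, 5) returns the multiplicative inverse of a mod 5.
    print(pow(4, -1, 5))   # 4, since 4*4 = 16 = 1 (mod 5)
    print((4 * 4) % 5)     # 1
    ```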

    Now, here's the problem we will solve:

    \[\textbf{Find a basis for } V \textbf{ that includes the vectors } \begin{pmatrix}1\\2\\3\\4\end{pmatrix} \textbf{ and } \begin{pmatrix}0\\3\\2\\1\end{pmatrix}.\]

    The way to proceed is to add a known (and preferably simple) basis to the vectors given; thus we consider

    \[v_{1}=\begin{pmatrix}1\\2\\3\\4\end{pmatrix},v_{2}=\begin{pmatrix}0\\3\\2\\1\end{pmatrix},e_{1}=\begin{pmatrix}1\\0\\0\\0\end{pmatrix},e_{2}=\begin{pmatrix}0\\1\\0\\0\end{pmatrix},e_{3}=\begin{pmatrix}0\\0\\1\\0\end{pmatrix},e_{4}=\begin{pmatrix}0\\0\\0\\1\end{pmatrix}.\]

    The last four vectors are clearly a basis (make sure you understand this!) and are called the \(\textit{canonical basis}\). We want to keep \(v_{1}\) and \(v_{2}\) but find a way to turf out two of the vectors in the canonical basis, leaving us a basis of four vectors. To do that, we have to study linear independence, or in other words a linear system problem defined by

    \[0=\alpha_{1} v_{1} + \alpha_{2} v_{2} + \alpha_{3} e_{1} + \alpha_{4} e_{2} + \alpha_{5} e_{3} + \alpha_{6} e_{4} \, .\]

    We want to find solutions for the \(\alpha\)'s that allow us to express two of the \(e\)'s in terms of the remaining vectors. For that we use an augmented matrix

    \[\left(\begin{array}{cccccc|c}1&0&1&0&0&0&0\\2&3&0&1&0&0&0\\3&2&0&0&1&0&0\\4&1&0&0&0&1&0\end{array}\right)\, .\]

    Next comes a bunch of row operations. Note that we have dropped the last column of zeros since it carries no information; you can fill in the row operations used above the \(\sim\)'s as an exercise:

    \[\begin{pmatrix}1&0&1&0&0&0\\2&3&0&1&0&0\\3&2&0&0&1&0\\4&1&0&0&0&1\end{pmatrix}\sim\begin{pmatrix}1&0&1&0&0&0\\0&3&3&1&0&0\\0&2&2&0&1&0\\0&1&1&0&0&1\end{pmatrix}\]

    \[\sim\begin{pmatrix}1&0&1&0&0&0\\0&1&1&2&0&0\\0&2&2&0&1&0\\0&1&1&0&0&1\end{pmatrix}\sim\begin{pmatrix}1&0&1&0&0&0\\0&1&1&2&0&0\\0&0&0&1&1&0\\0&0&0&3&0&1\end{pmatrix}\]

    \[\sim\begin{pmatrix}1&0&1&0&0&0\\0&1&1&0&3&0\\0&0&0&1&1&0\\0&0&0&0&2&1\end{pmatrix}\sim\begin{pmatrix}1&0&1&0&0&0\\0&1&1&0&3&0\\0&0&0&1&1&0\\0&0&0&0&1&3\end{pmatrix}\]

    \[\sim\begin{pmatrix}\underline1&0&1&0&0&0\\0&\underline1&1&0&0&1\\0&0&0&\underline1&0&2\\0&0&0&0&\underline1&3\end{pmatrix}\]

    The pivots are underlined. The columns corresponding to non-pivot variables are the ones that can be eliminated: their coefficients (the \(\alpha\)'s) are arbitrary, so set them all to zero except the one multiplying the vector you are solving for, which can be taken to be unity. Thus that vector can certainly be expressed in terms of the previous ones. Hence, altogether, our basis is

    \[\left\{\begin{pmatrix}1\\2\\3\\4\end{pmatrix} \, , \begin{pmatrix}0\\3\\2\\1\end{pmatrix} ,\begin{pmatrix}0\\1\\0\\0\end{pmatrix} ,\begin{pmatrix}0\\0\\1\\0\end{pmatrix}\right\}\, .\]

    Finally, as a check, note that \(e_{1}=v_{1}+v_{2}\) (remember, arithmetic here is mod \(5\)) and \(e_{4}=v_{2}+2e_{2}+3e_{3}\), which explains why we had to throw them away.
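
    If you want to replay the whole computation by machine, here is a minimal sketch of row reduction over \(\mathbb{Z}_{5}\); the helper rref_mod_p below is written just for this example, not a library routine:

    ```python
    # Row reduction mod p, returning the RREF and the pivot columns.
    def rref_mod_p(rows, p):
        rows, pivots, r = [row[:] for row in rows], [], 0
        for c in range(len(rows[0])):
            # Look for a row at or below r with a nonzero entry in column c.
            i = next((i for i in range(r, len(rows)) if rows[i][c] % p), None)
            if i is None:
                continue
            rows[r], rows[i] = rows[i], rows[r]
            inv = pow(rows[r][c], -1, p)                 # multiplicative inverse mod p
            rows[r] = [(inv * x) % p for x in rows[r]]
            for j in range(len(rows)):                   # clear the rest of the column
                if j != r and rows[j][c] % p:
                    f = rows[j][c]
                    rows[j] = [(x - f*y) % p for x, y in zip(rows[j], rows[r])]
            pivots.append(c)
            r += 1
        return rows, pivots

    M = [[1, 0, 1, 0, 0, 0],
         [2, 3, 0, 1, 0, 0],
         [3, 2, 0, 0, 1, 0],
         [4, 1, 0, 0, 0, 1]]
    R, pivots = rref_mod_p(M, 5)
    print(pivots)   # [0, 1, 3, 4]: pivots sit under v1, v2, e2, e3

    # The dependencies behind the two discarded vectors:
    v1, v2, e2, e3 = (1, 2, 3, 4), (0, 3, 2, 1), (0, 1, 0, 0), (0, 0, 1, 0)
    print(tuple((a + b) % 5 for a, b in zip(v1, v2)))            # e1 = (1, 0, 0, 0)
    print(tuple((x + 2*y + 3*z) % 5 for x, y, z in zip(v2, e2, e3)))  # e4 = (0, 0, 0, 1)
    ```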

    Hint for Review Problem 2

    Since there are two possible values for each entry, we have \(|B^{n}| = 2^{n}\). We note that \(\dim B^{n} = n\) as well. Explicitly, we have \(B^{1} = \{(0), (1)\}\), so there is only one basis for \(B^{1}\), namely \(\{(1)\}\). Similarly we have

    \[B^{2}=\left\{\begin{pmatrix}0\\0\end{pmatrix} \, , \begin{pmatrix}1\\0\end{pmatrix} ,\begin{pmatrix}0\\1\end{pmatrix} ,\begin{pmatrix}1\\1\end{pmatrix}\right\}\, .\]

    So choosing any two distinct non-zero vectors will form a basis. Now in general we note that we can build up a basis \(\{e_{i}\}\) by arbitrarily (independently) choosing the first \(i-1\) entries, then setting the \(i\)-th entry to \(1\) and all higher entries to \(0\).
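
    You can also enumerate the bases of \(B^{2}\) directly; a sketch in plain Python, using the fact that a pair of vectors is a basis of \(B^{2}\) exactly when its span contains all four elements:

    ```python
    # Enumerate the (unordered) bases of B^2.
    from itertools import product, combinations

    nonzero = [v for v in product((0, 1), repeat=2) if any(v)]

    def is_basis(u, w):
        span = {tuple((a*x + b*y) % 2 for x, y in zip(u, w))
                for a in (0, 1) for b in (0, 1)}
        return len(span) == 4          # spans all of B^2

    bases = [pair for pair in combinations(nonzero, 2) if is_basis(*pair)]
    print(len(bases), bases)   # 3 bases: every pair of distinct nonzero vectors
    ```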

    Contributor


    This page titled 18.11: Movie Scripts 9-10 is shared under a not declared license and was authored, remixed, and/or curated by David Cherney, Tom Denton, & Andrew Waldron.
