# 6.4: Bases (Take 1)


The central idea of linear algebra is to exploit the hidden simplicity of linear functions. It turns out that there is a lot of freedom in how to do this. That freedom is what makes linear algebra powerful.

You saw that a linear operator acting on \(\Re^{2}\) is completely specified by how it acts on the pair of vectors \(\begin{pmatrix}1\\0\end{pmatrix}\) and \(\begin{pmatrix}0\\1\end{pmatrix}\). In fact, any linear operator acting on \(\Re^{2}\) is also completely specified by how it acts on the pair of vectors \(\begin{pmatrix}1\\1\end{pmatrix}\) and \(\begin{pmatrix}1\\-1\end{pmatrix}\).

Example 65

If \(L\) is a linear operator on \(\Re^{2}\), then it is completely specified by the two equalities

$$L\begin{pmatrix}1\\1\end{pmatrix}=\begin{pmatrix}2\\4\end{pmatrix}$$ and $$L\begin{pmatrix}1\\-1\end{pmatrix}=\begin{pmatrix}6\\8\end{pmatrix}.$$

This is because any vector \(\begin{pmatrix}x\\y\end{pmatrix}\) in \(\Re^{2}\) is a sum of multiples of \(\begin{pmatrix}1\\1\end{pmatrix}\) and \(\begin{pmatrix}1\\-1\end{pmatrix}\) which can be calculated via a linear systems problem as follows:

$$\begin{pmatrix}x\\y\end{pmatrix}=a\begin{pmatrix}1\\1\end{pmatrix}+b\begin{pmatrix}1\\-1\end{pmatrix}$$

$$\Leftrightarrow\begin{pmatrix}1&1\\1&-1\end{pmatrix}\begin{pmatrix}a\\b\end{pmatrix}=\begin{pmatrix}x\\y\end{pmatrix}$$

$$\Leftrightarrow \left(\begin{array}{rr|r}1&1&x\\1&-1&y\end{array}\right) \sim \left(\begin{array}{rr|r}1&0&\frac{x+y}{2}\\0&1&\frac{x-y}{2}\end{array}\right)$$

$$\Leftrightarrow\left\{ \begin{array}{l}a=\frac{x+y}{2}\\b=\frac{x-y}{2}\, .\end{array}\right.$$

Thus

$$\begin{pmatrix}x\\y\end{pmatrix}=\frac{x+y}{2}\begin{pmatrix}1\\1\end{pmatrix}+\frac{x-y}{2}\begin{pmatrix}1\\-1\end{pmatrix}\,.$$

We can then calculate how \(L\) acts on any vector by first expressing the vector as such a linear combination and then applying linearity:

\begin{eqnarray*}

L\begin{pmatrix}x\\y\end{pmatrix}

&=&L\left[ \frac{x+y}{2} \begin{pmatrix}1\\1\end{pmatrix} + \frac{x-y}{2} \begin{pmatrix}1\\-1\end{pmatrix} \right]\\

&=&\frac{x+y}{2} L \begin{pmatrix}1\\1\end{pmatrix} + \frac{x-y}{2} L \begin{pmatrix}1\\-1\end{pmatrix} \\

&=&\frac{x+y}{2} \begin{pmatrix}2\\4\end{pmatrix} + \frac{x-y}{2} \begin{pmatrix}6\\8\end{pmatrix} \\

&=&\begin{pmatrix}x+y \\ 2(x+y)\end{pmatrix} + \begin{pmatrix}3(x-y)\\4(x-y)\end{pmatrix}\\

&=&\begin{pmatrix}4x-2y \\ 6x-2y\end{pmatrix}

\end{eqnarray*}

Thus \(L\) is completely specified by its value at just two inputs.
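The computation in this example is easy to check numerically. The sketch below (using NumPy; the helper name `apply_L` is ours, not from the text) solves the linear system for the coefficients \(a,b\) and then applies linearity, reproducing the closed form \(\begin{pmatrix}4x-2y\\6x-2y\end{pmatrix}\):

```python
import numpy as np

# Basis vectors from the example and the values L takes on them.
v1, v2 = np.array([1, 1]), np.array([1, -1])
Lv1, Lv2 = np.array([2, 4]), np.array([6, 8])

def apply_L(x, y):
    """Apply L to (x, y) by expanding in the basis {v1, v2} and using linearity."""
    # Solve a*v1 + b*v2 = (x, y) for the coefficients a, b.
    a, b = np.linalg.solve(np.column_stack([v1, v2]), np.array([x, y]))
    return a * Lv1 + b * Lv2

# Agrees with the closed form (4x - 2y, 6x - 2y):
print(apply_L(1, 0))  # [4. 6.]
print(apply_L(0, 1))  # [-2. -2.]
```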

It should not surprise you to learn there are infinitely many pairs of vectors from \(\Re^{2}\) with the property that any vector can be expressed as a linear combination of them; any pair that, when used as the columns of a matrix, gives an invertible matrix works. Such a pair is called a \(\textit{basis}\) for \(\Re^{2}\).

Similarly, there are infinitely many triples of vectors with the property that any vector from \(\Re^{3}\) can be expressed as a linear combination of them: these are the triples that, when used as the columns of a matrix, give an invertible matrix. Such a triple is called a basis for \(\Re^{3}\).
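The invertibility criterion is straightforward to test numerically. Here is a minimal sketch (NumPy; the function name `is_basis` is our own) that decides whether candidate vectors form a basis of \(\Re^{n}\) by checking that the matrix with them as columns is square and of full rank:

```python
import numpy as np

def is_basis(*vectors):
    """Check whether the given vectors form a basis of R^n by testing
    invertibility of the matrix having them as its columns."""
    M = np.column_stack(vectors)
    # Square and full rank <=> invertible.
    return M.shape[0] == M.shape[1] and np.linalg.matrix_rank(M) == M.shape[0]

print(is_basis(np.array([1, 1]), np.array([1, -1])))   # True
print(is_basis(np.array([1, 2]), np.array([2, 4])))    # False: parallel vectors
print(is_basis(np.array([1, 0, 0]), np.array([0, 1, 0]),
               np.array([1, 1, 1])))                    # True
```

Using the rank rather than the determinant avoids spurious answers from floating-point round-off on nearly singular matrices.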

In a similar spirit, there are infinitely many pairs of vectors with the property that every vector in

$$V=\left\{ c_{1}\begin{pmatrix}1\\1\\0\end{pmatrix} +c_{2}\begin{pmatrix}0\\1\\1\end{pmatrix} \middle\vert c_{1},c_{2}\in \Re \right\} $$

can be expressed as a linear combination of them. Some examples are

$$V=

\left\{ c_{1}\begin{pmatrix}1\\1\\0\end{pmatrix} +c_{2}\begin{pmatrix}0\\2\\2\end{pmatrix} \middle\vert c_{1},c_{2}\in \Re \right\}

=\left\{c_{1} \begin{pmatrix}1\\1\\0\end{pmatrix}+c_{2} \begin{pmatrix}1\\3\\2\end{pmatrix} \middle\vert c_{1},c_{2}\in \Re \right\}

\,.$$

Such a pair is called a basis for \(V\).
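One way to confirm that these pairs describe the same plane \(V\) is to check that each new vector is a linear combination of the original pair, for instance \(\begin{pmatrix}0\\2\\2\end{pmatrix}=2\begin{pmatrix}0\\1\\1\end{pmatrix}\). The sketch below (NumPy; the helper name `in_V` is hypothetical) tests membership in \(V\) via least squares:

```python
import numpy as np

# Original spanning pair for the plane V inside R^3, as columns.
B = np.column_stack([[1, 1, 0], [0, 1, 1]])

def in_V(v):
    """Check whether v lies in V = span of B's columns:
    solve B c = v in the least-squares sense and test for an exact fit."""
    coeffs, *_ = np.linalg.lstsq(B, v, rcond=None)
    return np.allclose(B @ coeffs, v)

# Both alternative basis vectors from the text lie in V.
for v in ([0, 2, 2], [1, 3, 2]):
    print(in_V(np.array(v, dtype=float)))  # True, True
```

Since each pair also consists of two independent vectors, each spans a plane, and that plane must be \(V\) itself.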

You probably have some intuitive notion of what dimension means (the careful mathematical definition is given in chapter 11). Roughly speaking, dimension is the number of independent directions available. To figure out the dimension of a vector space, I stand at the origin and pick a direction. If there are any vectors in my vector space that aren't in that direction, then I choose another direction that isn't in the line determined by the direction I chose. If there are any vectors in my vector space not in the plane determined by the first two directions, then I choose one of them as my next direction. In other words, I choose a collection of \(\textit{independent}\) vectors in the vector space (independent vectors are defined in chapter 10). A maximal collection of independent vectors is called a \(\textit{basis}\) (see chapter 11 for the precise definition). The number of vectors in my basis is the dimension of the vector space. Every vector space has many bases, but all bases for a particular vector space have the same number of vectors. Thus dimension is a well-defined concept.

The fact that every vector space (over \(\Re\)) has infinitely many bases is actually very useful. Often a good choice of basis can reduce the time required to run a calculation in dramatic ways!

In summary:

$$\textit{A basis is a set of vectors in terms of which it is possible to uniquely express any other vector.}$$

### Contributor

David Cherney, Tom Denton, and Andrew Waldron (UC Davis)