
2.2: Linear Combinations



    \( \def\Span#1{\text{Span}\left\lbrace #1\right\rbrace} \def\vect#1{\mathbf{#1}} \def\ip{\boldsymbol{\cdot}} \def\iff{\Longleftrightarrow} \def\cp{\times} \)
    Definition \(\PageIndex{1}\)
    Let \(\vect{v}_1, \ldots, \vect{v}_n\) be vectors in \(\mathbb{R}^m\). Any expression of the form \[ x_1 \vect{v}_1+\cdots+x_n \vect{v}_n,\nonumber\] where \(x_1, \ldots, x_n\) are real numbers, is called a linear combination of the vectors \(\vect{v}_1, \ldots, \vect{v}_n\).
    Example \(\PageIndex{2}\)
    The vectors \(\vect{v}_1\) and \(\vect{v}_2\) are two vectors in the plane \(\mathbb{R}^2\). As we can see in Figure \(\PageIndex{1}\), the vector \(\vect{u}\) is a linear combination of \(\vect{v}_1\) and \(\vect{v}_2\) since it can be written as \(\vect{u}=2\vect{v}_1+\vect{v}_2\). The vector \(\vect{w}\) is a linear combination of these two vectors as well. It can be written as \(\vect{w}=-3\vect{v}_1+2\vect{v}_2\).
     
    Figure \(\PageIndex{1}\): Linear combinations of vectors in the plane.

    If we want to determine whether a given vector is a linear combination of other vectors, then we can do that using systems of equations.
    Example \(\PageIndex{3}\)
    \[ \vect{v}_1= \left[\begin{array}{r} 1 \\ 2 \\ 1 \end{array}\right] \quad \vect{v}_2= \left[\begin{array}{r} 3 \\ 1 \\ 2 \end{array}\right] \quad \vect{b}= \left[\begin{array}{r} -1 \\ 3 \\ 0 \end{array}\right]\nonumber\] Is the vector \(\vect{b}\) a linear combination of \(\vect{v}_1\) and \(\vect{v}_2\)? We can use the definition of a linear combination to solve this problem. If \(\vect{b}\) is in fact a linear combination of the two other vectors, then it can be written as \(x_1 \vect{v}_1+x_2 \vect{v}_2\). This means that we should verify whether the system of equations \(x_1 \vect{v}_1+x_2 \vect{v}_2=\vect{b}\) has a solution. The equation \[ x_1 \left[\begin{array}{r} 1 \\ 2 \\ 1 \end{array}\right]+x_2 \left[\begin{array}{r} 3 \\ 1 \\ 2 \end{array}\right]= \left[\begin{array}{r} -1 \\ 3 \\ 0 \end{array}\right]\nonumber\] is equivalent to the system \[ \left\{\begin{array}{l} x_1+3x_2=-1 \\ 2x_1+x_2=3 \\ x_1+2x_2=0\end{array} \right.\nonumber\] The augmented matrix of this system of equations is equal to \[ \left[\begin{array}{cc | c} 1 & 3 & -1 \\ 2 & 1 & 3 \\ 1 & 2 & 0 \end{array}\right]\nonumber\] and its reduced echelon form is equal to \[ \left[\begin{array}{cc | c} 1 & 0 & 2 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{array}\right].\nonumber\] This means that \(\vect{b}\) is indeed a linear combination of \(\vect{v}_1\) and \(\vect{v}_2\). \[ 2 \left[\begin{array}{r} 1 \\ 2 \\ 1 \end{array}\right]- \left[\begin{array}{r} 3 \\ 1 \\ 2 \end{array}\right]= \left[\begin{array}{r} -1 \\ 3 \\ 0 \end{array}\right]\nonumber\] We have found that \(\vect{b}\) can be written as \(2\vect{v}_1-\vect{v}_2\).
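The row reduction above can be cross-checked numerically. The sketch below is not part of the original text; it assumes NumPy is available and solves the same system \(x_1\vect{v}_1+x_2\vect{v}_2=\vect{b}\) in the least-squares sense. The vector \(\vect{b}\) is an exact linear combination precisely when \(A\vect{x}\) reproduces \(\vect{b}\).

```python
import numpy as np

# Columns of A are v1 and v2 from the example; b is the target vector.
A = np.array([[1, 3],
              [2, 1],
              [1, 2]], dtype=float)
b = np.array([-1, 3, 0], dtype=float)

# Solve the overdetermined system A x = b in the least-squares sense.
# b is an exact linear combination of the columns iff A @ x equals b.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)                      # [ 2. -1.], i.e. b = 2*v1 - v2
print(np.allclose(A @ x, b))  # True: the combination is exact
```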
    Example \(\PageIndex{4}\)
    \[ \vect{v}_1= \left[\begin{array}{r} 1 \\ 0 \\ 2 \end{array}\right] \quad \vect{v}_2= \left[\begin{array}{r} 3 \\ 0 \\ 1 \end{array}\right] \quad \vect{b}= \left[\begin{array}{r} 2 \\ 1 \\ 1 \end{array}\right]\nonumber\] In this case it is a lot easier to decide whether \(\vect{b}\) is a linear combination of \(\vect{v}_1\) and \(\vect{v}_2\). Since the second component of both \(\vect{v}_1\) and \(\vect{v}_2\) is equal to zero, we know that the second component of each linear combination of those vectors will be zero. This means that \(\vect{b}\) can never be a linear combination of \(\vect{v}_1\) and \(\vect{v}_2\).

    Span

    In linear algebra it is often important to know whether each vector in \(\mathbb{R}^n\) can be written as a linear combination of a set of given vectors. To investigate when this is possible, we introduce the notion of the span of a set of vectors.
    Definition \(\PageIndex{5}\)
    Let \(S\) be a set of vectors. The set of all linear combinations \(a_1\vect{v}_1+a_2\vect{v}_2+ \cdots +a_k \vect{v}_k\), where \(\vect{v}_1, \ldots, \vect{v}_k\) are vectors in \(S\), will be called the span of those vectors and will be denoted as \(\Span{S}\). When \(S\) is equal to a finite set \(\{\vect{v}_1, \ldots, \vect{v}_k\}\), then we will simply write \(\Span{\vect{v}_1, \ldots, \vect{v}_k}\). The span of an empty collection of vectors will be defined as the set that only contains the zero vector \(\vect{0}\).
    Note \(\PageIndex{6}\)
    The collection \(\Span{\vect{v}_1, \ldots, \vect{v}_k}\) always contains all of the vectors \(\vect{v}_1, \ldots, \vect{v}_k\), since each vector \(\vect{v}_i\) can be written as the linear combination \(0\vect{v}_1+\cdots+1\vect{v}_i+\cdots+0\vect{v}_k\). Moreover, the span of any set of vectors contains the zero vector: whatever vectors we start with, we can always write \(\vect{0}=0\vect{v}_1+0\vect{v}_2+\cdots+0\vect{v}_k\).
    The following examples give an idea of what spans look like.
    Example \(\PageIndex{7}\)
    What does the span of a single non-zero vector look like? A linear combination of a vector \(\vect{v}\) is of the form \(x\vect{v}\), where \(x\) is some real number. Linear combinations of a single vector \(\vect{v}\) are thus just the multiples of that vector. This means that \(\Span{\vect{v}}\) is simply the collection of all vectors on the line through the origin with direction vector \(\vect{v}\), as we can see in Figure \(\PageIndex{2}\).
     
    Figure \(\PageIndex{2}\): The span of a single non-zero vector.

    Example \(\PageIndex{8}\)
    Let \(\vect{u}\) and \(\vect{v}\) be two non-zero, non-parallel vectors in \(\mathbb{R}^3\), as depicted in Figure \(\PageIndex{3}\). What does the span of these vectors look like? By definition, \(\Span{\vect{u}, \vect{v}}\) contains all linear combinations of \(\vect{u}\) and \(\vect{v}\). Each of these linear combinations is of the form \[ x_1\vect{u}+x_2\vect{v}, \quad \textrm{\(x_1\), \(x_2\) in \(\mathbb{R}\)}.\nonumber\] This is the parametric vector equation of a plane. Since the span contains the zero vector, we obtain a plane through the origin, as in Figure \(\PageIndex{3}\).
     
    Figure \(\PageIndex{3}\): The span of two non-zero, non-parallel vectors.

    Example \(\PageIndex{9}\)
    The span of two non-zero vectors does not need to be a plane through the origin. If \(\vect{u}\) and \(\vect{v}\) are parallel, as in Figure \(\PageIndex{4}\), then the span is actually a line through the origin.
     
    Figure \(\PageIndex{4}\): The span of two non-zero, parallel vectors.

    If two non-zero vectors \(\vect{u}\) and \(\vect{v}\) are parallel, then \(\vect{v}\) can be written as a multiple of \(\vect{u}\). Assume for example that \(\vect{v}=2\vect{u}\). Any linear combination \(x_1\vect{u}+x_2\vect{v}\) can then be written as \(x_1\vect{u}+2x_2\vect{u}\) or \((x_1+2x_2)\vect{u}\). This means that in this case each vector in the span of \(\vect{u}\) and \(\vect{v}\) is a multiple of \(\vect{u}\). Therefore, the span will be a line through the origin.
    Example \(\PageIndex{10}\)
    If we start with three non-zero vectors in \(\mathbb{R}^3\), then the resulting span may take on different forms. The span of the three vectors in Figure \(\PageIndex{5}\), for example, is equal to the entire space \(\mathbb{R}^3\). In the section on bases we will see why this is the case.
     
    Figure \(\PageIndex{5}\): The span of three vectors.

    On the other hand, if we start with the three vectors that you can see in Figure \(\PageIndex{6}\), then the span is equal to a plane through the origin.
     
    Figure \(\PageIndex{6}\): The span of three vectors lying in the same plane.

    There is also a possibility where the span of three non-zero vectors in \(\mathbb{R}^3\) is equal to a line through the origin. Can you figure out when this happens?
    We will now look at a very specific set of vectors in \(\mathbb{R}^n\) whose span is always the entire space \(\mathbb{R}^n\).
    Definition \(\PageIndex{11}\)
    Suppose we are working in \(\mathbb{R}^n\). Let \(\vect{e}_k\) be the vector of which all components are equal to 0, except that the entry in position \(k\) is equal to 1. The vectors \((\vect{e}_1, \ldots, \vect{e}_n)\) will be called the standard basis of \(\mathbb{R}^n\).
    Example \(\PageIndex{12}\)
    The following vectors form the standard basis for \(\mathbb{R}^2\). \[ \vect{e}_1= \left[\begin{array}{r} 1 \\ 0 \end{array}\right] \quad \vect{e}_2= \left[\begin{array}{r} 0 \\ 1 \end{array}\right] \nonumber\] Each vector \(\vect{v}\) in \(\mathbb{R}^2\) can be written as a linear combination of the vectors \(\vect{e}_1\) and \(\vect{e}_2\) in a unique way. Later on we will call each collection of vectors with this property a basis for \(\mathbb{R}^2\). If \[ \vect{v}= \left[\begin{array}{r} a \\ b \end{array}\right], \nonumber\] then clearly we have that \[ \vect{v}=a \left[\begin{array}{r} 1 \\ 0 \end{array}\right]+b \left[\begin{array}{r} 0 \\ 1 \end{array}\right]. \nonumber\] It is easy to see that this is the only linear combination of \(\vect{e}_1\) and \(\vect{e}_2\) that is equal to \(\vect{v}\).
    Example \(\PageIndex{13}\)
    The three vectors below form the standard basis for \(\mathbb{R}^3\). \[ \vect{e}_1= \left[\begin{array}{r} 1 \\ 0 \\ 0 \end{array}\right] \quad \vect{e}_2= \left[\begin{array}{r} 0 \\ 1 \\ 0 \end{array}\right] \quad \vect{e}_3= \left[\begin{array}{r} 0 \\ 0 \\ 1 \end{array}\right] \nonumber\] Here too, it is true that each vector in \(\mathbb{R}^3\) can be written as a unique linear combination of these three vectors.
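In NumPy terms (an illustrative sketch, not part of the original text), the coefficients of a vector with respect to the standard basis are simply its components:

```python
import numpy as np

# The standard basis vectors e1, e2, e3 of R^3 are the columns of the identity matrix.
E = np.eye(3)
v = np.array([4.0, -2.0, 7.0])

# Solving E x = v yields the coefficients in v = x1*e1 + x2*e2 + x3*e3;
# since E is the identity, they are just the components of v.
coeffs = np.linalg.solve(E, v)
print(coeffs)  # [ 4. -2.  7.]
```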
    Proposition \(\PageIndex{14}\)
    If \((\vect{e}_1, \ldots, \vect{e}_n)\) is the standard basis for \(\mathbb{R}^n\), then \(\Span{\vect{e}_1, \ldots, \vect{e}_n}\) is equal to \(\mathbb{R}^n\).
    Proof
    Take an arbitrary vector \(\vect{v}\) in \(\mathbb{R}^n\) with \[ \vect{v}= \left[\begin{array}{r} a_1 \\ \vdots \\ a_n \end{array}\right].\nonumber\] The vector \(\vect{v}\) can be written as \begin{aligned} \vect{v} &= a_1 \left[\begin{array}{r} 1 \\ 0 \\ \vdots \\ 0 \end{array}\right]+a_2 \left[\begin{array}{r} 0 \\ 1 \\ \vdots \\ 0 \end{array}\right]+ \cdots +a_n \left[\begin{array}{r} 0 \\ 0 \\ \vdots \\ 1 \end{array}\right] \\ &= a_1\vect{e}_1+a_2\vect{e}_2+\cdots +a_n\vect{e}_n. \end{aligned} This means that \(\vect{v}\) is in the span of \(\vect{e}_1, \ldots, \vect{e}_n\). On the other hand, each vector in \(\Span{\vect{e}_1, \ldots, \vect{e}_n}\) is a linear combination of vectors in \(\mathbb{R}^n\) and thus itself a vector in \(\mathbb{R}^n\).
    In proposition \(\PageIndex{14}\) we saw that the span of the standard basis of \(\mathbb{R}^n\) is equal to the entire space. In what follows we will try to find out when, for an arbitrary set of vectors \(\vect{v}_1, \ldots, \vect{v}_k\), the collection \(\Span{\vect{v}_1, \ldots, \vect{v}_k}\) contains every vector in \(\mathbb{R}^n\).
    Proposition \(\PageIndex{15}\)
    Let \(\vect{v}_1, \ldots, \vect{v}_k\) be vectors in \(\mathbb{R}^n\). Define the matrix \(A\) such that \[ A= \left[\begin{array}{cccc} \vect{v}_1 & \vect{v}_2 & \ldots & \vect{v}_k \end{array}\right].\nonumber\] The collection \(\Span{\vect{v}_1, \ldots, \vect{v}_k}\) is equal to \(\mathbb{R}^n\) if and only if the equation \(A \vect{x}=\vect{b}\) has a solution for each \(\vect{b}\) in \(\mathbb{R}^n\).
    Proof
    If \(\Span{\vect{v}_1, \ldots, \vect{v}_k}\) is equal to \(\mathbb{R}^n\), then each vector \(\vect{b}\) is a vector in the span of the vectors \(\vect{v}_1, \ldots, \vect{v}_k\). This means that we can write \(\vect{b}\) as a linear combination \[ \vect{b}=x_1\vect{v}_1+ \ldots + x_k\vect{v}_k.\nonumber\] Define the vector \(\vect{x}\) such that \[ \vect{x}= \left[\begin{array}{r} x_1 \\ \vdots \\ x_k \end{array}\right].\nonumber\] By definition of the matrix-vector product we now have \begin{aligned} A\vect{x} &= x_1\vect{v}_1+ \ldots + x_k\vect{v}_k \\ &= \vect{b} \end{aligned} The proof of the other implication is similar.
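The key step of this proof, that \(A\vect{x}\) is exactly the linear combination \(x_1\vect{v}_1+\cdots+x_k\vect{v}_k\) of the columns of \(A\), can be illustrated numerically. This is a sketch assuming NumPy, with arbitrary made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))   # columns play the role of v1, v2, v3 in R^4
x = np.array([2.0, -1.0, 0.5])

# The matrix-vector product equals the explicit linear combination of the columns.
combo = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]
print(np.allclose(A @ x, combo))  # True
```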
    Proposition \(\PageIndex{16}\)
    The equation \(A \vect{x}=\vect{b}\) has a solution for each \(\vect{b}\) in \(\mathbb{R}^n\) if and only if \(A\) has a pivot position in each row.
    Proof
    Suppose that \(A\) does not have a pivot position in each row, and let \(E\) be the reduced echelon form of \(A\). By the definition of the reduced echelon form, the bottom row of \(E\) then contains only zeros. Let \(\vect{e}_n\) again be the vector of which the last entry is equal to 1 and all other entries are equal to zero. Since \(E\) is the reduced echelon form of \(A\), we can find a sequence of elementary row operations \(R_1, \ldots , R_m\) that transforms \(A\) into \(E\). Now take the augmented matrix \([E | \vect{e}_n]\) and perform the row operations \(R_m^{-1}, \ldots , R_1^{-1}\), where \(R_i^{-1}\) is the inverse row operation of \(R_i\). We obtain a matrix \([A | \vect{b}]\), where \(\vect{b}\) is a vector in \(\mathbb{R}^n\). Because \([E | \vect{e}_n]\) is the reduced echelon form of the augmented matrix \([A | \vect{b}]\) and \([E | \vect{e}_n]\) has a pivot in the last column, we know that \([A | \vect{b}]\) is inconsistent. This means that \(A\vect{x}=\vect{b}\) does not have a solution. On the other hand, if we assume that \(A\vect{x}=\vect{b}\) does not have a solution for some \(\vect{b}\) in \(\mathbb{R}^n\), then the reduced echelon form \([E | \vect{c}]\) of the augmented matrix \([A | \vect{b}]\) has a pivot in the last column. Let us say that this pivot is located in row \(m\). Then \(E\) cannot have a pivot in row \(m\). Since \(E\) is also the reduced echelon form of \(A\), this means that \(A\) has no pivot position in row \(m\).
    Proposition \(\PageIndex{17}\)
    Let \(\vect{v}_1, \ldots, \vect{v}_k\) be vectors in \(\mathbb{R}^n\). Define the matrix \(A\) such that \[ A= \left[\begin{array}{cccc} \vect{v}_1 & \vect{v}_2 & \ldots & \vect{v}_k \end{array}\right].\nonumber\] The following statements are equivalent:
    1. The set \(\Span{\vect{v}_1, \ldots, \vect{v}_k}\) is equal to \(\mathbb{R}^n\).
    2. The equation \(A \vect{x}=\vect{b}\) has a solution for each \(\vect{b}\) in \(\mathbb{R}^n\).
    3. The matrix \(A\) has a pivot position in each row.
    Proof
    This follows from Propositions \(\PageIndex{15}\) and \(\PageIndex{16}\).
    Example \(\PageIndex{18}\)
    Is the span of the following three vectors equal to \(\mathbb{R}^3\)? \[ \vect{v}_1= \left[\begin{array}{r} 1 \\ 1 \\ -1 \end{array}\right] \quad \vect{v}_2= \left[\begin{array}{r} 0 \\ 1 \\ 1 \end{array}\right] \quad \vect{v}_3= \left[\begin{array}{r} 3 \\ 5 \\ -1 \end{array}\right]\nonumber\] We can use Proposition \(\PageIndex{17}\) to solve this problem. We first use these vectors as the columns of a matrix \(A\). \[ A= \left[\begin{array}{rrr} 1 & 0 & 3 \\ 1 & 1 & 5 \\ -1 & 1 & -1 \end{array}\right]\nonumber\] The three given vectors span the entire space \(\mathbb{R}^3\) if and only if the matrix \(A\) has a pivot position in each of its three rows. Using elementary row operations we find the following reduced echelon form. \[ \left[\begin{array}{rrr} 1 & 0 & 3 \\ 1 & 1 & 5 \\ -1 & 1 & -1 \end{array}\right]\sim \left[\begin{array}{rrr} 1 & 0 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{array}\right]\nonumber\] Since the reduced echelon form has only two pivots for three rows, we conclude that \(\vect{v}_1\), \(\vect{v}_2\) and \(\vect{v}_3\) do not span \(\mathbb{R}^3\).
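Having a pivot position in each row is the same as the rank of \(A\) being equal to its number of rows, so this example can also be checked with a rank computation. The sketch below is not part of the original text and assumes NumPy:

```python
import numpy as np

A = np.array([[ 1, 0,  3],
              [ 1, 1,  5],
              [-1, 1, -1]], dtype=float)

# A pivot in every row is equivalent to rank(A) == number of rows.
rank = np.linalg.matrix_rank(A)
print(rank)                # 2
print(rank == A.shape[0])  # False: the columns do not span R^3
```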
    Proposition \(\PageIndex{19}\)
    If \(\vect{v}_1, \dots ,\vect{v}_k\) are vectors in \(\mathbb{R}^n\) and \(k < n\), then the span of \(\vect{v}_1, \dots ,\vect{v}_k\) is not equal to \(\mathbb{R}^n\).
    Proof
    Use the vectors \(\vect{v}_1, \dots ,\vect{v}_k\) as the columns of a matrix \(A\). By definition, the matrix \(A\) is an \(n\times k\) matrix. Let \(E\) be the reduced echelon form of \(A\). Since \(E\) has \(k\) columns, \(E\) can have at most \(k\) pivots, and because \(k < n\) this means that the number of pivots is less than the number of rows of \(E\). It is therefore impossible for \(E\) to have a pivot in each row. Proposition \(\PageIndex{17}\) now tells us that the span of the vectors \(\vect{v}_1, \dots ,\vect{v}_k\) cannot be equal to \(\mathbb{R}^n\).
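The rank bound underlying this proof, \(\mathrm{rank}(A)\le\min(n,k)\), can be observed directly. This is a sketch assuming NumPy, with arbitrary random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 3, 2                      # k = 2 vectors in R^3, so k < n
A = rng.standard_normal((n, k))  # the vectors are the columns of A

# rank(A) <= k < n, so A can never have a pivot in each of its n rows,
# and the columns cannot span R^n.
rank = np.linalg.matrix_rank(A)
print(rank <= k)  # True
print(rank < n)   # True
```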

    2.2: Linear Combinations is shared under a CC BY license and was authored, remixed, and/or curated by LibreTexts.
