5.11.1.1: Examples and Basic Properties

    Many mathematical entities have the property that they can be added and multiplied by a number. Numbers themselves have this property, as do \(m \times n\) matrices: The sum of two such matrices is again \(m \times n\) as is any scalar multiple of such a matrix. Polynomials are another familiar example, as are the geometric vectors in Chapter [chap:4]. It turns out that there are many other types of mathematical objects that can be added and multiplied by a scalar, and the general study of such systems is introduced in this chapter. Remarkably, much of what we could say in Chapter [chap:5] about the dimension of subspaces in \(\mathbb{R}^n\) can be formulated in this generality.

    Vector Spaces. A vector space consists of a nonempty set \(V\) of objects (called vectors) that can be added, that can be multiplied by a real number (called a scalar in this context), and for which certain axioms hold. If \(\mathbf{v}\) and \(\mathbf{w}\) are two vectors in \(V\), their sum is expressed as \(\mathbf{v} + \mathbf{w}\), and the scalar product of \(\mathbf{v}\) by a real number \(a\) is denoted \(a\mathbf{v}\). These operations are called vector addition and scalar multiplication, respectively, and the following axioms are assumed to hold.

    Axioms for vector addition

    • A1. If \(\mathbf{u}\) and \(\mathbf{v}\) are in \(V\), then \(\mathbf{u} + \mathbf{v}\) is in \(V\).
    • A2. \(\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}\) for all \(\mathbf{u}\) and \(\mathbf{v}\) in \(V\).
    • A3. \(\mathbf{u} + (\mathbf{v} + \mathbf{w}) = (\mathbf{u} + \mathbf{v}) + \mathbf{w}\) for all \(\mathbf{u}\), \(\mathbf{v}\), and \(\mathbf{w}\) in \(V\).
    • A4. An element \(\mathbf{0}\) in \(V\) exists such that \(\mathbf{v} + \mathbf{0} = \mathbf{v} = \mathbf{0} + \mathbf{v}\) for every \(\mathbf{v}\) in \(V\).
    • A5. For each \(\mathbf{v}\) in \(V\), an element \(-\mathbf{v}\) in \(V\) exists such that \(-\mathbf{v} + \mathbf{v} = \mathbf{0}\) and \(\mathbf{v} + (-\mathbf{v}) = \mathbf{0}\).

    Axioms for scalar multiplication

    • S1. If \(\mathbf{v}\) is in \(V\), then \(a\mathbf{v}\) is in \(V\) for all \(a\) in \(\mathbb{R}\).
    • S2. \(a(\mathbf{v} + \mathbf{w}) = a\mathbf{v} + a\mathbf{w}\) for all \(\mathbf{v}\) and \(\mathbf{w}\) in \(V\) and all \(a\) in \(\mathbb{R}\).
    • S3. \((a + b)\mathbf{v} = a\mathbf{v} + b\mathbf{v}\) for all \(\mathbf{v}\) in \(V\) and all \(a\) and \(b\) in \(\mathbb{R}\).
    • S4. \(a(b\mathbf{v}) = (ab)\mathbf{v}\) for all \(\mathbf{v}\) in \(V\) and all \(a\) and \(b\) in \(\mathbb{R}\).
    • S5. \(1\mathbf{v} = \mathbf{v}\) for all \(\mathbf{v}\) in \(V\).

    The content of axioms A1 and S1 is described by saying that \(V\) is closed under vector addition and scalar multiplication. The element \(\mathbf{0}\) in axiom A4 is called the zero vector, and the vector \(-\mathbf{v}\) in axiom A5 is called the negative of \(\mathbf{v}\).

    The rules of matrix arithmetic, when applied to \(\mathbb{R}^n\), give

    \(\mathbb{R}^n\) is a vector space using matrix addition and scalar multiplication.

    It is important to realize that, in a general vector space, the vectors need not be \(n\)-tuples as in \(\mathbb{R}^n\). They can be any kind of objects at all as long as the addition and scalar multiplication are defined and the axioms are satisfied. The following examples illustrate the diversity of the concept.
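As an illustration of the axioms at work, the following sketch (ours, not part of the original text) spot-checks several of them numerically for vectors in \(\mathbb{R}^3\) represented as Python tuples; the helper names `add` and `scale` are illustrative. A finite check like this is evidence, not a proof.

```python
# Spot-check of a few vector space axioms for R^n, with vectors as tuples.
# (Illustrative only: checking specific vectors is not a proof.)

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

def scale(a, v):
    return tuple(a * x for x in v)

u, v, w = (1.0, 2.0, 3.0), (4.0, -5.0, 6.0), (0.5, 0.0, -1.5)
a, b = 2.0, -3.0

assert add(u, v) == add(v, u)                                 # A2: commutativity
assert add(u, add(v, w)) == add(add(u, v), w)                 # A3: associativity
assert scale(a, add(u, v)) == add(scale(a, u), scale(a, v))   # S2: distributivity
assert scale(a + b, v) == add(scale(a, v), scale(b, v))       # S3: distributivity
assert scale(1.0, v) == v                                     # S5: unit scalar
```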

    The space \(\mathbb{R}^n\) consists of special types of matrices. More generally, let \(\mathbf{M}_{mn}\) denote the set of all \(m \times n\) matrices with real entries. Then Theorem [thm:002170] gives:

    The set \(\mathbf{M}_{mn}\) of all \(m \times n\) matrices is a vector space using matrix addition and scalar multiplication. The zero element in this vector space is the zero matrix of size \(m \times n\), and the vector space negative of a matrix (required by axiom A5) is the usual matrix negative discussed in Section [sec:2_1]. Note that \(\mathbf{M}_{mn}\) is just \(\mathbb{R}^{mn}\) in different notation.

    In Chapter [chap:5] we identified many important subspaces of \(\mathbb{R}^n\), such as \(\mathrm{im}\, A\) and \(\mathrm{null}\, A\) for a matrix \(A\). These are all vector spaces.

    Show that every subspace of \(\mathbb{R}^n\) is a vector space in its own right using the addition and scalar multiplication of \(\mathbb{R}^n\).

    Axioms A1 and S1 are two of the defining conditions for a subspace \(U\) of \(\mathbb{R}^n\) (see Section [sec:5_1]). The other eight axioms for a vector space are inherited from \(\mathbb{R}^n\). For example, if \(\mathbf{x}\) and \(\mathbf{y}\) are in \(U\) and \(a\) is a scalar, then \(a(\mathbf{x} + \mathbf{y}) = a\mathbf{x} + a\mathbf{y}\) because \(\mathbf{x}\) and \(\mathbf{y}\) are in \(\mathbb{R}^n\). This shows that axiom S2 holds for \(U\); similarly, the other axioms also hold for \(U\).

    Let \(V\) denote the set of all ordered pairs \((x, y)\) and define addition in \(V\) as in \(\mathbb{R}^2\). However, define a new scalar multiplication in \(V\) by

    \[a(x, y) = (ay, ax) \nonumber \]

    Determine if \(V\) is a vector space with these operations.

    Axioms A1 to A5 are valid for \(V\) because they hold for matrices. Also \(a(x, y) = (ay, ax)\) is again in \(V\), so axiom S1 holds. To verify axiom S2, let \(\mathbf{v} = (x, y)\) and \(\mathbf{w} = (x_{1}, y_{1})\) be typical elements in \(V\) and compute

    \[\begin{aligned} a(\mathbf{v} + \mathbf{w}) &= a(x + x_1, y + y_1) = (a(y + y_1), a(x + x_1)) \\ a\mathbf{v} + a\mathbf{w} &= (ay, ax) + (ay_1, ax_1) = (ay + ay_1, ax + ax_1)\end{aligned} \nonumber \]

    Because these are equal, axiom S2 holds. Similarly, the reader can verify that axiom S3 holds. However, axiom S4 fails because

    \[a(b(x, y)) = a(by, bx) = (abx, aby) \nonumber \]

    need not equal \((ab)(x, y) = (aby, abx)\). Hence, \(V\) is not a vector space. (In fact, axiom S5 also fails.)
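The failure of axioms S4 and S5 can be seen concretely. The sketch below (ours, not the text's) implements the modified scalar multiplication and exhibits scalars and a vector where both axioms break down:

```python
# The modified scalar multiplication from the example: a(x, y) = (ay, ax).
def smul(a, v):
    x, y = v
    return (a * y, a * x)

v = (1.0, 2.0)
a, b = 2.0, 3.0

# Axiom S4 fails: a(bv) != (ab)v in general.
assert smul(a, smul(b, v)) == (6.0, 12.0)    # a(bv)  = (abx, aby)
assert smul(a * b, v) == (12.0, 6.0)         # (ab)v  = (aby, abx)
assert smul(a, smul(b, v)) != smul(a * b, v)

# Axiom S5 also fails: 1v = (y, x), which differs from v whenever x != y.
assert smul(1.0, v) == (2.0, 1.0)
assert smul(1.0, v) != v
```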

    Sets of polynomials provide another important source of examples of vector spaces, so we review some basic facts. A polynomial in an indeterminate \(x\) is an expression

    \[p(x) = a_0 + a_1x + a_2x^2 + \dots + a_nx^n \nonumber \]

    where \(a_{0}, a_{1}, a_{2}, \dots, a_{n}\) are real numbers called the coefficients of the polynomial. If all the coefficients are zero, the polynomial is called the zero polynomial and is denoted simply as \(0\). If \(p(x) \neq 0\), the highest power of \(x\) with a nonzero coefficient is called the degree of \(p(x)\), denoted \(\deg p(x)\), and the coefficient of that power is called the leading coefficient of \(p(x)\). Hence \(\deg(3 + 5x) = 1\), \(\deg(1 + x + x^{2}) = 2\), and \(\deg(4) = 0\). (The degree of the zero polynomial is not defined.)
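As a small illustration (our own, with hypothetical function names), the degree and leading coefficient can be read off a coefficient list:

```python
# A polynomial is represented by its coefficient list [a0, a1, ..., an],
# where coeffs[i] is the coefficient of x^i.
def degree(coeffs):
    # Scan from the highest power down for the first nonzero coefficient.
    for i in range(len(coeffs) - 1, -1, -1):
        if coeffs[i] != 0:
            return i
    raise ValueError("degree of the zero polynomial is not defined")

def leading_coefficient(coeffs):
    return coeffs[degree(coeffs)]

assert degree([3, 5]) == 1            # deg(3 + 5x) = 1
assert degree([1, 1, 1]) == 2         # deg(1 + x + x^2) = 2
assert degree([4]) == 0               # deg(4) = 0
assert leading_coefficient([3, 5]) == 5
```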

    Let \(\mathbf{P}\) denote the set of all polynomials and suppose that

    \[\begin{aligned} p(x) &= a_0 + a_1x + a_2x^2 + \cdots \\ q(x) &= b_0 + b_1x + b_2x^2 + \cdots\end{aligned} \nonumber \]

    are two polynomials in \(\mathbf{P}\) (possibly of different degrees). Then \(p(x)\) and \(q(x)\) are called equal [written \(p(x) = q(x)\)] if and only if all the corresponding coefficients are equal—that is, \(a_{0} = b_{0}\), \(a_{1} = b_{1}\), \(a_{2} = b_{2}\), and so on. In particular, \(a_{0} + a_{1}x + a_{2}x^{2} + \dots = 0\) means that \(a_{0} = 0\), \(a_{1} = 0\), \(a_{2} = 0\), \(\dots\), and this is the reason for calling \(x\) an indeterminate. The set \(\mathbf{P}\) has an addition and scalar multiplication defined on it as follows: if \(p(x)\) and \(q(x)\) are as before and \(a\) is a real number,

    \[\begin{aligned} p(x) + q(x) &= (a_0 + b_0) + (a_1 + b_1)x + (a_2 + b_2)x^2 + \cdots \\ ap(x) &= aa_0 + (aa_1)x + (aa_2)x^2 + \cdots\end{aligned} \nonumber \]

    Evidently, these are again polynomials, so \(\mathbf{P}\) is closed under these operations, called pointwise addition and scalar multiplication. The other vector space axioms are easily verified, and we have

    The set \(\mathbf{P}\) of all polynomials is a vector space with the foregoing addition and scalar multiplication. The zero vector is the zero polynomial, and the negative of a polynomial \(p(x) = a_{0} + a_{1}x + a_{2}x^{2} + \dots\) is the polynomial \(-p(x) = -a_{0} - a_{1}x - a_{2}x^{2} - \dots\) obtained by negating all the coefficients.
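A minimal sketch of the operations on \(\mathbf{P}\), with a polynomial stored as its coefficient list (our own representation, not from the text):

```python
# Addition and scalar multiplication in P, with polynomials stored as
# coefficient lists [a0, a1, a2, ...] of possibly different lengths.
from itertools import zip_longest

def poly_add(p, q):
    # Pad the shorter polynomial with zero coefficients, then add.
    return [a + b for a, b in zip_longest(p, q, fillvalue=0)]

def poly_scale(a, p):
    return [a * c for c in p]

p = [1, 0, 3]      # 1 + 3x^2
q = [2, 5]         # 2 + 5x

assert poly_add(p, q) == [3, 5, 3]       # 3 + 5x + 3x^2
assert poly_scale(2, p) == [2, 0, 6]     # 2 + 6x^2
# Negating every coefficient gives the vector space negative, and adding
# it back yields the zero polynomial (the zero vector of P).
assert poly_add(p, poly_scale(-1, p)) == [0, 0, 0]
```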

    There is another vector space of polynomials that will be referred to later.

    Given \(n \geq 1\), let \(\mathbf{P}_{n}\) denote the set of all polynomials of degree at most \(n\), together with the zero polynomial. That is,

    \[\mathbf{P}_n = \{a_0 + a_1x + a_2x^2 + \dots + a_nx^n \mid a_0, a_1, a_2, \dots, a_n \mbox{ in } \mathbb{R}\}. \nonumber \]

    Then \(\mathbf{P}_{n}\) is a vector space. Indeed, sums and scalar multiples of polynomials in \(\mathbf{P}_{n}\) are again in \(\mathbf{P}_{n}\), and the other vector space axioms are inherited from \(\mathbf{P}\). In particular, the zero vector and the negative of a polynomial in \(\mathbf{P}_{n}\) are the same as those in \(\mathbf{P}\).

    If \(a\) and \(b\) are real numbers and \(a < b\), the interval \([a, b]\) is defined to be the set of all real numbers \(x\) such that \(a \leq x \leq b\). A (real-valued) function \(f\) on \([a, b]\) is a rule that associates to every number \(x\) in \([a, b]\) a real number denoted \(f(x)\). The rule is frequently specified by giving a formula for \(f(x)\) in terms of \(x\). For example, \(f(x) = 2^{x}\), \(f(x) = \sin x\), and \(f(x) = x^{2} + 1\) are familiar functions. In fact, every polynomial \(p(x)\) can be regarded as the formula for a function \(p\).

    The set of all functions on \([a, b]\) is denoted \(\mathbf{F}[a, b]\). Two functions \(f\) and \(g\) in \(\mathbf{F}[a, b]\) are equal if \(f(x) = g(x)\) for every \(x\) in \([a, b]\), and we describe this by saying that \(f\) and \(g\) have the same action. Note that two polynomials are equal in \(\mathbf{P}\) (defined prior to Example [exa:017729]) if and only if they are equal as functions.

    If \(f\) and \(g\) are two functions in \(\mathbf{F}[a, b]\), and if \(r\) is a real number, define the sum \(f + g\) and the scalar product \(rf\) by

    \[\begin{aligned} (f + g)(x) &= f(x) + g(x) \quad &\mbox{for each }x \mbox{ in }[a, b] \\ (rf)(x) &= rf(x) \quad &\mbox{for each }x \mbox{ in }[a, b]\end{aligned} \nonumber \]

    In other words, the action of \(f + g\) on \(x\) is to associate \(x\) with the number \(f(x) + g(x)\), and \(rf\) associates \(x\) with \(rf(x)\). For example, the sum of \(f(x) = x^{2}\) and \(g(x) = -x\) is the function \((f + g)(x) = x^{2} - x\). These operations on \(\mathbf{F}[a, b]\) are called pointwise addition and scalar multiplication of functions, and they are the usual operations familiar from elementary algebra and calculus.

    The set \(\mathbf{F}[a, b]\) of all functions on the interval \([a, b]\) is a vector space using pointwise addition and scalar multiplication. The zero function (in axiom A4), denoted \(0\), is the constant function defined by

    \[0(x) = 0 \quad \mbox{ for each }x \mbox{ in } [a, b] \nonumber \]

    The negative of a function \(f\) is denoted \(-f\) and has action defined by

    \[(-f)(x) = -f(x) \quad \mbox{ for each }x \mbox{ in } [a, b] \nonumber \]

    Axioms A1 and S1 are clearly satisfied because, if \(f\) and \(g\) are functions on \([a, b]\), then \(f + g\) and \(rf\) are again such functions. The verification of the remaining axioms is left as Exercise [ex:6_1_14].
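The pointwise operations on \(\mathbf{F}[a, b]\) translate directly into code; the sketch below (with our own helper names) builds \(f + g\) and \(rf\) as new functions:

```python
# Pointwise addition and scalar multiplication in F[a, b]: functions are
# combined by combining their values at each point x.
def fadd(f, g):
    return lambda x: f(x) + g(x)

def fscale(r, f):
    return lambda x: r * f(x)

f = lambda x: x ** 2
g = lambda x: -x
h = fadd(f, g)                  # the sum f + g from the text

assert h(3.0) == 6.0            # 3^2 + (-3) = 6
assert fscale(2.0, f)(3.0) == 18.0

# The zero function is the zero vector, and (-1)f acts as the negative of f.
zero = lambda x: 0.0
assert fadd(f, fscale(-1.0, f))(5.0) == zero(5.0)
```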

    Other examples of vector spaces will appear later, but these are sufficiently varied to indicate the scope of the concept and to illustrate the properties of vector spaces to be discussed. With such a variety of examples, it may come as a surprise that a well-developed theory of vector spaces exists. That is, many properties can be shown to hold for all vector spaces and hence hold in every example. Such properties are called theorems and can be deduced from the axioms. Here is an important example.

    Cancellation. Let \(\mathbf{u}\), \(\mathbf{v}\), and \(\mathbf{w}\) be vectors in a vector space \(V\). If \(\mathbf{v} + \mathbf{u} = \mathbf{v} + \mathbf{w}\), then \(\mathbf{u} = \mathbf{w}\).

    We are given \(\mathbf{v} + \mathbf{u} = \mathbf{v} + \mathbf{w}\). If these were numbers instead of vectors, we would simply subtract \(\mathbf{v}\) from both sides of the equation to obtain \(\mathbf{u} = \mathbf{w}\). This can be accomplished with vectors by adding \(-\mathbf{v}\) to both sides of the equation. The steps (using only the axioms) are as follows:

    \[\begin{aligned} \mathbf{v} + \mathbf{u} &= \mathbf{v} + \mathbf{w} \nonumber \\ -\mathbf{v} + (\mathbf{v} + \mathbf{u}) &= -\mathbf{v} + (\mathbf{v} + \mathbf{w}) \tag{axiom A5} \\ (-\mathbf{v} + \mathbf{v}) + \mathbf{u} &= (-\mathbf{v} + \mathbf{v}) + \mathbf{w} \tag{axiom A3} \\ \mathbf{0} + \mathbf{u} &= \mathbf{0} + \mathbf{w} \tag{axiom A5} \\ \mathbf{u} &= \mathbf{w} \tag{axiom A4}\end{aligned} \]

    This is the desired conclusion.1

    As with many good mathematical theorems, the technique of the proof of Theorem [thm:017768] is at least as important as the theorem itself. The idea was to mimic the well-known process of numerical subtraction in a vector space \(V\): to subtract a vector \(\mathbf{v}\) from both sides of a vector equation, we add \(-\mathbf{v}\) to both sides. With this in mind, we define the difference \(\mathbf{u} - \mathbf{v}\) of two vectors in \(V\) as

    \[\mathbf{u} - \mathbf{v} = \mathbf{u} + (-\mathbf{v}) \nonumber \]

    We shall say that this vector is the result of having subtracted \(\mathbf{v}\) from \(\mathbf{u}\) and, as in arithmetic, this operation has the property given in Theorem [thm:017781].

    If \(\mathbf{u}\) and \(\mathbf{v}\) are vectors in a vector space \(V\), the equation

    \[\mathbf{x} + \mathbf{v} = \mathbf{u} \nonumber \]

    has one and only one solution \(\mathbf{x}\) in \(V\) given by

    \[\mathbf{x} = \mathbf{u} - \mathbf{v} \nonumber \]

    The difference \(\mathbf{x} = \mathbf{u} - \mathbf{v}\) is indeed a solution to the equation because (using several axioms)

    \[\mathbf{x} + \mathbf{v} = (\mathbf{u} - \mathbf{v}) + \mathbf{v} = [ \mathbf{u} + (-\mathbf{v})] + \mathbf{v} = \mathbf{u} + (-\mathbf{v} + \mathbf{v}) = \mathbf{u} + \mathbf{0} = \mathbf{u} \nonumber \]

    To see that this is the only solution, suppose \(\mathbf{x}_{1}\) is another solution so that \(\mathbf{x}_{1} + \mathbf{v} = \mathbf{u}\). Then \(\mathbf{x} + \mathbf{v} = \mathbf{x}_{1} + \mathbf{v}\) (they both equal \(\mathbf{u}\)), so \(\mathbf{x} = \mathbf{x}_{1}\) by cancellation.
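A quick numeric illustration of this theorem in \(\mathbb{R}^3\) (the helper names are ours):

```python
# Numeric illustration: the unique solution of x + v = u is x = u + (-v),
# computed componentwise in R^3.
def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def neg(v):
    return tuple(-a for a in v)

u, v = (5.0, 1.0, -2.0), (2.0, 4.0, 1.0)
x = add(u, neg(v))              # u - v = u + (-v)

assert x == (3.0, -3.0, -3.0)
assert add(x, v) == u           # x + v = u, as the theorem asserts
```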

    Similarly, cancellation shows that there is only one zero vector in any vector space and only one negative of each vector (Exercises [ex:6_1_10] and [ex:6_1_11]). Hence we speak of the zero vector and the negative of a vector.

    The next theorem derives some basic properties of scalar multiplication that hold in every vector space, and will be used extensively.

    Let \(\mathbf{v}\) denote a vector in a vector space \(V\) and let \(a\) denote a real number.

    1. \(0\mathbf{v} = \mathbf{0}\).
    2. \(a\mathbf{0} = \mathbf{0}\).
    3. If \(a\mathbf{v} = \mathbf{0}\), then either \(a = 0\) or \(\mathbf{v} = \mathbf{0}\).
    4. \((-1)\mathbf{v} = -\mathbf{v}\).
    5. \((-a)\mathbf{v} = -(a\mathbf{v}) = a(-\mathbf{v})\).
    1. Observe that \(0\mathbf{v} + 0\mathbf{v} = (0 + 0)\mathbf{v} = 0\mathbf{v} = 0\mathbf{v} + \mathbf{0}\) where the first equality is by axiom S3. It follows that \(0\mathbf{v} = \mathbf{0}\) by cancellation.
    2. The proof is similar to that of (1), and is left as Exercise [ex:6_1_12](a).
    3. Assume that \(a\mathbf{v} = \mathbf{0}\). If \(a = 0\), there is nothing to prove; if \(a \neq 0\), we must show that \(\mathbf{v} = \mathbf{0}\). But \(a \neq 0\) means we can scalar-multiply the equation \(a\mathbf{v} = \mathbf{0}\) by the scalar \(\frac{1}{a}\). The result (using (2) and Axioms S5 and S4) is

      \[\mathbf{v} = 1\mathbf{v} = \left(\frac{1}{a}a\right)\mathbf{v} = \frac{1}{a}(a\mathbf{v}) = \frac{1}{a}\mathbf{0} = \mathbf{0} \nonumber \]

    4. We have \(-\mathbf{v} + \mathbf{v} = \mathbf{0}\) by axiom A5. On the other hand,

      \[(-1)\mathbf{v} + \mathbf{v} = (-1)\mathbf{v} + 1\mathbf{v} = (-1 + 1)\mathbf{v} = 0\mathbf{v} = \mathbf{0} \nonumber \]

      Hence \((-1)\mathbf{v} + \mathbf{v} = -\mathbf{v} + \mathbf{v}\), so \((-1)\mathbf{v} = -\mathbf{v}\) by cancellation.

    5. The proof is left as Exercise [ex:6_1_12].

    The properties in Theorem [thm:017797] are familiar for matrices; the point here is that they hold in every vector space. It is hard to exaggerate the importance of this observation.

    Axiom A3 ensures that the sum \(\mathbf{u} + (\mathbf{v} + \mathbf{w}) = (\mathbf{u} + \mathbf{v}) + \mathbf{w}\) is the same however it is formed, and we write it simply as \(\mathbf{u} + \mathbf{v} + \mathbf{w}\). Similarly, there are different ways to form any sum \(\mathbf{v}_{1} + \mathbf{v}_{2} + \dots + \mathbf{v}_{n}\), and Axiom A3 guarantees that they are all equal. Moreover, Axiom A2 shows that the order in which the vectors are written does not matter (for example: \(\mathbf{u} + \mathbf{v} + \mathbf{w} + \mathbf{z} = \mathbf{z} + \mathbf{u} + \mathbf{w} + \mathbf{v}\)).

    Similarly, Axioms S2 and S3 extend. For example

    \[a(\mathbf{u} + \mathbf{v} + \mathbf{w}) = a\left[ \mathbf{u} + (\mathbf{v}+\mathbf{w})\right] = a\mathbf{u} + a(\mathbf{v}+\mathbf{w})=a\mathbf{u} + a\mathbf{v} + a\mathbf{w} \nonumber \]

    for all \(a\), \(\mathbf{u}\), \(\mathbf{v}\), and \(\mathbf{w}\). Similarly, \((a + b + c)\mathbf{v} = a\mathbf{v} + b\mathbf{v} + c\mathbf{v}\) holds for all values of \(a\), \(b\), \(c\), and \(\mathbf{v}\) (verify). More generally,

    \[\begin{aligned} a(\mathbf{v}_1 + \mathbf{v}_2 + \dots + \mathbf{v}_n) &= a\mathbf{v}_1 + a\mathbf{v}_2 + \dots + a\mathbf{v}_n \\ (a_1 + a_2 + \dots + a_n)\mathbf{v} &= a_1\mathbf{v} + a_2\mathbf{v} + \dots + a_n\mathbf{v}\end{aligned} \nonumber \]

    hold for all \(n \geq 1\), all numbers \(a, a_{1}, \dots, a_{n}\), and all vectors \(\mathbf{v}, \mathbf{v}_{1}, \dots, \mathbf{v}_{n}\). The verifications are by induction and are left to the reader (Exercise [ex:6_1_13]). These facts—together with the axioms, Theorem [thm:017797], and the definition of subtraction—enable us to simplify expressions involving sums of scalar multiples of vectors by collecting like terms, expanding, and taking out common factors. This has been discussed for the vector space of matrices in Section [sec:2_1] (and for geometric vectors in Section [sec:4_1]); the manipulations in an arbitrary vector space are carried out in the same way. Here is an illustration.

    If \(\mathbf{u}\), \(\mathbf{v}\), and \(\mathbf{w}\) are vectors in a vector space \(V\), simplify the expression

    \[2(\mathbf{u} + 3 \mathbf{w}) - 3(2\mathbf{w} - \mathbf{v}) - 3[2(2\mathbf{u} + \mathbf{v} - 4\mathbf{w}) - 4(\mathbf{u} - 2\mathbf{w})] \nonumber \]

    The reduction proceeds as though \(\mathbf{u}\), \(\mathbf{v}\), and \(\mathbf{w}\) were matrices or variables.

    \[\begin{aligned} & 2(\mathbf{u} + 3 \mathbf{w}) - 3(2\mathbf{w} - \mathbf{v}) - 3[2(2\mathbf{u} + \mathbf{v} - 4\mathbf{w}) - 4(\mathbf{u} - 2\mathbf{w})] \\ &= 2\mathbf{u} + 6\mathbf{w} - 6\mathbf{w} + 3\mathbf{v} - 3[4\mathbf{u} + 2\mathbf{v} - 8\mathbf{w} - 4\mathbf{u} + 8\mathbf{w}] \\ &= 2\mathbf{u} + 3\mathbf{v} - 3[2\mathbf{v}] \\ &= 2\mathbf{u} + 3\mathbf{v} - 6\mathbf{v} \\ &= 2\mathbf{u} - 3\mathbf{v}\end{aligned} \nonumber \]
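The simplification can be sanity-checked numerically by evaluating both sides with concrete vectors in \(\mathbb{R}^2\) (a spot check with our own helpers, not a proof):

```python
# Check that 2(u + 3w) - 3(2w - v) - 3[2(2u + v - 4w) - 4(u - 2w)]
# equals 2u - 3v for particular vectors in R^2.
def add(*vs):
    return tuple(sum(t) for t in zip(*vs))

def scale(a, v):
    return tuple(a * x for x in v)

u, v, w = (1.0, 2.0), (3.0, -1.0), (-2.0, 4.0)

lhs = add(
    scale(2, add(u, scale(3, w))),
    scale(-3, add(scale(2, w), scale(-1, v))),
    scale(-3, add(scale(2, add(scale(2, u), v, scale(-4, w))),
                  scale(-4, add(u, scale(-2, w))))),
)
rhs = add(scale(2, u), scale(-3, v))

assert lhs == rhs               # both sides agree for these vectors
```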

    Condition (2) in Theorem [thm:017797] points to another example of a vector space.

    A set \(\{\mathbf{0}\}\) with one element becomes a vector space if we define

    \[\mathbf{0} + \mathbf{0} = \mathbf{0} \quad \mbox{ and } \quad a\mathbf{0} = \mathbf{0} \mbox{ for all scalars } a. \nonumber \]

    The resulting space is called the zero vector space and is denoted \(\{\mathbf{0}\}\).

    The vector space axioms are easily verified for \(\{\mathbf{0}\}\). In any vector space \(V\), Theorem [thm:017797] shows that the zero subspace (consisting of the zero vector of \(V\) alone) is a copy of the zero vector space.


    1. Observe that none of the scalar multiplication axioms are needed here.

    This page titled 5.11.1.1: Examples and Basic Properties is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by W. Keith Nicholson (Lyryx Learning Inc.) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.