
1.3: The n-dimensional vector space V(n)


    The manipulation of directed quantities, such as velocities, accelerations, forces and the like is of considerable importance in classical mechanics and electrodynamics. The need to simplify the rather complex operations led to the development of an abstraction: the concept of a vector.

    The precise meaning of this concept is implicit in the rules governing its manipulations. These rules fall into three main categories: they pertain to

    1. the addition of vectors,
    2. the multiplication of vectors by numbers (scalars),
    3. the multiplication of vectors by vectors (inner product and vector product).

    While the subtle problems involved in 3 will be taken up in the next chapter, we proceed here to show that rules falling under 1 and 2 find their precise expression in the abstract theory of finite dimensional vector spaces.

    The rules related to the addition of vectors can be concisely expressed as follows: vectors are elements of a set \(V\) that forms an Abelian group under the operation of addition, briefly an additive group.

    The inverse of a vector is its negative, and the zero vector plays the role of the identity element.

    The numbers, or “scalars,” mentioned under (2) are usually taken to be the real or the complex numbers. For many considerations involving vector spaces there is no need to specify which of these alternatives is chosen. In fact, all we need is that the scalars form a field. More explicitly, they are elements of a set which is closed with respect to two binary operations, addition and multiplication, satisfying the usual commutative, associative and distributive laws; both operations are invertible provided they do not involve division by zero.
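    For a concrete model of a field, one can take the rational numbers, as implemented for instance by Python's fractions.Fraction; the following minimal sketch (example values invented) checks the closure and invertibility properties just listed:

```python
from fractions import Fraction

a, b, c = Fraction(2, 3), Fraction(-5, 7), Fraction(1, 4)

# closure with respect to the two binary operations
assert isinstance(a + b, Fraction) and isinstance(a * b, Fraction)

# invertibility: additive inverse, and multiplicative inverse for a != 0
assert a + (-a) == 0
assert a * (1 / a) == 1

# the distributive law connecting the two operations
assert a * (b + c) == a * b + a * c
```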

    A vector space \(V(F)\) over a field \(F\) is formally defined as a set of elements forming an additive group that can be multiplied by the elements of the field \(F\).

    In particular, we shall consider real and complex vector spaces \(V(R)\) and \(V(C)\) respectively.

    I note in passing that the use of the field concept opens the way for a much greater variety of interpretations, but this is of no interest in the present context. In contrast, the fact that we have been considering “vector” as an undefined concept will enable us to propose in the sequel interpretations that go beyond the classical one of directed quantities. Thus the above definition is consistent with the interpretation of a vector as a pair of numbers indicating the amounts of two chemical species present in a mixture, or alternatively, as a point in phase space spanned by the coordinates and momenta of a system of mass points.
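    To make the mixture interpretation concrete, here is a toy sketch (species amounts invented for illustration) in which rules (1) and (2) reduce to componentwise arithmetic:

```python
import numpy as np

# amounts of two chemical species in each mixture (arbitrary units)
mix_a = np.array([1.0, 2.0])
mix_b = np.array([0.5, 1.5])

pooled = mix_a + mix_b      # rule (1): addition of vectors
tripled = 3.0 * mix_a       # rule (2): multiplication by a scalar
print(pooled, tripled)      # [1.5 3.5] [3. 6.]
```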

    We shall now summarize a number of standard results of the theory of vector spaces.

    Suppose we have a set of non-zero vectors \(\{\vec{x}_{1}, \vec{x}_{2}, \cdots , \vec{x}_{n}\}\) in \(V\) which satisfy the relation

    \[\begin{array}{c} {\sum_{k} a_{k}\vec{x}_{k} = 0} \end{array} \label{EQ1.3.1}\]

    where the scalars \(a_{k} \in F\), and not all of them vanish. In this case the vectors are said to be linearly dependent. If, in contrast, the relation \ref{EQ1.3.1} implies that all \(a_{k} = 0\), then we say that the vectors are linearly independent.

    In the former case there is at least one vector of the set that can be written as a linear combination of the rest:

    \[\begin{array}{c} {\vec{x}_{m} = \sum_{k=1}^{m-1} b_{k}\vec{x}_{k}} \end{array}\]
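    In practice, linear independence can be tested by stacking the vectors as rows of a matrix and computing its rank; a minimal numpy sketch (vectors invented so that a dependence exists):

```python
import numpy as np

# candidate vectors stacked as rows
vectors = np.array([[1.0, 0.0, 2.0],
                    [0.0, 1.0, 1.0],
                    [2.0, 1.0, 5.0]])   # row 2 = 2*row 0 + row 1

rank = np.linalg.matrix_rank(vectors)
print(rank == len(vectors))             # False: the set is linearly dependent

# indeed, the last vector is a linear combination of the rest
assert np.allclose(vectors[2], 2 * vectors[0] + vectors[1])
```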

    Definition 1.1

    A (linear) basis in a vector space \(V\) is a set \(E = \{\vec{e}_{1}, \vec{e}_{2}, \cdots , \vec{e}_{n}\}\) of linearly independent vectors such that every vector in \(V\) is a linear combination of the \(\vec{e}_{k}\). The basis is said to span or generate the space.

    A vector space is finite dimensional if it has a finite basis. It is a fundamental theorem of linear algebra that the number of elements in any basis of a finite dimensional space is the same as in any other basis. This number \(n\) is the basis-independent dimension of \(V\); we include it in the designation of the vector space: \(V(n, F)\).

    Given a particular basis we can express any \(\vec{x} \in V\) as a linear combination

    \[\begin{array}{c} {\vec{x} = \sum_{k=1}^{n} x^{k}\vec{e}_{k}} \end{array}\]

    where the coordinates \(x^{k}\) are uniquely determined by \(E\). The \(x^{k}\vec{e}_{k}\ (k = 1, 2, \cdots, n)\) are called the components of \(\vec{x}\). The use of superscripts is to suggest a contrast between the transformation properties of coordinates and basis, to be derived shortly.
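    Concretely, given a basis the coordinates \(x^{k}\) are obtained by solving a linear system; a small sketch with an invented basis of \(R^{2}\):

```python
import numpy as np

# rows of E are the basis vectors, written in standard coordinates
E = np.array([[1.0,  1.0],
              [1.0, -1.0]])
x = np.array([3.0, 1.0])            # the vector to be expanded

# x = sum_k x^k e_k  is the linear system  E^T @ coords = x
coords = np.linalg.solve(E.T, x)
print(coords)                       # [2. 1.], i.e. x = 2 e_1 + 1 e_2
assert np.allclose(coords @ E, x)   # the coordinates are uniquely determined by E
```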

    Using bases, also called coordinate systems or frames, is convenient for handling vectors; thus addition is performed by adding coordinates. However, the choice of a particular basis introduces an element of arbitrariness into the formalism, and this calls for countermeasures.

    Suppose we introduce a new basis by means of a nonsingular linear transformation:

    \[\begin{array}{c} {\vec{e}_{i}' = \sum_{k} S_{i}^{k}\vec{e}_{k}} \end{array} \label{EQ1.3.4}\]

    where the matrix of the transformation has a nonvanishing determinant

    \[\begin{array}{c} {\det \left(S_{i}^{k}\right) \ne 0} \end{array} \label{EQ1.3.5}\]

    ensuring that the \(\vec{e}_{i}'\) form a linearly independent set, i.e., an acceptable basis. Within the context of the linear theory this is the most general transformation we have to consider.
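    A quick numerical check of this point (basis and transformation invented for illustration): a matrix \(S\) with nonvanishing determinant carries a basis into another linearly independent set.

```python
import numpy as np

E = np.eye(3)                           # rows: a basis e_1, e_2, e_3 of R^3
S = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])         # an invertible transformation

assert abs(np.linalg.det(S)) > 1e-12    # Eq. (1.3.5): det(S) != 0

E_new = S @ E                           # Eq. (1.3.4): e'_i = sum_k S_i^k e_k

# the transformed vectors are again linearly independent, hence a basis
assert np.linalg.matrix_rank(E_new) == 3
```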

    We ensure the equivalence of the different bases by requiring that

    \[\begin{array}{c} {\vec{x} = \sum_{k} x^{k} \vec{e}_{k} = \sum_{i} {x^{i}}' \vec{e}_{i}'} \end{array} \label{EQ1.3.6}\]

    Inserting Equation \ref{EQ1.3.4} into Equation \ref{EQ1.3.6} we get

    \[\begin{array}{c} {\vec{x} = \sum_{i} {x^{i}}' \left(\sum_{k} S_{i}^{k}\vec{e}_{k}\right) = \sum_{k} \left(\sum_{i} {x^{i}}' S_{i}^{k}\right) \vec{e}_{k}} \end{array}\]

    and hence, in conjunction with Equation \ref{EQ1.3.6},

    \[\begin{array}{c} {x^{k} = \sum_{i} S_{i}^{k} {x^{i}}'} \end{array} \label{EQ1.3.8}\]

    Note the characteristic “turning around” of the indices as we pass from Equation \ref{EQ1.3.4} to Equation \ref{EQ1.3.8}, with a simultaneous interchange of the roles of the old and the new frame. The underlying reason can be better appreciated if the foregoing calculation is carried out in symbolic form.

    Let us write the coordinates and the basis vectors as \(n \times 1\) column matrices

    \[\begin{array}{cc} {X = \begin{pmatrix} {x^{1}}\\ {\vdots}\\ {x^{n}} \end{pmatrix}}&{E = \begin{pmatrix} {\vec{e}_{1}}\\ {\vdots}\\ {\vec{e}_{n}} \end{pmatrix}} \end{array}\]

    Equation \ref{EQ1.3.6} appears then as a matrix product

    \[\begin{array}{c} {\vec{x} = X^{T}E = X^{T}S^{-1}SE = {X'}^{T}E'} \end{array}\]

    where the superscript \(T\) stands for “transpose.”

    We ensure consistency by setting

    \[\begin{array}{c} {E' = SE} \end{array}\]

    \[\begin{array}{c} {{X'}^T = X^{T}S^{-1}} \end{array}\]

    \[\begin{array}{c} {X' = \left(S^{-1}\right)^{T}X} \end{array}\]

    Thus we arrive in a lucid fashion at the results contained in Equations \ref{EQ1.3.4} and \ref{EQ1.3.8}. We see that the “objective” or “invariant” representations of vectors are based on the procedure of transforming bases and coordinates in what is called a contragredient way.
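    The contragredient rule lends itself to a direct numerical check; in the sketch below (all entries randomly generated for illustration), transforming the basis by \(S\) and the coordinates by \(\left(S^{-1}\right)^{T}\) leaves the product \(X^{T}E\), i.e. the vector itself, unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
E = rng.normal(size=(n, n))         # rows: basis vectors e_k in some fixed frame
S = rng.normal(size=(n, n))         # a random S is nonsingular with probability 1
X = rng.normal(size=n)              # old coordinates x^k

E_new = S @ E                       # E' = S E
X_new = np.linalg.inv(S).T @ X      # X' = (S^{-1})^T X

# contragredient transformation: the vector X^T E is invariant
assert np.allclose(X @ E, X_new @ E_new)
```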

    The vector \(\vec{x}\) itself is sometimes called a contravariant vector, to be distinguished by its transformation properties from covariant vectors to be introduced later.

    There is a further point to be noted in connection with the factorization of a vector into basis and coordinates.

    The vectors we will be dealing with usually have a dimension, such as length, velocity, momentum, force and the like. It is important in such cases that the dimension be absorbed in the basis vectors \(\vec{e}_{k}\). In contrast, the coordinates \(x^{k}\) are elements of the field \(F\), whose products remain in \(F\); they are simply numbers. It is not surprising that the multiplication of vectors with other vectors constitutes a subtle problem. Vector spaces in which there is provision for such an operation are called algebras; they deserve a careful examination.

    It should finally be pointed out that there are interesting cases in which vectors have a dimensionless character. They can be built up from the elements of the field \(F\), which are arranged as n-tuples, or as \(m \times n\) matrices.

    The \(n \times n\) case is particularly interesting, because matrix multiplication makes these vector spaces into algebras in the sense just defined.
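    As a minimal illustration of the algebra structure (matrices invented), \(2 \times 2\) matrices admit the vector-space operations and, in addition, a closed though generally noncommutative product:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0]])

C = 2.0 * A + B                         # vector-space operations, rules (1) and (2)

# the additional product that makes the space an algebra
P = A @ B
assert P.shape == (2, 2)                # closure under matrix multiplication
assert not np.allclose(A @ B, B @ A)    # the product need not commute
```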


    This page titled 1.3: The n-dimensional vector space V(n) is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by László Tisza (MIT OpenCourseWare) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.