Mathematics LibreTexts

4: Vector Spaces - Rⁿ


    • 4.1: Vector Spaces
      In this section we consider the idea of an abstract vector space.
    • 4.2: Subspaces
    • 4.3: Review of Vectors
      This page covers the foundational concepts of vectors in \(\mathbb{R}^n\), including position vectors, vector operations (addition, subtraction, scalar multiplication), and geometric interpretations. It explains distance between points using the Pythagorean theorem, introduces unit vectors and their calculation, and discusses linear combinations of vectors.
    • 4.4: Dot and Cross Product
      There are two ways of multiplying vectors which are of great importance in applications. The first of these is called the dot product. When we take the dot product of vectors, the result is a scalar. For this reason, the dot product is also called the scalar product and sometimes the inner product.
    • 4.5: Lines and Planes
      We can use the concept of vectors and points to find equations for arbitrary lines in \(\mathbb{R}^n\), although in this section the focus will be on lines in \(\mathbb{R}^3\).
    • 4.6: Spanning Sets in Rⁿ
      By generating all linear combinations of a set of vectors, one can obtain various subsets of \(\mathbb{R}^{n}\) which we call subspaces. For example, what set of vectors in \(\mathbb{R}^{3}\) generates the \(XY\)-plane? What is the smallest such set of vectors you can find? The tools of spanning, linear independence, and basis are exactly what is needed to answer these and similar questions, and they are the focus of this section.
    • 4.7: Linear Independence
      This section discusses the linear dependence and independence between vectors.
    • 4.8: Subspaces
    • 4.9: Subspaces and Bases
      The goal of this section is to develop an understanding of a subspace of \(\mathbb{R}^n\).
    • 4.10: Row, Column and Null Spaces
      This section discusses the Row, Column, and Null Spaces of a matrix, focusing on their definitions, properties, and computational methods.
    • 4.11: Dot Products and Orthogonality
      This page covers the concepts of dot product, vector length, distance, and orthogonality within vector spaces. It defines the dot product mathematically in \(\mathbb{R}^n\) and explains properties like commutativity and distributivity. Length is derived from the dot product, and the distance between points is defined as the length of the connecting vector. Unit vectors are introduced, and orthogonality is defined as having a dot product of zero.
    • 4.12: Orthogonal Vectors and Matrices
      In this section, we examine what it means for vectors (and sets of vectors) to be orthogonal and orthonormal. First, it is necessary to review some important concepts. You may recall the definitions of the span of a set of vectors and of a linearly independent set of vectors.
    • 4.13: Gram-Schmidt Process
      The Gram-Schmidt process is an algorithm that transforms a set of vectors into an orthonormal set spanning the same subspace, that is, one generating the same collection of linear combinations.
    • 4.14: Orthogonal Complements
      This page explores orthogonal complements in linear algebra, defining them as vectors orthogonal to a subspace \(W\) in \(\mathbb{R}^n\). It details properties, computation methods (such as using RREF), and visual representations in \(\mathbb{R}^2\) and \(\mathbb{R}^3\). Key concepts include the relationship between a subspace and its double orthogonal complement, the equality of row and column ranks of matrices, and the significance of dimensions in relation to null spaces.
    • 4.15: Orthogonal Projections
      An important use of the Gram-Schmidt Process is in orthogonal projections, the focus of this section.
    • 4.16: Orthogonal Projection
      This page explains the orthogonal decomposition of vectors concerning subspaces in \(\mathbb{R}^n\), detailing how to compute orthogonal projections using matrix representations. It includes methods for deriving projection matrices, with an emphasis on linear transformations and their properties. The text outlines the relationship between a subspace and its orthogonal complement, utilizing examples to illustrate projection calculations and reflections across subspaces.
    • 4.17: Least Squares Approximation
      In this section, we discuss a very important technique derived from orthogonal projections: the least squares approximation.
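    The dot product, length, and unit vectors described in sections 4.4 and 4.11 can be computed directly from the coordinate definitions. The following Python sketch (an illustrative aid, not part of the text) implements them for vectors given as lists:

    ```python
    import math

    def dot(u, v):
        """Dot product of two vectors in R^n: sum of componentwise products."""
        return sum(ui * vi for ui, vi in zip(u, v))

    def norm(u):
        """Length of a vector, derived from the dot product: ||u|| = sqrt(u . u)."""
        return math.sqrt(dot(u, u))

    def unit(u):
        """Unit vector in the direction of a nonzero vector u."""
        n = norm(u)
        return [ui / n for ui in u]

    u = [3, 4, 0]
    v = [0, 0, 5]
    # dot(u, v) == 0, so u and v are orthogonal; norm(u) == 5.0; unit(u) == [0.6, 0.8, 0.0]
    ```

    The distance between two points is then the length of the vector connecting them, exactly as in section 4.3.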
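    The Gram-Schmidt process of section 4.13 can be sketched in a few lines: each vector has its projections onto the previously built orthonormal vectors subtracted out, and the remainder is normalized. This sketch (an illustration, not the text's presentation) assumes the input vectors are linearly independent:

    ```python
    import math

    def gram_schmidt(vectors):
        """Transform a list of linearly independent vectors into an orthonormal
        set spanning the same subspace (Gram-Schmidt process)."""
        dot = lambda u, v: sum(a * b for a, b in zip(u, v))
        basis = []
        for v in vectors:
            # Subtract the projection of v onto each orthonormal vector built so far.
            w = list(v)
            for q in basis:
                c = dot(w, q)
                w = [wi - c * qi for wi, qi in zip(w, q)]
            # Normalize the remainder (nonzero by linear independence).
            n = math.sqrt(dot(w, w))
            basis.append([wi / n for wi in w])
        return basis

    q1, q2 = gram_schmidt([[1, 1, 0], [1, 0, 0]])
    # q1 and q2 are orthogonal unit vectors spanning the XY-plane.
    ```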
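    The least squares approximation of section 4.17 finds the coefficients minimizing the error by solving the normal equations \(A^{T}A\mathbf{c} = A^{T}\mathbf{y}\). As a small sketch (fitting a line, with the \(2 \times 2\) system solved by Cramer's rule; this example is illustrative, not from the text):

    ```python
    def least_squares_line(xs, ys):
        """Fit y = c0 + c1*x to data points by solving the normal equations
        A^T A c = A^T y, where A has rows [1, x_i]."""
        n = len(xs)
        sx = sum(xs)
        sxx = sum(x * x for x in xs)
        sy = sum(ys)
        sxy = sum(x * y for x, y in zip(xs, ys))
        # Cramer's rule on the 2x2 normal-equations system.
        det = n * sxx - sx * sx
        c0 = (sy * sxx - sx * sxy) / det
        c1 = (n * sxy - sx * sy) / det
        return c0, c1

    c0, c1 = least_squares_line([0, 1, 2], [1, 3, 5])
    # The data lie exactly on y = 1 + 2x, so c0 == 1.0 and c1 == 2.0.
    ```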

    Thumbnail: Animation showing how the vector cross product (green) varies when the angle between the blue and red vectors is changed. (Public Domain; Nicostella via Wikipedia)


    This page titled 4: Vector Spaces - Rⁿ is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Fatemeh Yarahmadi, De Anza College.