A.4: Subspaces, Dimension, and The Kernel


    Subspaces, Basis, and Dimension

    We often find ourselves looking at the set of solutions of a linear equation \(L\vec{x} = \vec{0}\) for some matrix \(L\), that is, we are interested in the kernel of \(L\). The set of all such solutions has a nice structure: It looks and acts a lot like some Euclidean space \({\mathbb R}^k\).

    We say that a set \(S\) of vectors in \({\mathbb R}^n\) is a subspace if whenever \(\vec{x}\) and \(\vec{y}\) are members of \(S\) and \(\alpha\) is a scalar, then \[\vec{x} + \vec{y}, \qquad \text{and} \qquad \alpha \vec{x} \nonumber \] are also members of \(S\). That is, we can add and multiply by scalars and still land in \(S\), so every linear combination of vectors of \(S\) is still in \(S\). That is really what a subspace is: a subset where we can take linear combinations and still end up in the subset. Consequently, the span of any set of vectors is automatically a subspace.

    Example \(\PageIndex{1}\)

    If we let \(S = {\mathbb R}^n\), then this \(S\) is a subspace of \({\mathbb R}^n\). Adding any two vectors in \({\mathbb R}^n\) gives a vector in \({\mathbb R}^n\), and so does multiplying by scalars.

    The set \(S' = \{ \vec{0} \}\), that is, the set of the zero vector by itself, is also a subspace of \({\mathbb R}^n\). There is only one vector in this subspace, so we only need to verify the definition for that one vector, and everything checks out: \(\vec{0}+\vec{0} = \vec{0}\) and \(\alpha \vec{0} = \vec{0}\).

    The set \(S''\) of all the vectors of the form \((a,a)\) for any real number \(a\), such as \((1,1)\), \((3,3)\), or \((-0.5,-0.5)\), is a subspace of \({\mathbb R}^2\). Adding two such vectors, say \((1,1)+(3,3) = (4,4)\), again gives a vector of the same form, and so does multiplying by a scalar, say \(8(1,1) = (8,8)\).
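
    As a quick sanity check, the closure conditions for \(S''\) can be verified symbolically. Here is a minimal sketch, assuming Python with the sympy library (an assumption of this illustration, not part of the text):

    ```python
    from sympy import symbols, Matrix

    a, b, alpha = symbols("a b alpha")

    # Two generic members of S'' and a generic scalar.
    x = Matrix([a, a])
    y = Matrix([b, b])

    # Closure under addition: both entries of the sum agree.
    s = x + y
    print(s[0] - s[1] == 0)   # True: (a+b, a+b) has the form (c, c)

    # Closure under scalar multiplication: same story.
    m = alpha * x
    print(m[0] - m[1] == 0)   # True: (alpha*a, alpha*a) does too
    ```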

    If \(S\) is a subspace and we can find \(k\) linearly independent vectors in \(S\) \[\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k , \nonumber \] such that every other vector in \(S\) is a linear combination of \(\vec{v}_1, \vec{v}_2,\ldots, \vec{v}_k\), then the set \(\{ \vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k \}\) is called a basis of \(S\). In other words, \(S\) is the span of \(\{ \vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k \}\). We say that \(S\) has dimension \(k\), and we write \[\dim S = k . \nonumber \]

    Theorem \(\PageIndex{1}\)

    If \(S \subset {\mathbb R}^n\) is a subspace and \(S\) is not the trivial subspace \(\{ \vec{0} \}\), then there exists a unique positive integer \(k\) (the dimension) and a (not unique) basis \(\{ \vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k \}\), such that every \(\vec{w}\) in \(S\) can be uniquely represented by \[\vec{w} = \alpha_1 \vec{v}_1 + \alpha_2 \vec{v}_2 + \cdots + \alpha_k \vec{v}_k , \nonumber \] for some scalars \(\alpha_1\), \(\alpha_2\), …, \(\alpha_k\).

    Just as a vector in \({\mathbb R}^k\) is represented by a \(k\)-tuple of numbers, so is a vector in a \(k\)-dimensional subspace of \({\mathbb R}^n\) represented by a \(k\)-tuple of numbers, at least once we have fixed a basis. A different basis would give a different \(k\)-tuple of numbers for the same vector.

    We should reiterate that while \(k\) is unique (a subspace cannot have two different dimensions), the set of basis vectors is not at all unique. There are lots of different bases for any given subspace. Finding just the right basis for a subspace is a large part of what one does in linear algebra. In fact, that is what we spend a lot of time on in linear differential equations, although at first glance it may not seem like that is what we are doing.
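
    To make the dependence on the basis concrete, the following sketch (again using sympy, purely for illustration) computes the coordinates of the same vector of \({\mathbb R}^2\) with respect to two different bases:

    ```python
    from sympy import Matrix

    w = Matrix([3, 1])

    # Coordinates with respect to the standard basis e1, e2
    # (the basis vectors are the columns of B1).
    B1 = Matrix([[1, 0], [0, 1]])
    print(B1.solve(w).T)   # Matrix([[3, 1]])

    # Coordinates with respect to the basis (1,1), (1,-1).
    B2 = Matrix([[1, 1], [1, -1]])
    print(B2.solve(w).T)   # Matrix([[2, 1]]), since 2(1,1) + 1(1,-1) = (3,1)
    ```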

    Example \(\PageIndex{2}\)

    The standard basis \[\vec{e}_1, \vec{e}_2, \ldots, \vec{e}_n \nonumber \] is a basis of \({\mathbb R}^n\) (hence the name). So, as expected, \[\dim {\mathbb R}^n = n . \nonumber \] On the other hand, the subspace \(\{ \vec{0} \}\) is of dimension \(0\).

    The subspace \(S''\) from Example \(\PageIndex{1}\), that is, the set of vectors \((a,a)\), is of dimension 1. One possible basis is simply \(\{ (1,1) \}\), the single vector \((1,1)\): every vector in \(S''\) can be represented by \(a (1,1) = (a,a)\). Similarly, another possible basis would be \(\{ (-1,-1) \}\). Then the vector \((a,a)\) would be represented as \((-a) (-1,-1)\).

    Row and column spaces of a matrix are also examples of subspaces, as they are given as the span of vectors. We can use what we know about rank, row spaces, and column spaces from the previous section to find a basis.

    Example \(\PageIndex{3}\)

    In the last section, we considered the matrix \[A = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 2 & 4 & 5 & 6 \\ 3 & 6 & 7 & 8 \end{bmatrix} . \nonumber \] Using row reduction to find the pivot columns, we found \[\text{column space of } A = \operatorname{span} \left\{ \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} , \begin{bmatrix} 3 \\ 5 \\ 7 \end{bmatrix} \right\} . \nonumber \] What we did was find a basis of the column space. The basis has two elements, so the column space of \(A\) is two-dimensional. Notice that the rank of \(A\) is two.

    We would have followed the same procedure if we wanted to find a basis of the subspace \(X\) spanned by \[\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} , \begin{bmatrix} 2 \\ 4 \\ 6 \end{bmatrix} , \begin{bmatrix} 3 \\ 5 \\ 7 \end{bmatrix} , \begin{bmatrix} 4 \\ 6 \\ 8 \end{bmatrix} . \nonumber \] We would simply have formed the matrix \(A\) with these vectors as columns and repeated the computation above. The subspace \(X\) is then the column space of \(A\).
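
    The row reduction in this example is easy to reproduce by machine. A minimal sketch using sympy (an assumption of this illustration):

    ```python
    from sympy import Matrix

    A = Matrix([[1, 2, 3, 4],
                [2, 4, 5, 6],
                [3, 6, 7, 8]])

    # rref() returns the reduced row echelon form together with the
    # indices of the pivot columns (0-indexed).
    R, pivots = A.rref()
    print(pivots)            # (0, 2): the 1st and 3rd columns are pivots

    # columnspace() returns the pivot columns of A as a basis.
    for v in A.columnspace():
        print(v.T)           # [1, 2, 3] and [3, 5, 7]

    print(A.rank())          # 2
    ```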

    Example \(\PageIndex{4}\)

    Consider the matrix \[L = \begin{bmatrix} {1} & 2 & 0 & 0 & 3 \\ 0 & 0 & {1} & 0 & 4 \\ 0 & 0 & 0 & {1} & 5 \end{bmatrix} . \nonumber \] Conveniently, the matrix is in reduced row echelon form. The matrix is of rank 3. The column space is the span of the pivot columns. It is the 3-dimensional space \[\text{column space of $L$} = \operatorname{span} \left\{ \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} , \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} , \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \right\} = {\mathbb{R}}^3 . \nonumber \] The row space is the 3-dimensional space \[\text{row space of $L$} = \operatorname{span} \left\{ \begin{bmatrix} 1 & 2 & 0 & 0 & 3 \end{bmatrix} , \begin{bmatrix} 0 & 0 & 1 & 0 & 4 \end{bmatrix} , \begin{bmatrix} 0 & 0 & 0 & 1 & 5 \end{bmatrix} \right\} . \nonumber \] As these vectors have 5 components, we think of the row space of \(L\) as a subspace of \({\mathbb{R}}^5\).

    The way the dimensions worked out in the examples is not an accident. Since the number of vectors that we needed to take was always the same as the number of pivots, and the number of pivots is the rank, we get the following result.

    Theorem \(\PageIndex{2}\)

    Rank

    The dimension of the column space and the dimension of the row space of a matrix \(A\) are both equal to the rank of \(A\).
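
    For the matrix \(A\) of Example \(\PageIndex{3}\), the three numbers can be compared directly (a minimal sympy sketch, assuming the library is available):

    ```python
    from sympy import Matrix

    A = Matrix([[1, 2, 3, 4],
                [2, 4, 5, 6],
                [3, 6, 7, 8]])

    # Rank, dimension of the column space, and dimension of the row
    # space all agree, as the theorem asserts.
    print(A.rank(), len(A.columnspace()), len(A.rowspace()))   # 2 2 2
    ```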

    Kernel

    The set of solutions of a linear equation \(L\vec{x} = \vec{0}\), the kernel of \(L\), is a subspace: If \(\vec{x}\) and \(\vec{y}\) are solutions, then \[L(\vec{x}+\vec{y}) = L\vec{x}+L\vec{y} = \vec{0}+\vec{0} = \vec{0} , \qquad \text{and} \qquad L(\alpha \vec{x}) = \alpha L \vec{x} = \alpha \vec{0} = \vec{0}. \nonumber \] So \(\vec{x}+\vec{y}\) and \(\alpha \vec{x}\) are solutions. The dimension of the kernel is called the nullity of the matrix.
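
    The subspace property is easy to see in action. Using the matrix \(L\) of Example \(\PageIndex{4}\), the sketch below (again a sympy illustration, not part of the text) takes two solutions of \(L\vec{x} = \vec{0}\) and checks that an arbitrary linear combination is again a solution:

    ```python
    from sympy import Matrix, zeros

    L = Matrix([[1, 2, 0, 0, 3],
                [0, 0, 1, 0, 4],
                [0, 0, 0, 1, 5]])

    # Two solutions of L x = 0 ...
    x, y = L.nullspace()

    # ... and an arbitrary linear combination of them is again a solution.
    v = 7*x - 2*y
    print(L * v == zeros(3, 1))   # True
    ```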

    The same sort of idea governs the solutions of linear differential equations. We try to describe the kernel of a linear differential operator, and as it is a subspace, we look for a basis of this kernel. Much of this book is dedicated to finding such bases.

    The kernel of a matrix is the same as the kernel of its reduced row echelon form, and for a matrix in reduced row echelon form the kernel is rather easy to find. If a matrix \(L\) is applied to a vector \(\vec{x}\), then each entry in \(\vec{x}\) corresponds to a column of \(L\): the column that the entry multiplies. To find the kernel, pick a non-pivot column and make a vector that has a \(-1\) in the entry corresponding to this non-pivot column and zeros in all the entries corresponding to the other non-pivot columns. Then, for each entry corresponding to a pivot column, take precisely the value in the corresponding row of the chosen non-pivot column, so that the vector is a solution to \(L \vec{x} = \vec{0}\). This procedure is best understood by example.

    Example \(\PageIndex{5}\)

    Consider \[L = \begin{bmatrix} \fbox{1} & 2 & 0 & 0 & 3 \\ 0 & 0 & \fbox{1} & 0 & 4 \\ 0 & 0 & 0 & \fbox{1} & 5 \end{bmatrix} . \nonumber \] This matrix is in reduced row echelon form, and the pivots are marked. There are two non-pivot columns, so the kernel has dimension 2; that is, it is the span of 2 vectors. Let us find the first vector. We look at the first non-pivot column, the \(2^{\text{nd}}\) column, and we put a \(-1\) in the \(2^{\text{nd}}\) entry of our vector. We put a \(0\) in the \(5^{\text{th}}\) entry as the \(5^{\text{th}}\) column is also a non-pivot column: \[\begin{bmatrix} ? \\ -1 \\ ? \\ ? \\ 0 \end{bmatrix} . \nonumber \] Let us fill in the rest. When this vector hits the first row, we get \(-2\) plus \(1\) times whatever the first question mark is. So make the first question mark \(2\). For the second and third rows, it is sufficient to make the question marks zero. We are really filling in the non-pivot column into the remaining entries. Let us check while marking which numbers went where: \[\begin{bmatrix} 1 & \fbox{2} & 0 & 0 & 3 \\ 0 & \fbox{0} & 1 & 0 & 4 \\ 0 & \fbox{0} & 0 & 1 & 5 \end{bmatrix} \begin{bmatrix} \fbox{2} \\ -1 \\ \fbox{0} \\ \fbox{0} \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} . \nonumber \] Yay! How about the second vector? We start with \[\begin{bmatrix} ? \\ 0 \\ ? \\ ? \\ -1 \end{bmatrix} . \nonumber \] We set the first question mark to 3, the second to 4, and the third to 5. Let us check, marking things as previously, \[\begin{bmatrix} 1 & 2 & 0 & 0 & \fbox{3} \\ 0 & 0 & 1 & 0 & \fbox{4} \\ 0 & 0 & 0 & 1 & \fbox{5} \end{bmatrix} \begin{bmatrix} \fbox{3} \\ 0 \\ \fbox{4} \\ \fbox{5} \\ -1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} . \nonumber \] There are two non-pivot columns, so we only need two vectors. We have found a basis of the kernel: \[\text{kernel of } L = \operatorname{span} \left\{ \begin{bmatrix} 2 \\ -1 \\ 0 \\ 0 \\ 0 \end{bmatrix} , \begin{bmatrix} 3 \\ 0 \\ 4 \\ 5 \\ -1 \end{bmatrix} \right\} . \nonumber \]

    What we did in finding a basis of the kernel was to express all solutions of \(L \vec{x} = \vec{0}\) as linear combinations of some given vectors.

    The procedure to find the basis of the kernel of a matrix \(L\):

    1. Find the reduced row echelon form of \(L\).
    2. Write down the basis of the kernel as above, one vector for each non-pivot column.
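
    sympy's nullspace method carries out this same procedure. The only difference is a sign convention: sympy places \(+1\) (rather than \(-1\)) in the free-variable entries, so it returns the negatives of the vectors found in Example \(\PageIndex{5}\); of course, they span the same kernel. A minimal sketch, assuming sympy is available:

    ```python
    from sympy import Matrix

    L = Matrix([[1, 2, 0, 0, 3],
                [0, 0, 1, 0, 4],
                [0, 0, 0, 1, 5]])

    # One basis vector per non-pivot column; sympy puts +1 in the
    # free entries, giving the negatives of the vectors in Example 5.
    for v in L.nullspace():
        print(v.T)   # [-2, 1, 0, 0, 0] and [-3, 0, -4, -5, 1]
    ```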

    The rank of a matrix is the dimension of the column space, which is the span of the pivot columns, while the kernel is spanned by one vector for each non-pivot column. So the two numbers must add up to the number of columns.

    Theorem \(\PageIndex{3}\)

    Rank-Nullity

    If a matrix \(A\) has \(n\) columns, rank \(r\), and nullity \(k\) (dimension of the kernel), then \[n = r+k . \nonumber \]

    The theorem is immensely useful in applications. It allows one to compute the rank \(r\) if one knows the nullity \(k\) and vice versa, without doing any extra work.
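
    For the matrix \(A\) of Example \(\PageIndex{3}\), the bookkeeping works out as the theorem promises (a quick sympy check, offered only as illustration):

    ```python
    from sympy import Matrix

    A = Matrix([[1, 2, 3, 4],
                [2, 4, 5, 6],
                [3, 6, 7, 8]])

    n = A.cols                # 4 columns
    r = A.rank()              # rank 2
    k = len(A.nullspace())    # nullity 2

    print(n == r + k)         # True
    ```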

    Let us consider an example application, a simple version of the so-called Fredholm alternative. A similar result is true for differential equations. Consider \[A \vec{x} = \vec{b} , \nonumber \] where \(A\) is a square \(n \times n\) matrix. There are then two mutually exclusive possibilities:

    1. A nonzero solution \(\vec{x}\) to \(A \vec{x} = \vec{0}\) exists.
    2. The equation \(A \vec{x} = \vec{b}\) has a unique solution \(\vec{x}\) for every \(\vec{b}\).

    How does the Rank–Nullity theorem come into the picture? Well, if \(A \vec{x} = \vec{0}\) has a nonzero solution \(\vec{x}\), then the nullity \(k\) is positive. But then the rank \(r = n-k\) must be less than \(n\). This means that the column space of \(A\) is of dimension less than \(n\), so it is a subspace that does not include everything in \({\mathbb{R}}^n\). So \({\mathbb{R}}^n\) must contain some vector \(\vec{b}\) not in the column space of \(A\). In fact, most vectors in \({\mathbb{R}}^n\) are not in the column space of \(A\).
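
    Both alternatives are easy to exhibit concretely. In the sketch below (a sympy illustration; the matrices are made up for this example), the singular matrix falls under alternative (1), while the invertible matrix falls under alternative (2):

    ```python
    from sympy import Matrix, linsolve, symbols

    x1, x2 = symbols("x1 x2")

    # A singular matrix: alternative (1) holds.
    A = Matrix([[1, 2],
                [2, 4]])
    print(A.nullspace())                           # a nonzero solution of Ax = 0
    print(linsolve((A, Matrix([1, 0])), x1, x2))   # EmptySet: this b is not
                                                   # in the column space of A

    # An invertible matrix: alternative (2) holds.
    B = Matrix([[1, 2],
                [3, 4]])
    print(linsolve((B, Matrix([1, 0])), x1, x2))   # exactly one solution
    ```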


    A.4: Subspaces, Dimension, and The Kernel is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
