6.10.3: Isomorphisms and Composition


    Often two vector spaces can consist of quite different types of vectors but, on closer examination, turn out to be the same underlying space displayed in different symbols. For example, consider the spaces

    \[\mathbb{R}^2 = \{(a, b) \mid a, b \in \mathbb{R}\} \quad \mbox{and} \quad \mathbf{P}_1 = \{a + bx \mid a, b \in \mathbb{R}\} \nonumber \]

    Compare the addition and scalar multiplication in these spaces:

\[\begin{aligned} (a, b) + (a_1, b_1) &= (a + a_1, b + b_1) \quad & \quad (a + bx) + (a_1 + b_1x) &= (a + a_1) + (b + b_1)x \\ r(a, b) &= (ra, rb) \quad & \quad r(a + bx) &= (ra) + (rb)x\end{aligned} \nonumber \]

    Clearly these are the same vector space expressed in different notation: if we change each \((a, b)\) in \(\mathbb{R}^2\) to \(a + bx\), then \(\mathbb{R}^2\) becomes \(\mathbf{P}_{1}\), complete with addition and scalar multiplication. This can be expressed by noting that the map \((a, b) \mapsto a + bx\) is a linear transformation \(\mathbb{R}^2 \to \mathbf{P}_{1}\) that is both one-to-one and onto. In this form, we can describe the general situation.
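Indeed, writing \(T(a, b) = a + bx\), a direct computation confirms that \(T\) preserves both operations:

\[T[(a, b) + (a_1, b_1)] = T(a + a_1, b + b_1) = (a + a_1) + (b + b_1)x = T(a, b) + T(a_1, b_1) \nonumber \]

and similarly \(T[r(a, b)] = (ra) + (rb)x = rT(a, b)\). Moreover \(T(a, b) = 0\) forces \(a = b = 0\), so \(T\) is one-to-one, and every polynomial \(a + bx\) in \(\mathbf{P}_{1}\) equals \(T(a, b)\), so \(T\) is onto.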

    Definition: Isomorphic Vector Spaces

    A linear transformation \(T : V \to W\) is called an isomorphism if it is both onto and one-to-one. The vector spaces \(V\) and \(W\) are said to be isomorphic if there exists an isomorphism \(T : V \to W\), and we write \(V \cong W\) when this is the case.

    Example \(\PageIndex{1}\)

    The identity transformation \(1_{V} : V \to V\) is an isomorphism for any vector space \(V\).

    Example \(\PageIndex{2}\)

    If \(T :\mathbf{M}_{mn} \to \mathbf{M}_{nm}\) is defined by \(T(A) = A^{T}\) for all \(A\) in \(\mathbf{M}_{mn}\), then \(T\) is an isomorphism (verify). Hence \(\mathbf{M}_{mn} \cong \mathbf{M}_{nm}\).
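For instance, the verification rests on the matrix identities \((A + B)^T = A^T + B^T\) and \((rA)^T = rA^T\), which give linearity, together with

\[(A^T)^T = A \quad \mbox{for all } A \mbox{ in } \mathbf{M}_{mn} \nonumber \]

which shows that the map \(\mathbf{M}_{nm} \to \mathbf{M}_{mn}\) given by \(B \mapsto B^T\) reverses the action of \(T\), so \(T\) is both one-to-one and onto.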

    Example \(\PageIndex{3}\)

    Isomorphic spaces can “look” quite different. For example, \(\mathbf{M}_{22} \cong \mathbf{P}_{3}\) because the map \(T : \mathbf{M}_{22} \to \mathbf{P}_{3}\) given by \(T\left[ \begin{array}{rr} a & b \\ c & d \end{array} \right] = a + bx + cx^2 + dx^3\) is an isomorphism (verify).


    The word isomorphism comes from two Greek roots: iso, meaning “same,” and morphos, meaning “form.” An isomorphism \(T : V \to W\) induces a pairing

    \[\mathbf{v} \leftrightarrow T(\mathbf{v}) \nonumber \]

    between vectors \(\mathbf{v}\) in \(V\) and vectors \(T(\mathbf{v})\) in \(W\) that preserves vector addition and scalar multiplication. Hence, as far as their vector space properties are concerned, the spaces \(V\) and \(W\) are identical except for notation. Because addition and scalar multiplication in either space are completely determined by the same operations in the other space, all vector space properties of either space are completely determined by those of the other.

One of the most important examples of isomorphic spaces was considered in Chapter 4. Let \(A\) denote the set of all “arrows” with tail at the origin in space, and make \(A\) into a vector space using the parallelogram law and the scalar multiple law (see Section 4.1). Then define a transformation \(T : \mathbb{R}^3 \to A\) by taking

    \[T\left[ \begin{array}{c} x \\ y \\ z \end{array} \right] = \mbox{ the arrow } \mathbf{v} \mbox{ from the origin to the point } P(x, y, z). \nonumber \]

    In Section 4.1 matrix addition and scalar multiplication were shown to correspond to the parallelogram law and the scalar multiplication law for these arrows, so the map \(T\) is a linear transformation. Moreover \(T\) is an isomorphism: it is one-to-one by Theorem 4.1.2, and it is onto because, given an arrow \(\mathbf{v}\) in \(A\) with tip \(P(x, y, z)\), we have \(T\left[ \begin{array}{c} x \\ y \\ z \end{array} \right] = \mathbf{v}\). This justifies the identification \(\mathbf{v} = \left[ \begin{array}{c} x \\ y \\ z \end{array} \right]\) in Chapter 4 of the geometric arrows with the algebraic matrices. This identification is very useful. The arrows give a “picture” of the matrices and so bring geometric intuition into \(\mathbb{R}^3\); the matrices are useful for detailed calculations and so bring analytic precision into geometry. This is one of the best examples of the power of an isomorphism to shed light on both spaces being considered.

    The following theorem gives a very useful characterization of isomorphisms: They are the linear transformations that preserve bases.

    Theorem \(\PageIndex{1}\)

    If \(V\) and \(W\) are finite dimensional spaces, the following conditions are equivalent for a linear transformation \(T : V \to W\).

    1. \(T\) is an isomorphism.
    2. If \(\{\mathbf{e}_{1}, \mathbf{e}_{2}, \dots, \mathbf{e}_{n}\}\) is any basis of \(V\), then \(\{T(\mathbf{e}_{1}), T(\mathbf{e}_{2}), \dots, T(\mathbf{e}_{n})\}\) is a basis of \(W\).
    3. There exists a basis \(\{\mathbf{e}_{1}, \mathbf{e}_{2}, \dots, \mathbf{e}_{n}\}\) of \(V\) such that \(\{T(\mathbf{e}_{1}), T(\mathbf{e}_{2}), \dots, T(\mathbf{e}_{n})\}\) is a basis of \(W\).

    Proof.

    (1) \(\Rightarrow\) (2). Let \(\{\mathbf{e}_{1}, \dots, \mathbf{e}_{n}\}\) be a basis of \(V\). If \(t_{1}T(\mathbf{e}_{1}) + \cdots + t_{n}T(\mathbf{e}_{n}) = \mathbf{0}\) with \(t_{i}\) in \(\mathbb{R}\), then \(T(t_{1}\mathbf{e}_{1} + \cdots + t_{n}\mathbf{e}_{n}) = \mathbf{0}\), so \(t_{1}\mathbf{e}_{1} + \cdots + t_{n}\mathbf{e}_{n} = \mathbf{0}\) (because \(\text{ker }T = \{\mathbf{0}\}\)). But then each \(t_{i} = 0\) by the independence of the \(\mathbf{e}_{i}\), so \(\{T(\mathbf{e}_{1}), \dots, T(\mathbf{e}_{n})\}\) is independent. To show that it spans \(W\), choose \(\mathbf{w}\) in \(W\). Because \(T\) is onto, \(\mathbf{w} = T(\mathbf{v})\) for some \(\mathbf{v}\) in \(V\), so write \(\mathbf{v} = t_{1}\mathbf{e}_{1} + \cdots + t_{n}\mathbf{e}_{n}\). Hence we obtain \(\mathbf{w} = T(\mathbf{v}) = t_{1}T(\mathbf{e}_{1}) + \cdots + t_{n}T(\mathbf{e}_{n})\), proving that \(\{T(\mathbf{e}_{1}), \dots, T(\mathbf{e}_{n})\}\) spans \(W\).

    (2) \(\Rightarrow\) (3). This is because \(V\) has a basis.

    (3) \(\Rightarrow\) (1). If \(T(\mathbf{v}) = \mathbf{0}\), write \(\mathbf{v} = v_{1}\mathbf{e}_{1} + \cdots + v_{n}\mathbf{e}_{n}\) where each \(v_{i}\) is in \(\mathbb{R}\). Then

    \[\mathbf{0} = T(\mathbf{v}) = v_{1}T(\mathbf{e}_{1}) + \cdots + v_{n}T(\mathbf{e}_{n}) \nonumber \]

    so \(v_{1} = \cdots = v_{n} = 0\) by (3). Hence \(\mathbf{v} = \mathbf{0}\), so \(\text{ker }T = \{\mathbf{0}\}\) and \(T\) is one-to-one. To show that \(T\) is onto, let \(\mathbf{w}\) be any vector in \(W\). By (3) there exist \(w_{1}, \dots, w_{n}\) in \(\mathbb{R}\) such that

    \[\mathbf{w} = w_{1}T(\mathbf{e}_1) + \cdots + w_{n}T(\mathbf{e}_n) = T(w_{1}\mathbf{e}_1 + \cdots + w_n\mathbf{e}_n) \nonumber \]

    Thus \(T\) is onto.

Theorem \(\PageIndex{1}\) dovetails nicely with Theorem 7.1.3 as follows. Let \(V\) and \(W\) be vector spaces of dimension \(n\), and suppose that \(\{\mathbf{e}_{1}, \mathbf{e}_{2}, \dots, \mathbf{e}_{n}\}\) and \(\{\mathbf{f}_{1}, \mathbf{f}_{2}, \dots, \mathbf{f}_{n}\}\) are bases of \(V\) and \(W\), respectively. Theorem 7.1.3 asserts that there exists a linear transformation \(T : V \to W\) such that

    \[T(\mathbf{e}_i) = \mathbf{f}_i \quad \mbox{for each } i = 1, 2, \dots, n \nonumber \]

Then \(\{T(\mathbf{e}_{1}), \dots, T(\mathbf{e}_{n})\}\) is evidently a basis of \(W\), so \(T\) is an isomorphism by Theorem \(\PageIndex{1}\). Furthermore, the action of \(T\) is prescribed by

    \[T(r_1\mathbf{e}_1 + \cdots + r_n\mathbf{e}_n) = r_1\mathbf{f}_1 + \cdots + r_n\mathbf{f}_n \nonumber \]

so isomorphisms between spaces of equal dimension can be easily defined as soon as bases are known. In particular, this shows that if two vector spaces \(V\) and \(W\) have the same dimension, then they are isomorphic; that is, \(V \cong W\). This is half of the following theorem.

    Theorem \(\PageIndex{2}\)

If \(V\) and \(W\) are finite dimensional vector spaces, then \(V \cong W\) if and only if \(\dim V = \dim W\).

Proof. It remains to show that if \(V \cong W\) then \(\dim V = \dim W\). But if \(V \cong W\), then there exists an isomorphism \(T : V \to W\). Since \(V\) is finite dimensional, let \(\{\mathbf{e}_{1}, \dots, \mathbf{e}_{n}\}\) be a basis of \(V\). Then \(\{T(\mathbf{e}_{1}), \dots, T(\mathbf{e}_{n})\}\) is a basis of \(W\) by Theorem \(\PageIndex{1}\), so \(\dim W = n = \dim V\).

Corollary \(\PageIndex{1}\)

    Let \(U\), \(V\), and \(W\) denote vector spaces. Then:

    1. \(V \cong V\) for every vector space \(V\).
    2. If \(V \cong W\) then \(W \cong V\).
    3. If \(U \cong V\) and \(V \cong W\), then \(U \cong W\).

The proof is left to the reader. By virtue of these properties, the relation \(\cong\) is called an equivalence relation on the class of finite dimensional vector spaces. Since \(\dim(\mathbb{R}^n) = n\) it follows that

    Corollary \(\PageIndex{2}\)

If \(V\) is a vector space and \(\dim V = n\), then \(V\) is isomorphic to \(\mathbb{R}^n\).
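For example, \(\dim \mathbf{M}_{22} = 4 = \dim \mathbf{P}_{3}\), so the corollary gives

\[\mathbf{M}_{22} \cong \mathbb{R}^4 \quad \mbox{and} \quad \mathbf{P}_3 \cong \mathbb{R}^4 \nonumber \]

Combined with the transitivity in Corollary \(\PageIndex{1}\), this recovers the isomorphism \(\mathbf{M}_{22} \cong \mathbf{P}_{3}\) of Example \(\PageIndex{3}\) without exhibiting a map.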

    If \(V\) is a vector space of dimension \(n\), note that there are important explicit isomorphisms \(V \to \mathbb{R}^n\). Fix a basis \(B = \{\mathbf{b}_{1}, \mathbf{b}_{2}, \dots, \mathbf{b}_{n}\}\) of \(V\) and write \(\{\mathbf{e}_{1}, \mathbf{e}_{2}, \dots, \mathbf{e}_{n}\}\) for the standard basis of \(\mathbb{R}^n\). By Theorem 7.1.3 there is a unique linear transformation \(C_{B} : V \to \mathbb{R}^n\) given by

    \[C_B(v_1\mathbf{b}_1 + v_2\mathbf{b}_2 + \cdots + v_n\mathbf{b}_n) = v_1\mathbf{e}_1 + v_2\mathbf{e}_2 + \cdots + v_n\mathbf{e}_n = \left[ \begin{array}{c} v_1 \\ v_2 \\ \vdots \\ v_n \end{array} \right] \nonumber \]

where each \(v_{i}\) is in \(\mathbb{R}\). Moreover, \(C_{B}(\mathbf{b}_{i}) = \mathbf{e}_{i}\) for each \(i\) so \(C_{B}\) is an isomorphism by Theorem \(\PageIndex{1}\), called the coordinate isomorphism corresponding to the basis \(B\). These isomorphisms will play a central role in Chapter 9.
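For example, take \(V = \mathbf{P}_{1}\) with basis \(B = \{1 + x, 1 - x\}\). Writing \(a + bx = v_1(1 + x) + v_2(1 - x)\) and comparing coefficients gives \(v_1 + v_2 = a\) and \(v_1 - v_2 = b\), so

\[C_B(a + bx) = \left[ \begin{array}{c} \frac{1}{2}(a + b) \\ \frac{1}{2}(a - b) \end{array} \right] \nonumber \]

Different bases of \(V\) thus give different coordinate isomorphisms onto the same space \(\mathbb{R}^n\).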

    The conclusion in the above corollary can be phrased as follows: As far as vector space properties are concerned, every \(n\)-dimensional vector space \(V\) is essentially the same as \(\mathbb{R}^n\); they are the “same” vector space except for a change of symbols. This appears to make the process of abstraction seem less important—just study \(\mathbb{R}^n\) and be done with it! But consider the different “feel” of the spaces \(\mathbf{P}_{8}\) and \(\mathbf{M}_{33}\) even though they are both the “same” as \(\mathbb{R}^9\): For example, vectors in \(\mathbf{P}_{8}\) can have roots, while vectors in \(\mathbf{M}_{33}\) can be multiplied. So the merit in the abstraction process lies in identifying common properties of the vector spaces in the various examples. This is important even for finite dimensional spaces. However, the payoff from abstraction is much greater in the infinite dimensional case, particularly for spaces of functions.

    Example \(\PageIndex{4}\)

    Let \(V\) denote the space of all \(2 \times 2\) symmetric matrices. Find an isomorphism \(T :\mathbf{P}_{2} \to V\) such that \(T(1) = I\), where \(I\) is the \(2 \times 2\) identity matrix.

    Solution

\(\{1, x, x^{2}\}\) is a basis of \(\mathbf{P}_{2}\), and we want a basis of \(V\) containing \(I\). The set \(\left\lbrace \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right], \left[ \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right], \left[ \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right] \right\rbrace\) is independent in \(V\), so it is a basis because \(\dim V = 3\). Hence define \(T :\mathbf{P}_{2} \to V\) by taking \(T(1) = \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right]\), \(T(x) = \left[ \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right]\), \(T(x^2) = \left[ \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right]\), and extending linearly as in Theorem 7.1.3. Then \(T\) is an isomorphism by Theorem \(\PageIndex{1}\), and its action is given by

    \[T(a + bx + cx^2) = aT(1) + bT(x) + cT(x^2) = \left[ \begin{array}{cc} a & b \\ b & a + c \end{array} \right] \nonumber \]
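For instance, this formula gives

\[T(2 - x + 3x^2) = \left[ \begin{array}{cc} 2 & -1 \\ -1 & 2 + 3 \end{array} \right] = \left[ \begin{array}{cc} 2 & -1 \\ -1 & 5 \end{array} \right] \nonumber \]

a symmetric matrix, as expected.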

The dimension theorem gives the following useful fact about isomorphisms.

    Theorem \(\PageIndex{3}\)

    If \(V\) and \(W\) have the same dimension \(n\), a linear transformation \(T : V \to W\) is an isomorphism if it is either one-to-one or onto.

Proof. The dimension theorem asserts that \(\dim(\text{ker }T) + \dim(\text{im }T) = n\), so \(\dim(\text{ker }T) = 0\) if and only if \(\dim(\text{im }T) = n\). Thus \(T\) is one-to-one if and only if \(T\) is onto, and the result follows.
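For example, define \(T : \mathbf{P}_{2} \to \mathbf{P}_{2}\) by \(T(p) = p + p'\), where \(p'\) is the derivative; explicitly

\[T(a + bx + cx^2) = (a + b) + (b + 2c)x + cx^2 \nonumber \]

If \(p \neq \mathbf{0}\) has degree \(k\), then \(p'\) has smaller degree (or is zero), so \(T(p)\) again has degree \(k\) and is nonzero. Hence \(T\) is one-to-one, and Theorem \(\PageIndex{3}\) shows that \(T\) is an isomorphism with no separate check that it is onto.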

    Composition

Suppose that \(T : V \to W\) and \(S : W \to U\) are linear transformations. They link together as in the diagram below so, as in Section 2.3, it is possible to define a new function \(V \to U\) by first applying \(T\) and then \(S\).

    Definition: Composition of Linear Transformations

    Given linear transformations \(V \xrightarrow{T} W \xrightarrow{S} U\), the composite \(ST : V \to U\) of \(T\) and \(S\) is defined by

    \[ST(\mathbf{v}) = S\left[T(\mathbf{v})\right] \quad \mbox{for all } \mathbf{v} \mbox{ in } V \nonumber \]

    The operation of forming the new function \(ST\) is called composition.

Note: In Section 2.3 we denoted the composite as \(S \circ T\). However, it is more convenient to use the simpler notation \(ST\).

[Diagram: \(V \xrightarrow{T} W \xrightarrow{S} U\), with the composite \(ST : V \to U\).]

    The action of \(ST\) can be described compactly as follows: \(ST\) means first \(T\) then \(S\).

    Not all pairs of linear transformations can be composed. For example, if \(T : V \to W\) and \(S : W \to U\) are linear transformations then \(ST : V \to U\) is defined, but \(TS\) cannot be formed unless \(U = V\) because \(S : W \to U\) and \(T : V \to W\) do not “link” in that order. Actually, all that is required is \( U \subseteq V \).

Moreover, even if \(ST\) and \(TS\) can both be formed, they may not be equal. In fact, if \(S : \mathbb{R}^m \to \mathbb{R}^n\) and \(T : \mathbb{R}^n \to \mathbb{R}^m\) are induced by matrices \(A\) and \(B\) respectively, then \(ST\) and \(TS\) can both be formed (they are induced by \(AB\) and \(BA\) respectively), but the matrix products \(AB\) and \(BA\) may not be equal (they may not even be the same size). For instance:
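\[A = \left[ \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right], \quad B = \left[ \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right], \quad AB = \left[ \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right] \neq \left[ \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array} \right] = BA \nonumber \]

Here is another example.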

    Example \(\PageIndex{5}\)

Define \(S :\mathbf{M}_{22} \to \mathbf{M}_{22}\) and \(T : \mathbf{M}_{22} \to \mathbf{M}_{22}\) by \(S\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] = \left[ \begin{array}{cc} c & d \\ a & b \end{array} \right]\) and \(T(A) = A^{T}\) for \(A \in \mathbf{M}_{22}\). Describe the action of \(ST\) and \(TS\), and show that \(ST \neq TS\).

    Solution

    \(ST\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] = S\left[ \begin{array}{cc} a & c \\ b & d \end{array} \right] = \left[ \begin{array}{cc} b & d \\ a & c \end{array} \right]\), whereas \(TS\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] = T\left[ \begin{array}{cc} c & d \\ a & b \end{array} \right] = \left[ \begin{array}{cc} c & a \\ d & b \end{array} \right]\).

    It is clear that \(TS\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right]\) need not equal \(ST\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right]\), so \(TS \neq ST\).

    The next theorem collects some basic properties of the composition operation.

    Theorem \(\PageIndex{4}\)

    Let \(V \xrightarrow{T} W \xrightarrow{S} U \xrightarrow{R} Z\) be linear transformations.

    1. The composite \(ST\) is again a linear transformation.
    2. \(T1_{V} = T\) and \(1_{W}T = T\).
    3. \((RS)T = R(ST)\).

Theorem \(\PageIndex{4}\) can be expressed by saying that vector spaces and linear transformations are an example of a category. In general a category consists of certain objects and, for any two objects \(X\) and \(Y\), a set \(\text{mor}(X, Y)\). The elements \(\alpha\) of \(\text{mor}(X, Y)\) are called morphisms from \(X\) to \(Y\) and are written \(\alpha : X \to Y\). It is assumed that identity morphisms and composition are defined in such a way that Theorem \(\PageIndex{4}\) holds. Hence, in the category of vector spaces the objects are the vector spaces themselves and the morphisms are the linear transformations. Another example is the category of metric spaces, in which the objects are sets equipped with a distance function (called a metric), and the morphisms are continuous functions (with respect to the metric). The category of sets and functions is a very basic example.

Proof. The proofs of (1) and (2) are left as Exercise 7.3.25. To prove (3), observe that, for all \(\mathbf{v}\) in \(V\):

    \[\{(RS)T\}(\mathbf{v}) = (RS)\left[T(\mathbf{v})\right] = R\{S\left[T(\mathbf{v})\right]\} = R\{(ST)(\mathbf{v})\} = \{R(ST)\}(\mathbf{v}) \nonumber \]

    Up to this point, composition seems to have no connection with isomorphisms. In fact, the two notions are closely related.

    Theorem \(\PageIndex{5}\)

    Let \(V\) and \(W\) be finite dimensional vector spaces. The following conditions are equivalent for a linear transformation \(T : V \to W\).

    1. \(T\) is an isomorphism.
    2. There exists a linear transformation \(S : W \to V\) such that \(ST = 1_{V}\) and \(TS = 1_{W}\).

    Moreover, in this case \(S\) is also an isomorphism and is uniquely determined by \(T\):

    \[\mbox{If } \mathbf{w} \mbox{ in } W \mbox{ is written as } \mathbf{w} = T(\mathbf{v}), \mbox{ then } S(\mathbf{w}) = \mathbf{v}. \nonumber \]

    Proof.

(1) \(\Rightarrow\) (2). If \(B = \{\mathbf{e}_{1}, \dots, \mathbf{e}_{n}\}\) is a basis of \(V\), then \(D = \{T(\mathbf{e}_{1}), \dots, T(\mathbf{e}_{n})\}\) is a basis of \(W\) by Theorem \(\PageIndex{1}\). Hence (using Theorem 7.1.3), define a linear transformation \(S : W \to V\) by

    \[\label{eq:proofthm7_3_5} S[T(\mathbf{e}_i)] = \mathbf{e}_i \quad \mbox{for each } i \]

Since \(\mathbf{e}_{i} = 1_{V}(\mathbf{e}_{i})\), this gives \(ST = 1_{V}\) because \(ST\) and \(1_{V}\) agree on the basis \(B\). But applying \(T\) gives \(T\left[S\left[T(\mathbf{e}_{i})\right]\right] = T(\mathbf{e}_{i})\) for each \(i\), so \(TS = 1_{W}\) (again because \(TS\) and \(1_{W}\) agree on a basis, this time the basis \(D\) of \(W\)).

    (2) \(\Rightarrow\) (1). If \(T(\mathbf{v}) = T(\mathbf{v}_{1})\), then \(S\left[T(\mathbf{v})\right] = S\left[T(\mathbf{v}_{1})\right]\). Because \(ST = 1_{V}\) by (2), this reads \(\mathbf{v} = \mathbf{v}_{1}\); that is, \(T\) is one-to-one. Given \(\mathbf{w}\) in \(W\), the fact that \(TS = 1_{W}\) means that \(\mathbf{w} = T\left[S(\mathbf{w})\right]\), so \(T\) is onto.

Finally, \(S\) is uniquely determined by the condition \(ST = 1_{V}\) because this condition implies that \(S\left[T(\mathbf{e}_{i})\right] = \mathbf{e}_{i}\) for each \(i\). \(S\) is an isomorphism because it carries the basis \(D\) to \(B\). As to the last assertion, given \(\mathbf{w}\) in \(W\), write \(\mathbf{w} = r_{1}T(\mathbf{e}_{1}) + \cdots + r_{n}T(\mathbf{e}_{n})\). Then \(\mathbf{w} = T(\mathbf{v})\), where \(\mathbf{v} = r_{1}\mathbf{e}_{1} + \cdots + r_{n}\mathbf{e}_{n}\), so \(S(\mathbf{w}) = \mathbf{v}\) as required.

Given an isomorphism \(T : V \to W\), the unique isomorphism \(S : W \to V\) satisfying condition (2) of Theorem \(\PageIndex{5}\) is called the inverse of \(T\) and is denoted by \(T^{-1}\). Hence \(T : V \to W\) and \(T^{-1} : W \to V\) are related by the fundamental identities:

    \[T^{-1}\left[T(\mathbf{v})\right] = \mathbf{v} \mbox{ for all } \mathbf{v} \mbox{ in } V \quad \mbox{ and } \quad T\left[T^{-1}(\mathbf{w})\right] = \mathbf{w} \mbox{ for all } \mathbf{w} \mbox{ in } W \nonumber \]

In other words, each of \(T\) and \(T^{-1}\) reverses the action of the other. In particular, the equation \(S\left[T(\mathbf{e}_{i})\right] = \mathbf{e}_{i}\) in the proof of Theorem \(\PageIndex{5}\) shows how to define \(T^{-1}\) using the image of a basis under the isomorphism \(T\). Here is an example.

    Example \(\PageIndex{6}\)

    Define \(T :\mathbf{P}_{1} \to\mathbf{P}_{1}\) by \(T(a + bx) = (a - b) + ax\). Show that \(T\) has an inverse, and find the action of \(T^{-1}\).

    Solution

    The transformation \(T\) is linear (verify). Because \(T(1) = 1 + x\) and \(T(x) = -1\), \(T\) carries the basis \(B = \{1, x\}\) to the basis \(D = \{1 + x, -1\}\). Hence \(T\) is an isomorphism, and \(T^{-1}\) carries \(D\) back to \(B\), that is,

    \[T^{-1}(1 + x) = 1 \quad \mbox{and} \quad T^{-1}(-1) = x \nonumber \]

    Because \(a + bx = b(1 + x) + (b - a)(-1)\), we obtain

    \[T^{-1}(a + bx) = bT^{-1}(1 + x) + (b - a)T^{-1}(-1) = b + (b -a)x \nonumber \]
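As a check, applying \(T\) recovers the identity:

\[T\left[T^{-1}(a + bx)\right] = T\left[b + (b - a)x\right] = (b - (b - a)) + bx = a + bx \nonumber \]

so \(TT^{-1} = 1_{\mathbf{P}_1}\), as Theorem \(\PageIndex{5}\) requires.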

    Sometimes the action of the inverse of a transformation is apparent.

    Example \(\PageIndex{7}\)

    If \(B = \{\mathbf{b}_{1}, \mathbf{b}_{2}, \dots, \mathbf{b}_{n}\}\) is a basis of a vector space \(V\), the coordinate transformation \(C_{B} : V \to \mathbb{R}^n\) is an isomorphism defined by

    \[C_B(v_1\mathbf{b}_1 + v_2\mathbf{b}_2 + \cdots + v_n\mathbf{b}_n) = (v_1, v_2, \dots, v_n)^T \nonumber \]

    The way to reverse the action of \(C_{B}\) is clear: \(C_{B}^{-1} : \mathbb{R}^n \to V\) is given by

\[C_B^{-1}(v_1, v_2, \dots, v_n) = v_1\mathbf{b}_1 + v_2\mathbf{b}_2 + \cdots + v_n\mathbf{b}_n \quad \mbox{for all } v_i \mbox{ in } \mathbb{R} \nonumber \]
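For the basis \(B = \{1, x\}\) of \(\mathbf{P}_{1}\), this gives

\[C_B^{-1}(a, b) = a \cdot 1 + b \cdot x = a + bx \nonumber \]

which is exactly the identification of \(\mathbb{R}^2\) with \(\mathbf{P}_{1}\) made at the beginning of this section.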

Condition (2) in Theorem \(\PageIndex{5}\) characterizes the inverse of a linear transformation \(T : V \to W\) as the (unique) transformation \(S : W \to V\) that satisfies \(ST = 1_{V}\) and \(TS = 1_{W}\). This often determines the inverse.

    Example \(\PageIndex{8}\)

    Define \(T : \mathbb{R}^3 \to \mathbb{R}^3\) by \(T(x, y, z) = (z, x, y)\). Show that \(T^{3} = 1_{\mathbb{R}^3}\), and hence find \(T^{-1}\).

    Solution

    \(T^{2}(x, y, z) = T\left[T(x, y, z)\right] = T(z, x, y) = (y, z, x)\). Hence

    \[T^3(x, y, z) = T\left[T^2(x, y, z)\right] = T(y, z, x) = (x, y, z) \nonumber \]

    Since this holds for all \((x, y, z)\), it shows that \(T^3 = 1_{\mathbb{R}^3}\), so \(T(T^{2}) = 1_{\mathbb{R}^3} = (T^{2})T\). Thus \(T^{-1} = T^{2}\) by (2) of Theorem \(\PageIndex{5}\).
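Explicitly, then, the inverse cycles the entries in the opposite direction:

\[T^{-1}(x, y, z) = T^2(x, y, z) = (y, z, x) \nonumber \]

and indeed \(T(y, z, x) = (x, y, z)\), confirming that \(TT^{-1} = 1_{\mathbb{R}^3}\).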

    Example \(\PageIndex{9}\)

    Define \(T :\mathbf{P}_{n} \to \mathbb{R}^{n+1}\) by \(T(p) = (p(0), p(1), \dots, p(n))\) for all \(p\) in \(\mathbf{P}_{n}\). Show that \(T^{-1}\) exists.

    Solution

The verification that \(T\) is linear is left to the reader. If \(T(p) = 0\), then \(p(k) = 0\) for \(k = 0, 1, \dots, n\), so \(p\) has \(n + 1\) distinct roots. Because \(p\) has degree at most \(n\), this implies that \(p = 0\) is the zero polynomial (a nonzero polynomial of degree at most \(n\) has at most \(n\) distinct roots), and hence that \(T\) is one-to-one. But \(\dim \mathbf{P}_{n} = n + 1 = \dim \mathbb{R}^{n+1}\), so \(T\) is also onto by Theorem \(\PageIndex{3}\) and hence is an isomorphism. Thus \(T^{-1}\) exists by Theorem \(\PageIndex{5}\). Note that we have not given a description of the action of \(T^{-1}\); we have merely shown that such a description exists. To give it explicitly requires some ingenuity; one method involves the Lagrange interpolation expansion (Theorem 6.3.5).
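For instance, when \(n = 1\) the inverse is easy to give directly: here \(T(a + bx) = (a, a + b)\), so solving \(p(0) = c\) and \(p(1) = d\) for \(p = a + bx\) gives

\[T^{-1}(c, d) = c + (d - c)x \nonumber \]

For larger \(n\) the same system of conditions \(p(k) = c_k\) must be solved, which is what the Lagrange expansion accomplishes.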


    This page titled 6.10.3: Isomorphisms and Composition is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by W. Keith Nicholson (Lyryx Learning Inc.) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.