
9.9: The Matrix of a Linear Transformation


    Outcomes

    1. Find the matrix of a linear transformation with respect to general bases in vector spaces.

    You may recall from our study of \(\mathbb{R}^n\) that the matrix of a linear transformation depends on the bases chosen. This idea is explored further in this section, where the linear transformation now maps from one arbitrary vector space to another.

    Let \(T: V \mapsto W\) be an isomorphism where \(V\) and \(W\) are vector spaces. Recall from Lemma 9.7.2 that \(T\) maps a basis of \(V\) to a basis of \(W\). When discussing this lemma, we were not specific about what this basis looked like. In this section we will make such a distinction.

    Consider now an important definition.

    Definition \(\PageIndex{1}\): Coordinate Isomorphism

    Let \(V\) be a vector space with \(\mathrm{dim}(V)=n\), let \(B=\{ \vec{b}_1, \vec{b}_2, \ldots, \vec{b}_n \}\) be a fixed basis of \(V\), and let \(\{ \vec{e}_1, \vec{e}_2, \ldots, \vec{e}_n \}\) denote the standard basis of \(\mathbb{R}^n\). We define a transformation \(C_B:V\to\mathbb{R}^n\) by \[C_B(a_1\vec{b}_1 + a_2\vec{b}_2 + \cdots + a_n\vec{b}_n) = a_1\vec{e}_1 + a_2\vec{e}_2 + \cdots + a_n\vec{e}_n = \left [\begin{array}{c} a_1 \\ a_2 \\ \vdots \\ a_n \end{array}\right ].\nonumber \] Then \(C_B\) is a linear transformation such that \(C_B(\vec{b}_i)=\vec{e}_i\), \(1\leq i\leq n\).

    \(C_B\) is an isomorphism, called the coordinate isomorphism corresponding to \(B\).

    We continue with another related definition.

    Definition \(\PageIndex{2}\): Coordinate Vector

    Let \(V\) be a finite dimensional vector space with \(\mathrm{dim}(V)=n\), and let \(B=\{\vec{b}_1, \vec{b}_2, \ldots, \vec{b}_n\}\) be an ordered basis of \(V\) (meaning that the order in which the vectors are listed is taken into account). The coordinate vector of \(\vec{v}\) with respect to \(B\) is defined as \(C_B(\vec{v})\).

    Consider the following example.

    Example \(\PageIndex{1}\): Coordinate Vector

    Let \(V = \mathbb{P}_2\) and \(\vec{x} = -x^2 -2x + 4\). Find \(C_B(\vec{x})\) for the following bases \(B\):

    1. \(B = \left\{ 1, x, x^2 \right\}\)
    2. \(B = \left\{ x^2, x, 1 \right\}\)
    3. \(B = \left\{ x + x^2 , x , 4 \right\}\)
    Solution
    1. First, note the order of the basis is important. Now we need to find \(a_1, a_2, a_3\) such that \(\vec{x} = a_1 (1) + a_2 (x) + a_3(x^2)\), that is: \[-x^2 -2x + 4 = a_1 (1) + a_2 (x) + a_3(x^2)\nonumber \] Clearly the solution is \[\begin{aligned} a_1 &= 4 \\ a_2 &= -2 \\ a_3 &= -1\end{aligned}\] Therefore the coordinate vector is \[C_B(\vec{x}) = \left [ \begin{array}{r} 4 \\ -2 \\ -1 \end{array} \right ]\nonumber \]
    2. Again remember that the order of \(B\) is important. We proceed as above. We need to find \(a_1, a_2, a_3\) such that \(\vec{x} = a_1 (x^2) + a_2 (x) + a_3(1)\), that is: \[-x^2 -2x + 4 = a_1 (x^2) + a_2 (x) + a_3(1)\nonumber \] Here the solution is \[\begin{aligned} a_1 &= -1 \\ a_2 &= -2 \\ a_3 &= 4\end{aligned}\] Therefore the coordinate vector is \[C_B(\vec{x}) = \left [ \begin{array}{r} -1 \\ -2 \\ 4 \end{array} \right ]\nonumber \]
    3. Now we need to find \(a_1, a_2, a_3\) such that \(\vec{x} = a_1 (x + x^2) + a_2 (x) + a_3(4)\), that is: \[\begin{aligned} -x^2 -2x + 4 &= a_1 (x + x^2 ) + a_2 (x) + a_3(4)\\ &= a_1 (x^2) + (a_1 + a_2) (x) + a_3(4)\end{aligned}\] The solution is \[\begin{aligned} a_1 &= -1 \\ a_2 &= -1 \\ a_3 &= 1\end{aligned}\] and the coordinate vector is \[C_B(\vec{x})=\left[\begin{array}{r}-1\\-1\\1\end{array}\right]\nonumber\]
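
    The same coordinates can be found computationally by solving for the coefficients in the basis expansion. The following is a minimal sketch in Python (using sympy); the polynomial and the basis from part 3 are taken from this example, while all variable names are our own.

```python
import sympy as sp

x = sp.symbols('x')
p = -x**2 - 2*x + 4                    # the vector whose coordinate vector we want
B = [x + x**2, x, sp.Integer(4)]       # the ordered basis from part 3

# Solve p = a1*(x + x^2) + a2*x + a3*4 by matching coefficients of each power of x.
a = sp.symbols('a1:4')
residual = sp.Poly(p - sum(ai * bi for ai, bi in zip(a, B)), x)
sol = sp.solve(residual.all_coeffs(), a)
print([sol[ai] for ai in a])           # expected: [-1, -1, 1], i.e. C_B(p)
```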

    Given that the coordinate transformation \(C_B:V\to\mathbb{R}^n\) is an isomorphism, its inverse exists.

    Theorem \(\PageIndex{1}\): Inverse of the Coordinate Isomorphism

    Let \(V\) be a finite dimensional vector space with dimension \(n\) and ordered basis \(B=\{\vec{b}_1, \vec{b}_2, \ldots, \vec{b}_n\}\). Then \(C_B:V\to\mathbb{R}^n\) is an isomorphism whose inverse, \[C_B^{-1}:\mathbb{R}^n\to V,\nonumber \] is given by \[C_B^{-1}\left(\left [\begin{array}{c} a_1 \\ a_2 \\ \vdots \\ a_n \end{array}\right ]\right) = a_1\vec{b}_1 + a_2\vec{b}_2 + \cdots + a_n\vec{b}_n ~\mbox{ for all }~ \left [\begin{array}{c} a_1 \\ a_2 \\ \vdots \\ a_n \end{array}\right ] \in\mathbb{R}^n.\nonumber \]

    We now discuss the main result of this section: how to represent a linear transformation with respect to different bases.

    Let \(V\) and \(W\) be finite dimensional vector spaces, and suppose

    • \(\dim(V)=n\) and \(B_1=\{\vec{b}_1, \vec{b}_2, \ldots, \vec{b}_n\}\) is an ordered basis of \(V\);
    • \(\dim(W)=m\) and \(B_2\) is an ordered basis of \(W\).

    Let \(T:V\to W\) be a linear transformation. If \(V=\mathbb{R}^n\) and \(W=\mathbb{R}^m\), then we can find a matrix \(A\) so that \(T_A=T\). For arbitrary vector spaces \(V\) and \(W\), our goal is to represent \(T\) as a matrix; that is, to find a matrix \(A\) so that \(T_A:\mathbb{R}^n\to\mathbb{R}^m\) and \(T_A=C_{B_2}TC_{B_1}^{-1}\).

    To find the matrix \(A\):

    \[T_A=C_{B_2}TC_{B_1}^{-1}~\mbox{ implies that }~ T_AC_{B_1}=C_{B_2}T,\nonumber \] and thus for any \(\vec{v}\in V\), \[C_{B_2}[T(\vec{v})] = T_A[C_{B_1}(\vec{v})] =AC_{B_1}(\vec{v}).\nonumber \]

    Since \(C_{B_1}(\vec{b}_j)=\vec{e}_j\) for each \(\vec{b}_j\in B_1\), \(AC_{B_1}(\vec{b}_j)=A\vec{e}_j\), which is simply the \(j^{th}\) column of \(A\). Therefore, the \(j^{th}\) column of \(A\) is equal to \(C_{B_2}[T(\vec{b}_j)]\).

    The matrix of \(T\) corresponding to the ordered bases \(B_1\) and \(B_2\) is denoted \(M_{B_2B_1}(T)\) and is given by \[M_{B_2B_1}(T)= \left [\begin{array}{cccc} C_{B_2} [ T(\vec{b}_1)] & C_{B_2}[T(\vec{b}_2) ] & \cdots & C_{B_2}[T(\vec{b}_n) ] \end{array}\right ].\nonumber \] This result is given in the following theorem.

    Theorem \(\PageIndex{2}\)

    Let \(V\) and \(W\) be vector spaces of dimension \(n\) and \(m\) respectively, with \(B_1=\{\vec{b}_1, \vec{b}_2, \ldots, \vec{b}_n\}\) an ordered basis of \(V\) and \(B_2\) an ordered basis of \(W\). Suppose \(T:V\to W\) is a linear transformation. Then the unique matrix \(M_{B_2B_1}(T)\) of \(T\) corresponding to \(B_1\) and \(B_2\) is given by \[M_{B_2B_1}(T)= \left [\begin{array}{cccc} C_{B_2}[T(\vec{b}_1)] & C_{B_2}[T(\vec{b}_2)] & \cdots & C_{B_2}[T(\vec{b}_n)] \end{array}\right ].\nonumber \]

    This matrix satisfies \(C_{B_2}[T(\vec{v})]=M_{B_2B_1}(T)C_{B_1}(\vec{v})\) for all \(\vec{v}\in V\).
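
    In computational terms, the theorem is a recipe: apply \(T\) to each vector of \(B_1\), compute the \(B_2\)-coordinates of the result, and use these as columns. The following is a minimal Python sketch of this recipe; it assumes \(T\) is supplied as a function on \(B_1\)-coordinate vectors and that a helper coords_B2 computes \(B_2\)-coordinates (both names are our own, introduced only for illustration).

```python
import numpy as np

def matrix_of_T(T_on_B1_coords, coords_B2, n):
    """Assemble M_{B2B1}(T): column j is C_{B2}[T(b_j)], and C_{B1}(b_j) = e_j."""
    columns = []
    for j in range(n):
        e_j = np.zeros(n)
        e_j[j] = 1.0                              # coordinate vector of b_j
        columns.append(coords_B2(T_on_B1_coords(e_j)))
    return np.column_stack(columns)
```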

    We demonstrate this content in the following examples.

    Example \(\PageIndex{2}\): Matrix of a Linear Transformation

    Let \(T: \mathbb{P}_3 \mapsto \mathbb{R}^4\) be an isomorphism defined by \[T( ax^3 + bx^2 + cx + d) = \left [ \begin{array}{c} a + b \\ b - c \\ c + d \\ d + a \end{array} \right ]\nonumber \]

    Suppose \(B_1 = \left\{ x^3, x^2, x, 1 \right\}\) is an ordered basis of \(\mathbb{P}_3\) and \[B_2 = \left\{ \left [ \begin{array}{r} 1 \\ 0 \\ 1 \\ 0 \end{array} \right ], \left [ \begin{array}{r} 0 \\ 1 \\ 0 \\ 0 \end{array} \right ], \left [ \begin{array}{r} 0 \\ 0 \\ -1 \\ 0 \end{array} \right ], \left [ \begin{array}{r} 0 \\ 0 \\ 0 \\ 1 \end{array} \right ] \right\}\nonumber \] be an ordered basis of \(\mathbb{R}^4\). Find the matrix \(M_{B_2B_1}(T)\).

    Solution

    To find \(M_{B_2B_1}(T)\), we use the formula above: \[M_{B_2B_1}(T) = \left [ \begin{array}{cccc} C_{B_2}[T(x^3)] & C_{B_2}[T(x^2)] & C_{B_2}[T(x)] & C_{B_2}[T(1)] \end{array} \right ]\nonumber \] First we find the result of applying \(T\) to the basis \(B_1\). \[T(x^3) = \left [ \begin{array}{c} 1 \\ 0 \\ 0 \\ 1 \end{array} \right ], T(x^2) = \left [ \begin{array}{c} 1 \\ 1 \\ 0 \\ 0 \end{array} \right ], T(x) = \left [ \begin{array}{c} 0 \\ -1 \\ 1 \\ 0 \end{array} \right ], T(1) = \left [ \begin{array}{c} 0 \\ 0 \\ 1 \\ 1 \end{array} \right ]\nonumber \]

    Next we apply the coordinate isomorphism \(C_{B_2}\) to each of these vectors. We will show the first in detail. To find \(C_{B_2}[T(x^3)]\), we need \(a_1, a_2, a_3, a_4\) such that \[\left [ \begin{array}{c} 1 \\ 0 \\ 0 \\ 1 \end{array} \right ] = a_1 \left [ \begin{array}{r} 1 \\ 0 \\ 1 \\ 0 \end{array} \right ] + a_2 \left [ \begin{array}{r} 0 \\ 1 \\ 0 \\ 0 \end{array} \right ] + a_3 \left [ \begin{array}{r} 0 \\ 0 \\ -1 \\ 0 \end{array} \right ] + a_4 \left [ \begin{array}{r} 0 \\ 0 \\ 0 \\ 1 \end{array} \right ]\nonumber \] This implies that \[\begin{aligned} a_1 &= 1 \\ a_2 &= 0 \\ a_1 - a_3 &= 0 \\ a_4 &= 1 \end{aligned}\] which has solution \[\begin{aligned} a_1 &= 1 \\ a_2 &= 0 \\ a_3 &= 1 \\ a_4 &= 1 \end{aligned}\]

    Therefore \(C_{B_2} [T(x^3)] = \left [ \begin{array}{r} 1 \\ 0 \\ 1 \\ 1 \end{array} \right ]\).

    You can verify that the following are true. \[C_{B_2}[T(x^2)] = \left [ \begin{array}{r} 1 \\ 1 \\ 1 \\ 0 \end{array} \right ], C_{B_2}[T(x)] = \left [ \begin{array}{r} 0 \\ -1 \\ -1 \\ 0 \end{array} \right ], C_{B_2}[T(1)] = \left [ \begin{array}{r} 0 \\ 0 \\ -1 \\ 1 \end{array} \right ]\nonumber \]

    Using these vectors as the columns of \(M_{B_2B_1}(T)\) we have \[M_{B_2B_1}(T) = \left [ \begin{array}{rrrr} 1 & 1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 1 & 1 & -1 & -1 \\ 1 & 0 & 0 & 1 \end{array} \right ]\nonumber \]
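
    As an optional numerical check of this example (a sketch in Python with numpy; the helper names are our own), we can encode \(T\) on \(B_1\)-coordinates \([a,b,c,d]\), compute \(B_2\)-coordinates by solving a linear system, and assemble the matrix column by column as in the recipe above.

```python
import numpy as np

# T on B1-coordinates [a, b, c, d], where B1 = {x^3, x^2, x, 1}.
T = lambda v: np.array([v[0] + v[1], v[1] - v[2], v[2] + v[3], v[3] + v[0]])

# The vectors of B2 as the columns of P, so C_{B2}(w) is the solution y of P y = w.
P = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [1, 0, -1, 0],
              [0, 0, 0, 1]], dtype=float)
coords_B2 = lambda w: np.linalg.solve(P, w)

M = np.column_stack([coords_B2(T(e)) for e in np.eye(4)])
print(M)   # expected to agree with M_{B2B1}(T) above
```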

    The next example demonstrates that this method can be used to solve different types of problems. We will examine the above example and see if we can work backwards to determine the action of \(T\) from the matrix \(M_{B_2B_1}(T)\).

    Example \(\PageIndex{3}\): Finding the Action of a Linear Transformation

    Let \(T: \mathbb{P}_3 \mapsto \mathbb{R}^4\) be an isomorphism with \[M_{B_2B_1}(T) = \left [ \begin{array}{rrrr} 1 & 1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 1 & 1 & -1 & -1 \\ 1 & 0 & 0 & 1 \end{array} \right ],\nonumber \] where \(B_1 = \left\{ x^3, x^2, x, 1 \right\}\) is an ordered basis of \(\mathbb{P}_3\) and \[B_2 = \left\{ \left [ \begin{array}{r} 1 \\ 0 \\ 1 \\ 0 \end{array} \right ], \left [ \begin{array}{r} 0 \\ 1 \\ 0 \\ 0 \end{array} \right ], \left [ \begin{array}{r} 0 \\ 0 \\ -1 \\ 0 \end{array} \right ], \left [ \begin{array}{r} 0 \\ 0 \\ 0 \\ 1 \end{array} \right ] \right\}\nonumber \] is an ordered basis of \(\mathbb{R}^4\). If \(p(x) = ax^3 + bx^2 + cx + d\), find \(T(p(x))\).

    Solution

    Recall that \(C_{B_2}[T(p(x))] = M_{B_2B_1}(T) C_{B_1}(p(x))\). Then we have \[\begin{aligned} C_{B_2}[T(p(x))] &= M_{B_2B_1}(T) C_{B_1}(p(x)) \\ &= \left [ \begin{array}{rrrr} 1 & 1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 1 & 1 & -1 & -1 \\ 1 & 0 & 0 & 1 \end{array} \right ] \left [ \begin{array}{c} a \\ b \\ c \\ d \end{array} \right ] \\ &= \left [ \begin{array}{c} a + b \\ b - c \\ a + b - c - d\\ a + d \end{array} \right ]\end{aligned}\]

    Therefore \[\begin{aligned} T(p(x)) &= C^{-1}_{B_2} \left [ \begin{array}{c} a + b \\ b - c \\ a + b - c - d\\ a + d \end{array} \right ] \\ &= (a+b) \left [ \begin{array}{r} 1 \\ 0 \\ 1 \\ 0 \end{array} \right ] + (b-c) \left [ \begin{array}{r} 0 \\ 1 \\ 0 \\ 0 \end{array} \right ] + (a+b-c-d) \left [ \begin{array}{r} 0 \\ 0 \\ -1 \\ 0 \end{array} \right ] + (a+d) \left [ \begin{array}{r} 0 \\ 0 \\ 0 \\ 1 \end{array} \right ] \\ &= \left [ \begin{array}{c} a + b \\ b - c \\ c + d \\ a +d \end{array} \right ]\end{aligned}\]

    You can verify that this was the definition of \(T(p(x))\) given in the previous example.
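
    A short symbolic check of this computation (a sketch using sympy; all names are our own): multiplying by the matrix whose columns are the \(B_2\) basis vectors applies \(C_{B_2}^{-1}\).

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
M = sp.Matrix([[1, 1, 0, 0],
               [0, 1, -1, 0],
               [1, 1, -1, -1],
               [1, 0, 0, 1]])          # M_{B2B1}(T)
P = sp.Matrix([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [1, 0, -1, 0],
               [0, 0, 0, 1]])          # columns are the B2 basis vectors

coords = M * sp.Matrix([a, b, c, d])   # C_{B2}[T(p(x))]
print((P * coords).expand())           # expected: [a + b, b - c, c + d, a + d]
```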

    We can also find the matrix of the composite of multiple transformations.

    Theorem \(\PageIndex{3}\): Matrix of Composition

    Let \(V\), \(W\) and \(U\) be finite dimensional vector spaces, and suppose \(T : V \mapsto W\) and \(S: W \mapsto U\) are linear transformations. Suppose \(V\), \(W\) and \(U\) have ordered bases \(B_1\), \(B_2\) and \(B_3\), respectively. Then the matrix of the composite transformation \(S \circ T\) (or \(ST\)) is given by \[M_{B_3B_1}(ST)=M_{B_3B_2}(S) M_{B_2B_1}(T).\nonumber \]
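
    In coordinates, composition is therefore just matrix multiplication. A quick numerical illustration (a sketch with small hypothetical matrices, not taken from the text):

```python
import numpy as np

M_T = np.array([[1., 2.], [0., 1.], [3., 0.]])    # hypothetical M_{B2B1}(T), size 3 x 2
M_S = np.array([[1., 0., 1.], [2., 1., 0.]])      # hypothetical M_{B3B2}(S), size 2 x 3

v = np.array([1., -1.])                           # C_{B1}(v) for some v in V
# Applying T and then S on coordinates agrees with multiplying by the product matrix.
assert np.allclose(M_S @ (M_T @ v), (M_S @ M_T) @ v)
print(M_S @ M_T)                                  # M_{B3B1}(ST)
```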

    The next important theorem gives a useful characterization of when \(T\) is an isomorphism.

    Theorem \(\PageIndex{4}\): Isomorphism

    Let \(V\) and \(W\) be vector spaces such that both have dimension \(n\) and let \(T: V \mapsto W\) be a linear transformation. Suppose \(B_1\) is an ordered basis of \(V\) and \(B_2\) is an ordered basis of \(W\).

    Then the following are equivalent: \(M_{B_2B_1}(T)\) is invertible for all such ordered bases \(B_1\) and \(B_2\); \(M_{B_2B_1}(T)\) is invertible for some such ordered bases \(B_1\) and \(B_2\); and \(T\) is an isomorphism.

    If \(T\) is an isomorphism, the matrix \(M_{B_2B_1}(T)\) is invertible and its inverse is given by \(\left [ M_{B_2B_1}(T) \right ] ^{-1} = M_{B_1B_2}(T^{-1})\).

    Consider the following example.

    Example \(\PageIndex{4}\)

    Suppose \(T:\mathbb{P}_3\to\mathbb{M}_{22}\) is a linear transformation defined by \[T(ax^3+bx^2+cx+d)= \left [\begin{array}{cc} a+d & b-c \\ b+c & a-d \end{array}\right ]\nonumber \] for all \(ax^3+bx^2+cx+d\in\mathbb{P}_3\). Let \(B_1=\{ x^3, x^2, x, 1\}\) and \[B_2=\left\{ \left [\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right ], \left [\begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array}\right ], \left [\begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array}\right ], \left [\begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array}\right ]\right\}\nonumber \] be ordered bases of \(\mathbb{P}_3\) and \(\mathbb{M}_{22}\), respectively.

    1. Find \(M_{B_2B_1}(T)\).
    2. Verify that \(T\) is an isomorphism by proving that \(M_{B_2B_1}(T)\) is invertible.
    3. Find \(M_{B_1B_2}(T^{-1})\), and verify that \(M_{B_1B_2}(T^{-1}) = \left [ M_{B_2B_1}(T)\right ]^{-1}\).
    4. Use \(M_{B_1B_2}(T^{-1})\) to find \(T^{-1}\).
    Solution
    1. \[\begin{aligned} M_{B_2B_1}(T) & = \left [ \begin{array}{cccc} C_{B_2}[T(x^3)] & C_{B_2}[T(x^2)] & C_{B_2}[T(x)] & C_{B_2}[T(1)] \end{array}\right ] \\ & = \left [ \begin{array}{cccc} C_{B_2}\left [\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right ] & C_{B_2}\left [\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right ] & C_{B_2}\left [\begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array}\right ] & C_{B_2}\left [\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right ] \end{array}\right ] \\ & = \left [\begin{array}{rrrr} 1 & 0 & 0 & 1 \\ 0 & 1 & -1 & 0 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & -1 \end{array}\right ]\end{aligned}\]
    2. \(\det(M_{B_2B_1}(T))=-4\neq 0\), so the matrix is invertible, and hence \(T\) is an isomorphism.
    3. \[T^{-1}\left [\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right ] = x^3, \ T^{-1}\left [\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right ]= x^2, \ T^{-1}\left [\begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array}\right ]= x, \ T^{-1}\left [\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right ]=1,\nonumber \] so, by linearity of \(T^{-1}\), \[T^{-1}\left [\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right ] = \frac{1+x^3}{2}, \ T^{-1}\left [\begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array}\right ]= \frac{x^2-x}{2},\nonumber \] \[T^{-1}\left [\begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array}\right ] = \frac{x+x^2}{2}, \ T^{-1}\left [\begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array}\right ]= \frac{x^3-1}{2}.\nonumber \] Therefore, \[M_{B_1B_2}(T^{-1})=\frac{1}{2}\left [\begin{array}{rrrr} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & -1 & 1 & 0 \\ 1 & 0 & 0 & -1 \end{array}\right ]\nonumber \] You should verify that \(M_{B_2B_1}(T) M_{B_1B_2}(T^{-1}) = I_4\). From this it follows that \([M_{B_2B_1}(T)]^{-1}= M_{B_1B_2}(T^{-1})\).
    4. \[\begin{aligned} C_{B_1}\left(T^{-1}\left [\begin{array}{cc} p & q \\ r & s \end{array}\right ]\right) & = M_{B_1B_2}(T^{-1}) C_{B_2}\left( \left [\begin{array}{cc} p & q \\ r & s \end{array}\right ]\right)\\ T^{-1}\left [\begin{array}{cc} p & q \\ r & s \end{array}\right ] & = C_{B_1}^{-1}\left(M_{B_1B_2}(T^{-1}) C_{B_2}\left( \left [\begin{array}{cc} p & q \\ r & s \end{array}\right ]\right)\right)\\ & = C_{B_1}^{-1}\left( \frac{1}{2}\left [\begin{array}{rrrr} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & -1 & 1 & 0 \\ 1 & 0 & 0 & -1 \end{array}\right ] \left [\begin{array}{c} p \\ q\\ r\\ s\end{array}\right ]\right) \\ & = C_{B_1}^{-1}\left(\frac{1}{2}\left [\begin{array}{c} p+s \\ q+r \\ r-q \\ p-s \end{array}\right ]\right) \\ & = \frac{1}{2}(p+s)x^3 +\frac{1}{2}(q+r)x^2 +\frac{1}{2}(r-q)x + \frac{1}{2}(p-s).\end{aligned}\]
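
    The claims in parts 2 and 3 can also be checked numerically; the following is a minimal sketch with numpy.

```python
import numpy as np

M = np.array([[1, 0, 0, 1],
              [0, 1, -1, 0],
              [0, 1, 1, 0],
              [1, 0, 0, -1]], dtype=float)         # M_{B2B1}(T)
M_inv = 0.5 * np.array([[1, 0, 0, 1],
                        [0, 1, 1, 0],
                        [0, -1, 1, 0],
                        [1, 0, 0, -1]])            # M_{B1B2}(T^{-1})

print(np.linalg.det(M))                            # approximately -4, nonzero, so T is an isomorphism
print(np.allclose(M @ M_inv, np.eye(4)))           # True: [M_{B2B1}(T)]^{-1} = M_{B1B2}(T^{-1})
```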

    This page titled 9.9: The Matrix of a Linear Transformation is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Ken Kuttler (Lyryx) via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.