
Macdonald Polynomials


    In 1988 Macdonald introduced a new class of symmetric functions, depending on two parameters \(q\) and \(t\), in the Séminaire Lotharingien de Combinatoire.

    His definition requires

    1. Orthogonality with respect to a new scalar product \(\langle\cdot,\cdot\rangle_{q,t}\)
    2. Lower unitriangularity with respect to the monomial symmetric functions

    The new scalar product in our language is

    \[\langle f,g\rangle _{q,t} = \left\langle f,\, g\left[\frac{1-q}{1-t}X\right]\right\rangle\]

    Since plethystic substitution by \(\frac{1-q}{1-t}\) is self-adjoint with respect to \(\langle\cdot,\cdot\rangle\), this definition is symmetric in \(f\) and \(g\).
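    For example (a quick check, not spelled out in the original): since \(p_k\left[\frac{1-q}{1-t}X\right]=\frac{1-q^k}{1-t^k}\,p_k[X]\), on power sums the definition gives

    \[\langle p_\lambda,p_\mu\rangle_{q,t} = \delta_{\lambda\mu}\, z_\lambda \prod_i \frac{1-q^{\lambda_i}}{1-t^{\lambda_i}},\]

    which is Macdonald's original definition of the scalar product.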

    Remark: By the Cauchy identity we have that \(u\) and \(v\) are \(\langle \cdot, \cdot\rangle_{q,t}\)-dual if and only if \(u\) and \(v\left[\frac{1-q}{1-t}X\right]\) are \(\langle \cdot, \cdot\rangle\)-dual. So it suffices to check either of the equations

    \[\Omega[XY]=\displaystyle \sum_\lambda u_\lambda[X]v_\lambda[Y\frac{1-q}{1-t}]\qquad \Omega[XY\frac{1-t}{1-q}]=\displaystyle \sum_\lambda u_\lambda[X]v_\lambda[Y]\]

    Remark: The non-plethystic expression for \(\Omega\left[XY\frac{1-t}{1-q}\right]\) is

    \[\prod_{i,j}\frac{(tx_iy_j;q)_{\infty}}{ (x_iy_j;q)_{\infty} } \qquad (a;q)_\infty := \prod_{k=0}^\infty (1-aq^k)\]
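    One way to see this (a short check, assuming as in the rest of these notes that \(\Omega[Z]=\exp\left(\sum_{n\geq 1}\frac{p_n[Z]}{n}\right)\)): since \(p_n\left[XY\frac{1-t}{1-q}\right]=p_n[X]p_n[Y]\frac{1-t^n}{1-q^n}=p_n[X]p_n[Y](1-t^n)\sum_{k\geq 0}q^{nk}\), we get

    \[\Omega\left[XY\frac{1-t}{1-q}\right]=\exp\left(\sum_{i,j}\sum_{k\geq 0}\sum_{n\geq 1}\frac{(x_iy_jq^k)^n-(tx_iy_jq^k)^n}{n}\right)=\prod_{i,j}\prod_{k\geq 0}\frac{1-tx_iy_jq^k}{1-x_iy_jq^k}.\]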

    The strategy for constructing the Macdonald polynomials is to realize them as eigenfunctions of an operator.

    Define \[\Delta'f = [z^0]f[X-(1-q)/z]\Omega[zX(1-t^{-1})]\] where \([z^0]\) is the operator that picks out the coefficient of \(z^0\) (i.e. the constant term) in the expansion in powers of \(z\).

    Also define \[B_\mu(q,t)=\displaystyle \sum_{(i,j)\in \mu} t^iq^j\] where the cells \((i,j)\) of the diagram of \(\mu\) are indexed starting from \((0,0)\); so for example for \(\mu=(4,2,1)\) we have \(B_\mu(q,t) = (1+q+q^2+q^3) + t(1+q) +t^2\). With the convention that \(\mu_i = 0\) for all \(i>l(\mu)\), we have the alternative description

    \[B_{\mu}(q,t) = \displaystyle \sum_{i=1}^\infty \dfrac{1-q^{\mu_i}}{1-q}\,t^{i-1} \]
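    As a quick sanity check (not part of the original notes; a minimal sketch in Python with sympy, helper names are ours), one can verify that the two descriptions of \(B_\mu(q,t)\) agree on the example \(\mu=(4,2,1)\) above:

    import sympy as sp

    q, t = sp.symbols('q t')

    def B_cells(mu):
        # sum of t^i q^j over the cells (i, j) of mu, indexed from (0, 0)
        return sp.expand(sum(t**i * q**j for i, row in enumerate(mu) for j in range(row)))

    def B_rows(mu):
        # sum over rows: t^(i-1) * (1 - q^(mu_i)) / (1 - q), rows indexed from 1
        return sp.expand(sum(t**(i - 1) * sp.cancel((1 - q**m) / (1 - q))
                             for i, m in enumerate(mu, start=1)))

    mu = (4, 2, 1)
    assert sp.simplify(B_cells(mu) - B_rows(mu)) == 0
    print(B_cells(mu))   # matches (1 + q + q^2 + q^3) + t(1 + q) + t^2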

    The Macdonald polynomial \(P_\mu(X;q,t)\) is the eigenfunction of the operator \(\Delta'\)

    \[ \Delta'P_\mu = \left( 1 - (1-q)(1-t^{-1})B_{\mu}(q,t^{-1})\right)P_\mu\] normalized so that \[P_\mu = m_\mu + \sum_{\lambda<\mu} c_{\lambda\mu}m_\lambda\]
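    As a minimal sanity check (not carried out in the original), take \(\mu=(1)\), so that \(P_{(1)}=m_{(1)}=p_1\) and \(B_{(1)}(q,t^{-1})=1\). Since \(p_1[X-(1-q)/z]=p_1[X]-(1-q)z^{-1}\) and \(\Omega[zX(1-t^{-1})]=1+(1-t^{-1})p_1[X]\,z+O(z^2)\), taking the constant term in \(z\) gives

    \[\Delta' p_1 = p_1 - (1-q)(1-t^{-1})p_1=\left(1-(1-q)(1-t^{-1})B_{(1)}(q,t^{-1})\right)p_1,\]

    in agreement with the eigenvalue formula above.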

    A proposition we will use below:

    Proposition: A symmetric function \(f(X;q,t)\) is an eigenfunction of \(\Delta'\) with eigenvalue \(\alpha(q,t^{-1})\) if and only if \(f\left[X/(1-t^{-1});q,t^{-1}\right]\) is an eigenfunction of \(\Delta\) with eigenvalue \(\alpha(q,t)\). Here \[\Delta f = [z^0]f[X+(1-q)(1-t)/z]\Omega[-zX]\]

    Proof: It is straightforward to check that \[\Delta\left(f\left[X/(1-t^{-1});q,t^{-1}\right]\right)=(\Delta'f)\left[X/(1-t^{-1});q,t^{-1}\right]:\] the two sides are equal to \[[z^0]\, f\left[\frac{X}{1-t^{-1}}-\frac{t(1-q)}{z};q,t^{-1}\right]\Omega[-zX] \qquad \textrm{and}\qquad [z^0]\,f\left[\frac{X}{1-t^{-1}}-\frac{1-q}{z};q,t^{-1}\right]\Omega[-tzX]\] respectively (using \((1-t)/(1-t^{-1})=-t\)), and these agree because rescaling \(z\to tz\) does not change the constant term. The claim follows, since the substitution \(t\to t^{-1}\) turns the eigenvalue \(\alpha(q,t^{-1})\) into \(\alpha(q,t)\).

    Our main job is to show the Macdonald symmetric functions exist and are well-defined. First we settle the existence of such eigenfunctions with the lower triangularity property.


    Lower triangularity of Macdonald symmetric functions

    Proposition: \(\Delta'\) is lower triangular with respect to the basis of monomial symmetric functions. More precisely,

    \[\Delta' m_{\mu} = \left( 1-(1-q)(1-t^{-1})B_{\mu}(q,t^{-1})\right) m_{\mu} + \sum_{\lambda<\mu} b_{\lambda\mu}m_\lambda \]

    Proof: A good first step is to go back to the Schur function section and read about the Bernstein operators to get a feeling for what's going on; this proof follows the same ideas.

    Since the Schur functions are lower unitriangular with respect to the monomial symmetric functions (remember the Kostka coefficients), it suffices to show

    \[\Delta'm_\mu = \sum_{\lambda\leq \mu} a_{\lambda\mu}s_\lambda\] where \(a_{\mu\mu}=1-(1-q)(1-t^{-1})B_\mu(q,t^{-1}) \)

    We work with finitely many variables \(X=x_1+\cdots+x_n \). Then by partial fraction expansion we have

    \[ \Omega\left[zX(1-t^{-1})\right] = \prod_{i=1}^n \dfrac{1-t^{-1}zx_i}{1-zx_i}\]

    \[= t^{-n} + \sum_{i=1}^n\dfrac{1}{1-zx_i}\dfrac{\prod_{j=1}^n(1-x_j/tx_i)}{\prod_{j\neq i}(1-x_j/x_i)}\]

    Note that in the numerator the factor with \(j=i\) equals \(1-t^{-1}\), and for the remaining factors we can write

    \[\dfrac{\prod_{j\neq i}(tx_i-x_j)}{\prod_{j\neq i}t(x_i-x_j)}=t^{1-n}\dfrac{v(X)_{x_i\to tx_i}}{v(X)}\] where \(v(X)=\prod_{i< j}(x_i-x_j)\) is the Vandermonde determinant, and \(v(X)_{x_i\to tx_i}\) means we replace the variable \(x_i\) with \(tx_i\) (note that we could also have used the plethystic notation \(v[X-(1-t)x_i]\)). This holds because all the factors of \(v(X)\) not involving \(x_i\) cancel in the ratio, and each factor involving \(x_i\) has the same sign in the numerator and the denominator, so we may write all of them with \(x_i\) first. So we have

    \[ \Omega\left[zX(1-t^{-1})\right] = t^{-n} + t^{1-n}(1-t^{-1})\sum_{i}\dfrac{1}{1-zx_i} \dfrac{v(X)_{(x_i\to tx_i)}}{v(X)} \]
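    As a quick check of this expansion (a sketch in Python with sympy, not part of the original notes), one can verify the identity for \(n=2\):

    import sympy as sp

    z, t, x1, x2 = sp.symbols('z t x1 x2')

    # left-hand side: Omega[zX(1 - 1/t)] = prod_i (1 - z*x_i/t)/(1 - z*x_i) for X = x1 + x2
    lhs = (1 - z*x1/t)*(1 - z*x2/t) / ((1 - z*x1)*(1 - z*x2))

    v = x1 - x2          # Vandermonde determinant in two variables
    v1 = t*x1 - x2       # v with x1 -> t*x1
    v2 = x1 - t*x2       # v with x2 -> t*x2

    # right-hand side: t^(-n) + t^(1-n)(1 - 1/t) * sum_i v(x_i -> t*x_i) / ((1 - z*x_i) v)
    n = 2
    rhs = t**(-n) + t**(1 - n)*(1 - 1/t)*(v1/((1 - z*x1)*v) + v2/((1 - z*x2)*v))

    assert sp.cancel(lhs - rhs) == 0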

    Now recall that \(\left[z^0\right] f\left(\dfrac{1}{z}\right)/(1-zx)=f(x)\) for a polynomial \(f\) (expand \(1/(1-zx)\) as a geometric series in \(z\)). So

    \[ \left[z^0\right] m_\mu\left[X-(1-q)/z\right]\Omega[zX(1-t^{-1})] = \left[z^0\right] m_\mu\left[X-(1-q)/z\right]t^{-n} + t^{1-n}(1-t^{-1})\sum_{i}\left[z^0\right]\dfrac{m_\mu\left[X-(1-q)/z\right]}{1-zx_i} \dfrac{v(X)_{(x_i\to tx_i)}}{v(X)} \]

    \[ = m_\mu(X) t^{-n} + t^{1-n}(1-t^{-1})\sum_{i}m_\mu\left[X-(1-q)x_i\right]\dfrac{v(X)_{(x_i\to tx_i)}}{v(X)} \]

    \[ = m_\mu(X) t^{-n} + t^{1-n}(1-t^{-1})\sum_{i}m_\mu(X)_{(x_i\to qx_i)}\dfrac{v(X)_{(x_i\to tx_i)}}{v(X)} \]

    Recall that the coefficient of \(s_\lambda\) in a symmetric function \(\rho(X)\) is equal to the coefficient of \(x^{\lambda+\delta}\) in \(\rho(X)v(X)\). Consider the coefficient of \(s_\lambda\) in the first term, and call it \(k_{\mu\lambda}\). It is not a Kostka number, since the Kostka numbers express Schur functions in terms of monomials and here we need monomials in terms of Schur functions; rather, it is an entry of the inverse of the Kostka matrix. Since the Kostka matrix is lower unitriangular, so is its inverse; in particular \(k_{\mu\mu}=1\) and \(k_{\mu\lambda}=0\) unless \(\lambda\leq\mu\). Thus to find the coefficient of \(s_\lambda\) in \( \Delta' m_\mu\) it suffices to compute the second summand of

    \[t^{-n}k_{\mu\lambda} + \left[x^{\lambda+\delta}\right] t^{1-n}(1-t^{-1})\sum_{i}m_\mu(X)_{(x_i\to qx_i)}v(X)_{(x_i\to tx_i)} \]

    But in each summand the dominant exponent is \(\mu+\delta\), given by \(x^{\mu}\) in \(m_\mu\) and \( x^\delta \) in \(v(X)\). A further analysis reveals that every \(\lambda\) that appears is smaller in dominance order. Therefore the largest partition that can come up is \(\mu\), and we have the triangularity. Now let's analyze the diagonal elements, i.e. the coefficient of \(x^{\mu+\delta}\).

    In the \(i\)-th product the monomial \(x^{\mu+\delta}\) appears with coefficient \(q^{\mu_i}t^{n-i}\), coming from \(x^{\mu}\) in \(m_\mu\) and \(x^\delta\) in \(v(X)\) after the substitutions \(x_i\to qx_i\) and \(x_i\to tx_i\), and the first summand contributes \(k_{\mu\mu}=1\), so the diagonal elements are

    \[a_{\mu\mu} = t^{-n} + t^{1-n}(1-t^{-1})\sum_{i=1}^nq^{\mu_i}t^{n-i}\]

    Recall that \(\mu_i = 0\) for all \(i>l(\mu)\); summing the resulting geometric tail, the whole expression is independent of \(n\), and finally

    \[a_{\mu\mu} = (1-t^{-1})\sum_{i=1}^{\infty}q^{\mu_i}t^{1-i} = 1-(1-q)(1-t^{-1})B_{\mu}(q,t^{-1}) \]

    where we used the relation \(B_{\mu}(q,t^{-1}) = \displaystyle \sum_{i=1}^\infty \dfrac{1-q^{\mu_i}}{1-q}\,t^{1-i} \) discussed above, together with \((1-t^{-1})\sum_{i\geq 1}t^{1-i}=1\).
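    A quick symbolic check of this identity (again a sketch with sympy, not from the original notes; the geometric tail \(\sum_{i>l(\mu)}t^{1-i}\) is summed explicitly):

    import sympy as sp

    q, t = sp.symbols('q t')
    mu = (4, 2, 1)
    ell = len(mu)

    # B_mu(q, 1/t): cells (i, j) of mu indexed from (0, 0) contribute t^(-i) q^j
    B_inv = sum(t**(-i) * q**j for i, row in enumerate(mu) for j in range(row))

    # a_{mu,mu} = (1 - 1/t) * sum_{i >= 1} q^(mu_i) t^(1-i), with mu_i = 0 for i > l(mu);
    # the tail sum_{i > l(mu)} t^(1-i) equals t^(-l(mu)) / (1 - 1/t)
    lhs = (1 - 1/t)*(sum(q**mu[i - 1] * t**(1 - i) for i in range(1, ell + 1))
                     + t**(-ell)/(1 - 1/t))

    rhs = 1 - (1 - q)*(1 - 1/t)*B_inv

    assert sp.simplify(lhs - rhs) == 0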

    Summarizing: \(\Delta'\) has distinct eigenvalues \(1-(1-q)(1-t^{-1})B_{\mu}(q,t^{-1}) \), and the corresponding eigenfunctions are lower triangular with respect to the monomial basis, with nonzero coefficient of \(m_\mu\). So we have the triangularity requirement for the Macdonald polynomials; we still need to address their inner products.


    Macdonald-Kostka polynomials

    Recall \(P_{\mu} = m_\mu + \sum_{\lambda<\mu} c_{\lambda\mu}m_\lambda\), where the \(c_{\lambda\mu}(q,t)\) are rational functions.

    Conjecture/Theorem
    Macdonald / Garsia, Remmel, Noumi, Kirillov, Sahi, Knop

    \[ J_{\mu}(X;q,t) = \prod_{s\in \mu}\left( 1-q^{a(s)}t^{l(s)+1} \right)P_\mu(X;q,t) \]

    is a polynomial in \(q,t\), called the integral form. Here \(a(s),l(s)\) stand for the arm and the leg of the cell \(s\) in the diagram of \(\mu\).

    Conjecture/Theorem
    Macdonald / Haiman

    Write \[ J_\mu(X;q,t) = \sum_{\lambda}K_{\lambda\mu}(q,t)\, s_\lambda\left[X(1-t)\right] \]

    Then \( K_{\lambda\mu}(q,t) \in \mathbb{N}\left[q,t\right] \)
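    As a minimal example (a check not in the original): for \(\mu=(1)\) the unique cell has \(a(s)=l(s)=0\), so \(J_{(1)}(X;q,t)=(1-t)P_{(1)}=(1-t)s_1\). On the other hand \(s_1[X(1-t)]=(1-t)s_1\), so \(K_{(1)(1)}(q,t)=1\in\mathbb{N}[q,t]\), consistent with both statements.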

    The plethystic substitution \(s_\lambda\left[X(1-t)\right]\) seems to appear out of nowhere. There is a more natural expression involving the transformed Macdonald symmetric functions.

    Remark: Haiman's proof is based on the \(n!\) conjecture of Garsia and Haiman and the geometry of Hilbert schemes. For an excellent, inspiring, mind-blowing survey, see

    Mark Haiman, Combinatorics, symmetric functions, and Hilbert schemes, Current Developments in Mathematics 2002, no. 1 (2002), 39–111.

    Remark: \(K_{\lambda\mu}(0,t)=K_{\lambda\mu}(t)\), the Kostka-Foulkes polynomial. A combinatorial interpretation of \(K_{\lambda\mu}(q,t)\) along the lines of the Lascoux-Schützenberger formula is still missing.


    Transformed Macdonald Symmetric Functions

    For many reasons that we shall not discuss here, the following transformed Macdonald symmetric functions play a fundamental role in the theory:

    \[\tilde{H}_\mu(X;q,t)=t^{n(\mu)}J_{\mu}\left[ \dfrac{X}{1-t^{-1}};q,t^{-1} \right]\]

    where \(n(\mu)=\sum_{i} (i-1)\mu_i\); for example \(n((4,2,1)) = 0\cdot 4 + 1\cdot 2 + 2\cdot 1 = 4\).
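    As a small example (not computed in the original): for \(\mu=(1)\) we have \(n((1))=0\) and \(J_{(1)}(X;q,t^{-1})=(1-t^{-1})s_1\), so

    \[\tilde{H}_{(1)}(X;q,t)=(1-t^{-1})\,s_1\!\left[\frac{X}{1-t^{-1}}\right]=s_1.\]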

    So now, by the proposition above, we have that \(\tilde{H}_\mu\) is an eigenfunction of \(\Delta\). More precisely,

    \[ \Delta\tilde{H}_\mu = \left(1-(1-q)(1-t)B_\mu(q,t)\right)\tilde{H}_\mu \]

    and \[\tilde{H}_\mu = \sum_\lambda \tilde{K}_{\lambda\mu}(q,t) s_\lambda \]

    where \(\tilde{K}_{\lambda\mu}(q,t)=t^{n(\mu)}K_{\lambda\mu}(q,t^{-1})\)

    By the symmetry of the operator \(\Delta\) under exchanging \(q\) and \(t\), and checking the eigenvalues (note \(B_{\mu'}(q,t)=B_{\mu}(t,q)\)), we get that

    \[\tilde{H}_{\mu'}(X;q,t) = \tilde{H}_\mu(X;t,q) \]

    and

    \[\tilde{K}_{\lambda\mu'}(q,t) = \tilde{K}_{\lambda\mu}(t,q) \]

    Remark: \(K_{(n)\mu}(q,t) = t^{n(\mu)}\), as proved by Macdonald. So the coefficient of \(s_{(n)}\) in \(\tilde{H}_\mu\) is \(\tilde{K}_{(n)\mu}(q,t)=t^{n(\mu)}K_{(n)\mu}(q,t^{-1})=1\).
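    For instance (a standard small example, not worked out in the original), \(\tilde{H}_{(2)}(X;q,t)=s_2+q\,s_{11}\) and \(\tilde{H}_{(1,1)}(X;q,t)=s_2+t\,s_{11}\); the coefficient of \(s_{(2)}\) is \(1\) in both, and \(\tilde{H}_{(1,1)}(X;q,t)=\tilde{H}_{(2)}(X;t,q)\), illustrating the transpose symmetry above.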

    We have the following important proposition

    Proposition: The transformed Macdonald symmetric functions are uniquely characterized by

    1. \(\tilde{H}_\mu\left[X(1-q);q,t\right]\in \mathbb{Q}(q,t) \{s_\lambda : \lambda\geq \mu\} \)
    2. \(\tilde{H}_\mu\left[X(1-t);q,t\right]\in \mathbb{Q}(q,t) \{s_\lambda : \lambda\geq \mu' \} \)
    3. \(\tilde{H}_\mu\left[1;q,t\right] = \langle\tilde{H}_\mu, s_{(n)}\rangle = 1\)

    Proof: \(P_\mu\) and hence \(\tilde{H}_\mu\) are homogeneous of degree \(|\mu|\). So

    \[\tilde{H}_\mu\left[X(1-t);q,t\right]=t^{|\mu|}\tilde{H}_\mu\left[ -X(1-t^{-1});q,t \right] \] is a scalar multiple of \(P_\mu\left[ -X;q,t^{-1} \right]\) and hence of \(\omega P_\mu(X;q,t^{-1})\).

    Since \(P_\mu(X;q,t)\in \mathbb{Q}(q,t)\{s_\lambda : \lambda \leq \mu\}\), and since the involution \(\omega\) sends \(s_\lambda\) to \(s_{\lambda'}\) while transposing reverses the dominance order, we have

    \[ \omega P_\mu(X;q,t)\in \mathbb{Q}(q,t)\{s_\lambda : \lambda \geq \mu'\}\] Hence we have condition 2, and by the symmetry mentioned before the proposition we also have condition 1. Condition 3 is the last remark above.

    For uniqueness: assume \(H_\mu'(X)\) is another solution to the conditions of the proposition. The first condition implies

    \[ H_\mu' \left[ X(1-q)\right] \in \mathbb{Q}(q,t) \{ \tilde{H}_\lambda\left[ X(1-q) \right] : \lambda\geq \mu \} \]

    Which implies

    \[ H_\mu' \in \mathbb{Q}(q,t) \{ \tilde{H}_\lambda: \lambda\geq \mu \} \]

    Similarly the second condition implies

    \[ H_\mu' \in \mathbb{Q}(q,t) \{ \tilde{H}_\lambda: \lambda\leq \mu \} \]

    This implies that \(H_\mu'\) is a scalar multiple of \(\tilde{H}_\mu\), and the third condition fixes the scalar to be 1.

    As a corollary we have

    \[ \omega\tilde{H}_{\mu}(X;q,t) = t^{n(\mu)}q^{n(\mu')}\, \tilde{H}_\mu(X;q^{-1},t^{-1}) \] which also implies

    \[ \tilde{K}_{\lambda'\mu}(q,t) = t^{n(\mu)}q^{n(\mu')}\, \tilde{K}_{\lambda\mu}(q^{-1},t^{-1}) \]

    To prove this, first observe that \( t^{n(\mu)}q^{n(\mu')}\,\omega \tilde{H}_\mu(X;q^{-1},t^{-1}) \) satisfies conditions 1 and 2 from the proposition, hence it is a scalar multiple of \(\tilde{H}_\mu(X;q,t)\). The scalar is 1 since

    \[ \tilde{K}_{(1^n)\mu}(q,t) = t^{n(\mu)}q^{n(\mu')},\] a nontrivial identity proved by Macdonald.
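    For example (continuing the small example above, not in the original): for \(\mu=(2)\) we have \(n(\mu)=0\), \(n(\mu')=1\) and \(\tilde{K}_{(1,1)(2)}(q,t)=q=t^{n(\mu)}q^{n(\mu')}\), consistent with the identity.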


    Orthogonality of the Macdonald symmetric functions

    Replacing \(t\to t^{-1}\), we are to show

    \[ \left\langle P_\mu(X;q,t^{-1}) , P_\nu \left[ X\dfrac{1-q}{1-t^{-1}};q,t^{-1} \right] \right\rangle = 0 \qquad \textrm{for}\hspace{5pt} \mu\neq\nu \]

    Or equivalently

    \[ \left\langle P_\mu(X;q,t^{-1}) , P_\nu \left[ -X\dfrac{1-q}{1-t};q,t^{-1} \right] \right\rangle = 0 \qquad \textrm{for}\hspace{5pt} \mu\neq\nu \]

    because \(1-t^{-1}=-t^{-1}(1-t)\), so by homogeneity we can pull out a power of \(t\) and give the minus sign to \( X \).

    Since \( P_\mu(X;q,t^{-1})\) is a scalar multiple of \( \tilde{H}_\mu\left[-X(1-t);q,t \right] \), and \(P_\nu\left[-X\dfrac{1-q}{1-t};q,t^{-1}\right]\) is a scalar multiple of \( \tilde{H}_\nu\left[X(1-q);q,t\right]\), what we need to show in terms of the transformed Macdonald symmetric functions is

    \[\langle \tilde{H}_\mu[-X(1-t);q,t] , \tilde{H}_\nu[X(1-q);q,t] \rangle = 0 \qquad \textrm{for}\hspace{5pt} \mu\neq\nu \]

    But up to sign we have \( \tilde{H}_\mu[-X(1-t);q,t] = \omega \tilde{H}_\mu[X(1-t);q,t] \), so by condition 2 of the characterization the first factor lies in \(\mathbb{Q}(q,t)\{s_\lambda : \lambda\leq \mu\}\), while by condition 1 the second factor lies in \(\mathbb{Q}(q,t)\{s_\lambda : \lambda\geq \nu\}\). A nonzero pairing would therefore force \(\nu\leq\mu\); since the scalar product \(\langle\cdot,\cdot\rangle_{q,t^{-1}}\) we started from is symmetric in its arguments, it would equally force \(\mu\leq\nu\), so for \(\mu\neq\nu\) the pairing vanishes and we're done.


    Macdonald Polynomials is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by LibreTexts.
