
7.2: Standard Types of Continuous Random Variables


    Section 1: Uniform Random Variable

    Next, we consider one of the standard continuous random variables, the uniform random variable, and learn how to work with it.

    Definition: Uniform probability density curve

    A uniform probability density curve with parameters \(a\) and \(b\) is a horizontal line \(y=\frac{1}{b-a}\) from \(x=a\) to \(x=b\).

    [Figure: uniform density curve, a horizontal line at height \(\frac{1}{b-a}\) from \(x=a\) to \(x=b\)]

    Definition: Uniform random variable

    A random variable with a uniform probability density curve is called a uniform random variable with parameters \(a\) and \(b\). We adopt the following notation to denote a uniform random variable with parameters \(a\) and \(b\):

    \(X\sim U(a,b)\)

    Example \(\PageIndex{1.1.1}\)

    If \(X\sim U(-1,3)\) is a uniform random variable with parameters \(-1\) and \(3\) then its probability density curve will look like a horizontal line spanning from \(-1\) to \(3\).

    [Figure: density curve of \(U(-1,3)\), a horizontal line at height \(\frac{1}{4}\) from \(-1\) to \(3\)]

    It is easy to see that the graph satisfies the criteria for a probability density curve: it lies above the \(x\)-axis, and the total area under it is the area of a rectangle with base \(4\) and height \(\frac{1}{4}\), which equals \(4\cdot\frac{1}{4}=1\).

    Example \(\PageIndex{1.1.2}\)

    Let’s find the probability that \(X=2\). As we previously discussed, the probability that a continuous random variable equals any single value is zero, so \(P(X=2)=0\).

    [Figure: density curve of \(U(-1,3)\); the single point \(x=2\) carries zero area]

    Example \(\PageIndex{1.1.3}\)

    To find the probability that \(X\) is less than \(2\), we look for the area under the curve between \(-1\) and \(2\). This region is a rectangle with base \(3\) and height \(\frac{1}{4}\), so its area is \(3\cdot\frac{1}{4}=0.75\); that is, \(P(X<2)=0.75\) or \(75\%\).

    [Figure: shaded region under the \(U(-1,3)\) density curve from \(-1\) to \(2\)]

    Example \(\PageIndex{1.1.4}\)

    To find the probability that \(X\) is less than \(0\), we look for the area under the curve between \(-1\) and \(0\). This region is a rectangle with base \(1\) and height \(\frac{1}{4}\), so its area is \(1\cdot\frac{1}{4}=0.25\); that is, \(P(X<0)=0.25\) or \(25\%\).

    [Figure: shaded region under the \(U(-1,3)\) density curve from \(-1\) to \(0\)]

    Example \(\PageIndex{1.1.5}\)

    To find the probability that \(X\) is between \(0\) and \(2\), we look for the area under the curve between \(0\) and \(2\). This region is a rectangle with base \(2\) and height \(\frac{1}{4}\), so its area is \(2\cdot\frac{1}{4}=0.5\); that is, \(P(0<X<2)=0.5\) or \(50\%\). Alternatively, we could have used the subdivision rule to compute the same probability as the difference between the probability that \(X\) is less than \(2\) and the probability that \(X\) is less than \(0\), both of which we computed previously: \(0.75-0.25=0.5\), so the answer is again \(50\%\).

    [Figure: shaded region under the \(U(-1,3)\) density curve from \(0\) to \(2\)]

    Example \(\PageIndex{1.1.6}\)

    To find the probability that \(X\) is greater than \(2\), we look for the area under the curve between \(2\) and \(3\). This region is a rectangle with base \(1\) and height \(\frac{1}{4}\), so its area is \(1\cdot\frac{1}{4}=0.25\); that is, \(P(X>2)=0.25\) or \(25\%\). Alternatively, we could have used the complementary rule to compute the same probability as the complement of the probability that \(X\) is less than \(2\), which we computed previously: \(1-0.75=0.25\), so the answer is again \(25\%\).

    [Figure: shaded region under the \(U(-1,3)\) density curve from \(2\) to \(3\)]

    In general, for \(X\sim U(a,b)\) and any \(c\), \(d\) such that \(a<c<d<b\):

    \(P(c<X<d)=\frac{d-c}{b-a}\)

    \(P(X<d)=\frac{d-a}{b-a}\)

    \(P(X>c)=\frac{b-c}{b-a}\)

    Also,

    \(\mu_X=\frac{a+b}{2}\)

    \(\sigma_X=\frac{b-a}{\sqrt{12}}\)

     

    Example \(\PageIndex{1.2}\)

    For \(X\sim U(-1,3)\):

    \(P(0<X<2)=\frac{2-0}{3-(-1)}=\frac{2}{4}=0.50\)

    \(P(X<0)=\frac{0-(-1)}{3-(-1)}=\frac{1}{4}=0.25\)

    \(P(X>2)=\frac{3-2}{3-(-1)}=\frac{1}{4}=0.25\)

    Also,

    \(\mu_X=\frac{-1+3}{2}=1\)

    \(\sigma_X=\frac{3-(-1)}{\sqrt{12}}=1.15\)
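
    These values can also be checked numerically. The sketch below is a minimal check in Python, assuming the scipy library is available; scipy.stats.uniform(loc, scale) describes the interval from loc to loc + scale, so \(U(-1,3)\) corresponds to loc = -1 and scale = 4.

        from scipy.stats import uniform

        # U(-1, 3): loc = a = -1, scale = b - a = 4
        X = uniform(loc=-1, scale=4)

        print(X.cdf(2) - X.cdf(0))   # P(0 < X < 2) = 0.50
        print(X.cdf(0))              # P(X < 0)     = 0.25
        print(1 - X.cdf(2))          # P(X > 2)     = 0.25
        print(X.mean())              # mu_X    = (a + b) / 2        = 1.0
        print(X.std())               # sigma_X = (b - a) / sqrt(12) = 1.1547...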

    Example \(\PageIndex{1.3}\)

    A city metro train runs every 15 minutes.

    1. Find the probability that the waiting time is:
      1. less than 8 minutes;
      2. more than 3 minutes;
      3. between 3 and 7 minutes.
    2. Compute the following:
      1. the expected waiting time;
      2. the standard deviation of waiting time.
    Solution

    Waiting time is frequently modeled by a uniform random variable. Let \(X\) be the waiting time of a randomly selected passenger; then \(X\) can be assumed to be uniform with parameters \(0\) and \(15\), i.e.:

    \(X\sim U(0,15)\)

    Using the formulas above, we can compute all of the required quantities; a quick software check follows the solution.

    1. Find the probability that the waiting time is:
      1. less than 8 minutes: \(P(X<8)=\frac{8-0}{15-0}=\frac{8}{15}=0.53\)
      2. more than 3 minutes: \(P(X>3)=\frac{15-3}{15-0}=\frac{12}{15}=0.80\)
      3. between 3 and 7 minutes: \(P(3<X<7)=\frac{7-3}{15-0}=\frac{4}{15}=0.27\)
    2. Compute the following:
      1. the expected value of \(X\): \(\mu_X=\frac{0+15}{2}=7.5\)
      2. the standard deviation of \(X\): \(\sigma_X=\frac{15-0}{\sqrt{12}}=4.33\)
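
    As a quick numerical check (a sketch assuming scipy is available; the variable name W is only illustrative), the same quantities can be computed with scipy.stats.uniform, where \(U(0,15)\) corresponds to loc = 0 and scale = 15:

        from scipy.stats import uniform

        W = uniform(loc=0, scale=15)   # waiting time in minutes, W ~ U(0, 15)

        print(W.cdf(8))              # P(W < 8)     = 0.5333...
        print(1 - W.cdf(3))          # P(W > 3)     = 0.80
        print(W.cdf(7) - W.cdf(3))   # P(3 < W < 7) = 0.2666...
        print(W.mean(), W.std())     # 7.5, 4.3301...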

    Section 2: Standard Normal Variable

    Definition: Standard normal density curve

    The standard normal density curve is a bell-shaped curve that satisfies the following properties:

    1. It has its peak at \(0\) and is symmetric about \(0\).
    2. It extends indefinitely in both directions, approaching but never touching the horizontal axis.
    3. The Empirical Rule holds, that is:
      1. ~68% of the area under the curve is between \(-1\) and \(+1\);
      2. ~95% of the area under the curve is between \(-2\) and \(+2\);
      3. ~99.7% of the area under the curve is between \(-3\) and \(+3\).

    The description of the curve should sound familiar as we have seen it before!

    [Figure: the standard normal density curve with the Empirical Rule areas marked]

    Definition: Standard normal random variable

    We define the standard normal random variable as a random variable with the standard normal probability density curve. We denote the standard normal random variable as

    \(Z \sim N(0,1)\)

     

    Let’s make sure we understand the properties of the standard normal probability density curve and use the Empirical Rule to find some probabilities; a software check of these values follows the list below.

    • To find the probability that \(Z\) is less than \(0\), we look for the area under the curve to the left of \(0\). By symmetry, this is half of the total area, so \(P(Z<0)=0.50\) or \(50\%\).
    • To find the probability that \(Z\) is between \(-1\) and \(1\), we look for the area under the curve between \(-1\) and \(1\). By the Empirical Rule, \(P(-1<Z<1)=0.68\) or \(68\%\).
    • To find the probability that \(Z\) is between \(0\) and \(2\), we look for the area under the curve between \(0\) and \(2\). This is half of the area between \(-2\) and \(2\), so \(P(0<Z<2)=\frac{0.95}{2}=0.475\) or \(47.5\%\).
    • To find the probability that \(Z\) is between \(-1\) and \(2\), we look for the area under the curve between \(-1\) and \(2\). Adding the areas between \(-1\) and \(0\) and between \(0\) and \(2\) gives \(P(-1<Z<2)=0.34+0.475=0.815\) or \(81.5\%\).
    • To find the probability that \(Z\) is greater than \(1\), we look for the area under the curve to the right of \(1\). This is half of the area outside the interval \((-1,1)\), so \(P(Z>1)=\frac{1-0.68}{2}=0.16\) or \(16\%\).
    • To find the probability that \(Z\) is less than \(-2\), we look for the area under the curve to the left of \(-2\). This is half of the area outside the interval \((-2,2)\), so \(P(Z<-2)=\frac{1-0.95}{2}=0.025\) or \(2.5\%\).
    • To find the probability that \(Z\) is greater than \(4\), we look for the area under the curve to the right of \(4\). This area is essentially equal to \(0\), so the probability is nearly \(0\%\); that is, \(P(Z>4)\approx 0\).
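
    The Empirical Rule gives approximations of exact areas under the standard normal curve. As a software check (a minimal sketch, assuming scipy is available), scipy.stats.norm.cdf returns the exact left-tail areas, from which each of the probabilities above can be recovered:

        from scipy.stats import norm

        print(norm.cdf(0))                  # P(Z < 0)      = 0.5
        print(norm.cdf(1) - norm.cdf(-1))   # P(-1 < Z < 1) ~ 0.6827
        print(norm.cdf(2) - norm.cdf(0))    # P(0 < Z < 2)  ~ 0.4772
        print(norm.cdf(2) - norm.cdf(-1))   # P(-1 < Z < 2) ~ 0.8186
        print(1 - norm.cdf(1))              # P(Z > 1)      ~ 0.1587
        print(norm.cdf(-2))                 # P(Z < -2)     ~ 0.0228
        print(1 - norm.cdf(4))              # P(Z > 4)      ~ 0.00003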

    To find the probability that \(Z\) is less than \(1.23\), we look for the area under the curve to the left of \(1.23\). While it is clear what this region looks like, its area cannot be found using the Empirical Rule. So how do we find probabilities that involve values not covered by the Empirical Rule, such as \(P(Z<1.23)\), \(P(Z>1.23)\), and \(P(0<Z<1.23)\)? The good news is that the latter two probabilities can be related to the first one by using the complementary rule and the subdivision rule:

    \(P(Z>1.23)=1-P(Z<1.23)\)

    \(P(0<Z<1.23)=P(Z<1.23)-0.5\)

    But how do we find the probability \(P(Z<1.23)\)? Luckily all such probabilities have been computed and put together in the form of a table that looks like this!

    [Figure: excerpt of the standard normal (Z) table of left-tail probabilities]

    To find the probability that \(Z\) is less than \(1.23\) we split the number \(1.23\) into \(1.2\) and \(0.03\) then find the row corresponding to \(1.2\) and the column corresponding to \(0.03\). In the intersection, we find the desired probability, so the probability is \(P(Z<1.23)=0.8907\) or \(89.07\%\).

    Now that we found the probability \(P(Z<1.23)\), we can find the other probabilities using the complementary rule and the subdivision rule:

    • \(P(Z<1.23)=0.8907\)
    • \(P(Z>1.23)=1-P(Z<1.23)=1-0.8907=0.1093\)
    • \(P(0<Z<1.23)=P(Z<1.23)-0.5=0.8907-0.5=0.3907\)
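
    These table lookups can also be reproduced in software. A minimal sketch, assuming scipy is available, uses scipy.stats.norm.cdf, which returns exactly the left-tail probabilities that the Z-table lists:

        from scipy.stats import norm

        p = norm.cdf(1.23)   # P(Z < 1.23) = 0.8907 (to four decimal places)
        print(p)
        print(1 - p)         # P(Z > 1.23)     = 0.1093
        print(p - 0.5)       # P(0 < Z < 1.23) = 0.3907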

    Let’s practice using the table!

    Example \(\PageIndex{2.1}\)

    To find the probability that \(Z\) is less than \(0.71\), we split the number \(0.71\) into \(0.7\) and \(0.01\), find the row corresponding to \(0.7\), find the column corresponding to \(0.01\). In the intersection, we find the desired probability, so the probability is \(P(Z<0.71)=0.7611\) or \(76.11\%\). Also,

    • \(P(Z<0.71)=0.7611\)
    • \(P(Z>0.71)=1-P(Z<0.71)=1-0.7611=0.2389\)
    • \(P(0<Z<0.71)=P(Z<0.71)-0.5=0.7611-0.5=0.2611\)
    • \(P(0.71<Z<1.23)=P(Z<1.23)-P(Z<0.71)=0.8907-0.7611=0.1296\)
    Example \(\PageIndex{2.2}\)

    To find the probability that \(Z\) is less than \(-1.02\), we split the number \(-1.02\) into \(-1.0\) and \(-0.02\), find the row corresponding to \(-1.0\), find the column corresponding to \(-0.02\). In the intersection, we find the desired probability, so the probability is \(P(Z<-1.02)=0.1539\) or \(15.39\%\). Also,

    • \(P(Z<-1.02)=0.1539\)
    • \(P(Z>-1.02)=1-P(Z<-1.02)=1-0.1539=0.8461\)
    Example \(\PageIndex{2.3}\)

    To find the probability that \(Z\) is less than \(-0.47\), we split the number \(-0.47\) into \(-0.4\) and \(-0.07\), find the row corresponding to \(-0.4\), find the column corresponding to \(-0.07\). In the intersection, we find the desired probability, so the probability is \(P(Z<-0.47)=0.3192\) or \(31.92\%\). Also,

    • \(P(Z<-0.47)=0.3192\)
    • \(P(Z>-0.47)=1-P(Z<-0.47)=1-0.3192=0.6808\)
    • \(P(-1.02<Z<-0.47)=P(Z<-0.47)-P(Z<-1.02)=0.3192-0.1539=0.1653\)

    Section 3: \(\alpha\)-notation

    Definition: \(\alpha\)-notation

    Consider a random variable \(X\) and its probability density curve. The value such that the area under the curve to the right of it is equal to \(\alpha\) is denoted \(x_\alpha\).

    [Figure: a probability density curve with area \(\alpha\) shaded to the right of \(x_\alpha\)]

    This fact can be expressed as the following probability statement:

    \(P(X>x_\alpha)=\alpha\)

    As a result, by the complementary rule, the area to the left of \(x_\alpha\) must be \(1-\alpha\). This fact can be expressed as the following probability statement:

    \(P(X<x_\alpha)=1-\alpha\)

    Example \(\PageIndex{3.1}\)

    For a continuous random variable \(X\), the value such that the area to the right of it is equal to \(0.3\) is denoted \(x_{0.3}\).

    [Figure: a probability density curve with area \(0.3\) shaded to the right of \(x_{0.3}\)]

    This fact can be expressed as the following probability statement:

    \(P(X>x_{0.3})=0.3\)

    As a result, by the complementary rule, the area to the left of \(x_{0.3}\) must be \(1-0.3=0.7\). This fact can be expressed as the following probability statement:

    \(P(X<x_{0.3})=0.7\)

    Example \(\PageIndex{3.2.1}\)

    Consider a uniform random variable with parameters \(-1\) and \(3\), i.e., \(X \sim U(-1,3)\). The value such that the area to the right of it equals \(0.25\) is denoted \(x_{0.25}\). In this case, we already know that \(x_{0.25}=2\), since we found earlier that \(P(X>2)=0.25\).

    [Figure: \(U(-1,3)\) density curve with area \(0.25\) shaded to the right of \(x_{0.25}=2\)]

    This fact can be expressed in two ways:

    \(P(X<x_{0.25})=0.75\) and \(P(X>x_{0.25})=0.25\)

    Example \(\PageIndex{3.2.2}\)

    Consider a uniform random variable with parameters \(-1\) and \(3\), i.e., \(X \sim U(-1,3)\). The value such that the area to the left of it equals \(0.6\) (and hence the area to the right equals \(0.4\)) is denoted \(x_{0.4}\). In this case, \(x_{0.4}=1.4\), since \(P(X<1.4)=\frac{1.4-(-1)}{3-(-1)}=0.6\).

    [Figure: \(U(-1,3)\) density curve with area \(0.6\) shaded to the left of \(x_{0.4}=1.4\)]

    This fact can be expressed in two ways:

    \(P(X<x_{0.4})=0.6\) and \(P(X>x_{0.4})=0.4\)

    In general, for a uniform random variable with parameters \(a\) and \(b\), to find \(x_\alpha\) means to find the value with area \(\alpha\) to its right and area \(1-\alpha\) to its left. The following formula can be used to find \(x_\alpha\); it follows from setting \(P(X>x_\alpha)=\frac{b-x_\alpha}{b-a}=\alpha\) and solving for \(x_\alpha\):

    \(x_\alpha=a\cdot\alpha+b\cdot(1-\alpha)\)

    Example \(\PageIndex{3.3}\)

    Again, consider a uniform random variable with parameters \(-1\) and \(3\), i.e. \(X \sim U(-1,3)\).

    1. \(x_{0.25}=-1\cdot0.25+3\cdot0.75=2\)
    2. \(x_{0.4}=-1\cdot0.4+3\cdot0.6=1.4\)
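
    The same values can be checked numerically. The sketch below (assuming scipy is available) computes \(x_\alpha\) both from the formula \(x_\alpha=a\cdot\alpha+b\cdot(1-\alpha)\) and from the percent-point function uniform.ppf, which returns the value with a given area to its left:

        from scipy.stats import uniform

        a, b = -1, 3
        X = uniform(loc=a, scale=b - a)   # X ~ U(-1, 3)

        for alpha in (0.25, 0.40):
            by_formula = a * alpha + b * (1 - alpha)
            by_ppf = X.ppf(1 - alpha)          # value with area 1 - alpha to its left
            print(alpha, by_formula, by_ppf)   # 0.25 -> 2.0, 0.40 -> 1.4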

    Now that we know what the \(\alpha\)-notation is and how to find \(x_\alpha\) for uniform random variables, let’s consider the standard normal random variable, \(Z\), and learn how to do the same.

    Example \(\PageIndex{3.4}\)

    [Figure: standard normal curve with the area to the right of \(1\) shaded]

    In this case, we know from the Empirical Rule that the area to the right of \(1\) is \(16\%\). The value with area \(0.16\) to its right under the \(Z\)-curve is denoted \(z_{0.16}\). Therefore

    \(z_{0.16}=1\)

    and this fact can be expressed as the following probability statements:

    \(P(Z<1)=0.84\) and \(P(Z>1)=0.16\)

    Example \(\PageIndex{3.5}\)

    Let’s consider the standard normal random variable, \(Z\). Since \(P(Z<1.23)=0.89\), the area to the right of \(1.23\) is \(1-0.89=0.11\), so this previously found probability statement can be interpreted as \(z_{0.11}=1.23\).

    [Figure: standard normal curve with area \(0.11\) shaded to the right of \(1.23\)]

    In general, we want to be able to find \(z_\alpha\) for any \(\alpha\).

    [Figure: standard normal curve with area \(\alpha\) shaded to the right of \(z_\alpha\)]

    To do that we can use the \(Z\)-table:

    [Figure: excerpt of the standard normal (Z) table]

    Example \(\PageIndex{3.6}\)

    Find \(z_{0.38}\) using the \(Z\)-table.

    Solution
    1. We identify \(\alpha=0.38\), so \(1-\alpha=0.62\). In the \(Z\)-table, the closest value to \(0.62\) is \(0.6217\).
    2. We read the corresponding numbers \(0.3\) on the left and \(0.01\) on top and put them together to obtain the answer.
    3. \(z_{0.38}=0.31\)
    Example \(\PageIndex{3.7}\)

    Find \(z_{0.10}\) using the \(Z\)-table.

    Solution
    1. We identify \(\alpha=0.10\), so \(1-\alpha=0.90\). In the \(Z\)-table, the closest value to \(0.90\) is \(0.8997\).
    2. We read the corresponding numbers \(1.2\) on the left and \(0.08\) on top and put them together to obtain the answer.
    3. \(z_{0.10}=1.28\)
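
    Instead of scanning the table for the closest value, \(z_\alpha\) can be computed directly with the inverse of the normal CDF. A minimal sketch, assuming scipy is available, uses scipy.stats.norm.ppf evaluated at \(1-\alpha\):

        from scipy.stats import norm

        print(norm.ppf(1 - 0.38))   # z_0.38 ~ 0.3055, which rounds to 0.31
        print(norm.ppf(1 - 0.10))   # z_0.10 ~ 1.2816, which rounds to 1.28
        print(norm.ppf(1 - 0.05))   # z_0.05 ~ 1.645
        print(norm.ppf(1 - 0.01))   # z_0.01 ~ 2.3263, which rounds to 2.33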

    For \(Z\) and other distributions symmetric about zero:

    \(x_{1-\alpha}=-x_\alpha\)

    Example \(\PageIndex{3.8}\)

    \(z_{0.9}=-z_{0.10}=-1.28\)

    \(z_{0.95}=-z_{0.05}=-1.645\)

    \(z_{0.99}=-z_{0.01}=-2.33\)

    Section 4: Percentiles

    Definition: p-th percentile

    The \(p\)-th percentile of a random variable \(X\) is the value \(x_{p\%}\) that is greater than \(p\%\) of the observations if the experiment is repeated many times.

    In other words,

    \(P(X<x_{p\%})=p\%\)

    The relation between the percentiles and \(\alpha\)-notation:

    \(x_{p\%}=x_{\alpha}\), where \(\alpha=1-\frac{p}{100}\)

    Example \(\PageIndex{4.1}\)

    Find the \(80\)-th percentile for \(Z\).

    Solution

    \(z_{80\%}=z_{0.2}=0.84\)

    Example \(\PageIndex{4.2}\)

    Find the \(55\)-th percentile for \(X \sim U(1,5)\).

    Solution

    \(x_{55\%}=x_{0.45}=1\cdot0.45+5\cdot0.55=3.2\)

    Example \(\PageIndex{4.3}\)

    \(z_{10\%}=z_{0.9}=-1.28\) is the 10-th percentile for \(Z\).

    \(z_{62\%}=z_{0.38}=0.31\) is the 62-nd percentile for \(Z\).

    \(x_{75\%}=x_{0.25}=2\) is the 75-th percentile for \(X \sim U(-1,3)\).

    \(x_{60\%}=x_{0.40}=1.4\) is the 60-th percentile for \(X \sim U(-1,3)\).
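
    Percentiles are left-tail inversions, so they can also be checked numerically. The sketch below (assuming scipy is available) uses the percent-point functions of scipy.stats.norm and scipy.stats.uniform:

        from scipy.stats import norm, uniform

        print(norm.ppf(0.80))                       # 80th percentile of Z ~ 0.8416
        print(norm.ppf(0.10))                       # 10th percentile of Z ~ -1.2816
        print(uniform(loc=1, scale=4).ppf(0.55))    # 55th percentile of U(1, 5)  = 3.2
        print(uniform(loc=-1, scale=4).ppf(0.75))   # 75th percentile of U(-1, 3) = 2.0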


    7.2: Standard Types of Continuous Random Variables is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
