
3.5: Measures of Relative Standing


    Section 1: Z-Scores

Next, we will discuss measures of relative standing, which allow us to rank each observation within the data set.

For an observation \(x_i\), the value \(z=\frac{x_i-\mu}{\sigma}\) is called the z-score of the observation \(x_i\). The term standard score is often used instead of z-score. For example, the average age of all presidents at inauguration is \(\mu=55\) and the standard deviation is \(\sigma=6.5\). Consider Lincoln's age at inauguration, 52, and Eisenhower's, 62.

    \(z_L=\frac{52-55}{6.5}=-0.46\)

    and

\(z_E=\frac{62-55}{6.5}=1.08\)

A negative z-score indicates that the observation is below (less than) the mean, whereas a positive z-score indicates that the observation is above (greater than) the mean. The z-score of an observation tells us how many standard deviations the observation lies away from the mean. We can see that Lincoln’s age of 52 is 0.46 standard deviations below the mean and Eisenhower’s age of 62 is 1.08 standard deviations above the mean.
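
    A minimal sketch of this computation in Python, using the inauguration-age figures from the example (the helper name z_score is illustrative, not from the text):

```python
# z-score: how many standard deviations an observation lies from the mean
mu, sigma = 55, 6.5              # mean and standard deviation of inauguration ages

def z_score(x, mu, sigma):
    """Return (x - mu) / sigma, the z-score of x."""
    return (x - mu) / sigma

print(round(z_score(52, mu, sigma), 2))   # Lincoln:    -0.46 (below the mean)
print(round(z_score(62, mu, sigma), 2))   # Eisenhower:  1.08 (above the mean)
```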


Unlike the measures of center and variation, measures of relative standing are computed for each individual observation. Their purpose is to identify the position of each observation relative to the other observations in the data set. The z-score of an observation, therefore, can be used as a rough measure of its relative standing. For instance:

• a z-score of 3 or more indicates that the observation is larger than most of the other observations.

    • a z-score of −3 or less indicates that the observation is smaller than most of the other observations.

    • a z-score near 0 indicates that the observation is located near the mean.

    If two distributions have the same shape or, more generally, if they differ only by center and variation, then z-scores can be used to compare the relative standings of two observations from those distributions. Consider the following example!

    Example \(\PageIndex{1.1}\)

    Arthur scored 92 in the class with the mean 80 and the standard deviation 6, but Bethany scored 90 in the class with the mean 78 and the standard deviation 4. Who scored relatively better?

    Solution

    The Arthur’s z-score is \(\frac{92-80}{6}=2\) and the Bethany’s z-score is \(\frac{90-78}{4}=3\). Since Bethany's z-score is higher, Bethany did relatively better than Arthur despite the fact that her exam score was lower than Arthur's.

A set consisting of the z-scores of all observations is called the standardized data set. For example, consider the following population:

    Data, \(x_i\)

    72

    73

    76

    76

    78

which has mean 75 and standard deviation 2.2. Replacing each value with its z-score, we obtain the standardized data set:

    Standardized Data, \(\frac{x_i-\mu}{\sigma}\)

    \(\frac{72-75}{2.2}=-1.36\)

    \(\frac{73-75}{2.2}=-0.91\)

    \(\frac{76-75}{2.2}=0.45\)

    \(\frac{76-75}{2.2}=0.45\)

    \(\frac{78-75}{2.2}=1.36\)

    Note that a standardized data set always has mean 0 and standard deviation 1.
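
    A minimal sketch that standardizes the population above and checks this fact; note that the text rounds \(\sigma\) to 2.2, so its entries differ slightly from the unrounded values printed here:

```python
data = [72, 73, 76, 76, 78]

n = len(data)
mu = sum(data) / n                                        # 75.0
sigma = (sum((x - mu) ** 2 for x in data) / n) ** 0.5     # population sd, about 2.19

z = [(x - mu) / sigma for x in data]
print([round(v, 2) for v in z])                           # [-1.37, -0.91, 0.46, 0.46, 1.37]

# The standardized data set has mean 0 and standard deviation 1 (up to rounding).
z_mu = sum(z) / n
z_sigma = (sum((v - z_mu) ** 2 for v in z) / n) ** 0.5
print(round(z_mu, 6), round(z_sigma, 6))                  # ~0.0  ~1.0
```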

One way to sharpen our interpretation of z-scores is to rephrase the rules from the previous section in z-score language (a short sketch after the list below illustrates these cutoffs). For example:

• any observation with a z-score less than −3 or greater than 3 is an outlier.

    • any observation with a z-score less than −2 is significantly low.

    • any observation with a z-score greater than 2 is significantly high.
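
    A minimal sketch (with an illustrative helper name, classify) applying these cutoffs; an observation beyond ±3 is reported only as an outlier here, even though it is also significantly low or high:

```python
def classify(x, mu, sigma):
    """Rough relative standing of x based on its z-score."""
    z = (x - mu) / sigma
    if z < -3 or z > 3:
        return "outlier"
    if z < -2:
        return "significantly low"
    if z > 2:
        return "significantly high"
    return "neither significantly low nor high"

print(classify(52, 55, 6.5))   # neither significantly low nor high (z = -0.46)
print(classify(75, 55, 6.5))   # outlier (z = 3.08)
```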

By Chebyshev’s Rule, for any data set with mean \(\mu\) and standard deviation \(\sigma\):

    •       At least 75% of observations have z-scores between -2 and 2;

    •       At least 89% of observations have z-scores between -3 and 3;

    •       At least 93.75% of observations have z-scores between -4 and 4.

By the Empirical Rule, for any bell-shaped data set with mean \(\mu\) and standard deviation \(\sigma\) (a short simulation after this list illustrates both rules):

    •       Approximately 68% of observations have z-scores between -1 and 1.

    •       Approximately 95% of observations have z-scores between -2 and 2.

    •       Approximately 99.7% of observations have z-scores between -3 and 3.
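
    A minimal sketch checking both statements on simulated bell-shaped data (the simulation itself is an assumption, not part of the text):

```python
import random

random.seed(0)
data = [random.gauss(0, 1) for _ in range(100_000)]       # bell-shaped sample

n = len(data)
mu = sum(data) / n
sigma = (sum((x - mu) ** 2 for x in data) / n) ** 0.5
z = [(x - mu) / sigma for x in data]

for k in (1, 2, 3):
    share = sum(1 for v in z if -k < v < k) / n
    print(f"within {k} standard deviations: {share:.3f}")

# For bell-shaped data the shares come out near 0.68, 0.95, and 0.997 (Empirical Rule);
# Chebyshev's Rule guarantees only at least 0.75 and 0.89 for k = 2 and 3.
```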

We have discussed z-scores as one way to measure relative standing: how to compute them and how to interpret them.

    Section 2: Percentiles

Next, we will discuss another measure of relative standing that allows us to rank each observation in the data set: the percentile. The p-th percentile is a value that separates the bottom p% of the data from the remaining values at the top. To find the percentile of a particular observation \(x_i\), we use the following formula:

    \(\text{percentile of }x_i=\frac{\text{the number of values }< x_i}{\text{total number of values}}\cdot100\%\)

    Example \(\PageIndex{2.1}\)

Consider the following population; before we proceed any further, make sure that the data is organized in ascending order:

    {72, 73, 76, 76, 78}

Two of the five values are less than \(x_3=76\), so the percentile of \(x_3\) is \(\frac{2}{5}\cdot100\%=0.4\cdot100\%=40\%\); that is, \(x_3\) is at the 40th percentile.
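
    A minimal sketch of the percentile-of-an-observation formula (the helper name percentile_of is illustrative):

```python
def percentile_of(x, data):
    """Percentage of values in data that are strictly less than x."""
    below = sum(1 for v in data if v < x)
    return below / len(data) * 100

data = [72, 73, 76, 76, 78]         # already in ascending order
print(percentile_of(76, data))      # 40.0, so x_3 = 76 is at the 40th percentile
```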

To find the p-th percentile of a data set, we use the following formula:

    \(x_p=x_{\left[\frac{p}{100}\cdot n\right]}\)

    In this formula, \(n\) is the size of the data set, \(p\) is the desired percentile, and \(i=\frac{p}{100}\cdot n\) is the index of the observation that we are trying to find; the brackets indicate that a non-integer index is rounded up to the next whole number.

    Example \(\PageIndex{2.2}\)

Consider the following population; before we proceed any further, make sure that the data is organized in ascending order:

    {72, 73, 76, 76, 78}

The 30th percentile is \(x_{\left[\frac{30}{100}\cdot 5\right]}=x_{[1.5]}=x_2=73\), since the index 1.5 is rounded up to 2.
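
    A minimal sketch of the p-th percentile formula with the round-up convention used in the example (the helper name pth_percentile is illustrative):

```python
import math

def pth_percentile(p, data):
    """p-th percentile of an ascending-order data set, rounding the index up."""
    i = math.ceil(p / 100 * len(data))   # locator: 30/100 * 5 = 1.5 -> 2
    return data[i - 1]                   # Python lists are 0-based; the formula is 1-based

data = [72, 73, 76, 76, 78]
print(pth_percentile(30, data))          # 73
```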

    Certain percentiles are particularly important:

    • the deciles divide a data set into tenths (10 equal parts).
    • the quartiles divide a data set into quarters (4 equal parts). Quartiles are the most commonly used percentiles. A data set has three quartiles, which we denote Q1, Q2, and Q3. The second quartile is also the median of the data set.
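
    As a usage note, the quartiles of the small example data set can be read off with the same hypothetical pth_percentile helper by taking p = 25, 50, and 75:

```python
import math

def pth_percentile(p, data):
    i = math.ceil(p / 100 * len(data))
    return data[i - 1]

data = [72, 73, 76, 76, 78]
q1, q2, q3 = (pth_percentile(p, data) for p in (25, 50, 75))
print(q1, q2, q3)   # 73 76 76; note that Q2 equals the median of this data set
```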

Now that we know two different ways to measure the relative standing of every observation, the question is: which one is better?

Percentiles usually give a more accurate and meaningful measure of relative standing than z-scores. However, to compute a percentile one needs access to the entire data set, while to compute a z-score one needs only the mean and standard deviation. With very little information, z-scores therefore provide a feasible alternative to percentiles for measuring relative standing. For example, for a student to compute the percentile of their exam score, the instructor would have to release the entire list of scores, which is typically not possible; to compute the z-score, the instructor needs to share only the mean and the standard deviation, which instructors are usually more comfortable doing.


    3.5: Measures of Relative Standing is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
