# 1.2: Combinations and Permutations

We turn first to *counting*. While this sounds simple, perhaps too simple to study, it is not. When we speak of counting, it is shorthand for determining the size of a set, or more often, the sizes of many sets, all with something in common, but different sizes depending on one or more parameters. For example: how many outcomes are possible when a die is rolled? Two dice? \(n \) dice? As stated, this is ambiguous: what do we mean by "outcome''? Suppose we roll two dice, say a red die and a green die. Is "red two, green three'' a different outcome than "red three, green two''? If yes, we are counting the number of possible "physical'' outcomes, namely 36. If no, there are 21. We might even be interested simply in the possible totals, in which case there are 11 outcomes.

Even the quite simple first interpretation relies on some degree of knowledge about counting; we first make two simple facts explicit. In terms of set sizes, suppose we know that set \(A \) has size \(m \) and set \(B \) has size \(n\). What is the size of \(A \) and \(B \) together, that is, the size of \(A\cup B\)? If we know that \(A \) and \(B \) have no elements in common, then the size of \(A\cup B \) is \(m+n\); if they do have elements in common, we need more information. A simple but typical problem of this type: if we roll two dice, how many ways are there to get either 7 or 11? Since there are 6 ways to get 7 and 2 ways to get 11, the answer is \(6+2=8\). Though this principle is simple, it is easy to forget the requirement that the two sets be disjoint, and hence to use it when the circumstances are otherwise. This principle is often called the **addition principle**.
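The addition principle is easy to confirm by brute force. Here is a quick illustrative check in Python (an aside, not part of the text): the rolls totaling 7 and the rolls totaling 11 form disjoint sets, so their sizes add.

```python
from itertools import product

# All ordered rolls of two dice totaling 7, and all totaling 11.
# The two sets are disjoint, so the addition principle applies.
sevens = [r for r in product(range(1, 7), repeat=2) if sum(r) == 7]
elevens = [r for r in product(range(1, 7), repeat=2) if sum(r) == 11]
print(len(sevens), len(elevens), len(sevens) + len(elevens))  # 6 2 8
```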

This principle can be generalized: if sets \(A_1 \) through \(A_n \) are pairwise disjoint and have sizes \(m_1,\ldots,m_n\), then the size of \(A_1\cup\cdots\cup A_n \) is \(\sum_{i=1}^n m_i\). This can be proved by a simple induction argument.

Why do we know, without listing them all, that there are 36 outcomes when two dice are rolled? We can view each outcome as a pair of outcomes, that is, the outcome of rolling die number one and the outcome of rolling die number two. For each of 6 outcomes for the first die the second die may have any of 6 outcomes, so the total is \(6+6+6+6+6+6=36\), or more compactly, \(6\cdot6=36\). Note that we are really using the addition principle here: set \(A_1 \) is all pairs \((1,x)\), set \(A_2 \) is all pairs \((2,x)\), and so on. This is somewhat more subtle than is first apparent. In this simple example, the outcomes of die number two have nothing to do with the outcomes of die number one. Here's a slightly more complicated example: how many ways are there to roll two dice so that the two dice don't match? That is, we rule out 1-1, 2-2, and so on. Here for each possible value on die number one, there are five possible values for die number two, but they are a different five values for each value on die number one. Still, because the number of possibilities is the same in each case, the result is \(5+5+5+5+5+5=30\), or \(6\cdot 5=30\). In general, then, if there are \(m \) possibilities for one event, and \(n \) for a second event, the number of possible outcomes for both events together is \(m\cdot n\). This is often called the **multiplication principle**.
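The non-matching count can likewise be verified by listing (an illustrative aside in Python): filtering out the matched pairs leaves \(6\cdot 5=30\) outcomes.

```python
from itertools import product

# Multiplication principle: for each of 6 values on die one,
# 5 values on die two avoid a match.
non_matching = [r for r in product(range(1, 7), repeat=2) if r[0] != r[1]]
print(len(non_matching))  # 30
```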

In general, if \(n \) events have \(m_i \) possible outcomes, for \(i=1,\ldots,n\), where each \(m_i \) is unaffected by the outcomes of other events, then the number of possible outcomes overall is \(\prod_{i=1}^n m_i\). This too can be proved by induction.

Example \(\PageIndex{1}\)

How many outcomes are possible when three dice are rolled, if no two of them may be the same? The first two dice together have \(6\cdot 5=30 \) possible outcomes, from above. For each of these 30 outcomes, there are four possible outcomes for the third die, so the total number of outcomes is \(30\cdot 4=6\cdot 5\cdot 4=120\). (Note that we consider the dice to be distinguishable, that is, a roll of 6, 4, 1 is different from 4, 6, 1, because the first and second dice are different in the two rolls, even though the numbers as a set are the same.)
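As a quick brute-force check of this example (an illustrative aside in Python), we can list all ordered rolls of three distinguishable dice and keep those with three distinct values:

```python
from itertools import product

# Ordered rolls of three dice with all three values distinct.
rolls = [r for r in product(range(1, 7), repeat=3) if len(set(r)) == 3]
print(len(rolls))  # 120
```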

Example \(\PageIndex{2}\)

Suppose blocks numbered 1 through \(n \) are in a barrel; we pull out \(k \) of them, placing them in a line as we do. How many outcomes are possible? That is, how many different arrangements of \(k \) blocks might we see?

This is essentially the same as the previous example: there are \(k \) "spots'' to be filled by blocks. Any of the \(n \) blocks might appear first in the line; then any of the remaining \(n-1 \) might appear next, and so on. The number of outcomes is thus \(n(n-1)(n-2)\cdots(n-k+1)\), by the multiplication principle. In the previous example, the first "spot'' was die number one, the second spot was die number two, the third spot die number three, and \(6\cdot5\cdot4=6(6-1)(6-2)\); notice that \(6-2=6-3+1\).
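The product \(n(n-1)\cdots(n-k+1)\) can be checked against a direct listing for small cases. A short Python sketch (an illustrative aside; the values \(n=6\), \(k=3\) are chosen to match the dice example):

```python
from itertools import permutations

n, k = 6, 3
# Count ordered selections of k of n blocks by listing them...
by_listing = sum(1 for _ in permutations(range(1, n + 1), k))
# ...and by the product n(n-1)...(n-k+1).
by_formula = 1
for i in range(n, n - k, -1):
    by_formula *= i
print(by_listing, by_formula)  # 120 120
```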

This is quite a general sort of problem:

Definition: permutations

The number of permutations of \(n \) things taken \(k \) at a time is

$$P(n,k)=n(n-1)(n-2)\cdots(n-k+1)={n!\over (n-k)!}.$$

A permutation of some objects is a particular linear ordering of the objects; \(P(n,k) \) in effect counts two things simultaneously: the number of ways to choose and order \(k \) out of \(n \) objects. A useful special case is \(k=n\), in which we are simply counting the number of ways to order all \(n \) objects. This is \(n(n-1)\cdots(n-n+1)=n!\). Note that the second form of \(P(n,k) \) from the definition gives \({n!\over (n-n)!}={n!\over 0!}\). This is correct only if \(0!=1\), so we adopt the standard convention that this is true, that is, we *define* \(0! \) to be \(1\).
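With the convention \(0!=1\), the factorial form \(n!/(n-k)!\) gives \(n!\) at \(k=n\), as it should. A small Python check of this special case (an illustrative aside, using \(n=5\)):

```python
from math import factorial

n = 5
# P(n, n) = n!/(n-n)! = n!/0!; with 0! defined to be 1 this is n!.
p_n_n = factorial(n) // factorial(0)
print(factorial(0), p_n_n, factorial(n))  # 1 120 120
```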

Suppose we want to count only the number of ways to choose \(k \) items out of \(n\), that is, we don't care about order. In example 1.2.1, we counted the number of rolls of three dice with different numbers showing. The dice were distinguishable, or in a particular order: a first die, a second, and a third. Now we want to count simply how many combinations of numbers there are, with 6, 4, 1 now counting as the same combination as 4, 6, 1.

Example \(\PageIndex{3}\)

Suppose we were to list all 120 possibilities in example 1.2.1. The list would contain many outcomes that we now wish to count as a single outcome; 6, 4, 1 and 4, 6, 1 would be on the list, but should not be counted separately. How many times will a single outcome appear on the list? This is a permutation problem: there are \(3! \) orders in which 1, 4, 6 can appear, and all 6 of these will be on the list. In fact every outcome will appear on the list 6 times, since every outcome can appear in \(3! \) orders. Hence, the list is too big by a factor of 6; the correct count for the new problem is \(120/6=20\).

Following the same reasoning in general, if we have \(n \) objects, the number of ways to choose \(k \) of them is \(P(n,k)/k!\), as each collection of \(k \) objects will be counted \(k! \) times by \(P(n,k)\).
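The division by \(k!\) can be seen concretely for the dice example. A brute-force Python check (an illustrative aside): the \(120\) ordered outcomes collapse to \(20\) unordered ones, and \(120/3!=20\).

```python
from itertools import combinations, permutations
from math import factorial

# Ordered selections of 3 values out of 6, and unordered selections.
ordered = sum(1 for _ in permutations(range(1, 7), 3))
unordered = sum(1 for _ in combinations(range(1, 7), 3))
# Each unordered selection is counted 3! times among the ordered ones.
print(ordered, unordered, ordered // factorial(3))  # 120 20 20
```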

Definition: combinations

The number of subsets of size \(k \) of a set of size \(n \) (also called an \(n\)-set) is $$C(n,k)={P(n,k)\over k!}={n!\over k!(n-k)!}={n\choose k}.$$ The notation \(C(n,k) \) is rarely used; instead we use \({n\choose k}\), pronounced "\(n \) choose \(k\)''.

Example \(\PageIndex{4}\)

Consider \(n=0,1,2,3\). It is easy to list the subsets of a small \(n\)-set; a typical \(n\)-set is \(\{a_1,a_2,\ldots,a_n\}\). A \(0\)-set, namely the empty set, has one subset, the empty set; a \(1\)-set has two subsets, the empty set and \(\{a_1\}\); a \(2\)-set has four subsets, \(\emptyset\), \(\{a_1\}\), \(\{a_2\}\), \(\{a_1,a_2\}\); and a \(3\)-set has eight: \(\emptyset\), \(\{a_1\}\), \(\{a_2\}\), \(\{a_3\}\), \(\{a_1,a_2\}\), \(\{a_1,a_3\}\), \(\{a_2,a_3\}\), \(\{a_1,a_2,a_3\}\). From these lists it is then easy to compute \({n\choose k}\):

| \(n\) | \(k=0\) | \(k=1\) | \(k=2\) | \(k=3\) |
|---|---|---|---|---|
| 0 | 1 |   |   |   |
| 1 | 1 | 1 |   |   |
| 2 | 1 | 2 | 1 |   |
| 3 | 1 | 3 | 3 | 1 |

You probably recognize these numbers: this is the beginning of **Pascal's Triangle**. Each entry in Pascal's triangle is generated by adding two entries from the previous row: the one directly above, and the one above and to the left. This suggests that \({n\choose k}={n-1\choose k-1}+{n-1\choose k}\), and indeed this is true. To make this work out neatly, we adopt the convention that \({n\choose k}=0 \) when \(k< 0 \) or \(k>n\).
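The row-by-row construction can be sketched in a few lines of Python (an illustrative aside; `pascal_row` is a hypothetical helper, not from the text): padding each row with zeros implements the convention \({n\choose k}=0\) for \(k<0\) or \(k>n\).

```python
def pascal_row(prev):
    # Each entry is the sum of the entry above-left and the entry above;
    # the zero padding encodes C(n, k) = 0 when k < 0 or k > n.
    return [a + b for a, b in zip([0] + prev, prev + [0])]

row = [1]  # row n = 0
for _ in range(3):
    row = pascal_row(row)
print(row)  # [1, 3, 3, 1]
```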

Theorem 1.2.7

\(\displaystyle{n\choose k}={n-1\choose k-1}+{n-1\choose k}\).

Proof

A typical \(n\)-set is \(A=\{a_1,\ldots,a_n\}\). We consider two types of subsets: those that contain \(a_n \) and those that do not. If a \(k\)-subset of \(A \) does not contain \(a_n\), then it is a \(k\)-subset of \(\{a_1,\ldots,a_{n-1}\}\), and there are \({n-1\choose k} \) of these. If it does contain \(a_n\), then it consists of \(a_n \) and \(k-1 \) elements of \(\{a_1,\ldots,a_{n-1}\}\); since there are \({n-1\choose k-1} \) of these, there are \({n-1\choose k-1} \) subsets of this type. Thus the total number of \(k\)-subsets of \(A \) is \({n-1\choose k-1}+{n-1\choose k}\).

Note that when \(k=0\), \({n-1\choose k-1}={n-1\choose -1}=0\), and when \(k=n\), \({n-1\choose k}={n-1\choose n}=0\), so that \({n\choose 0}={n-1\choose 0} \) and \({n\choose n}={n-1\choose n-1}\). These values are the boundary ones in Pascal's Triangle.

\(\square\)
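The split in the proof, into subsets that contain the last element and subsets that do not, is easy to verify by listing for a small case (an illustrative aside in Python, with \(n=6\), \(k=3\)):

```python
from itertools import combinations

n, k = 6, 3
A = list(range(1, n + 1))
subsets = list(combinations(A, k))
# Split the k-subsets by whether they contain the last element a_n.
with_last = [s for s in subsets if n in s]       # counted by C(n-1, k-1)
without_last = [s for s in subsets if n not in s]  # counted by C(n-1, k)
print(len(with_last), len(without_last), len(subsets))  # 10 10 20
```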

Many counting problems rely on the sort of reasoning we have seen. Here are a few variations on the theme.

Example \(\PageIndex{5}\)

Six people are to sit at a round table; how many seating arrangements are there?

**Solution**

It is not clear exactly what we mean to count here. If there is a "special seat'', for example, it may matter who ends up in that seat. If this doesn't matter, we only care about the relative position of each person. Then it may or may not matter whether a certain person is on the left or right of another. So this question can be interpreted in (at least) three ways. Let's answer them all.

First, if the actual chairs occupied by people matter, then this is exactly the same as lining six people up in a row: 6 choices for seat number one, 5 for seat two, and so on, for a total of \(6!\). If the chairs don't matter, then \(6! \) counts the same arrangement too many times, once for each person who might be in seat one. So the total in this case is \(6!/6=5!\). Another approach to this: since the actual seats don't matter, just put one of the six people in a chair. Then we need to arrange the remaining 5 people in a row, which can be done in \(5! \) ways. Finally, suppose all we care about is who is next to whom, ignoring right and left. Then the previous answer counts each arrangement twice, once for the counterclockwise order and once for clockwise. So the total is \(5!/2=60\).
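All three interpretations can be checked by brute force (an illustrative aside in Python; `canonical` is a hypothetical helper): we pick one representative per rotation class, and optionally per reflection class, and count the classes.

```python
from itertools import permutations

def canonical(seating, with_reflection=False):
    # Represent a circular seating by the lexicographically smallest of
    # its rotations (and, optionally, reflections), so that equivalent
    # seatings compare equal.
    rots = [seating[i:] + seating[:i] for i in range(len(seating))]
    if with_reflection:
        rots += [r[::-1] for r in rots]
    return min(rots)

seatings = list(permutations(range(6)))
print(len(seatings))                                                # 720  (seats matter: 6!)
print(len({canonical(s) for s in seatings}))                        # 120  (rotations ignored: 5!)
print(len({canonical(s, with_reflection=True) for s in seatings}))  # 60   (and reflections: 5!/2)
```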

We have twice seen a general principle at work: if we can overcount the desired set in such a way that every item gets counted the same number of times, we can get the desired count just by dividing by the common overcount factor. This will continue to be a useful idea. A variation on this theme is to overcount and then *subtract* the amount of overcount.

Example \(\PageIndex{6}\)

How many ways are there to line up six people so that a particular pair of people are not adjacent?

**Solution**

Denote the people \(A \) and \(B\). The total number of orders is \(6!\), but this counts those orders with \(A \) and \(B \) next to each other. How many of these are there? Think of these two people as a unit; how many ways are there to line up the \(AB \) unit with the other 4 people? We have 5 items, so the answer is \(5!\). Each of these orders corresponds to two different orders in which \(A \) and \(B \) are adjacent, depending on whether \(A \) or \(B \) is first. So the \(6! \) count is too high by \(2\cdot5! \) and the count we seek is \(6!-2\cdot 5!=4\cdot5!\).
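The overcount-and-subtract computation checks out against a direct listing (an illustrative aside in Python): of the \(720\) orders, \(2\cdot 5!=240\) place \(A\) and \(B\) side by side, leaving \(480=4\cdot 5!\).

```python
from itertools import permutations

def adjacent(order, a, b):
    # True if a and b occupy neighboring positions in the lineup.
    i, j = order.index(a), order.index(b)
    return abs(i - j) == 1

orders = list(permutations("ABCDEF"))
apart = [o for o in orders if not adjacent(o, "A", "B")]
print(len(orders), len(orders) - len(apart), len(apart))  # 720 240 480
```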