1.4: Combinatorial Proofs
 Contributed by Oscar Levin
 Associate Professor (Mathematics) at University of Northern Colorado
Investigate!

The Stanley Cup is decided in a best of 7 tournament between two teams. In how many ways can your team win? Let's answer this question two ways:
 How many of the 7 games does your team need to win? How many ways can this happen?
 What if the tournament goes all 7 games? So you win the last game. How many ways can the first 6 games go down?
 What if the tournament goes just 6 games? How many ways can this happen? What about 5 games? 4 games?
 What are the two different ways to compute the number of ways your team can win? Write down an equation involving binomial coefficients (that is, \({n \choose k}\)'s). What pattern in Pascal's triangle is this an example of?

Generalize. What if the rules changed and you played a best of \(9\) tournament (5 wins required)? What if you played an \(n\) game tournament with \(k\) wins required to be named champion?
Patterns in Pascal's Triangle
Have a look again at Pascal's triangle. Forget for a moment where it comes from. Just look at it as a mathematical object. What do you notice?
There are lots of patterns hidden away in the triangle, enough to fill a reasonably sized book. Here are just a few of the most obvious ones:
 The entries on the border of the triangle are all 1.
 Any entry not on the border is the sum of the two entries above it.
 The triangle is symmetric. In any row, entries on the left side are mirrored on the right side.
 The sum of all entries on a given row is a power of 2. (You should check this!)
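The last observation really is worth checking. As a small aside (not part of the original text), here is a Python sketch that prints the first few rows along with their sums, computing each entry directly as a binomial coefficient:

```python
from math import comb

def pascal_row(n):
    """Row n of Pascal's triangle: [C(n,0), C(n,1), ..., C(n,n)]."""
    return [comb(n, k) for k in range(n + 1)]

# Print the first five rows, starting from row 0 at the top.
for n in range(5):
    row = pascal_row(n)
    print(row, "sum =", sum(row))   # each row sum is a power of 2
```

Row 4 prints as `[1, 4, 6, 4, 1]` with sum 16, matching both the symmetry and the power-of-2 observations.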
We would like to state these observations in a more precise way, and then prove that they are correct. Now each entry in Pascal's triangle is in fact a binomial coefficient. The 1 on the very top of the triangle is \({0 \choose 0}\). The next row (which we will call row 1, even though it is not the topmost row) consists of \({1 \choose 0}\) and \({1 \choose 1}\). Row 4 (the row 1, 4, 6, 4, 1) consists of the binomial coefficients
\begin{equation*} {4 \choose 0} ~~ {4 \choose 1} ~~ {4 \choose 2} ~~ {4 \choose 3} ~~ {4 \choose 4}. \end{equation*}Given this description of the elements in Pascal's triangle, we can rewrite the above observations as follows:
 \({n \choose 0} = 1\) and \({n \choose n} = 1\).
 \({n \choose k} = {n-1 \choose k-1} + {n-1 \choose k}\).
 \({n \choose k} = {n \choose n-k}\).
 \({n\choose 0} + {n \choose 1} + {n \choose 2} + \cdots + {n \choose n} = 2^n\).
Each of these is an example of a binomial identity: an identity (i.e., equation) involving binomial coefficients.
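Before establishing them in general, all four identities can be spot-checked numerically. A quick sketch (an aside, using Python's `math.comb`):

```python
from math import comb

# Spot-check the four observed identities for small n and all valid k.
for n in range(1, 11):
    assert comb(n, 0) == 1 and comb(n, n) == 1                         # identity 1
    for k in range(1, n):
        assert comb(n, k) == comb(n - 1, k - 1) + comb(n - 1, k)       # identity 2
    for k in range(n + 1):
        assert comb(n, k) == comb(n, n - k)                            # identity 3
    assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n             # identity 4
print("all four identities hold for n = 1..10")
```

Of course, passing for small \(n\) is evidence, not proof; that is what the rest of the section supplies.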
Our goal is to establish these identities. We wish to prove that they hold for all values of \(n\) and \(k\). These proofs can be done in many ways. One option would be to give algebraic proofs, using the formula for \({n \choose k}\text{:}\)
\begin{equation*} {n \choose k} = \frac{n!}{(n-k)!\,k!}. \end{equation*}Here's how you might do that for the second identity above.
Example \(\PageIndex{1}\)
Give an algebraic proof for the binomial identity
\begin{equation*} {n \choose k} = {n-1\choose k-1} + {n-1 \choose k}. \end{equation*}
 Solution

Proof
By the definition of \({n \choose k}\), we have
\begin{equation*} {n-1 \choose k-1} = \frac{(n-1)!}{(n-1-(k-1))!\,(k-1)!} = \frac{(n-1)!}{(n-k)!\,(k-1)!} \end{equation*}and
\begin{equation*} {n-1 \choose k} = \frac{(n-1)!}{(n-1-k)!\,k!}. \end{equation*}Thus, starting with the right-hand side of the equation:
\begin{align*} {n-1 \choose k-1} + {n-1 \choose k} & = \frac{(n-1)!}{(n-k)!\,(k-1)!}+ \frac{(n-1)!}{(n-1-k)!\,k!}\\ & = \frac{(n-1)!\,k}{(n-k)!\,k!} + \frac{(n-1)!\,(n-k)}{(n-k)!\,k!}\\ & = \frac{(n-1)!\,(k+n-k)}{(n-k)!\,k!}\\ & = \frac{n!}{(n-k)!\, k!}\\ & = {n \choose k}. \end{align*}The second line (where the common denominator is found) works because \(k(k-1)! = k!\) and \((n-k)(n-k-1)! = (n-k)!\).
\(\square\)
This is certainly a valid proof, but also is entirely useless. Even if you understand the proof perfectly, it does not tell you why the identity is true. A better approach would be to explain what \({n \choose k}\) means and then say why that is also what \({n-1 \choose k-1} + {n-1 \choose k}\) means. Let's see how this works for the four identities we observed above.
Example \(\PageIndex{2}\)
Explain why \({n \choose 0} = 1\) and \({n \choose n} = 1\).
 Solution

What do these binomial coefficients tell us? Well, \({n \choose 0}\) gives the number of ways to select 0 objects from a collection of \(n\) objects. There is only one way to do this, namely to not select any of the objects. Thus \({n \choose 0} = 1\). Similarly, \({n \choose n}\) gives the number of ways to select \(n\) objects from a collection of \(n\) objects. There is only one way to do this: select all \(n\) objects. Thus \({n \choose n} = 1\).
Alternatively, we know that \({n \choose 0}\) is the number of \(n\)-bit strings with weight 0. There is only one such string, the string of all 0's. So \({n \choose 0} = 1\). Similarly \({n \choose n}\) is the number of \(n\)-bit strings with weight \(n\). There is only one string with this property, the string of all 1's.
Another way: \({n \choose 0}\) gives the number of subsets of a set of size \(n\) containing 0 elements. There is only one such subset, the empty set. \({n \choose n}\) gives the number of subsets containing \(n\) elements. The only such subset is the original set (of all elements).
Example \(\PageIndex{3}\)
Explain why \({n \choose k} = {n-1 \choose k-1} + {n-1 \choose k}\).
 Solution

The easiest way to see this is to consider bit strings. \({n \choose k}\) is the number of bit strings of length \(n\) containing \(k\) 1's. Of all of these strings, some start with a 1 and the rest start with a 0. First consider all the bit strings which start with a 1. After the 1, there must be \(n-1\) more bits (to get the total length up to \(n\)) and exactly \(k-1\) of them must be 1's (as we already have one, and we need \(k\) total). How many strings are there like that? There are exactly \({n-1 \choose k-1}\) such bit strings, so of all the length \(n\) bit strings containing \(k\) 1's, \({n-1 \choose k-1}\) of them start with a 1. Similarly, there are \({n-1\choose k}\) which start with a 0 (we still need \(n-1\) bits and now \(k\) of them must be 1's). Since there are \({n-1 \choose k}\) bit strings containing \(n-1\) bits with \(k\) 1's, that is the number of length \(n\) bit strings with \(k\) 1's which start with a 0. Therefore \({n \choose k} = {n-1\choose k-1} + {n-1 \choose k}\).
Another way: consider the question, how many ways can you select \(k\) pizza toppings from a menu containing \(n\) choices? One way to do this is just \({n \choose k}\). Another way to answer the same question is to first decide whether or not you want anchovies. If you do want anchovies, you still need to pick \(k-1\) toppings, now from just \(n-1\) choices. That can be done in \({n-1 \choose k-1}\) ways. If you do not want anchovies, then you still need to select \(k\) toppings from \(n-1\) choices (the anchovies are out). You can do that in \({n-1 \choose k}\) ways. Since the choices with anchovies are disjoint from the choices without anchovies, the total choices are \({n-1 \choose k-1}+{n-1 \choose k}\). But wait. We answered the same question in two different ways, so the two answers must be the same. Thus \({n \choose k} = {n-1\choose k-1} + {n-1 \choose k}\).
You can also explain (prove) this identity by counting subsets, or even lattice paths.
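The bit string argument can also be replayed by brute force. Here is a small sketch (an aside, for one choice of \(n\) and \(k\)) that enumerates the weight-\(k\) strings and splits them by their first bit:

```python
from itertools import product
from math import comb

n, k = 5, 2
# All length-n bit strings with exactly k 1's.
strings = ["".join(s) for s in product("01", repeat=n) if s.count("1") == k]
start_with_1 = [s for s in strings if s[0] == "1"]
start_with_0 = [s for s in strings if s[0] == "0"]

# The two groups have sizes C(n-1, k-1) and C(n-1, k), and together give C(n, k).
assert len(start_with_1) == comb(n - 1, k - 1)   # 4 strings
assert len(start_with_0) == comb(n - 1, k)       # 6 strings
assert len(strings) == comb(n, k)                # 10 strings
```

The two groups are disjoint and cover everything, which is exactly why the two binomial coefficients add up.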
Example \(\PageIndex{4}\)
Prove the binomial identity \[{n \choose k} = {n \choose n-k}. \nonumber\]
 Solution

Why is this true? \({n \choose k}\) counts the number of ways to select \(k\) things from \(n\) choices. On the other hand, \({n \choose n-k}\) counts the number of ways to select \(n-k\) things from \(n\) choices. Are these really the same? Well, what if instead of selecting the \(n-k\) things you choose to exclude them? How many ways are there to choose \(n-k\) things to exclude from \(n\) choices? Clearly this is \({n \choose n-k}\) as well (it doesn't matter whether you include or exclude the things once you have chosen them). And if you exclude \(n-k\) things, then you are including the other \(k\) things. So the set of outcomes should be the same.
Let's try the pizza counting example like we did above. How many ways are there to pick \(k\) toppings from a list of \(n\) choices? On the one hand, the answer is simply \({n \choose k}\). Alternatively, you could make a list of all the toppings you don't want. To end up with a pizza containing exactly \(k\) toppings, you need to pick \(n-k\) toppings to not put on the pizza. You have \({n \choose n-k}\) choices for the toppings you don't want. Both of these ways give you a pizza with \(k\) toppings, in fact all the ways to get a pizza with \(k\) toppings. Thus these two answers must be the same: \({n \choose k} = {n \choose n-k}\).
You can also prove (explain) this identity using bit strings, subsets, or lattice paths. The bit string argument is nice: \({n \choose k}\) counts the number of bit strings of length \(n\) with \(k\) 1's. This is also the number of bit strings of length \(n\) with \(k\) 0's (just replace each 1 with a 0 and each 0 with a 1). But if a string of length \(n\) has \(k\) 0's, it must have \(n-k\) 1's. And there are exactly \({n\choose n-k}\) strings of length \(n\) with \(n-k\) 1's.
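That replace-each-bit argument is a bijection, and it can be demonstrated directly. A quick sketch (an aside, for one choice of \(n\) and \(k\)):

```python
from itertools import product
from math import comb

n, k = 6, 2
all_strings = ["".join(bits) for bits in product("01", repeat=n)]
weight_k = {s for s in all_strings if s.count("1") == k}
weight_nk = {s for s in all_strings if s.count("1") == n - k}

# Flipping every bit sends each weight-k string to a distinct weight-(n-k) string,
# and hits every weight-(n-k) string exactly once: a bijection.
flip = str.maketrans("01", "10")
assert {s.translate(flip) for s in weight_k} == weight_nk
assert len(weight_k) == len(weight_nk) == comb(n, k)
```

Since the map pairs the two sets off exactly, the two counts must agree, which is the identity.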
Example \(\PageIndex{5}\)
Prove the binomial identity \[{n\choose 0} + {n \choose 1} + {n\choose 2} + \cdots + {n \choose n} = 2^n. \nonumber\]
 Solution

Proof
Let's do a “pizza proof” again. We need to find a question about pizza toppings which has \(2^n\) as the answer. How about this: If a pizza joint offers \(n\) toppings, how many pizzas can you build using any number of toppings from no toppings to all toppings, using each topping at most once?
On one hand, the answer is \(2^n\). For each topping you can say “yes” or “no,” so you have two choices for each topping.
On the other hand, divide the possible pizzas into disjoint groups: the pizzas with no toppings, the pizzas with one topping, the pizzas with two toppings, etc. If we want no toppings, there is only one pizza like that (the empty pizza, if you will) but it would be better to think of that number as \({n \choose 0}\) since we choose 0 of the \(n\) toppings. How many pizzas have 1 topping? We need to choose 1 of the \(n\) toppings, so \({n \choose 1}\). We have:
 Pizzas with 0 toppings: \({n \choose 0}\)
 Pizzas with 1 topping: \({n \choose 1}\)
 Pizzas with 2 toppings: \({n \choose 2}\)
 \(\vdots\)
 Pizzas with \(n\) toppings: \({n \choose n}\).
The total number of possible pizzas will be the sum of these, which is exactly the left-hand side of the identity we are trying to prove.
\(\square\)
Again, we could have proved the identity using subsets, bit strings, or lattice paths (although the lattice path argument is a little tricky).
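The subset version of this count is easy to mirror in code: enumerate every possible pizza (equivalently, every subset of the \(n\) toppings) and group by size. A sketch, as an aside:

```python
from itertools import combinations
from math import comb

n = 5
toppings = range(n)

# Group the pizzas by how many toppings they use: C(n, r) pizzas of each size r.
by_size = [len(list(combinations(toppings, r))) for r in range(n + 1)]
assert by_size == [comb(n, r) for r in range(n + 1)]

# Summing over all sizes counts every pizza exactly once, giving 2^n in total.
assert sum(by_size) == 2 ** n
```

For \(n = 5\) the group sizes are 1, 5, 10, 10, 5, 1, summing to \(32 = 2^5\).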
Hopefully this gives some idea of how explanatory proofs of binomial identities can go. It is worth pointing out that more traditional proofs can also be beautiful. (Most every binomial identity can be proved using mathematical induction, using the recursive definition for \({n \choose k}\); we will discuss induction in Section 2.5.) For example, consider the following rather slick proof of the last identity.
Expand the binomial \((x+y)^n\text{:}\)
\begin{equation*} (x + y)^n = {n \choose 0}x^n + {n \choose 1}x^{n-1}y + {n \choose 2}x^{n-2}y^2 + \cdots + {n \choose n-1}x\, y^{n-1} + {n \choose n}y^n. \end{equation*}Let \(x = 1\) and \(y = 1\). We get:
\begin{equation*} (1 + 1)^n = {n \choose 0}1^n + {n \choose 1}1^{n-1}\cdot 1 + {n \choose 2}1^{n-2}\cdot 1^2 + \cdots + {n \choose n-1}1\cdot 1^{n-1} + {n \choose n}1^n. \end{equation*}Of course this simplifies to:
\begin{equation*} (2)^n = {n \choose 0} + {n \choose 1} + {n \choose 2} + \cdots + {n \choose n-1} + {n \choose n}. \end{equation*}Something fun to try: Let \(x = 1\) and \(y = 2\). Neat huh?
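The expansion itself is easy to sanity-check numerically before plugging in special values. A short sketch (the sample values 3 and 5 are our choice, not the text's), leaving the \(x = 1\), \(y = 2\) experiment to you:

```python
from math import comb

def expand(x, y, n):
    """Right-hand side of the binomial theorem: sum of C(n,k) * x^(n-k) * y^k."""
    return sum(comb(n, k) * x ** (n - k) * y ** k for k in range(n + 1))

for n in range(8):
    assert expand(3, 5, n) == (3 + 5) ** n   # generic sample values
    assert expand(1, 1, n) == 2 ** n         # the x = y = 1 specialization above
```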
More Proofs
The explanatory proofs given in the above examples are typically called combinatorial proofs. In general, to give a combinatorial proof for a binomial identity, say \(A = B\) you do the following:
 Find a counting problem you will be able to answer in two ways.
 Explain why one answer to the counting problem is \(A\).
 Explain why the other answer to the counting problem is \(B\).
Since both \(A\) and \(B\) are the answers to the same question, we must have \(A = B\).
The tricky thing is coming up with the question. This is not always obvious, but it gets easier the more counting problems you solve. You will start to recognize types of answers as the answers to types of questions. More often what will happen is you will be solving a counting problem and happen to think up two different ways of finding the answer. Now you have a binomial identity and the proof is right there. The proof is the problem you just solved together with your two solutions.
For example, consider this counting question:
How many 10letter words use exactly four A's, three B's, two C's and one D?
Let's try to solve this problem. We have 10 spots for letters to go. Four of those need to be A's. We can pick the four A-spots in \({10 \choose 4}\) ways. Now where can we put the B's? Well, there are only 6 spots left, and we need to pick \(3\) of them. This can be done in \({6 \choose 3}\) ways. The two C's need to go in two of the 3 remaining spots, so we have \({3 \choose 2}\) ways of doing that. That leaves just one spot for the D, but we could write that 1 choice as \({1 \choose 1}\). Thus the answer is:
\begin{equation*} {10 \choose 4}{6 \choose 3}{3 \choose 2}{1 \choose 1}. \end{equation*}But why stop there? We can find the answer another way too. First let's decide where to put the one D: we have 10 spots, we need to choose 1 of them, so this can be done in \({10 \choose 1}\) ways. Next, choose one of the \({9 \choose 2}\) ways to place the two C's. We now have \(7\) spots left, and three of them need to be filled with B's. There are \({7 \choose 3}\) ways to do this. Finally the A's can be placed in \({4 \choose 4}\) (that is, only one) ways. So another answer to the question is
\begin{equation*} {10 \choose 1}{9 \choose 2}{7 \choose 3}{4 \choose 4}. \end{equation*}Interesting. This gives us the binomial identity:
\begin{equation*} {10 \choose 4}{6 \choose 3}{3 \choose 2}{1 \choose 1} = {10 \choose 1}{9 \choose 2}{7 \choose 3}{4 \choose 4}. \end{equation*}Here are a couple of other binomial identities with combinatorial proofs.
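As a sanity check of the word-counting identity just derived (an aside): both products can be evaluated directly, and they also match the single multinomial count \(10!/(4!\,3!\,2!\,1!)\).

```python
from math import comb, factorial

lhs = comb(10, 4) * comb(6, 3) * comb(3, 2) * comb(1, 1)   # place A's, then B's, C's, D
rhs = comb(10, 1) * comb(9, 2) * comb(7, 3) * comb(4, 4)   # place D, then C's, B's, A's
multinomial = factorial(10) // (factorial(4) * factorial(3) * factorial(2) * factorial(1))

assert lhs == rhs == multinomial == 12600
```

So there are 12600 such words, no matter which order you place the letters in.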
Example \(\PageIndex{6}\)
Prove the identity
\begin{equation*} 1\cdot n + 2(n-1) + 3(n-2) + \cdots + (n-1)\cdot 2 + n\cdot 1 = {n+2 \choose 3}. \end{equation*}
 Solution

To give a combinatorial proof we need to think up a question we can answer in two ways: one way needs to give the left-hand side of the identity, the other way needs to give the right-hand side of the identity. Our clue to what question to ask comes from the right-hand side: \({n+2 \choose 3}\) counts the number of ways to select 3 things from a group of \(n+2\) things. Let's name those things \(1, 2, 3, \ldots, n+2\). In other words, we want to find 3-element subsets of those numbers (since order should not matter, subsets are exactly the right thing to think about). We will have to be a bit clever to explain why the left-hand side also gives the number of these subsets. Here's the proof.
Proof
Consider the question “How many 3-element subsets are there of the set \(\{1,2,3,\ldots, n+2\}\text{?}\)” We answer this in two ways:
Answer 1: We must select 3 elements from the collection of \(n+2\) elements. This can be done in \({n+2 \choose 3}\) ways.
Answer 2: Break this problem up into cases by what the middle number in the subset is. Say each subset is \(\{a,b,c\}\) written in increasing order. We count the number of subsets for each distinct value of \(b\). The smallest possible value of \(b\) is \(2\), and the largest is \(n+1\).
When \(b = 2\), there are \(1 \cdot n\) subsets: 1 choice for \(a\) and \(n\) choices (3 through \(n+2\)) for \(c\).
When \(b = 3\), there are \(2 \cdot (n-1)\) subsets: 2 choices for \(a\) and \(n-1\) choices for \(c\).
When \(b = 4\), there are \(3 \cdot (n-2)\) subsets: 3 choices for \(a\) and \(n-2\) choices for \(c\).
And so on. When \(b = n+1\), there are \(n\) choices for \(a\) and only 1 choice for \(c\), so \(n \cdot 1\) subsets.
Therefore the total number of subsets is
\begin{equation*} 1\cdot n + 2(n-1) + 3(n-2) + \cdots + (n-1)\cdot 2 + n\cdot 1. \end{equation*}Since Answer 1 and Answer 2 are answers to the same question, they must be equal. Therefore
\begin{equation*} 1\cdot n + 2(n-1) + 3(n-2) + \cdots + (n-1)\cdot 2 + n\cdot 1 = {n+2 \choose 3}. \end{equation*}\(\square\)
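Answer 2's case analysis can be replayed by machine: list the 3-element subsets of \(\{1,\ldots,n+2\}\) and tally them by middle element. A sketch (an aside, for one value of \(n\)):

```python
from itertools import combinations
from math import comb

n = 6
# combinations() yields tuples in increasing order, so s = (a, b, c) with a < b < c.
subsets = list(combinations(range(1, n + 3), 3))   # 3-element subsets of {1, ..., n+2}

# For each middle value b there are (b-1) choices for a and (n+2-b) choices for c.
for b in range(2, n + 2):
    with_middle_b = [s for s in subsets if s[1] == b]
    assert len(with_middle_b) == (b - 1) * (n + 2 - b)

assert len(subsets) == comb(n + 2, 3)
```

For \(n = 6\) the middle-element counts are \(1\cdot 6, 2\cdot 5, \ldots, 6\cdot 1\), summing to \({8 \choose 3} = 56\).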
Example \(\PageIndex{7}\)
Prove the binomial identity
\begin{equation*} {n \choose 0}^2 + {n \choose 1}^2 + {n \choose 2}^2 + \cdots + {n \choose n}^2 = {2n \choose n}. \end{equation*}
 Solution

We will give two different proofs of this fact. The first will be very similar to the previous example (counting subsets). The second proof is a little slicker, using lattice paths.
Proof
Consider the question: “How many pizzas can you make using \(n\) toppings when there are \(2n\) toppings to choose from?”
Answer 1: There are \(2n\) toppings, from which you must choose \(n\). This can be done in \({2n \choose n}\) ways.
Answer 2: Divide the toppings into two groups of \(n\) toppings (perhaps \(n\) meats and \(n\) veggies). Any choice of \(n\) toppings must include some number from the first group and some number from the second group. Consider each possible number of meat toppings separately:
0 meats: \({n \choose 0}{n \choose n}\), since you need to choose 0 of the \(n\) meats and \(n\) of the \(n\) veggies.
1 meat: \({n \choose 1}{n \choose n-1}\), since you need 1 of the \(n\) meats, so \(n-1\) of the \(n\) veggies.
2 meats: \({n \choose 2}{n \choose n-2}\). Choose 2 meats and the remaining \(n-2\) toppings from the \(n\) veggies.
And so on. The last case is \(n\) meats, which can be done in \({n \choose n}{n \choose 0}\) ways.
Thus the total number of pizzas possible is
\begin{equation*} {n \choose 0}{n \choose n} + {n \choose 1}{n \choose n-1} + {n \choose 2}{n \choose n-2} + \cdots + {n \choose n}{n \choose 0}. \end{equation*}This is not quite the left-hand side … yet. Notice that \({n \choose n} = {n \choose 0}\) and \({n \choose n-1} = {n \choose 1}\) and so on, by the identity in Example 1.4.4. Thus we do indeed get
\begin{equation*} {n \choose 0}^2 + {n \choose 1}^2 + {n \choose 2}^2 + \cdots + {n \choose n}^2. \end{equation*}Since these two answers are answers to the same question, they must be equal, and thus
\begin{equation*} {n \choose 0}^2 + {n \choose 1}^2 + {n \choose 2}^2 + \cdots + {n \choose n}^2 = {2n \choose n}. \end{equation*}
\(\square\)
For an alternative proof, we use lattice paths. This is reasonable to consider because the right-hand side of the identity reminds us of the number of paths from \((0,0)\) to \((n,n)\).
Proof
Consider the question: How many lattice paths are there from \((0,0)\) to \((n,n)\text{?}\)
Answer 1: We must travel \(2n\) steps, and \(n\) of them must be in the up direction. Thus there are \({2n \choose n}\) paths.
Answer 2: Note that any path from \((0,0)\) to \((n,n)\) must cross the line \(x + y = n\). That is, any path must pass through exactly one of the points: \((0,n)\), \((1,n-1)\), \((2,n-2)\), …, \((n, 0)\). For example, this is what happens in the case \(n = 4\text{:}\)
How many paths pass through \((0,n)\text{?}\) To get to that point, you must travel \(n\) units, and \(0\) of them are to the right, so there are \({n \choose 0}\) ways to get to \((0,n)\). From \((0,n)\) to \((n,n)\) takes \(n\) steps, and \(0\) of them are up. So there are \({n \choose 0}\) ways to get from \((0,n)\) to \((n,n)\). Therefore there are \({n \choose 0}{n \choose 0}\) paths from \((0,0)\) to \((n,n)\) through the point \((0,n)\).
What about through \((1,n-1)\)? There are \({n \choose 1}\) paths to get there (\(n\) steps, 1 to the right) and \({n \choose 1}\) paths to complete the journey to \((n,n)\) (\(n\) steps, \(1\) up). So there are \({n \choose 1}{n \choose 1}\) paths from \((0,0)\) to \((n,n)\) through \((1,n-1)\).
In general, to get to \((n,n)\) through the point \((k,n-k)\) we have \({n \choose k}\) paths to the midpoint and then \({n \choose k}\) paths from the midpoint to \((n,n)\). So there are \({n \choose k}{n \choose k}\) paths from \((0,0)\) to \((n,n)\) through \((k, n-k)\).
Altogether, then, the total number of paths from \((0,0)\) to \((n,n)\), each passing through exactly one of these midpoints, is
\begin{equation*} {n \choose 0}^2 + {n \choose 1}^2 + {n \choose 2}^2 + \cdots + {n \choose n}^2. \end{equation*}Since these two answers are answers to the same question, they must be equal, and thus
\begin{equation*} {n \choose 0}^2 + {n \choose 1}^2 + {n \choose 2}^2 + \cdots + {n \choose n}^2 = {2n \choose n}. \end{equation*}
\(\square\)
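The partition of paths by crossing point can also be verified exhaustively for a small \(n\). A sketch (an aside; we encode a path by which of its \(2n\) steps go up):

```python
from itertools import combinations
from math import comb

n = 4
counts = [0] * (n + 1)   # counts[k] = paths crossing x + y = n at the point (n-k, k)
for up_steps in combinations(range(2 * n), n):
    k = sum(1 for step in up_steps if step < n)   # up-steps among the first n steps
    counts[k] += 1

# Each crossing point (n-k, k) is hit by C(n,k)^2 paths, and the total is C(2n, n).
assert counts == [comb(n, k) ** 2 for k in range(n + 1)]
assert sum(counts) == comb(2 * n, n)
```

For \(n = 4\) the midpoint counts come out to \(1, 16, 36, 16, 1\), which sum to \({8 \choose 4} = 70\), exactly as the two answers in the proof predict.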