# 2.5: Induction


Mathematical induction is a proof technique, not unlike direct proof or proof by contradiction or combinatorial proof. (You might or might not be familiar with these yet; we will consider them in Chapter 3.) In other words, induction is a style of argument we use to convince ourselves and others that a mathematical statement is always true. Many mathematical statements can be proved by simply explaining what they mean. Others are very difficult to prove—in fact, there are relatively simple mathematical statements which nobody yet knows how to prove. To facilitate the discovery of proofs, it is important to be familiar with some standard styles of arguments. Induction is one such style. Let's start with an example:

## Stamps

Investigate!

You need to mail a package, but don't yet know how much postage you will need. You have a large supply of 8-cent stamps and 5-cent stamps. Which amounts of postage can you make exactly using these stamps? Which amounts are impossible to make?

Perhaps in investigating the problem above you picked some amounts of postage, and then figured out whether you could make that amount using just 8-cent and 5-cent stamps. Perhaps you did this in order: can you make 1 cent of postage? Can you make 2 cents? 3 cents? And so on. If this is what you did, you were actually answering a *sequence* of questions. We have methods for dealing with sequences. Let's see if that helps.

Actually, we will not make a sequence of questions, but rather a sequence of statements. Let \(P(n)\) be the statement “you can make \(n\) cents of postage using just 8-cent and 5-cent stamps.” Since for each value of \(n\text{,}\) \(P(n)\) is a statement, it is either true or false. So if we form the sequence of statements

\begin{equation*} P(1), P(2), P(3), P(4), \ldots \end{equation*}
the sequence will consist of \(T\)'s (for true) and \(F\)'s (for false). In our particular case the sequence starts

\begin{equation*} F,F,F,F,T,F,F,T,F,T,F,F,T,F,\ldots \end{equation*}
because \(P(1), P(2), P(3), P(4)\) are all false (you cannot make 1, 2, 3, or 4 cents of postage) but \(P(5)\) is true (use one 5-cent stamp), and so on.
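These first few values are easy to verify by brute force. Here is a minimal Python sketch (the function name `P` is our own, chosen to match the notation above) that decides \(P(n)\) by trying every combination of stamps:

```python
def P(n):
    """True if n cents of postage can be made from 5- and 8-cent stamps."""
    return any(5 * a + 8 * b == n
               for a in range(n // 5 + 1)
               for b in range(n // 8 + 1))

# Print the first fourteen values of the sequence P(1), P(2), P(3), ...
print("".join("T" if P(n) else "F" for n in range(1, 15)))
```
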

Let's think a bit about how we could find the value of \(P(n)\) for some specific \(n\) (the “value” will be either \(T\) or \(F\)). How did we find the value of the \(n\)th term of a sequence of numbers? How did we find \(a_n\text{?}\) There were two ways we could do this: either there was a closed formula for \(a_n\text{,}\) so we could plug \(n\) into the formula and get our output value, or we had a recursive definition for the sequence, so we could use the previous terms of the sequence to compute the \(n\)th term. When dealing with sequences of statements, we could use either of these techniques as well. Maybe there is a way to use \(n\) itself to determine whether we can make \(n\) cents of postage. That would be something like a closed formula. Or instead we could use the previous terms in the sequence (of statements) to determine whether we can make \(n\) cents of postage. That is, if we know the value of \(P(n-1)\text{,}\) can we get from that to the value of \(P(n)\text{?}\) That would be something like a recursive definition for the sequence. Remember, finding recursive definitions for sequences was often easier than finding closed formulas. The same is true here.

Suppose I told you that \(P(43)\) was true (it is). Can you determine from this fact the value of \(P(44)\) (whether it is true or false)? Yes you can. Even if we don't know how exactly we made 43 cents out of the 5-cent and 8-cent stamps, we do know that there was some way to do it. What if that way used at least three 5-cent stamps (making 15 cents)? We could replace those three 5-cent stamps with two 8-cent stamps (making 16 cents). The total postage has gone up by 1, so we have a way to make 44 cents, so \(P(44)\) is true. Of course, we assumed that we had at least three 5-cent stamps. What if we didn't? Then we must have at least three 8-cent stamps (making 24 cents). If we replace those three 8-cent stamps with five 5-cent stamps (making 25 cents) then again we have bumped up our total by 1 cent so we can make 44 cents, so \(P(44)\) is true.

Notice that we have not said how to make 44 cents, just that we can, on the basis that we can make 43 cents. How do we know we can make 43 cents? Perhaps because we know we can make \(42\) cents, which we know we can do because we know we can make 41 cents, and so on. It's a recursion! As with a recursive definition of a numerical sequence, we must specify our initial value. In this case, the initial value is “\(P(1)\) is false.” That's not good, since our recurrence relation just says that \(P(k+1)\) is true *if* \(P(k)\) is also true. We need to start the process with a true \(P(k)\). So instead, we might want to use “\(P(28)\) is true” as the initial condition.

Putting this all together we arrive at the following fact: it is possible to (exactly) make any amount of postage greater than 27 cents using just 5-cent and 8-cent stamps. (This is not claiming that there are no amounts less than 28 cents which can also be made.) In other words, \(P(k)\) is true for any \(k \ge 28\). To prove this, we could do the following:

- Demonstrate that \(P(28)\) is true.
- Prove that if \(P(k)\) is true, then \(P(k+1)\) is true (for any \(k \ge 28\)).

Suppose we have done this. Then we know that the 28th term of the sequence above is a \(T\) (using step 1, the initial condition or base case), and that every term after the 28th is \(T\) also (using step 2, the recursive part or inductive case). Here is what the proof would actually look like.

#### Proof

Let \(P(n)\) be the statement “it is possible to make exactly \(n\) cents of postage using 5-cent and 8-cent stamps.” We will show \(P(n)\) is true for all \(n \ge 28\).

First, we show that \(P(28)\) is true: \(28 = 4 \cdot 5+ 1\cdot 8\text{,}\) so we can make \(28\) cents using four 5-cent stamps and one 8-cent stamp.

Now suppose \(P(k)\) is true for some arbitrary \(k \ge 28\). Then it is possible to make \(k\) cents using 5-cent and 8-cent stamps. Note that since \(k \ge 28\text{,}\) it cannot be that we use fewer than three 5-cent stamps *and* fewer than three 8-cent stamps: using at most two of each would give at most 26 cents. Now if we have made \(k\) cents using at least three 5-cent stamps, replace three 5-cent stamps by two 8-cent stamps. This replaces 15 cents of postage with 16 cents, moving from a total of \(k\) cents to \(k+1\) cents. Thus \(P(k+1)\) is true. On the other hand, if we have made \(k\) cents using at least three 8-cent stamps, then we can replace three 8-cent stamps with five 5-cent stamps, moving from 24 cents to 25 cents, giving a total of \(k+1\) cents of postage. So in this case as well \(P(k+1)\) is true.

Therefore, by the principle of mathematical induction, \(P(n)\) is true for all \(n \ge 28\).
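The inductive step in this proof is constructive, so we can carry it out mechanically. Below is a short Python sketch (the pair representation `(fives, eights)` and the name `next_postage` are our own devices, not part of the proof) that applies the replacement argument repeatedly:

```python
def next_postage(fives, eights):
    """Given stamps making k >= 28 cents, return stamps making k + 1 cents.

    Mirrors the inductive case: swap three 5-cent stamps for two 8-cent
    stamps (+1 cent), or three 8-cent stamps for five 5-cent stamps (+1).
    """
    if fives >= 3:
        return fives - 3, eights + 2
    # Since 5*fives + 8*eights >= 28 and fives <= 2, we must have eights >= 3.
    return fives + 5, eights - 3

# Starting from 28 = 4*5 + 1*8, generate ways to make 28, 29, 30, ... cents.
fives, eights = 4, 1
for k in range(28, 40):
    assert 5 * fives + 8 * eights == k
    fives, eights = next_postage(fives, eights)
```
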

## Formalizing Proofs

What we did in the stamp example above works for many types of problems. Proof by induction is useful when trying to prove statements about all natural numbers, or all natural numbers greater than some fixed first case (like 28 in the example above), and in some other situations too. In particular, induction should be used when there is some way to go from one case to the next – when you can see how to always “do one more.”

This is a big idea. Thinking about a problem *inductively* can give new insight into the problem. For example, to really understand the stamp problem, you should think about how any amount of postage (at least 28 cents) can be made (this is non-inductive reasoning) and also how the ways in which postage can be made *changes* as the amount increases (inductive reasoning). When you are asked to provide a proof by induction, you are being asked to think about the problem *dynamically*: how does increasing \(n\) change the problem?

But there is another side to proofs by induction as well. In mathematics, it is not enough to understand a problem, you must also be able to communicate the problem to others. Like any discipline, mathematics has standard language and style, allowing mathematicians to share their ideas efficiently. Proofs by induction have a certain formal style, and being able to write in this style is important. It allows us to keep our ideas organized and might even help us with formulating a proof.

Here is the general structure of a proof by mathematical induction:

#### Induction Proof Structure

Start by saying what the statement is that you want to prove: “Let \(P(n)\) be the statement…” To prove that \(P(n)\) is true for all \(n \ge 0\text{,}\) you must prove two facts:

- Base case: Prove that \(P(0)\) is true. You do this directly. This is often easy.
- Inductive case: Prove that \(P(k) \Rightarrow P(k+1)\) for all \(k \ge 0\). That is, prove that for any \(k \ge 0\text{,}\) if \(P(k)\) is true, then \(P(k+1)\) is true as well. This is the proof of an if … then … statement, so you can assume \(P(k)\) is true (\(P(k)\) is called the *inductive hypothesis*). You must then explain why \(P(k+1)\) is also true, given that assumption.

Assuming you are successful on both parts above, you can conclude, “Therefore by the principle of mathematical induction, the statement \(P(n)\) is true for all \(n \ge 0\).”

Sometimes the statement \(P(n)\) will only be true for values of \(n \ge 4\text{,}\) for example, or some other value. In such cases, replace all the 0's above with 4's (or the other value).

The other advantage of formalizing inductive proofs is it allows us to verify that the logic behind this style of argument is valid. Why does induction work? Think of a row of dominoes set up standing on their edges. We want to argue that in a minute, all the dominoes will have fallen down. For this to happen, you will need to push the first domino. That is the base case. It will also have to be that the dominoes are close enough together that when any particular domino falls, it will cause the next domino to fall. That is the inductive case. If both of these conditions are met, then when you push the first domino over, each domino will cause the next to fall, and all the dominoes will fall.

Induction is powerful! Think how much easier it is to knock over dominoes when you don't have to push over each domino yourself. You just start the chain reaction and then rely on the relative nearness of the dominoes to take care of the rest.

Think about our study of sequences. It is easier to find recursive definitions for sequences than closed formulas. Going from one case to the next is easier than going directly to a particular case. That is what is so great about induction. Instead of going directly to the (arbitrary) case for \(n\text{,}\) we just need to say how to get from one case to the next.

When you are asked to prove a statement by mathematical induction, you should first think about *why* the statement is true, using inductive reasoning. Explain why induction is the right thing to do, and roughly why the inductive case will work. Then, sit down and write out a careful, formal proof using the structure above.

## Examples

Here are some examples of proof by mathematical induction.

Example \(\PageIndex{1}\)

Prove for each natural number \(n \ge 1\) that \(1 + 2 + 3 + \cdots + n = \frac{n(n+1)}{2}\).

**Answer**
First, let's think inductively about this equation. In fact, we know this is true for other reasons (reverse and add comes to mind). But why might induction be applicable? The left-hand side adds up the numbers from 1 to \(n\). If we know how to do that, adding just one more term (\(n+1\)) would not be that hard. For example, if \(n = 100\text{,}\) suppose we know that the sum of the first 100 numbers is \(5050\) (so \(1 + 2 + 3 + \cdots + 100 = 5050\text{,}\) which is true). Now to find the sum of the first 101 numbers, it makes more sense to just add 101 to 5050, instead of computing the entire sum again. We would have \(1 + 2 + 3 + \cdots + 100 + 101 = 5050 + 101 = 5151\). In fact, it would always be easy to add just one more term. This is why we should use induction.

Now the formal proof:

**Proof**

Let \(P(n)\) be the statement \(1 + 2 + 3 + \cdots + n = \frac{n(n+1)}{2}\). We will show that \(P(n)\) is true for all natural numbers \(n \ge 1\).

Base case: \(P(1)\) is the statement \(1 = \frac{1(1+1)}{2}\) which is clearly true.

Inductive case: Let \(k \ge 1\) be a natural number. Assume (for induction) that \(P(k)\) is true. That means \(1 + 2 + 3 + \cdots + k = \frac{k(k+1)}{2}\). We will prove that \(P(k+1)\) is true as well. That is, we must prove that \(1 + 2 + 3 + \cdots + k + (k+1) = \frac{(k+1)(k+2)}{2}\). To prove this equation, start by adding \(k+1\) to both sides of the inductive hypothesis:

\begin{equation*} 1 + 2 + 3 + \cdots + k + (k+1) = \frac{k(k+1)}{2} + (k+1). \end{equation*}
Now, simplifying the right side we get:

\begin{align*} \frac{k(k+1)}{2} + k+1 & = \frac{k(k+1)}{2} + \frac{2(k+1)}{2}\\ & = \frac{k(k+1) + 2(k+1)}{2}\\ & = \frac{(k+2)(k+1)}{2}. \end{align*}
Thus \(P(k+1)\) is true, so by the principle of mathematical induction \(P(n)\) is true for all natural numbers \(n \ge 1\).
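A numerical check is no substitute for the proof, but it is a good habit to spot-check a closed formula like this. A quick Python sketch:

```python
# Spot-check 1 + 2 + ... + n == n(n+1)/2 for the first few hundred n.
for n in range(1, 500):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
print("formula verified for n = 1, ..., 499")
```
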

Note that in the part of the proof in which we proved \(P(k+1)\) from \(P(k)\text{,}\) we used the equation \(P(k)\). This was the inductive hypothesis. Seeing how to use the inductive hypothesis is usually straightforward when proving a fact about a sum like this. In other proofs, it can be less obvious where it fits in.

Example \(\PageIndex{2}\)

Prove that for all \(n \in \mathbb{N}\text{,}\) \(6^n - 1\) is a multiple of 5.

**Solution**
Again, start by understanding the dynamics of the problem. What does increasing \(n\) do? Let's try with a few examples. If \(n = 1\text{,}\) then yes, \(6^1 - 1 = 5\) is a multiple of 5. What does incrementing \(n\) to 2 look like? We get \(6^2 - 1 = 35\text{,}\) which again is a multiple of 5. Next, \(n = 3\text{:}\) but instead of just finding \(6^3 - 1\text{,}\) what did the increase in \(n\) do? We will still subtract 1, but now we are multiplying by another 6 first. Viewed another way, we are multiplying a number which is one more than a multiple of 5 by 6 (because \(6^2 - 1\) is a multiple of 5, so \(6^2\) is one more than a multiple of 5). What do numbers which are one more than a multiple of 5 look like? They must have last digit 1 or 6. What happens when you multiply such a number by 6? It depends on the number, but in any case, the last digit of the new number must be a 6. And then if you subtract 1, you get last digit 5, so a multiple of 5.

The point is, every time we multiply by just one more six, we still get a number with last digit 6, so subtracting 1 gives us a multiple of 5. Now the formal proof:

#### Proof

Let \(P(n)\) be the statement, “\(6^n - 1\) is a multiple of 5.” We will prove that \(P(n)\) is true for all \(n \in \mathbb{N}\).

Base case: \(P(0)\) is true: \(6^0 -1 = 0\) which is a multiple of 5.

Inductive case: Let \(k\) be an arbitrary natural number. Assume, for induction, that \(P(k)\) is true. That is, \(6^k - 1\) is a multiple of \(5\). Then \(6^k - 1 = 5j\) for some integer \(j\). This means that \(6^k = 5j + 1\). Multiply both sides by \(6\text{:}\)

\begin{equation*} 6^{k+1} = 6(5j+1) = 30j + 6. \end{equation*}
But we want to know about \(6^{k+1} - 1\text{,}\) so subtract 1 from both sides:

\begin{equation*} 6^{k+1} - 1 = 30j + 5. \end{equation*}
Of course \(30j+5 = 5(6j+1)\text{,}\) so it is a multiple of 5.

Therefore \(6^{k+1} - 1\) is a multiple of 5, or in other words, \(P(k+1)\) is true. Thus, by the principle of mathematical induction \(P(n)\) is true for all \(n \in \mathbb{N}\).
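A quick computational spot-check of the claim, and of the bookkeeping \(6^{k+1} - 1 = 6(6^k - 1) + 5\) that powers the inductive step:

```python
# Check that 6**n - 1 is a multiple of 5 for the first few hundred n,
# and that the inductive-step identity 6**(n+1) - 1 == 6*(6**n - 1) + 5 holds.
for n in range(300):
    assert (6**n - 1) % 5 == 0
    assert 6**(n + 1) - 1 == 6 * (6**n - 1) + 5
```
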

We had to be a little bit clever (i.e., use some algebra) to locate the \(6^k - 1\) inside of \(6^{k+1} - 1\) before we could apply the inductive hypothesis. This is what can make inductive proofs challenging.

In the two examples above, we started with \(n = 1\) or \(n = 0\). We can start later if we need to.

Example \(\PageIndex{3}\)

Prove that \(n^2 < 2^n\) for all integers \(n \ge 5\).

**Solution**
First, the idea of the argument. What happens when we increase \(n\) by 1? On the left-hand side, we increase the base of the square and go to the next square number. On the right-hand side, we increase the power of 2. This means we double the number. So the question is, how does doubling a number relate to increasing to the next square? Think about what the difference of two consecutive squares looks like. We have \((n+1)^2 - n^2\). This factors:

\begin{equation*} (n+1)^2 - n^2 = (n+1-n)(n+1+n) = 2n+1. \end{equation*}
But doubling the right-hand side increases it by \(2^n\text{,}\) since \(2^{n+1} = 2^n + 2^n\). When \(n\) is large enough, \(2^n > 2n + 1\).

What we are saying here is that each time \(n\) increases, the left-hand side grows by less than the right-hand side. So if the left-hand side starts smaller (as it does when \(n = 5\)), it will never catch up.

Now the formal proof:

Let \(P(n)\) be the statement \(n^2 < 2^n\). We will prove \(P(n)\) is true for all integers \(n \ge 5\).

Base case: \(P(5)\) is the statement \(5^2 < 2^5\). Since \(5^2 = 25\) and \(2^5 = 32\text{,}\) we see that \(P(5)\) is indeed true.

Inductive case: Let \(k \ge 5\) be an arbitrary integer. Assume, for induction, that \(P(k)\) is true. That is, assume \(k^2 < 2^k\). We will prove that \(P(k+1)\) is true, i.e., \((k+1)^2 < 2^{k+1}\). To prove such an inequality, start with the left-hand side and work towards the right-hand side:

\begin{align*} (k+1)^2 & = k^2 + 2k + 1 &\\ & < 2^k + 2k + 1 & \ldots\text{by the inductive hypothesis.}\\ & < 2^k + 2^k & \ldots\text{ since } 2k + 1 < 2^k \text{ for }k \ge 5.\\ & = 2^{k+1}. & \end{align*}
Following the equalities and inequalities through, we get \((k+1)^2 < 2^{k+1}\text{,}\) in other words, \(P(k+1)\). Therefore by the principle of mathematical induction, \(P(n)\) is true for all \(n \ge 5\).

\(\square\)
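Again we can spot-check both the statement and the helper inequality \(2k+1 < 2^k\) used in the inductive case:

```python
# Check n**2 < 2**n and the helper inequality 2n + 1 < 2**n for n >= 5.
for n in range(5, 500):
    assert n * n < 2**n
    assert 2 * n + 1 < 2**n
```
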

The previous example might remind you of the *racetrack principle* from calculus, which says that if \(f(a) \lt g(a)\text{,}\) and \(f'(x) \lt g'(x)\) for \(x > a\text{,}\) then \(f(x) \lt g(x)\) for \(x > a\). Same idea: the larger function is increasing at a faster rate than the smaller function, so the larger function will stay larger. In discrete math, we don't have derivatives, so we look at differences. Thus induction is the way to go.

### Warning:

With great power comes great responsibility. Induction isn't magic. It seems very powerful to be able to assume \(P(k)\) is true. After all, we are trying to prove \(P(n)\) is true and the only difference is in the variable: \(k\) vs. \(n\). Are we assuming that what we want to prove is true? Not really. We assume \(P(k)\) is true only for the sake of proving that \(P(k+1)\) is true.

Still you might start to believe that you can prove anything with induction. Consider this incorrect “proof” that every Canadian has the same eye color: Let \(P(n)\) be the statement that any \(n\) Canadians have the same eye color. \(P(1)\) is true, since everyone has the same eye color as themselves. Now assume \(P(k)\) is true. That is, assume that in any group of \(k\) Canadians, everyone has the same eye color. Now consider an arbitrary group of \(k+1\) Canadians. The first \(k\) of these must all have the same eye color, since \(P(k)\) is true. Also, the last \(k\) of these must have the same eye color, since \(P(k)\) is true. So in fact, everyone in the group must have the same eye color. Thus \(P(k+1)\) is true. So by the principle of mathematical induction, \(P(n)\) is true for all \(n\).

Clearly something went wrong. The problem is that the proof that \(P(k)\) implies \(P(k+1)\) assumes that \(k \ge 2\). We have only shown \(P(1)\) is true. In fact, \(P(2)\) is false.

## Strong Induction

Investigate!

Start with a square piece of paper. You want to cut this square into smaller squares, leaving no waste (every piece of paper you end up with must be a square). Obviously it is possible to cut the square into 4 squares. You can also cut it into 9 squares. It turns out you can cut the square into 7 squares (although not all the same size). What other numbers of squares could you end up with?

Sometimes, to prove that \(P(k+1)\) is true, it would be helpful to know that \(P(k)\) *and* \(P(k-1)\) *and* \(P(k-2)\) are all true. Consider the following puzzle:

You have a rectangular chocolate bar, made up of \(n\) identical squares of chocolate. You can take such a bar and break it along any row or column. How many times will you have to break the bar to reduce it to \(n\) single chocolate squares?

At first, this question might seem impossible. Perhaps I meant to ask for the *smallest* number of breaks needed? Let's investigate.

Start with some small cases. If \(n=2\text{,}\) you must have a \(1\times 2\) rectangle, which can be reduced to single pieces in one break. With \(n=3\text{,}\) we must have a \(1\times 3\) bar, which requires two breaks: the first break creates a single square and a \(1\times 2\) bar, which we know takes one (more) break.

What about \(n=4\text{?}\) Now we could have a \(2\times 2\) bar, or a \(1 \times 4\) bar. In the first case, break the bar into two \(1\times 2\) bars, each of which requires one more break (that's a total of three breaks required). If we started with a \(1 \times 4\) bar, we have choices for our first break. We could break the bar in half, creating two \(1\times 2\) bars, or we could break off a single square, leaving a \(1\times 3\) bar. But either way, we still need two more breaks, giving a total of three.

It is starting to look like no matter how we break the bar (and no matter how the \(n\) squares are arranged into a rectangle), we will always have the same number of breaks required. It also looks like that number is one less than \(n\text{:}\)

#### Conjecture 2.5.4

Given an \(n\)-square rectangular chocolate bar, it always takes \(n-1\) breaks to reduce the bar to single squares.

It makes sense to prove this by induction because after breaking the bar once, you are left with *smaller* chocolate bars. Reducing to smaller cases is what induction is all about. We can inductively assume we already know how to deal with these smaller bars. The problem is, if we are trying to prove the inductive case about a \((k+1)\)-square bar, we don't know that after the first break the remaining bar will have \(k\) squares. So we really need to assume that our conjecture is true for all cases less than \(k+1\).

Is it valid to make this stronger assumption? Remember, in induction we are attempting to prove that \(P(n)\) is true for all \(n\). What if that were not the case? Then there would be some first \(n_0\) for which \(P(n_0)\) was false. Since \(n_0\) is the *first* counterexample, we know that \(P(n)\) is true for all \(n < n_0\). Now we proceed to prove that \(P(n_0)\) is actually true, based on the assumption that \(P(n)\) is true for all smaller \(n\).

This is quite an advantage: we now have a stronger inductive hypothesis. We can assume that \(P(1)\text{,}\) \(P(2)\text{,}\) \(P(3)\text{,}\) … \(P(k)\) are true, just to show that \(P(k+1)\) is true. Previously, we just assumed \(P(k)\) for this purpose.

It is slightly easier if we change our variables for strong induction. Here is what the formal proof would look like:

#### Strong Induction Proof Structure

Again, start by saying what you want to prove: “Let \(P(n)\) be the statement…” Then establish two facts:

- Base case: Prove that \(P(0)\) is true.
- Inductive case: Assume \(P(k)\) is true for all \(k < n\). Prove that \(P(n)\) is true.

Conclude, “therefore, by strong induction, \(P(n)\) is true for all \(n \ge 0\).”

Of course, it is acceptable to replace 0 with a larger base case if needed. (Technically, strong induction does not require you to prove a separate base case: when proving the inductive case, you must show that \(P(0)\) is true, assuming \(P(k)\) is true for all \(k < 0\text{,}\) but since that assumption is no help, you end up proving \(P(0)\) anyway. To be on the safe side, we will always include the base case separately.)

Let's prove our conjecture about the chocolate bar puzzle:

#### Proof

Let \(P(n)\) be the statement, “it takes \(n-1\) breaks to reduce an \(n\)-square chocolate bar to single squares.”

Base case: Consider \(P(2)\). The squares must be arranged into a \(1\times 2\) rectangle, and we require \(2-1 = 1\) break to reduce this to single squares.

Inductive case: Fix an arbitrary \(n\ge 2\) and assume \(P(k)\) is true for all \(k < n\). Consider an \(n\)-square rectangular chocolate bar. Break the bar once along any row or column. This results in two chocolate bars, say of sizes \(a\) and \(b\). That is, we have an \(a\)-square rectangular chocolate bar, a \(b\)-square rectangular chocolate bar, and \(a+b = n\).

We also know that \(a < n\) and \(b < n\text{,}\) so by our inductive hypothesis, \(P(a)\) and \(P(b)\) are true. To reduce the \(a\)-square bar to single squares takes \(a-1\) breaks; to reduce the \(b\)-square bar to single squares takes \(b-1\) breaks. Doing this results in our original bar being reduced to single squares. All together it took the initial break, plus the \(a-1\) and \(b-1\) breaks, for a total of \(1+a-1+b-1 = a+b-1 = n-1\) breaks. Thus \(P(n)\) is true.

Therefore, by strong induction, \(P(n)\) is true for all \(n \ge 2\).
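The proof says the number of breaks is the same no matter which break we choose at each stage. A small Python simulation (the function `breaks` is our own sketch, not part of the proof) makes a random choice at every step and checks that the count always comes out to \(n - 1\):

```python
import random

def breaks(rows, cols):
    """Breaks needed to reduce a rows x cols bar to single squares,
    choosing a random row or column to break along each time."""
    if rows == 1 and cols == 1:
        return 0
    # A rows x cols bar has (rows - 1) + (cols - 1) possible break lines.
    line = random.randrange((rows - 1) + (cols - 1))
    if line < rows - 1:
        r = line + 1  # split into r x cols and (rows - r) x cols
        return 1 + breaks(r, cols) + breaks(rows - r, cols)
    c = line - (rows - 1) + 1  # split into rows x c and rows x (cols - c)
    return 1 + breaks(rows, c) + breaks(rows, cols - c)

# However the random breaks fall, an n-square bar takes n - 1 breaks.
for _ in range(200):
    rows, cols = random.randint(1, 8), random.randint(1, 8)
    assert breaks(rows, cols) == rows * cols - 1
```
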

Here is a more mathematically relevant example:

Example \(\PageIndex{5}\)

Prove that any natural number greater than 1 is either prime or can be written as the product of primes.

**Solution**
First, the idea: if we take some number \(n\text{,}\) maybe it is prime. If so, we are done. If not, then it is composite, so it is the product of two smaller numbers. Each of these factors is smaller than \(n\) (but at least 2), so we can repeat the argument with these numbers. We have reduced to a smaller case.

Now the formal proof:

**Proof**

Let \(P(n)\) be the statement, “\(n\) is either prime or can be written as the product of primes.” We will prove \(P(n)\) is true for all \(n \ge 2\).

Base case: \(P(2)\) is true because \(2\) is indeed prime.

Inductive case: assume \(P(k)\) is true for all \(k < n\). We want to show that \(P(n)\) is true. That is, we want to show that \(n\) is either prime or is the product of primes. If \(n\) is prime, we are done. If not, then \(n\) has more than two divisors, so we can write \(n = m_1 \cdot m_2\text{,}\) with \(m_1\) and \(m_2\) less than \(n\) (and greater than 1). By the inductive hypothesis, \(m_1\) and \(m_2\) are each either prime or can be written as the product of primes. In either case, we have that \(n\) is written as the product of primes.

Thus, by strong induction, \(P(n)\) is true for all \(n \ge 2\).
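The recursive structure of this strong-induction argument translates directly into code. Here is a Python sketch (the helper name `prime_factors` is our own, and trial division is just one way to find the factor \(m_1\)):

```python
from math import prod

def prime_factors(n):
    """Return primes whose product is n (for n >= 2), mirroring the
    strong induction: if n is composite, write n = m1 * m2 and recurse
    on the strictly smaller factors."""
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:  # n is composite: n = d * (n // d)
            return prime_factors(d) + prime_factors(n // d)
    return [n]  # no divisor found, so n is prime

assert prime_factors(60) == [2, 2, 3, 5]
assert prod(prime_factors(97)) == 97
```
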

Whether you use regular induction or strong induction depends on the statement you want to prove. If you wanted to be safe, you could always use strong induction. It really is *stronger*, so can accomplish everything “weak” induction can. That said, using regular induction is often easier since there is only one place you can use the induction hypothesis. There is also something to be said for *elegance* in proofs. If you can prove a statement using simpler tools, it is nice to do so.

As a final contrast between the two forms of induction, consider once more the stamp problem. Regular induction worked by showing how to increase postage by one cent (either replacing three 5-cent stamps with two 8-cent stamps, or three 8-cent stamps with five 5-cent stamps). We could give a slightly different proof using strong induction. First, we could show *five* base cases: it is possible to make 28, 29, 30, 31, and 32 cents (we would actually say how each of these is made). Now assume that it is possible to make \(k\) cents of postage for all \(k < n\) as long as \(k \ge 28\). As long as \(n > 32\text{,}\) this means in particular we can make \(k = n-5\) cents. Now add a 5-cent stamp to make \(n\) cents.
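This strong-induction version is also easy to turn into a recursive procedure. A Python sketch (the name `stamps` and the `(fives, eights)` encoding are ours) with the five base cases hard-coded:

```python
def stamps(n):
    """Return (fives, eights) making exactly n cents, for n >= 28,
    following the strong-induction proof: five base cases for 28-32
    cents, then recurse on n - 5 and add one 5-cent stamp."""
    base = {28: (4, 1), 29: (1, 3), 30: (6, 0), 31: (3, 2), 32: (0, 4)}
    if n in base:
        return base[n]
    fives, eights = stamps(n - 5)
    return fives + 1, eights

fives, eights = stamps(100)
assert 5 * fives + 8 * eights == 100
```
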