# 2.4: Two-Column Proofs


If you’ve ever spent much time trying to check someone else’s work in solving an algebraic problem, you’d probably agree that it would help to know what they were *trying* to do in each step. Most people have a fairly vague notion that they’re allowed to “do the same thing on both sides” and that they’re allowed to simplify the sides of the equation separately – but more often than not, several different things get done on a given line, mistakes get made, and it can be nearly impossible to figure out what went wrong and where.

Now, after all, the beauty of math is supposed to lie in its crystal clarity, so this sort of situation is really unacceptable. It may be an impossible goal to get “the average Joe” to perform algebraic manipulations with clarity, but those of us who aspire to become mathematicians must certainly hold ourselves to a higher standard. Two-column proofs are usually what is meant by a “higher standard” when we are talking about relatively mechanical manipulations – like doing algebra, or more to the point, proving logical equivalences. Now don’t despair! You will not, in a mathematical career, be expected to provide two-column proofs very often. In fact, in more advanced work one tends not to give *any* sort of proof for a statement that lends itself to a two-column approach. But if you find yourself writing “As the reader can easily verify, Equation 17 holds. . . ” in a paper, or making some similar remark to your students, you are *morally obligated* to be able to produce a two-column proof.

So what, exactly, is a two-column proof? In the left column, you show your work, being careful to go one step at a time. In the right column you provide a justification for each step.

We’re going to go through a couple of examples of two-column proofs in the context of proving logical equivalences. One thing to watch out for: if you’re trying to prove a given equivalence, and the first thing you write down is that very equivalence, *it’s wrong!* This would constitute the logical error known as “begging the question,” also known as “circular reasoning.” It’s clearly not okay to try to demonstrate some fact by first *asserting that very same fact*. Nevertheless, there is (for some unknown reason) a powerful temptation to do this very thing. To avoid making this error, we will not write the equivalence itself on any single line. Instead, we will start with one side or the other of the statement to be proved and modify it using known rules of equivalence until we arrive at the other side.

Without further ado, let’s provide a proof of the equivalence \(A ∧ (B ∨ ¬A) \cong A ∧ B\).^{1}

\(A ∧ (B ∨ ¬A) \cong (A ∧ B) ∨ (A ∧ ¬A) \tag{distributive law}\)

\(\cong (A ∧ B) ∨ c \tag{complementarity}\)

\(\cong (A ∧ B) \tag{identity law}\)

We have assembled a nice, step-by-step sequence of equivalences – each justified by a known law – that begins with the left-hand side of the statement to be proved and ends with the right-hand side. That’s an irrefutable proof!
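An equivalence like this can also be checked mechanically, by comparing both sides on every row of the truth table. The brute-force checker below is a sketch of my own, not part of the text; the names `equivalent`, `lhs`, and `rhs` are invented for illustration:

```python
from itertools import product

def equivalent(f, g, n):
    """Return True if two n-variable Boolean functions agree on
    every one of the 2**n truth assignments."""
    return all(f(*row) == g(*row) for row in product([False, True], repeat=n))

# The two sides of A ∧ (B ∨ ¬A) ≅ A ∧ B
lhs = lambda a, b: a and (b or not a)
rhs = lambda a, b: a and b

print(equivalent(lhs, rhs, 2))  # → True
```

Of course, a truth-table check only confirms *that* the two sides are equivalent; the two-column proof is what explains *why*.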

In the next example we’ll highlight a slightly sloppy habit of thought that tends to be problematic. People usually (at first) associate a direction with the basic logical equivalences. This is reasonable for several of them because one side is markedly simpler than the other. For example, the domination rule would normally be used to replace a part of a statement that looks like “\(A ∧ c\)” with the simpler expression “\(c\)”. There is a certain amount of strategizing necessary in doing these proofs, and I usually advise people to start with the more complicated side of the equivalence to be proved. It just feels right to work in the direction of making things simpler, but there are times when one has to take one step back before proceeding two steps forward. . .

Let’s have a look at another equivalence: \(A ∧ (B ∨ C) \cong (A ∧ (B ∨ C)) ∨ (A ∧ C)\). There are many different ways in which valid steps can be concatenated to convert one side of this equivalence into the other, so a subsidiary goal is to find a proof that uses the fewest steps. Following my own advice, I’ll start with the right-hand side of this one.

\((A ∧ (B ∨ C)) ∨ (A ∧ C) \cong ((A∧B)∨(A∧C))∨(A∧C) \tag{distributive law}\)

\(\cong (A∧B)∨((A∧C)∨(A∧C)) \tag{associative law}\)

\(\cong (A ∧ B) ∨ (A ∧ C) \tag{idempotence}\)

\(\cong A ∧ (B ∨ C) \tag{distributive law}\)

Note that in the example we’ve just done, the two applications of the distributive law go in opposite directions as far as the complexity of the expressions is concerned.
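Each line of this proof asserts that consecutive expressions are equivalent, and every one of those claims can be verified over all eight truth assignments. The following sketch (the `steps` list and `equivalent` helper are my own names, not from the text) checks each adjacent pair, which in turn confirms the whole chain:

```python
from itertools import product

def equivalent(f, g):
    """Compare two 3-variable Boolean functions on all 8 assignments."""
    return all(f(*row) == g(*row) for row in product([False, True], repeat=3))

# The expressions appearing in the proof, from the right-hand side
# of the equivalence down to the left-hand side.
steps = [
    lambda a, b, c: (a and (b or c)) or (a and c),          # starting RHS
    lambda a, b, c: ((a and b) or (a and c)) or (a and c),  # distributive law
    lambda a, b, c: (a and b) or ((a and c) or (a and c)),  # associative law
    lambda a, b, c: (a and b) or (a and c),                 # idempotence
    lambda a, b, c: a and (b or c),                         # distributive law
]

# Every consecutive pair must agree, so the two ends do as well.
print(all(equivalent(f, g) for f, g in zip(steps, steps[1:])))  # → True
```

Checking the pairs one at a time mirrors the discipline of the two-column format: a mistake on any single line is caught immediately, rather than being buried in one long computation.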

## Exercises:

Write two-column proofs that verify each of the following logical equivalences.

- \(A ∨ (A ∧ B) \cong A ∧ (A ∨ B)\)
- \((A ∧ ¬B) ∨ A \cong A\)
- \(A ∨ B \cong A ∨ (¬A ∧ B)\)
- \(¬(A ∨ ¬B) ∨ (¬A ∧ ¬B) \cong ¬A\)
- \(A \cong A ∧ ((A ∨ ¬B) ∨ (A ∨ B))\)
- \((A ∧ ¬B) ∧ (¬A ∨ B) \cong c\)
- \(A \cong A ∧ (A ∨ (A ∧ (B ∨ C)))\)
- \(¬(A ∧ B) ∧ ¬(A ∧ C) \cong ¬A ∨ (¬B ∧ ¬C)\)