# 3.2: Completeness


Let us fix a collection of nonlogical axioms, \(\Sigma\). Our goal in this section is to show that for any formula \(\phi\), if \(\Sigma \models \phi\), then \(\Sigma \vdash \phi\). In some sense, this is the only possible interpretation of the phrase "you can prove anything that is true", if you are discussing the adequacy of the deductive system. To say that \(\phi\) is true whenever \(\Sigma\) is a collection of true axioms is precisely to say that \(\Sigma\) logically implies \(\phi\). Thus, the Completeness Theorem will say that whenever \(\phi\) is logically implied by \(\Sigma\), there is a deduction from \(\Sigma\) of \(\phi\). So the Completeness Theorem is the converse of the Soundness Theorem.

We have to begin with a short discussion of consistency.

Definition 3.2.1: consistency

Let \(\Sigma\) be a set of \(\mathcal{L}\)-formulas. We will say that \(\Sigma\) is **inconsistent** if there is a deduction from \(\Sigma\) of \(\left[ \left( \forall x \right) x = x \right] \land \neg \left[ \left( \forall x \right) x = x \right]\). We say that \(\Sigma\) is **consistent** if it is not inconsistent.

So \(\Sigma\) is inconsistent if \(\Sigma\) proves a contradiction. Exercise 1 asks you to show that if \(\Sigma\) is inconsistent, then there is a deduction from \(\Sigma\) of every \(\mathcal{L}\)-formula. For notational convenience, let us agree to use the symbol \(\perp\) (read "false" or "bottom") for the contradictory sentence \(\left[ \left( \forall x \right) x = x \right] \land \neg \left[ \left( \forall x \right) x = x \right]\). All you will have to remember is that \(\perp\) is a sentence that is in every language and is true in no structure.

Theorem 3.2.2: Completeness Theorem

*Suppose that \(\Sigma\) is a set of \(\mathcal{L}\)-formulas and \(\phi\) is an \(\mathcal{L}\)-formula. If \(\Sigma \models \phi\), then \(\Sigma \vdash \phi\).*

Proof

*Chaff:* This theorem was established in 1929 by the Austrian mathematician Kurt Gödel, in his PhD dissertation. If you haven't picked it up already, you should know that the work of Gödel is central to the development of logic in the twentieth century. He is responsible for most of the major results that we will state in the rest of the book: The Completeness Theorem, the Compactness Theorem, and the two Incompleteness Theorems. Gödel was an absolutely brilliant man, with a complex and troubled personality. A wonderful and engaging biography of Gödel is [Dawson 97]. The first volume of Gödel's collected works, [Gödel-Works], also includes a biography and introductory comments about his papers that can help your understanding of this wonderful mathematics.

The proof we present of the Completeness Theorem is based on work of Leon Henkin. The idea of Henkin's proof is brilliant, but the details take some time to work through.

Before we get involved in the details, let us look at a rough outline of how the argument proceeds. There are a few simplifications and one or two outright lies in the outline, but we will straighten everything out as we work out the proof.

**Outline of the Proof**

There will be a **preliminary argument** that will show that it is sufficient to prove that if \(\Sigma\) is a consistent set of sentences, then \(\Sigma\) has a model. Then we will proceed to assume that we are given such a set of sentences, and we will construct a model for \(\Sigma\).

The construction of the model will proceed in several steps, but the central idea was introduced in Example 1.6.4. The elements of the model will be variable-free terms of a language. We will construct this model so that the formulas that will be true in the model are precisely the formulas that are in a certain set of formulas, which we will call \(\Sigma^\prime\). We will make sure that \(\Sigma \subseteq \Sigma^\prime\), so all of the formulas of \(\Sigma\) will be true in this constructed model. In other words, we will have constructed a model of \(\Sigma\).

To make the construction work we will take our given set of \(\mathcal{L}\)-sentences \(\Sigma\) and extend it to a bigger set of sentences \(\Sigma^\prime\) in a bigger language \(\mathcal{L}^\prime\). We do this extension in two steps. First, we will add in some new axioms, called Henkin Axioms, to get a collection \(\hat{\Sigma}\). Then we will extend \(\hat{\Sigma}\) to \(\Sigma^\prime\) in such a way that:

- \(\Sigma^\prime\) is consistent.
- For every \(\mathcal{L}^\prime\)-sentence \(\theta\), either \(\theta \in \Sigma^\prime\) or \(\left( \neg \theta \right) \in \Sigma^\prime\).

Thus we will say that \(\Sigma^\prime\) is a maximal consistent extension of \(\Sigma\), where *maximal* means that it is impossible to add any sentences to \(\Sigma^\prime\) without making \(\Sigma^\prime\) inconsistent.

Now there are two possible sources of problems in this expansion of \(\Sigma\) to \(\Sigma^\prime\). The first is that we will **change languages from \(\mathcal{L}\) to \(\mathcal{L}^\prime\)**, where \(\mathcal{L} \subseteq \mathcal{L}^\prime\). It is conceivable that \(\Sigma\) will not be consistent when viewed as a set of \(\mathcal{L}^\prime\)-sentences, even though \(\Sigma\) is consistent when viewed as a set of \(\mathcal{L}\)-sentences. The reason that this might happen is that there are more \(\mathcal{L}^\prime\) deductions than there are \(\mathcal{L}\)-deductions, and one of these new deductions just might happen to be a deduction of \(\perp\). Fortunately, Lemma 3.2.3 will show us that this does not happen, so \(\Sigma\) is consistent as a set of \(\mathcal{L}^\prime\)-sentences. The other possible problem is in our two **extensions of \(\Sigma\)**, first to \(\hat{\Sigma}\) and then to \(\Sigma^\prime\). It certainly might happen that we could add a sentence to \(\Sigma\) in such a way as to make \(\Sigma^\prime\) inconsistent. But Lemma 3.2.4 and Exercise 4 will prove that \(\Sigma^\prime\) is still consistent.

Once we have our maximal consistent set of sentences \(\Sigma^\prime\), we will **construct a model \(\mathfrak{A}\)** and prove that the sentences of \(\mathcal{L}^\prime\) that are in \(\Sigma^\prime\) are precisely the sentences that are true in \(\mathfrak{A}\). Thus, \(\mathfrak{A}\) will be a model of \(\Sigma^\prime\), and as \(\Sigma \subseteq \Sigma^\prime\), \(\mathfrak{A}\) will be a model of \(\Sigma\), as well.

This looks daunting, but if we keep our wits about us and do things one step at a time, it will all come together at the end.

**Preliminary Argument**

So let us fix our setting for the rest of this proof. We are working in a language \(\mathcal{L}\). For the purposes of this proof, we assume that the language is countable, which means that the formulas of \(\mathcal{L}\) can be written in an infinite list \(\alpha_1, \alpha_2, \ldots, \alpha_n, \ldots\). (An outline of the changes in the proof necessary for the case when \(\mathcal{L}\) is not countable can be found in Exercise 6.)

We are given a set of formulas \(\Sigma\), and we are assuming that \(\Sigma \models \phi\). We have to prove that \(\Sigma \vdash \phi\).

Note that we can assume that \(\phi\) is a sentence: By Lemma 2.7.2, \(\Sigma \vdash \phi\) if and only if there is a deduction from \(\Sigma\) of the universal closure of \(\phi\). Also, by the comments following Lemma 2.7.3, we can assume that every element of \(\Sigma\) is a sentence. So, now all(!) we have to do is prove that if \(\Sigma\) is a set of *sentences* and \(\phi\) is a *sentence* and if \(\Sigma \models \phi\), then \(\Sigma \vdash \phi\).

Now we claim that it suffices to prove the case where \(\phi\) is the sentence \(\perp\). For suppose we know that if \(\Sigma \models \perp\), then \(\Sigma \vdash \perp\), and suppose we are given a sentence \(\phi\) such that \(\Sigma \models \phi\). Then \(\Sigma \cup \{ \neg \phi \} \models \perp\), as there are no models of \(\Sigma \cup \{ \neg \phi \}\), so \(\Sigma \cup \{ \neg \phi \} \vdash \perp\). This tells us, by Exercise 4 in Section 2.7, that \(\Sigma \vdash \phi\), as needed.
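The chain of reductions in the preceding paragraph can be summarized in one display:

```latex
\begin{aligned}
\Sigma \models \phi
  &\;\Longrightarrow\; \Sigma \cup \{ \neg \phi \} \models \perp
    && \text{(no structure satisfies both $\Sigma$ and $\neg\phi$)} \\
  &\;\Longrightarrow\; \Sigma \cup \{ \neg \phi \} \vdash \perp
    && \text{(the $\perp$-case of the theorem)} \\
  &\;\Longrightarrow\; \Sigma \vdash \phi
    && \text{(Exercise 4 in Section 2.7).}
\end{aligned}
```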

So we have reduced what we need to do to proving that if \(\Sigma \models \perp\), then \(\Sigma \vdash \perp\), for \(\Sigma\) a set of \(\mathcal{L}\)-sentences. This is equivalent to saying that if there is no model of \(\Sigma\), then \(\Sigma \vdash \perp\). We will work with the contrapositive: If \(\Sigma \nvdash \perp\), then there is a model of \(\Sigma\). In other words, we will prove:

If \(\Sigma\) is a consistent set of sentences, then there is a model of \(\Sigma\).

This ends the preliminary argument that was promised in the outline of the proof. Now, we will assume that \(\Sigma\) is a consistent set of \(\mathcal{L}\)-sentences and go about the task of constructing a model of \(\Sigma\).

**Changing the Language from \(\mathcal{L}\) to \(\mathcal{L}_1\)**

The model of \(\Sigma\) that we will construct will be a model whose elements are variable-free terms of a language. This might lead to problems. For example, suppose that \(\mathcal{L}\) contains no constant symbols. Then there will be no variable-free terms of \(\mathcal{L}\). Or, perhaps \(\mathcal{L}\) has exactly one constant symbol \(c\), no function symbols, one unary relation \(P\), and

\[\Sigma = \{ \exists x P ( x ) , \neg P ( c ) \}.\]

Here \(\Sigma\) is consistent, but no structure whose universe is \(\{ c \}\) (\(c\) is the only variable-free term of \(\mathcal{L}\)) can be a model of \(\Sigma\). So we have to expand our language to give us enough constant symbols to build our model.
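To see concretely why the one-element universe fails, here is a minimal sketch (the Python encoding is ours, not the text's) that checks every possible interpretation of the unary relation \(P\) over the universe \(\{ c \}\) and confirms that none of them satisfies \(\Sigma\):

```python
# Toy check: no structure whose universe is the single term c can satisfy
# Sigma = { (exists x) P(x), not P(c) }.
universe = ["c"]  # the only variable-free term of L

def models_sigma(P):
    """P is the set of universe elements where the unary relation P holds."""
    exists_x_Px = any(e in P for e in universe)  # (exists x) P(x)
    not_Pc = "c" not in P                        # not P(c)
    return exists_x_Px and not_Pc

# every possible interpretation of P over a one-element universe
candidates = [set(), {"c"}]
assert not any(models_sigma(P) for P in candidates)
```

The empty interpretation falsifies \(\exists x P(x)\), and the interpretation \(\{c\}\) falsifies \(\neg P(c)\), so \(\Sigma\) has no model with this universe.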

So let \(\mathcal{L}_0 = \mathcal{L}\), and define

\[\mathcal{L}_1 = \mathcal{L}_0 \cup \left\{ c_1, c_2, \ldots, c_n, \ldots \right\},\]

where the \(c_i\)'s are new constant symbols.

*Chaff:* Did you notice that when we were defining \(\mathcal{L}_1\) we took something we already knew about, \(\mathcal{L}\), and gave it a new name, \(\mathcal{L}_0\)? When you are reading mathematics and something like that happens, it is almost always a clue that whatever happens next is going to be iterated, in this case to build \(\mathcal{L}_2\), \(\mathcal{L}_3\), and so on. In those literature courses we took, they called that foreshadowing.

We say (for the obvious reason) that \(\mathcal{L}_1\) is an **extension by constants** of \(\mathcal{L}_0\). As mentioned in the outline, it is not immediately clear that \(\Sigma\) remains consistent when viewed as a collection of \(\mathcal{L}_1\)-sentences rather than \(\mathcal{L}\)-sentences. The following lemma, the proof of which is delayed, shows that \(\Sigma\) remains consistent.

Lemma 3.2.3.

*If \(\Sigma\) is a consistent set of \(\mathcal{L}\)-sentences and \(\mathcal{L}_1\) is an extension by constants of \(\mathcal{L}\), then \(\Sigma\) is consistent when viewed as a set of \(\mathcal{L}_1\)-sentences.*

The constants that we have added to form \(\mathcal{L}_1\) are called **Henkin constants**, and they serve a particular purpose. They will be the witnesses that allow us to ensure that any time \(\Sigma\) claims \(\exists x \phi \left( x \right)\), then in our constructed model \(\mathfrak{A}\), there will be an element (which will be one of these constants \(c\)) such that \(\mathfrak{A} \models \phi \left( c \right)\).

*Chaff:* Recall that the notation \(\exists x \phi \left( x \right)\) implies that \(\phi\) is a formula with \(x\) as the only free variable. Then \(\phi \left( c \right)\) is the result of replacing the free occurrences of \(x\) with the constant symbol \(c\). Thus \(\phi \left( c \right)\) is \(\phi_c^x\).

The next step in our construction makes sure that the Henkin constants will be the witnesses for the existential sentences in \(\Sigma\).

**Extending \(\Sigma\) to Include Henkin Axioms**

Consider the collection of sentences of the form \(\exists x \theta\) in the language \(\mathcal{L}_0\). As the language \(\mathcal{L}_0\) is countable, the collection of \(\mathcal{L}_0\)-sentences is countable, so we can list all such sentences of the form \(\exists x \theta\), enumerating them by the positive integers:

\[\exists x \theta_1, \exists x \theta_2, \exists x \theta_3, \ldots, \exists x \theta_n, \ldots.\]

We will now use the Henkin constants of \(\mathcal{L}_1\) to add to \(\Sigma\) countably many axioms, called **Henkin axioms**. These axioms will ensure that every existential sentence that is asserted by \(\Sigma\) will have a witness in our constructed structure \(\mathfrak{A}\). The collection of Henkin axioms is

\[H_1 = \{ \left[ \exists x \theta_i \right] \rightarrow \theta_i \left( c_i \right) \mid \left( \exists x \theta_i \right) \: \text{is an} \: \mathcal{L}_0\text{-sentence} \},\]

where \(\theta_i \left( c_i \right)\) is shorthand for \(\left( \theta_i \right)^x_{c_i}\).
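As an illustration, the Henkin axioms for a short list of existential sentences can be generated mechanically. The string encoding and the function name `henkin_axioms` are ours, and the naive textual substitution is only safe when the variable does not occur inside another symbol:

```python
def henkin_axioms(existentials, constant_prefix="c"):
    """For the i-th pair (x, theta_i), emit (exists x theta_i) -> theta_i(c_i),
    substituting the fresh Henkin constant c_i for the variable x."""
    axioms = []
    for i, (var, body) in enumerate(existentials, start=1):
        witness = f"{constant_prefix}{i}"
        axioms.append(f"(exists {var} {body}) -> {body.replace(var, witness)}")
    return axioms

assert henkin_axioms([("x", "P(x)")]) == ["(exists x P(x)) -> P(c1)"]
```

Note that the \(i\)-th existential sentence gets its own constant \(c_i\), so each existential claim receives a dedicated witness.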

Now let \(\Sigma_0 = \Sigma\), and define

\[\Sigma_1 = \Sigma_0 \cup H_1.\]

*Chaff:* Foreshadowing!

As \(\Sigma_1\) contains many more sentences than \(\Sigma_0\), it seems entirely possible that \(\Sigma_1\) is no longer consistent. Fortunately, the next lemma shows that is not the case.

Lemma 3.2.4.

*If \(\Sigma_0\) is a consistent set of sentences and \(\Sigma_1\) is created by adding Henkin axioms to \(\Sigma_0\), then \(\Sigma_1\) is consistent.*

Now we have \(\Sigma_1\), a consistent set of \(\mathcal{L}_1\)-sentences. We can repeat this construction, building a larger language \(\mathcal{L}_2\) consisting of \(\mathcal{L}_1\) together with an infinite set of new Henkin constants \(k_i\). Then we can let \(H_2\) be a new set of Henkin axioms:

\[H_2 = \{ \left[ \exists x \theta_i \right] \rightarrow \theta_i \left( k_i \right) \mid \left( \exists x \theta_i \right) \: \text{is an} \: \mathcal{L}_1\text{-sentence} \},\]

and let \(\Sigma_2\) be \(\Sigma_1 \cup H_2\). As before, \(\Sigma_2\) will be consistent. We can continue this process to build:

- \(\mathcal{L} = \mathcal{L}_0 \subseteq \mathcal{L}_1 \subseteq \mathcal{L}_2 \cdots\), an increasing chain of languages.
- \(H_1, H_2, H_3, \ldots\), each \(H_i\) a collection of Henkin axioms in the language \(\mathcal{L}_i\).
- \(\Sigma = \Sigma_0 \subseteq \Sigma_1 \subseteq \Sigma_2 \subseteq \cdots\), where each \(\Sigma_i\) is a consistent set of \(\mathcal{L}_i\)-sentences.

Let \(\mathcal{L}^\prime = \bigcup_{i < \infty} \mathcal{L}_i\) and let \(\hat{\Sigma} = \bigcup_{i < \infty} \Sigma_i\). Each \(\Sigma_i\) is a consistent set of \(\mathcal{L}^\prime\)-sentences, as can be shown by proofs that are identical to those of Lemmas 3.2.3 and 3.2.4. You will show in Exercise 2 that \(\hat{\Sigma}\) is a consistent set of \(\mathcal{L}^\prime\)-sentences.

### Extending to a Maximal Consistent Set of Sentences

As you recall, we were going to construct our model \(\mathfrak{A}\) in such a way that the sentences that were true in \(\mathfrak{A}\) were exactly the elements of a set of sentences \(\Sigma^\prime\). It is time to build \(\Sigma^\prime\). Since every sentence is either true or false in a given model, it will be necessary for us to make sure that for every \(\mathcal{L}^\prime\)-sentence \(\sigma\), either \(\sigma \in \Sigma^\prime\) or \(\neg \sigma \in \Sigma^\prime\). Since we can't have both \(\sigma\) and \(\neg \sigma\) true in any structure, we must also make sure that we don't put both \(\sigma\) and \(\neg \sigma\) into \(\Sigma^\prime\). Thus, \(\Sigma^\prime\) will be a maximal consistent extension of \(\hat{\Sigma}\).

To build this extension, fix an enumeration of all of the \(\mathcal{L}^\prime\)-sentences

\[\sigma_1, \sigma_2, \ldots, \sigma_n, \ldots .\]

We can do this as \(\mathcal{L}^\prime\) is countable, being a countable union of countable sets. Now we work our way through this list, one sentence at a time, adding either \(\sigma_n\) or the denial of \(\sigma_n\) to our growing list of sentences, depending on which one keeps our collection consistent.

Here are the details: Let \(\Sigma^0 = \hat{\Sigma}\), and assume that \(\Sigma^k\) is known to be a consistent set of \(\mathcal{L}^\prime\)-sentences. We will show how to build \(\Sigma^{k+1} \supseteq \Sigma^k\) and prove that \(\Sigma^{k+1}\) is also a consistent set of \(\mathcal{L}^\prime\)-sentences. Then we let

\[\Sigma^\prime = \Sigma^0 \cup \Sigma^1 \cup \Sigma^2 \cup \cdots \cup \Sigma^n \cup \cdots.\]

You will prove in Exercise 4 that \(\Sigma^\prime\) is a consistent set of sentences. It will be obvious from the construction of \(\Sigma^{k+1}\) from \(\Sigma^k\) that \(\Sigma^\prime\) is maximal, and thus we will have completed our task of producing a maximal consistent extension of \(\hat{\Sigma}\).

So all we have to do is describe how to get \(\Sigma^{k+1}\) from \(\Sigma^k\) and prove that \(\Sigma^{k+1}\) is consistent. Given \(\Sigma^k\), consider the set \(\Sigma^k \cup \{ \sigma_{k+1} \}\), where \(\sigma_{k+1}\) is the \(\left( k + 1 \right)\)st element of our fixed list of all of the \(\mathcal{L}^\prime\)-sentences. Let

\[\Sigma^{k+1} = \begin{cases} \Sigma^k \cup \{ \sigma_{k+1} \} & \text{if} \: \Sigma^k \cup \{ \sigma_{k+1} \} \: \text{is consistent,} \\ \Sigma^k \cup \{ \neg \sigma_{k+1} \} & \text{otherwise.} \end{cases}\]

You are asked in Exercise 3 to prove that \(\Sigma^{k+1}\) is consistent. Once you have done that, we have constructed a maximal consistent \(\Sigma^\prime\) that extends \(\Sigma\).
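The step from \(\Sigma^k\) to \(\Sigma^{k+1}\) can be sketched in miniature. The sketch below transplants the construction to propositional logic, where consistency is decidable by checking truth assignments; the formula encoding and helper names are ours, not the text's:

```python
# A toy Lindenbaum-style construction for propositional logic.
# Formulas: atoms are strings; ("not", f) and ("or", f, g) are compound.
from itertools import product

def eval_f(f, v):
    """Evaluate formula f under the truth assignment v (a dict atom -> bool)."""
    if isinstance(f, str):
        return v[f]
    if f[0] == "not":
        return not eval_f(f[1], v)
    return eval_f(f[1], v) or eval_f(f[2], v)

def atoms_of(f):
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms_of(g) for g in f[1:]))

def is_consistent(sigma):
    """Brute-force satisfiability: try every truth assignment."""
    atoms = sorted(set().union(*(atoms_of(f) for f in sigma)) or {"p"})
    return any(all(eval_f(f, dict(zip(atoms, bits))) for f in sigma)
               for bits in product([False, True], repeat=len(atoms)))

def extend(sigma, sentences):
    """For each sentence, add it if that keeps sigma consistent,
    otherwise add its negation -- the Sigma^k to Sigma^{k+1} step."""
    for s in sentences:
        sigma = sigma | {s} if is_consistent(sigma | {s}) else sigma | {("not", s)}
    return sigma

sigma_prime = extend({"p"}, ["p", "q", ("or", "p", "q")])
assert is_consistent(sigma_prime)
```

After the loop, every sentence on the list or its negation belongs to the result, and consistency is preserved at each step, mirroring the first-order construction above.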

The next lemma states that \(\Sigma^\prime\) is deductively closed, at least as far as sentences are concerned. As you work through the proof, the Deduction Theorem will be useful.

Lemma 3.2.5.

*If \(\sigma\) is a sentence, then \(\sigma \in \Sigma^\prime\) if and only if \(\Sigma^\prime \vdash \sigma\).*

Proof

Exercise 5.

### Construction of the Model - Preliminaries

We have mentioned a few times that the model of \(\Sigma\) that we are going to construct will have as its universe the collection of variable-free terms of the language \(\mathcal{L}^\prime\). It is now time to confess that we have lied. It is easy to see why the plan of using the terms as the elements of the universe is doomed to failure. Suppose that there are two different terms \(t_1\) and \(t_2\) of the language and somewhere in \(\Sigma^\prime\) is the sentence \(t_1 = t_2\). If the *terms* were the elements of the universe, then we could not model \(\Sigma^\prime\), as the two terms \(t_1\) and \(t_2\) are not the same (they are typographically distinct), while \(\Sigma^\prime\) demands that they be equal. Our solution to this problem is to take the collection of variable-free terms, define an equivalence relation on that set, and then construct a model from the *equivalence classes* of the variable-free terms.

So let \(T\) be the set of variable-free terms of the language \(\mathcal{L}^\prime\), and define a relation \(\sim\) on \(T\) by

\[t_1 \sim t_2 \: \text{if and only if} \: \left( t_1 = t_2 \right) \in \Sigma^\prime.\]

It is not difficult to show that \(\sim\) is an equivalence relation. We will verify that \(\sim\) is symmetric, leaving reflexivity and transitivity to the Exercises.

To show that \(\sim\) is symmetric, assume that \(t_1 \sim t_2\). We must prove that \(t_2 \sim t_1\). As we know \(t_1 \sim t_2\), by definition we know that the sentence \(\left( t_1 = t_2 \right)\) is an element of \(\Sigma^\prime\). We need to show that \(\left( t_2 = t_1 \right) \in \Sigma^\prime\). Assume not. Then by the maximality of \(\Sigma^\prime\), \(\neg \left( t_2 = t_1 \right) \in \Sigma^\prime\). But since we know that \(\Sigma^\prime \vdash t_1 = t_2\), by Theorem 2.7.1, \(\Sigma^\prime \vdash t_2 = t_1\). (Can you provide the details?) But since we also know that \(\Sigma^\prime \vdash \neg \left( t_2 = t_1 \right)\), it must be the case that \(\Sigma^\prime \vdash \perp\), which is a contradiction, as we know that \(\Sigma^\prime\) is consistent. So our assumption is wrong and \(\left( t_2 = t_1 \right) \in \Sigma^\prime\), and thus \(\sim\) is a symmetric relation.

So, assuming that you have worked through Exercise 7, we have established that \(\sim\) is an equivalence relation. Now let \(\left[ t \right]\) be the set of all variable-free terms \(s\) of the language \(\mathcal{L}^\prime\) such that \(t \sim s\). So \(\left[ t \right]\) is the equivalence class of all terms that \(\Sigma^\prime\) tells us are equal to \(t\). The collection of all such equivalence classes will be the universe of our model \(\mathfrak{A}\).
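Forming the equivalence classes \(\left[ t \right]\) from the equations in \(\Sigma^\prime\) can be sketched with a union-find structure; the encoding of terms as strings and the sample equations are hypothetical:

```python
# Group variable-free terms into equivalence classes using the equations
# (t1 = t2) that Sigma' contains.
class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, t):
        """Return the representative of t's class (with path halving)."""
        self.parent.setdefault(t, t)
        while self.parent[t] != t:
            self.parent[t] = self.parent[self.parent[t]]
            t = self.parent[t]
        return t

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

terms = ["c", "k", "f(c)", "f(k)"]
equations_in_sigma_prime = [("c", "k"), ("f(c)", "f(k)")]  # hypothetical sample

uf = UnionFind()
for t in terms:
    uf.find(t)
for a, b in equations_in_sigma_prime:
    uf.union(a, b)

# [c] = [k] and [f(c)] = [f(k)], but c and f(c) land in different classes
assert uf.find("c") == uf.find("k")
assert uf.find("c") != uf.find("f(c)")
```

The representatives play the role of the classes \(\left[ t \right]\): two terms get the same representative exactly when the given equations force them together, just as \(t_1 \sim t_2\) holds exactly when \(\Sigma^\prime\) contains \(t_1 = t_2\).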

### Construction of the Model - The Main Ideas

To define our model of \(\Sigma^\prime\), we must construct an \(\mathcal{L}^\prime\)-structure. Thus, we have to describe the universe of our structure as well as interpretations of all of the constant, function, and relation symbols of the language \(\mathcal{L}^\prime\). We discuss each of them separately:

**The Universe \(A\):** As explained above, the universe of \(\mathfrak{A}\) will be the collection of \(\sim\)-equivalence classes of the variable-free terms of \(\mathcal{L}^\prime\). For example, if \(\mathcal{L}^\prime\) includes the binary function symbol \(f\), the non-Henkin constant symbol \(k\), and the Henkin constants \(c_1, c_2, \ldots, c_n, \ldots\), then the universe of our structure would include among its elements \(\left[ c_{17} \right]\) and \(\left[ f \left( k, c_3 \right) \right]\).

**The Constants:** For each constant symbol \(c\) of \(\mathcal{L}^\prime\) (including the Henkin constants), we need to pick out an element \(c^\mathfrak{A}\) of the universe to be the element represented by that symbol. We don't do anything fancy here:

\[c^\mathfrak{A} = \left[ c \right].\]

So each constant symbol will denote its own equivalence class.

**The Functions:** If \(f\) is an \(n\)-ary function symbol, we must define an \(n\)-ary function \(f^\mathfrak{A} : A^n \rightarrow A\). Let us write down the definition of \(f^\mathfrak{A}\) and then we can try to figure out exactly what the definition is saying:

\[f^\mathfrak{A} \left( \left[ t_1 \right], \left[ t_2 \right], \ldots, \left[ t_n \right] \right) = \left[ f t_1 t_2 \ldots t_n \right].\]

On the left-hand side of the equality you will notice that there are \(n\) equivalence classes that are the inputs to the function \(f^\mathfrak{A}\). Since the elements of \(A\) are equivalence classes and \(f^\mathfrak{A}\) is an \(n\)-ary function, that should be all right. On the right side of the equation there is a single equivalence class, and the thing inside the brackets is a variable-free term of \(\mathcal{L}^\prime\). Notice that the function \(f^\mathfrak{A}\) acts by placing the symbol \(f\) in front of the terms and then taking the equivalence class of the result.

There is one detail that has to be addressed. We must show that the function \(f^\mathfrak{A}\) is well defined. Let us say a bit about what that means, assuming that \(f\) is a unary function symbol, for simplicity. Notice that our definition of \(f^\mathfrak{A} \left( \left[ t \right] \right)\) depends on the *name* of the equivalence class that we are putting into \(f^\mathfrak{A}\). This might lead to problems, as it is at least conceivable that we could have two terms, \(t_1\) and \(t_2\), such that \(\left[ t_1 \right]\) is the same set as \(\left[ t_2 \right]\), but \(f^\mathfrak{A} \left( \left[ t_1 \right] \right)\) and \(f^\mathfrak{A} \left( \left[ t_2 \right] \right)\) evaluate to be different sets. Then our alleged function \(f^\mathfrak{A}\) wouldn't even be a function. Showing that this does not happen is what we mean when we say that we must show that the function \(f^\mathfrak{A}\) is well defined.

Let us look at the proof that our function \(f^\mathfrak{A}\) is, in fact, well defined. Suppose that \(\left[ t_1 \right] = \left[ t_2 \right]\). We must show that \(f^\mathfrak{A} \left( \left[ t_1 \right] \right) = f^\mathfrak{A} \left( \left[ t_2 \right] \right)\). In other words, we must show that if \(\left[ t_1 \right] = \left[ t_2 \right]\), then \(\left[ f t_1 \right] = \left[ f t_2 \right]\). Again looking at the definition of our equivalence relation \(\sim\), this means that we must show that if \(t_1 = t_2\) is an element of \(\Sigma^\prime\), then so is \(f \left( t_1 \right) = f \left( t_2 \right)\). So assume that \(t_1 = t_2\) is an element of \(\Sigma^\prime\). Here is an outline of a deduction from \(\Sigma^\prime\) of \(f \left( t_1 \right) = f \left( t_2 \right)\):

\[\begin{array}{ll} x = y \rightarrow f \left( x \right) = f \left( y \right) & \text{Axiom (E2)} \\ \vdots & \\ t_1 = t_2 \rightarrow f \left( t_1 \right) = f \left( t_2 \right) & \\ t_1 = t_2 & \text{element of} \: \Sigma^\prime \\ f \left( t_1 \right) = f \left( t_2 \right) & \text{PC} \end{array}\]

Since \(\Sigma^\prime \vdash f \left( t_1 \right) = f \left( t_2 \right)\), Lemma 3.2.5 tells us that \(f \left( t_1 \right) = f \left( t_2 \right)\) is an element of \(\Sigma^\prime\), as needed. So the function \(f^\mathfrak{A}\) is well defined.

**The Relations:** Suppose that \(R\) is an \(n\)-ary relation symbol of \(\mathcal{L}^\prime\). We must define an \(n\)-ary relation \(R^\mathfrak{A}\) on \(A\). In other words, we must decide which \(n\)-tuples of equivalence classes will stand in the relation \(R^\mathfrak{A}\). Here is where we use the elements of \(\Sigma^\prime\). We define \(R^\mathfrak{A}\) by this statement:

\[R^\mathfrak{A} \left( \left[ t_1 \right], \left[ t_2 \right], \ldots, \left[ t_n \right] \right) \: \text{is true if and only if} \: R t_1 t_2 \ldots t_n \in \Sigma^\prime.\]

So elements of the universe are in the relation \(R\) if and only if \(\Sigma^\prime\) *says* they are in the relation \(R\). Of course, we must show that the relation \(R^\mathfrak{A}\) is well defined, also. Or rather, **you** must show that the relation \(R^\mathfrak{A}\) is well defined. See Exercise 8.

At this point we have constructed a perfectly good \(\mathcal{L}^\prime\)-structure. What we have to do next is show that \(\mathfrak{A}\) makes all of the sentences of \(\Sigma^\prime\) true. Then we will have shown that we have constructed a model of \(\Sigma^\prime\).

Proposition 3.2.6.

\(\mathfrak{A} \models \Sigma^\prime\)

Proof

We will in fact prove something slightly stronger. We will prove, for each sentence \(\sigma\), that

\[\sigma \in \Sigma^\prime \: \text{if and only if} \: \mathfrak{A} \models \sigma.\]

Well, since you have noticed, this isn't *really* stronger, as we know that \(\Sigma^\prime\) is maximal. But it does appear stronger, and this version of the proposition is what we need to get the inductive steps to work out nicely.

We proceed by induction on the complexity of the formulas in \(\Sigma^\prime\). For the base case, suppose that \(\sigma\) is an atomic sentence. Then \(\sigma\) is of the form \(R t_1 t_2 \ldots t_n\), where \(R\) is an \(n\)-ary relation symbol and the \(t_i\)'s are variable-free terms. But then our definition of \(R^\mathfrak{A}\) guaranteed that \(\mathfrak{A} \models \sigma\) if and only if \(\sigma \in \Sigma^\prime\). Notice that if \(R\) is \(=\) and \(\sigma\) is \(t_1 = t_2\), then \(\sigma \in \Sigma^\prime\) iff \(t_1 \sim t_2\) iff \(\left[ t_1 \right] = \left[ t_2 \right]\) iff \(\mathfrak{A} \models \sigma\).

For the inductive cases, suppose first that \(\sigma : \equiv \neg \alpha\), where we know by inductive hypothesis that \(\mathfrak{A} \models \alpha\) if and only if \(\alpha \in \Sigma^\prime\). Notice that as \(\Sigma^\prime\) is a maximal consistent set of sentences, we know that \(\sigma \in \Sigma^\prime\) if and only if \(\alpha \notin \Sigma^\prime\). Thus

\[\begin{aligned} \sigma \in \Sigma^\prime \: &\text{if and only if} \: \alpha \notin \Sigma^\prime \\ &\text{if and only if} \: \mathfrak{A} \not\models \alpha \\ &\text{if and only if} \: \mathfrak{A} \models \neg \alpha \\ &\text{if and only if} \: \mathfrak{A} \models \sigma. \end{aligned}\]

The second inductive case, when \(\sigma : \equiv \alpha \lor \beta\), is similar and is left to the Exercises.

The interesting case is when \(\sigma\) is a sentence of the form \(\forall x \phi\). We must show that \(\forall x \phi \in \Sigma^\prime\) if and only if \(\mathfrak{A} \models \forall x \phi\). We do each implication separately.

First, assume that \(\forall x \phi \in \Sigma^\prime\). We must show that \(\mathfrak{A} \models \forall x \phi\), which means that we must show, given an assignment function \(s\), that \(\mathfrak{A} \models \forall x \phi \left[ s \right]\). Since the elements of \(A\) are equivalence classes of variable-free terms, this means that we have to show for any variable-free term \(t\) that

\[\mathfrak{A} \models \phi \left[ s \left[ x | \left[ t \right] \right] \right].\]

But (here is another lemma for you to prove) for any variable-free term \(t\) and any assignment function \(s\), \(\bar{s} \left( t \right) = \left[ t \right]\), and so by Theorem 2.6.2, we need to prove that

\[\mathfrak{A} \models \phi_t^x \left[ s \right].\]

Notice that \(\phi_t^x\) is a sentence, so \(\mathfrak{A} \models \phi_t^x \left[ s\right]\) if and only if \(\mathfrak{A} \models \phi_t^x\). But also notice that \(\Sigma^\prime \vdash \phi_t^x\), as \(\forall x \phi\) is an element of \(\Sigma^\prime\), \(\forall x \phi \rightarrow \phi_t^x\) is a quantifier axiom of type (Q1) (\(t\) is substitutable for \(x\) in \(\phi\) as \(t\) is variable-free), and \(\Sigma^\prime\) is deductively closed for sentences by Lemma 3.2.5. But \(\phi_t^x\) is less complex than \(\forall x \phi\), and thus by our inductive hypothesis, \(\mathfrak{A} \models \phi_t^x\), as needed.

For the reverse direction of our biconditional, assume that \(\forall x \phi \notin \Sigma^\prime\). We need to show that \(\mathfrak{A} \not\models \forall x \phi\). As \(\Sigma^\prime\) is maximal, \(\neg \forall x \phi \in \Sigma^\prime\). By deductive closure again, this means that \(\exists x \neg \phi \in \Sigma^\prime\). From our construction of \(\Sigma^\prime\), we know there is some Henkin constant \(c_i\) such that \(\left( \left[ \exists x \neg \phi \right] \rightarrow \neg \phi \left( c_i \right) \right) \in \Sigma^\prime\), and using deductive closure once again, this tells us that \(\neg \phi \left( c_i \right) \in \Sigma^\prime\). Having stripped off a quantifier, we can assert via the inductive hypothesis that \(\mathfrak{A} \models \neg \phi \left( c_i \right)\), so \(\mathfrak{A} \not\models \forall x \phi\), as needed.

This finishes our proof of Proposition 3.2.6, so we know that the \(\mathcal{L}^\prime\)-structure \(\mathfrak{A}\) is a model of \(\Sigma^\prime\).

### Construction of the Model - Cleaning Up

As you recall, back in our outline of the proof of the Completeness Theorem, we were going to prove the theorem by constructing a model of \(\Sigma\). We are almost there. We have a structure, \(\mathfrak{A}\), we know that \(\mathfrak{A}\) is a model of \(\Sigma^\prime\), and we know that \(\Sigma \subseteq \Sigma^\prime\), so every sentence in \(\Sigma\) is true in the structure \(\mathfrak{A}\). We're just about done. The only problem is that \(\Sigma\) began life as a set of \(\mathcal{L}\)-sentences, while \(\mathfrak{A}\) is an \(\mathcal{L}^\prime\)-structure, not an \(\mathcal{L}\)-structure.

Fortunately, this is easily remedied by a slight bit of amnesia: Define the structure \(\mathfrak{A} \upharpoonright_\mathcal{L}\) (read **\(\mathfrak{A}\) restricted to \(\mathcal{L}\)**, or the **restriction of \(\mathfrak{A}\) to \(\mathcal{L}\)**) as follows: The universe of \(\mathfrak{A} \upharpoonright_\mathcal{L}\) is the same as the universe of \(\mathfrak{A}\). Any constant symbols, function symbols, and relation symbols of \(\mathcal{L}\) are interpreted in \(\mathfrak{A} \upharpoonright_\mathcal{L}\) exactly as they were interpreted in \(\mathfrak{A}\), and we just ignore all of the symbols that were added as we moved from \(\mathcal{L}\) to \(\mathcal{L}^\prime\). Now, \(\mathfrak{A} \upharpoonright_\mathcal{L}\) is a perfectly good \(\mathcal{L}\)-structure, and all that's left to finish the proof of the Completeness Theorem is to work through one last lemma:

Lemma 3.2.7.

*If \(\sigma\) is an \(\mathcal{L}\)-sentence, then \(\mathfrak{A} \models \sigma\) if and only if \(\mathfrak{A} \upharpoonright_\mathcal{L} \models \sigma\).*

proof

(Outline) Use induction on the complexity of \(\sigma\), proving that \(\mathfrak{A} \upharpoonright_\mathcal{L} \models \sigma\) if and only if \(\sigma \in \Sigma^\prime\), as in the proof of Lemma 3.2.6.

Thus, we have succeeded in producing an \(\mathcal{L}\)-structure that is a model of \(\Sigma\), so we know that every consistent set of sentences has a model. By our preliminary remarks, we thus know that if \(\Sigma \models \phi\), then \(\Sigma \vdash \phi\), and our proof of the Completeness Theorem is complete.

### Proofs of the Lemmas

We present here the proofs of two lemmas that were used in the proof of the Completeness Theorem. The first lemma was introduced when we expanded the language \(\mathcal{L}\) to the language \(\mathcal{L}^\prime\) and we were concerned about the consistency of \(\Sigma\) in the new, expanded language.

Lemma 3.2.3.

*If \(\Sigma\) is a consistent set of \(\mathcal{L}\)-sentences and \(\mathcal{L}^\prime\) is an extension by constants of \(\mathcal{L}\), then \(\Sigma\) is consistent when viewed as a set of \(\mathcal{L}^\prime\)-sentences.*

proof

Suppose, by way of contradiction, that \(\Sigma\) is not consistent as a set of \(\mathcal{L}^\prime\)-sentences. Thus there is a deduction (in \(\mathcal{L}^\prime\)) of \(\perp\) from \(\Sigma\). Let \(n\) be the smallest number of new constants used in any such deduction, and let \(D^\prime\) be a deduction using exactly \(n\) such constants. Notice that \(n > 0\), as otherwise \(D^\prime\) would be a deduction of \(\perp\) in \(\mathcal{L}\). We show that there is a deduction of \(\perp\) using fewer than \(n\) constants, a contradiction that establishes the lemma.

Let \(v\) be a variable that does not occur in \(D^\prime\), let \(c\) be one of the new constants that occurs in \(D^\prime\), and let \(D\) be the sequence of formulas \(\left( \phi_i \right)\) that is formed by taking each formula \(\phi_i^\prime\) in \(D^\prime\) and replacing all occurrences of \(c\) in \(\phi_i^\prime\) by \(v\). The last formula in \(D\) is \(\perp\), so if we can show that \(D\) is a deduction, we will be finished.
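If we borrow the book's substitution notation and (loosely) write \(\left( \phi_i^\prime \right)_v^c\) for the result of replacing every occurrence of the constant \(c\) by the variable \(v\), the construction of \(D\) from \(D^\prime\) can be displayed as:

```latex
D' = \left( \phi_1', \phi_2', \ldots, \phi_m' \right)
\qquad \longmapsto \qquad
D = \left( \phi_1, \phi_2, \ldots, \phi_m \right),
\quad \text{where } \phi_i := \left( \phi_i' \right)_v^c .
```

Notice that \(\phi_m = \phi_m^\prime = \perp\), since \(\perp\) contains no constant symbols.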

So we use induction on the elements of the deduction \(D^\prime\). If \(\phi_i^\prime\) is an element of \(D^\prime\) by virtue of being an equality axiom or an element of \(\Sigma\), then \(\phi_i = \phi_i^\prime\), and \(\phi_i\) is an element of a deduction for the same reason. If \(\phi_i^\prime\) is a quantifier axiom, for example \(\left( \forall x \right) \theta^\prime \rightarrow \left( \theta^\prime \right)_{t^\prime}^x\), then \(\phi_i\) will also be a quantifier axiom, in this case \(\left( \forall x \right) \theta \rightarrow \theta_t^x\). There will be no problems with the substitutability of \(t\) for \(x\), given that \(t^\prime\) is substitutable for \(x\). If \(\phi_i^\prime\) is an element of the deduction by virtue of being the conclusion of a rule of inference \(\left( \Gamma^\prime, \phi_i^\prime \right)\), then \(\left( \Gamma, \phi_i \right)\) will be a rule of inference that justifies \(\phi_i\).

This completes the argument that \(D\) is a deduction of \(\perp\). Since \(D\) clearly uses fewer new constant symbols than \(D^\prime\), we have our contradiction and our proof is complete.

The second lemma was needed when we added the Henkin axioms to our consistent set of sentences \(\Sigma\). We needed to prove that the resulting set, \(\hat{\Sigma}\), was still consistent.

Lemma 3.2.4.

*If \(\Sigma\) is a consistent set of sentences and \(\hat{\Sigma}\) is created by adding Henkin axioms to \(\Sigma\), then \(\hat{\Sigma}\) is consistent.*

proof

Suppose that \(\hat{\Sigma}\) is not consistent. Let \(n\) be the smallest number of Henkin axioms used in any possible deduction from \(\hat{\Sigma}\) of \(\perp\). Fix such a set of \(n\) Henkin axioms, and let \(\alpha\) be one of those Henkin axioms. So we know that

\[\Sigma \cup H \cup \{ \alpha \} \vdash \perp,\]

where \(H\) is the collection of the other \(n - 1\) Henkin axioms needed in the proof. Now \(\alpha\) is of the form \(\exists x \phi \rightarrow \phi \left( c \right)\), where \(c\) is a Henkin constant and \(\phi \left( c \right)\) is our shorthand for \(\phi_c^x\).

By the Deduction Theorem (Theorem 2.7.4), as \(\alpha\) is a sentence, this means that \(\Sigma \cup H \vdash \neg \alpha\). Since \(\neg \alpha\) is \(\neg \left[ \exists x \phi \rightarrow \phi \left( c \right) \right]\), which tautologically implies both \(\exists x \phi\) and \(\neg \phi \left( c \right)\), we get

\[\Sigma \cup H \vdash \exists x \phi \: \: \: \: \: \text{and} \: \: \: \: \: \Sigma \cup H \vdash \neg \phi_c^x.\]

Since \(\exists x \phi\) is the same as \(\neg \forall x \neg \phi\), from the first of these facts we know that

\[\Sigma \cup H \vdash \neg \forall x \neg \phi.\]

We also know that \(\Sigma \cup H \vdash \neg \phi_c^x\). If we take a deduction of \(\neg \phi_c^x\) and replace each occurrence of the constant \(c\) by a new variable \(z\), the result is still a deduction (as in the proof of Lemma 3.2.3 above), so \(\Sigma \cup H \vdash \neg \phi_z^x\). By Lemma 2.7.2, we know that

\[\Sigma \cup H \vdash \forall z \neg \phi_z^x.\]

Our quantifier axiom (Q1) states that as long as \(x\) is substitutable for \(z\) in \(\neg \phi_z^x\), (which it is, as \(z\) is a *new* variable), then we may assert that \(\left[ \forall z \neg \phi_z^x \right] \rightarrow \neg \left( \phi_z^x \right)_x^z\). Therefore

\[\Sigma \cup H \vdash \neg \left( \phi_z^x \right)_x^z.\]

But \(\left( \phi_z^x \right)_x^z = \phi\), so \(\Sigma \cup H \vdash \neg \phi\). But now we can use Lemma 2.7.2 again to conclude that

\[\Sigma \cup H \vdash \forall x \neg \phi.\]

So, combining \(\Sigma \cup H \vdash \neg \forall x \neg \phi\) with \(\Sigma \cup H \vdash \forall x \neg \phi\), we see that \(\Sigma \cup H \vdash \perp\). This is a contradiction, as \(\Sigma \cup H\) contains only \(n - 1\) Henkin axioms. Thus we are led to conclude that \(\hat{\Sigma}\) is consistent.
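For reference, the proof of Lemma 3.2.4 condenses into a single chain of derivability facts, each annotated with the step of the argument above that produces it:

```latex
\begin{align*}
\Sigma \cup H \cup \{ \alpha \} &\vdash \perp
  && \text{(a shortest refutation uses } n \text{ Henkin axioms)}\\
\Sigma \cup H &\vdash \neg \alpha
  && \text{(Deduction Theorem)}\\
\Sigma \cup H &\vdash \neg \forall x\, \neg \phi
  \;\text{ and }\; \Sigma \cup H \vdash \neg \phi_c^x
  && \text{(propositional logic, } \exists x \phi \equiv \neg \forall x \neg \phi \text{)}\\
\Sigma \cup H &\vdash \forall z\, \neg \phi_z^x
  && \text{(replace } c \text{ by } z \text{; Lemma 2.7.2)}\\
\Sigma \cup H &\vdash \neg \phi
  && \text{(quantifier axiom (Q1))}\\
\Sigma \cup H &\vdash \forall x\, \neg \phi
  && \text{(Lemma 2.7.2)}\\
\Sigma \cup H &\vdash \perp
  && \text{(contradicting the minimality of } n \text{)}.
\end{align*}
```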

## Exercises

- Suppose that \(\Sigma\) is inconsistent and \(\phi\) is an \(\mathcal{L}\)-formula. Prove that \(\Sigma \vdash \phi\).
- Assume that \(\Sigma_0 \subseteq \Sigma_1 \subseteq \Sigma_2 \subseteq \cdots\) are such that each \(\Sigma_i\) is a consistent set of sentences in a language \(\mathcal{L}\). Show that \(\bigcup \Sigma_i\) is consistent.
- Show that if \(\Pi\) is any consistent set of sentences and \(\sigma\) is a sentence such that \(\Pi \cup \{ \sigma \}\) is inconsistent, then \(\Pi \cup \{ \neg \sigma \}\) is consistent. Conclude that in the proof of the Completeness Theorem, if \(\Sigma^k\) is consistent, then \(\Sigma^{k+1}\) is consistent.
- Prove that the \(\Sigma^\prime\) constructed in the proof of the Completeness Theorem is consistent. [*Suggestion:* Deductions are finite in length.]
- Prove Lemma 3.2.5.
- Toward a proof of the Completeness Theorem in a more general setting:

(a) Do not assume that the language \(\mathcal{L}\) is countable. Suppose that you have been given a set of sentences \(\Sigma_\text{max}\) that is maximal and consistent. So for each sentence \(\sigma\), either \(\sigma \in \Sigma_\text{max}\) or \(\neg \sigma \in \Sigma_\text{max}\). Mimic the proof of Proposition 3.2.6 to convince yourself that we can construct a model \(\mathfrak{A}\) of \(\Sigma\).

(b) Zorn's Lemma implies the following: If we are given a consistent set of \(\mathcal{L}^\prime\)-sentences \(\hat{\Sigma}\), then the collection of consistent extensions of \(\hat{\Sigma}\) has a maximal (with respect to \(\subseteq\)) element \(\Sigma_\text{max}\). If you are familiar with Zorn's Lemma, prove this fact.

(c) Use parts (a) and (b) of this problem to outline a proof of the Completeness Theorem in the case where the language \(\mathcal{L}\) is not countable.

- Complete the proof of the claim that the relation \(\sim\) is an equivalence relation.
- Show that the relation \(R^\mathfrak{A}\) of the structure \(\mathfrak{A}\) is well defined. So let \(R\) be a relation symbol (a unary relation symbol is fine), and show that if \(\left[ t_1 \right] = \left[ t_2 \right]\), then \(R^\mathfrak{A} \left( \left[ t_1 \right] \right)\) is true if and only if \(R^\mathfrak{A} \left( \left[ t_2 \right] \right)\) is true.
- Finish the inductive clause of the proof of Proposition 3.2.6.
- Fill in the details of the proof of Lemma 3.2.7.