# 6.1: Relations

A **relation** in mathematics is a symbol that can be placed between two numbers (or variables) to create a logical statement (or open sentence). The main point here is that inserting a relation symbol between two numbers creates a statement whose value is either true or false. For example, we have previously seen the divisibility symbol (\(|\)) and noted the common error of mistaking it for the division symbol (\(/\)): one of these tells us to perform an arithmetic operation, the other asks whether, *if* such an operation were performed, there would be a remainder. Many other symbols we have seen share this characteristic. The most important is probably \(=\), but there are lots: \(\neq\), \(<\), \(\leq\), \(>\), \(\geq\) all work this way; if we place one of them between two numbers we get a statement that is either true or false. If, instead of numbers, we think of placing sets on either side of a relation symbol, then \(=\), \(⊆\) and \(⊇\) are valid relation symbols. If we think of placing logical expressions on either side of a relation then, honestly, *any* of the logical symbols is a relation, but we normally think of \(∧\) and \(∨\) as operators and give things like \(≡\), \(\implies\) and \(\iff\) the status of relations.

In the examples we’ve looked at, the things on either side of a relation are of the same type. This is usually, but not always, the case. The prevalence of relations comparing the same kind of things has even led to the aphorism “Don’t compare apples and oranges.” Think about the symbol \(∈\) for a moment. As we’ve seen previously, it isn’t usually appropriate to put **sets** on either side of this; we might have numbers or other objects on the left and sets on the right. Let’s look at a small example. Let \(A = \{1, 2, 3, a, b\}\) and let \(B = \{\{1, 2, a\}, \{1, 3, 5, 7, \ldots\}, \{1\}\}\). The “element of” relation, \(∈\), is a **relation from** \(A\) to \(B\).
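To make the example above concrete, here is a small sketch (the variable names are assumptions, not from the text) of the “element of” relation as a set of pairs. Since Python sets cannot contain other sets, the members of \(B\) are modeled as frozensets, and the infinite set of odd numbers is truncated to a finite stand-in:

```python
# Sketch: the "element of" relation from A to B as the set of pairs
# (x, S) with x in A, S in B, and x a member of S.
A = {1, 2, 3, "a", "b"}
odds = frozenset({1, 3, 5, 7})  # finite stand-in for {1, 3, 5, 7, ...}
B = {frozenset({1, 2, "a"}), odds, frozenset({1})}

element_of = {(x, S) for x in A for S in B if x in S}

print((1, frozenset({1})) in element_of)  # True: 1 is in {1}
print(("b", odds) in element_of)          # False: b is not an odd number
```

Note that the pairs mix types freely: a number or letter on the left, a set on the right, exactly the “apples and oranges” situation described above.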

A diagram such as we have given in Figure \(6.1.1\) seems like a very natural thing. Such pictures certainly give us an easy visual tool for thinking about relations, but we should point out certain hidden assumptions. First, they only work if we are dealing with finite sets, or sets like the odd numbers in our example (sets that are infinite but could in principle be listed). Second, by drawing the two sets separately, we seem to be assuming that they are not only different, but disjoint. In fact, the sets need not be disjoint, and often (most of the time!) we have relations that go from a set to itself, so the sets in a picture like this may be identical. In Figure \(6.1.2\) we illustrate the divisibility relation on the set of all divisors of \(6\); this is an example in which the sets on either side of the relation are the same. Notice the linguistic distinction: we talk about either “a relation from \(A\) to \(B\)” (when there are really two different sets) or “a relation on \(A\)” (when there is only one).

Purists will note that it is really inappropriate to represent the same set in two different places in a Venn diagram. The diagram in Figure \(6.1.2\) should really look like this:

Indeed, this representation is definitely preferable, although it may be more crowded. A picture such as this is known as the **directed graph** (a.k.a. **digraph**) of the relation.

Recall that when we were discussing sets we said the best way to describe a set is simply to list all of its elements. Well, what is the best way to describe a relation? In the same spirit, it would seem we should explicitly list all the things that make the relation true. But it takes a **pair** of things, one to go on the left side and one to go on the right, to make a relation true (or, for that matter, false!). Also, it should be evident that order is important in this context; for example, \(2 < 3\) is true but \(3 < 2\) isn’t. The identity of a relation is so intimately tied up with the set of ordered pairs that make it true that, when dealing with abstract relations, we *define them* as sets of ordered pairs.

Given two sets, \(A\) and \(B\), the **Cartesian product** of \(A\) and \(B\) is the set of all ordered pairs \((a, b)\) where \(a\) is in \(A\) and \(b\) is in \(B\). We denote the Cartesian product using the symbol \(×\).

\(A × B = \{(a, b) \mid a ∈ A ∧ b ∈ B\}\)

From here on out in your mathematical career, you’ll need to take note of the context that the symbol \(×\) appears in. If it appears between numbers go ahead and multiply, but if it appears between sets you’re doing something different – forming the Cartesian product.
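As a small illustration (the sets here are made up, not from the text), the Cartesian product of two finite sets can be enumerated with `itertools.product`:

```python
# Sketch: the Cartesian product A x B as the set of all ordered pairs.
from itertools import product

A = {1, 2}
B = {"x", "y"}

cartesian = set(product(A, B))
# {(1, 'x'), (1, 'y'), (2, 'x'), (2, 'y')}

print(len(cartesian))          # 4 = |A| * |B|
print((1, "x") in cartesian)   # True
print(("x", 1) in cartesian)   # False: these are ordered pairs, so order matters
```

Notice that \(|A × B| = |A| \cdot |B|\), which is one reason “product” is an apt name.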

The familiar \(x\)–\(y\) plane is often called the Cartesian plane. This is done for two reasons. Rene Descartes, the famous mathematician and philosopher, was the first to consider coordinatizing the plane and thus is responsible for our current understanding of the relationship between geometry and algebra. Rene Descartes’ name is also memorialized in the definition of the Cartesian product of sets, and the plane is nothing more than the product \(\mathbb{R} × \mathbb{R}\). Indeed, the plane provided the very first example of the concept that was later generalized to the Cartesian product of sets.

Suppose \(A = \{1, 2, 3\}\) and \(B = \{a, b, c\}\). Is \((a, 1)\) in the Cartesian product \(A × B\)? List all elements of \(A × B\).

In the abstract, we can define a relation as any subset of an appropriate Cartesian product. So an abstract relation \(\text{R}\) from a set \(A\) to a set \(B\) is just some subset of \(A × B\). Similarly, a relation \(\text{R}\) on a set \(S\) is defined by a subset of \(S × S\). This definition looks a little bit strange when we apply it to an actual (concrete) relation that we already know about. Consider the relation “less than.” To describe “less than” as a subset of a Cartesian product we must write

\(< = \{(x, y) ∈ \mathbb{R} × \mathbb{R} \mid y − x ∈ \mathbb{R}^+\}\).

This looks funny.

Also, if we have defined some relation \(\text{R} ⊆ A × B\), then in order to say that a particular pair, \((a, b)\), of things make the relation true we have to write

\(a\text{R}b\).

This looks funny too.

Despite the strange appearances, these examples do express the correct way to deal with relations.

Let’s do a completely made-up example. Suppose \(A\) is the set \(\{a, e, i, o, u\}\) and \(B\) is the set \(\{r, s, t, l, n\}\) and we define a relation from \(A\) to \(B\) by

\(R = \{(a, s), (a, t), (a, n), (e, t), (e, l), (e, n), (i, s), (i, t), (o, r), (o, n), (u, s)\}\).

Then, for example, because \((e, t) ∈ \text{R}\) we can write \(e\text{R}t\). We indicate the negation of the concept that two elements are related by drawing a slash through the name of the relation; for example, the notation \(\neq\) is certainly familiar to you, as is \(\nless\) (although in this latter case we would normally write \(\geq\) instead). We can denote the fact that \((a, l)\) is not a pair that makes the relation true by writing \(a \not\mathrel{\text{R}} l\).
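The made-up relation above can be transcribed directly as a set of pairs; the helper `related` is hypothetical, introduced just to mirror the \(a\text{R}b\) notation:

```python
# Sketch: the made-up relation R from the vowels A to the consonants B,
# stored as the set of ordered pairs that make the relation true.
A = {"a", "e", "i", "o", "u"}
B = {"r", "s", "t", "l", "n"}
R = {("a", "s"), ("a", "t"), ("a", "n"), ("e", "t"), ("e", "l"), ("e", "n"),
     ("i", "s"), ("i", "t"), ("o", "r"), ("o", "n"), ("u", "s")}

def related(x, y):
    """True exactly when x R y, i.e. when (x, y) is a pair in R."""
    return (x, y) in R

print(related("e", "t"))  # True:  e R t
print(related("a", "l"))  # False: a is not related to l
```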

We should mention another way of visualizing relations. When we are dealing with a relation on \(\mathbb{R}\), the relation is actually a subset of \(\mathbb{R} × \mathbb{R}\), that means we can view the relation as a subset of the \(x\)–\(y\) plane. In other words, we can graph it. The graph of the “\(<\)” relation is given in Figure \(6.1.3\).

A relation on any set that is a subset of \(\mathbb{R}\) can likewise be graphed. The graph of the “\(|\)” relation is given in Figure \(6.1.4\).
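Although we can’t reproduce the graph here, a short sketch (the window \(\{1, \ldots, 6\}\) is an arbitrary choice, not from the text) can at least enumerate the points of the divisibility relation restricted to a finite region of the plane:

```python
# Sketch: the points of the divisibility relation "|" restricted to the
# finite window {1,...,6} x {1,...,6}.  x | y holds when y % x == 0.
divides = {(x, y) for x in range(1, 7) for y in range(1, 7) if y % x == 0}

print(sorted(divides))
# [(1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (1, 6),
#  (2, 2), (2, 4), (2, 6), (3, 3), (3, 6), (4, 4), (5, 5), (6, 6)]
```

Plotting these pairs as points in the plane gives the sort of scattered, lattice-like picture one expects for divisibility, quite unlike the solid half-plane of “\(<\).”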

Eventually, we will get around to defining functions as relations that have a certain nice property. For the moment, we’ll just note that some of the operations that you are used to using with functions also apply with relations. When one function “undoes” what another function “does” we say the functions are inverses. For example, the function \(f(x) = 2x\) (i.e. doubling) and the function \(g(x) = \dfrac{x}{2}\) (halving) are inverse functions because, no matter what number we start with, if we double it and then halve that result, we end up with the original number. The inverse of a relation \(\text{R}\) is written \(\text{R}^{−1}\) and it consists of the reversals of the pairs in \(\text{R}\),

\(\text{R}^{−1} = \{(b, a) \mid (a, b) ∈ \text{R}\}\).

This can also be expressed by writing

\(b\text{R}^{−1}a \iff a\text{R}b.\)
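A quick sketch (the function name `inverse` and the three sample pairs are hypothetical) showing that reversing each pair implements \(\text{R}^{−1}\), and that \(b\text{R}^{−1}a\) holds exactly when \(a\text{R}b\):

```python
# Sketch: the inverse of a relation is obtained by reversing every pair.
def inverse(R):
    return {(b, a) for (a, b) in R}

R = {("e", "t"), ("a", "s"), ("o", "r")}
R_inv = inverse(R)

print(R_inv == {("t", "e"), ("s", "a"), ("r", "o")})  # True
# b R^{-1} a holds exactly when a R b:
print(all(((b, a) in R_inv) == ((a, b) in R) for (a, b) in R))  # True
```

Note also that inverting twice gives back the original relation, just as with inverse functions.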

The process of “doing one function and then doing another” is known as functional composition. For instance, if \(f(x) = 2x + 1\) and \(g(x) = \sqrt{x}\), then we can compose them (in two different orders) to obtain either \(f(g(x)) = 2 \sqrt{x} + 1\) or \(g(f(x)) = \sqrt{2x + 1}\). When composing functions there is an “intermediate result” that you get by applying the first function to your input, and then you calculate the second function’s value at the intermediate result. (For example, in calculating \(g(f(4))\) we get the intermediate result \(f(4) = 9\) and then we go on to calculate \(g(9) = 3\).)

The definition of the composite of two relations focuses very much on this idea of the intermediate result. Suppose \(\text{R}\) is a relation from \(A\) to \(B\) and \(\text{S}\) is a relation from \(B\) to \(C\) then the composite \(\text{S} ◦ \text{R}\) is given by

\(\text{S} ◦ \text{R} = \{(a, c) \mid ∃b ∈ B, (a, b) ∈ \text{R} ∧ (b, c) ∈ \text{S}\}\).

In this definition, \(b\) is the “intermediate result”; if there is no such \(b\) that serves to connect \(a\) to \(c\), then \((a, c)\) won’t be in the composite. Also, notice that this is the composition \(\text{R}\) first, then \(\text{S}\), but it is written as \(\text{S} ◦ \text{R}\) – watch out for this! Compositions of relations should be read from right to left. This convention makes sense when you consider functional composition: \(f(g(x))\) means \(g\) first, then \(f\), so if we use the “little circle” notation for the composition of relations we have \((f ◦ g)(x) = f(g(x))\), which is nice because the symbols \(f\) and \(g\) appear in the same order. But beware! There are atavists out there who write their compositions the other way around.
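The definition can be sketched directly in code; the two relations below are hypothetical examples chosen for illustration, not the ones from the text:

```python
# Sketch: relational composition S ∘ R (R first, then S).  A pair (a, c)
# belongs to the composite exactly when some intermediate b has
# (a, b) in R and (b, c) in S.
def compose(S, R):
    return {(a, c) for (a, b1) in R for (b2, c) in S if b1 == b2}

R = {(1, "a"), (2, "b")}      # a relation from {1, 2} to {a, b}
S = {("a", "x"), ("a", "y")}  # a relation from {a, b} to {x, y}

print(compose(S, R))  # {(1, 'x'), (1, 'y')}: b = 'a' links 1 to both x and y
print(compose(R, S))  # set(): no output of S is an input of R
```

Notice that the argument order in `compose(S, R)` mirrors the right-to-left reading of \(\text{S} ◦ \text{R}\).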

You should probably have a diagram like the following in mind while thinking about the composition of relations. Here, we have the set \(A = \{1, 2, 3, 4\}\), the set \(B\) is \(\{a, b, c, d\}\) and \(C = \{w, x, y, z\}\). The relation \(\text{R}\) goes from \(A\) to \(B\) and consists of the following set of pairs,

\(R = \{(1, a), (1, c), (2, d), (3, c), (3, d)\}\).

The relation \(\text{S}\) goes from \(B\) to \(C\):

\(S = \{(a, y), (b, w), (b, x), (b, z)\}\).

Notice that the composition \(\text{R} ◦ \text{S}\) is impossible (or, more properly, it is empty). Why? What is the (only) pair in the composition \(\text{S} ◦ \text{R}\)?

## Exercises:

The lexicographic order, \(<_{\text{lex}}\), is a relation on the set of all words, where \(x <_{\text{lex}} y\) means that \(x\) would come before \(y\) in the dictionary. Consider just the three letter words like “iff”, “fig”, “the”, et cetera. Come up with a usable definition for \(x_1x_2x_3 <_{\text{lex}} y_1y_2y_3\).

What is the graph of “\(=\)” in \(\mathbb{R} × \mathbb{R}\)?

The **inverse** of a relation \(\text{R}\) is denoted \(\text{R}^{−1}\). It contains exactly the same ordered pairs as \(\text{R}\) but with the order switched. (So technically, they aren’t *exactly* the same ordered pairs . . . )

\(\text{R}^{−1} = \{(b, a) \mid (a, b) ∈ \text{R}\}\)

Define a relation \(\text{S}\) on \(\mathbb{R}\) by \(\text{S} = \{(x, y) \mid y = \sin x\}\). What is \(\text{S}^{−1}\)? Draw a single graph containing \(\text{S}\) and \(\text{S}^{−1}\).

The “socks and shoes” rule is a very silly little mnemonic for remembering how to invert a composition. If we think of undoing the process of putting on our socks and shoes (that’s socks first, then shoes) we have to first remove our shoes, *then* take off our socks.

The socks and shoes rule is valid for relations as well.

Prove that \((\text{S} ◦ \text{R})^{−1} = \text{R}^{−1} ◦ \text{S}^{−1}\).