1.1: What is Order?
Above we informally spoke of two different ordered sets: the order on system connectivity and the order on booleans false ≤ true. Then we related these two ordered sets by means of Alice’s observation Φ. Before continuing, we need to make such ideas more precise. We begin in Section 1.2.1 with a review of sets and relations. In Section 1.2.2 we will give the definition of a preorder—short for preordered set—and a good number of examples.
Review of sets, relations, and functions
We will not give a definition of set here, but informally we will think of a set as a collection of things, known as elements. These things could be all the leaves on a certain tree, or the names of your favorite fruits, or simply some symbols a, b, c. For example, we write A = {h, 1} to denote the set, called A, that contains exactly two elements, one called h and one called 1. The set {h, h, 1, h, 1} is exactly the same as A because they both contain the same elements, h and 1, and repeating an element more than once in the notation doesn’t change the set.3 For an arbitrary set X, we write x \(\in\) X if x is an element of X; so we have \(h \in A\) and \(1 \in A\), but \(0 \notin A\).
Here are some important sets from mathematics, along with the notation we will use for them, that will appear again in this book.
- \(\varnothing\) denotes the empty set; it has no elements.
- {1} denotes a set with one element, namely 1.
- \(\mathbb{B}\) denotes the set of booleans; it has two elements, true and false.
- \(\mathbb{N}\) denotes the set of natural numbers; it has elements \(0,1,2,3, \ldots, 90^{717}, \ldots\)
- \(\underline{n}\), for any \(n \in \mathbb{N}\), denotes the \(n^{\text{th}}\) ordinal; it has \(n\) elements \(1, 2, \ldots, n\). For example, \(\underline{0}=\varnothing\), \(\underline{1}=\{1\}\), and \(\underline{5}=\{1,2,3,4,5\}\).
- \(\mathbb{Z}\) denotes the set of integers; it has elements \(\ldots, -2, -1, 0, 1, 2, \ldots, 90^{717}, \ldots\)
- \(\mathbb{R}\) denotes the set of real numbers; it has elements like \(\pi, 3.14, 5\sqrt{2}, e, e^{2}, -1457, 90^{717}\), etc.
Given sets X and Y, we say that X is a subset of Y, and write X ⊆ Y, if every element in X is also in Y. For example, {h} \(\subseteq\) A. Note that the empty set \(\varnothing := \{\}\) is a subset of every other set.4 Given a set Y and a property P that is either true or false for each element of Y, we write {y \(\in\) Y | P(y)} to mean the subset of those y’s that satisfy P.
- Is it true that N = {n \(\in\) Z | n ≥ 0}?
- Is it true that N = {n \(\in\) Z | n ≥ 1}?
- Is it true that Ø = {n \(\in\) Z | 1 < n < 2}?
If both X1 and X2 are subsets of Y, their union, denoted X1 ∪ X2, is also a subset of Y, namely the one containing the elements in X1 and the elements in X2 but no more. For example if Y = {1,2,3,4} and X1 = {1,2} and X2 = {2,4}, then X1 \(\cup\) X2 = {1,2,4}. Note that Ø \(\cup\) X = X for any X \(\subseteq\) Y.
Similarly, if both X1 and X2 are subsets of Y, then their intersection, denoted X1 \(\cap\) X2, is also a subset of Y, namely the one containing all the elements of Y that are both in X1 and in X2, and no others. So {1,2,3} \(\cap\) {2,5} = {2}.
What if we need to union or intersect a lot of subsets? For example, consider the sets \(X_0 = \varnothing\), \(X_1 = \{1\}\), \(X_2 = \{1,2\}\), etc. as subsets of \(\mathbb{N}\), and suppose we want to know what the union of all of them is. This union is written \(\bigcup_{n \in \mathbb{N}} X_n\), and it is the subset of \(\mathbb{N}\) that contains every element of every \(X_n\), but no others. Namely, \(\bigcup_{n \in \mathbb{N}} X_n = \{n \in \mathbb{N} \mid n \geq 1\}\). Similarly one can write \(\bigcap_{n \in \mathbb{N}} X_n\) for the intersection of all of them, which is empty in the above case.
Given two sets X and Y, the product X × Y of X and Y is the set of pairs (x, y), where x \(\in\) X and y \(\in\) Y.
Finally, we may want to take a disjoint union of two sets, even if they have elements in common. Given two sets X and Y, their disjoint union X \(\sqcup\) Y is the set of pairs of the form (x, 1) or (y, 2), where x \(\in\) X and y \(\in\) Y.
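These two constructions are easy to experiment with. Below is a minimal sketch in Python (the function names `product` and `disjoint_union` are ours) of the product and disjoint union of finite sets, using the tags 1 and 2 from the definition above to keep shared elements apart:

```python
# Hypothetical sketch: the product X x Y and disjoint union of two
# finite sets, following the definitions in the text.

def product(X, Y):
    """All pairs (x, y) with x in X and y in Y."""
    return {(x, y) for x in X for y in Y}

def disjoint_union(X, Y):
    """Tag elements of X with 1 and elements of Y with 2,
    so that shared elements are kept apart."""
    return {(x, 1) for x in X} | {(y, 2) for y in Y}

A = {"h", 1}
B = {1, 2, 3}
print(len(product(A, B)))         # 6 pairs
print(len(disjoint_union(A, B)))  # 5 tagged elements
```

Note that the disjoint union of A = {h, 1} and B = {1, 2, 3} has 2 + 3 = 5 elements even though A and B share the element 1; the tags (1, 1) and (1, 2) record which copy of 1 is meant.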
Let A = {h, 1} and B = {1, 2, 3}.
- There are eight subsets of B; write them out.
- Take any two nonempty subsets of B and write out their union.
- There are six elements in A × B; write them out. ♦
- There are five elements of A \(\sqcup\) B; write them out.
- If we consider A and B as subsets of the set {h, 1, 2, 3}, there are four elements of A \(\cup\) B; write them out.
Relationships between different sets (for example, between the set of trees in your neighborhood and the set of your favorite fruits) are captured using subsets and product sets.
Let X and Y be sets. A relation between X and Y is a subset R \(\subseteq\) X × Y. A binary relation on X is a relation between X and X, i.e. a subset R \(\subseteq\) X × X.
It is convenient to use something called infix notation for binary relations R \(\subseteq\) A × A.
This means one picks a symbol, say ⋆, and writes a ⋆ b to mean (a, b) \(\in\) R.
There is a binary relation on \(\mathbb{R}\) with infix notation ≤. Rather than writing (5, 6) \(\in\) R, we write 5 ≤ 6. Other examples of infix notation for relations are =, ≈, <, >. In number theory, one is interested in whether one number divides into another without remainder; this relation is denoted with infix notation |, so 5 | 10.
Partitions and equivalence relations. We can now define partitions more formally.
If A is a set, a partition of A consists of a set P and, for each p \(\in\) P, a nonempty subset Ap \(\subseteq\) A, such that
\( A = \bigcup_{p \in P} A_{p}, \quad \text{and if } p \neq q \text{ then } A_{p} \cap A_{q} = \varnothing. \) (1.15)
We may denote the partition by \(\left\{A_{p}\right\}_{p \in P}\). We refer to P as the set of part labels and, if p \(\in\) P is a part label, we refer to \(A_p\) as the p\(^{th}\) part. The condition (1.15) says that each element a \(\in\) A is in exactly one part. We consider two different partitions \(\left\{A_{p}\right\}_{p \in P}\) and \(\left\{A_{p^{\prime}}^{\prime}\right\}_{p^{\prime} \in P^{\prime}}\) of A to be the same if for each p \(\in\) P there exists a p′ \(\in\) P′ with \(A_{p} = A'_{p'}\). In other words, if two ways to divide A into parts are exactly the same (the only change is in the labels) then we don’t make a distinction between them.
Suppose that A is a set and \(\left\{A_{p}\right\}_{p \in P}\) and \(\left\{A_{p^{\prime}}^{\prime}\right\}_{p^{\prime} \in P^{\prime}}\) are two partitions of A such that for each p \(\in\) P there exists a p′ \(\in\) P′ with \(A_{p} = A'_{p'}\).
- Show that for each p \(\in\) P there is at most one p′ \(\in\) P′ such that \(A_{p} = A'_{p'}\).
- Show that for each p′ \(\in\) P′ there is a p \(\in\) P such that \(A_{p} = A'_{p'}\).
Consider the partition shown below:

For any two elements a, b \(\in\) {11, 12, 13, 21, 22, 23}, let’s allow ourselves to write a ∼ b between them if a and b are both in the same part. Write down every pair of elements (a, b) that are in the same part. There should be 10.\(^{5}\) ♦
We will see in Proposition 1.19 that there is a strong relationship between partitions and something called equivalence relations, which we define next.
Let A be a set. An equivalence relation on A is a binary relation, let’s give it infix notation ∼, satisfying the following three properties:
- a ∼ a, for all a \(\in\) A,
- a ∼ b iff b ∼ a, for all a, b \(\in\) A, and
- if a ∼ b and b ∼ c then a ∼ c, for all a, b, c \(\in\) A.
These properties are called reflexivity, symmetry, and transitivity, respectively.
'Iff’ is short for ‘if and only if’.
Let A be a set. There is a one-to-one correspondence between the ways to partition A and the equivalence relations on A.
Proof. We first show that every partition gives rise to an equivalence relation, and then that every equivalence relation gives rise to a partition. Our two constructions will be mutually inverse, proving the proposition.
Suppose we are given a partition \(\left\{A_{p}\right\}_{p \in P}\); we define a relation ∼ and show it is an equivalence relation. Define a ∼ b to mean that a and b are in the same part: there is some p \(\in\) P such that a \(\in A_{p}\) and b \(\in A_{p}\). It is obvious that a is in the same part as itself. Similarly, it is obvious that if a is in the same part as b then b is in the same part as a, and that if further b is in the same part as c then a is in the same part as c. Thus ∼ is an equivalence relation as defined in Definition 1.18.
Suppose instead that we are given an equivalence relation ∼; we will form a partition on A by saying what the parts are. Say that a subset X \(\subseteq\) A is (∼)-closed if, for every x \(\in\) X and x′ ∼ x, we have x′ \(\in\) X. Say that a subset X \(\subseteq\) A is (∼)-connected if it is nonempty and x ∼ y for every x, y \(\in\) X. Then the parts corresponding to ∼ are exactly the (∼)-closed, (∼)-connected subsets. It is not hard to check that these indeed form a partition. □
Let’s complete the “it’s not hard to check” part in the proof of Proposition 1.19. Suppose that ∼ is an equivalence relation on a set A, and let P be the set of (∼)-closed and (∼)-connected subsets {\(\left\{A_{p}\right\}_{p \in P}\)}.
- Show that each part \(A_{p}\) is nonempty.
- Show that if p \(\neq\) q, i.e. if \(A_{p}\) and \(A_{q}\) are not exactly the same set, then \(A_{p}\) ∩ \(A_{q}\) = Ø.
- Show that A = \(\cup\)p \(\in\) P \(A_{p}\). ♦
Given a set A and an equivalence relation ∼ on A, we say that the quotient A/∼ of A under ∼ is the set of parts of the corresponding partition.
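As a small illustration of Proposition 1.19 and the definition of quotient, here is a sketch (the function name `quotient` is ours) that computes A/∼ for a finite set. It exploits the fact that, for an equivalence relation, membership in a part can be tested against any single representative:

```python
# Sketch: compute the quotient A/~ of a finite set A under an
# equivalence relation, given as a predicate equiv(a, b).

def quotient(A, equiv):
    """Return the set of parts (as frozensets) of the induced partition."""
    parts = []
    for a in A:
        for part in parts:
            # symmetry + transitivity: one representative suffices
            if equiv(a, next(iter(part))):
                part.add(a)
                break
        else:
            parts.append({a})
    return {frozenset(p) for p in parts}

# Example: elements are equivalent when they share a first digit.
S = {11, 12, 13, 21, 22, 23}
print(quotient(S, lambda a, b: a // 10 == b // 10))
# two parts: {11, 12, 13} and {21, 22, 23}
```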
Functions. The most frequently used sort of relation between sets is that of functions.
Let S and T be sets. A function from S to T is a subset F \(\subseteq\) S × T such that for all s \(\in\) S there exists a unique t \(\in\) T with (s, t) \(\in\) F.
The function F is often denoted F: S → T. From now on, we write F(s) = t, or sometimes s \(\mapsto\) t, to mean (s, t) \(\in\) F. For any t \(\in\) T, the preimage of t along F is the subset {s \(\in\) S | F(s) = t}.
A function is called surjective, or a surjection, if for all t ∈ T there exists s \(\in\) S with F(s) = t. A function is called injective, or an injection,
if for all t \(\in\) T and s1, s2 \(\in\) S with F(s1) = t and F(s2) = t, we have s1 = s2. A function is called bijective if it is both surjective and injective.
We use various decorations on arrows, to denote these special sorts of functions. Here is a table with the name, arrow decoration, and an example of each sort of function:
[Table: the name, arrow decoration, and an example for each of: arbitrary function, surjective function, injective function, bijective function]

An important but very simple sort of function is the identity function on a set X, denoted \(id_X\). It is the bijective function \(id_X(x) = x\).
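For finite sets, these properties are easy to check mechanically. Here is a sketch (function names ours), representing a function F : S → T as a Python dict from domain elements to their images:

```python
# Sketch: test surjectivity, injectivity, and bijectivity of a
# function between finite sets, given as a dict F : S -> T.

def is_surjective(F, T):
    """Every t in T is hit by some s."""
    return set(F.values()) == set(T)

def is_injective(F):
    """Distinct inputs go to distinct outputs."""
    return len(set(F.values())) == len(F)

def is_bijective(F, T):
    return is_surjective(F, T) and is_injective(F)

identity = {x: x for x in {1, 2, 3}}
print(is_bijective(identity, {1, 2, 3}))  # True: an identity is a bijection
print(is_injective({1: "a", 2: "a"}))     # False: 1 and 2 collide
```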

For notational consistency with Definition 1.22, the arrows in Example 1.23 might have been drawn plain rather than with decorations. The decorated arrows were drawn because we thought they were prettier, i.e. easier on the eye. Beauty is important too; an imbalanced preference for strict correctness over beauty becomes pedantry. But outside of pictures, we will be careful.
In the following, do not use any examples already drawn above.
- Find two sets A and B and a function f : A → B that is injective but not surjective.
- Find two sets A and B and a function f : A → B that is surjective but not injective.
Now consider the four relations shown here:

For each relation, answer the following two questions.
3. Is it a function?
4. If not, why not? If so, is it injective, surjective, both (i.e. bijective), or neither? ♦
Suppose that A is a set and f : A → Ø is a function to the empty set. Show that A is empty.
A partition on a set A can also be understood in terms of surjective functions out of A. Given a surjective function f : A --» P, where P is any other set, the preimages \(f^{-1}(p) \subseteq A\), one for each element p \(\in\) P, form a partition of A. Here is an example.
Consider the partition of S := {11, 12, 13, 21, 22, 23} shown below:

It has been partitioned into four parts, so let P = {a, b, c, d} and let f : S --» P be given by
f(11) = a, f(12) = a, f(13) = b, f(21) = c, f(22) = d, f(23) = d.
Write down a surjection corresponding to each of the five partitions in Eq. (1.5). ♦
If F : X → Y is a function and G : Y → Z is a function, their composite is the function X → Z defined to be G(F(x)) for any x \(\in\) X. It is often denoted G ◦ F, but we prefer to denote it F ; G. It takes any element x \(\in\) X, evaluates F to get an element F(x) \(\in\) Y and then evaluates G to get an element G(F(x)).
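For finite functions represented as dicts, the composite is one comprehension (the helper name `compose` is ours); note that F is evaluated first, matching the F ; G notation:

```python
# Sketch: the composite F ; G of finite functions given as dicts,
# sending x to G(F(x)).

def compose(F, G):
    """Return the dict for F ; G, i.e. x -> G(F(x))."""
    return {x: G[F[x]] for x in F}

F = {1: "a", 2: "b"}
G = {"a": True, "b": False}
print(compose(F, G))  # {1: True, 2: False}
```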
If X is any set and x \(\in\) X is any element, we can think of x as a function {1} → X, namely the function sending 1 to x. For example, the three functions {1} → {1, 2, 3} shown below correspond to the three elements of {1, 2, 3}:

Suppose we are given a function F : X → Y and an element of X, thought of as a function x : {1} → X. Then evaluating F at x is given by the composite: F(x) = x ; F.
Preorders
In Section 1.1, we several times used the symbol ≤ to denote a sort of order. Here is a formal definition of what it means for a set to have an order.
A preorder relation on a set X is a binary relation on X, here denoted with infix notation ≤, such that
(a) x ≤ x; and
(b) if x ≤ y and y ≤ z, then x ≤ z.
The first condition is called reflexivity and the second is called transitivity. If x ≤ y and y ≤ x, we write x \(\cong\) y and say x and y are equivalent. We call a pair (X, ≤) consisting of a set equipped with a preorder relation a preorder.
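For a finite set, reflexivity and transitivity can be checked directly from the definition. A sketch (function name ours), representing the relation ≤ as a set of pairs:

```python
# Sketch: check that a binary relation R (a set of pairs) on a
# finite set X is reflexive and transitive, i.e. a preorder relation.

def is_preorder(X, R):
    reflexive = all((x, x) in R for x in X)
    transitive = all((x, z) in R
                     for (x, y1) in R for (y2, z) in R if y1 == y2)
    return reflexive and transitive

B = {"false", "true"}
leq = {("false", "false"), ("true", "true"), ("false", "true")}
print(is_preorder(B, leq))  # True: the booleans with false <= true
```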
Remark 1.31. Observe that reflexivity and transitivity are familiar from Definition 1.18: preorders are just equivalence relations without the symmetry condition.
Every set X can be considered as a discrete preorder (X, =). This means that the only order relations on X are of the form x ≤ x; if x \(\neq\) y then neither x ≤ y nor y ≤ x holds.
We depict discrete preorders as simply a collection of points:

From every set we may also construct its codiscrete preorder (X, ≤) by equipping it with the total binary relation X × X \(\subseteq\) X × X. This is a very trivial structure: it means that for all x and y in X we have x ≤ y (and hence also y ≤ x).
The booleans \(\mathbb{B}\) = {false, true} form a preorder with false ≤ true.

Remark 1.35 (Partial orders are skeletal preorders). A preorder is a partial order if we additionally have that
(c) x \(\cong\) y implies x = y.
In category theory terminology, the requirement that x \(\cong\) y implies x = y is known as skeletality, so partial orders are skeletal preorders. For short, we also use the term poset, a contraction of partially ordered set.
The difference between preorders and partial orders is rather minor. A partial order already is a preorder, and every preorder can be made into a partial order by equating any two elements x, y for which x \(\cong\) y, i.e. for which x ≤ y and y ≤ x.
For example, any discrete preorder is already a partial order, while any codiscrete preorder simply becomes the unique partial order on a one element set.
We have already introduced a few examples of preorders using Hasse diagrams. It will be convenient to continue to do this, so let us be a bit more formal about what we mean. First, we need to define a graph.
A graph G = (V, A, s, t) consists of a set V whose elements are called vertices, a set A whose elements are called arrows, and two functions s, t : A → V, known as the source and target functions respectively. Given a \(\in\) A with s(a) = v and t(a) = w, we say that a is an arrow from v to w.
By a path in G we mean any sequence of arrows such that the target of one arrow is the source of the next. This includes sequences of length 1, which are just arrows a \(\in\) A in G, and sequences of length 0, which just start and end at the same vertex v, without traversing any arrows.
Here is a picture of a graph:

It has V = {1, 2, 3, 4} and A = {a, b, c, d, e}. The source and target functions, s, t : A → V are given by the following partially-filled-in tables (see Exercise 1.38):

There is one path from 2 to 3, namely the arrow e is a path of length 1. There are no paths from 4 to 3, but there is one path from 4 to 4, namely the path of length 0. There are infinitely many paths 1 → 2 because one can loop and loop and loop through d as many times as one pleases.
Fill in the table from Example 1.37. ♦
Remark 1.39. From every graph we can get a preorder. Indeed, a Hasse diagram is a graph G = (V, A, s, t) that gives a presentation of a preorder (P, ≤). The elements of P are the vertices V in G, and the order ≤ is given by v ≤ w iff there is a path v → w. For any vertex v, there is always a path v → v, and this translates into the reflexivity law from Definition 1.30. The fact that paths u → v and v → w can be concatenated to a path u → w translates into the transitivity law.
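Remark 1.39 can be made computational: the preorder presented by a finite graph is the reflexive-transitive closure of its arrow relation. Below is a sketch; since the table in Example 1.37 is only partially filled in, the source/target data here is our own guess, chosen to be consistent with the paths the example describes (e.g. d is a loop at 2):

```python
# Sketch: the preorder presented by a graph, where v <= w iff there
# is a path from v to w. Graph data is a guess at Example 1.37.

V = {1, 2, 3, 4}
arrows = {"a": (1, 2), "b": (1, 2), "c": (1, 3), "d": (2, 2), "e": (2, 3)}

def reachable(V, arrows):
    """Reflexive-transitive closure of the arrow relation."""
    R = {(v, v) for v in V} | set(arrows.values())
    changed = True
    while changed:
        new = {(x, z) for (x, y1) in R for (y2, z) in R if y1 == y2} - R
        changed = bool(new)
        R |= new
    return R

R = reachable(V, arrows)
print((4, 3) in R)  # False: no path from 4 to 3
print((4, 4) in R)  # True: the length-0 path
print((2, 3) in R)  # True: the arrow e
```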
What preorder relation (P, ≤) is depicted by the graph G in Example 1.37? That is, say what the elements of P are, and write down every pair (p1, p2) for which p1 ≤ p2. ♦
Does a collection of points, like the one in Example 1.32, count as a Hasse diagram? ♦
Let X be the set of partitions of {•, ◦, ∗}; it has five elements and an order by coarseness, as shown in the Hasse diagram Eq. (1.5). Write down every pair (x, y) of elements in X such that x ≤ y. There should be 12. ♦
Remark 1.43. In Remark 1.35 we discussed partial orders (preorders with the property that whenever two elements are equivalent, they are the same) and then said that this property is fairly inconsequential: any preorder can be converted to a partial order that’s “equivalent” category-theoretically. A partial order is like a preorder with a fancy haircut: some mathematicians might not even notice it.
However, there are other types of preorders that are more special and noticeable. For example, a total order has the following additional property:
(d) for all x, y, either x ≤ y or y ≤ x.
We say two elements x, y of a preorder are comparable if either x ≤ y or y ≤ x, so a total order is a preorder where every two elements are comparable.
Is it correct to say that a discrete preorder is one where no two elements are comparable?
The natural numbers \(\mathbb{N}\) := {0, 1, 2, 3, . . .} form a preorder with the order given by the usual size ordering, e.g. 0 ≤ 1 and 5 ≤ 100. This is a total order: either m ≤ n or n ≤ m for all m, n. One can see that its Hasse diagram looks like a line:

What made Eq. (1.5) not look like a line is that it contains non-comparable elements a and b (namely, any two in the middle row), which satisfy neither a ≤ b nor b ≤ a.
Note that for any set S, there are many different ways of assigning an order to S. Indeed, for the set N, we could also use the discrete ordering: only write n ≤ m if n = m. Another ordering is the reverse ordering, like 5 ≤ 3 and 3 ≤ 2, like how golf is scored (5 is worse than 3).
Yet another ordering on \(\mathbb{N}\) is given by division: we say that n ≤ m if n divides into m without remainder. In this ordering 2 ≤ 4, for example, but 2 \(\not\leq\) 3, since there is a remainder when 2 is divided into 3.
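The divisibility order is easy to tabulate. Here is a sketch for the numbers 1 through 10, which also shows why the order is not total:

```python
# Sketch: the divisibility preorder on {1, ..., 10}, where n <= m
# iff n divides m without remainder.

def divides(n, m):
    return m % n == 0

pairs = {(n, m) for n in range(1, 11) for m in range(1, 11) if divides(n, m)}
print((2, 4) in pairs)  # True:  2 | 4
print((2, 3) in pairs)  # False: 2 does not divide 3
# Not a total order: 2 and 3 are incomparable.
print((2, 3) in pairs or (3, 2) in pairs)  # False
```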
Write down the numbers 1,2,...,10 and draw an arrow a → b if a divides perfectly into b. Is it a total order? ♦
The real numbers \(\mathbb{R}\) also form a preorder with the “usual ordering”, e.g. −500 ≤ −499 ≤ 0 ≤ √2 ≤ 100/3.
Is the usual ≤ ordering on the set \(\mathbb{R}\) of real numbers a total order? ♦
Given a preorder, i.e. a preordered set (P, ≤), we defined the notion of equivalence of elements, denoted x \(\cong\) y, to mean x ≤ y and y ≤ x. This is an equivalence relation, so it induces a partition on P. (The phrase “A induces B” means that we have an automatic way to turn an A into a B. In this case, we’re saying that we have an automatic way to turn equivalence relations into partitions, which we do; see Proposition 1.19.)
For example, the preorder whose Hasse diagram is drawn on the left corresponds to the partition drawn on the right.

Given a set X, the set of subsets of X is known as the power set of X; we denote it P(X). The power set can naturally be given an order by inclusion of subsets (and from now on, whenever we speak of the power set as an ordered set, this is the order we mean).
For example, taking X = {0, 1, 2}, we depict P(X) as

See the cube? The Hasse diagram for the power set of a finite set, say P{1, 2, . . . , n},\(^{a}\) always looks like a cube of dimension n.
\(^{a}\)Note that we omit the parentheses here, writing PX instead of P(X); throughout this book we will omit parentheses if we judge the presentation is cleaner and it is unlikely to cause confusion.
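A sketch (function name ours) that builds the power set of a finite set and its inclusion order; `frozenset`’s `<=` operator is exactly subset inclusion:

```python
# Sketch: the power set P(X) of a finite set, ordered by inclusion.

from itertools import combinations

def power_set(X):
    X = list(X)
    return {frozenset(c) for r in range(len(X) + 1)
            for c in combinations(X, r)}

P = power_set({0, 1, 2})
print(len(P))  # 8 subsets: the vertices of a 3-dimensional cube
inclusions = {(S, T) for S in P for T in P if S <= T}
print((frozenset({0}), frozenset({0, 1})) in inclusions)  # True
```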
Draw the Hasse diagrams for P(Ø), P{1}, and P{1, 2}. ♦
We talked about getting a partition from a preorder; now let’s think about how we might order the set Prt(A) of all partitions of A, for some set A. In fact, we have done this before in Eq. (1.5). Namely, we order partitions by fineness: a partition P is finer than a partition Q if, for every part p \(\in\) P, there is a part q \(\in\) Q such that \(A_p \subseteq A_q\). We could also say that Q is coarser than P.
Recall from Example 1.26 that partitions on A can be thought of as surjective functions out of A. Then f : A --» P is finer than g : A --» Q if there is a function h : P → Q such that f ; h = g.
For any set S there is a coarsest partition, having just one part. What surjective function does it correspond to? There is also a finest partition, where everything is in its own part. What surjective function does it correspond to? ♦
Given a preorder (P, ≤), an upper set in P is a subset U of P satisfying the condition that if p \(\in\) U and p ≤ q, then q \(\in\) U. “If p is an element then so is anything bigger.” Write U(P) for the set of upper sets in P. We can give the set U(P) an order by letting U ≤ V if U is contained in V.
For example, if (\(\mathbb{B}\), ≤) is the preorder of booleans (Example 1.34), then its preorder of upper sets \(U(\mathbb{B})\) is

The subset {false} \(\subseteq\) \(\mathbb{B}\) is not an upper set, because false ≤ true and true \(\notin\) {false}.
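Upper sets of a finite preorder can be enumerated by brute force. A sketch (function names ours) that recovers the three upper sets of \(\mathbb{B}\) described above, with {false} correctly excluded:

```python
# Sketch: enumerate the upper sets U(P) of a finite preorder,
# given as a set of pairs leq.

from itertools import combinations

def upper_sets(P, leq):
    def is_upper(U):
        # if p is in U and p <= q, then q must be in U
        return all(q in U for p in U for (p2, q) in leq if p2 == p)
    P = list(P)
    subsets = [set(c) for r in range(len(P) + 1)
               for c in combinations(P, r)]
    return [U for U in subsets if is_upper(U)]

B = {"false", "true"}
leq = {("false", "false"), ("true", "true"), ("false", "true")}
for U in upper_sets(B, leq):
    print(U)
# the empty set, {"true"}, and {"false", "true"}; {"false"} is excluded
```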
Prove that the preorder of upper sets on a discrete preorder (see Example 1.32) on a set X is simply the power set P(X). ♦
Given preorders (P, ≤) and (Q, ≤), we may define a preorder structure on the product set P × Q by setting (p, q) ≤ (p′, q′) if and only if p ≤ p′ and q ≤ q′. We call this the product preorder. This is a basic example of a more general construction known as the product of categories.
Draw the Hasse diagram for the product of the two preorders drawn below:

-
For bonus points, compute the upper set preorder on the result. ♦
Given a preorder (P, ≤), we may define the opposite preorder (P, \(\leq^{op}\)) to have the same set of elements, but with \(p \leq^{op} q\) if and only if q ≤ p.
Monotone maps
We have said that the categorical perspective emphasizes relationships between things. For example, a preorder is a setting or world in which we have one sort of relationship, ≤, and any two objects may be, or may not be, so-related. Jumping up a level, the categorical perspective emphasizes that preorders themselves (each a miniature world composed of many relationships) can be related to one another.
The most important sort of relationship between preorders is called a monotone map. These are functions that preserve preorder relations: in some sense, mappings that respect ≤. They are hence considered the right notion of structure-preserving map for preorders.
A monotone map between preorders (A, \(\leq_A\)) and (B, \(\leq_B\)) is a function f : A → B such that, for all elements x, y \(\in\) A, if \(x \leq_A y\) then \(f(x) \leq_B f(y)\).
A monotone map A → B between two preorders associates to each element of preorder A an element of the preorder B. We depict this by drawing a dotted arrow from each element x \(\in\) A to its image f(x) \(\in\) B. Note that the order must be preserved in order to count as a valid monotone map, so if element x is above element y in the lefthand preorder A, then the image f(x) will be above the image f(y) in the righthand preorder.
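Monotonicity of a map between finite preorders is a finite check. A sketch (function name ours), using a map from the booleans to the naturals sending false ↦ 17 and true ↦ 24:

```python
# Sketch: check that a function between finite preorders is monotone,
# i.e. x <=_A y implies f(x) <=_B f(y).

def is_monotone(f, leq_A, leq_B):
    return all((f[x], f[y]) in leq_B for (x, y) in leq_A)

leq_B = {("false", "false"), ("true", "true"), ("false", "true")}
leq_N = {(m, n) for m in range(30) for n in range(30) if m <= n}

f = {"false": 17, "true": 24}
print(is_monotone(f, leq_B, leq_N))  # True: 17 <= 24, order preserved

g = {"false": 24, "true": 17}
print(is_monotone(g, leq_B, leq_N))  # False: 24 <= 17 fails
```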

Let \(\mathbb{B}\) be the preorder of booleans from Example 1.34 and \(\mathbb{N}\) the preorder of natural numbers from Example 1.45. The map \(\mathbb{B}\) → \(\mathbb{N}\) sending false to 17 and true to 24 is a monotone map, because it preserves order.

Consider the set of all animal classifications, for example ‘tiger’, ‘mammal’, ‘sapiens’, ‘carnivore’, etc. These are ordered by specificity: since ‘tiger’ is a type of ‘mammal’, we write tiger ≤ mammal. The result is a preorder, which in fact forms a tree, often called the tree of life. At the top of the following diagram we see a small part of it:

At the bottom we see the hierarchical structure as a preorder. The dashed arrows show a monotone map, call it F, from the classifications to the hierarchy. It is monotone because it preserves order: whenever there is a path x → y upstairs, there is a path F(x) → F(y) downstairs.
Given a finite set X, recall the power set P(X) and its natural order relation from Example 1.50. The map |·|: P(X) → \(\mathbb{N}\) sending each subset S to its number of elements |S|, also called its cardinality, is a monotone map.
Let X = {0, 1, 2}.
- Draw the Hasse diagram for P(X).
- Draw the Hasse diagram for the preorder 0 ≤ 1 ≤ 2 ≤ 3.
- Draw the cardinality map |·| from Example 1.62 as dashed lines between them. ♦
Recall the notion of upper set from Example 1.54. Given a preorder (P, ≤), the map U(P) → P(P) sending each upper set of (P, ≤) to itself, considered as a subset of P, is a monotone map.
Consider the preorder \(\mathbb{B}\). The Hasse diagram for U(\(\mathbb{B}\)) was drawn in Example 1.54, and you drew the Hasse diagram for P(\(\mathbb{B}\)) in Exercise 1.51. Now draw the monotone map between them, as described in Example 1.64.
Let (P, ≤) be a preorder, and recall the notion of opposite preorder from Example 1.58.
- Show that the set ↑ p := {p′ \(\in\) P | p ≤ p′} is an upper set, for any p \(\in\) P.
- Show that this construction defines a monotone map ↑: P \(^{op}\) → U(P).
- Show that p ≤ p′ in P if and only if ↑(p′) \(\subseteq\) ↑(p).
- Draw a picture of the map ↑ in the case where P is the preorder (b ≥ a ≤ c) from Example 1.56.
This is known as the Yoneda lemma for preorders. The if and only if condition proved in part 3 implies that, up to equivalence, to know an element is the same as knowing its upper set—that is, knowing its web of relationships with the other elements of the preorder. The general Yoneda lemma is a powerful tool in category theory, and a fascinating philosophical idea besides. ♦
As you yourself well know, a monotone map f : (P, \(\leq_P\)) → (Q, \(\leq_Q\)) consists of a function f : P → Q that satisfies a “monotonicity” property. Show that when (P, \(\leq_P\)) is a discrete preorder, every function P → Q satisfies the monotonicity property, regardless of the order \(\leq_Q\).
Recall from Example 1.52 that given a set X we define Prt(X) to be the set of partitions on X, and that a partition may be defined using a surjective function s : X --» P for some set P. Any surjective function f : X --» Y induces a monotone map \(f^{*}\) : Prt(Y) → Prt(X), going “backwards.” It is defined by sending a partition s : Y --» P to the composite f ; s : X --» P.\(^{7}\)
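Since the induced map is just composition, it is one line of code. A sketch (function name and example data ours), representing both the surjection f and the partition s as dicts:

```python
# Sketch: the induced map f* : Prt(Y) -> Prt(X), pulling a partition
# of Y back along a surjection f : X ->> Y by composing f ; s.

def pullback_partition(f, s):
    """f : X ->> Y as a dict, s : Y ->> P (a partition of Y) as a dict.
    Returns the composite f ; s : X ->> P, a partition of X."""
    return {x: s[f[x]] for x in f}

f = {1: "a", 2: "a", 3: "b", 4: "c"}  # a surjection X ->> Y (our example)
s = {"a": "p", "b": "p", "c": "q"}    # a partition of Y with parts p, q
print(pullback_partition(f, s))
# {1: 'p', 2: 'p', 3: 'p', 4: 'q'}: the parts of X are {1,2,3} and {4}
```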
Choose two sets X and Y with at least three elements each and choose a surjective, non-identity function f : X --» Y between them. Write down two different partitions P and Q of Y, and then find f\(^{*}\)(P) and f\(^{*}\)(Q). ♦
The following proposition, Proposition 1.70, is straightforward to check. Recall the definition of the identity function from Example 1.23 and the definition of composition from Definition 1.28.
For any preorder (P, ≤P ), the identity function is monotone. If (Q,≤Q) and (R,≤R) are preorders and f : P → Q and g: Q → R are monotone, then (f ; g): P → R is also monotone.
Check the two claims made in Proposition 1.70. ♦
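The two claims can also be sanity-checked by brute force on a small example (the chain and the maps below are invented here, not taken from the text):

```python
# The chain 0 <= 1 <= 2, and the booleans with False <= True (Python's
# built-in <= on ints and bools agrees with both orders).
chain = [0, 1, 2]

def is_monotone(h, elems):
    """Check that h preserves <= on a totally ordered list."""
    return all(h(p) <= h(q) for p in elems for q in elems if p <= q)

f = lambda n: n >= 1        # a monotone map chain -> B
g = lambda b: int(b)        # a monotone map B -> chain

# Identity is monotone, and the composite f ; g of monotone maps is monotone.
assert is_monotone(lambda n: n, chain)
assert is_monotone(lambda n: g(f(n)), chain)
```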
Recall again the definition of opposite preorder from Example 1.58. The identity function idP : P → P is a monotone map (P, ≤) → (P, ≤\(^{op}\)) if and only if for all p, q \(\in\) P we have q ≤ p whenever p ≤ q. For historical reasons connected to linear algebra, when this is true, we call (P, ≤) a dagger preorder.
But in fact, we have seen dagger preorders before in another guise. Indeed, if (P, ≤) is a dagger preorder, then the relation ≤ is symmetric: p ≤ q if and only if q ≤ p, and it is also reflexive and transitive by definition of preorder. So in fact ≤ is an equivalence relation (Definition 1.18).
Recall the notion of skeletal preorders (Remark 1.35) and discrete preorders (Example 1.32). Show that a skeletal dagger preorder is just a discrete preorder, and hence can be identified with a set. ♦
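A small computation (with data chosen here) illustrates the point of the preceding paragraph: on a dagger preorder, ≤ is symmetric and hence an equivalence relation, so the elements fall into equivalence classes; skeletality is exactly the condition that every class is a singleton, i.e. the preorder is discrete.

```python
# A dagger preorder on {0, 1, 2} in which 0 and 1 are equivalent.
P = (0, 1, 2)
leq = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 0)}

# The dagger condition: the relation is symmetric.
assert all((q, p) in leq for (p, q) in leq)

# The equivalence classes that <= induces.
classes = {frozenset(q for q in P if (p, q) in leq) for p in P}

# This preorder is NOT skeletal: the class {0, 1} has two elements,
# so it cannot be identified with a discrete preorder on P itself.
assert classes == {frozenset({0, 1}), frozenset({2})}
```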
Remark 1.74. We say that an A “can be identified with” a B when any A gives us a unique B and any B gives us a unique A, and both round-trips from an A to a B and back to an A, or from a B to an A and back to a B return us where we started. For example, any discrete preorder (P, ≤) has an underlying set P, and any set P can be made into a discrete preorder (p1 ≤ p2 iff p1 = p2), and the round-trips return us where we started. So what’s the difference? It’s like the notion of object-permanence from child development jargon: we can recognize “the same chair, just moved from one room to another.” A chair in the room of sets can be moved to a chair in the room of preorders. The lighting is different but the chair is the same.
Eventually, we will be able to understand this notion in terms of equivalence of categories, which are related to isomorphisms, which we will explore next in Definition 1.75.
Let (P, ≤P) and (Q, ≤Q) be preorders. A monotone function f : P → Q is called an isomorphism if there exists a monotone function g : Q → P such that f ; g = \(id_{P}\) and g ; f = \(id_{Q}\). This means that for any p \(\in\) P and q \(\in\) Q, we have p = g(f(p)) and q = f(g(q)). We refer to g as the inverse of f, and vice versa: f is the inverse of g. If there is an isomorphism P → Q, we say that P and Q are isomorphic.
An isomorphism between preorders is basically just a relabeling of the elements.
Here are the Hasse diagrams for three preorders P, Q, and R, all of which are isomorphic:

The map f : P → Q given by f (a) = v, f (b) = w, f (c) = x, f (d) = y, and f (e) = z has an inverse.
In fact Q and R are the same preorder. One may be confused by the fact that there is an arrow x → z in the Hasse diagram for R and not one in Q, but in fact this arrow is superfluous. By the transitivity property of preorders (Definition 1.30), since x ≤ y and y ≤ z, we must have x ≤ z, whether it is drawn or not. Similarly, we could have drawn an arrow v → y in either Q or R and it would not have changed the preorder.
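Since the Hasse diagrams are pictures, here is what checking an isomorphism amounts to on a pair of small preorders (the element names and the relabeling below are chosen here; they are a relabeling of the preorder b ≥ a ≤ c, not the book's five-element example):

```python
# Two preorders that differ only in the names of their elements.
P = ["a", "b", "c"]
leqP = {("a", "a"), ("b", "b"), ("c", "c"), ("a", "b"), ("a", "c")}
Q = ["x", "y", "z"]
leqQ = {("x", "x"), ("y", "y"), ("z", "z"), ("x", "y"), ("x", "z")}

f = {"a": "x", "b": "y", "c": "z"}   # the relabeling
g = {v: k for k, v in f.items()}     # its inverse

# f and g are both monotone, and the two round-trips are identities.
assert all((f[p], f[q]) in leqQ for (p, q) in leqP)
assert all((g[p], g[q]) in leqP for (p, q) in leqQ)
assert all(g[f[p]] == p for p in P) and all(f[g[q]] == q for q in Q)
```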
Recall the preorder \(\mathbb{B}\) = {false, true}, where false ≤ true. As simple as this preorder is, it is also one of the most important.
Show that the map Φ from Section 1.1.1, which was roughly given by ‘Is • connected to ∗?’ is a monotone map Prt({∗, •, ◦}) → \(\mathbb{B}\); see also Eq. (1.5). ♦
Let P be a preorder. Monotone maps P → \(\mathbb{B}\) are in one-to-one correspondence with upper sets of P.
Proof. Let f : P → \(\mathbb{B}\) be a monotone map. We will show that the subset f\(^{-1}\)(true) \(\subseteq\) P is an upper set. Suppose p \(\in\) f\(^{-1}\)(true) and p ≤ q; then true = f(p) ≤ f(q). But in \(\mathbb{B}\), if true ≤ f(q) then true = f(q). This implies q \(\in\) f\(^{-1}\)(true) and thus shows that f\(^{-1}\)(true) is an upper set. Conversely, if U is an upper set in P, define fU : P → \(\mathbb{B}\) by fU(p) = true when p \(\in\) U, and fU(p) = false when p \(\notin\) U.
This is a monotone map, because if p ≤ q, then either p \(\in\) U, so q \(\in\) U and fU(p) = true = fU(q), or p \(\notin\) U, so fU(p) = false ≤ fU(q).
These two constructions are mutually inverse, and hence prove the proposition.
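The proposition can be verified exhaustively on a small preorder (the example preorder below is ours, again b ≥ a ≤ c): enumerate all upper sets, enumerate all monotone maps to \(\mathbb{B}\), and check that taking preimages of true matches them up.

```python
from itertools import product, combinations

P = ["a", "b", "c"]
leq = {("a", "a"), ("b", "b"), ("c", "c"), ("a", "b"), ("a", "c")}

def is_upper(U):
    """U is an upper set: closed upward under <=."""
    return all(q in U for p in U for q in P if (p, q) in leq)

# All upper sets of P, by brute force over subsets.
upper_sets = {frozenset(U)
              for r in range(len(P) + 1)
              for U in combinations(P, r)
              if is_upper(U)}

# Every monotone map f : P -> B, recorded by its preimage f^{-1}(true).
# Python's False <= True agrees with the order on B.
preimages = set()
for values in product([False, True], repeat=len(P)):
    f = dict(zip(P, values))
    if all(f[p] <= f[q] for (p, q) in leq):
        preimages.add(frozenset(p for p in P if f[p]))

# The correspondence of Proposition 1.78.
assert preimages == upper_sets
```

Here there are exactly five upper sets, {}, {b}, {c}, {b, c}, and {a, b, c}, and exactly five monotone maps to \(\mathbb{B}\).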
Let P and Q be preorders, and let f : P → Q be a monotone map. Then we can define a monotone map f\(^{*}\) : U(Q) → U(P) sending an upper set U \(\subseteq\) Q to the upper set f\(^{-1}\)(U) \(\subseteq\) P. We call this the pullback along f. Viewing upper sets as monotone maps to \(\mathbb{B}\) as in Proposition 1.78, the pullback can be understood in terms of composition. Indeed, show that f\(^{*}\) is defined by taking u : Q → \(\mathbb{B}\) to (f ; u) : P → \(\mathbb{B}\). ♦
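On concrete data (the preorders, the map f, and the upper set below are invented for illustration), the exercise's claim says the following two computations agree:

```python
# P is the chain a <= b; Q is the chain x <= y <= z.
P = ["a", "b"]
Q = ["x", "y", "z"]
f = {"a": "x", "b": "z"}   # monotone: a <= b and f(a) = x <= z = f(b)

U = {"y", "z"}             # an upper set of Q

# f*(U) computed directly as the preimage f^{-1}(U)...
pulled_back = {p for p in P if f[p] in U}

# ...and computed by composing the characteristic map u : Q -> B with f,
# then taking the preimage of true.
u = lambda q: q in U
composite = {p for p in P if u(f[p])}

assert pulled_back == composite
```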

