7.0: Juggling With Two Operations
We'll now start looking at algebraic structures with more than one operation. Typically, these structures will have rules governing the different operations, and additional rules for how the operations interact. We'll begin by looking at rings, which have two operations, usually written as addition and multiplication, related by the distributive property.
There are many reasons to study ring theory, often having to do with generalizing the properties that we observe in many of the rings we deal with in day-to-day life, like the integers and the rational numbers. By making precise the algebraic structures that (for example) the integers satisfy, we can figure out what makes our favorite facts about the integers true, and see exactly where else those same facts hold.
It's also an area where most of the real payoff comes later. Understanding ring theory is essential for algebraic geometry in particular, which is a major force in modern mathematics. The basic idea of algebraic geometry is to study geometry using zeroes of polynomials: for example, a line in the plane can be thought of as the zeroes of the polynomial \(f(x,y) = y - mx - b\), where \(m\) and \(b\) are constants. In other words, to understand properties of geometry, it is helpful to understand properties of polynomials. And polynomials are an example of a ring, as we'll see.
Definition 7.0.0
A ring is a set \(R\) with operations \(+\) and \(\cdot\) such that:
 \(R\) is a commutative group under \(+\),
 (Distributivity) For all \(r,s,t\in R\), we have \(r\cdot(s+t)=r\cdot s+r\cdot t\), and \((s+t)\cdot r = s\cdot r + t\cdot r\).
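The distributive laws can be checked directly in small examples. Here is a quick sanity check in Python (a sketch, not part of the text) that verifies both distributive laws hold in \(\mathbb{Z}_6\) by brute force:

```python
# Brute-force check of distributivity in Z_6 = {0, 1, ..., 5},
# where + and * are ordinary arithmetic followed by reduction mod 6.
n = 6

def add(a, b):
    return (a + b) % n

def mul(a, b):
    return (a * b) % n

for r in range(n):
    for s in range(n):
        for t in range(n):
            # Left distributivity: r(s + t) = rs + rt
            assert mul(r, add(s, t)) == add(mul(r, s), mul(r, t))
            # Right distributivity: (s + t)r = sr + tr
            assert mul(add(s, t), r) == add(mul(s, r), mul(t, r))

print("Z_6 satisfies both distributive laws")
```

Of course, a finite check like this is only possible because \(\mathbb{Z}_6\) is finite; for the integers or the rationals, distributivity is proved, not verified.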
Exercise 7.0.1
Show, using the definition of a ring, that for any ring \(R\) with additive identity \(0\), we have \(0\cdot r=0\) for every \(r\in R\).
This is the most general type of ring. There are many different types of ring which arise from placing extra conditions, especially on the multiplicative operation. In fact, ring theory is kind of a zoo, divided up into the study of different 'species' of rings. Possibly the most important rings to study are commutative, associative rings with unity, which we define now.
Definition 7.0.2
Let \(R\) be a ring, and \(r,s,t\in R\). Then \(R\) is:
 Associative if the multiplication operation is associative: \(r\cdot (s\cdot t) = (r\cdot s)\cdot t\),
 A ring with unity if there is a multiplicative identity \(1\), such that \(1\cdot r=r=r\cdot 1\),
 Commutative if the operation \(\cdot\) is commutative: \(r\cdot s=s\cdot r\).
Usually we'll deal with associative rings with unity; in fact, when we write 'ring' we'll mean an associative ring with unity unless otherwise noted. As a result, 'commutative ring' will mean a ring that is commutative and associative, and has a unity.
There are numerous examples of rings! Here are some familiar examples.

Integers. The integers are a commutative group under addition, and have the distributive property. Additionally, the integers are associative and commutative under multiplication, and have a multiplicative identity, \(1\). Thus, the integers are a commutative associative ring with unity.

Rational Numbers, Real Numbers, Complex Numbers. All of these familiar number systems are examples of commutative associative rings with unity.

Integers modulo \(n\), \(\mathbb{Z}_n\). The multiplication operation works just as the addition operation does: do the normal multiplication, and then divide by \(n\) and keep the remainder: \(a\cdot b = (ab) \bmod n\). This is an associative and commutative operation, and \(1\) is a multiplicative identity.
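As a small illustration (a Python sketch, not part of the text), here is multiplication in \(\mathbb{Z}_5\):

```python
# Multiplication in Z_n: multiply as ordinary integers, then keep the remainder mod n.
def mul_mod(a, b, n):
    return (a * b) % n

# A few products in Z_5:
print(mul_mod(3, 4, 5))  # 12 mod 5 = 2
print(mul_mod(2, 3, 5))  # 6 mod 5 = 1, so 2 and 3 are multiplicative inverses in Z_5

# 1 is the multiplicative identity:
assert all(mul_mod(1, a, 5) == a for a in range(5))
```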

Matrices. Recall that matrix addition is just entrybyentry, and that the multiplication of matrices adds and multiplies the entries according to a certain rule: if \(M\) and \(N\) are matrices, then \((MN)_{i,j}=\sum_k M_{i,k}N_{k,j}\). Since this only uses addition and multiplication, we can thus form matrices with entries in any ring \(R\), since \(R\) has notions of addition and multiplication. The set of all \(m\times n\) matrices with entries in \(R\) is denoted \(M_{m\times n}(R)\).
As an example, consider the matrices \(M=\begin{pmatrix}0&1\\2&3\end{pmatrix}\) and \(N=\begin{pmatrix}0&2\\ 3&4\end{pmatrix}\) with entries from \(\mathbb{Z}_5\). Then \(M+N=\begin{pmatrix}0&3\\ 0&2\end{pmatrix}\), and \(M\cdot N = \begin{pmatrix}3&4\\ 4&1\end{pmatrix}\).
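The computation above can be reproduced with a short Python sketch (the helper functions `mat_add` and `mat_mul` are our own, written from the rule \((MN)_{i,j}=\sum_k M_{i,k}N_{k,j}\) with all arithmetic done in \(\mathbb{Z}_5\)):

```python
# Matrix addition and multiplication with entries in Z_5,
# reproducing the 2x2 example from the text.
n = 5
M = [[0, 1], [2, 3]]
N = [[0, 2], [3, 4]]

def mat_add(A, B, n):
    # Entry-by-entry addition, reduced mod n.
    return [[(a + b) % n for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mul(A, B, n):
    # (AB)_{i,j} = sum_k A_{i,k} B_{k,j}, reduced mod n.
    size = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(size)) % n
             for j in range(size)] for i in range(size)]

print(mat_add(M, N, n))  # [[0, 3], [0, 2]]
print(mat_mul(M, N, n))  # [[3, 4], [4, 1]]
```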

Polynomials. Polynomials can be added and multiplied so long as we know how to add and multiply the coefficients. We let \(R[x]\) denote the ring of polynomials with coefficients from the ring \(R\) and variable \(x\) with exponent \(\geq 0\). For example, if \(R=\mathbb{Z}_2\), we have \((x+1)(x+1)=x^2+1\).
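We can check the \(\mathbb{Z}_2\) computation with a small Python sketch; here a polynomial is represented simply as its list of coefficients, and `poly_mul` is our own helper, not a library function:

```python
# Polynomial multiplication with coefficients in Z_n.
# A polynomial is a list of coefficients: [c0, c1, c2] means c0 + c1*x + c2*x^2.
def poly_mul(f, g, n):
    prod = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] = (prod[i + j] + a * b) % n
    return prod

f = [1, 1]  # x + 1 in Z_2[x]
print(poly_mul(f, f, 2))  # [1, 0, 1], i.e. 1 + x^2
```

The cross terms \(x + x = 2x\) vanish because \(2 = 0\) in \(\mathbb{Z}_2\), which is why the answer is \(x^2 + 1\) rather than \(x^2 + 2x + 1\).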
Polynomials in many variables also form rings. We usually just write \(R[x,y]\) or \(R[x,y,z]\) if we're using two or three variables, or more generally, \(R[x_1, x_2, \ldots, x_n]\) for more variables.

Rings of Functions. Many spaces of functions have a ring structure. For example, if we consider differentiable functions \(\mathbb{R}\rightarrow \mathbb{R}\), we can add and multiply functions: \((f+g)(x)=f(x)+g(x)\) and \((f\cdot g)(x)=f(x)g(x)\). Sums and products of differentiable functions are also differentiable, so this set is closed under both operations. The functions form an additive group, and there's a multiplicative identity: the constant function defined by \(\mathbb{1}(x)=1\).
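The pointwise operations are easy to write down concretely; here is a Python sketch (the names `f_add` and `f_mul` are ours, chosen for illustration):

```python
# Pointwise addition and multiplication of functions R -> R,
# illustrating the ring structure on a space of functions.
import math

def f_add(f, g):
    return lambda x: f(x) + g(x)

def f_mul(f, g):
    return lambda x: f(x) * g(x)

one = lambda x: 1  # the constant function 1: the multiplicative identity

h = f_mul(math.sin, math.cos)  # x -> sin(x) * cos(x)
k = f_add(math.sin, one)       # x -> sin(x) + 1

print(h(0.0))  # 0.0
print(k(0.0))  # 1.0

# Multiplying by the identity gives back the same function (pointwise):
assert f_mul(one, math.exp)(2.0) == math.exp(2.0)
```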
There are numerous such function spaces: the space of continuous functions, integrable functions, infinitely differentiable functions, and so on. And yes, polynomials. The study of rings of functions is important in mathematical analysis.
Exercise 7.0.3
 Generate two 'random' matrices \(M\) and \(N\) in \(M_{3\times 3}(\mathbb{Z}_6)\). Compute \(M+N\), \(MN\), and \(NM\).
 Consider \(f, g\in \mathbb{Z}_6[x]\), defined by \(f=x^3+2x^2+3x\) and \(g=4x^3+5x+4\). Find \(f+g\) and \(fg\).
Contributors
 Tom Denton (Fields Institute/York University in Toronto)