13.2: Summary of Algebraic Structures

    Loosely speaking, an algebraic structure is any set upon which "arithmetic-like'' operations have been defined. The importance of such structures in abstract mathematics cannot be overstated. By recognizing a given set \(S\) as an instance of a well-known algebraic structure, every result that is known about that abstract algebraic structure is then automatically also known to hold for \(S\). This utility is, in large part, the main motivation behind abstraction.

    Before reviewing the algebraic structures that are most important to the study of Linear Algebra, we first carefully define what it means for an operation to be "arithmetic-like''.

    C.1 Binary operations and scaling operations

    When discussing an arbitrary nonempty set \(S\), you should never assume that \(S\) has any type of "structure'' (algebraic or otherwise) unless the context suggests differently. Put another way, the elements in \(S\) can only ever really be related to each other in a subjective manner. E.g., if we take \(S = \{\text{Alice},\,\text{Bob},\,\text{Carol}\}\), then there is nothing intrinsic in the definition of \(S\) that suggests how these names should objectively be related to one another.

    If, on the other hand, we take \(S = \mathbb{R}\), then you have no doubt been conditioned to expect that a great deal of "structure'' already exists within \(S\). E.g., given any two real numbers \(r_{1}, r_{2} \in \mathbb{R}\), one can form the sum \(r_{1} + r_{2}\), the difference \(r_{1} - r_{2}\), the product \(r_{1}r_{2}\), the quotient \(r_{1} / r_{2}\) (assuming \(r_{2} \neq 0\)), the maximum \(\max\{r_{1}, r_{2}\}\), the minimum \(\min\{r_{1}, r_{2}\}\), the average \((r_{1} + r_{2})/2\), and so on. Each of these operations follows the same pattern: take two real numbers and "combine'' (or "compare'') them in order to form a new real number.

    Moreover, each of these operations imposes a sense of "structure'' within \(\mathbb{R}\) by relating real numbers to each other. We can abstract this to an arbitrary nonempty set as follows:

    Definition C.1.1. A binary operation on a nonempty set \(S\) is any function that has as its domain \(S \times S\) and as its codomain \(S\).

    In other words, a binary operation on \(S\) is any rule \(f : S \times S \to S\) that assigns exactly one element \(f(s_{1}, s_{2}) \in S\) to each pair of elements \(s_{1}, s_{2} \in S\). We illustrate this definition in the following examples.

    Example C.1.2.

    1. Addition, subtraction, and multiplication are all examples of familiar binary operations on \(\mathbb{R}\). Formally, one would denote these by something like

    \[ + : \mathbb{R} \times \mathbb{R} \to \mathbb{R}, \ - : \mathbb{R} \times \mathbb{R} \to \mathbb{R}, \ \text{and} \ * : \mathbb{R} \times \mathbb{R} \to \mathbb{R}, \ \text{respectively}. \]

    Then, given two real numbers \(r_{1}, r_{2} \in \mathbb{R}\), we would denote their sum by \(+(r_{1}, r_{2})\), their difference by \(-(r_{1}, r_{2})\), and their product by \(*(r_{1}, r_{2})\). (E.g., \(+(17, 32) = 49\), \(-(17, 32) = -15\), and \(*(17, 32) = 544\).) However, this level of notational formality can be rather inconvenient, and so we often resort to writing \(+(r_{1}, r_{2})\) as the more familiar expression \(r_{1} + r_{2}\), \(-(r_{1}, r_{2})\) as \(r_{1} - r_{2}\), and \(*(r_{1}, r_{2})\) as either \(r_{1} * r_{2}\) or \(r_{1}r_{2}\).

    2. The division function \(\div : \mathbb{R} \times \left( \mathbb{R}\setminus\{0\} \right) \to \mathbb{R}\) is not a binary operation on \(\mathbb{R}\) since it does not have the proper domain. However, division is a binary operation on \(\mathbb{R}\setminus\{0\}\).

    3. Other binary operations on \(\mathbb{R}\) include the maximum function \(\max:\mathbb{R}\times\mathbb{R}\to\mathbb{R}\), the minimum function \(\min:\mathbb{R}\times\mathbb{R}\to\mathbb{R}\), and the average function \((\cdot + \cdot)/2:\mathbb{R}\times\mathbb{R}\to\mathbb{R}\).

    4. An example of a binary operation \(f\) on the set \(S = \{\text{Alice},\,\text{Bob},\,\text{Carol}\}\) is given by

    \[ f(s_{1}, s_{2}) = \begin{cases} s_{1} & \text{if } s_{1} \text{ alphabetically precedes } s_{2}, \\ \text{Bob} & \text{otherwise}. \end{cases}\]

    This is because the only requirement for a binary operation is that exactly one element of \(S\) is assigned to every ordered pair of elements \((s_{1}, s_{2}) \in S \times S\).
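    To make this concrete, here is a short illustrative sketch in Python (the function names are ours, chosen only for illustration) of Definition C.1.1: each binary operation below is simply a function that assigns exactly one element of \(S\) to every ordered pair of elements of \(S\).

```python
# A minimal sketch (illustrative names only): binary operations as
# functions f : S x S -> S.

S = {"Alice", "Bob", "Carol"}

def precedes_or_bob(s1, s2):
    """The operation f from the example above on S = {Alice, Bob, Carol}."""
    return s1 if s1 < s2 else "Bob"   # string comparison is alphabetical here

def add(r1, r2):
    """Addition viewed as a binary operation + : R x R -> R."""
    return r1 + r2

# Every ordered pair of elements of S is assigned exactly one element of S.
assert all(precedes_or_bob(s1, s2) in S for s1 in S for s2 in S)

print(precedes_or_bob("Alice", "Carol"))  # Alice (Alice precedes Carol)
print(precedes_or_bob("Carol", "Alice"))  # Bob (the "otherwise" case)
print(add(17, 32))                        # 49, i.e., +(17, 32) = 49
```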

    Even though one could define any number of binary operations upon a given nonempty set, we are generally only interested in operations that satisfy additional "arithmetic-like'' conditions. In other words, the most interesting binary operations are those that, in some sense, abstract the salient properties of common binary operations like addition and multiplication on \(\mathbb{R}\). We make this precise with the definition of a so-called "group'' in Section C.2.

    At the same time, though, binary operations can only be used to impose "structure'' within a set. In many settings, it is equally useful to additionally impose "structure'' upon a set from the outside. Specifically, one can define relationships between the elements of an arbitrary set and the elements of a field as follows:

    Definition C.1.3. A scaling operation (a.k.a. external binary operation) on a nonempty set \(S\) is any function that has as its domain \(\mathbb{F} \times S\) and as its codomain \(S\), where \(\mathbb{F}\) denotes an arbitrary field. (As usual, you should just think of \(\mathbb{F}\) as being either \(\mathbb{R}\) or \(\mathbb{C}\)).

    In other words, a scaling operation on \(S\) is any rule \(f : \mathbb{F} \times S \to S\) that assigns exactly one element \(f(\alpha, s) \in S\) to each pair of elements \(\alpha \in \mathbb{F}\) and \(s \in S\). This abstracts the concept of "scaling'' an object in \(S\) without changing what "type'' of object it already is. As such, \(f(\alpha, s)\) is often written simply as \(\alpha s\). We illustrate this definition in the following examples.

    Example C.1.4.

    1. Scalar multiplication of \(n\)-tuples in \(\mathbb{R}^{n}\) is probably the most familiar scaling operation to you. Formally, scalar multiplication on \(\mathbb{R}^{n}\) is defined as the following function:

    \[ \left( \alpha, (x_{1}, \ldots, x_{n}) \right) \longmapsto \alpha (x_{1}, \ldots, x_{n}) = (\alpha x_{1}, \ldots, \alpha x_{n}), \ \forall \, \alpha \in \mathbb{R}, \ \forall \, (x_{1}, \ldots, x_{n}) \in \mathbb{R}^n.\]

    In other words, given any \(\alpha \in \mathbb{R}\) and any \(n\)-tuple \((x_{1}, \ldots,x_{n}) \in \mathbb{R}^n\), their scalar multiplication results in a new \(n\)-tuple denoted by \(\alpha (x_{1}, \ldots, x_{n})\). This new \(n\)-tuple is virtually identical to the original, each component having just been "rescaled'' by \(\alpha\). (A short computational sketch of this operation appears after this example.)

    2. Scalar multiplication of continuous functions is another familiar scaling operation. Given any real number \(\alpha \in \mathbb{R}\) and any function \(f \in \mathcal{C}(\mathbb{R})\), their scalar multiplication results in a new function that is denoted by \(\alpha f\), where \(\alpha f\) is defined by the rule \[ (\alpha f)(r) = \alpha (f(r)), \ \forall \, r \in \mathbb{R}. \] In other words, this new continuous function \(\alpha f \in \mathcal{C}(\mathbb{R})\) is virtually identical to the original function \(f\); it just "rescales'' the image of each \(r \in \mathbb{R}\) under \(f\) by \(\alpha\).

    3. The division function \(\div : \mathbb{R} \times \left( \mathbb{R}\setminus\{0\} \right) \to \mathbb{R}\) is a scaling operation on \(\mathbb{R}\setminus\{0\}\). In particular, given two real numbers \(r_{1}, r_{2} \in \mathbb{R}\) and any non-zero real number \(s \in \mathbb{R}\setminus\{0\}\), we have that \(\div(r_{1}, s) = r_{1}(1/s)\) and \(\div(r_{2}, s) = r_{2}(1/s)\), and so \(\div(r_{1}, s)\) and \(\div(r_{2}, s)\) can be viewed as different "scalings'' of the multiplicative inverse \(1/s\) of \(s\).

    This is actually a special case of the previous example. In particular, we can define a function \(f \in \mathcal{C}(\mathbb{R}\setminus\{0\})\) by \(f(s) = 1/s\), for each \(s \in \mathbb{R}\setminus\{0\}\). Then, given any two real numbers \(r_{1}, r_{2} \in \mathbb{R}\), the functions \(r_{1}f\) and \(r_{2}f\) can be defined by

    \[ r_{1}f(\cdot) = \div(r_{1}, \cdot) \ \ \text{and} \ \ r_{2}f(\cdot) = \div(r_{2}, \cdot), \ \text{respectively}.\]

    4. Strictly speaking, there is nothing in the definition that precludes \(S\) from equaling \(\mathbb{F}\). Consequently, addition, subtraction, and multiplication can all be seen as examples of scaling operations on \(\mathbb{R}\).
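    As an illustrative sketch (in Python, with hypothetical helper names), the first two scaling operations above can be written as ordinary functions of a scalar and an element of \(S\):

```python
# A minimal sketch (assumed names): scaling operations f : F x S -> S
# with F = R, for S = R^n (tuples) and for S = C(R) (functions).

def scale_tuple(alpha, x):
    """Send (alpha, (x_1, ..., x_n)) to (alpha*x_1, ..., alpha*x_n)."""
    return tuple(alpha * xi for xi in x)

def scale_function(alpha, f):
    """Scalar multiplication of functions: (alpha*f)(r) = alpha*(f(r))."""
    return lambda r: alpha * f(r)

print(scale_tuple(3.0, (1.0, -2.0, 0.5)))    # (3.0, -6.0, 1.5)

g = scale_function(2.0, lambda r: r ** 2)    # g is the function r -> 2*r^2
print(g(4.0))                                # 32.0
```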

    As with binary operations, it is easy to define any number of scaling operations upon a given nonempty set \(S\). However, we are generally only interested in operations that are essentially like scalar multiplication on \(\mathbb{R}^{n}\), and it is also quite common to additionally impose conditions for how scaling operations should interact with any binary operations that might also be defined upon \(S\). We make this precise when we present an alternate formulation of the definition for a vector space in Section C.2.

    Put another way, the definitions for binary operation and scaling operation are not particularly useful when taken as is. Since these operations are allowed to be any functions having the proper domains, there is no immediate sense of meaningful abstraction. Instead, binary and scaling operations become useful when additional conditions are placed upon them so that they can be used to abstract "arithmetic-like'' properties. In other words, we are usually only interested in operations that abstract the salient properties of familiar operations for combining things like numbers, \(n\)-tuples, and functions.

    C.2 Groups, fields, and vector spaces

    We begin this section with the following definition, which is unequivocally one of the most fundamental and ubiquitous notions in all of abstract mathematics.

    Definition C.2.1.

    Let \(G\) be a nonempty set, and let \(*\) be a binary operation on \(G\). (In other words, \(*:G \times G \to G\) is a function with \(*(a, b)\) denoted by \(a*b\), for each \(a, b \in G\).) Then \(G\) is said to form a group under \(*\) if the following three conditions are satisfied:

    1. (associativity) Given any three elements \(a, b, c \in G\), \[ (a * b) * c = a * (b * c). \]
    2. (existence of an identity element) There is an element \(e \in G\) such that, given any element \(a \in G\), \[ a * e = e * a = a. \]
    3. (existence of inverse elements) Given any element \(a \in G\), there is an element \(b \in G\) such that \[ a * b = b * a = e. \]

    You should recognize these three conditions (which are sometimes collectively referred to as the group axioms) as properties that are satisfied by the operation of addition on \(\mathbb{R}\). This is not an accident. In particular, given real numbers \(\alpha, \beta \in \mathbb{R}\), the group axioms form the minimal set of assumptions needed in order to solve the equation \(x + \alpha = \beta\) for the variable \(x\), and it is in this sense that the group axioms are an abstraction of the most fundamental properties of addition of real numbers.
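    For instance, assuming \(x + \alpha = \beta\) and writing \(-\alpha\) for the additive inverse of \(\alpha\), the group axioms give \[ x = x + 0 = x + (\alpha + (-\alpha)) = (x + \alpha) + (-\alpha) = \beta + (-\alpha), \] where the first two equalities use the identity and inverse axioms, the third uses associativity, and the last uses the assumed equation. Hence any solution must equal \(\beta + (-\alpha)\), and substituting this value back into the equation verifies that it is indeed a solution.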

    A similar remark holds regarding multiplication on \(\mathbb{R}\setminus\{0\}\) and solving the equation \(\alpha x = \beta\) for the variable \(x\). Note, however, that this cannot be extended to all of \(\mathbb{R}\) since \(0\) does not have a multiplicative inverse.

    Because the group axioms are so general, they are particularly useful in building more complicated algebraic structures. This is done by adding any number of additional axioms, the most fundamental of which is as follows.

    Definition C.2.2. Let \(G\) be a group under binary operation \(*\). Then \(G\) is called an abelian group (a.k.a. commutative group) if, given any two elements \(a, b \in G\), \(a * b = b * a\).

    Examples of groups are everywhere in abstract mathematics. We now give some of the more important examples that occur in Linear Algebra. Please note, though, that these examples are primarily aimed at motivating the definitions of more complicated algebraic structures. (In general, groups can be much "stranger'' than those below.)

    Example C.2.3.

    1. If \(G \in \left\{ \mathbb{Z}, \,\mathbb{Q}, \,\mathbb{R}, \,\mathbb{C} \right\}\), then \(G\) forms an abelian group under the usual definition of addition.

    Note, though, that the set \(\mathbb{Z}_{+}\) of positive integers does not form a group under addition since, e.g., it does not contain an additive identity element.

    2. Similarly, if \(G \in \left\{ \,\mathbb{Q}\setminus\{0\}, \,\mathbb{R}\setminus\{0\}, \,\mathbb{C}\setminus\{0\} \right\}\), then \(G\) forms an abelian group under the usual definition of multiplication.

    Note, though, that \(\mathbb{Z}\setminus\{0\}\) does not form a group under multiplication since only \(\pm 1\) have multiplicative inverses.

    3. If \(m, n \in \mathbb{Z}_{+}\) are positive integers and \(\mathbb{F}\) denotes either \(\mathbb{R}\) or \(\mathbb{C}\), then the set \(\mathbb{F}^{m \times n}\) of all \(m \times n\) matrices forms an abelian group under matrix addition.

    Note, though, that \(\mathbb{F}^{m \times n}\) does not form a group under matrix multiplication unless \(m = n = 1\), in which case \(\mathbb{F}^{1 \times 1} = \mathbb{F}\).

    4. Similarly, if \(n \in \mathbb{Z}_{+}\) is a positive integer and \(\mathbb{F}\) denotes either \(\mathbb{R}\) or \(\mathbb{C}\), then the set \(GL(n, \mathbb{F})\) of invertible \(n \times n\) matrices forms a group under matrix multiplication. This group, which is often called the general linear group, is non-abelian when \(n \geq 2\). (A short numerical sketch follows this example.)

    Note, though, that \(GL(n, \mathbb{F})\) does not form a group under matrix addition for any choice of \(n\) since, e.g., the zero matrix \(0_{n \times n} \notin GL(n, \mathbb{F})\).
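    As a quick numerical illustration (a sketch in Python, assuming the NumPy library), the following two invertible \(2 \times 2\) matrices do not commute, which is consistent with \(GL(2, \mathbb{R})\) being non-abelian; the final line recalls why the zero matrix cannot belong to \(GL(2, \mathbb{R})\).

```python
import numpy as np

# Two invertible 2x2 real matrices (each has determinant 1).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[1.0, 0.0],
              [1.0, 1.0]])

print(A @ B)                      # [[2. 1.]
                                  #  [1. 1.]]
print(B @ A)                      # [[1. 1.]
                                  #  [1. 2.]]
print(np.allclose(A @ B, B @ A))  # False: A and B do not commute, so
                                  # GL(2, R) is not abelian.

# The zero matrix is not invertible, so it lies outside GL(2, R); this is
# one reason GL(2, R) fails to be a group under matrix addition.
print(np.linalg.det(np.zeros((2, 2))))   # 0.0
```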

    In the above examples, you should notice two things. First of all, it is important to specify the operation under which a set might or might not be a group. Second, and perhaps more importantly, all but one example is an abelian group. Most of the important sets in Linear Algebra possess some type of algebraic structure, and abelian groups are the principal building block of virtually every one of these algebraic structures. In particular, fields and vector spaces (as defined below) and rings and algebras (as defined in Section C.3) can all be described as "abelian groups plus additional structure''.

    Given an abelian group \(G\), adding "additional structure'' amounts to imposing one or more additional operations on \(G\) such that each new operation is "compatible'' with the preexisting binary operation on \(G\). As our first example of this, we add another binary operation to \(G\) in order to obtain the definition of a field:

    Definition C.2.4. Let \(F\) be a nonempty set, and let \(+\) and \(*\) be binary operations on \(F\). Then \(F\) forms a field under \(+\) and \(*\) if the following three conditions are satisfied:

    1. \(F\) forms an abelian group under \(+\).
    2. Denoting the identity element for \(+\) by \(0\), \(F\setminus\{0\}\) forms an abelian group under \(*\).
    3. (\(*\) distributes over \(+\)) Given any three elements \(a, b, c \in F\), \[ a * (b + c) = a * b + a * c. \]

    You should recognize these three conditions (which are sometimes collectively referred to as the field axioms) as properties that are satisfied when the operations of addition and multiplication are taken together on \(\mathbb{R}\). This is not an accident. As with the group axioms, the field axioms form the minimal set of assumptions needed in order to abstract fundamental properties of these familiar arithmetic operations. Specifically, the field axioms guarantee that, given any field \(F\), three conditions are always satisfied:

    1. Given any \(a, b \in F\), the equation \(x + a = b\) can be solved for the variable \(x\).
    2. Given any \(a \in F\setminus\{0\}\) and \(b \in F\), the equation \(a * x = b\) can be solved for \(x\).
    3. The binary operation \(*\) (which is like multiplication on \(\mathbb{R}\)) can be distributed over (i.e., is "compatible'' with) the binary operation \(+\) (which is like addition on \(\mathbb{R}\)).

    Example C.2.5.

    It should be clear that, if \(F \in \left\{\mathbb{Q}, \,\mathbb{R}, \,\mathbb{C} \right\}\), then \(F\) forms a field under the usual definitions of addition and multiplication.

    Note, though, that the set \(\mathbb{Z}\) of integers does not form a field under these operations since \(\mathbb{Z} \setminus \{0\}\) fails to form a group under multiplication. Similarly, none of the other sets from Example C.2.3 can be made into a field.
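    As a small computational illustration (a Python sketch using the standard fractions module), exact rational arithmetic makes the field properties of \(\mathbb{Q}\), and their failure in \(\mathbb{Z}\), easy to see:

```python
from fractions import Fraction

a, b = Fraction(3, 4), Fraction(5, 2)

# In the field Q, both basic equations are solvable within Q itself.
x_add = b - a            # solves x + a = b
x_mul = b / a            # solves a * x = b, since a != 0
print(x_add, x_mul)      # 7/4 10/3
assert x_add + a == b and a * x_mul == b

# In Z, multiplicative inverses are usually missing: the inverse of 2 in Q
# is 1/2, which is not an integer, so 2 * x = 1 has no solution in Z.
inv_of_two = 1 / Fraction(2)
print(inv_of_two, inv_of_two.denominator == 1)   # 1/2 False
```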

    In some sense, \(\mathbb{Q}\), \(\mathbb{R}\), and \(\mathbb{C}\) are the only easily describable fields. While there are many other interesting and useful examples of fields, none of them can be described using entirely familiar sets and operations. This is because the field axioms are extremely specific in describing algebraic structure. As we will see in the next section, though, we can build a much more general algebraic structure called a "ring'' by still requiring that \(F\) form an abelian group under \(+\) but relaxing the requirement that \(F\setminus\{0\}\) simultaneously form an abelian group under \(*\).

    For now, though, we close this section by taking a completely different point of view. Rather than place an additional (and multiplication-like) binary operation on an abelian group, we instead impose a special type of scaling operation called scalar multiplication. In essence, scalar multiplication imparts useful algebraic structure on an arbitrary nonempty set \(S\) by indirectly imposing the algebraic structure of \(\mathbb{F}\) as an abelian group under multiplication. (Recall that \(\mathbb{F}\) can be replaced with either \(\mathbb{R}\) or \(\mathbb{C}\).)

    Definition C.2.6. Let \(S\) be a nonempty set, and let \(*\) be a scaling operation on \(S\). (In other words, \(* : \mathbb{F} \times S \to S\) is a function with \(*(\alpha, s)\) denoted by \(\alpha*s\) or even just \(\alpha s\), for every \(\alpha \in \mathbb{F}\) and \(s \in S\).) Then \(*\) is called scalar multiplication if it satisfies the following two conditions:

    1. (existence of a multiplicative identity element for \(*\)) Denote by \(1\) the multiplicative identity element for \(\mathbb{F}\). Then, given any \(s \in S\), \(1 * s = s\).
    2. (multiplication in \(\mathbb{F}\) is quasi-associative with respect to \(*\)) Given any \(\alpha, \beta \in \mathbb{F}\) and any \(s \in S\), \[ (\alpha \beta) * s = \alpha * (\beta * s). \]

    Note that we choose to have the multiplicative part of \(\mathbb{F}\) "act'' upon \(S\) because we are abstracting scalar multiplication as it is intuitively defined in Example C.1.4 on both \(\mathbb{R}^{n}\) and \(\mathcal{C}(\mathbb{R})\). Then, by also requiring a "compatible'' additive structure (called vector addition), we obtain the following alternate formulation for the definition of a vector space.

    Definition C.2.7.

    Let \(V\) be an abelian group under the binary operation \(+\), and let \(*\) be a scalar multiplication operation on \(V\) with respect to \(\mathbb{F}\). Then \(V\) forms a vector space over \(\mathbb{F}\) with respect to \(+\) and \(*\) if the following two conditions are satisfied:

    1. (\(*\) distributes over \(+\)) Given any \(\alpha \in \mathbb{F}\) and any \(u, v \in V\), \[ \alpha * (u + v) = \alpha * u + \alpha * v. \]
    2. (\(*\) distributes over addition in \(\mathbb{F}\)) Given any \(\alpha, \beta \in \mathbb{F}\) and any \(v \in V\), \[ (\alpha + \beta) * v = \alpha * v + \beta * v. \]
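    The following sketch (in Python, with hypothetical helper names) spot-checks the scalar multiplication conditions of Definition C.2.6 and the two distributive conditions of Definition C.2.7 for \(V = \mathbb{R}^{3}\) over \(\mathbb{F} = \mathbb{R}\), using componentwise vector addition and the scaling operation from Example C.1.4:

```python
# A minimal sketch (assumed helper names): spot-checking the vector space
# conditions for V = R^3 over F = R with componentwise operations.

def add(u, v):
    return tuple(ui + vi for ui, vi in zip(u, v))

def scale(alpha, v):
    return tuple(alpha * vi for vi in v)

alpha, beta = 2.0, -3.0
u, v = (1.0, 0.0, 4.0), (-2.0, 5.0, 0.5)

# Definition C.2.6: identity and quasi-associativity of scalar multiplication.
assert scale(1.0, v) == v
assert scale(alpha * beta, v) == scale(alpha, scale(beta, v))

# Definition C.2.7: the two distributivity conditions.
assert scale(alpha, add(u, v)) == add(scale(alpha, u), scale(alpha, v))
assert scale(alpha + beta, v) == add(scale(alpha, v), scale(beta, v))

print("All vector space conditions hold for this sample.")
```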

    C.3 Rings and algebras

    In this section, we briefly mention two other common algebraic structures. Specifically, we first "relax'' the definition of a field in order to define a ring, and we then combine the definitions of ring and vector space in order to define an algebra. In some sense, groups, rings, and fields are the most fundamental algebraic structures, with vector spaces and algebras being particularly important variants within the study of Linear Algebra and its applications.

    Definition C.3.1.

    Let \(R\) be a nonempty set, and let \(+\) and \(*\) be binary operations on \(R\). Then \(R\) forms an (associative) ring under \(+\) and \(*\) if the following three conditions are satisfied:

    1. \(R\) forms an abelian group under \(+\).
    2. (\(*\) is associative) Given any three elements \(a, b, c \in R\), \(a * (b * c) = (a * b) * c\).
    3. (\(*\) distributes over \(+\)) Given any three elements \(a, b, c \in R\),

    \[ a * (b + c) = a * b + a * c \ \ \text{and} \ \ (a + b) * c = a * c + b * c.\]

    As with the definition of group, there are many additional properties that can be added to a ring; here, each additional property makes a ring more field-like in some way.

    Definition C.3.2. Let \(R\) be a ring under the binary operations \(+\) and \(*\).

    Then we call \(R\)

    1. commutative if \(*\) is a commutative operation; i.e., given any \(a, b \in R\), \(a * b = b * a\).
    2. unital if there is an identity element for \(*\); i.e., if there exists an element \(i \in R\) such that, given any \(a \in R\), \(a * i = i * a = a\).
    3. a commutative ring with identity (a.k.a. CRI) if it's both commutative and unital.

    In particular, note that a commutative ring with identity is almost a field; the only thing missing is the assumption that every element has a multiplicative inverse. It is this one difference that results in many familiar sets being CRIs (or at least unital rings) but not fields. E.g., \(\mathbb{Z}\) is a CRI under the usual operations of addition and multiplication, yet, because of the lack of multiplicative inverses for all elements except \(\pm 1\), \(\mathbb{Z}\) is not a field.

    In some sense, \(\mathbb{Z}\) is the prototypical example of a ring, but there are many other familiar examples. E.g., if \(F\) is any field, then the set of polynomials \(F[z]\) with coefficients from \(F\) is a CRI under the usual operations of polynomial addition and multiplication, but again, because of the lack of multiplicative inverses for every element, \(F[z]\) is itself not a field. Another important example of a ring comes from Linear Algebra. Given any vector space \(V\), the set \(\mathcal{L}(V)\) of all linear maps from \(V\) into \(V\) is a unital ring under the operations of function addition and composition. However, \(\mathcal{L}(V)\) is not a CRI unless \(\dim(V) \in \{0, 1\}\).
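    As a brief illustration (a Python sketch assuming NumPy's polynomial module), sums and products of polynomials are again polynomials, multiplication is commutative and has an identity, yet the reciprocal of a non-constant polynomial is not a polynomial; this is one concrete way to see that \(F[z]\) is a CRI but not a field.

```python
import numpy as np
from numpy.polynomial import Polynomial as P

p = P([1.0, 2.0])          # represents 1 + 2z
q = P([0.0, 0.0, 3.0])     # represents 3z^2

# Sums and products of polynomials are again polynomials, and
# multiplication is commutative; F[z] behaves like a commutative ring.
print((p + q).coef)                                  # [1. 2. 3.]
print((p * q).coef)                                  # [0. 0. 3. 6.]
print(np.array_equal((p * q).coef, (q * p).coef))    # True

# The constant polynomial 1 acts as a multiplicative identity, so F[z]
# is a unital (in fact commutative) ring, i.e., a CRI.
print(np.array_equal((p * P([1.0])).coef, p.coef))   # True

# However, 1/(1 + 2z) is not a polynomial, so p has no multiplicative
# inverse inside F[z]; this is why F[z] is not a field.
```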

    Alternatively, if the set \(R\setminus\{0\}\) of nonzero elements of a ring \(R\) forms a group under \(*\) (but not necessarily an abelian group), then \(R\) is sometimes called a skew field (a.k.a. division ring). Note that a skew field is also almost a field; the only thing missing is the assumption that multiplication is commutative. Unlike CRIs, though, there are no simple examples of skew fields that are not also fields.

    As you can probably imagine, there are many other properties that can be appended to the definition of a ring, some of which are more useful than others. We close this section by defining the concept of an algebra over a field. In essence, an algebra is a vector space together with a "compatible'' ring structure. Consequently, anything that can be done with either a ring or a vector space can also be done with an algebra.

    Definition C.3.3.

    Let \(A\) be a nonempty set, let \(+\) and \(\times\) be binary operations on \(A\), and let \(*\) be scalar multiplication on \(A\) with respect to \(\mathbb{F}\). Then \(A\) forms an (associative) algebra over \(\mathbb{F}\) with respect to \(+\), \(\times\), and \(*\) if the following three conditions are satisfied:

    1. \(A\) forms an (associative) ring under \(+\) and \(\times\).
    2. \(A\) forms a vector space over \(\mathbb{F}\) with respect to \(+\) and \(*\).
    3. (\(*\) is quasi-associative and homogeneous with respect to \(\times\)) Given any element \(\alpha \in \mathbb{F}\) and any two elements \(a, b \in A\), \[ \alpha * (a \times b) = (\alpha * a) \times b \ \ \text{and} \ \ \alpha * (a \times b) = a \times (\alpha * b). \]

    Two particularly important examples of algebras were already defined above: \(F[z]\) (which is unital and commutative) and \(\mathcal{L}(V)\) (which is, in general, just unital). On the other hand, there are also many important sets in Linear Algebra that are not algebras. E.g., \(\mathbb{Z}\) is a ring that cannot easily be made into an algebra, and \(\mathbb{R}^{3}\) is a vector space but cannot easily be made into a ring (since the cross product operation from Vector Calculus is not associative).
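    The failure of associativity for the cross product is easy to exhibit concretely. The following sketch (in Python, assuming NumPy) shows one such counterexample and, by contrast, spot-checks the quasi-associativity condition of Definition C.3.3 for \(2 \times 2\) matrices under matrix multiplication and scalar multiplication:

```python
import numpy as np

# The cross product on R^3 is not associative, so R^3 with the cross
# product does not form an (associative) ring, let alone an algebra.
a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0])
c = np.array([0.0, 1.0, 0.0])
print(np.cross(np.cross(a, b), c))   # [0. 0. 0.]
print(np.cross(a, np.cross(b, c)))   # [ 0. -1.  0.]

# By contrast, 2x2 matrices satisfy the quasi-associativity/homogeneity
# condition: alpha*(A @ B) == (alpha*A) @ B == A @ (alpha*B).
alpha = 2.5
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
print(np.allclose(alpha * (A @ B), (alpha * A) @ B))   # True
print(np.allclose(alpha * (A @ B), A @ (alpha * B)))   # True
```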

