5.3: Simplified Signal Flow Graphs
We now return to signal flow graphs, expressing them in terms of props. We will discuss a simplified form without feedback (the only sort we have discussed so far), and then extend to the usual form of signal flow graphs in Section 5.4.3. But before we can do that, we must say what we mean by signals; this gets us into the algebraic structure of “rigs.” We will get to signal flow graphs in Section 5.3.2.
Rigs
Signals can be amplified, and they can be added. Adding and amplification interact via a distributive law, as follows: if we add two signals, and then amplify them by some amount a, it should be the same as amplifying the two signals separately by a, then adding the results.
We can think of all the possible amplifications as forming a structure called a rig,\(^{7}\) defined as follows.
A rig is a tuple (R, 0, +, 1, ∗), where R is a set, 0, 1 \(\in\) R are elements, and +, ∗ : R × R → R are functions, such that
- (R, +, 0) is a commutative monoid,
- (R, ∗, 1) is a monoid,\(^{8}\) and
- a ∗ (b + c) = a ∗ b + a ∗ c and (a + b) ∗ c = a ∗ c + b ∗ c for all a, b, c \(\in\) R.
- a ∗ 0 = 0 = 0 ∗ a for all a \(\in\) R.
We have already encountered many examples of rigs.
The natural numbers form a rig (\(\mathbb{N}\), 0, +, 1, ∗).
The Booleans form a rig (\(\mathbb{B}\), false, \(\vee\), true, \(\wedge\)).
Any quantale V = (V, ≤, I, ⊗) determines a rig (V, 0, \(\bigvee\), I, ⊗), where 0 = \(\bigvee\)Ø is the empty join. See Definition 2.79.
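The rig axioms are easy to spot-check computationally. The following sketch (with hypothetical names `Rig`, `NAT`, `BOOL`, and `check_distributivity`, none of which come from the text) packages a rig as its two constants and two operations, and verifies the distributivity laws for the natural-number and Boolean rigs above on a handful of elements.

```python
from dataclasses import dataclass
from itertools import product
from typing import Any, Callable

@dataclass(frozen=True)
class Rig:
    """A rig (R, 0, +, 1, *), packaged as data. Hypothetical helper."""
    zero: Any
    add: Callable
    one: Any
    mul: Callable

# The rigs of natural numbers and of Booleans described above.
NAT = Rig(0, lambda a, b: a + b, 1, lambda a, b: a * b)
BOOL = Rig(False, lambda a, b: a or b, True, lambda a, b: a and b)

def check_distributivity(rig, elements):
    """Spot-check a*(b+c) = a*b + a*c and (a+b)*c = a*c + b*c."""
    for a, b, c in product(elements, repeat=3):
        assert rig.mul(a, rig.add(b, c)) == rig.add(rig.mul(a, b), rig.mul(a, c))
        assert rig.mul(rig.add(a, b), c) == rig.add(rig.mul(a, c), rig.mul(b, c))
    return True

assert check_distributivity(NAT, range(5))
assert check_distributivity(BOOL, [False, True])
```

Of course, a finite spot-check is not a proof; it only illustrates what the axioms say.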
If R is a rig and n \(\in\) \(\mathbb{N}\) is any natural number, then the set Mat\(_{n}\)(R) of (n × n) matrices in R forms a rig. A matrix M \(\in\) Mat\(_{n}\)(R) is a function M : \(\underline{n}\) × \(\underline{n}\) → R.
Addition M + N of matrices is given by (M + N)(i, j) := M(i, j) + N(i, j) and multiplication M ∗ N is given by (M ∗ N)(i, j) := \(\sum_{k \in \underline{n}}\) M(i, k) ∗ N(k, j). The 0-matrix is 0(i, j) := 0 for all i, j \(\in\) \(\underline{n}\). Note that Mat\(_{n}\)(R) is generally not commutative.
1. We said in Example 5.40 that for any rig R, the set Mat\(_{n}\)(R) forms a rig. What is its multiplicative identity 1 \(\in\) Mat\(_{n}\)(R)?
2. We also said that Mat\(_{n}\)(R) is generally not commutative. Pick an n and show that Mat\(_{n}\)(\(\mathbb{N}\)) is not commutative, where \(\mathbb{N}\) is as in Example 5.37. ♦
Any ring forms a rig. In particular, the real numbers (\(\mathbb{R}\), 0, +, 1, ∗) are a rig. The difference between a ring and rig is that a ring, in addition to all the properties of a rig, must also have additive inverses, or negatives. A common mnemonic is that a rig is a ring without negatives.
The iconography of signal flow graphs
A signal flow graph is supposed to keep track of the amplification, by elements of a rig R, to which signals are subjected. While not strictly necessary,\(^{9}\) we will assume the signals themselves are elements of the same rig R. We refer to elements of R as signals for the time being.
Amplification of a signal by some value a \(\in\) R is simply depicted like so:

We interpret the above icon as depicting a system where a signal enters on the left-hand wire, is multiplied by a, and is output on the right-hand wire.
What is more interesting than just a single signal amplification, however, is the interaction of signals. There are four other important icons in signal flow graphs.

Let’s go through them one by one. The first two are old friends from Chapter 2: copy and discard.

We interpret this diagram as taking in an input signal on the left, and outputting that same value to both wires on the right. It is basically the “copy” operation from Section 2.2.3.
Next, we have the ability to discard signals.

This takes in any signal, and outputs nothing. It is basically the “waste” operation from Section 2.2.3. Next, we have the ability to add signals.

This takes the two input signals and adds them, to produce a single output signal. Finally, we have the zero signal.

This has no inputs, but always outputs the 0 element of the rig.
Using these icons, we can build more complex signal flow graphs. To compute the operation performed by a signal flow graph we simply trace the paths with the above interpretations, plugging outputs of one icon into the inputs of the next icon.
For example, consider the rig R = \(\mathbb{N}\) from Example 5.37, where the scalars are the natural numbers. Recall the signal flow graph from Eq. (5.1) in the introduction:

As we explained, this takes in two input signals x and y, and returns two output signals a = 15x and b = 3x + 21y.
In addition to tracing the processing of the values as they move forward through the graph, we can also calculate these values by summing over paths. More explicitly, to get the contribution of a given input wire to a given output wire, we take the sum, over all paths p joining the wires, of the total amplification along that path.
So, for example, there is one path from the top input to the top output. On this path, the signal is first copied, which does not affect its value, then amplified by 5, and finally amplified by 3. Thus, if x is the first input signal, then this contributes 15x to the first output. Since there is no path from the bottom input to the top output (one is not allowed to traverse paths backwards), the signal at the first output is exactly 15x. Both inputs contribute to the bottom output. In fact, each input contributes in two ways, as there are two paths from each input to it. The top input thus contributes 3x = x + 2x, whereas the bottom input, passing through an additional ∗7 amplification, contributes 21y = 7y + 14y.
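The sum-over-paths rule is easy to mechanize. Since the figure for Eq. (5.1) is not reproduced here, the path decomposition below is an assumption, chosen to be consistent with the totals just computed (a = 15x and b = 3x + 21y); the function name `amplification` is ours.

```python
from math import prod

# Hypothetical path decomposition for the graph of Eq. (5.1): each path is
# the list of amplifications met along it, and a pair of wires may be
# joined by several paths.
paths = {
    ("x", "a"): [[5, 3]],          # one path: amplify by 5, then by 3
    ("x", "b"): [[1], [2]],        # two paths, with gains 1 and 2
    ("y", "b"): [[7, 1], [7, 2]],  # the same two routes, after a *7 stage
}

def amplification(paths, inp, out):
    """Sum, over all paths joining inp to out, of the product of gains."""
    return sum(prod(p) for p in paths.get((inp, out), []))

assert amplification(paths, "x", "a") == 15  # a = 15x
assert amplification(paths, "x", "b") == 3   # b receives 3x ...
assert amplification(paths, "y", "b") == 21  # ... plus 21y
assert amplification(paths, "y", "a") == 0   # no path from y to a
```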
The following flow graph takes in two natural numbers x and y

and produces two output signals. What are they? ♦
This example is for those who have some familiarity with differential equations. A linear system of differential equations provides a simple way to specify the movement of a particle. For example, consider a particle whose position (x, y, z) in 3-dimensional space is determined by the following equations:

Using what is known as the Laplace transform, one can convert this into a linear system involving a formal variable D, which stands for “differentiate.” Then the system becomes

which can be represented by the signal flow graph
Signal flow graphs as morphisms in a free prop
We can formally define simplified signal flow graphs using props.
Let R be a rig (see Definition 5.36). Consider the set
\( G_{R} := \{a \mid a \in R\} \cup \{\text{copy}, \text{discard}, \text{add}, \text{zero}\} \)
and let s, t : G\(_{R}\) → \(\mathbb{N}\) be given by the number of dangling wires on the left and right of the generator icon respectively. A simplified signal flow graph is a morphism in the free prop Free(G\(_{R}\)) on this set G\(_{R}\) of generators. We define SFG\(_{R}\) := Free(G\(_{R}\)).
For now we’ll drop the term ‘simplified’, since these are the only sort of signal flow graph we know. We’ll return to signal flow graphs in their full glory, i.e. including feedback, in Section 5.4.3.
To be more in line with our representations of both wiring diagrams and port graphs, morphisms in Free(G\(_{R}\)) should be drawn slightly differently. For example, technically the signal flow graph from Exercise 5.43 should be drawn as follows:

because we said we would label boxes with the elements of G\(_{R}\). But it is easier on the eye to remove the boxes and just look at the icons inside, as in Exercise 5.43, and so we’ll draw our diagrams in that fashion.
More importantly, props provide language to understand the semantics of signal flow graphs. Although the signal flow graphs themselves are free props, their semantics—their meaning in our model of signals flowing—will arise when we add equations to our props, as in Definition 5.33. These equations will tell us when two signal flow graphs act the same way on signals. For example,

both express the same behavior: a single input signal is copied twice so that three identical copies of the input signal are output.
If two signal flow graphs S, T are almost the same, with the one exception being that somewhere we replace the left-hand side of Eq. (5.47) with the right-hand side, then S and T have the same behavior. But there are other replacements we could make to a signal flow graph that do not change its behavior. Our next goal is to find a complete description of these replacements.
The prop of matrices over a rig
Signal flow graphs are closely related to matrices. In previous chapters we showed how a matrix with values in a quantale V—a closed monoidal preorder with all joins—represents a system of interrelated points and connections between them, such as a profunctor. The quantale gave us the structure and axioms we needed in order for matrix multiplication to work properly. But we know from Example 5.39 that quantales are examples of rigs, and in fact matrix multiplication makes sense in any rig R. In Example 5.40, we explained that the set Mat\(_{n}\)(R) of (n × n) matrices in R can naturally be assembled into a rig, for any fixed choice of n \(\in\) \(\mathbb{N}\). But what if we want to do better, and assemble all matrices into a single algebraic structure? The result is a prop!
An (m × n) matrix M with values in R is a function M : (\(\underline{m}\) × \(\underline{n}\)) → R. Given an (m × n) matrix M and an (n × p) matrix N, their composite is the (m × p) matrix M ; N defined as follows for any a \(\in\) \(\underline{m}\) and c \(\in\) \(\underline{p}\):
\(M \text { ; } N(a, c):=\sum_{b \in \underline{n}} M(a, b) \times N(b, c)\) (5.48)
Here the \(\sum_{b \in \underline{n}}\) just means repeated addition (using the rig R’s + operation), as usual.
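Eq. (5.48) can be written out directly. This sketch is our own; its list-of-rows representation cannot distinguish 0 × n from n × 0 matrices, so it is only meant for matrices with at least one row and column, here over the rig of natural numbers.

```python
def compose(M, N):
    """(M ; N)(a, c) = sum over b of M(a, b) * N(b, c), as in Eq. (5.48),
    over the rig of natural numbers."""
    m, n, p = len(M), len(N), len(N[0])
    assert all(len(row) == n for row in M), "inner dimensions must agree"
    return [[sum(M[a][b] * N[b][c] for b in range(n)) for c in range(p)]
            for a in range(m)]

# A row vector v acts on a matrix A by composition, v ; A:
v = [[1, 1]]             # the row vector (x, y) with x = y = 1
A = [[15, 3], [0, 21]]   # the matrix of the signal flow graph of Eq. (5.1)
assert compose(v, A) == [[15, 24]]   # (15x, 3x + 21y) at x = y = 1
```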
Remark 5.49. Conventionally, one generally considers a matrix A acting on a vector v by multiplication in the order Av, where v is a column vector. In keeping with our composition convention, we use the opposite order, v ; A, where v is a row vector. See for example Eq. (5.52) for when this is implicitly used.
Let R be a rig. We define the prop of R-matrices, denoted Mat(R), to be the prop whose morphisms m → n are the (m × n)-matrices with values in R. Composition of morphisms is given by matrix multiplication as in Eq. (5.48). The monoidal product is given by the direct sum of matrices: given matrices A : m → n and B : p → q, we define A + B : m + p → n + q to be the block matrix
\( A+B:=\left(\begin{array}{cc} A & 0 \\ 0 & B \end{array}\right) \)
where each 0 represents a matrix of zeros of the appropriate dimension (m × q and p × n respectively). We refer to any combination of multiplication and direct sum as an interconnection of matrices.
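The direct sum is equally short to write down; a minimal sketch (the helper name is ours), again over lists of natural numbers:

```python
def direct_sum(A, B):
    """The block matrix with A in the top left, B in the bottom right, and
    zero blocks elsewhere: the monoidal product of Mat(R)."""
    n = len(A[0]) if A else 0   # columns of A
    q = len(B[0]) if B else 0   # columns of B
    return ([row + [0] * q for row in A] +
            [[0] * n + row for row in B])

# A 1x2 matrix plus a 2x1 matrix gives a 3x3 block matrix:
assert direct_sum([[1, 2]], [[3], [4]]) == [[1, 2, 0],
                                            [0, 0, 3],
                                            [0, 0, 4]]
```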
Let A and B be the following matrices with values in \(\mathbb{N}\):

What is the direct sum matrix A + B ? ♦
Turning signal flow graphs into matrices
Let’s now consider more carefully what we mean when we talk about the meaning, or semantics, of each signal flow graph. We’ll use matrices.

In examples like the above (copied from Eq. (5.1)), the signals emanating from output wires, say a and b, are given by certain sums of amplified input values, say x and y. If we can only measure the input and output signals, and care nothing for what happens in between, then each signal flow graph may as well be reduced to a matrix of amplifications. We can represent the signal flow graph of Eq. (5.1) by either the matrix on the left (for more detail) or the matrix on the right if the labels are clear from context:
\( \begin{array}{c|cc} & a & b \\ \hline x & 15 & 3 \\ y & 0 & 21 \end{array} \qquad\qquad \left(\begin{array}{cc} 15 & 3 \\ 0 & 21 \end{array}\right) \)
Every signal flow graph can be interpreted as a matrix.
The generators G\(_{R}\) from Definition 5.45 are shown again in the table below, where each is interpreted as a matrix.
For example, we interpret amplification by a \(\in\) R as the 1 × 1 matrix (a) : 1 → 1: it is an operation that takes an input x \(\in\) R and returns a ∗ x. Similarly, we can interpret the addition icon as the 2 × 1 matrix \(\left(\begin{array}{l} 1 \\ 1 \end{array}\right)\): it is an operation that takes two inputs, x and y, and returns x + y. Here is a table showing the interpretation of each generator.
- amplification by a (1 → 1): the 1 × 1 matrix (a)
- copy (1 → 2): the 1 × 2 matrix \(\left(\begin{array}{ll} 1 & 1 \end{array}\right)\)
- discard (1 → 0): the unique 1 × 0 matrix
- add (2 → 1): the 2 × 1 matrix \(\left(\begin{array}{l} 1 \\ 1 \end{array}\right)\)
- zero (0 → 1): the unique 0 × 1 matrix
Note that both zero and discard are represented by empty matrices, but of differing dimensions. In linear algebra it is unusual to distinguish matrices of dimension 0 × n or n × 0 for various n, but they can be kept distinct for bookkeeping purposes: you can multiply a 0 × 3 matrix by a 3 × n matrix for any n, but you cannot multiply it by a 2 × n matrix.
Since signal flow graphs are morphisms in a free prop, the table in (5.52) is enough to show that we can interpret any signal flow diagram as a matrix.
There is a prop functor S : SFG\(_{R}\) → Mat(R) that sends each generator icon g \(\in\) G\(_{R}\) to the corresponding matrix described in Table 5.52.
Proof. This follows immediately from the universal property of free props, Remark 5.34. \(\square\)
We have now constructed a matrix S(g) from any signal flow graph g. But how can we produce this matrix explicitly? Both for the example signal flow graph in Eq. (5.1) and for the generators in Definition 5.45, the associated matrix has dimension m × n, where m is the number of inputs and n the number of outputs, with (i, j)th entry describing the amplification of the i th input that contributes to the j th output. This is how one would hope or expect the functor S to work in general; but does it? We have used a big hammer, the universal property of free constructions, to obtain our functor S. Our next goal is to check that it works in the expected way. Doing so is a matter of using induction over the set of prop expressions, as we now see.\(^{10}\)
Let g be a signal flow graph with m inputs and n outputs. The matrix S(g) is the (m × n) matrix whose (i, j) entry describes the amplification of the i th input that contributes to the j th output.
Proof. Recall from Definition 5.30 that an arbitrary G\(_{R}\)-generated prop expression is built from the morphisms id\(_{0}\) : 0 → 0, id\(_{1}\) : 1 → 1, σ : 2 → 2, and the generators in G\(_{R}\), using the following two rules:
- if α : m → n and β : p → q are expressions, then (α + β) : (m + p) → (n + q) is an expression;
- if α : m → n and β : n → p are expressions, then α ; β : m → p is an expression.
S is a prop functor by Theorem 5.53, which by Definition 5.11 must preserve identities, compositions, monoidal products, and symmetries. We first show that the proposition is true when g is equal to id\(_{0}\), id\(_{1}\), and σ.
The empty signal flow graph id\(_{0}\) : 0 → 0 must be sent to the unique (empty) matrix (): 0 → 0.
The morphisms id\(_{1}\), σ, and a \(\in\) R map to the identity matrix, the swap matrix, and the scalar matrix (a) respectively:
\( S(\mathrm{id}_{1}) = (1), \qquad S(\sigma)=\left(\begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right), \qquad S(a) = (a) \)
In each case, the (i, j) entry gives the amplification of the i th input to the j th output. It remains to show that if the proposition holds for α : m → n and β : p → q, then it holds for (i) α ; β (when n = p) and for (ii) α + β (in general). To prove (i), consider the following picture of α ; β:

Here α : m → n and β : n → q are signal flow graphs, assumed to obey the proposition. Consider the i th input and k th output of α ; β; we’ll just call these i and k. We want to show that the amplification that i contributes to k is the sum—over all paths from i to k—of the amplification along that path. So let’s also fix some j \(\in\) \(\underline{n}\), and consider paths from i to k that run through j. By distributivity of the rig R, the total amplification from i to k through j is the total amplification over all paths from i to j times the total amplification over all paths from j to k. Since every path from i to k must run through the j th output of α (which is the j th input of β) for some j, the amplification that i contributes to k is
\(\sum_{j \in \underline{n}} \alpha(i, j) * \beta(j, k)\)
This is exactly the formula for matrix multiplication, which is composition S(α) ; S(β) in the prop Mat(R); see Definition 5.50. So α ; β obeys the proposition when α and β do.
Proving (ii) is more straightforward. The monoidal product α + β of signal flow graphs looks like this:

No new paths are created; the only change is to reindex the inputs and outputs. In particular, the i th input of α is the i th input of α + β, the j th output of α is the j th output of α + β, the i th input of β is the (m + i) th input of α + β, and the j th output of β is the (n + j) th output of α + β. This means that the matrix with (i, j)th entry describing the amplification of the i th input that contributes to the j th output is S(α) + S(β) = S(α + β), as in Definition 5.50. This proves the proposition. \(\square\)
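The inductive proof above is also a recipe for computing S. The sketch below (all names are ours) carries dimensions explicitly, so that the 1 × 0 and 0 × 1 empty matrices for discard and zero stay distinct, and then interprets a small hypothetical two-input graph (not the one from Eq. (5.1)).

```python
# A matrix is a triple (m, n, rows): m inputs, n outputs, entries over N.
# Carrying m and n explicitly keeps e.g. 1x0 and 0x1 matrices distinct.
def amp(a):  return (1, 1, [[a]])    # amplification: 1 -> 1
COPY    = (1, 2, [[1, 1]])           # copy:    1 -> 2
DISCARD = (1, 0, [[]])               # discard: 1 -> 0
ADD     = (2, 1, [[1], [1]])         # add:     2 -> 1
ZERO    = (0, 1, [])                 # zero:    0 -> 1
ID1     = (1, 1, [[1]])              # identity wire
SWAP    = (2, 2, [[0, 1], [1, 0]])   # symmetry

def compose(M, N):
    """S(f ; g) = S(f) ; S(g): matrix multiplication, as in Eq. (5.48)."""
    (m, n, A), (n2, p, B) = M, N
    assert n == n2, "inner dimensions must agree"
    return (m, p, [[sum(A[i][k] * B[k][j] for k in range(n))
                    for j in range(p)] for i in range(m)])

def monoidal(M, N):
    """S(f + g) = S(f) + S(g): the direct sum of matrices."""
    (m, n, A), (p, q, B) = M, N
    return (m + p, n + q,
            [row + [0] * q for row in A] + [[0] * n + row for row in B])

# Hypothetical graph: copy input x into two branches, amplify them by 5
# and 2, amplify input y by 7, then add the second branch to 7y.
g = compose(monoidal(COPY, amp(7)),
            monoidal(amp(5), compose(monoidal(amp(2), ID1), ADD)))
assert g == (2, 2, [[5, 2], [0, 7]])   # a = 5x, b = 2x + 7y
```

Each generator, composition, and monoidal product mirrors the corresponding clause of the proof, so the final assertion is the proposition checked on this one example.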
1. What matrix does the signal flow graph

represent?
2. What about the signal flow graph

3. Are they equal? ♦
The idea of functorial semantics
Let’s pause for a moment to reflect on what we have just learned. First, signal flow diagrams are the morphisms in a prop. This means we have two special operations we can do to form new signal flow diagrams from old, namely composition (combining in series) and monoidal product (combining in parallel). We might think of this as specifying a ‘grammar’ or ‘syntax’ for signal flow diagrams.
As a language, signal flow graphs have not only syntax but also semantics: each signal flow diagram can be interpreted as a matrix. Moreover, matrices have the same grammatical structure: they form a prop, and we can construct new matrices from old using composition and monoidal product. In Theorem 5.53 we completed this picture by showing that semantic interpretation is a prop functor between the prop of signal flow graphs and the prop of matrices. Thus we say that matrices give functorial semantics for signal flow diagrams.
Functorial semantics is a key manifestation of compositionality. It says that the matrix meaning S(g) for a big signal flow graph g can be computed by:
- splitting g up into little pieces,
- computing the very simple matrices for each piece, and
- using matrix multiplication and direct sum to put the pieces back together to obtain the desired meaning, S(g).
This functoriality is useful in practice, for example in speeding up computation of the semantics of signal flow graphs: for large signal flow graphs, composing matrices is much faster than tracing paths.

