
6.1: Complex Numbers, Vectors and Matrices


Complex Numbers

A complex number is simply a pair of real numbers. To stress, however, that the arithmetic of such pairs differs from ordinary real arithmetic, we separate the two real pieces by the symbol $$i$$. More precisely, each complex number, $$z$$, may be uniquely expressed by the combination $$x+iy$$, where $$x$$ and $$y$$ are real and $$i$$ denotes $$\sqrt{-1}$$. We call $$x$$ the real part and $$y$$ the imaginary part of $$z$$. We now summarize the main rules of complex arithmetic.

Definition: Complex Addition

If $$z_{1} = x_{1}+iy_{1}$$ and $$z_{2} = x_{2}+iy_{2}$$ then

$z_{1}+z_{2} \equiv x_{1}+x_{2}+i(y_{1}+y_{2}) \nonumber$

Definition: Complex Multiplication

$z_{1}z_{2} \equiv (x_{1}+iy_{1})(x_{2}+iy_{2}) = x_{1}x_{2}-y_{1}y_{2}+i(x_{1}y_{2}+x_{2}y_{1}) \nonumber$

Definition: Complex Conjugation

$\overline{z_{1}} \equiv x_{1}-iy_{1} \nonumber$

Definition: Complex Division

$\frac{z_{1}}{z_{2}} \equiv \frac{z_{1}}{z_{2}} \frac{\overline{z_{2}}}{\overline{z_{2}}} = \frac{x_{1}x_{2}+y_{1}y_{2}+i(x_{2}y_{1}-x_{1}y_{2})}{x_{2}^{2}+y_{2}^{2}} \nonumber$

Definition: Magnitude of a Complex Number

$|z_{1}| \equiv \sqrt{z_{1} \overline{z_{1}}} = \sqrt{x_{1}^{2}+y_{1}^{2}} \nonumber$
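These rules can be checked numerically. The following minimal sketch uses Python's built-in complex type; the particular values of `z1` and `z2` are illustrative.

```python
# Verify the rules of complex arithmetic with Python's built-in complex type.
import math

z1 = complex(3, 4)   # x1 = 3, y1 = 4
z2 = complex(1, -2)  # x2 = 1, y2 = -2

# Addition: real and imaginary parts add separately.
assert z1 + z2 == complex(3 + 1, 4 + (-2))

# Multiplication: (x1*x2 - y1*y2) + i(x1*y2 + x2*y1)
assert z1 * z2 == complex(3 * 1 - 4 * (-2), 3 * (-2) + 1 * 4)

# Conjugation flips the sign of the imaginary part.
assert z1.conjugate() == complex(3, -4)

# Division: multiply numerator and denominator by the conjugate of z2;
# note z2 * conj(z2) is real, namely x2^2 + y2^2.
quotient = (z1 * z2.conjugate()) / (z2 * z2.conjugate()).real
assert abs(z1 / z2 - quotient) < 1e-12

# Magnitude: sqrt(x^2 + y^2)
assert abs(z1) == math.sqrt(3 ** 2 + 4 ** 2)  # = 5.0
```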

Polar Representation

In addition to the Cartesian representation $$z = x+iy$$ one also has the polar form

$z = |z|(\cos(\theta)+i \sin(\theta)) \nonumber$

where $$\theta = \arctan(y/x)$$.

This form is especially convenient with regard to multiplication. More precisely,

\begin{align*} z_{1}z_{2} &= |z_{1}||z_{2}|(\cos(\theta_{1})\cos(\theta_{2})-\sin(\theta_{1})\sin(\theta_{2})+i(\cos(\theta_{1}) \sin(\theta_{2})+\sin(\theta_{1}) \cos(\theta_{2}))) \\[4pt] &=|z_{1}||z_{2}|(\cos(\theta_{1}+\theta_{2})+i \sin(\theta_{1}+\theta_{2})) \end{align*}

As a result:

$z^{n} = |z|^{n}(\cos(n \theta)+i \sin(n \theta)) \nonumber$
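The product and power rules can be sketched numerically with Python's `cmath` module; the values of `z1`, `z2`, and `n` below are illustrative.

```python
# Check the polar product rule (magnitudes multiply, angles add)
# and the power rule z^n = |z|^n (cos(n*theta) + i sin(n*theta)).
import cmath

z1 = complex(1, 1)
z2 = complex(0, 2)

r1, t1 = cmath.polar(z1)  # |z1| and theta1
r2, t2 = cmath.polar(z2)  # |z2| and theta2

# Product rule: reconstruct z1*z2 from the polar data.
prod = cmath.rect(r1 * r2, t1 + t2)
assert abs(prod - z1 * z2) < 1e-12

# Power rule: reconstruct z1**5 from |z1| and theta1.
n = 5
power = cmath.rect(r1 ** n, n * t1)
assert abs(power - z1 ** n) < 1e-12
```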

Complex Vectors and Matrices

A complex vector (matrix) is simply a vector (matrix) of complex numbers. Vector and matrix addition proceed, as in the real case, from elementwise addition. The dot or inner product of two complex vectors requires, however, a little modification. This is evident when we try to use the old notion to define the length of a complex vector. To wit, note that if:

$z = \begin{pmatrix} {1+i}\\ {1-i} \end{pmatrix} \nonumber$

then

$z^{T} z = (1+i)^2+(1-i)^2 = 1+2i-1+1-2i-1 = 0 \nonumber$

Now length should measure the distance from a point to the origin and should only be zero for the zero vector. The fix, as you have probably guessed, is to sum the squares of the magnitudes of the components of $$z$$. This is accomplished by simply conjugating one of the vectors. Namely, we define the length of a complex vector via:

$\|z\| = \sqrt{\overline{z}^{T} z} \nonumber$

In the example above this produces

$\sqrt{(|1+i|)^2+(|1-i|)^2} = \sqrt{4} = 2 \nonumber$

As each real number is the conjugate of itself, this new definition subsumes its real counterpart.
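The example above can be reproduced numerically; the sketch below assumes NumPy and shows that $$z^{T} z$$ vanishes for this nonzero vector while $$\overline{z}^{T} z$$ recovers the true squared length.

```python
# The naive inner product z^T z fails as a length, but conj(z)^T z works.
import numpy as np

z = np.array([1 + 1j, 1 - 1j])

# Naive "real" inner product: zero for a nonzero vector.
assert z @ z == 0

# Correct length: conjugate one factor first.
length = np.sqrt((np.conj(z) @ z).real)
assert length == 2.0

# numpy's vector norm uses the same conjugated definition.
assert abs(np.linalg.norm(z) - 2.0) < 1e-12
```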

The notion of magnitude also gives us a way to define limits and hence will permit us to introduce complex calculus. We say that the sequence of complex numbers $$\{z_{n} : n = 1, 2, \ldots\}$$ converges to the complex number $$z_{0}$$ and write

$z_{n} \rightarrow z_{0} \nonumber$

or

$z_{0} = \lim_{n \rightarrow \infty} z_{n} \nonumber$

when, presented with any $$\epsilon > 0$$ one can produce an integer $$N$$ for which $$|z_{n}-z_{0}| < \epsilon$$ when $$n \ge N$$. As an example, we note that $$(\frac{i}{2})^{n} \rightarrow 0$$.
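As a sketch of the definition, the loop below finds, for an illustrative $$\epsilon$$, the first $$n$$ at which $$|(i/2)^{n}| < \epsilon$$; since $$|(i/2)^{n}| = 2^{-n}$$ decreases monotonically, that $$n$$ serves as the required $$N$$.

```python
# For eps = 1e-6, find N with |(i/2)^n| < eps for all n >= N.
eps = 1e-6
z = 0.5j
n, zn = 0, 1 + 0j  # zn holds (i/2)^n

while abs(zn) >= eps:
    zn *= z
    n += 1

# |(i/2)^n| = 2^(-n), and 2^(-20) ~ 9.5e-7 is the first value below 1e-6.
assert abs(zn) < eps
assert n == 20
```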

Example $$\PageIndex{1}$$

As an example both of a complex matrix and some of the rules of complex arithmetic, let us examine the following matrix:

$F = \begin{pmatrix} {1}&{1}&{1}&{1}\\ {1}&{i}&{-1}&{-i}\\ {1}&{-1}&{1}&{-1}\\ {1}&{-i}&{-1}&{i} \end{pmatrix} \nonumber$

Let us attempt to find $$F \overline{F}$$. One option is simply to multiply the two matrices by brute force, but this particular matrix has some remarkable qualities that make the job significantly easier. Each off-diagonal entry of the product is a sum of fourth roots of unity that cancels (for instance, $$1+i-1-i = 0$$ or $$1-1+1-1 = 0$$), while each diagonal entry is a sum of four ones. Hence, we quickly arrive at the matrix

\begin{align*} F \overline{F} &= \begin{pmatrix} {4}&{0}&{0}&{0}\\ {0}&{4}&{0}&{0}\\ {0}&{0}&{4}&{0}\\ {0}&{0}&{0}&{4} \end{pmatrix} \\[4pt] &= 4I \end{align*}

This final observation, that this matrix multiplied by its conjugate yields a constant times the identity matrix, is indeed remarkable. This particular matrix is an example of a Fourier matrix, and it enjoys a number of interesting properties. The property outlined above generalizes to any $$F_{n}$$, where $$F_{n}$$ denotes the Fourier matrix with $$n$$ rows and columns:

$F_{n} \overline{F}_{n} = nI \nonumber$
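This general property can be checked numerically. The sketch below builds $$F_{n}$$ with entries $$w^{jk}$$, where $$w = e^{2\pi i/n}$$ (a common convention; the sign of the exponent varies by author), and verifies $$F_{n} \overline{F}_{n} = nI$$ with NumPy.

```python
# Build the n x n Fourier matrix with entries w^(jk), w = exp(2*pi*i/n),
# and check that F_n times its conjugate equals n times the identity.
import numpy as np

def fourier_matrix(n):
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(2j * np.pi * j * k / n)

n = 4
F = fourier_matrix(n)

# For n = 4, w = i, so row j holds the powers i^(jk): this reproduces
# the matrix in the example above (entries 1, i, -1, -i).
assert np.allclose(F[1], [1, 1j, -1, -1j])

# The generalized property: F_n conj(F_n) = n I.
assert np.allclose(F @ np.conj(F), n * np.eye(n))
```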