# 9.4: Orthonormal bases

\( \newcommand{\inner}[2]{\left\langle #1 , #2 \right\rangle} \)

\( \newcommand{\norm}[1]{\left\lVert #1 \right\rVert} \)

We now define the notions of orthogonal basis and orthonormal basis for an inner product space. As we will see later, orthonormal bases have many special properties that allow us to simplify various calculations.

**Definition 9.4.1.** Let \(V \) be an inner product space with inner product \(\inner{\cdot}{\cdot}\). A list of nonzero vectors \((e_1,\ldots,e_m) \) in \(V\) is called **orthogonal** if

\[ \inner{e_i}{e_j} = 0, \quad \text{for all} ~1\le i\neq j \le m. \]

The list \((e_1,\ldots,e_m) \) is called **orthonormal** if

\[ \inner{e_i}{e_j} = \delta_{ij}, \quad \text{for all \(i,j=1,\ldots,m\),} \]

where \(\delta_{ij} \) is the Kronecker delta symbol, i.e., \(\delta_{ij} = 1 \) if \(i=j \) and \(\delta_{ij} = 0 \) otherwise.
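These two conditions can be checked mechanically. The following is a small sketch in plain Python, assuming the standard dot product on \(\mathbb{R}^n\) as the inner product (the definitions themselves apply to any inner product space); the function names `is_orthogonal` and `is_orthonormal` are illustrative, not from the text.

```python
# Check orthogonality / orthonormality of a list of vectors in R^n,
# using the standard dot product as the inner product (an assumption;
# the definitions hold in any inner product space).

def inner(u, v):
    """Standard dot product on R^n."""
    return sum(x * y for x, y in zip(u, v))

def is_orthogonal(vectors, tol=1e-12):
    """True if <e_i, e_j> = 0 for all i != j."""
    return all(abs(inner(vectors[i], vectors[j])) < tol
               for i in range(len(vectors))
               for j in range(len(vectors)) if i != j)

def is_orthonormal(vectors, tol=1e-12):
    """True if <e_i, e_j> = delta_ij (Kronecker delta)."""
    return all(abs(inner(vectors[i], vectors[j]) - (1.0 if i == j else 0.0)) < tol
               for i in range(len(vectors))
               for j in range(len(vectors)))

# (2, 0) and (0, 3) are orthogonal but not orthonormal,
# since they are not unit vectors.
assert is_orthogonal([(2, 0), (0, 3)])
assert not is_orthonormal([(2, 0), (0, 3)])
```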

**Proposition 9.4.2.** *Every orthogonal list of nonzero vectors in* \(V \) *is linearly independent.*

*Proof.* Let \((e_1,\ldots,e_m) \) be an orthogonal list of nonzero vectors in \(V\), and suppose that \(a_1,\ldots,a_m\in \mathbb{F} \) are such that

\[ a_1 e_1 + \cdots + a_m e_m =0. \]

Then

\[ 0 = \norm{a_1 e_1 + \cdots + a_m e_m}^2 = |a_1|^2 \norm{e_1}^2 + \cdots + |a_m|^2 \norm{e_m}^2, \]

where the cross terms \(\inner{a_i e_i}{a_j e_j} \) with \(i\neq j \) vanish by orthogonality.

Note that \(\norm{e_k} >0\), for all \(k=1,\ldots,m\), since every \(e_k \) is a nonzero vector. Also, \(|a_k|^2\ge 0\). Hence each term \(|a_k|^2 \norm{e_k}^2 \) must be zero, which forces \(a_1=\cdots=a_m=0\). Thus the list is linearly independent.

**Definition 9.4.3.** An **orthonormal basis** of a finite-dimensional inner product space \(V \) is a list of orthonormal vectors that is a basis for \(V\).

By Proposition 9.4.2, any orthonormal list of length \(\dim(V) \) is linearly independent and hence an orthonormal basis for \(V\). (For infinite-dimensional inner product spaces, a slightly different notion of orthonormal basis is used.)

**Example 9.4.4.** The canonical basis for \(\mathbb{F}^n \) is an orthonormal basis.

**Example 9.4.5.** The list \( \left((\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}), (\frac{1}{\sqrt{2}},-\frac{1}{\sqrt{2}}) \right) \) is an orthonormal basis for \(\mathbb{R}^2\).
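Example 9.4.5 can be verified by direct computation. Below is a short numerical check, again assuming the dot product on \(\mathbb{R}^2\) as the inner product:

```python
# Verify that the list from Example 9.4.5 is orthonormal in R^2
# with the standard dot product.
from math import sqrt

def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

e1 = (1 / sqrt(2),  1 / sqrt(2))
e2 = (1 / sqrt(2), -1 / sqrt(2))

assert abs(inner(e1, e1) - 1) < 1e-12  # e1 has unit length
assert abs(inner(e2, e2) - 1) < 1e-12  # e2 has unit length
assert abs(inner(e1, e2)) < 1e-12      # e1 and e2 are orthogonal
```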

The next theorem allows us to use inner products to find the coefficients of a vector \(v\in V \) in terms of an orthonormal basis. This result highlights how much easier it is to compute with an orthonormal basis.

**Theorem 9.4.6.** *Let* \((e_1,\ldots,e_n) \) *be an orthonormal basis for* \(V\). *Then, for all* \(v\in V\), *we have*

\[ v = \inner{v}{e_1} e_1 + \cdots + \inner{v}{e_n} e_n \]

*and* \(\norm{v}^2 = \sum_{k=1}^n | \inner{v}{e_k}|^2\).

*Proof.* Let \(v\in V\). Since \((e_1,\ldots, e_n) \) is a basis for \(V\), there exist unique scalars \(a_1,\ldots,a_n\in\mathbb{F} \) such that

\[ v = a_1 e_1 + \cdots + a_n e_n. \]

Taking the inner product of both sides with \(e_k \) and using orthonormality then yields \(\inner{v}{e_k} = a_k\). The formula for \(\norm{v}^2 \) now follows by expanding \(\norm{v}^2 = \inner{v}{v} \) and again using the orthonormality of \((e_1,\ldots,e_n)\).
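Theorem 9.4.6 can be illustrated numerically. The sketch below, assuming the dot product on \(\mathbb{R}^2\), uses the orthonormal basis from Example 9.4.5 and an arbitrary test vector \(v = (3,1)\): the coefficients of \(v\) are the inner products \(\inner{v}{e_k}\), and \(\norm{v}^2\) equals the sum of their squares.

```python
# Illustrate Theorem 9.4.6 for the orthonormal basis of Example 9.4.5,
# assuming the standard dot product on R^2 as the inner product.
from math import sqrt

def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

e1 = (1 / sqrt(2),  1 / sqrt(2))
e2 = (1 / sqrt(2), -1 / sqrt(2))
v  = (3.0, 1.0)  # arbitrary test vector (not from the text)

# Coefficients a_k = <v, e_k>.
a1, a2 = inner(v, e1), inner(v, e2)

# Reconstruct v = a1*e1 + a2*e2.
reconstructed = tuple(a1 * x + a2 * y for x, y in zip(e1, e2))
assert all(abs(r - c) < 1e-12 for r, c in zip(reconstructed, v))

# Norm identity: ||v||^2 = |<v, e1>|^2 + |<v, e2>|^2.
assert abs((a1 ** 2 + a2 ** 2) - inner(v, v)) < 1e-12
```

Note how the coefficients require no system of linear equations, only two inner products; this is the computational advantage the theorem highlights.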

## Contributors

- Isaiah Lankham, Mathematics Department at UC Davis
- Bruno Nachtergaele, Mathematics Department at UC Davis
- Anne Schilling, Mathematics Department at UC Davis

Both hardbound and softbound versions of this textbook are available online at WorldScientific.com.