# 2.6: Review Problems

1. Write down examples of augmented matrices corresponding to each of the five types of solution sets for systems of equations with three unknowns.

2. Invent a simple linear system that has multiple solutions. Use the standard approach for solving linear systems and a non-standard approach to obtain different descriptions of the solution set. Is the solution set different with different approaches?
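To see what this problem is getting at, here is a small sketch (the equation $$x + y = 2$$ and the functions `param_a`, `param_b` are illustrative choices of our own, not part of the problem): two different solution procedures can produce different-looking parameterizations of one and the same solution set.

```python
import numpy as np

# Illustrative example: the single equation x + y = 2 has infinitely many
# solutions.  Two approaches give different-looking descriptions of the set.
def param_a(t):
    # treat x = t as the free variable and solve for y
    return np.array([t, 2 - t])

def param_b(s):
    # treat y = s as the free variable and solve for x
    return np.array([2 - s, s])

# Every point from either description satisfies the equation ...
for t in np.linspace(-3, 3, 7):
    assert abs(param_a(t).sum() - 2) < 1e-12
    assert abs(param_b(t).sum() - 2) < 1e-12

# ... and each description reaches every point of the other, since
# param_a(t) = param_b(2 - t); the two sets coincide.
assert np.allclose(param_a(0.5), param_b(1.5))
```

The descriptions differ, but the set of points they trace out is identical.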

3. Let $$M = \begin{pmatrix} a_{1}^{1} & a_{2}^{1} & \cdots & a_{k}^{1}\\ a_{1}^{2} & a_{2}^{2} & \cdots & a_{k}^{2} \\ \vdots & \vdots & ~ & \vdots \\ a_{1}^{r} & a_{2}^{r} & \cdots & a_{k}^{r} \end{pmatrix}$$

Note: $$x^{2}$$ does not denote the square of $$x$$. Instead $$x^{1}, x^{2}, x^{3}$$, etc., denote different variables; the superscript is an index. Although confusing at first, this notation was invented by Albert Einstein, who noticed that quantities like $$a_{1}^{2}x^{1}+a_{2}^{2}x^{2}+\cdots+a_{k}^{2}x^{k} =: \sum_{j=1}^{k}a_{j}^{2}x^{j}$$ can be written unambiguously as $$a_{j}^{2}x^{j}$$. This is called Einstein summation notation. The most important thing to remember is that the index $$j$$ is a dummy variable, so that $$a_{j}^{2}x^{j}\equiv a_{i}^{2}x^{i}$$; this is called "relabeling dummy indices". When dealing with products of sums, you must remember to introduce a new dummy index for each sum; i.e., $$a_{i}x^{i}b_{i}y^{i} = \sum_{i}a_{i}x^{i}b_{i}y^{i}$$ does not equal $$a_{i}x^{i}b_{j}y^{j} = \sum_{i}a_{i}x^{i}\sum_{j}b_{j}y^{j}$$. Use Einstein summation notation to propose a rule for $$MX$$ so that $$MX = 0$$ is equivalent to the linear system

$$\begin{matrix}a_{1}^{1}x^{1} + a_{2}^{1}x^{2} + \cdots + a_{k}^{1}x^{k} = 0\\ a_{1}^{2}x^{1} + a_{2}^{2}x^{2} + \cdots + a_{k}^{2}x^{k} = 0\\ \vdots \\ a_{1}^{r}x^{1} + a_{2}^{r}x^{2} + \cdots + a_{k}^{r}x^{k} = 0\end{matrix}$$

Show that your rule for multiplying a matrix by a vector obeys the linearity property.
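One candidate rule (a sketch, not the unique answer the problem asks you to derive) is $$(MX)^{i} = a_{j}^{i}x^{j}$$, with the repeated dummy index $$j$$ summed over. The helper `mat_vec` below is a hypothetical name; it writes out that sum explicitly and then checks the linearity property numerically on random data.

```python
import numpy as np

# Sketch of the proposed rule (MX)^i = a^i_j x^j, with the dummy index j
# written as an explicit sum.
def mat_vec(M, x):
    r, k = M.shape
    out = np.zeros(r)
    for i in range(r):
        for j in range(k):          # Einstein summation over the dummy j
            out[i] += M[i, j] * x[j]
    return out

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 4))
x, y = rng.standard_normal(4), rng.standard_normal(4)
c = 2.5

# Linearity check: M(c x + y) = c (M x) + (M y)
lhs = mat_vec(M, c * x + y)
rhs = c * mat_vec(M, x) + mat_vec(M, y)
assert np.allclose(lhs, rhs)
```

A numerical check like this is not a proof, but it is a quick sanity test that the rule you propose behaves linearly.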

4. The $$\textit{standard basis vector}$$ $$e_{i}$$ is a column vector with a one in the $$i$$th row and zeroes everywhere else. Using the rule for multiplying a matrix by a vector from problem 3, find a simple rule for the product $$Me_{i}$$, where $$M$$ is the general matrix defined there.
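As a quick numerical illustration of what to look for (the matrix and the helper `e` below are example choices of our own; note Python indexes from 0 rather than 1):

```python
import numpy as np

# Multiplying M by the standard basis vector e_i singles out one column of M.
M = np.array([[1., 2., 3.],
              [4., 5., 6.]])

def e(i, k):
    # standard basis vector of length k with a one in position i (0-based)
    v = np.zeros(k)
    v[i] = 1.0
    return v

for i in range(3):
    # M e_i equals the i-th column of M
    assert np.allclose(M @ e(i, 3), M[:, i])
```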

5. If $$A$$ is a non-linear operator, can the solutions to $$Ax = b$$ still be written as “general solution=particular solution + homogeneous solutions”? Provide examples.
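Before answering, it may help to test the recipe on a concrete nonlinear operator. The example below (our own choice, not prescribed by the problem) takes $$A(x) = x^{2}$$ on the real numbers and compares the actual solution set of $$A(x)=4$$ with what "particular + homogeneous" would predict.

```python
# Illustrative nonlinear operator A(x) = x^2 on the reals.
def A(x):
    return x ** 2

candidates = [-2, -1, 0, 1, 2]

# The homogeneous problem A(x) = 0 has only the solution x = 0 ...
homogeneous = [x for x in candidates if A(x) == 0]
assert homogeneous == [0]

# ... yet A(x) = 4 has two solutions, -2 and 2.  The recipe
# "particular solution + homogeneous solutions" would predict exactly one
# solution here (e.g. 2 + 0), so it fails for this nonlinear A.
solutions = [x for x in candidates if A(x) == 4]
assert solutions == [-2, 2]
```

This suggests the decomposition relies on linearity; the problem asks you to explore when, if ever, it survives without it.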

6. Find a system of equations whose solution set is the walls of a $$1×1×1$$ cube. (Hint: You may need to restrict the ranges of the variables; could your equations be linear?)