Search results
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Book%3A_Linear_Algebra_(Schilling_Nachtergaele_and_Lankham)/04%3A_Vector_spaces
  Now that important background has been developed, we are finally ready to begin the study of Linear Algebra by introducing vector spaces. To get a sense of how important vector spaces are, try flipping to a random page in these notes. There is very little chance that you will flip to a page that does not have at least one vector space on it. Isaiah Lankham, Mathematics Department at UC Davis. Both hardbound and softbound versions of this textbook are available online at WorldScientific.com.
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Book%3A_Linear_Algebra_(Schilling_Nachtergaele_and_Lankham)/09%3A_Inner_product_spaces/9.E%3A_Exercises_for_Chapter_9
  Let \( (e_1 , e_2 , e_3) \) be the canonical basis of \( \mathbb{R}^3 \), and define \[ f_1 = e_1 + e_2 + e_3, \qquad f_2 = e_2 + e_3, \qquad f_3 = e_3 . \] (a) Apply the Gram-Schmidt process to the basis \( (f_1 , f_2 , f_3) \). (b) What do you obtain if you instead apply the Gram-Schmidt process to the basis \( (f_3 , f_2 , f_1) \)?
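  Part (a) of this exercise can be sanity-checked numerically. A minimal classical Gram-Schmidt sketch in plain Python (the function names here are illustrative, not from the text):

  ```python
  from math import sqrt

  def dot(u, v):
      """Standard inner product on R^n."""
      return sum(a * b for a, b in zip(u, v))

  def gram_schmidt(vectors):
      """Classical Gram-Schmidt: returns an orthonormal list of vectors."""
      ortho = []
      for v in vectors:
          w = list(v)
          # Subtract the projection of v onto each earlier orthonormal vector.
          for e in ortho:
              c = dot(v, e)
              w = [wi - c * ei for wi, ei in zip(w, e)]
          norm = sqrt(dot(w, w))
          ortho.append([wi / norm for wi in w])
      return ortho

  # f_1 = e_1 + e_2 + e_3, f_2 = e_2 + e_3, f_3 = e_3, written in coordinates.
  f1, f2, f3 = (1, 1, 1), (0, 1, 1), (0, 0, 1)
  basis = gram_schmidt([f1, f2, f3])
  ```

  Running this reproduces the hand computation: the orthonormal basis \( \tfrac{1}{\sqrt{3}}(1,1,1), \ \tfrac{1}{\sqrt{6}}(-2,1,1), \ \tfrac{1}{\sqrt{2}}(0,-1,1) \). For part (b), feeding in the reversed order \((f_3, f_2, f_1)\) instead returns the canonical basis vectors \(e_3, e_2, e_1\).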
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Book%3A_Linear_Algebra_(Schilling_Nachtergaele_and_Lankham)/07%3A_Eigenvalues_and_Eigenvectors/7.E%3A_Exercises_for_Chapter_7
  Let \(V\) be a finite-dimensional vector space over \(\mathbb{F}\), and let \(S, T \in \mathcal{L}(V)\) be linear operators on \(V\). Suppose that \(T\) has \(\dim(V)\) distinct eigenvalues and that, given any eigenvector \(v \in V\) for \(T\) associated to some eigenvalue \(\lambda \in \mathbb{F}\), \(v\) is also an eigenvector for \(S\) associated to some (possibly distinct) eigenvalue \(\mu \in \mathbb{F}\). Prove that \(T \circ S = S \circ T\).
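  A sketch of the key step for this exercise, using the standard fact that \(\dim(V)\) distinct eigenvalues yield a basis of eigenvectors:

  ```latex
  % Since T has dim(V) = n distinct eigenvalues, V has a basis (v_1, ..., v_n)
  % of eigenvectors of T, say T v_i = \lambda_i v_i. By hypothesis each v_i is
  % also an eigenvector of S, say S v_i = \mu_i v_i. Then, on each basis vector,
  (T \circ S)(v_i) = T(\mu_i v_i) = \mu_i \lambda_i v_i
                   = \lambda_i \mu_i v_i = S(\lambda_i v_i) = (S \circ T)(v_i).
  % Two linear operators that agree on a basis are equal, so T S = S T.
  ```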
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Book%3A_Linear_Algebra_(Schilling_Nachtergaele_and_Lankham)/12%3A_Supplementary_notes_on_matrices_and_linear_systems/12.03%3A_Solving_linear_systems_by_factoring_the_coefficient_matrix
  Given an \(m \times n\) matrix \(A \in \mathbb{F}^{m \times n}\) and a non-zero vector \(b \in \mathbb{F}^m\), we call \(Ax = 0\) the homogeneous matrix equation associated to the inhomogeneous matrix equation \(Ax = b\). Then, according to Theorem A.3.12, the solution set \(U\) can be found by first finding the solution space \(N\) of the associated equation \(Ax = 0\) and then finding any so-called particular solution \(u \in \mathbb{F}^n\) to \(Ax = b\).
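  The decomposition "particular solution plus homogeneous solutions" can be illustrated on a toy system (the system below is invented for the example, not taken from the text):

  ```python
  # Toy system: one equation x + y = 2, i.e. A = [[1, 1]], b = [2].
  def matvec(A, x):
      """Matrix-vector product over lists."""
      return [sum(a * xi for a, xi in zip(row, x)) for row in A]

  A, b = [[1, 1]], [2]
  u = [2, 0]    # a particular solution of Ax = b
  n = [1, -1]   # spans the solution space N of the homogeneous equation Ax = 0

  # Every vector of the form u + t*n solves Ax = b, for any scalar t.
  for t in (-2, 0, 3.5):
      x = [ui + t * ni for ui, ni in zip(u, n)]
      assert matvec(A, x) == b
  ```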
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Book%3A_Linear_Algebra_(Schilling_Nachtergaele_and_Lankham)/04%3A_Vector_spaces/4.02%3A_Elementary_properties_of_vector_spaces
  Suppose \(w\) and \(w'\) are additive inverses of \(v\), so that \(v+w=0\) and \(v+w'=0\). Then \[ w = w+0 = w+(v+w') = (w+v)+w' = 0+w' = w'. \] Since the additive inverse of \(v\) is unique, as we have just shown, it will from now on be denoted by \(-v\). We also define \(w-v\) to mean \(w+(-v)\). We will, in fact, show in Proposition 4.2.5 below that \(-v=(-1)v\). Note that the \(0\) on the left-hand side in Proposition 4.2.3 is a scalar, whereas the \(0\) on the right-hand side is a vector.
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Book%3A_Linear_Algebra_(Schilling_Nachtergaele_and_Lankham)/02%3A_Introduction_to_Complex_Numbers/2.01%3A_De%EF%AC%81nition_of_Complex_Numbers
  In other words, we are defining a new collection of numbers \(z\) by taking every possible ordered pair \((x, y)\) of real numbers \(x, y \in \mathbb{R}\); \(x\) is called the real part of the ordered pair \((x,y)\). This definition is meant to imply that the set \(\mathbb{R}\) of real numbers should be identified with the subset \(\{ (x, 0) \mid x \in \mathbb{R} \} \subset \mathbb{C}\).
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Book%3A_Linear_Algebra_(Schilling_Nachtergaele_and_Lankham)/11%3A_The_Spectral_Theorem_for_normal_linear_maps/11.01%3A_Self-adjoint_or_hermitian_operators
  \begin{equation*} \begin{split} \lambda \norm{v}^2 &= \inner{\lambda v}{v} = \inner{Tv}{v} = \inner{v}{T^*v}\\ &= \inner{v}{Tv} = \inner{v}{\lambda v} = \overline{\lambda} \inner{v}{v} = \overline{\lambda} \norm{v}^2. \end{split} \end{equation*} The operator \(T\in \mathcal{L}(V)\) defined by \(T(v) = \begin{bmatrix} 2 & 1+i\\ 1-i & 3 \end{bmatrix} v\) is self-adjoint, and it can be checked (e.g., using the characteristic polynomial) that the eigenvalues of \(T\) are \(\lambda=1,4\).
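  The eigenvalue claim in this snippet can be verified directly from the characteristic polynomial, using Python's built-in complex numbers; a minimal sketch:

  ```python
  # The matrix of T from the example; note it equals its conjugate transpose,
  # which is what "self-adjoint" means for the matrix of T.
  A = [[2, 1 + 1j], [1 - 1j, 3]]

  def det2(m):
      """Determinant of a 2x2 matrix."""
      return m[0][0] * m[1][1] - m[0][1] * m[1][0]

  def char_poly(lam):
      """Evaluate det(A - lam*I), the characteristic polynomial at lam."""
      return det2([[A[0][0] - lam, A[0][1]],
                   [A[1][0], A[1][1] - lam]])

  # lambda = 1 and lambda = 4 are roots, hence the eigenvalues of T.
  assert char_poly(1) == 0
  assert char_poly(4) == 0
  ```

  Both eigenvalues are real, as the displayed computation \(\lambda \norm{v}^2 = \overline{\lambda} \norm{v}^2\) predicts for any self-adjoint operator.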
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Book%3A_Linear_Algebra_(Schilling_Nachtergaele_and_Lankham)/06%3A_Linear_Maps/6.06%3A_The_matrix_of_a_linear_map
  Since \((w_1,\ldots,w_m)\) is a basis of \(W\), there exist unique scalars \(a_{ij}\in\mathbb{F}\) such that \begin{equation}\label{eq:Tv} Tv_j = a_{1j} w_1 + \cdots + a_{mj} w_m \quad \text{for \(1\le j\le n\).} \tag{6.6.1} \end{equation} We can arrange these scalars in an \(m\times n\) matrix as follows: \begin{equation*} M(T) = \begin{bmatrix} a_{11} & \ldots & a_{1n}\\ \vdots && \vdots\\ a_{m1} & \ldots & a_{mn} \end{bmatrix}. \end{equation*} Often, this is also written as \(A=(a_{ij})\)…
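  The recipe in equation (6.6.1), column \(j\) of \(M(T)\) holds the coordinates of \(Tv_j\), can be sketched on a toy map (the map \(T\) below is invented for illustration and uses the standard bases of \(\mathbb{R}^2\)):

  ```python
  # Hypothetical linear map T(x, y) = (x + 2y, 3x) on R^2, standard bases.
  def T(v):
      x, y = v
      return (x + 2 * y, 3 * x)

  basis = [(1, 0), (0, 1)]  # v_1, v_2 (also serving as w_1, w_2 here)

  # With the standard basis, the coordinates of T(v_j) in (w_1, w_2) are just
  # its components, so column j of M(T) is simply T(v_j).
  columns = [T(v) for v in basis]
  M = [[columns[j][i] for j in range(2)] for i in range(2)]

  assert M == [[1, 2], [3, 0]]
  ```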
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Book%3A_Linear_Algebra_(Schilling_Nachtergaele_and_Lankham)/04%3A_Vector_spaces/4.03%3A_Subspaces
  Let \(V\) be a vector space over \(\mathbb{F}\), and let \(U\) be a subset of \(V\). Then we call \(U\) a subspace of \(V\) if \(U\) is a vector space over \(\mathbb{F}\) under the same operations that make \(V\) into a vector space over \(\mathbb{F}\). Note that if we require \(U \subseteq V\) to be a nonempty subset of \(V\), then condition 1 of Lemma 4.3.2 already follows from condition 3 since \(0u = 0\) for \(u \in U\).
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Book%3A_Linear_Algebra_(Schilling_Nachtergaele_and_Lankham)/04%3A_Vector_spaces/4.01%3A_De%EF%AC%81nition_of_vector_spaces
  Scalar multiplication can similarly be described as a function \(\mathbb{F} \times V \to V\) that maps a scalar \(a\in \mathbb{F}\) and a vector \(v\in V\) to a new vector \(av \in V\). (More information on these kinds of functions, also known as binary operations, can be found in Appendix C.)
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Book%3A_Linear_Algebra_(Schilling_Nachtergaele_and_Lankham)/12%3A_Supplementary_notes_on_matrices_and_linear_systems
  As discussed in Chapter 1, there are many ways in which you might try to solve a system of linear equations involving a finite number of variables. In particular, any number of equations in any number of unknowns (as long as both are finite) can be encoded as a single matrix equation. Specifically, by exploiting the deep connection between matrices and so-called linear maps, one can completely determine all possible solutions to any linear system.
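  As a concrete illustration of the encoding (the system below is made up for the example), two equations in three unknowns become a single matrix equation \(Ax = b\):

  ```latex
  % The linear system
  x_1 + 2x_2 - x_3 = 1, \qquad 3x_1 + x_3 = 0
  % is encoded as the single matrix equation Ax = b:
  \begin{bmatrix} 1 & 2 & -1 \\ 3 & 0 & 1 \end{bmatrix}
  \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} =
  \begin{bmatrix} 1 \\ 0 \end{bmatrix}
  ```

  Each row of \(A\) holds the coefficients of one equation, with a zero entry wherever a variable is absent.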