Search
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Understanding_Linear_Algebra_(Austin)/05%3A_Linear_algebra_and_computing
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Understanding_Linear_Algebra_(Austin)/04%3A_Eigenvalues_and_eigenvectors
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Understanding_Linear_Algebra_(Austin)
  This book arises from my belief that linear algebra, as presented in a traditional undergraduate curriculum, has for too long lived in the shadow of calculus. Many mathematics programs currently require their students to complete at least three semesters of calculus, but only one semester of linear algebra, which often has two semesters of calculus as a prerequisite.
- https://math.libretexts.org/Workbench/Understanding_Linear_Algebra_2e_(Austin)/01%3A_Systems_of_equations/1.03%3A_Computation_with_Sage
  This is a very rough measure of the effort required to find the reduced row echelon form; a more careful accounting shows that the number of arithmetic operations is roughly \(\frac{2}{3}n^3\). As we have seen, some matrices require more effort than others, but the upshot of this observation is that the effort is proportional to \(n^3\). We can think of this in the following way: if the size of the matrix grows by a factor of 10, then the effort required grows by a factor of \(10^3 = 1000\).
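  As a quick check of that arithmetic, the count \(\frac{2}{3}n^3\) can be evaluated for a few sizes in a Sage session; a minimal sketch, with the sample sizes chosen only for illustration:

  ```
  # Each tenfold increase in n multiplies the estimated work by 10^3 = 1000.
  for n in [10, 100, 1000]:
      print(n, (2/3) * n^3)
  ```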
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Understanding_Linear_Algebra_(Austin)/02%3A_Vectors_matrices_and_linear_combinations
  We began our study of linear systems in Chapter 1 where we described linear systems in terms of augmented matrices, such as
  \[\left[\begin{array}{rrr|r} 1 & 2 & -1 & 3 \\ -3 & 3 & -1 & 2 \\ 2 & 3 & 2 & -1 \end{array}\right]\]
  In this chapter, we will uncover geometric information in a matrix like this, which will lead to an intuitive understanding of the insights we previously gained into the solutions of linear systems.
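  This augmented matrix can be entered and row reduced in Sage; a minimal sketch using the constructor and `rref` discussed in the Computation with Sage excerpts below:

  ```
  # Enter the matrix by giving the ring, row count, column count, and
  # entries listed by row, then compute its reduced row echelon form.
  A = matrix(QQ, 3, 4, [ 1, 2, -1,  3,
                        -3, 3, -1,  2,
                         2, 3,  2, -1])
  print(A.rref())
  ```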
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Understanding_Linear_Algebra_(Austin)/02%3A_Vectors_matrices_and_linear_combinations/2.01%3A_Vectors_and_linear_combinations
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Understanding_Linear_Algebra_(Austin)/06%3A_Orthogonality_and_Least_Squares/6.03%3A_Orthogonal_bases_and_projections
  Remember that an invertible matrix must be a square matrix, and the matrix \(Q\) will only be square if \(n=m\). In this case, there are \(m\) vectors in the orthonormal set, so the subspace \(W\) spanned by the vectors \(\mathbf u_1,\mathbf u_2,\ldots,\mathbf u_m\) is \(\mathbb R^m\). If \(\mathbf b\) is a vector in \(\mathbb R^m\), then \(\widehat{\mathbf b}=QQ^T\mathbf b\) is the orthogonal projection of \(\mathbf b\) onto \(\mathbb R^m\). In other words, \(QQ^T\mathbf b=\mathbf b\).
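  A small sketch of that identity in Sage; the orthonormal vectors are illustrative, chosen with rational entries so the check is exact:

  ```
  # Two orthonormal vectors in R^2, assembled as the columns of Q.
  u1 = vector(QQ, [3/5, 4/5])
  u2 = vector(QQ, [-4/5, 3/5])
  Q = column_matrix([u1, u2])
  print(Q * Q.transpose() == identity_matrix(QQ, 2))   # True
  b = vector(QQ, [1, 2])
  print(Q * Q.transpose() * b == b)   # True: projecting onto R^2 returns b
  ```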
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Understanding_Linear_Algebra_(Austin)/06%3A_Orthogonality_and_Least_Squares/6.02%3A_Orthogonal_complements_and_the_matrix_tranpose
  This section introduces the notion of an orthogonal complement, the set of vectors each of which is orthogonal to a prescribed subspace. We'll also find a way to describe dot products using matrix products, which allows us to study orthogonality using many of the tools for understanding linear systems that we developed earlier.
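  The description of dot products via matrix products amounts to \(\mathbf u\cdot\mathbf v=\mathbf u^T\mathbf v\); a minimal sketch with illustrative vectors:

  ```
  # The dot product u . v agrees with the 1x1 matrix product u^T v.
  u = vector(QQ, [1, 2, -1])
  v = vector(QQ, [3, 0, 2])
  print(u.dot_product(v))                            # 1
  print((matrix(u) * matrix(v).transpose())[0, 0])   # 1, the same value
  ```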
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Understanding_Linear_Algebra_(Austin)/07%3A_The_Spectral_Theorem_and_singular_value_decompositions/7.01%3A_Symmetric_matrices_and_variance
  Notice that the matrix \(A\) has eigenvectors \(\mathbf v_1\) and \(\mathbf v_2\) that not only form a basis for \(\mathbb R^2\) but, in fact, form an orthogonal basis for \(\mathbb R^2\). Given the prominent role played by orthogonal bases in the last chapter, we would like to understand what conditions on a matrix enable us to form an orthogonal basis of eigenvectors.
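  A sketch of this phenomenon in Sage with a small symmetric matrix (the matrix is illustrative, not the one from the excerpt):

  ```
  # For a symmetric matrix, eigenvectors belonging to distinct
  # eigenvalues are orthogonal.
  A = matrix(QQ, 2, 2, [1, 2,
                        2, 1])
  pairs = A.eigenvectors_right()   # [(eigenvalue, [eigenvectors], multiplicity), ...]
  v1 = pairs[0][1][0]
  v2 = pairs[1][1][0]
  print(v1.dot_product(v2))        # 0
  ```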
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Understanding_Linear_Algebra_(Austin)/01%3A_Systems_of_equations/1.03%3A_Computation_with_Sage
  Notice that there are three separate things (we call them arguments) inside the parentheses: the number of rows, the number of columns, and the entries of the matrix listed by row inside square brackets. We can think of this in the following way: if the size of the matrix grows by a factor of 10, then the effort required grows by a factor of \(10^3 = 1000\).
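  In a Sage cell, the three-argument constructor looks like this (the entries are illustrative):

  ```
  # matrix(number of rows, number of columns, [entries listed by row])
  A = matrix(2, 3, [1, 2, -1,
                    3, 0, 2])
  print(A)
  ```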
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Understanding_Linear_Algebra_(Austin)/05%3A_Linear_algebra_and_computing/5.01%3A_Gaussian_elimination_revisited
  In this section, we revisit Gaussian elimination and explore some problems with implementing it in the straightforward way that we described back in Section 1.2. In particular, we will see how the fact that computers only approximate arithmetic operations can lead us to find solutions that are far from the actual solutions. We will also explore how much work is required to implement Gaussian elimination and devise a more efficient means of implementing it.
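  A sketch of the round-off problem described here: eliminating below a tiny pivot without row interchanges, once in exact rational arithmetic (QQ) and once in double precision (RDF). The 2x2 system and the helper `naive_solve` are illustrative, not taken from the book:

  ```
  def naive_solve(R):
      # Solve [[a, 1], [1, 1]] x = (1, 2) over the ring R by elimination
      # without pivoting, where a is a tiny pivot.
      a = R(10)^(-17)
      m = R(1) / a              # multiplier that clears the (2,1) entry
      d = R(1) - m * R(1)       # updated (2,2) entry
      f = R(2) - m * R(1)       # updated right-hand side
      y = f / d
      x = (R(1) - y) / a
      return x, y

  print(naive_solve(QQ))    # exact arithmetic: x is very close to 1
  print(naive_solve(RDF))   # double precision: x comes out as 0.0, far off
  ```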