Search
- https://math.libretexts.org/Courses/De_Anza_College/Linear_Algebra%3A_A_First_Course/06%3A_Spectral_Theory/6.08%3A_Singular_Value_Decomposition/6.8E%3A_Exercises_for_Section_6.8
  This page contains exercises on finding the Singular Value Decomposition (SVD) of various matrices, outlining the computation of the matrices \(U\), \(\Sigma\), and \(V\). Two detailed examples are provided, along with a fifth exercise concerning the determinant of an \(n \times n\) matrix and its singular values. (A brief worked illustration of the factorization appears after this list.)
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Interactive_Linear_Algebra_(Margalit_and_Rabinoff)/02%3A_Systems_of_Linear_Equations-_Geometry/2.03%3A_Matrix_Equations
  This page explores the matrix equation \(Ax = b\), defining key concepts like consistency conditions, the relationship between matrix and vector forms, and the significance of spans. It explains that for \(Ax = b\) to have a solution, the vector \(b\) must lie in the span of \(A\)'s columns, and that the system is consistent for every \(b\) exactly when \(A\) has a pivot in every row.
- https://math.libretexts.org/Courses/De_Anza_College/Linear_Algebra%3A_A_First_Course/01%3A_Systems_of_Equations/1.02%3A_Gaussian_Elimination/1.2E%3A_Exercises_for_Section_1.2
  This page contains exercises on augmented matrices and their impact on system solutions, focusing on consistency, uniqueness, and the influence of parameters. It details the process of row reducing matrices to row-echelon and reduced row-echelon form and reading off the solutions of the corresponding linear systems.
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Interactive_Linear_Algebra_(Margalit_and_Rabinoff)/02%3A_Systems_of_Linear_Equations-_Geometry/2.08%3A_The_Rank_Theorem
  This page explains the rank theorem, which connects a matrix's column space with its null space: the rank (dimension of the column space) plus the nullity (dimension of the null space) equals the number of columns. It includes examples showing how different ranks and nullities shape the solution sets of linear systems, emphasizing that the theorem relates the freedom in the solutions to properties of the system without direct calculation. (A short numerical illustration of the theorem appears after this list.)
- https://math.libretexts.org/Courses/De_Anza_College/Linear_Algebra%3A_A_First_Course/05%3A_Linear_Transformations/5.02%3A_The_Matrix_of_a_Linear_Transformation_I/5.2E%3A_Exercises_for_Section_5.2
  This page discusses linear transformations \(T\) on \(\mathbb{R}^n\), with examples illustrating how they act on vector components and the matrices \(A\) associated with them. It covers transformations of \(\mathbb{R}^2\), detailing specific cases like rotation and scaling, and describes how to recover a transformation matrix from the images of vectors when the relevant matrix is invertible.
- https://math.libretexts.org/Courses/De_Anza_College/Linear_Algebra%3A_A_First_Course/04%3A_R/4.02%3A_Dot_and_Cross_Product
  There are two ways of multiplying vectors that are of great importance in applications. The first of these is the dot product. When we take the dot product of two vectors, the result is a scalar; for this reason, the dot product is also called the scalar product and sometimes the inner product.
- https://math.libretexts.org/Bookshelves/Linear_Algebra/Interactive_Linear_Algebra_(Margalit_and_Rabinoff)/05%3A_Eigenvalues_and_Eigenvectors/5.02%3A_The_Characteristic_Polynomial
  This page covers the determination of eigenvalues and eigenspaces of matrices, focusing on triangular matrices and the characteristic polynomial, defined as the determinant of \(A - \lambda I_n\). It explains how solving the characteristic polynomial yields the eigenvalues, provides examples, and introduces eigenspaces. It also discusses the trace of a matrix and gives recipes for computing the characteristic polynomial, especially for \(2 \times 2\) matrices. (A \(2 \times 2\) worked example appears after this list.)
- https://math.libretexts.org/Courses/De_Anza_College/Introductory_Differential_Equations/06%3A_Systems_of_ODEs/6.04%3A_Matrices_and_linear_systems
  This page covers the fundamentals of matrix arithmetic, including addition, multiplication, transposes, and inverses. It defines vectors and matrices and discusses the non-commutative nature of matrix multiplication. The text explains matrix inverses, determinants, and the relationships between them, including how to compute determinants by cofactor expansion.
- https://math.libretexts.org/Courses/De_Anza_College/Linear_Algebra%3A_A_First_Course/04%3A_R/4.05%3A_Linear_Independence/4.5.E%3A_Exercise_for_Section_4.5
  This page presents exercises on linear independence and dependence of sets of vectors. It emphasizes key ideas such as expressing dependent vectors as combinations of the others, identifying independent subsets that span the same space, and the fact that a homogeneous system has only the trivial solution exactly when the vectors involved are independent. It also covers properties of linear combinations, conditions for independence in matrix systems, and includes proofs supporting these concepts.
- https://math.libretexts.org/Courses/De_Anza_College/Linear_Algebra%3A_A_First_Course/07%3A_Vector_Spaces/7.09%3A_Isomorphisms/7.9E%3A_Exercises_for_Section_7.9
  This page discusses exercises on linear transformations, highlighting that a transformation is an isomorphism if and only if its matrix is invertible. It emphasizes the role of dimension in determining injectivity and surjectivity. The exercises involve analyzing specific transformations and their properties, as well as compositions of transformations.
- https://math.libretexts.org/Courses/De_Anza_College/Introductory_Differential_Equations/03%3A_Higher_order_linear_ODEs/3.03%3A_The_Method_of_Undetermined_Coefficients_II
  In this section, we use the Method of Undetermined Coefficients to find solutions of the constant coefficient equation \(ay'' + by' + cy = e^{\lambda x}\bigl(P(x)\cos\omega x + Q(x)\sin\omega x\bigr)\), where \(\lambda\) and \(\omega\) are real numbers, \(\omega\) is nonzero, and \(P\) and \(Q\) are polynomials.
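
To make the SVD entry above concrete, here is a minimal illustration added for orientation (it is not taken from the linked exercises): for a diagonal matrix the factorization \(A = U\Sigma V^T\) can be read off directly,
\[
A = \begin{pmatrix} 3 & 0 \\ 0 & -2 \end{pmatrix}
  = \underbrace{\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}}_{U}
    \underbrace{\begin{pmatrix} 3 & 0 \\ 0 & 2 \end{pmatrix}}_{\Sigma}
    \underbrace{\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}}_{V^T},
\qquad \sigma_1 = 3,\ \sigma_2 = 2.
\]
The singular values are the square roots of the eigenvalues of \(A^TA = \operatorname{diag}(9, 4)\), and \(|\det A| = 6 = \sigma_1 \sigma_2\), the kind of determinant/singular-value relationship the fifth exercise on that page asks about for \(n \times n\) matrices.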
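
For the rank theorem entry, a one-line numerical sketch (added here, not from the linked page): for an \(m \times n\) matrix \(A\),
\[
\operatorname{rank}(A) + \operatorname{nullity}(A) = n.
\]
So, for example, a \(3 \times 5\) matrix of rank \(2\) has nullity \(3\), meaning a consistent system \(Ax = b\) comes with three free parameters in its solution set.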
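
For the characteristic polynomial entry, the \(2 \times 2\) recipe mentioned there reduces to the trace and determinant; a small worked example (again added here for illustration):
\[
A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}, \qquad
\det(A - \lambda I_2) = \lambda^2 - \operatorname{tr}(A)\lambda + \det(A)
= \lambda^2 - 4\lambda + 3 = (\lambda - 1)(\lambda - 3),
\]
so the eigenvalues are \(\lambda = 1\) and \(\lambda = 3\).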