Appendix E Answers and Solutions to Homeworks

Part I Orthogonality

Week 1 Norms

Section 1.1 Opening Remarks
Unit 1.1.1 Why norms?
Section 1.2 Vector Norms
Unit 1.2.1 Absolute value
Unit 1.2.2 What is a vector norm?
Unit 1.2.3 The vector 2-norm (Euclidean length)
Unit 1.2.4 The vector \(p\)-norms
Unit 1.2.5 Unit ball
Unit 1.2.6 Equivalence of vector norms
Section 1.3 Matrix Norms
Unit 1.3.2 What is a matrix norm?
Unit 1.3.3 The Frobenius norm
Unit 1.3.4 Induced matrix norms
Unit 1.3.5 The matrix 2-norm
Unit 1.3.6 Computing the matrix 1-norm and \(\infty\)-norm
Unit 1.3.7 Equivalence of matrix norms
Unit 1.3.8 Submultiplicative norms
Section 1.4 Condition Number of a Matrix
Unit 1.4.1 Conditioning of a linear system
Unit 1.4.2 Loss of digits of accuracy
Section 1.6 Wrap Up
Unit 1.6.1 Additional homework

Week 2 The Singular Value Decomposition

Section 2.1 Opening Remarks
Unit 2.1.1 Low rank approximation
Section 2.2 Orthogonal Vectors and Matrices
Unit 2.2.1 Orthogonal vectors
Unit 2.2.2 Component in the direction of a vector
Unit 2.2.3 Orthonormal vectors and matrices
Unit 2.2.4 Unitary matrices
Unit 2.2.5 Examples of unitary matrices
Subsubsection 2.2.5.1 Rotations
Subsubsection 2.2.5.2 Reflections
Unit 2.2.6 Change of orthonormal basis
Unit 2.2.7 Why we love unitary matrices
Section 2.3 The Singular Value Decomposition
Unit 2.3.1 The Singular Value Decomposition Theorem
Unit 2.3.4 The Reduced Singular Value Decomposition
Unit 2.3.5 SVD of nonsingular matrices
Unit 2.3.6 Best rank-k approximation
Section 2.5 Wrap Up
Unit 2.5.1 Additional homework
Homework 2.5.1.1.
Homework 2.5.1.3.

Week 3 The QR Decomposition

Section 3.1 Opening Remarks
Unit 3.1.1 Choosing the right basis
Section 3.2 Gram-Schmidt Orthogonalization
Unit 3.2.2 Gram-Schmidt and the QR factorization
Unit 3.2.3 Classical Gram-Schmidt algorithm
Unit 3.2.4 Modified Gram-Schmidt (MGS)
Unit 3.2.5 In practice, MGS is more accurate
Unit 3.2.6 Cost of Gram-Schmidt algorithms
Section 3.3 Householder QR Factorization
Unit 3.3.1 Using unitary matrices
Unit 3.3.2 Householder transformation
Unit 3.3.3 Practical computation of the Householder vector
Subsubsection 3.3.3.3 A routine for computing the Householder vector
Unit 3.3.4 Householder QR factorization algorithm
Unit 3.3.5 Forming \(Q\)
Unit 3.3.6 Applying \(Q^H\)
Unit 3.3.7 Orthogonality of resulting \(Q\)
Section 3.5 Wrap Up
Unit 3.5.1 Additional homework

Week 4 Linear Least Squares

Section 4.2 Solution via the Method of Normal Equations
Unit 4.2.1 The four fundamental spaces of a matrix
Unit 4.2.2 The Method of Normal Equations
Unit 4.2.4 Conditioning of the linear least squares problem
Unit 4.2.5 Why using the Method of Normal Equations could be bad
Section 4.3 Solution via the SVD
Unit 4.3.1 The SVD and the four fundamental spaces
Unit 4.3.3 Case 2: General case
Section 4.4 Solution via the QR factorization
Unit 4.4.1 \(A\) has linearly independent columns

Part II Solving Linear Systems

Week 5 The LU and Cholesky Factorizations

Section 5.1 Opening Remarks
Unit 5.1.1 Of Gaussian elimination and LU factorization
Section 5.2 From Gaussian elimination to LU factorization
Unit 5.2.1 Gaussian elimination
Unit 5.2.2 LU factorization: The right-looking algorithm
Unit 5.2.3 Existence of the LU factorization
Unit 5.2.4 Gaussian elimination via Gauss transforms
Ponder This 5.2.4.4.
Section 5.3 LU factorization with (row) pivoting
Unit 5.3.1 Gaussian elimination with row exchanges
Unit 5.3.2 Permutation matrices
Unit 5.3.3 LU factorization with partial pivoting
Unit 5.3.5 Solving with a triangular matrix
Subsubsection 5.3.5.1 Algorithmic Variant 1
Subsubsection 5.3.5.2 Algorithmic Variant 2
Section 5.4 Cholesky factorization
Unit 5.4.1 Hermitian Positive Definite matrices
Unit 5.4.3 Cholesky factorization algorithm (right-looking variant)
Section 5.5 Enrichments
Unit 5.5.1 Other LU factorization algorithms
Subsubsection 5.5.1.1 Variant 1: Bordered algorithm
Subsubsection 5.5.1.2 Variant 2: Up-looking algorithm
Subsubsection 5.5.1.3 Variant 3: Left-looking algorithm
Subsubsection 5.5.1.4 Variant 4: Crout variant
Subsubsection 5.5.1.5 Variant 5: Classical Gaussian elimination

Week 6 Numerical Stability

Section 6.1 Opening Remarks
Unit 6.1.1 Whose problem is it anyway?
Section 6.2 Floating Point Arithmetic
Unit 6.2.2 Error in storing a real number as a floating point number
Unit 6.2.4 Stability of a numerical algorithm
Unit 6.2.6 Absolute value of vectors and matrices
Section 6.3 Error Analysis for Basic Linear Algebra Algorithms
Unit 6.3.1 Initial insights
Unit 6.3.2 Backward error analysis of dot product: general case
Unit 6.3.3 Dot product: error results
Unit 6.3.4 Matrix-vector multiplication
Unit 6.3.5 Matrix-matrix multiplication
Section 6.4 Error Analysis for Solving Linear Systems
Unit 6.4.1 Numerical stability of triangular solve
Unit 6.4.3 Numerical stability of linear solve via LU factorization
Unit 6.4.5 Is LU with Partial Pivoting Stable?

Week 7 Solving Sparse Linear Systems

Section 7.1 Opening Remarks
Unit 7.1.1 Where do sparse linear systems come from?
Section 7.2 Direct Solution
Unit 7.2.1 Banded matrices
Unit 7.2.2 Nested dissection
Section 7.3 Iterative Solution
Unit 7.3.2 Gauss-Seidel iteration
Unit 7.3.3 Convergence of splitting methods

Week 8 Descent Methods

Section 8.2 Search directions
Unit 8.2.1 Basics of descent methods
Unit 8.2.2 Toward practical descent methods
Unit 8.2.3 Relation to Splitting Methods
Unit 8.2.5 Preconditioning
Section 8.3 The Conjugate Gradient Method
Unit 8.3.1 A-conjugate directions
Unit 8.3.5 Practical Conjugate Gradient Method algorithm
Unit 8.3.6 Final touches for the Conjugate Gradient Method
Subsubsection 8.3.6.2 Preconditioning

Part III The Algebraic Eigenvalue Problem

Week 9 Eigenvalues and Eigenvectors

Section 9.1 Opening Remarks
Unit 9.1.1 Relating diagonalization to eigenvalues and eigenvectors
Section 9.2 Basics
Unit 9.2.1 Singular matrices and the eigenvalue problem
Unit 9.2.2 The characteristic polynomial
Unit 9.2.3 More properties of eigenvalues and vectors
Unit 9.2.4 The Schur and Spectral Decompositions
Unit 9.2.5 Diagonalizing a matrix
Unit 9.2.6 Jordan Canonical Form
Section 9.3 The Power Method and related approaches
Unit 9.3.1 The Power Method
Subsubsection 9.3.1.4 The Rayleigh quotient
Unit 9.3.2 The Power Method: Convergence
Unit 9.3.3 The Inverse Power Method
Unit 9.3.4 The Rayleigh Quotient Iteration

Week 10 Practical Solution of the Hermitian Eigenvalue Problem

Section 10.1 Opening Remarks
Unit 10.1.1 Subspace iteration with a Hermitian matrix
Section 10.2 From Power Method to a simple QR algorithm
Unit 10.2.1 A simple QR algorithm
Unit 10.2.2 A simple shifted QR algorithm
Unit 10.2.3 Deflating the problem
Section 10.3 A Practical Hermitian QR Algorithm
Unit 10.3.1 Reduction to tridiagonal form
Unit 10.3.2 Givens' rotations
Unit 10.3.4 The implicit Q theorem
Unit 10.3.5 The Francis implicit QR Step

Week 11 Computing the SVD

Section 11.1 Opening Remarks
Unit 11.1.1 Linking the Singular Value Decomposition to the Spectral Decomposition
Section 11.2 Practical Computation of the Singular Value Decomposition
Unit 11.2.1 Computing the SVD from the Spectral Decomposition
Unit 11.2.2 A strategy for computing the SVD
Unit 11.2.3 Reduction to bidiagonal form
Section 11.3 Jacobi's Method
Unit 11.3.1 Jacobi rotation
Ponder This 11.3.1.1.
Unit 11.3.2 Jacobi's method for computing the Spectral Decomposition

Week 12 Attaining High Performance

Section 12.1 Opening Remarks
Unit 12.1.1 Simple Implementation of matrix-matrix multiplication
Section 12.2 Linear Algebra Building Blocks
Unit 12.2.2 Opportunities for optimization
Section 12.3 Casting Computation in Terms of Matrix-Matrix Multiplication
Unit 12.3.2 Blocked LU factorization