
Appendix E Answers and Solutions to Homeworks

Part I Orthogonality

Week 1 Norms

Section 1.1 Opening Remarks
Subsection 1.1.1 Why norms?
Section 1.2 Vector Norms
Subsection 1.2.1 Absolute value
Subsection 1.2.2 What is a vector norm?
Subsection 1.2.3 The vector 2-norm (Euclidean length)
Subsection 1.2.4 The vector \(p\)-norms
Subsection 1.2.5 Unit ball
Subsection 1.2.6 Equivalence of vector norms
Section 1.3 Matrix Norms
Subsection 1.3.2 What is a matrix norm?
Subsection 1.3.3 The Frobenius norm
Subsection 1.3.4 Induced matrix norms
Subsection 1.3.5 The matrix 2-norm
Subsection 1.3.6 Computing the matrix 1-norm and \(\infty\)-norm
Subsection 1.3.7 Equivalence of matrix norms
Subsection 1.3.8 Submultiplicative norms
Section 1.4 Condition Number of a Matrix
Subsection 1.4.1 Conditioning of a linear system
Subsection 1.4.2 Loss of digits of accuracy
Section 1.6 Wrap Up
Subsection 1.6.1 Additional homework

Week 2 The Singular Value Decomposition

Section 2.1 Opening Remarks
Subsection 2.1.1 Low rank approximation
Section 2.2 Orthogonal Vectors and Matrices
Subsection 2.2.1 Orthogonal vectors
Subsection 2.2.2 Component in the direction of a vector
Subsection 2.2.3 Orthonormal vectors and matrices
Subsection 2.2.4 Unitary matrices
Subsection 2.2.5 Examples of unitary matrices
Subsubsection 2.2.5.1 Rotations
Subsubsection 2.2.5.2 Reflections
Subsection 2.2.6 Change of orthonormal basis
Subsection 2.2.7 Why we love unitary matrices
Section 2.3 The Singular Value Decomposition
Subsection 2.3.1 The Singular Value Decomposition Theorem
Subsection 2.3.4 The Reduced Singular Value Decomposition
Subsection 2.3.5 SVD of nonsingular matrices
Subsection 2.3.6 Best rank-k approximation
Section 2.5 Wrap Up
Subsection 2.5.1 Additional homework
Homework 2.5.1.1.
Homework 2.5.1.3.

Week 3 The QR Decomposition

Section 3.1 Opening Remarks
Subsection 3.1.1 Choosing the right basis
Section 3.2 Gram-Schmidt Orthogonalization
Subsection 3.2.2 Gram-Schmidt and the QR factorization
Subsection 3.2.3 Classical Gram-Schmidt algorithm
Subsection 3.2.4 Modified Gram-Schmidt (MGS)
Subsection 3.2.5 In practice, MGS is more accurate
Subsection 3.2.6 Cost of Gram-Schmidt algorithms
Section 3.3 Householder QR Factorization
Subsection 3.3.1 Using unitary matrices
Subsection 3.3.2 Householder transformation
Subsection 3.3.3 Practical computation of the Householder vector
Subsubsection 3.3.3.3 A routine for computing the Householder vector
Subsection 3.3.4 Householder QR factorization algorithm
Subsection 3.3.5 Forming \(Q \)
Subsection 3.3.6 Applying \(Q^H \)
Subsection 3.3.7 Orthogonality of resulting \(Q \)
Section 3.5 Wrap Up
Subsection 3.5.1 Additional homework

Week 4 Linear Least Squares

Section 4.2 Solution via the Method of Normal Equations
Subsection 4.2.1 The four fundamental spaces of a matrix
Subsection 4.2.2 The Method of Normal Equations
Subsection 4.2.4 Conditioning of the linear least squares problem
Subsection 4.2.5 Why using the Method of Normal Equations could be bad
Section 4.3 Solution via the SVD
Subsection 4.3.1 The SVD and the four fundamental spaces
Subsection 4.3.3 Case 2: General case
Section 4.4 Solution via the QR factorization
Subsection 4.4.1 \(A \) has linearly independent columns

Part II Solving Linear Systems

Week 5 The LU and Cholesky Factorizations

Section 5.1 Opening Remarks
Subsection 5.1.1 Of Gaussian elimination and LU factorization
Section 5.2 From Gaussian elimination to LU factorization
Subsection 5.2.1 Gaussian elimination
Subsection 5.2.2 LU factorization: The right-looking algorithm
Subsection 5.2.3 Existence of the LU factorization
Subsection 5.2.4 Gaussian elimination via Gauss transforms
Ponder This 5.2.4.4.
Section 5.3 LU factorization with (row) pivoting
Subsection 5.3.1 Gaussian elimination with row exchanges
Subsection 5.3.2 Permutation matrices
Subsection 5.3.3 LU factorization with partial pivoting
Subsection 5.3.5 Solving with a triangular matrix
Subsubsection 5.3.5.1 Algorithmic Variant 1
Subsubsection 5.3.5.2 Algorithmic Variant 2
Section 5.4 Cholesky factorization
Subsection 5.4.1 Hermitian Positive Definite matrices
Subsection 5.4.3 Cholesky factorization algorithm (right-looking variant)
Section 5.5 Enrichments
Subsection 5.5.1 Other LU factorization algorithms
Subsubsection 5.5.1.1 Variant 1: Bordered algorithm
Subsubsection 5.5.1.2 Variant 2: Up-looking algorithm
Subsubsection 5.5.1.3 Variant 3: Left-looking algorithm
Subsubsection 5.5.1.4 Variant 4: Crout variant
Subsubsection 5.5.1.5 Variant 5: Classical Gaussian elimination

Week 6 Numerical Stability

Section 6.1 Opening Remarks
Subsection 6.1.1 Whose problem is it anyway?
Section 6.2 Floating Point Arithmetic
Subsection 6.2.2 Error in storing a real number as a floating point number
Subsection 6.2.4 Stability of a numerical algorithm
Subsection 6.2.6 Absolute value of vectors and matrices
Section 6.3 Error Analysis for Basic Linear Algebra Algorithms
Subsection 6.3.1 Initial insights
Subsection 6.3.2 Backward error analysis of dot product: general case
Subsection 6.3.3 Dot product: error results
Subsection 6.3.4 Matrix-vector multiplication
Subsection 6.3.5 Matrix-matrix multiplication
Section 6.4 Error Analysis for Solving Linear Systems
Subsection 6.4.1 Numerical stability of triangular solve
Subsection 6.4.3 Numerical stability of linear solve via LU factorization
Subsection 6.4.5 Is LU with Partial Pivoting Stable?

Week 7 Solving Sparse Linear Systems

Section 7.1 Opening Remarks
Subsection 7.1.1 Where do sparse linear systems come from?
Section 7.2 Direct Solution
Subsection 7.2.1 Banded matrices
Subsection 7.2.2 Nested dissection
Section 7.3 Iterative Solution
Subsection 7.3.2 Gauss-Seidel iteration
Subsection 7.3.3 Convergence of splitting methods

Week 8 Descent Methods

Section 8.2 Search directions
Subsection 8.2.1 Basics of descent methods
Subsection 8.2.2 Toward practical descent methods
Subsection 8.2.3 Relation to Splitting Methods
Subsection 8.2.5 Preconditioning
Section 8.3 The Conjugate Gradient Method
Subsection 8.3.1 A-conjugate directions
Subsection 8.3.5 Practical Conjugate Gradient Method algorithm
Subsection 8.3.6 Final touches for the Conjugate Gradient Method
Subsubsection 8.3.6.2 Preconditioning

Part III The Algebraic Eigenvalue Problem

Week 9 Eigenvalues and Eigenvectors

Section 9.1 Opening Remarks
Subsection 9.1.1 Relating diagonalization to eigenvalues and eigenvectors
Section 9.2 Basics
Subsection 9.2.1 Singular matrices and the eigenvalue problem
Subsection 9.2.2 The characteristic polynomial
Subsection 9.2.3 More properties of eigenvalues and vectors
Subsection 9.2.4 The Schur and Spectral Decompositions
Subsection 9.2.5 Diagonalizing a matrix
Subsection 9.2.6 Jordan Canonical Form
Section 9.3 The Power Method and related approaches
Subsection 9.3.1 The Power Method
Subsubsection 9.3.1.4 The Rayleigh quotient
Subsection 9.3.2 The Power Method: Convergence
Subsection 9.3.3 The Inverse Power Method
Subsection 9.3.4 The Rayleigh Quotient Iteration

Week 10 Practical Solution of the Hermitian Eigenvalue Problem

Section 10.1 Opening Remarks
Subsection 10.1.1 Subspace iteration with a Hermitian matrix
Section 10.2 From Power Method to a simple QR algorithm
Subsection 10.2.1 A simple QR algorithm
Subsection 10.2.2 A simple shifted QR algorithm
Subsection 10.2.3 Deflating the problem
Section 10.3 A Practical Hermitian QR Algorithm
Subsection 10.3.1 Reduction to tridiagonal form
Subsection 10.3.2 Givens' rotations
Subsection 10.3.4 The implicit Q theorem
Subsection 10.3.5 The Francis implicit QR Step

Week 11 Computing the SVD

Section 11.1 Opening Remarks
Subsection 11.1.1 Linking the Singular Value Decomposition to the Spectral Decomposition
Section 11.2 Practical Computation of the Singular Value Decomposition
Subsection 11.2.1 Computing the SVD from the Spectral Decomposition
Subsection 11.2.2 A strategy for computing the SVD
Subsection 11.2.3 Reduction to bidiagonal form
Section 11.3 Jacobi's Method
Subsection 11.3.1 Jacobi rotation
Ponder This 11.3.1.1.
Subsection 11.3.2 Jacobi's method for computing the Spectral Decomposition

Week 12 Attaining High Performance

Section 12.1 Opening Remarks
Subsection 12.1.1 Simple Implementation of matrix-matrix multiplication
Section 12.2 Linear Algebra Building Blocks
Subsection 12.2.2 Opportunities for optimization
Section 12.3 Casting Computation in Terms of Matrix-Matrix Multiplication
Subsection 12.3.2 Blocked LU factorization