Subsection 11.5.2 Summary

Let \(A \in \mathbb C^{m \times n} \) and let \(A = U \Sigma V^H \) be its SVD, where \(\Sigma \) has the structure indicated by

\begin{equation*} \Sigma = \left( \begin{array}{c | c} \Sigma_{TL} \amp 0_{r \times (n-r)} \\ \hline 0_{(m-r) \times r} \amp 0_{(m-r) \times ( n - r )} \end{array} \right), \mbox{ with } \begin{array}[t]{l} \Sigma_{TL} = \diag{ \sigma_0, \ldots , \sigma_{r-1} }, \\ \mbox{ and } \sigma_0 \geq \sigma_1 \geq \cdots \geq \sigma_{r-1} \gt 0. \end{array} \end{equation*}

Then a Spectral Decomposition of \(A^H A \) is given by

\begin{equation*} A^H A = V \Sigma^T \Sigma V^H = V \left( \begin{array}{c | c} \Sigma_{TL}^2 \amp 0_{r \times (n-r)} \\ \hline 0_{(n-r) \times r} \amp 0_{(n-r) \times ( n - r )} \end{array} \right) V^H, \end{equation*}

and a Spectral Decomposition of \(A A^H \) is given by

\begin{equation*} A A^H = U \Sigma \Sigma^T U^H = U \left( \begin{array}{c | c} \Sigma_{TL}^2 \amp 0_{r \times (m-r)} \\ \hline 0_{(m-r) \times r} \amp 0_{(m-r) \times ( m - r )} \end{array} \right) U^H . \end{equation*}
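To make this concrete, here is a minimal NumPy sketch (ours, not part of the text) that checks both identities for a random complex matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

U, s, Vh = np.linalg.svd(A)            # full SVD: U is m x m, Vh is n x n
Sigma = np.zeros((m, n))
Sigma[:n, :n] = np.diag(s)

# A^H A = V (Sigma^T Sigma) V^H  and  A A^H = U (Sigma Sigma^T) U^H
print(np.allclose(A.conj().T @ A, Vh.conj().T @ (Sigma.T @ Sigma) @ Vh))
print(np.allclose(A @ A.conj().T, U @ (Sigma @ Sigma.T) @ U.conj().T))
```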

Let \(A \in \mathbb C^{m \times n} \) be of rank \(r\) and

\begin{equation*} A^H A = \left( \begin{array}{c | c} Q_L \amp Q_R \end{array} \right) \left( \begin{array}{c | c} D_{TL} \amp 0_{r\times (n-r)} \\ \hline 0_{(n-r) \times r} \amp 0_{(n-r) \times (n-r)} \end{array} \right) \left( \begin{array}{c | c} Q_L \amp Q_R \end{array} \right)^H \end{equation*}

be the Spectral Decomposition of \(A^H A \text{,}\) where \(Q_L \in \mathbb C^{n \times r} \) and, for simplicity, we assume the diagonal elements of \(D_{TL} \) are ordered from largest to smallest. Then the Reduced SVD of \(A \) is given by \(A = U_L \Sigma_{TL} V_L^H \text{,}\) where \(V_L = Q_L \) and \(\Sigma_{TL} = D_{TL}^{1/2} \text{,}\) and \(U_L = A V_L \Sigma_{TL}^{-1} = A Q_L D_{TL}^{-1/2} \text{.}\)
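The following sketch (variable names are ours) carries out this recipe numerically for a full-column-rank matrix. Note that forming \(A^H A \) explicitly squares the condition number, so this is illustrative rather than recommended practice:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 6, 4, 4                       # full column rank for simplicity
A = rng.standard_normal((m, n))

# Spectral decomposition of A^H A; eigh returns eigenvalues in ascending
# order, so reverse to order them from largest to smallest.
d, Q = np.linalg.eigh(A.T @ A)
d, Q = d[::-1], Q[:, ::-1]

Q_L, D_TL = Q[:, :r], d[:r]
V_L = Q_L
Sigma_TL = np.sqrt(D_TL)                # Sigma_TL = D_TL^{1/2}
U_L = A @ Q_L / Sigma_TL                # U_L = A V_L Sigma_TL^{-1}

print(np.allclose(A, U_L * Sigma_TL @ V_L.T))   # Reduced SVD reconstructs A
```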

Let \(A \in \mathbb C^{m \times n} \text{,}\) with \(m \geq n \text{,}\) and \(A = Q R\) be its QR factorization where, for simplicity, we assume that the \(n \times n \) upper triangular matrix \(R \) is nonsingular. If \(R = \widehat U \widehat \Sigma \widehat V^H \) is the SVD of \(R \text{,}\) then

\begin{equation*} A = Q R = Q \widehat U \widehat \Sigma \widehat V^H = \begin{array}[t]{c} \underbrace{ ( Q \widehat U ) } \\ U \end{array} \begin{array}[t]{c} \underbrace{ \widehat \Sigma } \\ \Sigma \end{array} \begin{array}[t]{c} \underbrace{ \widehat V^H } \\ V^H \end{array}, \end{equation*}

which exposes the Reduced SVD of \(A \text{.}\)
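A minimal sketch of this QR-first approach: factor \(A = QR \text{,}\) take the SVD of the small \(n \times n \) matrix \(R \text{,}\) and combine.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 100, 5
A = rng.standard_normal((m, n))

Q, R = np.linalg.qr(A)                  # reduced QR: Q is m x n, R is n x n
Uhat, s, Vhat_h = np.linalg.svd(R)      # SVD of the small factor R

U_L = Q @ Uhat                          # m x n matrix of left singular vectors
print(np.allclose(A, U_L * s @ Vhat_h)) # Reduced SVD reconstructs A
```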

Figure 11.2.3.1 illustrates the reduction of a matrix to bidiagonal form via Householder transformations computed from, and applied to, the left and the right.

A practical algorithm for computing the SVD first reduces a matrix to bidiagonal form and then performs an implicitly shifted bidiagonal QR algorithm, as described in Subsection 10.3.5.
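Below is a simplified sketch (our own, not the text's algorithm) of the first phase only: reducing \(A \) to upper bidiagonal form by alternating Householder transformations from the left and the right. Both transformations are orthogonal, so the singular values are preserved. It does not accumulate \(U \) and \(V \) and omits the subsequent implicitly shifted bidiagonal QR iteration:

```python
import numpy as np

def house(x):
    """Return unit vector v so that (I - 2 v v^T) x = -+ ||x|| e_0."""
    v = x.astype(float).copy()
    v[0] += np.copysign(np.linalg.norm(x), x[0])
    norm_v = np.linalg.norm(v)
    return v / norm_v if norm_v > 0 else v

def bidiagonalize(A):
    B = A.astype(float).copy()
    m, n = B.shape
    for k in range(n):
        # Left transformation: zero out B[k+1:, k]
        v = house(B[k:, k])
        B[k:, k:] -= 2.0 * np.outer(v, v @ B[k:, k:])
        # Right transformation: zero out B[k, k+2:]
        if k < n - 2:
            w = house(B[k, k + 1:])
            B[k:, k + 1:] -= 2.0 * np.outer(B[k:, k + 1:] @ w, w)
    return B

A = np.random.default_rng(4).standard_normal((6, 4))
print(np.round(bidiagonalize(A), 10))   # upper bidiagonal up to roundoff
```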

Given a symmetric \(2 \times 2 \) matrix \(A \text{,}\) with

\begin{equation*} A = \left( \begin{array}{c c} \alpha_{0,0} \amp \alpha_{0,1} \\ \alpha_{1,0} \amp \alpha_{1,1} \end{array} \right) \end{equation*}

there exists a rotation,

\begin{equation*} J = \left( \begin{array}{c c} \gamma \amp - \sigma \\ \sigma \amp \gamma \end{array} \right), \end{equation*}

where \(\gamma = \cos( \theta ) \) and \(\sigma = \sin( \theta ) \) for some angle \(\theta \text{,}\) such that

\begin{equation*} J^T A J = \left( \begin{array}{c c} \gamma \amp \sigma \\ - \sigma \amp \gamma \end{array} \right) \left( \begin{array}{c c} \alpha_{0,0} \amp \alpha_{0,1} \\ \alpha_{1,0} \amp \alpha_{1,1} \end{array} \right) \left( \begin{array}{c c} \gamma \amp - \sigma \\ \sigma \amp \gamma \end{array} \right) = \left( \begin{array}{c c} \lambda_{0} \amp 0 \\ 0 \amp \lambda_{1} \end{array} \right) = \Lambda. \end{equation*}

This is known as a Jacobi rotation.
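One standard way to compute the angle (an assumption on our part; the text does not give a formula here) is \(\theta = \frac{1}{2} \operatorname{atan2}( 2 \alpha_{0,1}, \alpha_{0,0} - \alpha_{1,1} ) \text{,}\) which satisfies \(\tan( 2 \theta ) = 2 \alpha_{0,1} / ( \alpha_{0,0} - \alpha_{1,1} ) \text{.}\) A small sketch that computes and checks such a rotation:

```python
import numpy as np

a00, a01, a11 = 2.0, 1.0, 3.0
A = np.array([[a00, a01],
              [a01, a11]])

theta = 0.5 * np.arctan2(2 * a01, a00 - a11)
gamma, sigma = np.cos(theta), np.sin(theta)
J = np.array([[gamma, -sigma],
              [sigma,  gamma]])

Lambda = J.T @ A @ J                    # diagonal up to roundoff
print(np.round(Lambda, 10))
```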

A Jacobi rotation, when applied from the left and right symmetrically to selected rows and columns of a symmetric matrix, can be used to strategically introduce zeroes in a matrix. Since this is a unitary similarity transformation, it preserves eigenvalues. Iterating so that every off-diagonal element is zeroed exactly once is known as a (Jacobi) sweep. Applying multiple sweeps yields a sequence of matrices that eventually converges to a diagonal matrix. Approximations for the eigenvalues can then be found on the diagonal of that matrix, and the corresponding eigenvectors can be recovered by accumulating the Jacobi rotations. This is known as Jacobi's method (for solving the symmetric eigenvalue problem).
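A compact sketch of cyclic Jacobi sweeps (our own illustrative code, using the same angle formula as above): each sweep zeroes every off-diagonal pair once, and Q accumulates the eigenvectors.

```python
import numpy as np

def jacobi_eig(A, tol=1e-12, max_sweeps=30):
    A = A.copy()
    n = A.shape[0]
    Q = np.eye(n)
    for _ in range(max_sweeps):
        off = np.sqrt(np.sum((A - np.diag(np.diag(A)))**2))
        if off < tol:
            break
        for p in range(n - 1):          # one sweep: zero each (p, q) once
            for q in range(p + 1, n):
                theta = 0.5 * np.arctan2(2 * A[p, q], A[p, p] - A[q, q])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = -s, s
                A = J.T @ A @ J         # unitary similarity: eigenvalues kept
                Q = Q @ J               # accumulate eigenvectors
    return np.diag(A), Q

B = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.5],
              [2.0, 0.5, 5.0]])
lam, Q = jacobi_eig(B)
print(np.allclose(Q @ np.diag(lam) @ Q.T, B))
```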

Jacobi's method can be modified so that it computes the SVD of a general matrix. This is known as the one-sided Jacobi's method.
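A hedged sketch of the one-sided idea (all names and details here are ours, assuming full column rank): rotate pairs of columns of \(A \) from the right until all columns are mutually orthogonal; the column norms are then the singular values.

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    A = A.copy()
    n = A.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                # Entries of the 2 x 2 symmetric submatrix of A^T A
                app, aqq = A[:, p] @ A[:, p], A[:, q] @ A[:, q]
                apq = A[:, p] @ A[:, q]
                if abs(apq) > tol * np.sqrt(app * aqq):
                    converged = False
                    theta = 0.5 * np.arctan2(2 * apq, app - aqq)
                    c, s = np.cos(theta), np.sin(theta)
                    J = np.array([[c, -s], [s, c]])
                    A[:, [p, q]] = A[:, [p, q]] @ J   # orthogonalize columns
                    V[:, [p, q]] = V[:, [p, q]] @ J   # accumulate V
        if converged:
            break
    s = np.linalg.norm(A, axis=0)       # singular values (unsorted)
    U = A / s                           # normalize columns (full rank assumed)
    return U, s, V

A = np.random.default_rng(3).standard_normal((5, 3))
U, s, V = one_sided_jacobi_svd(A)
print(np.allclose(U * s @ V.T, A))
```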