
Subsection 10.3.6 A complete algorithm

The last unit showed how one iteration of the QR algorithm can be performed on a tridiagonal matrix by shifting implicitly and then "chasing the bulge." All that is left to complete the algorithm is to note that

  • The shift \(\mu_k \) can be chosen to equal \(\alpha_{m-1,m-1} \) (the last element on the diagonal, which tends to converge to the eigenvalue smallest in magnitude). In practice, choosing the shift to be an eigenvalue of the bottom-right \(2 \times 2 \) matrix works better. This is known as the Wilkinson shift.
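As a sketch, the Wilkinson shift can be computed from the trailing \(2 \times 2\) submatrix with the standard stable formula (the function name and NumPy setup are my own, not from the text; the formula picks the eigenvalue of the \(2 \times 2\) block closest to its bottom-right entry):

```python
import numpy as np

def wilkinson_shift(a, b, c):
    """Eigenvalue of [[a, b], [b, c]] closest to c (the Wilkinson shift).

    a, c: the last two diagonal entries of the tridiagonal matrix;
    b: the subdiagonal entry coupling them.  Uses the numerically
    stable form rather than the quadratic formula directly."""
    delta = (a - c) / 2.0
    if delta == 0.0 and b == 0.0:
        return c  # the matrix has already decoupled here
    s = 1.0 if delta >= 0.0 else -1.0
    # c minus a small correction; avoids cancellation when delta is large
    return c - s * b**2 / (abs(delta) + np.hypot(delta, b))
```

For example, for the block \(\left(\begin{array}{cc} 2 & 1 \\ 1 & 3 \end{array}\right)\), whose eigenvalues are \(2.5 \pm \sqrt{1.25}\), this returns the one closest to \(3\).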

  • If \(A = Q T Q^T \) reduced \(A\) to the tridiagonal matrix \(T\) before the QR algorithm commenced, then the Givens' rotations encountered as part of the implicitly shifted QR algorithm can be applied from the right to the appropriate columns of \(Q \) so that upon completion \(Q \) is left overwritten with the eigenvectors of \(A \text{.}\) Let's analyze this:

    • Reducing the matrix to tridiagonal form requires \(O(m^3) \) computations.

    • Forming \(Q \) from the Householder vectors requires \(O( m^3 ) \) computations.

    • Applying a Givens' rotation to a pair of columns of \(Q\) requires \(O(m) \) computation per Givens' rotation. For each Francis implicit QR step, \(O(m)\) Givens' rotations are computed, making the application of Givens' rotations to \(Q \) cost \(O(m^2) \) per iteration of the implicitly shifted QR algorithm. Typically a few (2-3) iterations are needed per eigenvalue that is uncovered (when deflation is incorporated), meaning that \(O(m) \) iterations are needed in total. Thus, a QR algorithm with a tridiagonal matrix that accumulates eigenvectors requires \(O(m^3) \) computation.

    Thus, the total cost of computing the eigenvalues and eigenvectors is \(O(m^3 ) \text{.}\)
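The accumulation of eigenvectors described above can be sketched as follows; this is a minimal NumPy sketch (the function name and the convention that the rotation acts as \(\left(\begin{array}{cc} c & -s \\ s & c \end{array}\right)\) embedded in columns \(i\) and \(i+1\) are my own assumptions, not from the text):

```python
import numpy as np

def apply_givens_right(Q, i, c, s):
    """Overwrite Q with Q G, where G equals the identity except for
    rows/columns i and i+1, which hold [[c, -s], [s, c]].

    Only columns i and i+1 of Q change, so the cost is O(m) for an
    m x m matrix Q."""
    qi = Q[:, i].copy()
    qj = Q[:, i + 1].copy()
    Q[:, i] = c * qi + s * qj
    Q[:, i + 1] = -s * qi + c * qj
```

Because each rotation touches only two columns, accumulating all \(O(m)\) rotations of one Francis step into \(Q\) costs \(O(m^2)\), matching the count above.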

  • If an element on the subdiagonal (and hence, by symmetry, the corresponding element of the superdiagonal) becomes zero or very small, then \(T\) splits into two independent tridiagonal blocks:

    \begin{equation*} T = \left( \begin{array}{c | c} T_{00} & 0 \\ \hline 0 & T_{11} \end{array} \right) \qquad \begin{array}{c c c c | c c c} \times & \times & & & & & \\ \times & \times & \times & & & & \\ & \times & \times & \times & & & \\ & & \times & \times & 0 & & \\ \hline & & & 0 & \times & \times & \\ & & & & \times & \times & \times \\ & & & & & \times & \times \end{array} \end{equation*}


    • The computation can continue separately with \(T_{00} \) and \(T_{11} \text{.}\)

    • One can pick the shift from the bottom-right of \(T_{00}\) as one continues finding the eigenvalues of \(T_{00} \text{,}\) thus accelerating that part of the computation.

    • One can pick the shift from the bottom-right of \(T_{11}\) as one continues finding the eigenvalues of \(T_{11} \text{,}\) thus accelerating that part of the computation.

    • One must continue to accumulate the eigenvectors by applying the rotations to the appropriate columns of \(Q \text{.}\)

    • Because of the connection between the QR algorithm and the Inverse Power Method, subdiagonal entries near the bottom-right of \(T \) are the most likely to converge to zero first, so most deflation happens there.

    • A practical question is when an element on the subdiagonal, \(\tau_{i+1,i} \text{,}\) can be considered to be zero. The answer is when \(\vert \tau_{i+1,i} \vert \) is small relative to \(\vert \tau_{i,i} \vert \) and \(\vert \tau_{i+1,i+1} \vert \text{.}\) A typical condition that is used is

      \begin{equation*} \vert \tau_{i+1,i} \vert \leq \epsilon_{\rm mach} \sqrt{ \vert \tau_{i,i} \vert + \vert \tau_{i+1,i+1} \vert }, \end{equation*}

      where \(\epsilon_{\rm mach}\) denotes the machine epsilon.

    For details, see some of our papers mentioned in the enrichments.
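The deflation test can be sketched as follows, storing the tridiagonal matrix as its diagonal `d` and subdiagonal `e` (the function name and this storage convention are my own assumptions; the tolerance mirrors the condition in the text):

```python
import numpy as np

def deflation_points(d, e, eps=np.finfo(float).eps):
    """Return the indices i at which the subdiagonal entry
    e[i] = tau_{i+1,i} may be set to zero, splitting T into
    independent tridiagonal blocks.

    d: diagonal of T (length m); e: subdiagonal of T (length m-1).
    Test: |tau_{i+1,i}| <= eps * sqrt(|tau_{i,i}| + |tau_{i+1,i+1}|)."""
    return [i for i in range(len(e))
            if abs(e[i]) <= eps * np.sqrt(abs(d[i]) + abs(d[i + 1]))]
```

Each index returned marks a boundary between a \(T_{00}\) and a \(T_{11}\) on which the algorithm can continue separately, each with its own shift.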