Subsection 10.3.7 A complete algorithm

The last unit shows how one iteration of the QR algorithm can be performed on a tridiagonal matrix by implicitly shifting and then ``chasing the bulge''. All that is left to complete the algorithm is to note that

  • The shift \(\mu_k \) can be chosen to equal \(\alpha_{m-1,m-1} \) (the last element on the diagonal, which tends to converge to the eigenvalue smallest in magnitude). In practice, choosing the shift to be an eigenvalue of the bottom-right \(2 \times 2 \) matrix works better. This is known as the Wilkinson shift.

  • If an element of the subdiagonal (and the corresponding element on the superdiagonal) becomes small enough, it can be considered to be zero and the problem deflates (decouples) into two smaller tridiagonal matrices. Small is often taken to mean that \(\vert \alpha_{i+1,i} \vert \leq \epsilon ( \vert \alpha_{i,i} \vert + \vert \alpha_{i+1,i+1} \vert)\) where \(\epsilon \) is some quantity close to the machine epsilon (unit roundoff).

  • If \(A = Q T Q^T \) reduced \(A\) to the tridiagonal matrix \(T\) before the QR algorithm commenced, then the Givens' rotations encountered as part of the implicitly shifted QR algorithm can be applied from the right to the appropriate columns of \(Q \) so that upon completion \(Q \) is overwritten with the eigenvectors of \(A \text{.}\) Notice that applying a Givens' rotation to a pair of columns of \(Q\) requires \(O(n) \) computation per Givens' rotation. For each Francis implicit QR step \(O(n)\) Givens' rotations are computed, making the application of Givens' rotations to \(Q \) of cost \(O(n^2) \) per iteration of the implicitly shifted QR algorithm. Typically a few (2-3) iterations are needed per eigenvalue that is uncovered (by deflation), meaning that \(O(n) \) iterations are needed. Thus, the QR algorithm is roughly of cost \(O(n^3) \) if the eigenvectors are accumulated (in addition to the cost of forming \(Q \) from the reduction to tridiagonal form, which takes another \(O(n^3) \) operations).

  • If an element on the subdiagonal becomes zero (or very small), and hence the corresponding element of the superdiagonal, then

    \begin{equation*} T = \left( \begin{array}{c | c} T_{00} & 0 \\ \hline 0 & T_{11} \end{array} \right) \quad \quad \quad \begin{array}{c c c c | c c c} \times & \times & & & & & \\ \times & \times & \times & & & & \\ & \times & \times & \times & & & \\ & & \times & \times & 0 & & \\ \hline & & & 0 & \times & \times & \\ & & & & \times & \times & \times \\ & & & & & \times & \times \end{array} \end{equation*}

    then

    • The computation can continue separately with \(T_{00} \) and \(T_{11} \text{.}\)

    • One can pick the shift from the bottom-right of \(T_{00}\) as one continues finding the eigenvalues of \(T_{00} \text{,}\) thus accelerating that part of the computation.

    • One can pick the shift from the bottom-right of \(T_{11}\) as one continues finding the eigenvalues of \(T_{11} \text{,}\) thus accelerating that part of the computation.

    • One must continue to accumulate the eigenvectors by applying the rotations to the appropriate columns of \(Q \text{.}\)

    • Because of the connection between the QR algorithm and the Inverse Power Method, subdiagonal entries near the bottom-right of \(T \) are more likely to converge to zero, so most deflation will happen there.

    • A question that remains is when an element on the subdiagonal, \(\tau_{i+1,i} \text{,}\) can be considered to be zero. The answer is when \(\vert \tau_{i+1,i} \vert \) is small relative to \(\vert \tau_{i,i} \vert \) and \(\vert \tau_{i+1,i+1} \vert \text{.}\) A typical condition that is used is

      \begin{equation*} \vert \tau_{i+1,i} \vert \leq \epsilon_{\rm mach} \sqrt{ \vert \tau_{i,i} \tau_{i+1,i+1} \vert }. \end{equation*}
    • The tridiagonal QR algorithm that we described can then be used to diagonalize the matrix, accumulating the eigenvectors by applying the encountered Givens' rotations to \(Q_T \text{.}\) This is where the real expense is: applying the Givens' rotations to the matrix \(T \) requires \(O( n ) \) computation per sweep, while applying them to \(Q_T \) requires \(O(n^2) \) computation per sweep.

    For details, see some of our papers mentioned in the next section.
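    The pieces described above (the Wilkinson shift, chasing the bulge with Givens' rotations, and accumulating those rotations into \(Q \)) can be sketched in code. The following is an illustrative Python/NumPy sketch of a single Francis implicit QR step on a symmetric tridiagonal matrix, not a production implementation: the function and variable names are our own, deflation is omitted for brevity, and the full matrix is stored rather than just its three diagonals.

    ```python
    import numpy as np

    def wilkinson_shift(T):
        """Eigenvalue of the trailing 2x2 block of T closest to T[-1, -1]."""
        a, b = T[-2, -2], T[-1, -2]
        c = T[-1, -1]
        if b == 0.0:
            return c
        d = (a - c) / 2
        # Sign choice in the denominator avoids catastrophic cancellation.
        denom = d + np.sign(d if d != 0 else 1.0) * np.hypot(d, b)
        return c - b * b / denom

    def givens(a, b):
        """Return (c, s) so that [[c, s], [-s, c]] @ [a, b] = [r, 0]."""
        r = np.hypot(a, b)
        return (1.0, 0.0) if r == 0 else (a / r, b / r)

    def francis_step(T, Q):
        """One implicitly shifted QR step on symmetric tridiagonal T
        (modified in place), accumulating the rotations into Q."""
        n = T.shape[0]
        mu = wilkinson_shift(T)
        # The first rotation is determined by the shifted first column.
        c, s = givens(T[0, 0] - mu, T[1, 0])
        for i in range(n - 1):
            G = np.array([[c, s], [-s, c]])
            # Apply the similarity G T G^T, restricted to the rows and
            # columns that hold nonzeros (including the bulge).
            lo, hi = max(i - 1, 0), min(i + 3, n)
            T[i:i+2, lo:hi] = G @ T[i:i+2, lo:hi]
            T[lo:hi, i:i+2] = T[lo:hi, i:i+2] @ G.T
            # Accumulate into Q: O(n) work per Givens' rotation.
            Q[:, i:i+2] = Q[:, i:i+2] @ G.T
            if i < n - 2:
                # The bulge now sits at T[i+2, i]; the next rotation,
                # acting on rows i+1 and i+2, chases it down.
                c, s = givens(T[i+1, i], T[i+2, i])
        return T, Q
    ```

    Repeating this step, and deflating whenever a subdiagonal entry satisfies the criterion given above, drives \(T \) toward a diagonal matrix of eigenvalues while the columns of \(Q \) converge to the corresponding eigenvectors, since each step preserves \(A = Q T Q^T \text{.}\)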