
Subsection 9.3.2 The Inverse Power Method

The Power Method homes in on an eigenvector associated with the largest (in magnitude) eigenvalue. The Inverse Power Method homes in on an eigenvector associated with the smallest eigenvalue (in magnitude).

Once again, we assume that a given matrix \(A \in \C^{m \times m} \) is diagonalizable so that there exist a nonsingular matrix \(X \) and a diagonal matrix \(\Lambda \) such that \(A = X \Lambda X^{-1} \text{.}\) We further assume that \(\Lambda = {\rm diag}( \lambda_0 , \cdots , \lambda_{m-1} ) \) and

\begin{equation*} \vert \lambda_0 \vert \geq \vert \lambda_1 \vert \geq \cdots \geq \vert \lambda_{m-2} \vert \gt \vert \lambda_{m-1} \vert \gt 0 . \end{equation*}

Notice that this means that \(A \) is nonsingular.

Clearly, if

\begin{equation*} \vert \lambda_0 \vert \geq \vert \lambda_1 \vert \geq \cdots \geq \vert \lambda_{m-2} \vert \gt \vert \lambda_{m-1} \vert \gt 0, \end{equation*}

then

\begin{equation*} \left\vert \frac{1}{\lambda_{m-1}} \right\vert \gt \left\vert \frac{1}{\lambda_{m-2}} \right\vert \geq \left\vert \frac{1}{\lambda_{m-3}} \right\vert \geq \cdots \geq \left\vert \frac{1}{\lambda_{0}} \right\vert . \end{equation*}

Thus, an eigenvector associated with the smallest (in magnitude) eigenvalue of \(A \) is an eigenvector associated with the largest (in magnitude) eigenvalue of \(A^{-1} \text{.}\) This suggests the following naive iteration (which mirrors the second attempt for the Power Method in Subsubsection 9.3.1.2, but iterating with \(A^{-1} \)):

\begin{equation*} \begin{array}{l} {\bf for~} k = 0, \ldots \\ ~~~v^{(k+1)} = A^{-1} v^{(k)} \\ ~~~v^{(k+1)} = \lambda_{m-1} v^{(k+1)} \\ {\bf endfor} \end{array} \end{equation*}
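The naive iteration above can be sketched in NumPy. This is a minimal illustration, not code from the text: the matrix is a made-up diagonalizable example with known eigenvalues, and the iteration assumes the smallest eigenvalue \(\lambda_{m-1}\) is known in advance (an unrealistic assumption, which is what the practical variant below removes).

```python
import numpy as np

# Made-up example: build a diagonalizable A = X Lambda X^{-1} with
# eigenvalues 5 >= 4 >= 3 >= 2 > 1 > 0 (so lambda_{m-1} = 1).
rng = np.random.default_rng(0)
m = 5
X = rng.standard_normal((m, m))
A = X @ np.diag([5.0, 4.0, 3.0, 2.0, 1.0]) @ np.linalg.inv(X)

lambda_min = 1.0                      # assumed known, only for this sketch
v = rng.standard_normal(m)
for _ in range(50):
    v = np.linalg.solve(A, v)         # v^{(k+1)} := A^{-1} v^{(k)}
    v = lambda_min * v                # rescale so the iterates converge

# v now approximates an eigenvector associated with lambda_{m-1}:
residual = np.linalg.norm(A @ v - lambda_min * v) / np.linalg.norm(v)
```

Note that the rescaling by \(\lambda_{m-1}\) keeps the component in the direction of \(x_{m-1}\) fixed while the other components shrink by at least a factor \(\vert \lambda_{m-1}/\lambda_{m-2} \vert\) per iteration.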

From the analysis of the convergence of the Power Method algorithm in Subsubsection 9.3.1.3, we conclude that now

\begin{equation*} \begin{array}{l} \| v^{(k+1)} - \psi_{m-1} x_{m-1} \|_{X^{-1}} \leq \left\vert \frac{\lambda_{m-1}}{\lambda_{m-2}} \right\vert \| v^{(k)} - \psi_{m-1} x_{m-1} \|_{X^{-1}}. \end{array} \end{equation*}

A more practical Inverse Power Method algorithm is given by

\begin{equation*} \begin{array}{l} {\bf for~} k = 0, \ldots \\ ~~~ v^{(k+1)} = A^{-1} v^{(k)} \\ ~~~v^{(k+1)} = v^{(k+1)} / \| v^{(k+1)} \| \\ {\bf endfor} \end{array} \end{equation*}
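A sketch of this practical variant, again on a made-up example matrix (my choice of test problem, not from the text): normalizing each iterate avoids both the unknown \(\lambda_{m-1}\) and over/underflow.

```python
import numpy as np

# Made-up diagonalizable example with eigenvalues 4, 3, 2, 1.
rng = np.random.default_rng(1)
m = 4
X = rng.standard_normal((m, m))
A = X @ np.diag([4.0, 3.0, 2.0, 1.0]) @ np.linalg.inv(X)

v = rng.standard_normal(m)
v /= np.linalg.norm(v)
for _ in range(100):
    v = np.linalg.solve(A, v)     # v^{(k+1)} := A^{-1} v^{(k)}
    v /= np.linalg.norm(v)        # normalize instead of rescaling

# The Rayleigh quotient recovers an estimate of the smallest eigenvalue:
lambda_est = (v @ (A @ v)) / (v @ v)
```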

We would probably want to factor \(P A = L U \) (LU factorization with partial pivoting) once and solve \(L( U v^{(k+1)} ) = P v^{(k)} \) rather than multiplying with \(A^{-1} \text{.}\)
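With SciPy, that strategy might look as follows: pay the \(O(m^3)\) cost of the factorization once, after which each iteration costs only \(O(m^2)\) for the two triangular solves. The example matrix is again a made-up one with smallest eigenvalue \(1\).

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Made-up diagonalizable example with eigenvalues 4, 3, 2, 1.
rng = np.random.default_rng(2)
m = 4
X = rng.standard_normal((m, m))
A = X @ np.diag([4.0, 3.0, 2.0, 1.0]) @ np.linalg.inv(X)

lu, piv = lu_factor(A)            # P A = L U, computed once: O(m^3)
v = rng.standard_normal(m)
v /= np.linalg.norm(v)
for _ in range(100):
    v = lu_solve((lu, piv), v)    # solve L(U v^{(k+1)}) = P v^{(k)}: O(m^2)
    v /= np.linalg.norm(v)

lambda_est = (v @ (A @ v)) / (v @ v)
```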

Often, we would expect the Inverse Power Method to converge faster than the Power Method. For example, take the case where the \(\vert \lambda_k \vert \) are equally spaced between \(0 \) and \(m \) so that \(\vert \lambda_k \vert = m - k \text{.}\) Then

\begin{equation*} \left\vert \frac{\lambda_1}{\lambda_0} \right\vert = \frac{m-1}{m} \quad \mbox{and} \quad \left\vert \frac{\lambda_{m-1}}{\lambda_{m-2}} \right\vert = \frac{1}{2}, \end{equation*}

which means that the Power Method converges much more slowly than the Inverse Power Method.
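To make the gap concrete, a quick back-of-the-envelope comparison (my own illustration, not from the text) of how many iterations each method needs to shrink the error by a factor of \(10^6\) when the convergence factors are \((m-1)/m\) and \(1/2\):

```python
import math

# Convergence factors for the example with |lambda_k| = m - k:
m = 100
power_factor = (m - 1) / m     # Power Method: |lambda_1 / lambda_0|
inverse_factor = 1 / 2         # Inverse Power Method: |lambda_{m-1} / lambda_{m-2}|

# Iterations needed so that factor^k <= 10^{-6}:
power_iters = math.ceil(math.log(1e-6) / math.log(power_factor))
inverse_iters = math.ceil(math.log(1e-6) / math.log(inverse_factor))
```

For \(m = 100\) the Power Method needs on the order of a thousand iterations while the Inverse Power Method needs only about twenty.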