Subsection 9.3.3 The Inverse Power Method

Homework 9.3.3.1.

Let \(A \in \mathbb C^{m \times m} \) be nonsingular, and \(( \lambda, x ) \) an eigenpair of \(A \text{.}\)

Which of the following is an eigenpair of \(A^{-1} \text{:}\)

  • \(( \lambda, x ) \text{.}\)

  • \(( \lambda, A^{-1} x ) \text{.}\)

  • \(( 1/\lambda, A^{-1} x ) \text{.}\)

  • \(( 1/\lambda, x ) \text{.}\)

Answer

\(( 1/\lambda, x ) \text{.}\)

Now justify your answer.

Solution

Since \(A x = \lambda x \) and \(A \) is nonsingular, we know that \(A^{-1} \) exists and \(\lambda \neq 0 \text{.}\) Applying \(A^{-1} \) to both sides and dividing by \(\lambda \) yields

\begin{equation*} \frac{1}{\lambda} x = A^{-1} x \end{equation*}

which can be rewritten as

\begin{equation*} A^{-1} x = \frac{1}{\lambda} x. \end{equation*}

We conclude that \(( 1/\lambda, x ) \) is an eigenpair of \(A^{-1} \text{.}\)

The Power Method homes in on an eigenvector associated with the largest (in magnitude) eigenvalue. The Inverse Power Method homes in on an eigenvector associated with the smallest eigenvalue (in magnitude).

Once again, we assume that a given matrix \(A \in \mathbb C^{m \times m} \) is diagonalizable so that there exist a nonsingular matrix \(X \) and diagonal matrix \(\Lambda \) such that \(A = X \Lambda X^{-1} \text{.}\) We further assume that \(\Lambda = {\rm diag}( \lambda_0 , \cdots , \lambda_{m-1} ) \) and

\begin{equation*} \vert \lambda_0 \vert \geq \vert \lambda_1 \vert \geq \cdots \geq \vert \lambda_{m-2} \vert \gt \vert \lambda_{m-1} \vert \gt 0 . \end{equation*}

Notice that this means that \(A \) is nonsingular.

Clearly, if

\begin{equation*} \vert \lambda_0 \vert \geq \vert \lambda_1 \vert \geq \cdots \geq \vert \lambda_{m-2} \vert \gt \vert \lambda_{m-1} \vert \gt 0, \end{equation*}

then

\begin{equation*} \left\vert \frac{1}{\lambda_{m-1}} \right\vert \gt \left\vert \frac{1}{\lambda_{m-2}} \right\vert \geq \left\vert \frac{1}{\lambda_{m-3}} \right\vert \geq \cdots \geq \left\vert \frac{1}{\lambda_{0}} \right\vert . \end{equation*}

Thus, an eigenvector associated with the smallest (in magnitude) eigenvalue of \(A \) is an eigenvector associated with the largest (in magnitude) eigenvalue of \(A^{-1} \text{.}\)

This suggests the following naive iteration (which mirrors the second attempt for the Power Method in Subsubsection 9.3.1.2, but iterating with \(A^{-1} \)):

\begin{equation*} \begin{array}{l} {\bf for~} k = 0, \ldots \\ ~~~v^{(k+1)} = A^{-1} v^{(k)} \\ ~~~v^{(k+1)} = \lambda_{m-1} v^{(k+1)} \\ {\bf endfor} \end{array} \end{equation*}
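As a concrete (if impractical) numeric sketch of this iteration: the diagonal matrix and the value \(\lambda_{m-1} = 0.5 \) below are illustrative choices of ours, and the iteration assumes \(\lambda_{m-1} \) is known in advance, which is exactly what makes this version naive.

```python
import numpy as np

# Illustrative diagonalizable matrix; its smallest (in magnitude)
# eigenvalue is 0.5, which the naive iteration assumes we know.
A = np.diag([4.0, 2.0, 0.5])
lambda_min = 0.5                  # assumed known in advance

v = np.array([1.0, 1.0, 1.0])     # v^(0)
for _ in range(50):
    v = np.linalg.solve(A, v)     # v^(k+1) = A^{-1} v^(k)
    v = lambda_min * v            # rescale so the iterates neither
                                  # blow up nor shrink to zero
# v approaches a multiple of e_2, the eigenvector associated
# with the smallest eigenvalue of A
```

Here each component of the iterate is multiplied by \(\lambda_{m-1}/\lambda_i \) per step, so all components decay except the one associated with \(\lambda_{m-1} \text{.}\)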

From the analysis of the convergence of the Power Method algorithm in Subsection 9.3.2, we conclude that now

\begin{equation*} \begin{array}{l} \| v^{(k+1)} - \psi_{m-1} x_{m-1} \|_{X^{-1}} \leq \left\vert \frac{\lambda_{m-1}}{\lambda_{m-2}} \right\vert \| v^{(k)} - \psi_{m-1} x_{m-1} \|_{X^{-1}}. \end{array} \end{equation*}

A more practical Inverse Power Method algorithm is given by

\begin{equation*} \begin{array}{l} \mbox{Pick } v^{(0)} \mbox{ of unit length} \\ {\bf for~} k = 0, \ldots \\ ~~~ v^{(k+1)} = A^{-1} v^{(k)} \\ ~~~v^{(k+1)} = v^{(k+1)} / \| v^{(k+1)} \| \\ {\bf endfor} \end{array} \end{equation*}
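This practical algorithm might be sketched in Python as follows; the function name, iteration count, and use of the Rayleigh quotient to recover an eigenvalue estimate are our own choices, not part of the algorithm as stated above.

```python
import numpy as np

def inverse_power_method(A, num_iters=100, seed=0):
    """Sketch of the practical Inverse Power Method.

    Each step solves A v^(k+1) = v^(k) (equivalent to applying A^{-1})
    and normalizes, so v^(k) converges to an eigenvector associated
    with the smallest (in magnitude) eigenvalue of A.
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)            # pick v^(0) of unit length
    for _ in range(num_iters):
        v = np.linalg.solve(A, v)     # v^(k+1) = A^{-1} v^(k)
        v /= np.linalg.norm(v)        # v^(k+1) = v^(k+1) / ||v^(k+1)||
    # Rayleigh quotient: estimate of the eigenvalue of A paired with v
    lam = v @ (A @ v)
    return lam, v
```

For example, on \({\rm diag}(5, 3, 0.5) \) the returned estimate approaches \(0.5 \) and the iterate aligns with the corresponding eigenvector \(e_2 \) (up to sign).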

In practice, we would want to factor \(P A = L U \) (LU factorization with partial pivoting) once and then, in each iteration, solve \(L( U v^{(k+1)} ) = P v^{(k)} \) rather than computing and multiplying with \(A^{-1} \text{.}\)
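A sketch of this factor-once variant, using SciPy's `lu_factor`/`lu_solve` (which implement LU with partial pivoting and the corresponding triangular solves); the function name and iteration count are again our own:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_power_method_lu(A, num_iters=100, seed=0):
    """Inverse Power Method that factors P A = L U once (sketch).

    After the one-time factorization, each iteration costs only a
    permutation and two triangular solves: L (U v^(k+1)) = P v^(k).
    """
    lu, piv = lu_factor(A)            # P A = L U, computed once
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)            # pick v^(0) of unit length
    for _ in range(num_iters):
        v = lu_solve((lu, piv), v)    # solve A v^(k+1) = v^(k)
        v /= np.linalg.norm(v)        # normalize
    return v
```

The factorization costs \(O( m^3 ) \) once, after which each iteration costs only \(O( m^2 ) \) for the two triangular solves.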