## Subsection 9.5.2 Summary

###### Definition 9.5.2.1. Eigenvalue, eigenvector, and eigenpair.

Let $A \in \C^{m \times m} \text{.}$ Then $\lambda \in \C$ and nonzero $x \in \C^{m}$ are said to be an eigenvalue and corresponding eigenvector if $A x = \lambda x \text{.}$ The tuple $( \lambda, x )$ is said to be an eigenpair.

For $A \in \mathbb C^{m \times m} \text{,}$ the following are equivalent statements:

• $A$ is nonsingular.

• $A$ has linearly independent columns.

• There does not exist $x \neq 0$ such that $A x = 0 \text{.}$

• $\Null( A ) = \{ 0 \} \text{.}$ (The null space of $A$ is trivial.)

• $\dim(\Null( A )) = 0 \text{.}$

• $\det( A ) \neq 0 \text{.}$

For $A \in \mathbb C^{m \times m} \text{,}$ the following are equivalent statements:

• $\lambda$ is an eigenvalue of $A$

• $( \lambda I - A )$ is singular.

• $( \lambda I - A )$ has linearly dependent columns.

• There exists $x \neq 0$ such that $( \lambda I - A ) x = 0 \text{.}$

• The null space of $\lambda I - A$ is nontrivial.

• $\dim(\Null( \lambda I - A )) > 0 \text{.}$

• $\det( \lambda I - A ) = 0 \text{.}$

###### Definition 9.5.2.2. Spectrum of a matrix.
The set of all eigenvalues of $A$ is denoted by $\Lambda( A )$ and is called the spectrum of $A \text{.}$

The spectral radius of $A \text{,}$ $\rho( A ) \text{,}$ equals the maximum of the magnitudes of its eigenvalues:

\begin{equation*} \rho( A ) = \max_{\lambda \in \Lambda( A )} \vert \lambda \vert. \end{equation*}
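As a quick numerical sanity check, the spectral radius can be computed from the eigenvalues NumPy returns (a minimal sketch; the helper name `spectral_radius` and the example matrix are illustrative choices, not from the text):

```python
import numpy as np

def spectral_radius(A):
    """rho(A): the maximum of |lambda| over all eigenvalues lambda of A."""
    return max(abs(lam) for lam in np.linalg.eigvals(A))

# Example: this matrix has eigenvalues -1 and -2, so rho(A) = 2.
A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])
print(spectral_radius(A))
```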

Some useful facts for $A \in \C^{m \times m} \text{:}$

• $0 \in \Lambda( A )$ if and only if $A$ is singular.

• The eigenvectors corresponding to distinct eigenvalues are linearly independent.

• Let $A$ be nonsingular. Then $( \lambda, x )$ is an eigenpair of $A$ if and only if $( 1/\lambda, x )$ is an eigenpair of $A^{-1} \text{.}$

• $( \lambda, x )$ is an eigenpair of $A$ if and only if $( \lambda - \rho, x )$ is an eigenpair of $A - \rho I \text{.}$

Some useful facts for Hermitian $A \in \C^{m \times m} \text{:}$

• All eigenvalues are real-valued.

• $A$ is HPD if and only if all its eigenvalues are positive.

• If $( \lambda, x )$ and $( \mu, y )$ are both eigenpairs of Hermitian $A$ with $\lambda \neq \mu \text{,}$ then $x$ and $y$ are orthogonal.

###### Definition 9.5.2.6.

The determinant of

\begin{equation*} A = \left( \begin{array}{c c} \alpha_{00} & \alpha_{01} \\ \alpha_{10} & \alpha_{11} \end{array} \right) \end{equation*}

is given by

\begin{equation*} \det( A ) = {\alpha_{00} \alpha_{11} - \alpha_{10} \alpha_{01}}. \end{equation*}

The characteristic polynomial of

\begin{equation*} A = \left( \begin{array}{c c} \alpha_{00} & \alpha_{01} \\ \alpha_{10} & \alpha_{11} \end{array} \right) \end{equation*}

is given by

\begin{equation*} \det( \lambda I - A ) = {( \lambda - \alpha_{00} ) ( \lambda - \alpha_{11} ) - \alpha_{10} \alpha_{01}}. \end{equation*}

This is a second degree polynomial in $\lambda$ and has two roots (multiplicity counted). The eigenvalues of $A$ equal the roots of this characteristic polynomial.

The characteristic polynomial of $A \in \mathbb C^{m \times m}$ is given by

\begin{equation*} \det( \lambda I - A ). \end{equation*}

This is a polynomial in $\lambda$ of degree $m$ and has $m$ roots (multiplicity counted). The eigenvalues of $A$ equal the roots of this characteristic polynomial. Hence, $A$ has $m$ eigenvalues (multiplicity counted).
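A small check of this fact (a sketch; the example matrix and its hand-computed characteristic polynomial are illustrative): for the $A$ below, $\det( \lambda I - A ) = ( \lambda - 4 )( \lambda - 3 ) - 2 \cdot 1 = \lambda^2 - 7 \lambda + 10 \text{,}$ whose roots should match the computed eigenvalues.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Coefficients of det(lambda I - A) = lambda^2 - 7 lambda + 10.
coeffs = [1.0, -7.0, 10.0]
roots = np.sort(np.roots(coeffs).real)
eigs = np.sort(np.linalg.eigvals(A).real)
print(roots, eigs)  # both should be (approximately) [2, 5]
```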

###### Definition 9.5.2.7. Algebraic multiplicity of an eigenvalue.

Let $A \in \Cmxm$ and let $p_m( \lambda )$ be its characteristic polynomial. Then the (algebraic) multiplicity of eigenvalue $\lambda_i$ equals the multiplicity of the corresponding root of the polynomial.

If

\begin{equation*} p_m( \chi ) = \alpha_0 + \alpha_1 \chi + \cdots + \alpha_{m-1} \chi^{m-1} + \chi^m \end{equation*}

and

\begin{equation*} A = \left( \begin{array}{ c c c c c c } - \alpha_{m-1} & - \alpha_{m-2} & - \alpha_{m-3} & \cdots & - \alpha_1 & - \alpha_{0} \\ 1 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 \\ \end{array} \right) \end{equation*}

then

\begin{equation*} p_m( \lambda ) = \det( \lambda I - A ) . \end{equation*}
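A sketch of this construction (the helper name `companion` and the example polynomial are illustrative): the eigenvalues of the matrix built this way should be the roots of $p_m \text{.}$

```python
import numpy as np

def companion(alphas):
    """Companion matrix, in the form shown above, of the monic polynomial
    p(chi) = alphas[0] + alphas[1] chi + ... + alphas[m-1] chi^{m-1} + chi^m."""
    m = len(alphas)
    A = np.zeros((m, m))
    A[0, :] = [-alphas[m - 1 - j] for j in range(m)]  # first row: -alpha_{m-1}, ..., -alpha_0
    A[1:, :m - 1] = np.eye(m - 1)                     # subdiagonal of ones
    return A

# p(chi) = 10 - 7 chi + chi^2 = (chi - 2)(chi - 5), so the eigenvalues are 2 and 5.
A = companion([10.0, -7.0])
print(np.sort(np.linalg.eigvals(A).real))
```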

Let $\lambda$ be an eigenvalue of $A \in \mathbb C^{m \times m}$ and

\begin{equation*} {\cal E}_\lambda( A ) = \{ x \in \C^m \vert A x = \lambda x \}. \end{equation*}

be the set of all eigenvectors of $A$ associated with $\lambda$ plus the zero vector (which is not considered an eigenvector). This set is a subspace.

The elements on the diagonal of a diagonal matrix are its eigenvalues. The elements on the diagonal of a triangular matrix are its eigenvalues.

###### Definition 9.5.2.10.

Given a nonsingular matrix $Y \text{,}$ the transformation $Y^{-1} A Y$ is called a similarity transformation (applied to matrix $A$).

Let $A, B, Y \in \mathbb C^{m \times m} \text{,}$ where $Y$ is nonsingular, $B = Y^{-1} A Y \text{,}$ and $( \lambda, x )$ an eigenpair of $A \text{.}$ Then $( \lambda, Y^{-1} x )$ is an eigenpair of $B \text{.}$
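This invariance of the spectrum can be checked numerically (a sketch; the random matrices are made-up examples, and a $Y$ drawn this way is nonsingular with probability one):

```python
import numpy as np

# Sketch: eigenvalues are preserved under the similarity transformation
# B = Y^{-1} A Y.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Y = rng.standard_normal((4, 4))      # generically nonsingular
B = np.linalg.inv(Y) @ A @ Y

eigs_A = np.sort_complex(np.linalg.eigvals(A))
eigs_B = np.sort_complex(np.linalg.eigvals(B))
print(eigs_A)
print(eigs_B)
```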

###### Definition 9.5.2.12.

Given a unitary matrix $Q \text{,}$ the transformation $Q^H A Q$ is called a unitary similarity transformation (applied to matrix $A$).

###### Definition 9.5.2.16. Diagonalizable matrix.

A matrix $A \in \Cmxm$ is said to be diagonalizable if and only if there exists a nonsingular matrix $X$ and diagonal matrix $D$ such that $X^{-1} A X = D \text{.}$

If $A \in \Cmxm$ has distinct eigenvalues, then it is diagonalizable.

###### Definition 9.5.2.18. Defective matrix.

A matrix $A \in \Cmxm$ that does not have $m$ linearly independent eigenvectors is said to be defective.

###### Definition 9.5.2.20. Geometric multiplicity.

Let $\lambda \in \Lambda( A ) \text{.}$ Then the geometric multiplicity of $\lambda$ is defined to be the dimension of ${\cal E}_\lambda( A )$ defined by

\begin{equation*} {\cal E}_\lambda( A ) = \{ x \in \C^m \vert A x = \lambda x \}. \end{equation*}

In other words, the geometric multiplicity of $\lambda$ equals the number of linearly independent eigenvectors that are associated with $\lambda \text{.}$

###### Definition 9.5.2.21. Jordan Block.

Define the $k \times k$ Jordan block with eigenvalue $\mu$ as

\begin{equation*} J_k( \mu ) = \left( \begin{array}{c c c c c c } \mu & 1 & 0 & \cdots & 0 & 0 \\ 0 & \mu & 1 & \ddots & 0 & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \ddots & \mu & 1 \\ 0 & 0 & 0 & \cdots & 0 & \mu \end{array} \right). \end{equation*}

A practical Power Method for finding the eigenvector associated with the largest eigenvalue (in magnitude):

\begin{equation*} \begin{array}{l} \mbox{Pick } v^{(0)} \mbox{ of unit length} \\ {\bf for~} k = 0, \ldots \\ ~~~v^{(k+1)} = A v^{(k)} \\ ~~~ v^{(k+1)} = v^{(k+1)} / \| v^{(k+1)} \| \\ {\bf endfor} \end{array} \end{equation*}
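The iteration above can be sketched in NumPy as follows (`power_method` is an illustrative name, and the example matrix, with eigenvalues 7 and 2, is made up):

```python
import numpy as np

def power_method(A, num_iters=100, seed=0):
    """Power Method: approximate the eigenvector associated with the
    eigenvalue of A that is largest in magnitude (assumed unique)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)          # pick v^(0) of unit length
    for _ in range(num_iters):
        v = A @ v                   # v^(k+1) = A v^(k)
        v /= np.linalg.norm(v)      # renormalize
    return v

A = np.array([[6.0, 2.0],
              [2.0, 3.0]])
v = power_method(A)
print(v @ A @ v)   # Rayleigh quotient: approximates the dominant eigenvalue, 7
```
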

###### Definition 9.5.2.23. Rayleigh quotient.

If $A \in \Cmxm$ and $x \in \Cm$ is nonzero, then

\begin{equation*} \frac{ x^H A x }{x^H x} \end{equation*}

is known as the Rayleigh quotient.

If $x$ is an eigenvector of $A \text{,}$ then

\begin{equation*} \frac{x^H A x}{x^H x} \end{equation*}

is the associated eigenvalue.
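For example (a trivial sketch with a made-up diagonal matrix):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 5.0]])
x = np.array([0.0, 1.0])        # eigenvector of A with eigenvalue 5
rq = (x @ A @ x) / (x @ x)      # Rayleigh quotient (real case: x^H = x^T)
print(rq)                       # 5.0
```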

###### Definition 9.5.2.24. Convergence of a sequence of scalars.

Let $\alpha_0, \alpha_1 , \alpha_2, \ldots \in \R$ be an infinite sequence of scalars. Then $\alpha_k$ is said to converge to $\alpha$ if

\begin{equation*} \lim_{k \rightarrow \infty} \vert \alpha_k - \alpha \vert = 0 . \end{equation*}

###### Definition 9.5.2.25. Convergence of a sequence of vectors.

Let $x_0, x_1 , x_2, \ldots \in \C^m$ be an infinite sequence of vectors. Then $x_k$ is said to converge to $x$ if for any norm $\| \cdot \|$

\begin{equation*} \lim_{k \rightarrow \infty} \| x_k - x \| = 0. \end{equation*}

###### Definition 9.5.2.26. Rate of convergence.

Let $\alpha_0, \alpha_1 , \alpha_2, \ldots \in \R$ be an infinite sequence of scalars that converges to $\alpha \text{.}$ Then

• $\alpha_k$ is said to converge linearly to $\alpha$ if for sufficiently large $k$

\begin{equation*} \vert \alpha_{k+1} - \alpha \vert \leq C \vert \alpha_{k} - \alpha \vert \end{equation*}

for some constant $C < 1 \text{.}$

• $\alpha_k$ is said to converge superlinearly to $\alpha$ if for sufficiently large $k$

\begin{equation*} \vert \alpha_{k+1} - \alpha \vert \leq C_k \vert \alpha_{k} - \alpha \vert \end{equation*}

with $C_k \rightarrow 0 \text{.}$

• $\alpha_k$ is said to converge quadratically to $\alpha$ if for sufficiently large $k$

\begin{equation*} \vert \alpha_{k+1} - \alpha \vert \leq C \vert \alpha_{k} - \alpha \vert ^2 \end{equation*}

for some constant $C \text{.}$

• $\alpha_k$ is said to converge superquadratically to $\alpha$ if for sufficiently large $k$

\begin{equation*} \vert \alpha_{k+1} - \alpha \vert \leq C_k \vert \alpha_{k} - \alpha \vert ^2 \end{equation*}

with $C_k \rightarrow 0 \text{.}$

• $\alpha_k$ is said to converge cubically to $\alpha$ if for large enough $k$

\begin{equation*} \vert \alpha_{k+1} - \alpha \vert \leq C \vert \alpha_{k} - \alpha \vert ^3 \end{equation*}

for some constant $C \text{.}$

The convergence of the Power Method is linear.

A practical Inverse Power Method for finding the eigenvector associated with the smallest eigenvalue (in magnitude):

\begin{equation*} \begin{array}{l} \mbox{Pick } v^{(0)} \mbox{ of unit length} \\ {\bf for~} k = 0, \ldots \\ ~~~v^{(k+1)} = A^{-1} v^{(k)} \\ ~~~ v^{(k+1)} = v^{(k+1)} / \| v^{(k+1)} \| \\ {\bf endfor} \end{array} \end{equation*}

The convergence of the Inverse Power Method is linear.
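In code, the Inverse Power Method is typically implemented with a linear solve rather than an explicit inverse (a sketch; `inverse_power_method` is an illustrative name and the example matrix, with eigenvalues 7 and 2, is made up):

```python
import numpy as np

def inverse_power_method(A, num_iters=100, seed=0):
    """Inverse Power Method: approximate the eigenvector associated with
    the eigenvalue of A that is smallest in magnitude (assumed unique)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)              # pick v^(0) of unit length
    for _ in range(num_iters):
        v = np.linalg.solve(A, v)       # v^(k+1) = A^{-1} v^(k), via a solve
        v /= np.linalg.norm(v)          # renormalize
    return v

A = np.array([[6.0, 2.0],
              [2.0, 3.0]])
v = inverse_power_method(A)
print(v @ A @ v)   # approximates the smallest eigenvalue, 2
```

In practice one would factor $A$ once (e.g., via LU factorization) and reuse that factorization in every iteration, rather than solving from scratch each time.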

A practical Rayleigh Quotient Iteration for computing an eigenvector of $A$ (which eigenpair it converges to depends on the starting vector):

\begin{equation*} \begin{array}{l} \mbox{Pick } v^{(0)} \mbox{ of unit length} \\ {\bf for~} k = 0, \ldots \\ ~~~ \rho_k = v^{(k)\!H} A v^{(k)} \\ ~~~ v^{(k+1)} = ( A - \rho_k I )^{-1} v^{(k)} \\ ~~~ v^{(k+1)} = v^{(k+1)} / \| v^{(k+1)} \| \\ {\bf endfor} \end{array} \end{equation*}

The convergence of the Rayleigh Quotient Iteration is quadratic (eventually, the number of correct digits doubles in each iteration). If $A$ is Hermitian, the convergence is cubic (eventually, the number of correct digits triples in each iteration).
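A sketch of the iteration (the function name and the example matrix are illustrative; the shifted system is solved rather than inverted, and the solve is guarded because $A - \rho_k I$ becomes singular as $\rho_k$ reaches an eigenvalue):

```python
import numpy as np

def rayleigh_quotient_iteration(A, num_iters=20, seed=0):
    """Rayleigh Quotient Iteration; returns (v, rho) with rho = v^H A v."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)                  # pick v^(0) of unit length
    I = np.eye(A.shape[0])
    for _ in range(num_iters):
        rho = v @ A @ v                     # Rayleigh quotient (real case)
        try:
            v = np.linalg.solve(A - rho * I, v)
        except np.linalg.LinAlgError:
            break                           # rho is (numerically) an eigenvalue
        v /= np.linalg.norm(v)              # renormalize
    return v, v @ A @ v

A = np.array([[6.0, 2.0],
              [2.0, 3.0]])                  # eigenvalues 7 and 2
v, lam = rayleigh_quotient_iteration(A)
print(lam)   # converges to one of the eigenvalues of A
```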