
Subsection 9.2.3 More properties of eigenvalues and vectors


This unit reviews various properties of eigenvalues and eigenvectors through a sequence of homework exercises.

Homework 9.2.3.1.

Let \(\lambda \) be an eigenvalue of \(A \in \mathbb C^{m \times m} \) and let

\begin{equation*} {\cal E}_\lambda( A ) = \{ x \in \C^m \vert A x = \lambda x \} \end{equation*}

be the set of all eigenvectors of \(A \) associated with \(\lambda \text{,}\) plus the zero vector (which is not itself considered an eigenvector). Show that \({\cal E}_\lambda( A ) \) is a subspace.

Solution

A nonempty set \({\cal S} \subset \Cm \) is a subspace if and only if for all \(\alpha \in \C \) and \(x,y \in \Cm \) two conditions hold:

  • \(x \in {\cal S} \) implies that \(\alpha x \in {\cal S} \text{.}\)

  • \(x, y \in {\cal S} \) implies that \(x + y \in {\cal S} \text{.}\)

By construction, \({\cal E}_\lambda( A ) \) contains the zero vector and hence is nonempty. We now verify the two conditions:

  • \(x \in {\cal E}_{\lambda}( A )\) implies \(\alpha x \in {\cal E}_{\lambda}( A )\text{:}\)

    \(x \in {\cal E}_{\lambda}(A) \) means that \(A x = \lambda x \text{.}\) If \(\alpha \in \C \text{,}\) then \(\alpha A x = \alpha \lambda x \) which, by commutativity and associativity, means that \(A ( \alpha x ) = \lambda ( \alpha x ) \text{.}\) Hence \((\alpha x) \in {\cal E}_{\lambda}(A) \text{.}\)

  • \(x,y \in {\cal E}_{\lambda}( A )\) implies \(x+y \in {\cal E}_{\lambda}( A )\text{:}\)

    \begin{equation*} A( x + y ) = A x + A y = \lambda x + \lambda y = \lambda( x + y ) . \end{equation*}

While there are infinitely many eigenvectors associated with an eigenvalue, the fact that they form a subspace (provided the zero vector is added) means that they can be described by a finite number of vectors, namely a basis for that subspace.
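These closure properties are easy to check numerically. The following is a minimal sketch (in Python, assuming NumPy is available; neither is prescribed by this unit) that verifies, for a matrix with a repeated eigenvalue, that a linear combination of two eigenvectors associated with that eigenvalue is again an eigenvector:

    import numpy as np

    # A 3x3 matrix for which lambda = 2 has two linearly independent
    # eigenvectors: the unit basis vectors e_0 and e_1.
    A = np.diag([2.0, 2.0, 5.0])
    lam = 2.0
    x = np.array([1.0, 0.0, 0.0])
    y = np.array([0.0, 1.0, 0.0])

    # Closure under scalar multiplication and vector addition:
    alpha = 3.0
    z = alpha * x + y
    # z is in E_lam(A) exactly when A z = lam z.
    assert np.allclose(A @ z, lam * z)
    print("alpha * x + y is again an eigenvector associated with", lam)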

Homework 9.2.3.2.

Let \(D \in \Cmxm \) be a diagonal matrix. Give all eigenvalues of \(D \text{.}\) For each eigenvalue, give a convenient eigenvector.

Solution

Let

\begin{equation*} D = \left( \begin{array}{c c c c} \delta_0 \amp 0 \amp \cdots \amp 0 \\ 0 \amp \delta_1 \amp \cdots \amp 0 \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp \cdots \amp \delta_{m-1} \end{array} \right). \end{equation*}

Then

\begin{equation*} \lambda I - D = \left( \begin{array}{c c c c} \lambda - \delta_0 \amp 0 \amp \cdots \amp 0 \\ 0 \amp \lambda - \delta_1 \amp \cdots \amp 0 \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp \cdots \amp \lambda - \delta_{m-1} \end{array} \right) \end{equation*}

is singular if and only if \(\lambda = \delta_i \) for some \(i \in \{ 0, \ldots , m-1 \} \text{.}\) Hence \(\Lambda( D ) = \{ \delta_0, \delta_1, \ldots, \delta_{m-1} \} \text{.}\)

Now,

\begin{equation*} D e_j = \mbox{ the column of } D \mbox{ indexed with } j = \delta_j e_j \end{equation*}

and hence \(e_j \) is an eigenvector associated with \(\delta_j \text{.}\)
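As a numerical sanity check, the sketch below (again relying on NumPy as an assumed tool) confirms that the diagonal elements of a diagonal matrix are its eigenvalues and that the unit basis vectors are corresponding eigenvectors:

    import numpy as np

    delta = np.array([4.0, -1.0, 3.0])
    D = np.diag(delta)

    # The computed eigenvalues match the diagonal elements
    # (np.linalg.eigvals returns them in no particular order).
    assert np.allclose(np.sort(np.linalg.eigvals(D)), np.sort(delta))

    # Each unit basis vector e_j satisfies D e_j = delta_j e_j.
    for j in range(3):
        e_j = np.eye(3)[:, j]
        assert np.allclose(D @ e_j, delta[j] * e_j)
    print("diagonal elements are eigenvalues; the e_j are eigenvectors")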

Homework 9.2.3.3.

Compute the eigenvalues and corresponding eigenvectors of

\begin{equation*} A = \left(\begin{array}{rrr} -2 \amp 3 \amp -7 \\ 0 \amp 1 \amp 1 \\ 0 \amp 0 \amp 2 \\ \end{array}\right) \end{equation*}

(Recall: the solution is not unique.)

Solution

Since \(A \) is upper triangular, the eigenvalues can be found on its diagonal: \(\Lambda( A ) = \{ -2, 1, 2 \} \text{.}\)

  • To find an eigenvector associated with \(-2\text{,}\) form

    \begin{equation*} (-2) I - A = \left( \begin{array}{rrr} 0 \amp -3 \amp 7 \\ 0 \amp -3 \amp -1 \\ 0 \amp 0 \amp -4 \\ \end{array}\right) \end{equation*}

    and look for a vector in the null space of this matrix. By examination,

    \begin{equation*} \left( \begin{array}{c} 1 \\ 0 \\ 0 \end{array} \right) \end{equation*}

    is in the null space of this matrix and hence an eigenvector of \(A \text{.}\)

  • To find an eigenvector associated with \(1\text{,}\) form

    \begin{equation*} (1) I - A = \left(\begin{array}{rrr} 3 \amp -3 \amp 7 \\ 0 \amp 0 \amp -1 \\ 0 \amp 0 \amp -1 \\ \end{array}\right) \end{equation*}

    and look for a vector in the null space of this matrix. Given where the zero appears on the diagonal, we notice that a vector of the form

    \begin{equation*} \left( \begin{array}{c} \chi_0 \\ 1 \\ 0 \end{array} \right) \end{equation*}

    is in the null space if \(\chi_0 \) is chosen appropriately. This means that

    \begin{equation*} 3 \chi_0 - 3 (1) = 0 \end{equation*}

    and hence \(\chi_0 = 1 \) so that

    \begin{equation*} \left( \begin{array}{r} 1 \\ 1 \\ 0 \end{array} \right) \end{equation*}

    is in the null space of this matrix and hence an eigenvector of \(A \text{.}\)

  • To find an eigenvector associated with \(2\text{,}\) form

    \begin{equation*} (2) I - A = \left(\begin{array}{rrr} 4 \amp -3 \amp 7 \\ 0 \amp 1 \amp -1 \\ 0 \amp 0 \amp 0 \\ \end{array}\right) \end{equation*}

    and look for a vector in the null space of this matrix. Given where the zero appears on the diagonal, we notice that a vector of the form

    \begin{equation*} \left( \begin{array}{c} \chi_0 \\ \chi_1 \\ 1 \end{array} \right) \end{equation*}

    is in the null space if \(\chi_0 \) and \(\chi_1\) are chosen appropriately. This means that

    \begin{equation*} \chi_1 - 1(1) = 0 \end{equation*}

    and hence \(\chi_1 = 1 \text{.}\) Also,

    \begin{equation*} 4 \chi_0 - 3 (1) + 7 ( 1) = 0 \end{equation*}

    so that \(\chi_0 = -1 \text{.}\) Hence

    \begin{equation*} \left( \begin{array}{r} -1 \\ 1 \\ 1 \end{array} \right) \end{equation*}

    is in the null space of this matrix and hence an eigenvector of \(A \text{.}\)
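The three eigenpairs computed by hand can be verified with a few lines of NumPy (a sketch under the same assumption as before; the text itself does not prescribe this tool):

    import numpy as np

    A = np.array([[-2.0, 3.0, -7.0],
                  [ 0.0, 1.0,  1.0],
                  [ 0.0, 0.0,  2.0]])

    # The eigenpairs found above; for each, A x must equal lambda x.
    pairs = [(-2.0, np.array([ 1.0, 0.0, 0.0])),
             ( 1.0, np.array([ 1.0, 1.0, 0.0])),
             ( 2.0, np.array([-1.0, 1.0, 1.0]))]

    for lam, x in pairs:
        assert np.allclose(A @ x, lam * x)
    print("all three eigenpairs check out")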

Homework 9.2.3.4.

Let \(U \in \Cmxm \) be an upper triangular matrix. Give all eigenvalues of \(U \text{.}\) For each eigenvalue, give a convenient eigenvector.

Solution

Let

\begin{equation*} U = \left( \begin{array}{c c c c} \upsilon_{0,0} \amp \upsilon_{0,1} \amp \cdots \amp \upsilon_{0,m-1} \\ 0 \amp \upsilon_{1,1} \amp \cdots \amp \upsilon_{1,m-1} \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp \cdots \amp \upsilon_{m-1,m-1} \end{array} \right). \end{equation*}

Then

\begin{equation*} \lambda I - U = \left( \begin{array}{c c c c} \lambda - \upsilon_{0,0} \amp - \upsilon_{0,1} \amp \cdots \amp - \upsilon_{0,m-1} \\ 0 \amp \lambda - \upsilon_{1,1} \amp \cdots \amp - \upsilon_{1,m-1} \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp \cdots \amp \lambda - \upsilon_{m-1,m-1} \end{array} \right) \end{equation*}

is singular if and only if \(\lambda = \upsilon_{i,i} \) for some \(i \in \{ 0, \ldots , m-1 \} \text{.}\) Hence \(\Lambda( U ) = \{ \upsilon_{0,0}, \upsilon_{1,1}, \ldots, \upsilon_{m-1,m-1} \} \text{.}\)

Let \(\lambda \) be an eigenvalue of \(U \text{.}\) Things get a little tricky if \(\lambda \) has multiplicity greater than one. Partition

\begin{equation*} U = \left( \begin{array}{c c c} U_{00} \amp u_{01} \amp U_{02} \\ 0 \amp \upsilon_{11} \amp u_{12}^T \\ 0 \amp 0 \amp U_{22} \end{array} \right) \end{equation*}

where \(\upsilon_{11} = \lambda \text{.}\) We are looking for \(x \neq 0 \) such that \(( \lambda I - U ) x = 0 \) or, partitioning \(x \text{,}\)

\begin{equation*} \left( \begin{array}{c c c} \upsilon_{11} I - U_{00} \amp - u_{01} \amp - U_{02} \\ 0 \amp 0 \amp - u_{12}^T \\ 0 \amp 0 \amp \upsilon_{11} I - U_{22} \end{array} \right) \left( \begin{array}{c} x_0 \\ \chi_1 \\ x_2 \end{array} \right) = \left( \begin{array}{c} 0 \\ 0 \\ 0 \end{array} \right). \end{equation*}

If we choose \(x_2 = 0 \) and \(\chi_1 =1 \text{,}\) then

\begin{equation*} ( \upsilon_{11} I - U_{00} ) x_0 - u_{01} = 0 \end{equation*}

and hence \(x_0 \) must satisfy

\begin{equation*} ( \upsilon_{11} I - U_{00} ) x_0 = u_{01}. \end{equation*}

If \(\upsilon_{11} I - U_{00} \) is nonsingular, then there is a unique solution to this equation, and

\begin{equation*} \left( \begin{array}{c} ( \upsilon_{11} I - U_{00} )^{-1} u_{01} \\ 1 \\ 0 \end{array} \right) \end{equation*}

is the desired eigenvector. HOWEVER, since \(U_{00} \) is itself upper triangular, \(\upsilon_{11} I - U_{00} \) is nonsingular if and only if \(\upsilon_{11} \) does not appear on the diagonal of \(U_{00} \text{.}\) This means that the partitioning

\begin{equation*} U = \left( \begin{array}{c c c} U_{00} \amp u_{01} \amp U_{02} \\ 0 \amp \upsilon_{11} \amp u_{12}^T \\ 0 \amp 0 \amp U_{22} \end{array} \right) \end{equation*}

must be such that \(\upsilon_{11} \) is the FIRST diagonal element that equals \(\lambda \text{.}\)
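The construction translates directly into a short computation. The sketch below (in NumPy; the function name tri_eigvec is made up for illustration) computes an eigenvector of an upper triangular matrix associated with the diagonal element at index k, under the assumption that this is the first occurrence of that eigenvalue on the diagonal:

    import numpy as np

    def tri_eigvec(U, k):
        # Return an eigenvector of upper triangular U associated with
        # U[k, k], assuming U[k, k] does not appear earlier on the
        # diagonal (so that U[k, k] * I - U00 is nonsingular).
        m = U.shape[0]
        lam = U[k, k]
        x = np.zeros(m)
        x[k] = 1.0                     # chi_1 = 1
        if k > 0:
            # Solve ( upsilon_11 I - U00 ) x0 = u01 for the leading
            # part of the eigenvector.
            x[:k] = np.linalg.solve(lam * np.eye(k) - U[:k, :k], U[:k, k])
        return x                       # x2 = 0 implicitly

    U = np.array([[2.0, 1.0, 3.0],
                  [0.0, 5.0, 2.0],
                  [0.0, 0.0, 9.0]])
    for k in range(3):
        x = tri_eigvec(U, k)
        assert np.allclose(U @ x, U[k, k] * x)
    print("each diagonal element yields a valid eigenvector")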

In the next week, we will see that practical algorithms for computing the eigenvalues and eigenvectors of a square matrix morph that matrix into an upper triangular matrix via a sequence of transforms that preserve eigenvalues. The eigenvectors of that triangular matrix can then be computed using techniques similar to those in the solution to the last homework. Once those have been computed, they can be "back transformed" into the eigenvectors of the original matrix.