
Subsection 2.5.2 Summary

Given \(x, y \in \Cm \)

  • their dot product (inner product) is defined as

    \begin{equation*} x^H y = \overline x^T y = \overline{x^T} y = \overline \chi_0 \psi_0 + \overline \chi_1 \psi_1 + \cdots + \overline \chi_{m-1} \psi_{m-1} = \sum_{i=0}^{m-1} \overline \chi_i \psi_i . \end{equation*}
  • These vectors are said to be orthogonal (perpendicular) iff \(x^H y = 0 \text{.}\)

  • The component of \(y \) in the direction of \(x \) is given by

    \begin{equation*} \frac{x^H y}{x^H x} x = \frac{x x^H }{x^H x} y. \end{equation*}

    The matrix that projects a vector onto the space spanned by \(x \) is given by

    \begin{equation*} \frac{x x^H } {x^H x}. \end{equation*}
  • The component of \(y \) orthogonal to \(x \) is given by

    \begin{equation*} y - \frac{x^H y}{x^H x} x = \left( I - \frac{x x^H }{x^H x}\right) y. \end{equation*}

    Thus, the matrix that projects a vector onto the space orthogonal to \(x \) is given by

    \begin{equation*} I - \frac{x x^H } {x^H x}. \end{equation*}
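A quick NumPy sketch of these formulas (the vectors here are made up for illustration; NumPy's `vdot` conjugates its first argument, so `np.vdot(x, y)` computes \(x^H y \)):

```python
import numpy as np

# Hypothetical vectors in C^3, chosen only for illustration.
x = np.array([1 + 1j, 2 - 1j, 0.5j])
y = np.array([3 + 0j, -1j, 2 + 2j])

xHy = np.vdot(x, y)            # x^H y (vdot conjugates the first argument)
xHx = np.vdot(x, x).real       # x^H x is real and positive for x != 0

y_parallel = (xHy / xHx) * x             # component of y in the direction of x
P = np.outer(x, x.conj()) / xHx          # projector x x^H / (x^H x)
y_perp = y - y_parallel                  # (I - x x^H / (x^H x)) y

assert np.allclose(P @ y, y_parallel)    # both formulas agree
assert np.isclose(np.vdot(x, y_perp), 0) # y_perp is orthogonal to x
```

Note that \(P \) is idempotent (\(P^2 = P \)), as any projector must be.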

Given \(u,v \in \Cm \) with \(u \) of unit length

  • The component of \(v \) in the direction of \(u \) is given by

    \begin{equation*} u^H v u = u u^H v. \end{equation*}
  • The matrix that projects a vector onto the space spanned by \(u \) is given by

    \begin{equation*} u u^H \end{equation*}
  • The component of \(v \) orthogonal to \(u \) is given by

    \begin{equation*} v - u^H v u = \left( I - u u^H \right) v. \end{equation*}
  • The matrix that projects a vector onto the space that is orthogonal to \(u \) is given by
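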

    \begin{equation*} I - u u^H \end{equation*}
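When \(u \) has unit length the denominators drop out, which the following sketch verifies (randomly generated vectors, for illustration only):

```python
import numpy as np

# Hypothetical unit vector u and arbitrary v in C^3.
rng = np.random.default_rng(0)
u = rng.standard_normal(3) + 1j * rng.standard_normal(3)
u = u / np.linalg.norm(u)                # normalize so that u^H u = 1
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)

P = np.outer(u, u.conj())                # projector u u^H (no division needed)
v_parallel = np.vdot(u, v) * u           # (u^H v) u
v_perp = v - v_parallel                  # (I - u u^H) v

assert np.allclose(P @ v, v_parallel)
assert np.isclose(np.vdot(u, v_perp), 0)
```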

Let \(u_0, u_1, \ldots, u_{n-1} \in \C^m \text{.}\) These vectors are said to be mutually orthonormal if for all \(0 \leq i,j \lt n \)

\begin{equation*} u_i^H u_j = \left\{ \begin{array}{c l} 1 \amp {\rm if~} i = j \\ 0 \amp {\rm otherwise} \end{array} \right. . \end{equation*}

Let \(Q \in \C^{m \times n} \) (with \(n \leq m \)). Then \(Q \) is said to be

  • an orthonormal matrix iff \(Q^H Q = I \text{.}\)

  • a unitary matrix iff \(Q^H Q = I \) and \(m = n \text{.}\)

  • an orthogonal matrix iff it is a unitary matrix and is real-valued.

Let \(Q \in \C^{m \times n} \) (with \(n \leq m \)). Then \(Q = \left( \begin{array}{c | c | c} q_0 \amp \cdots \amp q_{n-1} \end{array} \right) \) is orthonormal iff \(\{ q_0, \ldots, q_{n-1} \} \) are mutually orthonormal.
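One convenient way to produce an orthonormal matrix numerically is the QR factorization of a (here randomly generated) matrix; this sketch checks \(Q^H Q = I \):

```python
import numpy as np

# An orthonormal Q (n <= m) from the QR factorization of a random complex matrix.
rng = np.random.default_rng(1)
m, n = 5, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
Q, _ = np.linalg.qr(A)                   # columns of Q are mutually orthonormal

# Q^H Q = I captures q_i^H q_j = 1 if i = j, 0 otherwise, all at once.
assert np.allclose(Q.conj().T @ Q, np.eye(n))
```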

Definition 2.5.2.1. Unitary matrix.

Let \(U \in \C^{m \times m} \text{.}\) Then \(U \) is said to be a unitary matrix if and only if \(U^H U = I \) (the identity).

If \(U, V \in \C^{m \times m} \) are unitary, then

  • \(U^H U = I \text{.}\)

  • \(U U^H = I \text{.}\)

  • \(U^{-1} = U^H \text{.}\)

  • \(U^H \) is unitary.

  • \(U V \) is unitary.
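These properties can be verified numerically; a sketch using unitary matrices obtained from QR factorizations of random complex matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 4
# QR of a random square complex matrix yields a unitary factor.
U, _ = np.linalg.qr(rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)))

I = np.eye(m)
assert np.allclose(U.conj().T @ U, I)              # U^H U = I
assert np.allclose(U @ U.conj().T, I)              # U U^H = I
assert np.allclose(np.linalg.inv(U), U.conj().T)   # U^{-1} = U^H
assert np.allclose(U @ U.conj().T.conj().T, U @ U) # (U^H)^H = U, so U^H is unitary
assert np.allclose((U @ V).conj().T @ (U @ V), I)  # U V is unitary
```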

If \(U \in \Cmxm \) and \(V \in \C^{n \times n} \) are unitary, \(x \in \Cm \text{,}\) and \(A \in \Cmxn \text{,}\) then

  • \(\| U x \|_2 = \| x \|_2 \text{.}\)

  • \(\| U^H A \|_2 = \| U A \|_2 = \| A V \|_2 = \| A V^H \|_2 = \| U^H A V \|_2 = \| U A V^H \|_2 = \| A \|_2 \text{.}\)

  • \(\| U^H A \|_F = \| U A \|_F = \| A V \|_F = \| A V^H \|_F = \| U^H A V \|_F = \| U A V^H \|_F = \| A \|_F \text{.}\)

  • \(\| U\|_2 = 1 \)

  • \(\kappa_2( U ) = 1 \)
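A numerical sketch of these norm invariances (random matrices for illustration; `np.linalg.norm(A, 2)` is the induced 2-norm, `np.linalg.cond` the 2-norm condition number):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 4, 3
U, _ = np.linalg.qr(rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
x = rng.standard_normal(m) + 1j * rng.standard_normal(m)
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# Multiplication by a unitary matrix preserves vector and matrix norms.
assert np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x))              # ||Ux||_2 = ||x||_2
assert np.isclose(np.linalg.norm(U @ A @ V.conj().T, 2),
                  np.linalg.norm(A, 2))                                  # 2-norm invariant
assert np.isclose(np.linalg.norm(U @ A @ V.conj().T, 'fro'),
                  np.linalg.norm(A, 'fro'))                              # F-norm invariant
assert np.isclose(np.linalg.norm(U, 2), 1.0)                             # ||U||_2 = 1
assert np.isclose(np.linalg.cond(U), 1.0)                                # kappa_2(U) = 1
```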

Examples of unitary matrices:

  • Rotation in 2D: \(\left( \begin{array}{r r} c \amp -s \\ s \amp c \end{array} \right) \text{.}\)

  • Reflection: \(I - 2 u u^H \) where \(u \in \Cm\) and \(\| u \|_2 = 1 \text{.}\)
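Both examples are easy to check numerically; a sketch (the angle and the vector \(u \) are arbitrary choices for illustration):

```python
import numpy as np

# 2D rotation with c = cos(theta), s = sin(theta).
theta = 0.7
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s], [s, c]])
assert np.allclose(R.T @ R, np.eye(2))          # orthogonal (real unitary)

# Reflector I - 2 u u^H for a unit vector u.
rng = np.random.default_rng(4)
u = rng.standard_normal(3) + 1j * rng.standard_normal(3)
u = u / np.linalg.norm(u)
H = np.eye(3) - 2 * np.outer(u, u.conj())
assert np.allclose(H.conj().T @ H, np.eye(3))   # unitary
assert np.allclose(H @ u, -u)                   # reflects u to -u
```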

Change of orthonormal basis: If \(x \in \Cm \) and \(U = \left( \begin{array}{c | c | c} u_0 \amp \cdots \amp u_{m-1} \end{array} \right) \) is unitary, then

\begin{equation*} x = (u_0^H x) u_0 + \cdots + (u_{m-1}^H x) u_{m-1} = \left( \begin{array}{c|c|c} u_0 \amp \cdots \amp u_{m-1} \end{array} \right) \underbrace{ \left( \begin{array}{c} u_0^H x \\ \vdots \\ u_{m-1}^H x \end{array} \right) }_{U^H x} = U U^H x. \end{equation*}
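A sketch of this change of basis with a randomly generated unitary \(U \) (illustration only): the coefficients \(u_i^H x \) are collected in \(U^H x \), and summing the scaled basis vectors recovers \(x \).

```python
import numpy as np

rng = np.random.default_rng(5)
m = 4
U, _ = np.linalg.qr(rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)))
x = rng.standard_normal(m) + 1j * rng.standard_normal(m)

# Coefficient of x along u_i is u_i^H x; all coefficients at once: U^H x.
coeffs = U.conj().T @ x
x_rebuilt = sum(coeffs[i] * U[:, i] for i in range(m))
assert np.allclose(x_rebuilt, x)     # x = U (U^H x) = U U^H x
```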

Let \(A \in \Cnxn \) be nonsingular and \(x \in \Cn \) a nonzero vector. Consider

\begin{equation*} y = A x \quad \mbox{and} \quad y + \delta\!y = A ( x + \delta\!x ). \end{equation*}

Then

\begin{equation*} \frac{\| \delta\!y \|}{\| y \|} \leq \underbrace{\| A \| \, \| A^{-1} \|}_{\kappa( A )} \, \frac{\| \delta\!x \|}{\| x \|}, \end{equation*}

where \(\| \cdot \| \) is an induced matrix norm.
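A numerical illustration of this bound in the 2-norm, with a randomly generated \(A \), \(x \), and a small perturbation \(\delta\!x \) (all made up for the sketch):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
A = rng.standard_normal((n, n))       # nonsingular with probability 1
x = rng.standard_normal(n)
dx = 1e-6 * rng.standard_normal(n)    # small perturbation of x

y = A @ x
dy = A @ (x + dx) - y                 # = A dx
kappa = np.linalg.cond(A, 2)          # ||A||_2 ||A^{-1}||_2

lhs = np.linalg.norm(dy) / np.linalg.norm(y)
rhs = kappa * np.linalg.norm(dx) / np.linalg.norm(x)
assert lhs <= rhs + 1e-12             # relative change in y bounded by kappa(A)
                                      # times relative change in x
```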

Let \(A \in \C^{m \times n} \) and \(A = U \Sigma V^H \) its SVD with

\begin{equation*} U = \left( \begin{array}{ c | c } U_L \amp U_R \end{array} \right) = \left( \begin{array}{ c | c | c } u_0 \amp \cdots \amp u_{m-1} \end{array} \right), \end{equation*}
\begin{equation*} V = \left( \begin{array}{ c | c } V_L \amp V_R \end{array} \right) = \left( \begin{array}{ c | c | c } v_0 \amp \cdots \amp v_{n-1} \end{array} \right), \end{equation*}

and

\begin{equation*} \Sigma = \FlaTwoByTwo{ \Sigma_{TL} }{ 0 }{ 0 }{ 0 } , \mbox{ where } \Sigma_{TL} = \left( \begin{array}{c c c c} \sigma_0 \amp 0 \amp \cdots \amp 0 \\ 0 \amp \sigma_1 \amp \cdots \amp 0 \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ 0 \amp 0 \amp \cdots \amp \sigma_{r-1} \end{array} \right) ~~~{\rm and}~~~~ \sigma_0 \geq \sigma_1 \geq \cdots \geq \sigma_{r-1} \gt 0. \end{equation*}

Here \(U_L \in \C^{m \times r} \text{,}\) \(V_L \in \C^{n \times r} \) and \(\Sigma_{TL} \in \R^{r \times r } \text{.}\) Then

  • \(\| A \|_2 = \sigma_0 \text{.}\) (The 2-norm of a matrix equals the largest singular value.)

  • \(\rank( A ) = r \text{.}\)

  • \(\Col( A ) = \Col( U_L ) \text{.}\)

  • \(\Null( A ) = \Col( V_R ) \text{.}\)

  • \(\Rowspace( A ) = \Col( V_L ) \text{.}\)

  • Left null-space of \(A = \Col( U_R ) \text{.}\)

  • \(A^H = V \Sigma^T U^H \text{,}\) which is an SVD of \(A^H \text{.}\)

  • Reduced SVD: \(A = U_L \Sigma_{TL} V_L^H \text{.}\)

  • \begin{equation*} A = \sigma_0 u_0 v_0^H + \sigma_1 u_1 v_1^H + \cdots + \sigma_{r-1} u_{r-1} v_{r-1}^H , \end{equation*}

    a sum of \(r \) rank-one matrices (each \(\sigma_j u_j v_j^H \) is the outer product of a column vector and a row vector, scaled by \(\sigma_j \)).
  • Reduced SVD: \(A^H = V_L \Sigma_{TL} U_L^H \text{.}\)

  • If \(m \times m \) matrix \(A \) is nonsingular: \(A^{-1} = V \Sigma^{-1} U^H \text{.}\)

  • If \(A \in \Cmxm \) then \(A \) is nonsingular if and only if \(\sigma_{m-1} \neq 0 \text{.}\)

  • If \(A \in \Cmxm \) is nonsingular then \(\kappa_2( A ) = \sigma_0 / \sigma_{m-1} \text{.}\)

  • (Left) pseudo inverse: if \(A \) has linearly independent columns, then \(A^\dagger = ( A^H A )^{-1} A^H = V_L \Sigma_{TL}^{-1} U_L^H \text{.}\)

  • \(v_0 \) is the direction of maximal magnification.

  • \(v_{n-1} \) is the direction of minimal magnification.

  • If \(n \leq m \text{,}\) then \(A v_j = \sigma_j u_j \text{,}\) for \(0 \leq j \lt n \text{.}\)
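Several of these SVD facts can be checked at once with NumPy's `svd` (which returns \(U \), the singular values in descending order, and \(V^H \)); a sketch on a randomly generated \(A \):

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 5, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

U, s, Vh = np.linalg.svd(A)           # full SVD: A = U Sigma V^H, s descending
r = int(np.sum(s > 1e-12))            # numerical rank

assert np.isclose(np.linalg.norm(A, 2), s[0])   # ||A||_2 = sigma_0
assert r == np.linalg.matrix_rank(A)            # rank(A) = r

# Reduced SVD: A = U_L Sigma_TL V_L^H.
UL, SigmaTL, VLh = U[:, :r], np.diag(s[:r]), Vh[:r, :]
assert np.allclose(UL @ SigmaTL @ VLh, A)

# A v_j = sigma_j u_j for 0 <= j < n (rows of Vh are v_j^H).
for j in range(n):
    assert np.allclose(A @ Vh[j].conj(), s[j] * U[:, j])

# Left pseudo inverse V_L Sigma_TL^{-1} U_L^H (columns independent here).
A_dagger = VLh.conj().T @ np.diag(1 / s[:r]) @ UL.conj().T
assert np.allclose(A_dagger, np.linalg.pinv(A))
```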