Subsection 2.2.7 Why we love unitary matrices

In Subsection 1.4.1, we looked at how sensitive solving

\begin{equation*} A x = b \end{equation*}

is to a change in the right-hand side

\begin{equation*} A ( x + \delta\!x ) = b + \delta\!b \end{equation*}

when \(A \) is nonsingular. We concluded that

\begin{equation*} \frac{\| \delta\!x \|}{\| x \|}\leq \begin{array}[t]{c} \underbrace{\| A \| \| A^{-1} \|} \\ \kappa( A ) \end{array} \frac{\| \delta\!b \|}{\| b \|}, \end{equation*}

when an induced matrix norm is used. Let's look instead at how sensitive matrix-vector multiplication is.
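To make the bound concrete, here is a small NumPy sketch (illustrative only, not from the text; the 5 by 5 Hilbert matrix, the perturbation size, and the random seed are arbitrary choices) that computes \(\kappa_2( A ) \) and compares the observed relative change in the solution to the bound.

# A minimal numerical sketch of the bound
#   || dx || / || x ||  <=  kappa(A) || db || / || b ||
# The Hilbert matrix is just an illustrative ill-conditioned choice.
import numpy as np

n = 5
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])  # Hilbert matrix
kappa = np.linalg.cond(A, 2)                 # kappa_2(A) = ||A||_2 ||A^{-1}||_2

x = np.ones(n)
b = A @ x

rng = np.random.default_rng(0)
db = 1e-10 * rng.standard_normal(n)          # small perturbation of the right-hand side
dx = np.linalg.solve(A, b + db) - x          # resulting perturbation of the solution

lhs = np.linalg.norm(dx) / np.linalg.norm(x)
rhs = kappa * np.linalg.norm(db) / np.linalg.norm(b)
print(f"kappa_2(A) = {kappa:.2e}")
print(f"||dx||/||x|| = {lhs:.2e}  <=  kappa(A) ||db||/||b|| = {rhs:.2e}")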

Homework 2.2.7.1.

Let \(A \in \Cnxn \) be nonsingular and \(x \in \Cn \) a nonzero vector. Consider

\begin{equation*} y = A x \quad \mbox{and} \quad y + \delta\!y = A ( x + \delta\!x ). \end{equation*}

Show that

\begin{equation*} \frac{\| \delta\!y \|}{\| y \|}\leq \begin{array}[t]{c} \underbrace{\| A \| \| A^{-1} \|} \\ \kappa( A ) \end{array} \frac{\| \delta\!x \|}{\| x \|}, \end{equation*}

where \(\| \cdot \| \) is an induced matrix norm.

Solution

Since \(x = A^{-1} y \) we know that

\begin{equation*} \| x \| \leq \| A^{-1} \| \| y \| \end{equation*}

and hence

\begin{equation} \frac{1}{\| y \|} \leq \| A^{-1} \| \frac{1}{\| x \|}.\label{chapter03-launch-1}\tag{2.2.1} \end{equation}

Subtracting \(y = A x \) from \(y + \delta\!y = A ( x + \delta\!x ) \) yields

\begin{equation*} \delta\!y = A \delta\!x \end{equation*}

and hence

\begin{equation} \| \delta\!y \| \leq \| A \| \| \delta\!x \|.\label{chapter03-launch-2}\tag{2.2.2} \end{equation}

Combining (2.2.1) and (2.2.2) yields

\begin{equation*} \frac{\| \delta\!y \|}{\| y \|} \leq \| A \| \| A^{-1} \| \frac{\| \delta\!x \|}{\| x \|} = \kappa( A ) \frac{\| \delta\!x \|}{\| x \|}, \end{equation*}

which is the desired result.

There are choices of \(x \) and \(\delta\!x \) for which the bound is tight.
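To see both the bound and its tightness numerically, here is a short NumPy sketch (not part of the solution; the random 4 by 4 matrix and the perturbation size are arbitrary illustrative choices). Choosing \(x \) along the right singular vector associated with the smallest singular value and \(\delta\!x \) along the one associated with the largest makes the bound an equality in the 2-norm.

# A small sketch that checks the bound
#   || dy || / || y ||  <=  kappa_2(A) || dx || / || x ||
# and exhibits choices of x and dx that attain it.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
U, s, Vt = np.linalg.svd(A)
kappa = s[0] / s[-1]                      # kappa_2(A) = sigma_max / sigma_min

x  = Vt[-1]                               # ||A x||_2  = sigma_min ||x||_2
dx = 1e-8 * Vt[0]                         # ||A dx||_2 = sigma_max ||dx||_2

y, dy = A @ x, A @ dx                     # y + dy = A (x + dx), so dy = A dx
lhs = np.linalg.norm(dy) / np.linalg.norm(y)
rhs = kappa * np.linalg.norm(dx) / np.linalg.norm(x)
print(f"||dy||/||y|| = {lhs:.6e}")
print(f"kappa_2(A) ||dx||/||x|| = {rhs:.6e}   # equal: the bound is attained")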

What does this mean? It means that if, as part of an algorithm, we use matrix-vector or matrix-matrix multiplication, we risk amplifying the relative error by the condition number of the matrix by which we multiply. Now, we saw in Section 1.4 that \(1 \leq \kappa( A ) \text{.}\) So, if there are algorithms that only use matrices for which \(\kappa( A ) = 1 \text{,}\) then those algorithms don't amplify relative error.

Remark 2.2.7.1.

We conclude that unitary matrices, which do not amplify the 2-norm of a vector or matrix, should be our tool of choice whenever practical.
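As a quick illustration (again a sketch, not from the text): a unitary matrix \(Q \text{,}\) obtained here from the QR factorization of a random complex matrix, has \(\kappa_2( Q ) = 1 \) and preserves the 2-norm of any vector, so multiplying by it does not amplify relative error.

# A minimal sketch: Q unitary (from QR of a random complex matrix),
# so kappa_2(Q) = 1 and ||Q x||_2 = ||x||_2.
import numpy as np

rng = np.random.default_rng(0)
n = 6
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q, _ = np.linalg.qr(B)                    # Q is unitary: Q^H Q = I

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
print(np.linalg.cond(Q, 2))               # ~1.0 (up to roundoff)
print(np.linalg.norm(Q @ x), np.linalg.norm(x))   # equal 2-norms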