## Subsection 6.3.5 Matrix-matrix multiplication

The idea behind backward error analysis is that the computed result equals the exact result obtained when computing with slightly changed inputs. Let's consider matrix-matrix multiplication:

\begin{equation*} C := A B. \end{equation*}

What we would like to be able to show is that there exist $\Delta\!A$ and $\Delta\!B$ such that the computed result, $\check C \text{,}$ satisfies

\begin{equation*} \check C = ( A + \Delta\!A )( B + \Delta\!B). \end{equation*}
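In floating point arithmetic the computed $\check C$ will in general differ from the exact product $A B \text{.}$ A minimal numerical sketch of this (using NumPy, with a float64 product standing in for the exact result — an illustrative assumption, since float64 itself rounds):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 100)).astype(np.float32)
B = rng.standard_normal((100, 5)).astype(np.float32)

# Product computed in float32 vs. a (much more accurate) float64 reference.
C_check = A @ B
C_ref = A.astype(np.float64) @ B.astype(np.float64)

# The computed result is close to, but not equal to, the exact product.
diff = float(np.max(np.abs(C_check.astype(np.float64) - C_ref)))
print(diff)  # tiny, but nonzero
```

The question raised next is whether this small difference can be explained as the exact product of perturbed inputs.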

###### Ponder This 6.3.5.1.

Can one find matrices $\Delta\!A$ and $\Delta\! B$ such that

\begin{equation*} \check C = ( A + \Delta\!A ) ( B + \Delta\!B )? \end{equation*}

For matrix-matrix multiplication, it is possible to "throw" the error onto the result, as summarized by the following theorem:

###### Theorem 6.3.5.1.

Let $A \in \mathbb{R}^{m \times k} \text{,}$ $B \in \mathbb{R}^{k \times n} \text{,}$ and let $\check C$ denote the matrix that results from computing $C := A B$ in floating point arithmetic. Then

\begin{equation*} \check C = A B + \Delta\!C \quad \mbox{where} \quad \vert \Delta\!C \vert \leq \gamma_k \vert A \vert \vert B \vert . \end{equation*}

###### Homework 6.3.5.2.

Prove Theorem 6.3.5.1.

Solution

Partition

\begin{equation*} C = \left( \begin{array}{c | c | c | c} c_0 \amp c_1 \amp \cdots \amp c_{n-1} \end{array} \right) \quad \mbox{and} \quad B = \left( \begin{array}{c | c | c | c} b_0 \amp b_1 \amp \cdots \amp b_{n-1} \end{array} \right). \end{equation*}

Then

\begin{equation*} \left( \begin{array}{c | c | c | c} c_0 \amp c_1 \amp \cdots \amp c_{n-1} \end{array} \right) := \left( \begin{array}{c | c | c | c} A b_0 \amp A b_1 \amp \cdots \amp A b_{n-1} \end{array} \right). \end{equation*}

From the corresponding result (6.3.4.1) regarding matrix-vector multiplication we know that

\begin{equation*} \begin{array}{rcl} \left( \begin{array}{c | c | c | c} \check c_0 \amp \check c_1 \amp \cdots \amp \check c_{n-1} \end{array} \right) \amp=\amp \left( \begin{array}{c | c | c | c} A b_0 + \delta\!c_0\amp A b_1 + \delta\!c_1 \amp \cdots \amp A b_{n-1} + \delta\!c_{n-1} \end{array} \right) \\ \amp=\amp \left( \begin{array}{c | c | c | c} A b_0 \amp A b_1 \amp \cdots \amp A b_{n-1} \end{array} \right) + \left( \begin{array}{c | c | c | c} \delta\!c_0\amp \delta\!c_1 \amp \cdots \amp \delta\!c_{n-1} \end{array} \right) \\ \amp = \amp A B + \Delta\!C , \end{array} \end{equation*}

where $\vert \delta\!c_j \vert \leq \gamma_k \vert A \vert \vert b_j \vert\text{,}$ $j = 0, \ldots , n-1 \text{,}$ and hence $\vert \Delta\!C \vert \leq \gamma_k \vert A \vert \vert B \vert \text{.}$
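The resulting bound $\vert \check C - A B \vert \leq \gamma_k \vert A \vert \vert B \vert$ can also be checked numerically. The sketch below (NumPy; float32 arithmetic with a float64 reference product is an illustrative assumption) verifies the bound elementwise:

```python
import numpy as np

rng = np.random.default_rng(1)
m, k, n = 50, 200, 40
A = rng.standard_normal((m, k)).astype(np.float32)
B = rng.standard_normal((k, n)).astype(np.float32)

# Product computed in float32; the float64 product serves as the "exact" A B.
C_check = (A @ B).astype(np.float64)
A64, B64 = A.astype(np.float64), B.astype(np.float64)
C_exact = A64 @ B64

# gamma_k = k u / (1 - k u), with u the unit roundoff of float32.
u = float(np.finfo(np.float32).eps) / 2
gamma_k = k * u / (1 - k * u)

# Elementwise check of |C_check - A B| <= gamma_k |A| |B|.
bound = gamma_k * (np.abs(A64) @ np.abs(B64))
print(bool(np.all(np.abs(C_check - C_exact) <= bound)))  # True
```

In practice the observed error is far smaller than the bound, since the roundoff errors in a dot product tend to partially cancel rather than accumulate in the same direction.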

###### Remark 6.3.5.3.

In practice, matrix-matrix multiplication is often the parameterized operation $C := \alpha A B + \beta C.$ A consequence of Theorem 6.3.5.1 is that for $\beta \neq 0 \text{,}$ the error can be attributed to a change in parameter $C \text{,}$ which means the error has been "thrown back" onto an input parameter.
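To sketch this numerically (NumPy; $\alpha = \beta = 1$ and float32 arithmetic are assumptions chosen for simplicity), one can compute $\check D = \mbox{fl}( A B + C ) \text{,}$ define $\Delta\!C$ so that $\check D = A B + ( C + \Delta\!C )$ exactly, and confirm that $\Delta\!C$ is small — the error has been absorbed into a perturbed input $C \text{:}$

```python
import numpy as np

rng = np.random.default_rng(2)
m, k, n = 50, 200, 40
A = rng.standard_normal((m, k)).astype(np.float32)
B = rng.standard_normal((k, n)).astype(np.float32)
C = rng.standard_normal((m, n)).astype(np.float32)

# Computed D := A B + C in float32 (alpha = beta = 1 for simplicity).
D_check = (A @ B + C).astype(np.float64)

A64, B64, C64 = (X.astype(np.float64) for X in (A, B, C))

# Throw the error back onto the input C: with Delta_C defined this way,
# D_check equals A B + (C + Delta_C) exactly (up to float64 roundoff).
Delta_C = D_check - A64 @ B64 - C64

# Delta_C is tiny: bounded (roughly) by gamma_{k+1} (|A||B| + |C|),
# accounting for the k roundings in the product plus one in the addition.
u = float(np.finfo(np.float32).eps) / 2
gamma = (k + 1) * u / (1 - (k + 1) * u)
bound = gamma * (np.abs(A64) @ np.abs(B64) + np.abs(C64))
print(bool(np.all(np.abs(Delta_C) <= bound)))  # True
```

For $\beta \neq 0$ more generally, the same construction divides the residual by $\beta$ to obtain the perturbation of the input $C \text{.}$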