
Subsection 6.3.2 Backward error analysis of dot product: general case

Consider now

\begin{equation*} \kappa \becomes x^T y = \left( \begin{array}{c} \chi_0 \\ \chi_1 \\ \vdots \\ \chi_{n-2} \\ \chi_{n-1} \end{array} \right)^T \left( \begin{array}{c} \psi_0 \\ \psi_1 \\ \vdots \\ \psi_{n-2} \\ \psi_{n-1} \end{array} \right) = \Big( \big( ( \chi_0 \psi_0 + \chi_1 \psi_1 ) + \cdots \big) + \chi_{n-2} \psi_{n-2} \Big) + \chi_{n-1} \psi_{n-1}. \end{equation*}

Under the computational model for floating point computation given earlier in this chapter, the computed result, \(\check \kappa \text{,}\) satisfies

\begin{equation} \begin{array}{rcl} \check \kappa \amp=\amp \bigg( \Big( \big( \chi_0 \psi_0 ( 1 + \epsilon_{*}^{(0)} ) + \chi_1 \psi_1 ( 1 + \epsilon_{*}^{(1)} ) \big) ( 1 + \epsilon_{+}^{(1)} ) + \cdots \Big) ( 1 + \epsilon_{+}^{(n-2)} ) \\ \amp \amp ~~~~~~ + \chi_{n-1} \psi_{n-1} ( 1 + \epsilon_{*}^{(n-1)} ) \bigg) ( 1 + \epsilon_{+}^{(n-1)} ) \\ \amp = \amp \phantom{+~} \chi_0 \psi_0 ( 1 + \epsilon_*^{(0)} ) ( 1 + \epsilon_+^{(1)} ) ( 1 + \epsilon_+^{(2)} ) \cdots ( 1 + \epsilon_+^{(n-1)} ) \\ \amp \amp +~ \chi_1 \psi_1 ( 1 + \epsilon_*^{(1)} ) ( 1 + \epsilon_+^{(1)} ) ( 1 + \epsilon_+^{(2)} ) \cdots ( 1 + \epsilon_+^{(n-1)} ) \\ \\ \amp \amp +~ \chi_2 \psi_2 ( 1 + \epsilon_*^{(2)} ) \phantom{( 1 + \epsilon_+^{(1)} )} ( 1 + \epsilon_+^{(2)} ) \cdots ( 1 + \epsilon_+^{(n-1)} ) \\ \amp \amp + ~~~~~~~~~~~~~~~\vdots \\ \amp \amp +~ \chi_{n-1} \psi_{n-1} ( 1 + \epsilon_*^{(n-1)} ) \phantom{( 1 + \epsilon_+^{(1)} )} \phantom{( 1 + \epsilon_+^{(2)} ) } \phantom{\cdots} ( 1 + \epsilon_+^{(n-1)} ) \\ \\ \amp = \amp \sum_{i=0}^{n-1} \left( \chi_i \psi_i ( 1 + \epsilon_{*}^{(i)} ) \prod_{j=i}^{n-1} ( 1 + \epsilon_{+}^{(j)} ) \right)\!, \end{array}\label{stability-eqn-dot2}\tag{6.3.2} \end{equation}

where \(\epsilon_{+}^{(0)} = 0 \) and \(\vert \epsilon_{*}^{(0)} \vert , \vert \epsilon_{*}^{(j)} \vert, \vert \epsilon_{+}^{(j)} \vert \leq \meps \) for \(j = 1, \ldots , n-1 \text{.}\)
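To make the order of evaluation concrete, here is a small Python sketch (an aside, not part of the original analysis; it assumes NumPy, and the helper name dot_left_to_right is ours) that accumulates the dot product left to right in single precision and compares the result with a double precision reference. The observed error stays far below the worst-case bound of roughly \(n \meps \vert x \vert^T \vert y \vert\) implied by (6.3.2).

```python
import numpy as np

def dot_left_to_right(x, y):
    """Accumulate ( ( x0*y0 + x1*y1 ) + ... ) + x_{n-1}*y_{n-1} in the dtype of x."""
    kappa = x.dtype.type(0.0)
    for chi, psi in zip(x, y):
        # each * commits one rounding error; each + one more
        # (the very first + adds to zero and is exact, matching eps_+^{(0)} = 0)
        kappa = kappa + chi * psi
    return kappa

rng = np.random.default_rng(0)
n = 1000
x = rng.standard_normal(n).astype(np.float32)
y = rng.standard_normal(n).astype(np.float32)

check_kappa = float(dot_left_to_right(x, y))                       # single precision sweep
kappa = float(np.dot(x.astype(np.float64), y.astype(np.float64)))  # (nearly exact) reference

u = float(np.finfo(np.float32).eps) / 2.0   # unit roundoff of float32 (round to nearest)
bound = n * u * float(np.dot(np.abs(x).astype(np.float64), np.abs(y).astype(np.float64)))
print(f"|check_kappa - kappa| = {abs(check_kappa - kappa):.3e}")
print(f"first-order bound n u |x|^T |y| = {bound:.3e}")
```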

Clearly, notation is needed to keep such expressions from becoming unreadable. For this reason we introduce the symbol \(\theta_j \text{.}\)

Lemma 6.3.2.1.

Let \(\epsilon_i \in \mathbb{R}\text{,}\) \(0\leq i\leq n-1\text{,}\) \(n \meps \lt 1 \text{,}\) and \(\vert \epsilon_i \vert \leq \meps\text{.}\) Then there exists a \(\theta_{n} \in \mathbb{R}\) such that

\begin{equation*} \prod_{i=0}^{n-1} (1 + \epsilon_i)^{\pm 1} = 1 + \theta_{n}, \mbox{ with } \vert \theta_n \vert \leq {n \meps}/{(1 - n \meps)}. \end{equation*}

Proof.

By Mathematical Induction.

  • Base case. \(n = 1\text{.}\) Trivial.

  • Inductive Step. The Inductive Hypothesis (I.H.) tells us that for all \(\epsilon_i \in \mathbb{R}\text{,}\) \(0\leq i\leq n-1\text{,}\) \(n \meps \lt 1 \text{,}\) and \(\vert \epsilon_i \vert \leq \meps\text{,}\) there exists a \(\theta_{n} \in \mathbb{R}\) such that

    \begin{equation*} \prod_{i=0}^{n-1} (1 + \epsilon_i)^{\pm 1} = 1 + \theta_{n}, \mbox{ with } \vert \theta_n \vert \leq {n \meps}/{(1 - n \meps)}. \end{equation*}

    We will show that if \(\epsilon_i \in \mathbb{R}\text{,}\) \(0\leq i\leq n\text{,}\) \((n+1) \meps \lt 1 \text{,}\) and \(\vert \epsilon_i \vert \leq \meps\text{,}\) then there exists a \(\theta_{n+1} \in \mathbb{R}\) such that

    \begin{equation*} \prod_{i=0}^{n} (1 + \epsilon_i)^{\pm 1} = 1 + \theta_{n+1}, \mbox{ with } \vert \theta_{n+1} \vert \leq {(n+1) \meps}/{(1 - (n+1) \meps)}. \end{equation*}
    • Case 1: \(\prod_{i=0}^{n} (1 + \epsilon_i)^{\pm 1} = \prod_{i=0}^{n-1} (1 + \epsilon_i)^{\pm 1} ( 1 + \epsilon_n)\text{.}\) See Homework 6.3.2.1.

    • Case 2: \(\prod_{i=0}^{n} (1 + \epsilon_i)^{\pm 1} = ( \prod_{i=0}^{n-1} (1 + \epsilon_i)^{\pm 1} )/( 1 + \epsilon_n)\text{.}\) By the I.H. there exists a \(\theta_{n} \) such that \(( 1 + \theta_{n} ) = \prod_{i=0}^{n-1} (1 + \epsilon_i)^{\pm 1} \) and \(\vert \theta_n \vert \leq n \meps / ( 1 - n \meps ) \text{.}\) Then

      \begin{equation*} \frac{ \prod_{i=0}^{n-1} (1 + \epsilon_i)^{\pm 1} }{ 1 + \epsilon_n } = \frac{ 1 + \theta_n }{ 1 + \epsilon_n } = 1 + \begin{array}[t]{c} \underbrace{ \frac{ \theta_n - \epsilon_n }{ 1 + \epsilon_n}} \\ \theta_{n+1} \end{array}, \end{equation*}

      which tells us how to pick \(\theta_{n+1} \text{.}\) Now

      \begin{equation*} \begin{array}{rcl} \vert \theta_{n+1} \vert \amp=\amp \left\vert \frac{\theta_n - \epsilon_n}{1 + \epsilon_n} \right\vert \leq \frac{ \vert \theta_n \vert + \meps }{ 1 - \meps } \leq \frac{ \frac{n \meps}{1 - n \meps} + \meps }{1 - \meps} = \frac{n\meps + (1-n \meps ) \meps }{(1-n \meps)( 1 - \meps )} \\ \amp=\amp \frac{( n+1) \meps - n \meps^2}{1-(n+1)\meps + n \meps^2} \leq \frac{( n+1) \meps}{1-(n+1)\meps}. \end{array} \end{equation*}
  • By the Principle of Mathematical Induction, the result holds.
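Because the lemma holds for any \(\meps\) with \(n \meps \lt 1\text{,}\) not just the unit roundoff, it can be spot-checked numerically with an artificially large \(\meps\) so that the check itself is not obscured by rounding. The following Python sketch (an illustration only; the function name check_theta_bound is ours) draws random \(\epsilon_i\) and exponents \(\pm 1\) and verifies that \(\vert \theta_n \vert \leq n\meps/(1-n\meps)\text{:}\)

```python
import numpy as np

def check_theta_bound(n, meps, trials=10_000, seed=0):
    """Spot-check |prod_i (1+eps_i)^{+-1} - 1| <= n*meps/(1 - n*meps) for random eps_i."""
    assert n * meps < 1.0
    rng = np.random.default_rng(seed)
    gamma_n = n * meps / (1.0 - n * meps)
    worst = 0.0
    for _ in range(trials):
        eps = rng.uniform(-meps, meps, size=n)    # |eps_i| <= meps
        signs = rng.choice([-1.0, 1.0], size=n)   # exponent +1 or -1 for each factor
        theta_n = np.prod((1.0 + eps) ** signs) - 1.0
        worst = max(worst, abs(theta_n))
    return worst, gamma_n

worst, gamma_n = check_theta_bound(n=50, meps=1.0e-4)
print(f"max |theta_n| observed = {worst:.6e}  <=  gamma_n = {gamma_n:.6e}")
```

The bound is attained only for worst-case choices of the \(\epsilon_i\text{,}\) so the observed \(\vert \theta_n \vert\) will typically be noticeably smaller than \(\gamma_n\text{.}\)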

Homework 6.3.2.1.

Complete the proof of Lemma 6.3.2.1.

Solution

We merely need to fill in the details for Case 1 in the proof:

Case 1: \(\prod_{i=0}^{n} (1 + \epsilon_i)^{\pm 1} = ( \prod_{i=0}^{n-1} (1 + \epsilon_i)^{\pm 1} )( 1 + \epsilon_n)\text{.}\)

By the I.H. there exists a \(\theta_{n} \) such that \(( 1 + \theta_{n} ) = \prod_{i=0}^{n-1} (1 + \epsilon_i)^{\pm 1} \) and \(\vert \theta_n \vert \leq n \meps / ( 1 - n \meps ) \text{.}\) Then

\begin{equation*} \begin{array}{l} \left( \prod_{i=0}^{n-1} (1 + \epsilon_i)^{\pm 1} \right) ( 1 + \epsilon_n ) \\ ~~~=~~~~ \\ ( 1 + \theta_n )( 1 + \epsilon_n ) \\ ~~~=~~~~ \\ 1 + \begin{array}[t]{c} \underbrace{ \theta_n + \epsilon_n + \theta_n \epsilon_n} \\ \theta_{n+1} \end{array}, \end{array} \end{equation*}

which tells us how to pick \(\theta_{n+1} \text{.}\) Now

\begin{equation*} \begin{array}{l} \vert \theta_{n+1} \vert \\ ~~~=~~~~\\ \vert \theta_n + \epsilon_n + \theta_n \epsilon_n \vert \\ ~~~\leq~~~~\\ \vert \theta_n \vert + \vert \epsilon_n \vert + \vert \theta_n \vert \vert \epsilon_n \vert \\ ~~~\leq~~~~\\ \frac{n \meps}{1 - n \meps} + \meps + \frac{n \meps}{1 - n \meps} \meps \\ ~~~=~~~~\\ \frac{n \meps + \meps( 1 - n \meps ) + n \meps^2 }{1 - n \meps} \\ ~~~=~~~~\\ \frac{(n+1) \meps }{1 - n \meps} \\ ~~~\leq~~~~\\ \frac{(n+1) \meps }{1 - (n+1) \meps}. \end{array} \end{equation*}
Remark 6.3.2.2.

The quantity \(\theta_n\) will be used throughout this note. It is not intended to be a specific number. Instead, it is an order of magnitude identified by the subscript \(n\text{,}\) which indicates the number of error factors of the form \((1 + \epsilon_i)\) and/or \((1 + \epsilon_i)^{-1}\) that are grouped together to form \(( 1 + \theta_n )\text{.}\)

Since we will often encounter the bound on \(\vert \theta_n \vert \) that appears in Lemma 6.3.2.1 we assign it a symbol as follows:

Definition 6.3.2.3.

For all \(n \geq 1 \) and \(n \meps \lt 1\text{,}\) define

\begin{equation*} \displaystyle \gamma_n = {n \meps}/{(1 - n \meps)}\text{.} \end{equation*}
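To get a sense of the magnitude of \(\gamma_n\text{,}\) the short Python sketch below (an aside, not part of the text) tabulates it for double precision, where \(\meps = 2^{-53}\text{.}\) For any realistic \(n\) the denominator \(1 - n\meps\) is so close to one that \(\gamma_n \approx n \meps\text{.}\)

```python
u = 2.0 ** -53   # unit roundoff for IEEE double precision

def gamma(n, meps=u):
    """gamma_n = n*meps / (1 - n*meps); only defined when n*meps < 1."""
    assert n * meps < 1.0
    return n * meps / (1.0 - n * meps)

for n in (10, 1_000, 1_000_000):
    print(f"n = {n:>9}:  gamma_n = {gamma(n):.3e}   n*u = {n * u:.3e}")
```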

With this notation, (6.3.2) simplifies to

\begin{equation} \begin{array}{l} \check \kappa \\ ~~~=~~~~\\ \chi_0 \psi_0 ( 1 + \theta_n ) + \chi_1 \psi_1 ( 1 + \theta_n ) + \cdots + \chi_{n-1} \psi_{n-1} ( 1 + \theta_2 ) \\ ~~~=~~~~\\ \left( \begin{array}{c} \chi_0 \\ \chi_1 \\ \chi_2 \\ \vdots \\ \chi_{n-1} \end{array} \right)^T \left( \begin{array}{c c c c c} ( 1 + \theta_n ) \amp 0 \amp 0\amp \cdots \amp 0 \\ 0 \amp ( 1 + \theta_n ) \amp0 \amp \cdots \amp 0 \\ 0 \amp 0 \amp ( 1 + \theta_{n-1} ) \amp \cdots \amp 0 \\ \vdots \amp\vdots \amp\vdots \amp\ddots \amp \vdots \\ 0 \amp 0 \amp 0 \amp \cdots \amp ( 1 + \theta_{2} ) \end{array} \right) \left( \begin{array}{c} \psi_0 \\ \psi_1 \\ \psi_2 \\ \vdots \\ \psi_{n-1} \end{array} \right) \\ ~~~=~~~~\\ \left( \begin{array}{c} \chi_0 \\ \chi_1 \\ \chi_2 \\ \vdots \\ \chi_{n-1} \end{array} \right)^T \left( I + \left( \begin{array}{c c c c c} \theta_n \amp 0 \amp 0\amp \cdots \amp 0 \\ 0 \amp \theta_n \amp0 \amp \cdots \amp 0 \\ 0 \amp 0 \amp \theta_{n-1} \amp \cdots \amp 0 \\ \vdots \amp\vdots \amp\vdots \amp\ddots \amp \vdots \\ 0 \amp 0 \amp 0 \amp \cdots \amp \theta_{2} \end{array} \right) \right) \left( \begin{array}{c} \psi_0 \\ \psi_1 \\ \psi_2 \\ \vdots \\ \psi_{n-1} \end{array} \right), \end{array}\label{stability-eqn-dot3}\tag{6.3.3} \end{equation}

where \(\vert \theta_j \vert \leq \gamma_j \text{,}\) \(j = 2, \ldots, n \text{.}\)
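Since \(\gamma_j \leq \gamma_n\) for \(j \leq n\text{,}\) one consequence of (6.3.3) is the bound \(\vert \check\kappa - x^T y \vert \leq \gamma_n \vert x \vert^T \vert y \vert\text{.}\) The Python sketch below (our illustration; it assumes NumPy and repeats the dot_left_to_right helper from the earlier sketch so that it is self-contained) checks this bound for a single-precision dot product against a double precision reference:

```python
import numpy as np

def dot_left_to_right(x, y):
    """Accumulate ( ( x0*y0 + x1*y1 ) + ... ) + x_{n-1}*y_{n-1} in the dtype of x."""
    kappa = x.dtype.type(0.0)
    for chi, psi in zip(x, y):
        kappa = kappa + chi * psi
    return kappa

rng = np.random.default_rng(1)
n = 4096
x = rng.standard_normal(n).astype(np.float32)
y = rng.standard_normal(n).astype(np.float32)

u = float(np.finfo(np.float32).eps) / 2.0   # unit roundoff of float32
gamma_n = n * u / (1.0 - n * u)

check_kappa = float(dot_left_to_right(x, y))
kappa = float(np.dot(x.astype(np.float64), y.astype(np.float64)))          # reference
bound = gamma_n * float(np.dot(np.abs(x).astype(np.float64),
                               np.abs(y).astype(np.float64)))              # gamma_n |x|^T |y|

assert abs(check_kappa - kappa) <= bound
print(f"|check_kappa - kappa| = {abs(check_kappa - kappa):.3e}  <=  {bound:.3e}")
```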

Remark 6.3.2.4.

Two instances of the symbol \(\theta_n\text{,}\) appearing even in the same expression, typically do not represent the same number. For example, in (6.3.3) a \(( 1 + \theta_n ) \) multiplies each of the terms \(\chi_0 \psi_0 \) and \(\chi_1 \psi_1 \text{,}\) but these two instances of \(\theta_n \text{,}\) as a rule, do not denote the same quantity. In particular, one should be careful when factoring out such quantities.

As part of the analyses, the following bounds will be useful when bounding error that accumulates:

Lemma 6.3.2.5.

Let \(n, b \geq 1\) and assume that \((n+b) \meps \lt 1 \text{.}\) Then \(\gamma_n \leq \gamma_{n+b}\) and \(\gamma_n + \gamma_b + \gamma_n \gamma_b \leq \gamma_{n+b}\text{.}\)

This lemma will be invoked when, for example, we want to bound \(\vert \epsilon \vert \) such that \(1 + \epsilon = ( 1 + \epsilon_1 )( 1 + \epsilon_2 ) = 1 + \left( \epsilon_1 + \epsilon_2 + \epsilon_1 \epsilon_2 \right) \) knowing that \(\vert \epsilon_1 \vert \leq \gamma_n \) and \(\vert \epsilon_2 \vert \leq \gamma_b \text{.}\)

Homework 6.3.2.2.

Prove Lemma 6.3.2.5.

Solution
\begin{equation*} \begin{array}{l} \gamma_n \\ ~~~=~~~~ \lt \mbox{ definition } \gt \\ \frac{n \meps}{1 - n \meps} \\ ~~~\leq ~~~~ \lt b \geq 1 \gt \\ \frac{(n+b) \meps}{1 - n \meps} \\ ~~~\leq ~~~~\lt 1/(1-n \meps ) \leq 1/(1 -(n+b)\meps) \mbox{ if } (n+b)\meps \lt 1 \gt \\ \frac{(n+b) \meps}{1 - (n+b) \meps} \\ ~~~=~~~~ \lt \mbox{ definition } \gt \\ \gamma_{n+b}. \end{array} \end{equation*}

and

\begin{equation*} \begin{array}{l} \gamma_n + \gamma_b + \gamma_n \gamma_b \\ ~~~=~~~~ \lt \mbox{ definition } \gt \\ \frac{n \meps}{1 - n \meps} + \frac{b \meps}{1 - b \meps} + \frac{n \meps}{(1 - n \meps)}\frac{b \meps}{(1 - b \meps)} \\ ~~~=~~~~ \lt \mbox{ algebra } \gt \\ \frac{n \meps (1 - b \meps)+ (1 - n \meps) b \meps + b n \meps^2} {(1 - n \meps)(1 - b \meps)} \\ ~~~=~~~~ \lt \mbox{ algebra } \gt \\ \frac{n \meps - b n \meps ^2+ b \meps - b n \meps^2 + b n \meps^2} {1 - (n+b) \meps + b n \meps^2} \\ ~~~=~~~~ \lt \mbox{ algebra } \gt \\ \frac{(n+b) \meps - b n \meps ^2} {1 - (n+b) \meps + b n \meps^2} \\ ~~~\leq ~~~~\lt b n \meps^2 > 0 \gt \\ \frac{(n+b) \meps} {1 - (n+b) \meps + b n \meps^2} \\ ~~~\leq ~~~~\lt b n \meps^2 > 0 \gt \\ \frac{(n+b) \meps} {1 - (n+b) \meps} \\ ~~~ = ~~~~ \lt \mbox{ definition } \gt \\ \gamma_{n+b}. \end{array} \end{equation*}
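As a quick numerical sanity check of these two inequalities (again an illustration only; since they hold for any \(\meps\) with \((n+b)\meps \lt 1\text{,}\) an artificially large \(\meps\) keeps the comparison well away from rounding in the check itself):

```python
def gamma(k, meps):
    """gamma_k = k*meps / (1 - k*meps); requires k*meps < 1."""
    assert k * meps < 1.0
    return k * meps / (1.0 - k * meps)

# An artificially large "unit roundoff" keeps the comparison far away from
# double precision rounding in the check itself.
meps = 1.0e-6
for n in (1, 5, 100, 10_000):
    for b in (1, 5, 100, 10_000):
        assert (n + b) * meps < 1.0
        assert gamma(n, meps) <= gamma(n + b, meps)
        assert gamma(n, meps) + gamma(b, meps) + gamma(n, meps) * gamma(b, meps) <= gamma(n + b, meps)
print("both inequalities of Lemma 6.3.2.5 held for all sampled (n, b)")
```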