Subsection C.1.1 Computation with scalars

Most computation with matrices and vectors ultimately comes down to the addition, subtraction, or multiplication of floating point numbers:

\begin{equation*} \chi\ {\rm op}\ \psi \end{equation*}

where \(\chi \) and \(\psi \) are scalars and \({\rm op} \) is one of \(+, -, \times \text{.}\) Each of these is counted as one floating point operation (flop). However, not all such floating point operations are created equal: computation with complex-valued (double precision) numbers is four times more expensive than computation with real-valued (double precision) numbers. As mentioned before, we usually just pretend we are dealing with real-valued numbers when counting the cost. We assume you know how to multiply by four.
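
To see where the factor of four comes from, write \(\chi = \chi_r + \chi_c i \) and \(\psi = \psi_r + \psi_c i \) and consider their product:

\begin{equation*} \chi \psi = ( \chi_r \psi_r - \chi_c \psi_c ) + ( \chi_r \psi_c + \chi_c \psi_r ) i, \end{equation*}

which requires four real multiplications and two real additions. A complex fused multiply-accumulate \(\alpha \chi + \psi \) similarly costs four real multiplications and four real additions, exactly four times the two flops of its real-valued counterpart.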

Dividing two scalars is a lot more expensive than multiplying them. Frequently, rather than dividing by \(\alpha \) many times, we can first compute \(1 / \alpha \) and then reuse that result for many multiplications. Thus, the number of divisions in an algorithm is usually a "lower order term" and hence we can ignore it.
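
As a minimal sketch of this trick (the routine names and the choice of C here are ours, purely for illustration), scaling a length-\(n \) vector by \(1 / \alpha \) can be done with one division and \(n \) multiplications instead of \(n \) divisions:

    #include <stddef.h>

    /* Naive version: n divisions. */
    void scal_div(size_t n, double alpha, double *x) {
        for (size_t i = 0; i < n; i++)
            x[i] = x[i] / alpha;
    }

    /* Reciprocal trick: one division, then n multiplications,
       so the division becomes a lower order term. */
    void scal_recip(size_t n, double alpha, double *x) {
        double inv_alpha = 1.0 / alpha;  /* the single division */
        for (size_t i = 0; i < n; i++)
            x[i] = x[i] * inv_alpha;
    }

(The two versions may differ in the last bits due to rounding, but the cost difference is what matters here.)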

Another observation is that almost all computation we encounter involves a "Fused Multiply-Accumulate" (FMA):

\begin{equation*} \alpha \chi + \psi, \end{equation*}

which requires two flops: a multiply and an add.
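
To make this concrete (the routine below is an illustrative sketch, not code from the text), the operation \(y := \alpha x + y \) performs one fused multiply-accumulate per vector element; in C it can be expressed with the standard fma function from <math.h>:

    #include <math.h>
    #include <stddef.h>

    /* y := alpha * x + y: one fused multiply-accumulate
       (2 flops) per element, for 2n flops in total. */
    void axpy(size_t n, double alpha, const double *x, double *y) {
        for (size_t i = 0; i < n; i++)
            y[i] = fma(alpha, x[i], y[i]);  /* alpha * x[i] + y[i], rounded once */
    }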