
Unit 4.5.2 Family values

For most dense linear algebra operations, such as LU factorization, there are multiple unblocked algorithms and corresponding blocked algorithms. Usually, each of the blocked algorithms casts most computation in terms of matrix-matrix multiplication, or a variant thereof. For example, there are five blocked algorithms (variants) for computing the LU factorization, captured in Figure 4.5.3.

\begin{equation*} \newcommand{\routinename}{A = {\rm LU\_blk}( A )} \newcommand{\guard}{n( A_{BR} ) > 0} \newcommand{\partitionings}{ A \rightarrow \FlaTwoByTwo{ A_{TL} } {A_{TR}} { A_{BL} } { A_{BR} } } \newcommand{\partitionsizes}{ A_{TL} {\rm ~is~} 0 \times 0 } \newcommand{\repartitionings}{ \FlaTwoByTwo{ A_{TL} } {A_{TR}} { A_{BL} } { A_{BR} } \rightarrow \FlaThreeByThreeBR{ A_{00} }{ A_{01} }{ A_{02} } { A_{10} }{ A_{11} }{ A_{12} } { A_{20} }{ A_{21} }{ A_{22} } } \newcommand{\moveboundaries}{ \FlaTwoByTwo{ A_{TL} } {A_{TR}} { A_{BL} } { A_{BR} } \leftarrow \FlaThreeByThreeTL{ A_{00} }{ A_{01} }{ A_{02} } { A_{10} }{ A_{11} }{ A_{12} } { A_{20} }{ A_{21} }{ A_{22} } } \newcommand{\update}{ \begin{array}{l | l | l} \mbox{Variant 1:} \amp \mbox{Variant 2:} \amp \mbox{Variant 3:} \\ \begin{array}[t]{l} A_{01} := L_{00}^{-1} A_{01}\\ A_{10} := A_{10} U_{00}^{-1} \\ A_{11} := A_{11} - A_{10} A_{01} \\ A_{11} := LU( A_{11} ) \\ \end{array} \amp \begin{array}[t]{l} A_{10} := A_{10} U_{00}^{-1} \\ A_{11} := A_{11} - A_{10} A_{01} \\ A_{11} := LU( A_{11} ) \\ A_{12} := A_{12} - A_{10} A_{02} \\ A_{12} := L_{11}^{-1} A_{12} \end{array} \amp \begin{array}[t]{l} A_{01} := L_{00}^{-1} A_{01} \\ A_{11} := A_{11} - A_{10} A_{01} \\ A_{11} := LU( A_{11} ) \\ A_{21} := A_{21} - A_{20} A_{01} \\ A_{21} := A_{21} U_{11}^{-1} \end{array} \\ \hline \mbox{Variant 4:} \amp \mbox{Variant 5:} \\ \begin{array}[t]{l} A_{11} := A_{11} - A_{10} A_{01} \\ A_{11} := LU( A_{11} ) \\ A_{12} := A_{12} - A_{10} A_{02} \\ A_{12} := L_{11}^{-1} A_{12} \\ A_{21} := A_{21} - A_{20} A_{01} \\ A_{21} := A_{21} U_{11}^{-1} \end{array} \amp \begin{array}[t]{l} A_{11} := LU( A_{11} ) \\ A_{12} := L_{11}^{-1} A_{12} \\ A_{21} := A_{21} U_{11}^{-1} \\ A_{22} := A_{22} - A_{21} A_{12} \end{array} \end{array} } \FlaAlgorithm \end{equation*}
Figure 4.5.3. Five blocked LU factorization variants. \(L_{ii} \) and \(U_{ii} \) denote the unit lower triangular matrix and the upper triangular matrix stored in \(A_{ii}\text{,}\) respectively. \(L_{ii}^{-1} A_{ij} \) is shorthand for the result of solving the triangular system with multiple right-hand sides \(L_{ii} U_{ij} = A_{ij} \) for \(U_{ij}\text{,}\) and \(A_{ij} U_{jj}^{-1} \) is shorthand for the result of solving the triangular system with multiple right-hand sides \(L_{ij} U_{jj} = A_{ij} \) for \(L_{ij}\text{.}\)
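To make the figure concrete, here is a minimal NumPy sketch (not part of the course materials) of Variant 5, the right-looking blocked algorithm. The names `lu_unblocked` and `lu_blocked_var5`, the block size `b`, and the use of `np.linalg.solve` in place of true triangular solves are all illustrative assumptions; pivoting is omitted, so the sketch is only safe for matrices (such as diagonally dominant ones) that factor without it.

```python
import numpy as np

def lu_unblocked(A):
    # Unblocked LU without pivoting; overwrites A with L and U
    # (the unit diagonal of L is not stored).
    n = A.shape[0]
    for j in range(n):
        A[j+1:, j] /= A[j, j]                              # column of L
        A[j+1:, j+1:] -= np.outer(A[j+1:, j], A[j, j+1:])  # rank-1 update
    return A

def lu_blocked_var5(A, b=4):
    # Blocked right-looking LU (Variant 5 of Figure 4.5.3);
    # overwrites A with L and U.
    n = A.shape[0]
    for k in range(0, n, b):
        kb = min(b, n - k)
        lu_unblocked(A[k:k+kb, k:k+kb])                    # A11 := LU(A11)
        if k + kb < n:
            L11 = np.tril(A[k:k+kb, k:k+kb], -1) + np.eye(kb)
            U11 = np.triu(A[k:k+kb, k:k+kb])
            # A12 := L11^{-1} A12 (solve with multiple right-hand sides)
            A[k:k+kb, k+kb:] = np.linalg.solve(L11, A[k:k+kb, k+kb:])
            # A21 := A21 U11^{-1}
            A[k+kb:, k:k+kb] = np.linalg.solve(U11.T, A[k+kb:, k:k+kb].T).T
            # A22 := A22 - A21 A12: the bulk of the flops, a matrix-matrix
            # multiply with large m and n and small k (= b)
            A[k+kb:, k+kb:] -= A[k+kb:, k:k+kb] @ A[k:k+kb, k+kb:]
    return A
```

A production code would instead call optimized `trsm` and `gemm` routines for the triangular solves and the trailing update, which is precisely where this variant spends most of its time.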

It turns out that different variants are best suited to different circumstances.

  • Variant 5, also known as Classical Gaussian Elimination and the algorithm in Figure 4.5.1, fits well with the matrix-matrix multiplication that we developed in this course, because it casts most computation in terms of a matrix-matrix multiplication with large matrix sizes \(m\) and \(n \text{,}\) and small matrix size \(k \) (which equals the block size \(b \)): \(A_{22} := A_{22} - A_{21} A_{12} \text{.}\)

  • Variant 3, also known as the left-looking variant, can be easily check-pointed (a technique that allows a computation to be restarted if, for example, the computer crashes partway through a long calculation).

  • Variants 3, 4, and 5 can all be modified to incorporate "partial pivoting," which improves numerical stability (a topic covered in courses on numerical linear algebra).
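As a hypothetical illustration of why the left-looking variant checkpoints well: each iteration of Variant 3 writes only the current block column (reading the already-factored part to its left), so saving the columns processed so far suffices to resume the computation. A NumPy sketch under the same assumptions as before (names and block size are illustrative, `np.linalg.solve` stands in for triangular solves, and pivoting is omitted):

```python
import numpy as np

def lu_unblocked(A):
    # Unblocked LU without pivoting; overwrites A with L and U.
    n = A.shape[0]
    for j in range(n):
        A[j+1:, j] /= A[j, j]
        A[j+1:, j+1:] -= np.outer(A[j+1:, j], A[j, j+1:])
    return A

def lu_blocked_var3(A, b=4):
    # Blocked left-looking LU (Variant 3 of Figure 4.5.3): only the current
    # block column A[:, k:k+kb] is modified in iteration k, which is what
    # makes cheap checkpointing possible.
    n = A.shape[0]
    for k in range(0, n, b):
        kb = min(b, n - k)
        if k > 0:
            L00 = np.tril(A[:k, :k], -1) + np.eye(k)
            # A01 := L00^{-1} A01
            A[:k, k:k+kb] = np.linalg.solve(L00, A[:k, k:k+kb])
            # A11 := A11 - A10 A01 and A21 := A21 - A20 A01,
            # fused into one matrix-matrix multiply over the block column
            A[k:, k:k+kb] -= A[k:, :k] @ A[:k, k:k+kb]
        lu_unblocked(A[k:k+kb, k:k+kb])                    # A11 := LU(A11)
        if k + kb < n:
            U11 = np.triu(A[k:k+kb, k:k+kb])
            # A21 := A21 U11^{-1}
            A[k+kb:, k:k+kb] = np.linalg.solve(U11.T, A[k+kb:, k:k+kb].T).T
    return A
```

Note that the trailing submatrix \(A_{22}\) is never touched; all updates to a block column happen in the iteration that visits it, using the factors already stored to its left.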

Further details go beyond the scope of this course. But we will soon have a MOOC for that: Advanced Linear Algebra: Foundations to Frontiers! [19].

What we are trying to give you a flavor of is that it pays to have a family of algorithms at one's disposal, so that the best algorithm for a given situation can be chosen. The question then becomes "How do we find all algorithms in the family?" We have a MOOC for that too: LAFF-On Programming for Correctness [20].