
Subsection 5.6.2 Summary

The process known as Gaussian elimination is equivalent to computing the LU factorization of the matrix \(A \in \Cmxm \text{:}\)

\begin{equation*} A = L U, \end{equation*}

where \(L \) is a unit lower triangular matrix and \(U\) is an upper triangular matrix.

Definition 5.6.2.1.

Given a matrix \(A \in \C^{m \times n} \) with \(m \geq n \text{,}\) its LU factorization is given by \(A = L U \) where \(L \in \C^{m\times n}\) is unit lower trapezoidal and \(U \in \C^{n \times n} \) is upper triangular with nonzero entries on its diagonal.

Definition 5.6.2.2. Principal leading submatrix.

For \(k \leq n \text{,}\) the \(k \times k \) principal leading submatrix of a matrix \(A \) is defined to be the square matrix \(A_{TL} \in \C^{k \times k} \) such that \(A = \FlaTwoByTwo{A_{TL}}{A_{TR}}{A_{BL}}{A_{BR}} \text{.}\)

\begin{equation*} \newcommand{\routinename}{A = \mbox{LU-right-looking}( A )} \newcommand{\guard}{ n( A_{TL} ) \lt n( A ) } \newcommand{\partitionings}{ A \rightarrow \FlaTwoByTwo{A_{TL}}{A_{TR}} {A_{BL}}{A_{BR}} } \newcommand{\partitionsizes}{ A_{TL} {\rm ~is~} 0 \times 0 } \newcommand{\repartitionings}{ \FlaTwoByTwo{A_{TL}}{A_{TR}} {A_{BL}}{A_{BR}} \rightarrow \FlaThreeByThreeBR{A_{00}}{a_{01}}{A_{02}} {a_{10}^T}{\alpha_{11}}{a_{12}^T} {A_{20}}{a_{21}}{A_{22}} } \newcommand{\moveboundaries}{ \FlaTwoByTwo{A_{TL}}{A_{TR}} {A_{BL}}{A_{BR}} \leftarrow \FlaThreeByThreeTL{A_{00}}{a_{01}}{A_{02}} {a_{10}^T}{\alpha_{11}}{a_{12}^T} {A_{20}}{a_{21}}{A_{22}} } \newcommand{\update}{ \begin{array}{l} a_{21} := a_{21} / \alpha_{11} \\ A_{22} := A_{22} - a_{21} a_{12}^T \end{array} } \FlaAlgorithm \end{equation*}
Figure 5.6.2.5. Right-looking LU factorization algorithm.

The right-looking algorithm performs the same computations as the algorithm

\begin{equation*} \begin{array}{l} {\bf for~} j:=0, \ldots , n-1 \\ ~~~ \left. \begin{array}{l} {\bf for~} i:=j+1, \ldots , n-1 \\ ~~~ \lambda_{i,j} := \alpha_{i,j} / \alpha_{j,j} \\ ~~~ \alpha_{i,j} := 0 \\ {\bf endfor} \end{array} \right\} \mbox{ compute multipliers} \\ ~~~ {\bf for~} i:=j+1, \ldots , n-1 \\ ~~~ ~~~ \left. \begin{array}{l} {\bf for~} k:=j+1, \ldots , n-1 \\ ~~~ \alpha_{i,k} := \alpha_{i,k} - \lambda_{i,j} \alpha_{j,k} \\ ~~~ ~~~ {\bf endfor} \end{array} \right\} \mbox{ subtract } \lambda_{i,j} \mbox{ times row } j \mbox{ from row } i \\ ~~~ {\bf endfor} \\ {\bf endfor} \end{array} \end{equation*}
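The right-looking updates can be sketched in a few lines of NumPy (an illustrative helper, not the book's code; it assumes no zero pivots are encountered and overwrites \(A\) with \(L\) below the diagonal and \(U\) on and above it):

```python
import numpy as np

def lu_right_looking(A):
    """Right-looking LU: overwrite A so that its strictly lower part holds
    the multipliers of unit lower triangular L and its upper part holds U.
    Assumes all pivots are nonzero (no pivoting)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for j in range(n - 1):
        A[j+1:, j] /= A[j, j]                               # a21 := a21 / alpha11
        A[j+1:, j+1:] -= np.outer(A[j+1:, j], A[j, j+1:])   # A22 := A22 - a21 a12^T
    return A

# Usage: extract L and U and verify that L U reproduces the original matrix.
A0 = np.array([[4., 2., 1.],
               [8., 7., 3.],
               [4., 8., 9.]])
LU = lu_right_looking(A0)
L = np.tril(LU, -1) + np.eye(3)
U = np.triu(LU)
assert np.allclose(L @ U, A0)
```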
\begin{equation*} \newcommand{\routinename}{A = \mbox{LU-left-looking}( A )} \newcommand{\guard}{ n( A_{TL} ) \lt n( A ) } \newcommand{\partitionings}{ A \rightarrow \FlaTwoByTwo{A_{TL}}{A_{TR}} {A_{BL}}{A_{BR}} } \newcommand{\partitionsizes}{ A_{TL} {\rm ~is~} 0 \times 0 } \newcommand{\repartitionings}{ \FlaTwoByTwo{A_{TL}}{A_{TR}} {A_{BL}}{A_{BR}} \rightarrow \FlaThreeByThreeBR{A_{00}}{a_{01}}{A_{02}} {a_{10}^T}{\alpha_{11}}{a_{12}^T} {A_{20}}{a_{21}}{A_{22}} } \newcommand{\moveboundaries}{ \FlaTwoByTwo{A_{TL}}{A_{TR}} {A_{BL}}{A_{BR}} \leftarrow \FlaThreeByThreeTL{A_{00}}{a_{01}}{A_{02}} {a_{10}^T}{\alpha_{11}}{a_{12}^T} {A_{20}}{a_{21}}{A_{22}} } \newcommand{\update}{ \begin{array}{l} \mbox{Solve } L_{00} u_{01} = a_{01} \mbox{ overwriting } a_{01} \mbox{ with } u_{01} \\ \alpha_{11} := \upsilon_{11} = \alpha_{11} - a_{10}^T a_{01} \\ a_{21} := a_{21} - A_{20} a_{01} \\ a_{21} := l_{21} = a_{21} / \alpha_{11} \\ \end{array} } \FlaAlgorithm \end{equation*}
Figure 5.6.2.6. Left-looking LU factorization algorithm. \(L_{00} \) is the unit lower triangular matrix stored in the strictly lower triangular part of \(A_{00} \) (with the diagonal implicitly stored).
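The left-looking variant can be sketched similarly (again an illustrative NumPy helper, assuming nonzero pivots); note how step \(j\) touches only column \(j\), using the factors already stored in columns \(0, \ldots, j-1\):

```python
import numpy as np

def lu_left_looking(A):
    """Left-looking LU: at step j, first solve L00 u01 = a01 by forward
    substitution (unit diagonal), then update alpha11 and a21, then scale.
    Overwrites A as in the right-looking variant; assumes nonzero pivots."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for j in range(n):
        for k in range(j):                       # solve L00 u01 = a01,
            A[k+1:j, j] -= A[k, j] * A[k+1:j, k] # overwriting a01 with u01
        A[j:, j] -= A[j:, :j] @ A[:j, j]         # alpha11 -= a10^T a01; a21 -= A20 a01
        A[j+1:, j] /= A[j, j]                    # a21 := a21 / alpha11
    return A

# Usage: the result agrees with the right-looking factorization.
A0 = np.array([[4., 2., 1.],
               [8., 7., 3.],
               [4., 8., 9.]])
LU = lu_left_looking(A0)
L = np.tril(LU, -1) + np.eye(3)
U = np.triu(LU)
assert np.allclose(L @ U, A0)
```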

Solving \(A x = b \) via LU factorization:

  • Compute the LU factorization \(A = L U \text{.}\)

  • Solve \(L z = b \text{.}\)

  • Solve \(U x = z \text{.}\)
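The three steps above can be sketched with SciPy's triangular solver, given factors \(L\) and \(U\) (written down directly here for illustration):

```python
import numpy as np
from scipy.linalg import solve_triangular

# Suppose A = L U has already been computed.
L = np.array([[1., 0., 0.],
              [2., 1., 0.],
              [1., 2., 1.]])
U = np.array([[4., 2., 1.],
              [0., 3., 1.],
              [0., 0., 6.]])
A = L @ U
b = np.array([7., 17., 29.])

z = solve_triangular(L, b, lower=True, unit_diagonal=True)  # solve L z = b
x = solve_triangular(U, z, lower=False)                     # solve U x = z
assert np.allclose(A @ x, b)
```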

Cost of LU factorization: Starting with an \(m \times n \) matrix \(A \text{,}\) LU factorization requires approximately \(m n^2 - \frac{1}{3} n^3 \) flops. If \(m = n \) this becomes

\begin{equation*} \frac{2}{3} n^3 \mbox{ flops}. \end{equation*}
Definition 5.6.2.7.
A matrix \(L_k \) of the form
\begin{equation*} L_k = \FlaThreeByThreeBR { I_k }{ 0 }{ 0 } { 0 }{ 1 }{ 0 } { 0 }{ l_{21} }{ I }, \end{equation*}
where \(I_k \) is the \(k \times k \) identity matrix and \(I \) is an identity matrix "of appropriate size", is called a Gauss transform.
\begin{equation*} L_k^{-1} = \FlaThreeByThreeBR { I_k }{ 0 }{ 0 } { 0 }{ 1 }{ 0 } { 0 }{ l_{21} }{ I }^{-1} = \FlaThreeByThreeBR { I_k }{ 0 }{ 0 } { 0 }{ 1 }{ 0 } { 0 }{ -l_{21} }{ I }. \end{equation*}
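The fact that negating \(l_{21} \) inverts a Gauss transform is easy to check numerically (a small NumPy sketch):

```python
import numpy as np

# Build a Gauss transform L_k and its claimed inverse (negated l21), then
# check that their product is the identity.
k, n = 1, 4
l21 = np.array([0.5, -2.0])
Lk = np.eye(n)
Lk[k+1:, k] = l21            # identity plus l21 below the (k,k) entry
Lk_inv = np.eye(n)
Lk_inv[k+1:, k] = -l21
assert np.allclose(Lk @ Lk_inv, np.eye(n))
```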
Definition 5.6.2.8.

Given

\begin{equation*} p = \left( \begin{array}{c} \pi_0 \\ \vdots \\ \pi_{n-1} \end{array} \right), \end{equation*}

where \(\{ \pi_0, \pi_1, \ldots , \pi_{n-1} \} \) is a permutation (rearrangement) of the integers \(\{ 0 , 1, \ldots , n-1 \} \text{,}\) we define the permutation matrix \(P( p )\) by

\begin{equation*} P( p ) = \left( \begin{array}{c} e_{\pi_0}^T \\ \vdots \\ e_{\pi_{n-1}}^T \end{array} \right). \end{equation*}

If \(P \) is a permutation matrix then \(P^{-1} = P^T \text{.}\)
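Both properties are easy to illustrate in NumPy (the helper name `P` here mirrors the definition, not a library routine):

```python
import numpy as np

def P(p):
    """Permutation matrix whose i-th row is e_{p[i]}^T."""
    return np.eye(len(p))[p]

p = np.array([2, 0, 3, 1])
Pp = P(p)
x = np.array([10., 20., 30., 40.])
# P(p) x rearranges the entries of x in the order given by p...
assert np.allclose(Pp @ x, x[p])
# ...and the inverse of a permutation matrix is its transpose.
assert np.allclose(Pp.T @ Pp, np.eye(4))
```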

Definition 5.6.2.9. Elementary pivot matrix.

Given \(\pi \in \{ 0, \ldots , n-1 \} \) define the elementary pivot matrix

\begin{equation*} \widetilde P( \pi ) = \left( \begin{array}{c} e_\pi^T \\ \hline e_1^T \\ \vdots \\ e_{\pi-1}^T \\ \hline e_0^T \\ \hline e_{\pi+1}^T \\ \vdots \\ e_{n-1}^T \end{array} \right) \end{equation*}

or, equivalently,

\begin{equation*} \widetilde P( \pi ) = \left\{ \begin{array}{c l} I_n \amp \mbox{if } \pi = 0 \\ \left( \begin{array}{c | c | c | c} 0 \amp 0 \amp 1 \amp0 \\ \hline 0 \amp I_{\pi-1}\amp 0 \amp 0\\ \hline 1 \amp0 \amp0 \amp0 \\ \hline 0 \amp 0 \amp 0\amp I_{n-\pi-1} \end{array} \right) \amp \mbox{otherwise}, \end{array} \right. \end{equation*}

where \(n \) is the size of the permutation matrix.
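An elementary pivot matrix simply swaps entry \(0\) with entry \(\pi\), and is its own inverse; a quick NumPy sketch (helper name ours):

```python
import numpy as np

def P_tilde(pi, n):
    """Elementary pivot matrix: swaps entries 0 and pi, fixes the rest."""
    p = np.arange(n)
    p[0], p[pi] = p[pi], p[0]
    return np.eye(n)[p]

Pt = P_tilde(2, 4)
x = np.array([10., 20., 30., 40.])
assert np.allclose(Pt @ x, [30., 20., 10., 40.])
assert np.allclose(Pt @ Pt, np.eye(4))   # its own inverse
```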

\begin{equation*} \newcommand{\routinename}{[ A, p ] = \mbox{LUpiv-right-looking}( A )} \newcommand{\guard}{ n( A_{TL} ) \lt n( A ) } \newcommand{\partitionings}{ A \rightarrow \FlaTwoByTwo{A_{TL}}{A_{TR}} {A_{BL}}{A_{BR}}, p \rightarrow \FlaTwoByOne{ p_T }{ p_B } } \newcommand{\partitionsizes}{ A_{TL} {\rm ~is~} 0 \times 0 , p_T \mbox{ has } 0 \mbox{ elements} } \newcommand{\repartitionings}{ \FlaTwoByTwo{A_{TL}}{A_{TR}} {A_{BL}}{A_{BR}} \rightarrow \FlaThreeByThreeBR{A_{00}}{a_{01}}{A_{02}} {a_{10}^T}{\alpha_{11}}{a_{12}^T} {A_{20}}{a_{21}}{A_{22}}, \FlaTwoByOne{ p_T }{ p_B } \rightarrow \FlaThreeByOneB{ p_0 }{ \pi_1 }{ p_2 } } \newcommand{\moveboundaries}{ \FlaTwoByTwo{A_{TL}}{A_{TR}} {A_{BL}}{A_{BR}} \leftarrow \FlaThreeByThreeTL{A_{00}}{a_{01}}{A_{02}} {a_{10}^T}{\alpha_{11}}{a_{12}^T} {A_{20}}{a_{21}}{A_{22}}, \FlaTwoByOne{ p_T }{ p_B } \leftarrow \FlaThreeByOneT{ p_0 }{ \pi_1 }{ p_2 } } \newcommand{\update}{ \begin{array}{l} \pi_1 := \maxi\left( \begin{array}{c} \alpha_{11} \\ \hline a_{21} \end{array} \right) \\ \FlaThreeByThreeBR{A_{00}}{a_{01}}{A_{02}} {a_{10}^T}{ \alpha_{11} }{ a_{12}^T } {A_{20}}{ a_{21} }{ A_{22} } \becomes \FlaTwoByTwo {I}{0} {0}{P( \pi_1 )} \FlaThreeByThreeBR{A_{00}}{a_{01}}{A_{02}} {a_{10}^T}{ \alpha_{11} }{ a_{12}^T } {A_{20}}{ a_{21} }{ A_{22} } \\ a_{21} := a_{21} / \alpha_{11} \\ A_{22} := A_{22} - a_{21} a_{12}^T \end{array} } \FlaAlgorithm \end{equation*}
Figure 5.6.2.10. Right-looking LU factorization algorithm with partial pivoting.
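A NumPy sketch of the pivoted right-looking algorithm (illustrative helper; as in the algorithm, \(\pi_1\) is stored as an index into the remaining part of the column, and entire rows are swapped so the stored multipliers are permuted along with the matrix):

```python
import numpy as np

def lu_piv_right_looking(A):
    """Right-looking LU with partial pivoting. Overwrites A with the
    factors of P A = L U and returns the pivot vector p (pi_1 per step)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    p = np.zeros(n, dtype=int)
    for j in range(n):
        pi = j + np.argmax(np.abs(A[j:, j]))   # entry of largest magnitude
        p[j] = pi - j                          # index relative to current step
        A[[j, pi], :] = A[[pi, j], :]          # swap full rows j and pi
        if j < n - 1:
            A[j+1:, j] /= A[j, j]                               # a21 /= alpha11
            A[j+1:, j+1:] -= np.outer(A[j+1:, j], A[j, j+1:])   # A22 -= a21 a12^T
    return A, p

# Usage: apply the recorded exchanges to A0 and compare with L U.
A0 = np.array([[1., 2., 3.],
               [4., 4., 2.],
               [2., 1., 1.]])
LU, p = lu_piv_right_looking(A0)
L = np.tril(LU, -1) + np.eye(3)
U = np.triu(LU)
PA = A0.copy()
for j, pj in enumerate(p):
    PA[[j, j + pj], :] = PA[[j + pj, j], :]
assert np.allclose(L @ U, PA)
```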

Solving \(A x = b \) via LU factorization with row pivoting:

  • Compute the LU factorization with pivoting \(PA = L U \text{.}\)

  • Apply the row exchanges to the right-hand side: \(y = P b \text{.}\)

  • Solve \(L z = y \text{.}\)

  • Solve \(U x = z \text{.}\)

\begin{equation*} \newcommand{\routinename}{\mbox{Solve } L z = y, \mbox{ overwriting } y \mbox{ with } z \mbox{ (Variant 1)}} \newcommand{\guard}{ n( L_{TL} ) \lt n( L ) } \newcommand{\partitionings}{ L \rightarrow \FlaTwoByTwo{L_{TL}}{L_{TR}} {L_{BL}}{L_{BR}}, y \rightarrow \FlaTwoByOne{y_T}{y_B} } \newcommand{\partitionsizes}{ L_{TL} {\rm ~is~} 0 \times 0 \mbox{ and } y_T \mbox{ has } 0 \mbox{ elements} } \newcommand{\repartitionings}{ \FlaTwoByTwo{L_{TL}}{L_{TR}} {L_{BL}}{L_{BR}} \rightarrow \FlaThreeByThreeBR{L_{00}}{l_{01}}{L_{02}} {l_{10}^T}{\lambda_{11}}{l_{12}^T} {L_{20}}{l_{21}}{L_{22}}, \FlaTwoByOne{y_T}{y_B} \rightarrow \FlaThreeByOneB{y_0}{\psi_1}{y_2} } \newcommand{\moveboundaries}{ \FlaTwoByTwo{L_{TL}}{L_{TR}} {L_{BL}}{L_{BR}} \leftarrow \FlaThreeByThreeTL{L_{00}}{l_{01}}{L_{02}} {l_{10}^T}{\lambda_{11}}{l_{12}^T} {L_{20}}{l_{21}}{L_{22}}, \FlaTwoByOne{y_T}{y_B} \leftarrow \FlaThreeByOneT{y_0}{\psi_1}{y_2} } \newcommand{\update}{ \begin{array}{l} y_2 := y_2 - \psi_1 l_{21} \end{array} } \FlaAlgorithm \end{equation*}
Figure 5.6.2.11. Lower triangular solve (with unit lower triangular matrix), Variant 1
\begin{equation*} \newcommand{\routinename}{\mbox{Solve } L z = y, \mbox{ overwriting } y \mbox{ with } z \mbox{ (Variant 2)}} \newcommand{\guard}{ n( L_{TL} ) \lt n( L ) } \newcommand{\partitionings}{ L \rightarrow \FlaTwoByTwo{L_{TL}}{L_{TR}} {L_{BL}}{L_{BR}}, y \rightarrow \FlaTwoByOne{y_T}{y_B} } \newcommand{\partitionsizes}{ L_{TL} {\rm ~is~} 0 \times 0 \mbox{ and } y_T \mbox{ has } 0 \mbox{ elements} } \newcommand{\repartitionings}{ \FlaTwoByTwo{L_{TL}}{L_{TR}} {L_{BL}}{L_{BR}} \rightarrow \FlaThreeByThreeBR{L_{00}}{l_{01}}{L_{02}} {l_{10}^T}{\lambda_{11}}{l_{12}^T} {L_{20}}{l_{21}}{L_{22}}, \FlaTwoByOne{y_T}{y_B} \rightarrow \FlaThreeByOneB{y_0}{\psi_1}{y_2} } \newcommand{\moveboundaries}{ \FlaTwoByTwo{L_{TL}}{L_{TR}} {L_{BL}}{L_{BR}} \leftarrow \FlaThreeByThreeTL{L_{00}}{l_{01}}{L_{02}} {l_{10}^T}{\lambda_{11}}{l_{12}^T} {L_{20}}{l_{21}}{L_{22}}, \FlaTwoByOne{y_T}{y_B} \leftarrow \FlaThreeByOneT{y_0}{\psi_1}{y_2} } \newcommand{\update}{ \begin{array}{l} \psi_1 := \psi_1 - l_{10}^T y_0 \end{array} } \FlaAlgorithm \end{equation*}
Figure 5.6.2.12. Lower triangular solve (with unit lower triangular matrix), Variant 2
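Both lower triangular solve variants can be sketched in NumPy (helper names ours); Variant 1 is "axpy-based" (it eliminates \(\psi_1\) from the remaining right-hand side) while Variant 2 is "dot-based":

```python
import numpy as np

def ltrsv_variant1(L, y):
    """Variant 1: y2 := y2 - psi1 l21. L is unit lower triangular;
    y is overwritten with the solution z of L z = y."""
    y = np.array(y, dtype=float)
    for j in range(len(y)):
        y[j+1:] -= y[j] * L[j+1:, j]
    return y

def ltrsv_variant2(L, y):
    """Variant 2: psi1 := psi1 - l10^T y0."""
    y = np.array(y, dtype=float)
    for j in range(len(y)):
        y[j] -= L[j, :j] @ y[:j]
    return y

# Usage: the two variants compute the same solution.
rng = np.random.default_rng(0)
L = np.tril(rng.standard_normal((5, 5)), -1) + np.eye(5)
b = rng.standard_normal(5)
z1 = ltrsv_variant1(L, b)
z2 = ltrsv_variant2(L, b)
assert np.allclose(z1, z2) and np.allclose(L @ z1, b)
```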
\begin{equation*} \newcommand{\routinename}{\mbox{Solve } U x = z, \mbox{ overwriting } z \mbox{ with } x \mbox{ (Variant 1)}} \newcommand{\guard}{ n( U_{BR} ) \lt n( U ) } \newcommand{\partitionings}{ U \rightarrow \FlaTwoByTwo{U_{TL}}{U_{TR}} {U_{BL}}{U_{BR}}, z \rightarrow \FlaTwoByOne{z_T}{z_B} } \newcommand{\partitionsizes}{ U_{BR} {\rm ~is~} 0 \times 0 \mbox{ and } z_B \mbox{ has } 0 \mbox{ elements} } \newcommand{\repartitionings}{ \FlaTwoByTwo{U_{TL}}{U_{TR}} {U_{BL}}{U_{BR}} \rightarrow \FlaThreeByThreeTL{U_{00}}{u_{01}}{U_{02}} {u_{10}^T}{\upsilon_{11}}{u_{12}^T} {U_{20}}{u_{21}}{U_{22}}, \FlaTwoByOne{z_T}{z_B} \rightarrow \FlaThreeByOneT{z_0}{\zeta_1}{z_2} } \newcommand{\moveboundaries}{ \FlaTwoByTwo{U_{TL}}{U_{TR}} {U_{BL}}{U_{BR}} \leftarrow \FlaThreeByThreeBR{U_{00}}{u_{01}}{U_{02}} {u_{10}^T}{\upsilon_{11}}{u_{12}^T} {U_{20}}{u_{21}}{U_{22}}, \FlaTwoByOne{z_T}{z_B} \leftarrow \FlaThreeByOneB{z_0}{\zeta_1}{z_2} } \newcommand{\update}{ \begin{array}{l} \zeta_1 := \zeta_1 / \upsilon_{11} \\ z_0 := z_0 - \zeta_1 u_{01} \end{array} } \FlaAlgorithm \end{equation*}
Figure 5.6.2.13. Upper triangular solve Variant 1
\begin{equation*} \newcommand{\routinename}{\mbox{Solve } U x = z, \mbox{ overwriting } z \mbox{ with } x \mbox{ (Variant 2)}} \newcommand{\guard}{ n( U_{BR} ) \lt n( U ) } \newcommand{\partitionings}{ U \rightarrow \FlaTwoByTwo{U_{TL}}{U_{TR}} {U_{BL}}{U_{BR}}, z \rightarrow \FlaTwoByOne{z_T}{z_B} } \newcommand{\partitionsizes}{ U_{BR} {\rm ~is~} 0 \times 0 \mbox{ and } z_B \mbox{ has } 0 \mbox{ elements} } \newcommand{\repartitionings}{ \FlaTwoByTwo{U_{TL}}{U_{TR}} {U_{BL}}{U_{BR}} \rightarrow \FlaThreeByThreeTL{U_{00}}{u_{01}}{U_{02}} {u_{10}^T}{\upsilon_{11}}{u_{12}^T} {U_{20}}{u_{21}}{U_{22}}, \FlaTwoByOne{z_T}{z_B} \rightarrow \FlaThreeByOneT{z_0}{\zeta_1}{z_2} } \newcommand{\moveboundaries}{ \FlaTwoByTwo{U_{TL}}{U_{TR}} {U_{BL}}{U_{BR}} \leftarrow \FlaThreeByThreeBR{U_{00}}{u_{01}}{U_{02}} {u_{10}^T}{\upsilon_{11}}{u_{12}^T} {U_{20}}{u_{21}}{U_{22}}, \FlaTwoByOne{z_T}{z_B} \leftarrow \FlaThreeByOneB{z_0}{\zeta_1}{z_2} } \newcommand{\update}{ \begin{array}{l} \zeta_1 := \zeta_1 - u_{12}^T z_2 \\ \zeta_1 := \zeta_1 / \upsilon_{11} \end{array} } \FlaAlgorithm \end{equation*}
Figure 5.6.2.14. Upper triangular solve Variant 2
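The upper triangular solves march from bottom-right to top-left; a NumPy sketch of both variants (helper names ours):

```python
import numpy as np

def utrsv_variant1(U, z):
    """Variant 1: zeta1 := zeta1 / upsilon11, then z0 := z0 - zeta1 u01.
    Overwrites z with the solution x of U x = z."""
    z = np.array(z, dtype=float)
    for j in reversed(range(len(z))):
        z[j] /= U[j, j]
        z[:j] -= z[j] * U[:j, j]
    return z

def utrsv_variant2(U, z):
    """Variant 2: zeta1 := (zeta1 - u12^T z2) / upsilon11."""
    z = np.array(z, dtype=float)
    for j in reversed(range(len(z))):
        z[j] = (z[j] - U[j, j+1:] @ z[j+1:]) / U[j, j]
    return z

# Usage: the two variants agree on a random upper triangular system.
rng = np.random.default_rng(0)
U = np.triu(rng.standard_normal((5, 5))) + 5 * np.eye(5)
b = rng.standard_normal(5)
x1 = utrsv_variant1(U, b)
x2 = utrsv_variant2(U, b)
assert np.allclose(x1, x2) and np.allclose(U @ x1, b)
```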

Cost of triangular solve: Starting with an \(n \times n \) (upper or lower) triangular matrix \(T \text{,}\) solving \(T x = b \) requires approximately \(n^2 \) flops.

Provided the solution of \(A x = b \) yields some accuracy in the solution, that accuracy can be improved through a process known as iterative refinement.

  • Let \(\widehat x \) be an approximate solution to \(A x = b \text{.}\)

  • Let \(\widehat \delta\! x \) be an approximate solution to \(A \delta\! x = b - A \widehat x \text{.}\)

  • Then \(\widehat x + \widehat \delta\! x \) is an improved approximation.

  • This process can be repeated until the accuracy in the computed solution is as good as warranted by the conditioning of \(A\) and the accuracy in \(b \text{.}\)
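The key point is that each refinement step reuses the factorization already computed, so a correction costs only \(O(n^2)\) flops. A SciPy sketch (the loop count and test matrix are illustrative):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6)) + 6 * np.eye(6)   # comfortably well-conditioned
x_true = rng.standard_normal(6)
b = A @ x_true

factors = lu_factor(A)                 # P A = L U, computed once
x = lu_solve(factors, b)               # initial approximate solution
for _ in range(3):
    r = b - A @ x                      # residual of the current approximation
    dx = lu_solve(factors, r)          # solve A dx = b - A x (reusing factors)
    x = x + dx                         # improved approximation
assert np.allclose(x, x_true)
```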

Definition 5.6.2.15. Hermitian positive definite matrix.

A matrix \(A \in \C^{n \times n} \) is Hermitian positive definite (HPD) if and only if it is Hermitian (\(A^H = A\)) and for all nonzero vectors \(x \in \C^n \) it is the case that \(x ^H A x \gt 0 \text{.}\) If in addition \(A \in \R^{n \times n} \) then \(A \) is said to be symmetric positive definite (SPD).

Some insights regarding HPD matrices:

  • \(B \) has linearly independent columns if and only if \(A = B^H B \) is HPD.

  • A diagonal matrix has only positive values on its diagonal if and only if it is HPD.

  • If \(A \) is HPD, then its diagonal elements are all real-valued and positive.

  • If \(A = \left( \begin{array}{c | c } A_{TL} \amp A_{TR} \\ \hline A_{BL} \amp A_{BR} \end{array} \right), \) where \(A_{TL} \) is square, is HPD, then \(A_{TL} \) and \(A_{BR} \) are HPD.
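Several of these insights can be checked numerically in one short NumPy sketch (a random \(B\) has linearly independent columns with probability one):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
A = B.conj().T @ B                     # A = B^H B is HPD
# Hermitian:
assert np.allclose(A, A.conj().T)
# Diagonal entries real-valued and positive:
assert np.all(np.diag(A).real > 0) and np.allclose(np.diag(A).imag, 0)
# x^H A x > 0 for a sample of nonzero vectors:
for _ in range(10):
    x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
    assert (x.conj() @ A @ x).real > 0
# A principal leading submatrix A_TL is again HPD (positive eigenvalues):
assert np.all(np.linalg.eigvalsh(A[:2, :2]) > 0)
```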

\begin{equation*} \newcommand{\routinename}{A = \mbox{Chol-right-looking}( A )} \newcommand{\guard}{ n( A_{TL} ) \lt n( A ) } \newcommand{\partitionings}{ A \rightarrow \FlaTwoByTwo{A_{TL}}{A_{TR}} {A_{BL}}{A_{BR}} } \newcommand{\partitionsizes}{ A_{TL} {\rm ~is~} 0 \times 0 } \newcommand{\repartitionings}{ \FlaTwoByTwo{A_{TL}}{A_{TR}} {A_{BL}}{A_{BR}} \rightarrow \FlaThreeByThreeBR{A_{00}}{a_{01}}{A_{02}} {a_{10}^T}{\alpha_{11}}{a_{12}^T} {A_{20}}{a_{21}}{A_{22}} } \newcommand{\moveboundaries}{ \FlaTwoByTwo{A_{TL}}{A_{TR}} {A_{BL}}{A_{BR}} \leftarrow \FlaThreeByThreeTL{A_{00}}{a_{01}}{A_{02}} {a_{10}^T}{\alpha_{11}}{a_{12}^T} {A_{20}}{a_{21}}{A_{22}} } \newcommand{\update}{ \begin{array}{l} \alpha_{11} := \lambda_{11} = \sqrt{ \alpha_{11} } \\ a_{21} := l_{21} = a_{21} / \alpha_{11} \\ A_{22} := A_{22} - a_{21} a_{21}^H ~~~~~~~~ \mbox{(syr: update only lower triangular part)} \end{array} } \FlaAlgorithm \end{equation*}
Figure 5.6.2.17. Cholesky factorization algorithm (right-looking variant). The operation "syr" ("symmetric rank-1 update") performs a rank-1 update that, in this algorithm, updates only the lower triangular part of the matrix.
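A NumPy sketch of the right-looking Cholesky algorithm (illustrative helper; note that only the lower triangle of \(A\) is referenced or updated):

```python
import numpy as np

def chol_right_looking(A):
    """Right-looking Cholesky: overwrite the lower triangle of HPD A
    with L such that A = L L^H. Only the lower triangle is touched
    (the "syr"/rank-1 pattern in the algorithm above)."""
    A = np.array(A, dtype=complex)
    n = A.shape[0]
    for j in range(n):
        A[j, j] = np.sqrt(A[j, j].real)        # alpha11 := sqrt(alpha11)
        A[j+1:, j] /= A[j, j]                  # a21 := a21 / alpha11
        # A22 := A22 - a21 a21^H, lower triangular part only:
        A[j+1:, j+1:] -= np.tril(np.outer(A[j+1:, j], A[j+1:, j].conj()))
    return np.tril(A)

# Usage: factor an HPD matrix and verify L L^H = A.
rng = np.random.default_rng(3)
B = rng.standard_normal((6, 4)) + 1j * rng.standard_normal((6, 4))
A = B.conj().T @ B                     # HPD test matrix
L = chol_right_looking(A)
assert np.allclose(L @ L.conj().T, A)
```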

The solution \(\widehat x \in \C^n \) to the linear least-squares (LLS) problem

\begin{equation} \| b - A \widehat x \|_2 = \min_{x \in \C^n} \| b - A x \|_2 ,\label{chapter05-cholesky-lls-eqn-1-summary}\tag{5.6.1} \end{equation}

where \(A \) has linearly independent columns, equals the solution to the normal equations

\begin{equation*} \begin{array}[t]{c} \underbrace{ A^H A }\\ B \end{array} \widehat x = \begin{array}[t]{c} \underbrace{A^H b} \\ y \end{array} . \end{equation*}

This solution can be computed via the steps

  • Form \(B = A^H A \text{.}\) Cost: approximately \(m n^2 \) flops.

  • Factor \(B = L L^H \) (Cholesky factorization). Cost: approximately \(n^3/3 \) flops.

  • Compute \(y = A^H b \text{.}\) Cost: approximately \(2 m n\) flops.

  • Solve \(L z = y \text{.}\) Cost: approximately \(n^2 \) flops.

  • Solve \(L^H \widehat x = z \text{.}\) Cost: approximately \(n^2 \) flops.

for a total of, approximately, \(m n^2 + n^3/3 \) flops.
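The steps above can be sketched with SciPy's Cholesky routines (a real-valued illustration; with probability one a random \(A\) has linearly independent columns):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(4)
m, n = 8, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

B = A.T @ A                            # form B = A^H A       (~ m n^2 flops)
y = A.T @ b                            # y = A^H b            (~ 2 m n flops)
c = cho_factor(B, lower=True)          # B = L L^H            (~ n^3/3 flops)
x_hat = cho_solve(c, y)                # two triangular solves (~ 2 n^2 flops)

# The residual of the LLS solution is orthogonal to the columns of A:
assert np.allclose(A.T @ (b - A @ x_hat), 0)
```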