## Subsection 5.6.2 Summary

The process known as Gaussian elimination is equivalent to computing the LU factorization of the matrix $A \in \C^{m \times m}$

\begin{equation*} A = L U, \end{equation*}

where $L$ is a unit lower triangular matrix and $U$ is an upper triangular matrix.

###### Definition 5.6.2.1.

Given a matrix $A \in \C^{m \times n}$ with $m \geq n \text{,}$ its LU factorization is given by $A = L U$ where $L \in \C^{m\times n}$ is unit lower trapezoidal and $U \in \C^{n \times n}$ is upper triangular with nonzeroes on its diagonal.

For $k \leq n \text{,}$ the $k \times k$ principal leading submatrix of a matrix $A$ is defined to be the square matrix $A_{TL} \in \C^{k \times k}$ such that $A = \FlaTwoByTwo{A_{TL}}{A_{TR}}{A_{BL}}{A_{BR}} \text{.}$

The right-looking algorithm performs the same computations as the algorithm

\begin{equation*} \begin{array}{l} {\bf for~} j:=0, \ldots , n-1 \\ ~~~ \left. \begin{array}{l} {\bf for~} i:=j+1, \ldots , n-1 \\ ~~~ \lambda_{i,j} := \alpha_{i,j} / \alpha_{j,j} \\ ~~~ \alpha_{i,j} := 0 \\ {\bf endfor} \end{array} \right\} \mbox{ compute multipliers} \\ ~~~ {\bf for~} i:=j+1, \ldots , n-1 \\ ~~~ ~~~ \left. \begin{array}{l} {\bf for~} k:=j+1, \ldots , n-1 \\ ~~~ \alpha_{i,k} := \alpha_{i,k} - \lambda_{i,j} \alpha_{j,k} \\ {\bf endfor} \end{array} \right\} \mbox{ subtract } \lambda_{i,j} \mbox{ times row } j \mbox{ from row } i \\ ~~~ {\bf endfor} \\ {\bf endfor} \end{array} \end{equation*}
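The loops above translate directly into code. Below is a minimal NumPy sketch (the function name `lu_right_looking` is illustrative); it stores the multipliers $\lambda_{i,j}$ in the zeroed-out strictly lower triangular part, so the result holds $L$ and $U$ in one array. It assumes a square matrix with nonzero pivots, since there is no pivoting.

```python
import numpy as np

def lu_right_looking(A):
    """Right-looking LU factorization without pivoting.

    Overwrites a copy of A so that the strictly lower triangular part
    holds the multipliers of L (unit diagonal implicit) and the upper
    triangular part holds U. Assumes nonzero pivots.
    """
    A = np.array(A, dtype=float)   # work on a copy
    n = A.shape[0]
    for j in range(n):
        # compute multipliers lambda_{i,j} = alpha_{i,j} / alpha_{j,j}
        A[j+1:, j] /= A[j, j]
        # subtract lambda_{i,j} times row j from row i (rank-1 update)
        A[j+1:, j+1:] -= np.outer(A[j+1:, j], A[j, j+1:])
    return A

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
LU = lu_right_looking(A)
L = np.tril(LU, -1) + np.eye(3)   # unit lower triangular factor
U = np.triu(LU)                   # upper triangular factor
assert np.allclose(L @ U, A)
```

The two inner loops become one vector scaling and one rank-1 (outer product) update of the trailing matrix, which is exactly why this variant is called "right-looking."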

Solving $A x = b$ via LU factorization:

• Compute the LU factorization $A = L U \text{.}$

• Solve $L z = b \text{.}$

• Solve $U x = z \text{.}$
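The two triangular solves can be sketched as follows; the helper names `forward_subst_unit` and `back_subst` are illustrative, and the factors $L$ and $U$ are assumed given.

```python
import numpy as np

def forward_subst_unit(L, b):
    """Solve L z = b where L is unit lower triangular."""
    z = np.array(b, dtype=float)
    for i in range(len(z)):
        z[i] -= L[i, :i] @ z[:i]   # subtract known terms; diagonal is 1
    return z

def back_subst(U, z):
    """Solve U x = z where U is upper triangular with nonzero diagonal."""
    x = np.array(z, dtype=float)
    for i in reversed(range(len(x))):
        x[i] = (x[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

L = np.array([[1.0, 0.0, 0.0],
              [2.0, 1.0, 0.0],
              [4.0, 3.0, 1.0]])
U = np.array([[2.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 2.0]])
A = L @ U
b = A @ np.ones(3)
x = back_subst(U, forward_subst_unit(L, b))
assert np.allclose(A @ x, b)
```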

Cost of LU factorization: Starting with an $m \times n$ matrix $A \text{,}$ LU factorization requires approximately $m n^2 - \frac{1}{3} n^3$ flops. If $m = n$ this becomes

\begin{equation*} \frac{2}{3} n^3 \mbox{ flops}. \end{equation*}
###### Definition 5.6.2.7.
A matrix $L_k$ of the form
\begin{equation*} L_k = \FlaThreeByThreeBR { I_k }{ 0 }{ 0 } { 0 }{ 1 }{ 0 } { 0 }{ l_{21} }{ I }, \end{equation*}
where $I_k$ is the $k \times k$ identity matrix and $I$ is an identity matrix "of appropriate size" is called a Gauss transform.
\begin{equation*} L_k^{-1} = \FlaThreeByThreeBR { I_k }{ 0 }{ 0 } { 0 }{ 1 }{ 0 } { 0 }{ l_{21} }{ I }^{-1} = \FlaThreeByThreeBR { I_k }{ 0 }{ 0 } { 0 }{ 1 }{ 0 } { 0 }{ -l_{21} }{ I }. \end{equation*}
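That inverting a Gauss transform amounts to negating $l_{21}$ can be checked numerically; a minimal sketch with illustrative sizes ($n = 5$, $k = 1$):

```python
import numpy as np

n, k = 5, 1
l21 = np.array([0.5, -2.0, 3.0])   # arbitrary multipliers below row k

Lk = np.eye(n)
Lk[k+1:, k] = l21                  # the Gauss transform L_k

Lk_inv = np.eye(n)
Lk_inv[k+1:, k] = -l21             # claimed inverse: negate l_{21}

assert np.allclose(Lk @ Lk_inv, np.eye(n))
assert np.allclose(Lk_inv @ Lk, np.eye(n))
```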
###### Definition 5.6.2.8.

Given

\begin{equation*} p = \left( \begin{array}{c} \pi_0 \\ \vdots \\ \pi_{n-1} \end{array} \right), \end{equation*}

where $\{ \pi_0, \pi_1, \ldots , \pi_{n-1} \}$ is a permutation (rearrangement) of the integers $\{ 0 , 1, \ldots , n-1 \} \text{,}$ we define the permutation matrix $P( p )$ by

\begin{equation*} P( p ) = \left( \begin{array}{c} e_{\pi_0}^T \\ \vdots \\ e_{\pi_{n-1}}^T \end{array} \right). \end{equation*}

If $P$ is a permutation matrix then $P^{-1} = P^T \text{.}$
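A short sketch of this definition (the function name `P` mirrors the notation $P(p)$): row $i$ of $P(p)$ is $e_{\pi_i}^T$, so $P(p)\,x$ picks out the entries of $x$ in the order given by $p$, and $P^{-1} = P^T$.

```python
import numpy as np

def P(p):
    """Permutation matrix P(p): row i is e_{pi_i}^T."""
    return np.eye(len(p))[p, :]

p = [2, 0, 3, 1]
Pp = P(p)
x = np.array([10.0, 20.0, 30.0, 40.0])

assert np.allclose(Pp @ x, x[p])           # P(p) x reorders the entries
assert np.allclose(Pp @ Pp.T, np.eye(4))   # P^{-1} = P^T
```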

###### Definition 5.6.2.9. Elementary pivot matrix.

Given $\pi \in \{ 0, \ldots , n-1 \}$ define the elementary pivot matrix

\begin{equation*} \widetilde P( \pi ) = \left( \begin{array}{c} e_\pi^T \\ \hline e_1^T \\ \vdots \\ e_{\pi-1}^T \\ \hline e_0^T \\ \hline e_{\pi+1}^T \\ \vdots \\ e_{n-1}^T \end{array} \right) \end{equation*}

or, equivalently,

\begin{equation*} \widetilde P( \pi ) = \left\{ \begin{array}{c l} I_n & \mbox{if } \pi = 0 \\ \left( \begin{array}{c | c | c | c} 0 & 0 & 1 & 0 \\ \hline 0 & I_{\pi-1} & 0 & 0 \\ \hline 1 & 0 & 0 & 0 \\ \hline 0 & 0 & 0 & I_{n-\pi-1} \end{array} \right) & \mbox{otherwise}, \end{array} \right. \end{equation*}

where $n$ is the size of the permutation matrix.
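In words, $\widetilde P( \pi )$ swaps entry $0$ with entry $\pi$ and leaves the rest alone. A minimal sketch (the helper name `P_tilde` is illustrative):

```python
import numpy as np

def P_tilde(pi, n):
    """Elementary pivot matrix: swaps entries 0 and pi of a length-n vector."""
    p = list(range(n))
    p[0], p[pi] = p[pi], p[0]      # the underlying permutation
    return np.eye(n)[p, :]

M = P_tilde(2, 4)
x = np.array([1.0, 2.0, 3.0, 4.0])

assert np.allclose(M @ x, [3.0, 2.0, 1.0, 4.0])  # entries 0 and 2 swapped
assert np.allclose(M @ M, np.eye(4))             # swapping twice undoes it
```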

Solving $A x = b$ via LU factorization with row pivoting:

• Compute the LU factorization with pivoting $PA = L U \text{.}$

• Apply the row exchanges to the right-hand side: $y = P b \text{.}$

• Solve $L z = y \text{.}$

• Solve $U x = z \text{.}$
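The four steps above can be sketched in plain NumPy. This sketch assumes partial pivoting (choosing the largest-magnitude entry in each column as the pivot), which is one common strategy rather than the only option; the function name `solve_with_pivoting` is illustrative.

```python
import numpy as np

def solve_with_pivoting(A, b):
    """Sketch: compute P A = L U with partial pivoting, then the solves."""
    A = np.array(A, dtype=float)   # overwritten with L (below) and U
    n = A.shape[0]
    p = np.arange(n)               # tracks the row exchanges
    for j in range(n):
        # swap in the largest-magnitude pivot from column j
        m = j + np.argmax(np.abs(A[j:, j]))
        A[[j, m]] = A[[m, j]]
        p[[j, m]] = p[[m, j]]
        A[j+1:, j] /= A[j, j]                              # multipliers
        A[j+1:, j+1:] -= np.outer(A[j+1:, j], A[j, j+1:])  # update
    y = np.asarray(b, dtype=float)[p]     # y = P b
    z = y.copy()
    for i in range(n):                    # solve L z = y (unit lower)
        z[i] -= A[i, :i] @ z[:i]
    x = z.copy()
    for i in reversed(range(n)):          # solve U x = z
        x[i] = (x[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 3.0],
              [2.0, 5.0, 1.0]])   # note the zero in the (0,0) position
b = np.array([3.0, 5.0, 8.0])
x = solve_with_pivoting(A, b)
assert np.allclose(A @ x, b)
```

Note that without pivoting this matrix could not be factored at all, since its first pivot is zero.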

Cost of triangular solve: Starting with an $n \times n$ (upper or lower) triangular matrix $T \text{,}$ solving $T x = b$ requires approximately $n^2$ flops.

Provided the solution of $A x = b$ yields some accuracy in the solution, that accuracy can be improved through a process known as iterative refinement.

• Let $\widehat x$ be an approximate solution to $A x = b \text{.}$

• Let $\widehat \delta\! x$ be an approximate solution to $A \delta\! x = b - A \widehat x \text{.}$

• Then $\widehat x + \widehat \delta\! x$ is an improved approximation.

• This process can be repeated until the accuracy in the computed solution is as good as warranted by the conditioning of $A$ and the accuracy in $b \text{.}$
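The refinement loop can be sketched as follows. In practice one would reuse the existing LU factors to solve $A \, \delta\! x = r$ cheaply; here `np.linalg.solve` stands in for that step, and the function name `refine` and the iteration count are illustrative.

```python
import numpy as np

def refine(A, b, x_hat, iterations=3):
    """Iterative refinement of an approximate solution x_hat to A x = b."""
    for _ in range(iterations):
        r = b - A @ x_hat           # residual of the current approximation
        dx = np.linalg.solve(A, r)  # approximate solution of A dx = r
        x_hat = x_hat + dx          # improved approximation
    return x_hat

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x0 = np.array([0.2, 0.6])           # deliberately rough starting guess
x = refine(A, b, x0)
assert np.allclose(A @ x, b)
```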

###### Definition5.6.2.15. Hermitian positive definite matrix.

A matrix $A \in \C^{n \times n}$ is Hermitian positive definite (HPD) if and only if it is Hermitian ($A^H = A$) and for all nonzero vectors $x \in \C^n$ it is the case that $x ^H A x \gt 0 \text{.}$ If in addition $A \in \R^{n \times n}$ then $A$ is said to be symmetric positive definite (SPD).

Some insights regarding HPD matrices:

• $B$ has linearly independent columns if and only if $A = B^H B$ is HPD.

• A diagonal matrix has only positive values on its diagonal if and only if it is HPD.

• If $A$ is HPD, then its diagonal elements are all real-valued and positive.

• If $A = \left( \begin{array}{c | c } A_{TL} \amp A_{TR} \\ \hline A_{BL} \amp A_{BR} \end{array} \right),$ where $A_{TL}$ is square, is HPD, then $A_{TL}$ and $A_{BR}$ are HPD.
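Two of these insights can be checked numerically. The sketch below forms $A = B^H B$ from a random complex $B$ (whose columns are, with probability one, linearly independent), confirms $A$ is Hermitian with real positive diagonal, and uses the fact that `np.linalg.cholesky` succeeds only for HPD input.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
A = B.conj().T @ B                       # A = B^H B, Hermitian by construction

assert np.allclose(A, A.conj().T)        # Hermitian
assert np.all(np.real(np.diag(A)) > 0)   # diagonal is positive ...
assert np.allclose(np.imag(np.diag(A)), 0)  # ... and real-valued

L = np.linalg.cholesky(A)                # exists because A is HPD
assert np.allclose(L @ L.conj().T, A)
```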

The solution $\widehat x \in \C^n$ to the linear least-squares (LLS) problem

$$\| b - A \widehat x \|_2 = \min_{x \in \C^n} \| b - A x \|_2 ,\label{chapter05-cholesky-lls-eqn-1-summary}\tag{5.6.1}$$

where $A$ has linearly independent columns, equals the solution to the normal equations

\begin{equation*} \underbrace{ A^H A }_{B} \widehat x = \underbrace{ A^H b }_{y} . \end{equation*}

This solution can be computed via the following steps:

• Form $B = A^H A \text{.}$ Cost: approximately $m n^2$ flops.

• Factor $B = L L^H$ (Cholesky factorization). Cost: approximately $n^3/3$ flops.

• Compute $y = A^H b \text{.}$ Cost: approximately $2 m n$ flops.

• Solve $L z = y \text{.}$ Cost: approximately $n^2$ flops.

• Solve $L^H \hat x = z \text{.}$ Cost: approximately $n^2$ flops.

for a total of, approximately, $m n^2 + n^3/3$ flops.
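The five steps can be sketched directly in NumPy (the data below are illustrative; `np.linalg.solve` stands in for dedicated triangular solves, which would cost only $n^2$ flops each):

```python
import numpy as np

# Example overdetermined system: fit a line to four points.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0, 4.0])

B = A.T @ A                       # form B = A^H A
L = np.linalg.cholesky(B)         # factor B = L L^H
y = A.T @ b                       # compute y = A^H b
z = np.linalg.solve(L, y)         # solve L z = y
x_hat = np.linalg.solve(L.T, z)   # solve L^H x_hat = z

# x_hat minimizes || b - A x ||_2; compare with NumPy's LLS solver
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x_hat, x_ref)
```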