## Subsection 5.2.3 Existence of the LU factorization

Now that we have an algorithm for computing the LU factorization, it is time to talk about when this LU factorization exists (in other words: when we can guarantee that the algorithm completes).

We would like to talk about the existence of the LU factorization for the more general case where $A$ is an $m \times n$ matrix, with $m \geq n \text{.}$ What does this mean?

###### Definition 5.2.3.1.

Given a matrix $A \in \C^{m \times n}$ with $m \geq n \text{,}$ its LU factorization is given by $A = L U$ where $L \in \C^{m\times n}$ is unit lower trapezoidal and $U \in \C^{n \times n}$ is upper triangular with nonzeroes on its diagonal.
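
To make the shapes in this definition concrete, here is a small illustrative sketch in Python (the course's own code is MATLAB; this snippet is ours, not part of the course materials): for $m = 3$ and $n = 2$, a unit lower trapezoidal $L$ times an upper triangular $U$ with nonzero diagonal gives a $3 \times 2$ matrix $A$.

```python
# Illustration only: L is 3x2 unit lower trapezoidal (unit diagonal,
# zeroes above it), U is 2x2 upper triangular with nonzero diagonal.
L = [[1.0, 0.0],
     [2.0, 1.0],
     [3.0, -1.0]]
U = [[4.0, 5.0],
     [0.0, 6.0]]

def matmul(X, Y):
    """Multiply matrices stored as lists of rows."""
    return [[sum(X[i][p] * Y[p][j] for p in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = matmul(L, U)
print(A)  # [[4.0, 5.0], [8.0, 16.0], [12.0, 9.0]]
```

Notice that $A$ inherits linearly independent columns from the nonzero diagonal of $U$.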

The first question we will ask is when the LU factorization exists. For this, we need another definition.

###### Definition 5.2.3.2.

For $k \leq n \text{,}$ the $k \times k$ principal leading submatrix of a matrix $A$ is defined to be the square matrix $A_{TL} \in \C^{k \times k}$ such that $A = \FlaTwoByTwo{A_{TL}}{A_{TR}}{A_{BL}}{A_{BR}} \text{.}$
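
In code, extracting $A_{TL}$ is simply taking the intersection of the first $k$ rows and first $k$ columns. A minimal sketch (the helper name is ours, for illustration):

```python
# Hypothetical helper, not from the text: the k x k principal leading
# submatrix is the top-left k x k block of A.
def principal_leading_submatrix(A, k):
    return [row[:k] for row in A[:k]]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(principal_leading_submatrix(A, 2))  # [[1, 2], [4, 5]]
```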

This definition allows us to state necessary and sufficient conditions for when a matrix with $n$ linearly independent columns has an LU factorization:

###### Homework 5.2.3.1.

Prove Lemma 5.2.3.3.

Hint

You may use the fact that a triangular matrix has an inverse if and only if it has no zeroes on its diagonal.

Solution

The proof hinges on the fact that a triangular matrix is nonsingular if and only if it doesn't have any zeroes on its diagonal. Hence we can instead prove that $A = LU$ is nonsingular if and only if $U$ is nonsingular (since $L$ is unit lower triangular and hence has no zeroes on its diagonal).

• ($\Rightarrow$): Assume $A = L U$ is nonsingular. Since $L$ is nonsingular, $U = L^{-1} A \text{.}$ We can show that $U$ is nonsingular in a number of ways:

• We can explicitly give its inverse:

\begin{equation*} U ( A^{-1} L ) = L^{-1} A A^{-1} L = I. \end{equation*}

Hence $U$ has an inverse and is thus nonsingular.

• Alternatively, we can reason that the product of two nonsingular matrices, namely $L^{-1}$ and $A \text{,}$ is nonsingular.

• ($\Leftarrow$): Assume $A = L U$ and $U$ has no zeroes on its diagonal. We then know that both $L^{-1}$ and $U^{-1}$ exist. Again, we can either explicitly verify a known inverse of $A \text{:}$

\begin{equation*} A (U^{-1} L^{-1}) = L U U^{-1} L^{-1} = I \end{equation*}

or we can recall that the product of two nonsingular matrices, namely $U^{-1}$ and $L^{-1} \text{,}$ is nonsingular.
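
A quick numerical sanity check of this equivalence (an illustration, not a proof, in Python rather than the course's MATLAB): for $A = L U$ with unit lower triangular $L$, $\det(A) = \det(L)\det(U)$ equals the product of the diagonal entries of $U$, so $A$ is nonsingular exactly when that product is nonzero.

```python
# Illustration only: det(A) = det(L) det(U) = product of U's diagonal,
# since det(L) = 1 for unit lower triangular L.
def det2(M):
    """Determinant of a 2 x 2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

L = [[1.0, 0.0],
     [3.0, 1.0]]
U = [[2.0, 7.0],
     [0.0, 5.0]]
A = [[sum(L[i][p] * U[p][j] for p in range(2)) for j in range(2)]
     for i in range(2)]
print(det2(A), U[0][0] * U[1][1])  # both are 10.0
```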

• ($\Rightarrow$): Let nonsingular $A$ have a (unique) LU factorization. We will show that its principal leading submatrices are nonsingular.

Let

\begin{equation*} \begin{array}[t]{c} \underbrace{ \FlaTwoByTwo { A_{TL} }{ A_{TR} } { A_{BL} }{ A_{BR} } } \\ A \end{array} = \begin{array}[t]{c} \underbrace{ \FlaTwoByTwo { L_{TL} }{ 0 } { L_{BL} }{ L_{BR} } } \\ L \end{array} \begin{array}[t]{c} \underbrace{ \FlaTwoByTwo { U_{TL} }{ U_{TR} } { 0 }{ U_{BR} } } \\ U \end{array} \end{equation*}

be the LU factorization of $A \text{,}$ where $A_{TL}, L_{TL}, U_{TL} \in \Ckxk \text{.}$ By the assumption that $L U$ is the LU factorization of $A \text{,}$ we know that $U$ cannot have a zero on the diagonal and hence is nonsingular. Now, since

\begin{equation*} \begin{array}{rcl} \begin{array}[t]{c} \underbrace{ \FlaTwoByTwo { A_{TL} }{ A_{TR} } { A_{BL} }{ A_{BR} } } \\ A \end{array} \amp = \amp \begin{array}[t]{c} \underbrace{ \FlaTwoByTwo { L_{TL} }{ 0 } { L_{BL} }{ L_{BR} } } \\ L \end{array} \begin{array}[t]{c} \underbrace{ \FlaTwoByTwo { U_{TL} }{ U_{TR} } { 0 }{ U_{BR} } } \\ U \end{array} \\ \amp = \amp \FlaTwoByTwo { L_{TL} U_{TL} }{ L_{TL} U_{TR} } { L_{BL} U_{TL} }{ L_{BL} U_{TR} + L_{BR} U_{BR} }, \end{array} \end{equation*}

the $k \times k$ principal leading submatrix satisfies $A_{TL} = L_{TL} U_{TL} \text{,}$ which is nonsingular since $L_{TL}$ has a unit diagonal and $U_{TL}$ has no zeroes on its diagonal. Since $k$ was chosen arbitrarily, this means that all principal leading submatrices are nonsingular.

• ($\Leftarrow$): We will do a proof by induction on $n \text{.}$

• Base Case: $n = 1 \text{.}$ Then $A$ has the form $A = \FlaTwoByOneSingleLine{ \alpha_{11} }{ a_{21} }$ where $\alpha_{11}$ is a scalar. Since the principal leading submatrices are nonsingular, $\alpha_{11} \neq 0 \text{.}$ Hence $A = \begin{array}[t]{c} \underbrace{ \FlaTwoByOneSingleLine{ 1 }{ a_{21}/\alpha_{11} } }\\ L \end{array} \begin{array}[t]{c} \underbrace{ \alpha_{11} } \\ U \end{array}$ is the LU factorization of $A \text{.}$ This LU factorization is unique because the first element of $L$ must be $1 \text{.}$

• Inductive Step: Assume the result is true for all matrices with $n = k \text{.}$ Show it is true for matrices with $n = k+1\text{.}$

Let $A$ of size $n = k+1$ have nonsingular principal leading submatrices. Now, if an LU factorization of $A$ exists, $A = L U \text{,}$ then it would have to have the form

$$\begin{array}[t]{c} \underbrace{ \left( \begin{array}{c | c} A_{00} \amp a_{01} \\ \hline a_{10}^T \amp \alpha_{11} \\ A_{20} \amp a_{21} \end{array} \right)} \\ A \end{array} = \begin{array}[t]{c} \underbrace{ \left( \begin{array}{c | c} L_{00} \amp 0 \\ \hline l_{10}^T \amp 1 \\ L_{20} \amp l_{21} \end{array} \right)} \\ L \end{array} \begin{array}[t]{c} \underbrace{ \left( \begin{array}{c | c} U_{00} \amp u_{01} \\ \hline 0 \amp \upsilon_{11} \end{array} \right)} \\ U \end{array} .\label{chapter05-eqn-LUproof2}\tag{5.2.1}$$

If we can show that the different parts of $L$ and $U$ exist, are unique, and $\upsilon_{11} \neq 0 \text{,}$ we are done (since then $U$ is nonsingular). Equation (5.2.1) can be rewritten as

\begin{equation*} \left( \begin{array}{c } A_{00} \\ \hline a_{10}^T \\ A_{20} \end{array} \right) = \left( \begin{array}{c } L_{00} \\ \hline l_{10}^T \\ L_{20} \end{array} \right) U_{00} \mbox{ and } \left( \begin{array}{c } a_{01} \\ \hline \alpha_{11} \\ a_{21} \end{array} \right) = \left( \begin{array}{c } L_{00} u_{01} \\ \hline l_{10}^T u_{01} + \upsilon_{11} \\ L_{20} u_{01} + l_{21} \upsilon_{11} \end{array} \right), \end{equation*}

or, equivalently,

\begin{equation*} \left\{ \begin{array}{rcl} L_{00} u_{01} \amp = \amp a_{01} \\ \upsilon_{11} \amp = \amp \alpha_{11} - l_{10}^T u_{01} \\ l_{21} \amp = \amp (a_{21} - L_{20} u_{01} )/\upsilon_{11} \end{array} \right. \end{equation*}

Now, by the Inductive Hypothesis $L_{00} \text{,}$ $l_{10}^T \text{,}$ $L_{20} \text{,}$ and $U_{00}$ exist and are unique. So the question is whether $u_{01} \text{,}$ $\upsilon_{11} \text{,}$ and $l_{21}$ exist and are unique:

• $u_{01}$ exists and is unique. Since $L_{00}$ is nonsingular (it has ones on its diagonal) $L_{00} u_{01} = a_{01}$ has a solution that is unique.

• $\upsilon_{11}$ exists, is unique, and is nonzero. Since $l_{10}^T$ and $u_{01}$ exist and are unique, $\upsilon_{11} = \alpha_{11} - l_{10}^T u_{01}$ exists and is unique. It is also nonzero since the principal leading submatrix of $A$ given by

\begin{equation*} \FlaTwoByTwo { A_{00} }{ a_{01} } { a_{10}^T }{ \alpha_{11} } = \FlaTwoByTwo { L_{00} }{ 0 } { l_{10}^T }{ 1} \FlaTwoByTwo { U_{00} }{ u_{01} } { 0 }{ \upsilon_{11} }, \end{equation*}

is nonsingular by assumption and therefore $\upsilon_{11}$ must be nonzero.

• $l_{21}$ exists and is unique. Since $\upsilon_{11}$ exists, is unique, and is nonzero,

\begin{equation*} l_{21} = ( a_{21} - L_{20} u_{01} )/ \upsilon_{11} \end{equation*}

exists and is uniquely determined.

Thus the $m \times (k+1)$ matrix $A$ has a unique LU factorization.

• By the Principle of Mathematical Induction the result holds.
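
The forward direction of the proof can be sanity-checked numerically on a small example (an illustrative Python sketch, not part of the course materials): for $A = L U$ with unit lower triangular $L \text{,}$ the determinant of each $k \times k$ principal leading submatrix equals the product of the first $k$ diagonal entries of $U \text{,}$ and hence is nonzero.

```python
def det(M):
    """Determinant by cofactor expansion along the first row
    (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

L = [[1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, -1.0, 1.0]]
U = [[2.0, 1.0, 3.0], [0.0, 4.0, 1.0], [0.0, 0.0, 5.0]]
A = [[sum(L[i][p] * U[p][j] for p in range(3)) for j in range(3)]
     for i in range(3)]
for k in (1, 2, 3):
    A_TL = [row[:k] for row in A[:k]]
    diag_prod = 1.0
    for i in range(k):
        diag_prod *= U[i][i]
    print(det(A_TL) == diag_prod)  # True for each k
```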

The formulas in the inductive step of the proof of Theorem 5.2.3.4 suggest an alternative algorithm for computing the LU factorization of an $m \times n$ matrix $A$ with $m \geq n \text{,}$ given in Figure 5.2.3.5. This algorithm is often referred to as the (unblocked) left-looking algorithm.
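
To make those formulas concrete, here is a sketch of the left-looking algorithm in Python (the assignments use MATLAB; the function name is ours, and no pivoting is attempted). In step $j \text{,}$ only column $j$ of $A$ is updated, using the already computed columns $0, \ldots, j-1 \text{.}$

```python
# Illustrative sketch of the unblocked left-looking LU factorization,
# assuming the factorization exists (no pivoting).
def lu_left_looking(A):
    """Overwrite A (list of rows, m x n with m >= n) with its LU factors:
    the strictly lower part holds L (unit diagonal implied), the upper
    triangular part holds U."""
    m, n = len(A), len(A[0])
    for j in range(n):
        # Forward substitution: solve L00 u01 = a01, overwriting a01.
        for i in range(j):
            for p in range(i):
                A[i][j] -= A[i][p] * A[p][j]
        # upsilon11 := alpha11 - l10^T u01.
        for p in range(j):
            A[j][j] -= A[j][p] * A[p][j]
        # l21 := (a21 - L20 u01) / upsilon11.
        for i in range(j + 1, m):
            for p in range(j):
                A[i][j] -= A[i][p] * A[p][j]
            A[i][j] /= A[j][j]
    return A

A = [[4.0, 5.0],
     [8.0, 16.0],
     [12.0, 9.0]]
print(lu_left_looking(A))  # [[4.0, 5.0], [2.0, 6.0], [3.0, -1.0]]
```

The overwriting convention matches the text: after completion, $L$ sits strictly below the diagonal (with an implicit unit diagonal) and $U$ on and above it.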

###### Homework 5.2.3.2.

Show that if the left-looking algorithm in Figure 5.2.3.5 is applied to an $m \times n$ matrix, with $m \ge n \text{,}$ the cost is approximately $m n^2 - \frac{1}{3} n^3$ flops (just like the right-looking algorithm).

Solution

Consider the iteration where $A_{TL}$ is (initially) $k \times k \text{.}$ Then

• Solving $L_{00} u_{01} = a_{01}$ requires approximately $k^2$ flops.

• Updating $\alpha_{11} := \alpha_{11} - a_{10}^T a_{01}$ requires approximately $2k$ flops, which we will ignore.

• Updating $a_{21} := a_{21} - A_{20} a_{01}$ requires approximately $2(m-k-1) k$ flops.

• Updating $a_{21} := a_{21} / \alpha_{11}$ requires approximately $(m-k-1)$ flops, which we will ignore.

Thus, the total cost is given by, approximately,

\begin{equation*} \sum_{k=0}^{n-1} \left( k^2 + 2 ( m-k-1 ) k \right) \mbox{ flops}. \end{equation*}

Let us now simplify this:

\begin{equation*} \begin{array}{l} \sum_{k=0}^{n-1} \left( k^2 + 2 ( m-k-1 ) k \right) \\ ~~~=~~~~ \lt \mbox{ algebra } \gt \\ \sum_{k=0}^{n-1} k^2 + 2 \sum_{k=0}^{n-1} ( m-k-1 ) k \\ ~~~=~~~~ \lt \mbox{ algebra } \gt \\ \sum_{k=0}^{n-1} 2 (m-1) k - \sum_{k=0}^{n-1} k^2 \\ ~~~ \approx ~~~~ \lt \sum_{j=0}^{n-1} j \approx n^2/2 \mbox{ and } \sum_{j=0}^{n-1} j^2 \approx n^3/3 \gt \\ (m-1) n^2 - \frac{1}{3} n^3 \end{array} \end{equation*}

Had we not ignored the cost of $\alpha_{11} := \alpha_{11} - a_{10}^T a_{01} \text{,}$ which is approximately $2k$ flops (about $n^2$ flops in total), then the result would have been approximately

\begin{equation*} m n^2 - \frac{1}{3} n^3 \end{equation*}

instead of $(m-1) n^2 - \frac{1}{3} n^3 \text{,}$ which is identical to that of the right-looking algorithm in Figure 5.2.2.1. This makes sense, since the two algorithms perform the same operations in a different order.

Of course, regardless,

\begin{equation*} (m-1) n^2 - \frac{1}{3} n^3 \approx m n^2 - \frac{1}{3} n^3 \end{equation*}

if $m$ is large.
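
The flop count can be checked numerically (an illustrative sketch): the sum $\sum_{k=0}^{n-1} \left( k^2 + 2(m-k-1)k \right)$ has the exact closed form $(m-1)n(n-1) - n(n-1)(2n-1)/6 \text{,}$ which is well approximated by $(m-1)n^2 - \frac{1}{3} n^3$ and, for large $m \text{,}$ by $m n^2 - \frac{1}{3} n^3 \text{.}$

```python
# Compare the exact flop-count sum against its closed form and the
# approximation used in the text.
m, n = 1000, 100
exact = sum(k * k + 2 * (m - k - 1) * k for k in range(n))
closed = (m - 1) * n * (n - 1) - n * (n - 1) * (2 * n - 1) // 6
approx = (m - 1) * n * n - n ** 3 // 3
print(exact == closed)               # True
print(abs(exact - approx) / approx)  # small relative difference
```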

###### Remark 5.2.3.6.

A careful analysis would show that the left- and right-looking algorithms perform the exact same operations with the same elements of $A \text{,}$ except in a different order. Thus, it is no surprise that the costs of these algorithms are the same.
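
This remark can be tested experimentally with compact re-implementations of both algorithms (an illustrative Python sketch; the course's code is MATLAB, and no pivoting is attempted): on a matrix whose LU factorization exists, both overwrite $A$ with the same factors.

```python
def lu_right_looking(A):
    """Unblocked right-looking LU, overwriting A (m x n, m >= n)."""
    m, n = len(A), len(A[0])
    for k in range(n):
        for i in range(k + 1, m):
            A[i][k] /= A[k][k]                    # l21 := a21 / alpha11
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]      # A22 := A22 - l21 u12^T
    return A

def lu_left_looking(A):
    """Unblocked left-looking LU (compact form), overwriting A."""
    m, n = len(A), len(A[0])
    for j in range(n):
        for i in range(m):
            for p in range(min(i, j)):
                A[i][j] -= A[i][p] * A[p][j]
            if i > j:
                A[i][j] /= A[j][j]
    return A

A = [[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]]
B = [row[:] for row in A]
print(lu_right_looking(A) == lu_left_looking(B))  # True
```

On this example the two results agree exactly, since both algorithms perform the same floating point operations, only in a different order per column.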

###### Ponder This 5.2.3.3.

If $A$ is $m \times m$ (square!), then yet another algorithm can be derived by partitioning $A \text{,}$ $L \text{,}$ and $U$ so that

\begin{equation*} A = \left( \begin{array}{c c} A_{00} \amp a_{01} \\ a_{10}^T \amp \alpha_{11} \end{array} \right), L = \left( \begin{array}{c c} L_{00} \amp 0 \\ l_{10}^T \amp 1 \end{array} \right), U = \left( \begin{array}{c c} U_{00} \amp u_{01} \\ 0 \amp \upsilon_{11} \end{array} \right). \end{equation*}

Assume that $L_{00}$ and $U_{00}$ have already been computed in previous iterations, and determine how to compute $u_{01} \text{,}$ $l_{10}^T \text{,}$ and $\upsilon_{11}$ in the current iteration. Then fill in the algorithm:

This algorithm is often called the bordered LU factorization algorithm.

Next, modify the proof of Theorem 5.2.3.4 to show the existence of the LU factorization when $A$ is square and has nonsingular principal leading submatrices.

Finally, show that this bordered algorithm also requires approximately $2 m^3 / 3$ flops.

###### Homework 5.2.3.4.

Implement the algorithm given in Figure 5.2.3.5 as

function [ A_out ] = LU_left_looking( A )


by completing the code in Assignments/Week05/matlab/LU_left_looking.m. Input is an $m \times n$ matrix $A\text{.}$ Output is the matrix $A$ that has been overwritten by the LU factorization. You may want to use Assignments/Week05/matlab/test_LU_left_looking.m to check your implementation.

Solution

See Assignments/Week05/answers/LU_left_looking.m.