Subsection 3.2.2 Gram-Schmidt and the QR factorization

The discussion in the last unit motivates the following theorem:

In order to prove this theorem elegantly, we will first present, in the next unit, the Gram-Schmidt orthogonalization algorithm using FLAME notation.
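Before turning to the FLAME presentation, here is a minimal NumPy sketch (not from the text; it assumes a real-valued \(A \) with linearly independent columns) of classical Gram-Schmidt viewed as a QR factorization: column \(j \) of \(Q \) is the normalized \(a_j^\perp \text{,}\) and \(R \) collects the coefficients so that \(A = Q R \text{.}\)

    import numpy as np

    def classical_gram_schmidt(A):
        """Sketch of classical Gram-Schmidt QR.

        Assumes A is real with linearly independent columns.
        Returns Q with orthonormal columns and upper triangular R
        such that A = Q @ R.
        """
        m, n = A.shape
        Q = np.zeros((m, n))
        R = np.zeros((n, n))
        for j in range(n):
            # Coefficients of a_j along the previously computed q's.
            R[:j, j] = Q[:, :j].T @ A[:, j]
            # a_j^perp: the component of a_j orthogonal to span(q_0, ..., q_{j-1}).
            a_perp = A[:, j] - Q[:, :j] @ R[:j, j]
            # Normalize; this fails (division by zero) exactly when the
            # columns are linearly dependent, as discussed below.
            R[j, j] = np.linalg.norm(a_perp)
            Q[:, j] = a_perp / R[j, j]
        return Q, R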

Ponder This 3.2.2.1.

What happens in the Gram-Schmidt algorithm if the columns of \(A \) are NOT linearly independent? How might one fix this? How can the Gram-Schmidt algorithm be used to identify which columns of \(A \) are linearly independent?

Solution

If \(a_j \) is the first column such that the columns \(a_0, \ldots, a_{j} \) are linearly dependent, then \(a_j^\perp \) will equal the zero vector and the process breaks down.

When a column \(a_j \) with \(a_j^\perp \) equal to the zero vector is encountered, the columns can be rearranged (permuted) so that that column (or those columns) comes last.

Again, if \(a_j^\perp = 0 \) for some \(j \text{,}\) then the columns are linearly dependent, since \(a_j \) can then be written as a linear combination of the previous columns.
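A rough sketch of the fix just described (the function name and the tolerance tol are illustrative choices, not from the text): run Gram-Schmidt but skip any column whose component orthogonal to the previously accepted columns is numerically zero, which also identifies a linearly independent subset of the columns of \(A \text{.}\)

    import numpy as np

    def independent_columns(A, tol=1e-12):
        """Sketch: identify linearly independent columns of A via Gram-Schmidt.

        A column a_j is skipped when its component orthogonal to the
        previously kept columns is (numerically) zero, i.e., a_j^perp = 0.
        Returns the indices of the kept columns.
        """
        m, n = A.shape
        qs = []    # orthonormal vectors computed so far
        keep = []  # indices of columns judged linearly independent
        for j in range(n):
            a_perp = A[:, j].astype(float)
            for q in qs:
                # Subtract the component of a_j along q.
                a_perp = a_perp - (q @ A[:, j]) * q
            norm = np.linalg.norm(a_perp)
            # If a_j^perp is (numerically) zero, a_j depends on earlier columns.
            if norm > tol * max(np.linalg.norm(A[:, j]), 1.0):
                qs.append(a_perp / norm)
                keep.append(j)
        return keep

In exact arithmetic the test would simply be \(a_j^\perp \neq 0 \text{;}\) the relative tolerance is only there because, in floating point, \(a_j^\perp \) is rarely exactly zero.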