Subsection 7.3.4 Successive Over-Relaxation (SOR)

Recall that if \(A = D - L - U \) where \(-L \text{,}\) \(D \text{,}\) and \(-U \) are the strictly lower triangular, diagonal, and strictly upper triangular parts of \(A\text{,}\) then the Gauss-Seidel iteration for solving \(A x = y \) can be expressed as \(x^{(k+1)} = (D-L)^{-1}( U x^{(k)} + y ) \) or, equivalently, \(\chi_{i}^{(k+1)} \) solves

\begin{equation*} \sum_{j=0}^{i-1} \alpha_{i,j} \chi_j^{(k+1)} + \alpha_{i,i} \chi_{i}^{(k+1)} = - \sum_{j=i+1}^{n-1} \alpha_{i,j} \chi_j^{(k)} + \psi_{i}. \end{equation*}

where any term with a zero coefficient \(\alpha_{i,j} \) is skipped. We label this \(\chi_{i}^{(k+1)} \) as \(\chi_{i}^{{\rm GS}\,(k+1)} \) in our subsequent discussion.

What if we instead pick the next value a bit further along the direction of the Gauss-Seidel update:

\begin{equation*} \chi_{i}^{(k+1)} = \omega \chi_{i}^{{\rm GS}\,(k+1)} + (1-\omega) \chi_{i}^{(k)} , \end{equation*}

where \(\omega \geq 1 \text{.}\) This is known as over-relaxation; taking \(\omega = 1 \) recovers Gauss-Seidel. Solving for \(\chi_{i}^{{\rm GS}\,(k+1)} \) yields

\begin{equation*} \chi_{i}^{{\rm GS}\,(k+1)} = \frac{1}{\omega} \chi_{i}^{(k+1)} - \frac{1-\omega}{\omega} \chi_{i}^{(k)} \end{equation*}

and

\begin{equation*} \sum_{j=0}^{i-1} \alpha_{i,j} \chi_j^{(k+1)} + \alpha_{i,i} \left[ \frac{1}{\omega} \chi_{i}^{(k+1)} - \frac{1-\omega}{\omega} \chi_{i}^{(k)} \right] = - \sum_{j=i+1}^{n-1} \alpha_{i,j} \chi_j^{(k)} + \psi_{i} \end{equation*}

or, equivalently,

\begin{equation*} \sum_{j=0}^{i-1} \alpha_{i,j} \chi_j^{(k+1)} + \frac{1}{\omega} \alpha_{i,i} \chi_{i}^{(k+1)} = \frac{1-\omega}{\omega} \alpha_{i,i} \chi_{i}^{(k)} - \sum_{j=i+1}^{n-1} \alpha_{i,j} \chi_j^{(k)}+\psi_{i}. \end{equation*}
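In code, this update can be carried out one component at a time, overwriting \(x \) as new values become available. Below is a minimal NumPy sketch; the function name sor_sweep and its interface are our own illustration, not part of any established library.

```python
import numpy as np

def sor_sweep(A, x, y, omega):
    """One forward SOR sweep, updating x in place.

    For i = 0, ..., n-1 it computes the Gauss-Seidel value
    x_i^{GS} from the already-updated entries x[0:i] and the
    old entries x[i+1:n], then sets
        x[i] := omega * x_i^{GS} + (1 - omega) * x[i].
    """
    n = A.shape[0]
    for i in range(n):
        # Gauss-Seidel value: solve row i for the i-th unknown.
        sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
        x_gs = (y[i] - sigma) / A[i, i]
        # Over-relax: step a bit beyond the Gauss-Seidel value.
        x[i] = omega * x_gs + (1 - omega) * x[i]
    return x
```

With \(\omega = 1 \) this sweep reduces to exactly one Gauss-Seidel iteration.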

This is equivalent to splitting

\begin{equation*} A = \begin{array}[t]{c} \underbrace{\left( \frac{1}{\omega} D - L \right)}\\ M \end{array} - \begin{array}[t]{c} \underbrace{\left( \frac{1-\omega}{\omega} D + U \right)}\\ N \end{array}, \end{equation*}

an iteration known as successive over-relaxation (SOR). The idea now is that the relaxation parameter \(\omega\) can often be chosen to improve (reduce) the spectral radius of \(M^{-1} N \text{,}\) thus accelerating convergence.
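To see this effect numerically, one can form \(M \) and \(N \) explicitly and compute the spectral radius of \(M^{-1} N \) for a few choices of \(\omega \text{.}\) The sketch below does this for a small tridiagonal model problem (our own choice of test matrix, used only for illustration):

```python
import numpy as np

def sor_spectral_radius(A, omega):
    """Spectral radius of the SOR iteration matrix M^{-1} N for the
    splitting M = D/omega - L, N = (1-omega)/omega D + U,
    where A = D - L - U."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    M = D / omega - L
    N = (1 - omega) / omega * D + U
    G = np.linalg.solve(M, N)
    return max(abs(np.linalg.eigvals(G)))

# Model problem: tridiagonal matrix from discretizing -u'' on a uniform grid.
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
for omega in (1.0, 1.2, 1.5, 1.8):
    print(omega, sor_spectral_radius(A, omega))
```

For this particular matrix, values of \(\omega \) near 1.7 reduce the spectral radius substantially compared to \(\omega = 1 \) (Gauss-Seidel).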

We continue with \(A = D - L - U \text{,}\) where \(-L \text{,}\) \(D \text{,}\) and \(-U \) are the strictly lower triangular, diagonal, and strictly upper triangular parts of \(A\text{.}\) SOR corresponds to the splitting

\begin{equation*} A = \begin{array}[t]{c} \underbrace{\left( \frac{1}{\omega} D - L \right)}\\ M_F \end{array} - \begin{array}[t]{c} \underbrace{\left( \frac{1-\omega}{\omega} D + U \right)}\\ N_F \end{array}, \end{equation*}

where the \(F\) stands for "Forward." An alternative is to compute the elements of \(x \) in reverse order, using the latest available values. This is equivalent to the splitting

\begin{equation*} A = \begin{array}[t]{c} \underbrace{\left( \frac{1}{\omega} D - U \right)}\\ M_R \end{array} - \begin{array}[t]{c} \underbrace{\left( \frac{1-\omega}{\omega} D + L \right)}\\ N_R \end{array}, \end{equation*}

where the \(R\) stands for "Reverse." The symmetric successive over-relaxation (SSOR) iteration combines the "forward" SOR with a "reverse" SOR, much like the symmetric Gauss-Seidel iteration does:

\begin{equation*} \begin{array}{rcl} x^{(k+\frac{1}{2})} & = & M_F^{-1} ( N_F x^{(k)} + y ) \\ x^{(k+1)} & = & M_R^{-1} ( N_R x^{(k+\frac{1}{2})} + y ) . \end{array} \end{equation*}

This can be expressed as splitting \(A = M - N \text{.}\) The details are a bit messy, and we will skip them.
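For completeness, here is a hedged sketch of one SSOR step built directly from the two splittings above. Forming \(M_F \text{,}\) \(N_F \text{,}\) \(M_R \text{,}\) and \(N_R \) as dense matrices is wasteful in practice and is done here only to mirror the formulas:

```python
import numpy as np

def ssor_step(A, x, y, omega):
    """One SSOR step: a forward SOR sweep followed by a reverse sweep,
    using the splittings M_F = D/omega - L, N_F = (1-omega)/omega D + U
    and M_R = D/omega - U, N_R = (1-omega)/omega D + L."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    M_F = D / omega - L
    N_F = (1 - omega) / omega * D + U
    M_R = D / omega - U
    N_R = (1 - omega) / omega * D + L
    x_half = np.linalg.solve(M_F, N_F @ x + y)     # forward sweep
    return np.linalg.solve(M_R, N_R @ x_half + y)  # reverse sweep
```

Since \(M_F \) is lower triangular and \(M_R \) is upper triangular, each solve amounts to a triangular substitution; an actual implementation would exploit that structure rather than calling a general solver.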