
4 Some General Theory

The reason we have taken so much effort to determine the iteration matrix is that if

$\displaystyle {\rm\bf x}^{(k+1)}={\rm\bf G}{\rm\bf x}^{(k)}+{\rm\bf b},$ (146)

and the true solution $ {\rm\bf s}$ satisfies

$\displaystyle {\rm\bf s}={\rm\bf G}{\rm\bf s}+{\rm\bf b},$ (147)

then, subtracting (147) from (146) and defining the error $ {\rm\bf e}^{(k)}={\rm\bf x}^{(k)}-{\rm\bf s},$ we must have

$\displaystyle {\rm\bf e}^{(k+1)}={\rm\bf G}{\rm\bf e}^{(k)}={\rm\bf G}^{k+1}{\rm\bf e}^{(0)}.$ (148)

Thus the iteration will converge provided $ {\rm\bf G}^k$ tends to the zero matrix as $ k$ becomes large.
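By way of illustration, here is a minimal NumPy sketch of the iteration (the particular matrix $ {\rm\bf G}$ and vector $ {\rm\bf b}$ are arbitrary examples of our own, not taken from these notes): the error norm shrinks by roughly a factor of $ \rho ({\rm\bf G})$ per step.

import numpy as np

# Arbitrary example: any G with spectral radius < 1 will do.
G = np.array([[0.5, 0.2, 0.0],
              [0.1, 0.4, 0.1],
              [0.0, 0.3, 0.2]])
b = np.array([1.0, 2.0, 3.0])

s = np.linalg.solve(np.eye(3) - G, b)    # true solution: s = G s + b
print("rho(G) =", max(abs(np.linalg.eigvals(G))))   # < 1 here

x = np.zeros(3)                          # arbitrary starting guess
for k in range(20):
    x = G @ x + b                        # x^(k+1) = G x^(k) + b
    print(k + 1, np.linalg.norm(x - s))  # ||e^(k)|| decays roughly like rho(G)^k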

Theorem If $ {\rm\bf G}$ has eigenvalues $ \lambda_r$, the iteration converges as $ k\to\infty$ provided $ \rho ({\rm\bf G})=\max_r\vert\lambda_r\vert <1$. We call $ \rho$ the spectral radius of the matrix $ {\rm\bf G}$.

We prove only a special case, where $ {\rm\bf G}$ has a full set of eigenvectors. Then $ {\rm\bf G}={\rm\bf S}\Lambda{\rm\bf S}^{-1}$, where $ \Lambda$ is a diagonal matrix of the eigenvalues, so $ {\rm\bf G}^n={\rm\bf S}\Lambda^n{\rm\bf S}^{-1}$, and this tends to the zero matrix provided the moduli of all the eigenvalues are less than one.
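This step is easy to check numerically; a sketch, reusing the arbitrary example matrix from above, with $ {\rm\bf S}$ and the eigenvalues supplied by np.linalg.eig:

import numpy as np

G = np.array([[0.5, 0.2, 0.0],
              [0.1, 0.4, 0.1],
              [0.0, 0.3, 0.2]])

lam, S = np.linalg.eig(G)                    # columns of S are eigenvectors
n = 30
Gn = S @ np.diag(lam**n) @ np.linalg.inv(S)  # G^n = S Lambda^n S^{-1}
print(np.allclose(Gn, np.linalg.matrix_power(G, n)))  # True
print(np.abs(Gn).max())                      # tiny, since all |lambda| < 1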

Theorem (Gershgorin) The eigenvalues of $ {\rm\bf G}= \{g_{rs}\}$ lie within the union of the discs

$\displaystyle D_r\ :\ \vert\lambda -g_{rr}\vert\le\sum_{s\ne r}\vert g_{rs}\vert.$ (149)

This can be seen by supposing eigenvalue $ \lambda$ has eigenvector $ {\rm\bf z}$ so that

$\displaystyle {\rm\bf G}{\rm\bf z}=\lambda{\rm\bf z}\ \Rightarrow\ (\lambda-g_{rr})z_r=\sum_{s\ne r}g_{rs}z_s,\ r=1,\ldots ,m.$ (150)

Choose $ r$ such that $ \vert z_r\vert =\vert\vert{\rm\bf z}\vert\vert_\infty$, and then

$\displaystyle \vert\lambda -g_{rr}\vert\,\vert\vert{\rm\bf z}\vert\vert_\infty =\Big\vert\sum_{s\ne r}g_{rs}z_s\Big\vert\le\sum_{s\ne r}\vert g_{rs}\vert\,\vert z_s\vert\le \sum_{s\ne r}\vert g_{rs}\vert\,\vert\vert{\rm\bf z}\vert\vert_\infty,$ (151)

whence the result follows.
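A quick numerical illustration of the theorem (the matrix is again an arbitrary example of our own): every computed eigenvalue lies in at least one disc $ D_r$.

import numpy as np

G = np.array([[ 4.0, 1.0, 0.5],
              [ 0.2, 3.0, 0.3],
              [-1.0, 0.1, 2.0]])

centres = np.diag(G)                                 # g_rr
radii = np.sum(np.abs(G), axis=1) - np.abs(centres)  # sum over s != r of |g_rs|

for lam in np.linalg.eigvals(G):
    in_disc = np.abs(lam - centres) <= radii         # |lambda - g_rr| <= radius
    print(lam, "lies in disc(s)", np.nonzero(in_disc)[0])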

Lemma Jacobi iteration will converge if $ {\rm\bf A}$ is strictly diagonally dominant,

$\displaystyle \vert a_{rr}\vert > \sum_{s\ne r}\vert a_{rs}\vert,\ \ \forall r=1,\ldots ,m.$ (152)

In this case the Jacobi iteration matrix has entries

$\displaystyle J_{rs}=\left\{\begin{array}{l l} 0, &r=s, \\  \displaystyle{-{a_{rs}\over a_{rr}}},&s\ne r,\\  \end{array}\right.$ (153)

so that, since every diagonal entry $ J_{rr}$ is zero, each Gershgorin disc is centred at the origin, and every eigenvalue of the iteration matrix satisfies, for some row $ r$,

$\displaystyle \vert\lambda\vert \le \sum_{s\ne r}\vert J_{rs}\vert =\sum_{s\ne r}{{\vert a_{rs}\vert}\over{\vert a_{rr}\vert}}<1,$ (154)

so all the eigenvalues of the iteration matrix have moduli less than one, and by the theorem above the iteration converges.
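A sketch of Jacobi iteration on a strictly diagonally dominant system (the matrix and right-hand side are illustrative choices of our own, with exact solution $ (1,1,1)^T$):

import numpy as np

# Strictly diagonally dominant: |a_rr| > sum of |a_rs|, s != r, in every row.
A = np.array([[10.0, 2.0, 1.0],
              [ 1.0, 8.0, 2.0],
              [ 2.0, 1.0, 9.0]])
b = np.array([13.0, 11.0, 12.0])

d = np.diag(A)                           # diagonal entries a_rr
x = np.zeros_like(b)
for k in range(50):
    x = (b - (A - np.diag(d)) @ x) / d   # Jacobi update for x^(k+1)

print(x, "residual:", np.linalg.norm(A @ x - b))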

Lemma Relaxation can converge only if $ 0<\omega <2$.

To show this, consider the eigenvalues of the iteration matrix, $ {\mathcal H}(\omega )$,

$\displaystyle [(1-\omega ){\rm\bf I}+\omega {\rm\bf U}]{\rm\bf z}=\lambda ({\rm\bf I}-\omega {\rm\bf L}){\rm\bf z},$ (155)

so that

$\displaystyle [(1-\omega -\lambda ){\rm\bf I}+\omega{\rm\bf U}+\lambda\omega{\rm\bf L}]{\rm\bf z}=0.$ (156)

Since $ {\rm\bf L}$ is strictly lower and $ {\rm\bf U}$ strictly upper triangular, the determinant of the coefficient matrix is a polynomial in $ \lambda$ with leading coefficient $ \pm 1$, whose term independent of $ \lambda$ is $ (1-\omega )^m$. The product of the moduli of all the eigenvalues is therefore $ \vert 1-\omega\vert^m$, so if $ \vert 1-\omega \vert \ge 1$ then at least one eigenvalue must have modulus greater than or equal to one. Hence it is necessary that $ \vert 1-\omega\vert < 1$, or $ 0<\omega <2$.
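The determinant argument is easy to verify numerically; a sketch, where $ {\rm\bf L}$ and $ {\rm\bf U}$ are the strictly lower and strictly upper triangular parts of an arbitrary random matrix, showing that the product of the eigenvalues of $ {\mathcal H}(\omega )$ equals $ (1-\omega )^m$:

import numpy as np

m = 4
rng = np.random.default_rng(0)
M = rng.standard_normal((m, m))
L = np.tril(M, -1)                     # strictly lower triangular part
U = np.triu(M, 1)                      # strictly upper triangular part

for omega in (0.5, 1.0, 1.5, 1.9):
    # H(omega) = (I - omega L)^{-1} [(1 - omega) I + omega U]
    H = np.linalg.solve(np.eye(m) - omega * L,
                        (1 - omega) * np.eye(m) + omega * U)
    prod = np.prod(np.linalg.eigvals(H))
    print(omega, prod.real, (1 - omega)**m)   # the last two columns agree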

We do not have time to go into the mathematics deeply, but there is a critical value $ \omega_0$ at which $ \rho ({\mathcal H}(\omega ))$ attains its minimum. For most practical matrices this optimal value of the relaxation parameter is found by numerical experiment.
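Such an experiment might look like the following sketch (the matrix is the standard tridiagonal 1-D Laplacian, chosen purely for illustration; writing $ {\rm\bf A}={\rm\bf D}-{\rm\bf L}-{\rm\bf U}$, the relaxation iteration matrix in unscaled form is $ ({\rm\bf D}-\omega{\rm\bf L})^{-1}[(1-\omega ){\rm\bf D}+\omega{\rm\bf U}]$):

import numpy as np

m = 20
A = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)   # 1-D Laplacian

D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)

best_rho, best_omega = np.inf, None
for omega in np.linspace(0.05, 1.95, 39):
    H = np.linalg.solve(D - omega * L, (1 - omega) * D + omega * U)
    rho = max(abs(np.linalg.eigvals(H)))
    if rho < best_rho:
        best_rho, best_omega = rho, omega

print("omega_0 ~", best_omega, "rho ~", best_rho)

For this example the minimum lies well above $ \omega =1$ (Gauss-Seidel), illustrating the benefit of over-relaxation.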

