5 Application to a Poisson Equation

Consider now the discretisation of

$\displaystyle \nabla^2 w=f,\ \ (x,y)\in [0,1]\times [0,1]$ (157)

with the condition $ w=0$ on the boundary. If we use central differences with mesh size $ h=1/m$ in both the $ x$ and $ y$ directions, we obtain

$\displaystyle W_{r,s}-{1\over 4}(W_{r-1,s}+W_{r,s-1}+W_{r,s+1}+W_{r+1,s})= -{h^2\over 4}f_{r,s},\ \ r,s=1,\ldots ,m-1,$ (158)

with boundary conditions $ W_{0,s}=W_{m,s}=W_{r,0}=W_{r,m}=0$, $ s=1,\ldots ,m-1$ and $ r=1,\ldots ,m-1$. For an appropriately defined vector $ {\rm\bf x}$, whose element $ {\mathcal M}(r,s)$ holds the grid value $ x_{r,s}$, the matrices $ {\rm\bf L}$ and $ {\rm\bf U}$ of the Jacobi splitting satisfy

$\displaystyle [{\rm\bf L}{\rm\bf x}]_{{\mathcal M}(r,s)}={1\over 4}(x_{r-1,s}+x_{r,s-1}),$ (159)

$\displaystyle [{\rm\bf U}{\rm\bf x}]_{{\mathcal M}(r,s)}={1\over 4}(x_{r+1,s}+x_{r,s+1}).$ (160)
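
To make the iteration concrete, here is a minimal sketch in Python/NumPy (an illustration, not part of the original notes) of the Jacobi sweep defined by (158); the mesh size, the right-hand side $ f\equiv 1$ and the stopping tolerance are arbitrary illustrative choices.

import numpy as np

# Jacobi iteration for the discretised Poisson problem (158).
# Illustrative choices: m = 20, f = 1, tolerance 1e-10.
m = 20
h = 1.0 / m
f = np.ones((m + 1, m + 1))      # f_{r,s} sampled at the grid points
W = np.zeros((m + 1, m + 1))     # zero boundary values, zero initial guess

for k in range(10000):
    W_new = W.copy()
    # one Jacobi sweep over the interior points r, s = 1, ..., m-1
    W_new[1:m, 1:m] = 0.25 * (W[0:m-1, 1:m] + W[2:m+1, 1:m]
                              + W[1:m, 0:m-1] + W[1:m, 2:m+1]) \
                      - 0.25 * h**2 * f[1:m, 1:m]
    if np.max(np.abs(W_new - W)) < 1e-10:
        break
    W = W_new

Each sweep applies $ {\rm\bf L}+{\rm\bf U}$ to the current iterate and adds the constant term, which is exactly the Jacobi splitting written out gridwise.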

Suppose that $ \mu$ is an eigenvalue of the Jacobi iteration matrix for this problem; then

$\displaystyle ({\rm\bf L}+{\rm\bf U}){\rm\bf x}=\mu{\rm\bf x}.$ (161)

Consider now a vector $ {\rm\bf d}$ whose elements are defined by $ d_{r,s}=\alpha^{r+s}x_{r,s}$; then the $ {\mathcal M}(r,s)$ element of $ (\alpha{\rm\bf L}+\alpha^{-1}{\rm\bf U}){\rm\bf d}$ is

$\displaystyle [(\alpha{\rm\bf L}+\alpha^{-1}{\rm\bf U}){\rm\bf d}]_{{\mathcal M}(r,s)}= {1\over 4}\alpha^{r+s}(x_{r-1,s}+x_{r,s-1}+x_{r,s+1}+x_{r+1,s}),$ (162)

and, by (161), the right-hand side equals $ \mu\alpha^{r+s}x_{r,s}=\mu d_{r,s}$, so that $ \mu$ is also an eigenvalue of the matrix $ \alpha{\rm\bf L}+\alpha^{-1}{\rm\bf U}$ for any $ \alpha\neq 0$.
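
This invariance is easy to confirm numerically. The sketch below (again an illustration, not from the notes) assembles $ {\rm\bf L}$ and $ {\rm\bf U}$ for a small grid, using the assumed natural ordering $ {\mathcal M}(r,s)=(s-1)(m-1)+(r-1)$, and checks that the spectrum of $ \alpha{\rm\bf L}+\alpha^{-1}{\rm\bf U}$ does not depend on $ \alpha$.

import numpy as np

m = 5                                   # illustrative small grid
n = (m - 1) ** 2
def M(r, s):                            # assumed ordering of grid points
    return (s - 1) * (m - 1) + (r - 1)

L = np.zeros((n, n))
U = np.zeros((n, n))
for s in range(1, m):
    for r in range(1, m):
        if r > 1: L[M(r, s), M(r - 1, s)] = 0.25
        if s > 1: L[M(r, s), M(r, s - 1)] = 0.25
        if r < m - 1: U[M(r, s), M(r + 1, s)] = 0.25
        if s < m - 1: U[M(r, s), M(r, s + 1)] = 0.25

alpha = 2.7                             # any nonzero alpha
ev_J = np.sort(np.linalg.eigvals(L + U).real)
ev_A = np.sort(np.linalg.eigvals(alpha * L + U / alpha).real)
print(np.max(np.abs(ev_J - ev_A)))      # ~1e-15: the spectra coincide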


Lemma: $\displaystyle \rho ({\rm\bf H}) = \rho ({\rm\bf J})^2.$
If we define the asymptotic convergence rate as $ -\log \rho$, the Lemma implies that Gauss-Seidel converges asymptotically twice as fast as Jacobi. To show this result in this case (it is in fact a more general result, valid for a wider class of matrices) we use the fact that $ {\rm det}({\rm\bf I}-{\rm\bf L})=1$, since $ {\rm\bf L}$ is strictly lower triangular, so that

$\displaystyle {\rm det}(\lambda{\rm\bf I}-{\rm\bf H})={\rm det}[({\rm\bf I}-{\rm\bf L})(\lambda{\rm\bf I}-{\rm\bf H})]={\rm det}[\lambda{\rm\bf I}-\lambda{\rm\bf L}-{\rm\bf U}],$ (163)

since $ {\rm\bf H}=({\rm\bf I}-{\rm\bf L})^{-1}{\rm\bf U}$. Hence, if $ \lambda\neq 0$ is an eigenvalue of $ {\rm\bf H}$,

$\displaystyle {\rm det}(\lambda{\rm\bf I}-{\rm\bf H})= {\rm det}\left[\sqrt{\lambda}\left(\sqrt{\lambda}{\rm\bf I}-\sqrt{\lambda}{\rm\bf L}-{1\over\sqrt{\lambda}}{\rm\bf U}\right)\right]=0,$ (164)

and using our result above with $ \alpha=\sqrt{\lambda}$, $ \sqrt{\lambda}$ must be an eigenvalue of $ {\rm\bf L}+{\rm\bf U}={\rm\bf J}$, from which the Lemma follows.
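
A quick numerical check of the Lemma (illustrative only), with $ {\rm\bf L}$ and $ {\rm\bf U}$ assembled exactly as in the previous sketch:

import numpy as np

m = 5
n = (m - 1) ** 2
def M(r, s):
    return (s - 1) * (m - 1) + (r - 1)

L = np.zeros((n, n))
U = np.zeros((n, n))
for s in range(1, m):
    for r in range(1, m):
        if r > 1: L[M(r, s), M(r - 1, s)] = 0.25
        if s > 1: L[M(r, s), M(r, s - 1)] = 0.25
        if r < m - 1: U[M(r, s), M(r + 1, s)] = 0.25
        if s < m - 1: U[M(r, s), M(r, s + 1)] = 0.25

H = np.linalg.solve(np.eye(n) - L, U)     # Gauss-Seidel matrix (I-L)^{-1} U
rho = lambda A: np.max(np.abs(np.linalg.eigvals(A)))
print(rho(H), rho(L + U) ** 2)            # agree to rounding error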

In addition to this result we can find the eigenvalues and eigenvectors of $ {\rm\bf J}$ explicitly. If we let

$\displaystyle x_{r,s}^{(p,q)}=\sin{\pi p r h}\sin{\pi q s h},\ \ p,q=1,\ldots ,m-1,$ (165)

then, using $ \sin{\pi p(r-1)h}+\sin{\pi p(r+1)h}=2\cos{\pi ph}\sin{\pi prh}$ and its analogue in $ s$,

$\displaystyle {\rm\bf J}{\rm\bf x}^{(p,q)}={1\over 2}(\cos{\pi p h}+\cos{\pi q h}){\rm\bf x}^{(p,q)},$ (166)

and the eigenvalue associated with eigenvector $ {\rm\bf x}^{(p,q)}$ is

$\displaystyle \lambda_{(p,q)}={1\over 2}(\cos{\pi ph}+\cos{\pi qh}),$ (167)

so that the eigenvalue of largest magnitude occurs when $ p=q=1$ or $ p=q=m-1$, and

$\displaystyle \rho ({\rm\bf J})=\cos {\pi h},$ (168)

$\displaystyle \rho ({\rm\bf H})=\cos^2{\pi h}.$ (169)
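
To put these numbers into perspective (an illustrative calculation, not from the original notes): since $ -\ln\cos{\pi h}\approx (\pi h)^2/2$ for small $ h$, reducing the error by a factor $ 10^{-6}$ with Jacobi requires roughly

$\displaystyle k \approx {6\ln 10\over (\pi h)^2/2} \approx 11{,}500\ \ {\rm iterations\ for}\ m=64,$

and about half as many for Gauss-Seidel. This $ O(h^{-2})$ growth in the iteration count is one motivation for the conjugate gradient methods of the next section.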

