

1 Finite Differences

The definition of the derivative can be used to obtain a discrete approximation; since the definition can be expressed in a number of different ways, so too can the discrete approximation. The definition in terms of limits can be any of

$\displaystyle u'(x)=\lim_{\Delta x\to 0}{{u(x+\Delta x)-u(x)}\over{\Delta x}} =\lim_{\Delta x\to 0}{{u(x)-u(x-\Delta x)}\over{\Delta x}} =\lim_{\Delta x\to 0}{{u(x+\Delta x)-u(x-\Delta x)}\over{2\Delta x}}.$ (7)

Define a set of points $ \Omega_J=\{x_0=0<x_1<\ldots <x_m=1\}$ and a set of nodes $ J_\Omega=\{0,1,\ldots ,m\}$ and approximate $ u_j=u(x_j)$ by $ U_j$, $ \forall j\in J_\Omega$. Then discrete approximations for the first derivative are:

$\displaystyle u'(x_j)\sim{{U_{j+1}-U_j}\over{x_{j+1}-x_j}}={{U_{j+1}-U_j}\over{h}}\ {\rm if}\ x_j=jh,$ (8)

$\displaystyle u'(x_j)\sim{{U_{j}-U_{j-1}}\over{x_{j}-x_{j-1}}}={{U_{j}-U_{j-1}}\over{h}}\ {\rm if}\ x_j=jh,$ (9)

$\displaystyle u'(x_j)\sim{{U_{j+1}-U_{j-1}}\over{x_{j+1}-x_{j-1}}}={{U_{j+1}-U_{j-1}}\over{2h}}\ {\rm if}\ x_j=jh.$ (10)

For simplicity take $ x_j=jh$ with $ h=1/m$ unless it is indicated otherwise.

Higher derivatives can be approximated in a similar way:

$\displaystyle u''(x_j)\sim{{u'(x_j+{1\over 2}h)-u'(x_j-{1\over 2}h)}\over{h}},$ (11)

so that

$\displaystyle u''(x_j)\sim{{{{U_{j+1}-U_j}\over h}-{{U_j-U_{j-1}}\over h}}\over h}={{U_{j+1}-2U_j+U_{j-1}}\over h^2}.$ (12)
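As a quick numerical check of these formulas, the short Python sketch below (a minimal sketch assuming NumPy; the test function $ u(x)=\sin x$ and the step sizes are illustrative choices) shows the one-sided quotients (8) and (9) converging at rate $ O(h)$, and the central quotient (10) and second difference (12) at $ O(h^2)$:

  import numpy as np

  u = np.sin        # test function with u'(x) = cos(x), u''(x) = -sin(x)
  x = 1.0
  for h in [1.0e-1, 1.0e-2, 1.0e-3]:
      fwd = (u(x + h) - u(x)) / h                          # (8): error O(h)
      bwd = (u(x) - u(x - h)) / h                          # (9): error O(h)
      ctr = (u(x + h) - u(x - h)) / (2.0 * h)              # (10): error O(h^2)
      scd = (u(x + h) - 2.0 * u(x) + u(x - h)) / h**2      # (12): error O(h^2)
      print(f"h={h:.0e}  fwd={abs(fwd - np.cos(x)):.1e}  "
            f"bwd={abs(bwd - np.cos(x)):.1e}  "
            f"ctr={abs(ctr - np.cos(x)):.1e}  "
            f"2nd={abs(scd + np.sin(x)):.1e}")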

Now if $ f_j=f(x_j)$ for $ j=1,\ldots ,m-1$, with $ f_0=a$ and $ f_m=b$, the discrete form of the model problem is

$\displaystyle U_0=f_0,$ (13)

$\displaystyle -{{U_{j+1}-2U_j+U_{j-1}}\over h^2}+\kappa^2U_j=f_j,\ \ \ j=1,\ldots ,m-1,$ (14)

$\displaystyle U_m=f_m.$ (15)

Now let $ {\rm\bf U}^{\rm T}=(U_0,U_1,\ldots ,U_m)$ be the vector of approximate values and define matrices $ {\rm\bf K}$, $ {\rm\bf I}^*$ by

$\displaystyle {\rm\bf K}=\left(\begin{array}{r r r r c r} 1&0&0&0&\cdots&0\\  -1&2&-1&0&\cdots&0\\  0&-1&2&-1&\cdots&0\\  \cdot&\cdot&\cdot&\cdot&\cdots&\cdot\\  0&0&0&0&\cdots&1\\  \end{array}\right),$ (16)

$\displaystyle {\rm\bf I}^*=\left(\begin{array}{r r r r c r r} 0&0&0&0&\cdots&0&0\\  0&1&0&0&\cdots&0&0\\  0&0&1&0&\cdots&0&0\\  \cdot&\cdot&\cdot&\cdot&\cdots&\cdot&\cdot\\  0&0&0&0&\cdots&1&0\\  0&0&0&0&\cdots&0&0\\  \end{array}\right).$ (17)

Then we can write the finite difference scheme as

$\displaystyle ({1\over h^2}{\rm\bf K}+\kappa^2{\rm\bf I}^*){\rm\bf U}={\rm\bf f}.$ (18)

This is a discrete version of the original differential equation (1.6). (Strictly, the first and last rows of (18) should be scaled by $ h^2$ so that they reproduce $ U_0=f_0$ and $ U_m=f_m$ exactly; the scaling makes no difference to what follows.)

It is possible to reduce the matrices to $ (m-1)\times (m-1)$ by eliminating the explicitly given values $ U_0$ and $ U_m$. On the other hand, if one of the boundary conditions involves a derivative (corresponding, for example, to a physical situation where a heat transfer rate is specified), then these values must be kept as "unknowns"; in any case, keeping them in the matrix is a very small overhead.
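As an illustration, the following minimal Python sketch (assuming NumPy; the values of $ \kappa$, $ m$ and the right-hand side are illustrative choices, not taken from the text) assembles $ ({1\over h^2}{\rm\bf K}+\kappa^2{\rm\bf I}^*)$ with the boundary rows imposing $ U_0=f_0$ and $ U_m=f_m$ directly, and solves (18):

  import numpy as np

  m, kappa = 50, 1.0
  h = 1.0 / m
  x = np.linspace(0.0, 1.0, m + 1)
  f = np.sin(np.pi * x)              # interior source term (illustrative)
  f[0], f[-1] = 0.0, 0.0             # boundary data a = b = 0 (illustrative)

  A = np.zeros((m + 1, m + 1))
  A[0, 0] = A[m, m] = 1.0            # boundary rows impose U_0 = f_0, U_m = f_m
  for j in range(1, m):              # interior rows of (1/h^2) K + kappa^2 I*
      A[j, j - 1 : j + 2] = [-1.0 / h**2, 2.0 / h**2 + kappa**2, -1.0 / h**2]

  U = np.linalg.solve(A, f)          # the approximate values U_0, ..., U_m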

1 Note 1 Derivative Boundary Conditions

If the boundary condition at $ x=0$ is, for instance, $ u'(0)=0$, we proceed by introducing a fictitious value $ U_{-1}$ at the point $ x_{-1}=-h$; applying the differential equation at $ x_0$ and the central approximation (10) to the boundary condition gives

$\displaystyle {\rm (a)\ differential\ equation:}\ -{{U_1-2U_0+U_{-1}}\over h^2}+\kappa^2U_0=f_0,$ (19)

$\displaystyle {\rm (b)\ boundary\ condition:}\ U_1-U_{-1}=0,$ (20)

and eliminating $ U_{-1}$,

$\displaystyle -{{2U_1-2U_0}\over h^2}+\kappa^2U_0=f_0.$ (21)

$ {\rm\bf K}$, $ {\rm\bf I}^*$ become

$\displaystyle {\rm\bf K}=\left(\begin{array}{r r r r c r} 2&-2&0&0&\cdots&0\\  -1&2&-1&0&\cdots&0\\  \cdot&\cdot&\cdot&\cdot&\cdots&\cdot\\  0&0&0&0&\cdots&1\\  \end{array}\right),\ \ \ {\rm\bf I}^*=\left(\begin{array}{r r r r c r r} 1&0&0&0&\cdots&0&0\\  0&1&0&0&\cdots&0&0\\  \cdot&\cdot&\cdot&\cdot&\cdots&\cdot&\cdot\\  0&0&0&0&\cdots&1&0\\  0&0&0&0&\cdots&0&0\\  \end{array}\right).$ (22)
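In code the change is equally local; continuing the sketch above (still an illustrative assumption), only the first row of the matrix and the meaning of $ f_0$ change:

  A[0, 0] = 2.0 / h**2 + kappa**2   # from (21): the kappa^2 U_0 term now appears at j = 0
  A[0, 1] = -2.0 / h**2
  f[0] = np.sin(np.pi * x[0])       # f_0 = f(x_0) is now a source value, not boundary data
  U = np.linalg.solve(A, f)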

2 Note 2 Unsteady PDEs

If the problem is unsteady, for instance

$\displaystyle {{\partial u}\over{\partial t}}=\sigma{{\partial^2 u}\over{\partial x^2}} -\kappa^2u+f(x,t),$ (23)

then introduce a set of time points $ t_0,t_1,\ldots$ (usually assuming $ t_n=n\Delta t$) and let $ U^n_j$ approximate $ u(x_j,t_n)$. Then approximate

$\displaystyle {{\partial u}\over{\partial t}}\sim {{U^{n+1}_j-U^n_j}\over{t_{n+1}-t_n}}={{U^{n+1}_j-U^n_j}\over{\Delta t}} \ {\rm if}\ t_n=n\Delta t.$ (24)

It is now possible to use a variety of schemes.

(a) Explicit Scheme

If we approximate the RHS of the differential equation at time $ t_n$, let $ \mu=\sigma\Delta t/h^2$ and define a vector $ {\rm\bf U}^n$ at each time step, then

$\displaystyle {\rm\bf U}^{n+1}=({\rm\bf I}-\kappa^2\Delta t{\rm\bf I}^*-\mu{\rm\bf K}){\rm\bf U}^n+\Delta t{\rm\bf f}^n,$ (25)

and the vector $ {\rm\bf U}^{n+1}$ is given explicitly in terms of known quantities, so there is no matrix problem to solve.

(b) Implicit Scheme

Approximate the RHS of the differential equation at time $ t_{n+1}$,

$\displaystyle ({\rm\bf I}+\kappa^2\Delta t{\rm\bf I}^*+\mu{\rm\bf K}){\rm\bf U}^{n+1}={\rm\bf U}^n+\Delta t{\rm\bf f}^{n+1}.$ (26)

Now a matrix problem has to be solved at each time step.

(c) $ \theta$-method

We approximate the right hand side as a weighted sum of values at $ t_n$ and $ t_{n+1}$, using $ \theta{\rm RHS}^{n+1}+(1-\theta ){\rm RHS}^n$,

$\displaystyle [{\rm\bf I}+\theta \kappa^2\Delta t {\rm\bf I}^*+\theta\mu{\rm\bf K}]{\rm\bf U}^{n+1} =[{\rm\bf I}-(1-\theta )\kappa^2\Delta t {\rm\bf I}^*-(1-\theta )\mu{\rm\bf K}]{\rm\bf U}^n +\Delta t[\theta {\rm\bf f}^{n+1}+(1-\theta ){\rm\bf f}^n].$ (27)

The case $ \theta =1/2$ is called Crank-Nicolson and has a higher order of accuracy in approximating the time derivative than the other cases (see Note 4 below).
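All three schemes can be driven from one sketch. The following minimal Python fragment (assuming NumPy; the values of $ \sigma$, $ \Delta t$, the initial data and the number of steps are illustrative choices, not taken from the text) implements (27); setting $ \theta=0$, $ 1$ or $ 1/2$ recovers the explicit, implicit and Crank-Nicolson schemes respectively:

  import numpy as np

  m, kappa, sigma, theta, dt = 50, 1.0, 1.0, 0.5, 1.0e-3
  h = 1.0 / m
  mu = sigma * dt / h**2
  x = np.linspace(0.0, 1.0, m + 1)

  # K and I* as in (16)-(17): interior rows of K are (-1, 2, -1).
  K = np.zeros((m + 1, m + 1))
  Istar = np.zeros((m + 1, m + 1))
  K[0, 0] = K[m, m] = 1.0
  for j in range(1, m):
      K[j, j - 1 : j + 2] = [-1.0, 2.0, -1.0]
      Istar[j, j] = 1.0

  I = np.eye(m + 1)
  A = I + theta * (kappa**2 * dt * Istar + mu * K)          # multiplies U^{n+1}
  B = I - (1.0 - theta) * (kappa**2 * dt * Istar + mu * K)  # multiplies U^n
  U = np.sin(np.pi * x)      # initial condition (illustrative)
  f = np.zeros(m + 1)        # source term f(x, t) = 0 (illustrative)
  for n in range(100):
      U = np.linalg.solve(A, B @ U + dt * f)
      U[0] = U[-1] = 0.0     # re-impose the Dirichlet boundary values

With $ \theta=0$ the matrix $ A$ reduces to the identity, so the solve is trivial, matching the remark above that the explicit scheme involves no matrix problem.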

3 Note 3 Two Space Dimensions

If the problem is steady but two-dimensional, for instance

$\displaystyle -\nabla^2 u+\kappa^2u=f\ \ {\rm on}\ \Omega=[0,1]\times [0,1],$ (28)

let $ U_{r,s}$ approximate $ u(r\Delta x,s\Delta y )$, $ r=0,1,\ldots ,m_x$, $ s=0,1,\ldots ,m_y$ where $ \Delta x=1/m_x$, $ \Delta y=1/m_y$, and approximate

$\displaystyle {{\partial^2 u}\over{\partial x^2}}\sim{{U_{r+1,s}-2U_{r,s}+U_{r-1,s}}\over{\Delta x^2}},$ (29)

$\displaystyle {{\partial^2 u}\over{\partial y^2}}\sim{{U_{r,s+1}-2U_{r,s}+U_{r,s-1}}\over{\Delta y^2}}.$ (30)

The discrete form of the differential equation is

$\displaystyle -{{U_{r+1,s}-2U_{r,s}+U_{r-1,s}}\over{\Delta x^2}}- {{U_{r,s+1}-2U_{r,s}+U_{r,s-1}}\over{\Delta y^2}}+\kappa^2U_{r,s}=f_{r,s}.$ (31)

Now we need to define the vector $ {\rm\bf U}$ to include all the elements of the array $ U_{r,s}$. One way is just to enumerate the elements of $ {\rm\bf U}$,

$\displaystyle {\rm\bf U}^{\rm T}=\{U_{0,0},U_{1,0},\cdots ,U_{m_x,0},U_{0,1},\cdots ,U_{m_x,m_y}\},$ (32)

another is to suppose there is a map $ {\mathcal M}(r,s)$ which maps the pairs $ (r,s)$ to the integers $ 0,1,2,\cdots ,(m_x+1)(m_y+1)-1$. Either way, $ {\rm\bf U}$ is a vector of dimension $ (m_x+1)(m_y+1)$, which partly explains why supercomputers are needed for anything like realistic problems: even using only $ 101\times 101$ points means we have to determine a vector of length $ \sim 10^4$ by inverting a matrix system whose coefficient matrix is roughly of size $ 10^4\times 10^4$. If we now define one row of a matrix $ {\rm\bf K}$ by

$\displaystyle {\rm row}({\rm\bf K})=\{0,\cdots ,0,-{1\over{\Delta y^2}},0,\cdots ,0,-{1\over{\Delta x^2}},\ {2\over{\Delta x^2}}+{2\over{\Delta y^2}},\ -{1\over{\Delta x^2}},0,\cdots ,0,-{1\over{\Delta y^2}},0,\cdots,0\},$ (33)

with $ {\rm\bf I}^*$ suitably defined, we still have a matrix system

$\displaystyle ({\rm\bf K}+\kappa^2{\rm\bf I}^*){\rm\bf U}={\rm\bf f},$ (34)

to invert in order to calculate $ {\rm\bf U}$. In a later part we shall consider algorithms which can invert matrix systems such as this when it is impossible to store the whole coefficient matrix on a computer; all we are able to store is the vector obtained when the matrix multiplies another vector.
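As a preview of that idea, the following sketch (assuming NumPy and SciPy; the use of conjugate gradients and the choice of right-hand side are illustrative assumptions, not prescribed by the text) applies $ ({\rm\bf K}+\kappa^2{\rm\bf I}^*)$ to the interior values without ever forming the $ 10^4\times 10^4$ matrix, and hands the resulting matrix-vector product to an iterative solver:

  import numpy as np
  from scipy.sparse.linalg import LinearOperator, cg

  mx = my = 100
  dx, dy = 1.0 / mx, 1.0 / my
  kappa = 1.0
  n = (mx - 1) * (my - 1)          # interior unknowns; U = 0 on the boundary here

  def apply_operator(v):
      # Apply (K + kappa^2 I*) to interior values without storing the matrix.
      U = np.zeros((mx + 1, my + 1))
      U[1:mx, 1:my] = v.reshape(mx - 1, my - 1)
      W = (-(U[2:, 1:my] - 2.0 * U[1:mx, 1:my] + U[:-2, 1:my]) / dx**2
           - (U[1:mx, 2:] - 2.0 * U[1:mx, 1:my] + U[1:mx, :-2]) / dy**2
           + kappa**2 * U[1:mx, 1:my])
      return W.ravel()

  A = LinearOperator((n, n), matvec=apply_operator)
  X, Y = np.meshgrid(np.linspace(dx, 1.0 - dx, mx - 1),
                     np.linspace(dy, 1.0 - dy, my - 1), indexing="ij")
  f = np.sin(np.pi * X) * np.sin(np.pi * Y)   # illustrative right-hand side
  U, info = cg(A, f.ravel())                  # info == 0 signals convergence

Here the reshape between the vector and the array plays the role of the map $ {\mathcal M}(r,s)$, restricted to interior nodes.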

4 Note 4 Error Analysis

If we write the continuous system as

$\displaystyle {\mathcal L}u=f,$ (35)

and the discrete system as

$\displaystyle {\rm L}_hU_j=f_j,$ (36)

where $ {\mathcal L}$, $ {\rm L}_h$ are linear operators, then define a truncation error

$\displaystyle T_j=({\rm L}_hu-f)\vert_{x=jh},$ (37)

and an error

$\displaystyle e_j=u_j-U_j.$ (38)

We then have

$\displaystyle T_j=({\rm L}_hu-f)\vert_{x=jh} =({\rm L}_hu-{\rm L}_hU)\vert_{x=jh} = {\rm L}_he_j.$ (39)

This is a central result in the analysis of discretisation errors. In principle, to determine the magnitude of the error, we only need to know the `magnitude' (that is, some form of norm) of the inverse of $ {\rm L}_h$ and the `magnitude' of the truncation error.

In our model problem

$\displaystyle {\rm L}_hU_j=-{{U_{j+1}-2U_j+U_{j-1}}\over h^2}+\kappa^2U_j,$ (40)

so that

$\displaystyle {\rm L}_hu_j=-{{u_{j+1}-2u_j+u_{j-1}}\over h^2}+\kappa^2u_j,$ (41)

and using Taylor expansions,

$\displaystyle u_{j+1}=u(x_j+h)=u_j+hu'_j+{h^2\over2}u''_j+{h^3\over6}u'''_j+{h^4\over24}u''''_j+\ldots ,$ (42)

$\displaystyle u_{j-1}=u(x_j-h)=u_j-hu'_j+{h^2\over2}u''_j-{h^3\over6}u'''_j+{h^4\over24}u''''_j-\ldots ,$ (43)

and hence

$\displaystyle T_j=-u''_j+\kappa^2u_j-f_j-{1\over{12}}h^2u''''_j+\ldots ;$ (44)

the first three terms are just the differential equation evaluated at $ x_j$ and so add to zero; with a bit more work, we can show

$\displaystyle T_j=-{1\over {12}}h^2u''''(\xi_j ),\ \ (j-1)h<\xi_j<(j+1)h,$ (45)

so that if $ M=\max \vert u''''\vert$, then

$\displaystyle \vert T_j\vert \le {1\over{12}}h^2M.$ (46)

If $ T\to 0$ as $ h\to 0$ the scheme is called consistent.

If $ T\sim h^p$ as $ h\to 0$ the scheme is called order-$ p$ accurate. In the model problem, the scheme is second order accurate. With PDEs, the accuracy can be of a different order in different independent variables. In the problem considered in Note 2 above, the scheme is second order accurate in $ h$ but only first order accurate in $ \Delta t$, unless Crank-Nicolson is used, in which case it is second order accurate in $ \Delta t$.

If we define $ {\rm\bf e}^{\rm T}=\{0,e_1,\ldots ,e_{m-1},0\}$, $ {\rm\bf T}^{\rm T} =\{0,T_1,\ldots ,T_{m-1},0\}$, then in matrix notation

$\displaystyle ({1\over h^2}{\rm\bf K}+\kappa^2{\rm\bf I}^*){\rm\bf e}={\rm\bf T},$ (47)

and formally

$\displaystyle {\rm\bf e}=h^2({\rm\bf K}+\kappa^2h^2{\rm\bf I}^*)^{-1}{\rm\bf T},$ (48)

so we need an estimate of the norm of $ ({\rm\bf K}+\kappa^2h^2{\rm\bf I}^*)^{-1}$. This turns out to be $ O(m^2)=O(h^{-2})$ in our model problem so that $ \vert\vert{\rm\bf e}\vert\vert =O(h^2)$ as $ h\to 0$.
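This prediction is easy to verify numerically. A minimal Python sketch (assuming NumPy; the manufactured solution $ u(x)=\sin \pi x$, for which $ f=(\pi^2+\kappa^2)\sin \pi x$ with $ u(0)=u(1)=0$, is an illustrative choice) tabulates the maximum error for a sequence of grids:

  import numpy as np

  kappa = 1.0
  for m in [10, 20, 40, 80]:
      h = 1.0 / m
      x = np.linspace(0.0, 1.0, m + 1)
      f = (np.pi**2 + kappa**2) * np.sin(np.pi * x)
      f[0] = f[-1] = 0.0                  # boundary values u(0) = u(1) = 0
      A = np.zeros((m + 1, m + 1))
      A[0, 0] = A[m, m] = 1.0
      for j in range(1, m):
          A[j, j - 1 : j + 2] = [-1.0 / h**2, 2.0 / h**2 + kappa**2, -1.0 / h**2]
      U = np.linalg.solve(A, f)
      print(f"m={m:3d}  max error = {np.max(np.abs(U - np.sin(np.pi * x))):.3e}")

Halving $ h$ reduces the maximum error by a factor of about four, consistent with $ \vert\vert{\rm\bf e}\vert\vert =O(h^2)$.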

