Next: The Mid-Point Method Up: Pushing Particles Previous: Solving the Hamilton-Jacobi Equation

Euler Method

Since Newton's equations can be reduced, either directly or through the Lagrangian or Hamiltonian routes, to systems of first-order ordinary differential equations, once we know how to solve the latter numerically we will have a handle on Newton's equations themselves.
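For example, Newton's second law for a single particle, $m \ddot{x} = F(x, \dot{x}, t)$, becomes a first-order system by introducing the velocity as a separate variable:

\begin{displaymath}
\frac{\textrm{d} x}{\textrm{d} t} = v, \qquad
\frac{\textrm{d} v}{\textrm{d} t} = \frac{F(x, v, t)}{m}
\end{displaymath}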

So let us consider the following simple ODE:

\begin{displaymath}\frac{\textrm{d} x(t)}{\textrm{d} t}
= f(x, t)
\end{displaymath} (4.66)

The Euler method of solving this equation is very simple. Replace $\textrm{d} x(t)/\textrm{d} t$ with a finite difference:

 \begin{displaymath}
\frac{x(t + \Delta t) - x(t)}{\Delta t} = f\left(x(t), t\right)
\end{displaymath} (4.67)

This is, actually, only one possible choice, one that leads to an explicit integration scheme. The other choice is:

\begin{displaymath}\frac{x(t + \Delta t) - x(t)}{\Delta t} =
f\left(x(t + \Delta t), t+\Delta t\right)
\end{displaymath} (4.68)

This choice leads to an implicit integration scheme.
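In general the implicit relation must be solved for $x(t + \Delta t)$ at every step, typically by fixed-point iteration or Newton's method; for a linear right-hand side it can be solved exactly. A minimal sketch of the latter case (the function name is mine, not from the text), using $f(x, t) = -kx$:

```python
def implicit_euler_linear(x0, k, dt, n_steps):
    """Implicit Euler for dx/dt = -k*x.

    The implicit relation x_{n+1} = x_n - k * x_{n+1} * dt
    solves exactly to x_{n+1} = x_n / (1 + k*dt).
    """
    x = x0
    for _ in range(n_steps):
        x = x / (1.0 + k * dt)
    return x

# Unlike the explicit scheme, this stays bounded even for a large step:
print(implicit_euler_linear(1.0, 1.0, 10.0, 5))  # decays, never blows up
```

This unconditional stability for decaying solutions is the usual reason for paying the extra per-step cost of an implicit scheme.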

Let us focus on the explicit scheme first. Equation (4.67) suggests the following integration scheme:

\begin{displaymath}x(t + \Delta t) = x(t) + f\left(x(t), t\right) \Delta t
\end{displaymath} (4.69)

or, in other words, once we have assumed a certain initial condition for x, say, $x_0 = x(t_0)$, then:

\begin{eqnarray*}
x_1 &=& x(t_1) = x(t_0 + \Delta t) = x_0 + f(x_0, t_0) \Delta t \\
x_2 &=& x(t_2) = x(t_1 + \Delta t) = x_1 + f(x_1, t_1) \Delta t \\
x_3 &=& x(t_3) = x(t_2 + \Delta t) = x_2 + f(x_2, t_2) \Delta t \\
&\cdots&
\end{eqnarray*}
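The iteration above translates directly into a few lines of code. A minimal sketch (the function name is mine, not from the text):

```python
def euler(f, x0, t0, dt, n_steps):
    """Explicit Euler: advance dx/dt = f(x, t) from (x0, t0) by n_steps of size dt."""
    x, t = x0, t0
    for _ in range(n_steps):
        x = x + f(x, t) * dt   # x_{n+1} = x_n + f(x_n, t_n) * dt
        t = t + dt
    return x

# Example: dx/dt = -x, x(0) = 1; the exact solution at t = 1 is e^{-1} ≈ 0.3679.
approx = euler(lambda x, t: -x, 1.0, 0.0, 0.01, 100)
print(approx)
```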


Observe that this scheme is intrinsically sequential: step n+1 cannot be computed until step n is known. It cannot be parallelised, unless you are solving a very large system of coupled ODEs, in which case each step of the whole system can be evaluated in parallel. This is generally true of any ODE problem and any numerical ODE method: they cannot be parallelised by themselves, and parallelism is possible for large systems of ODEs only.

The Euler scheme may become unstable. To see this, consider two possible solutions $x_n$ and $x'_n$ at $t_n$ that are very close, so that $x'_n = x_n + \delta x_n$, where $\delta x_n$ is a very small number. In order to assess the stability of the method we need to consider the evolution of $\delta x_n$ with time.

Plugging $x_n + \delta x_n$ into the Euler scheme yields:

\begin{displaymath}x_{n+1} + \delta x_{n+1}
= x_n + \delta x_n +
f\left(x_n + \delta x_n, t_n\right) \Delta t
\approx x_n + \delta x_n + f(x_n, t_n) \Delta t
+ \frac{\partial f(x, t)}{\partial x}
\bigg\vert_{{x = x_n \atop t = t_n}} \delta x_n \Delta t
\end{displaymath} (4.70)

Since $x_{n+1}$ and $x_n$ are already linked by the Euler relationship, we get

\begin{displaymath}
\delta x_{n+1} = \delta x_n
+ \frac{\partial f(x, t)}{\partial x}
\bigg\vert_{{x = x_n \atop t = t_n}} \delta x_n \Delta t
\end{displaymath} (4.71)

Now, let us assume that $\delta x_{n+1} = g\,\delta x_n$, where g is a growth factor. Plugging this into equation (4.71) yields:

\begin{displaymath}g = 1 + \frac{\partial f(x, t)}{\partial x}
\bigg\vert_{{x = x_n \atop t = t_n}} \Delta t
\end{displaymath} (4.72)

For the scheme to be numerically stable we must have $\vert g \vert \le 1$, i.e., $g \in [-1, 1]$, hence the condition:

\begin{displaymath}\frac{\partial f(x, t)}{\partial x}
\bigg\vert_{{x = x_n \atop t = t_n}} \Delta t
\in [-2, 0]
\end{displaymath} (4.73)

This means that if, for example, $\frac{\partial f(x, t)}{\partial x}
\big\vert_{{x = x_n\atop t = t_n}}$ is positive, then the Euler method is bound to be unstable, whatever the choice of $\Delta t$. If, however, $\frac{\partial f(x, t)}{\partial x}
\big\vert_{{x = x_n\atop t = t_n}}$ is negative, then by choosing an appropriately small $\Delta t$ we can stabilise the computation.

The simplest example of a negative $\frac{\partial f(x, t)}{\partial x}
\big\vert_{{x = x_n\atop t = t_n}}$ is when it is a negative constant, e.g.,

\begin{displaymath}\frac{\partial f(x, t)}{\partial x}
\bigg\vert_{{x = x_n\atop t = t_n}} = -k,
\end{displaymath} (4.74)

where k > 0, or, in other words,

\begin{displaymath}\frac{\textrm{d} x(t)}{\textrm{d} t} = -k x + C
\end{displaymath} (4.75)

Assume that $C = 0$; then the exact solution is

\begin{displaymath}x(t) = x_0 e^{-kt}
\end{displaymath} (4.76)

The stability criterion implies that $\Delta t \in [0, 2/k]$.
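This bound is easy to verify numerically. A quick sketch (the function name is mine, not from the text) with $k = 1$, comparing a step inside the stability interval with one just outside it:

```python
def euler_decay(k, dt, n_steps, x0=1.0):
    """Explicit Euler for dx/dt = -k*x: each step multiplies x by g = 1 - k*dt."""
    x = x0
    for _ in range(n_steps):
        x = x + (-k * x) * dt
    return x

k = 1.0
# dt = 1.9 < 2/k: g = -0.9, |g| < 1, the iterates decay (oscillating in sign).
print(abs(euler_decay(k, 1.9, 50)))
# dt = 2.1 > 2/k: g = -1.1, |g| > 1, the iterates blow up.
print(abs(euler_decay(k, 2.1, 50)))
```

Note that even the stable choice $\Delta t = 1.9$ oscillates in sign, which the exact solution never does; stability alone does not guarantee a qualitatively faithful solution.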

The Euler method is not only unstable in a great many circumstances; it is not very accurate either. This can be ascertained easily by comparing the exact solution of

\begin{displaymath}\frac{\textrm{d} x(t)}{\textrm{d} t} = -k x
\end{displaymath}

for which a stable numerical Euler scheme is possible, with its numerical approximation. Starting from $x_0$, numerically we are going to get:

\begin{displaymath}x_1 = x(\Delta t) = x_0 - k x_0 \Delta t
\end{displaymath}

The exact solution is:

\begin{displaymath}x_1 = x_0 e^{-k \Delta t} = x_0 \left( 1 - k \Delta t
+ \frac{(k \Delta t)^2}{2!}
- \frac{(k \Delta t)^3}{3!} + \ldots \right)
\end{displaymath}

Hence the error is:

\begin{displaymath}\delta x_1 = x_0 \left( \frac{(k \Delta t)^2}{2!}
- \frac{(k \Delta t)^3}{3!} + \ldots \right)
\end{displaymath}

which is of order ${\cal O}\left((\Delta t)^2\right)$ per step. Accumulated over the ${\cal O}\left(1/\Delta t\right)$ steps needed to cover a fixed time interval, the global error is of order ${\cal O}\left(\Delta t\right)$, which is why such methods are said to be first-order accurate.
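First-order accuracy can be checked numerically: halve $\Delta t$ and the error at a fixed final time should halve along with it. A sketch (the function name is mine, not from the text):

```python
import math

def euler_final(k, t_final, n_steps, x0=1.0):
    """Integrate dx/dt = -k*x up to t_final with explicit Euler."""
    dt = t_final / n_steps
    x = x0
    for _ in range(n_steps):
        x = x - k * x * dt
    return x

k, t_final = 1.0, 1.0
exact = math.exp(-k * t_final)
err_coarse = abs(euler_final(k, t_final, 100) - exact)   # dt = 0.01
err_fine = abs(euler_final(k, t_final, 200) - exact)     # dt = 0.005
print(err_coarse / err_fine)  # ratio ≈ 2: halving dt halves the global error
```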

The reason for this lack of accuracy is that, as the formula advances the solution through the interval $\left[t_n, t_n + \Delta t\right]$, the derivative information that pertains to the interval is sampled only at the beginning of the interval.


Zdzislaw Meglicki
2001-02-26