Next: Other Matrix Operations Up: Pushing Particles Previous: Bulirsch-Stoer Step

Stiff Equations

Every time we encounter a problem where things change on two vastly different scales, the equations get stiff.

Even innocent second-order ODEs can easily lead to stiff systems of first-order ODEs.

Consider the following example:

\begin{displaymath}\frac{\textrm{d}^2 x(t)}{\textrm{d} t^2} = 100 x
\end{displaymath} (4.79)

At first glance there is just one scale in this equation, which is 100. Now assume a solution of the following form:

\begin{displaymath}x(t) = A e^{a t}
\end{displaymath} (4.80)

then

\begin{eqnarray*}\frac{\textrm{d} x(t)}{\textrm{d} t} &=& a A e^{a t} \\
\frac{\textrm{d}^2 x(t)}{\textrm{d} t^2} &=& a^2 A e^{a t}
= a^2 x(t)
\end{eqnarray*}


substituting this solution into our equation yields

\begin{displaymath}a = \pm 10
\end{displaymath} (4.81)

The most general solution is therefore a linear combination of two possibilities:

\begin{displaymath}x(t) = A e^{10 t} + B e^{-10 t}
\end{displaymath} (4.82)

There are two vastly different time scales in this solution.

Even if you set your initial conditions so that A = 0, e.g.,

\begin{eqnarray*}x(0) &=& 1 \\
\frac{\textrm{d} x(t)}{\textrm{d} t} \bigg\vert_{t = 0} &=& -10
\end{eqnarray*}


rounding errors will add a small amount of the $e^{10 t}$ solution, so that numerically you'll get:

\begin{displaymath}x(t) = e^{-10 t} + \epsilon e^{10 t}
\end{displaymath} (4.83)

and if you integrate long enough, the solution will eventually blow up.
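This runaway is easy to reproduce. The following Python sketch, added here as an illustration (it is not part of the original notes, and the function name is made up), integrates equation (4.79) with the explicit Euler method from the initial conditions $x(0) = 1$, $x'(0) = -10$ that single out the decaying solution:

```python
# Illustration: explicit Euler on x'' = 100 x, rewritten as the first-order
# system x' = v, v' = 100 x.  The initial conditions select x(t) = e^{-10 t}
# exactly, yet numerical errors excite the e^{10 t} mode.
import math

def explicit_euler(t_end, dt):
    x, v = 1.0, -10.0                      # x(0) = 1, dx/dt(0) = -10
    t = 0.0
    while t < t_end:
        x, v = x + dt * v, v + dt * 100.0 * x   # use old x, v: explicit Euler
        t += dt
    return x

exact = math.exp(-10.0 * 5.0)              # the true solution at t = 5: tiny
numerical = explicit_euler(5.0, 1.0e-3)    # contaminated by the growing mode
print(exact, numerical)
```

Although the initial conditions select the $e^{-10 t}$ branch exactly, by $t = 5$ the computed solution is dominated by the spurious $e^{10 t}$ component.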

We saw this already when we discussed the instability of the explicit Euler method.

Here is how you can stabilize the solution method regardless of the length of your time step.

Consider the following very simple equation:

\begin{displaymath}\frac{\textrm{d} x(t)}{\textrm{d} t} = -c x(t)
\end{displaymath} (4.84)

The derivative itself can be approximated by:

\begin{displaymath}\frac{\textrm{d} x(t)}{\textrm{d} t} \approx
\frac{x(t + \Delta t) - x(t)}{\Delta t}
\end{displaymath} (4.85)

In the explicit Euler scheme the right-hand side was evaluated at x(t). But we can just as well evaluate it at $x(t + \Delta t)$. This leads to the following equation:

\begin{displaymath}\frac{x(t + \Delta t) - x(t)}{\Delta t}
= -c x(t + \Delta t)
\end{displaymath} (4.86)

and the solution is:

\begin{displaymath}x(t + \Delta t) = \frac{x(t)}{1 + c \Delta t}
\end{displaymath} (4.87)

This solution is always going to converge to zero, regardless of the step length $\Delta t$. If $\Delta t$ is too large, the convergence may not be very accurate, but the numerical solution itself is not going to explode.

This solution method is said to be unconditionally stable.
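For a concrete comparison, here is a small Python sketch (an illustration added to these notes, with made-up names) taking ten steps of each scheme on equation (4.84) with a deliberately huge step, $c \Delta t = 100$:

```python
# Illustration: explicit vs implicit Euler for dx/dt = -c x with c*dt = 100.
c, dt = 100.0, 1.0

def explicit_step(x):
    return x + dt * (-c * x)       # x(t+dt) = (1 - c dt) x(t)

def implicit_step(x):
    return x / (1.0 + c * dt)      # equation (4.87)

x_exp = x_imp = 1.0
for _ in range(10):
    x_exp = explicit_step(x_exp)
    x_imp = implicit_step(x_imp)

print(x_exp, x_imp)
```

With $c \Delta t = 100$ the explicit amplification factor is $1 - c \Delta t = -99$, so the explicit iterate oscillates with exploding amplitude, while the implicit one decays monotonically toward zero, if not very accurately.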

The reasoning extends easily to a system of ODEs. Consider the following:

\begin{displaymath}\frac{\textrm{d}\boldsymbol{x}(t)}{\textrm{d}t}
= - \boldsymbol{C} \cdot \boldsymbol{x}(t)
\end{displaymath} (4.88)

Implicit differencing yields

\begin{displaymath}\boldsymbol{x}(t + \Delta t)
= \left(\boldsymbol{1} + \boldsymbol{C}\Delta t\right)^{-1}
\cdot \boldsymbol{x}(t)
\end{displaymath} (4.89)

This equation can be solved analytically, for example, by diagonalizing C. In the corresponding basis the solution can then be written as:

\begin{displaymath}\tilde{\boldsymbol{x}}_i(t + \Delta t)
= \frac{\tilde{\boldsymbol{x}}_i(t)}
{1 + \lambda_i \Delta t}
\end{displaymath} (4.90)

In order to obtain $\boldsymbol{x}(t + \Delta t)$ we have to rotate $\tilde{\boldsymbol{x}}(t + \Delta t)$ back to the original basis.

We have discussed this step in P573.

If all $\lambda_i$ are nonnegative (i.e., C is positive semi-definite), then the method is stable for any step size $\Delta t$.

In order to solve the equation we can invert $\boldsymbol{1} + \boldsymbol{C} \Delta t$ at the first step and then, assuming that $\Delta t$ is fixed, simply keep reusing the inverse, because C is constant.
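In code, reusing the inverse might look like the following minimal Python sketch (an illustration added here, with a made-up 2x2 matrix; a real application would store an LU factorization from a linear algebra library rather than an explicit inverse):

```python
# Illustration: implicit Euler for dx/dt = -C x with a constant 2x2 matrix C.
# Because C and dt are fixed, (1 + C dt)^{-1} is computed once and reused.
dt = 0.1
C = [[3.0, 1.0],
     [1.0, 3.0]]               # symmetric, eigenvalues 2 and 4: positive definite

# M = 1 + C dt, inverted once via the 2x2 closed form
M = [[1.0 + dt * C[0][0], dt * C[0][1]],
     [dt * C[1][0], 1.0 + dt * C[1][1]]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[ M[1][1] / det, -M[0][1] / det],
        [-M[1][0] / det,  M[0][0] / det]]

x = [1.0, -1.0]                # an eigenvector of C, eigenvalue 2
for _ in range(100):           # reuse Minv at every step
    x = [Minv[0][0] * x[0] + Minv[0][1] * x[1],
         Minv[1][0] * x[0] + Minv[1][1] * x[1]]

print(x)                       # decays toward [0, 0] for any dt
```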

If C = C(t) but still positive definite for every t, or if we are going to change $\Delta t$, we may have to invert $\boldsymbol{1} + \boldsymbol{C} \Delta t$ at every step, or invoke one of our eigenvalue procedures to do the job, and then rotate the solution as need be.

In general

\begin{displaymath}\frac{\textrm{d}\boldsymbol{x}(t)}{\textrm{d} t}
= \boldsymbol{f}(\boldsymbol{x}, t)
\end{displaymath} (4.91)

and the implicit differencing scheme:

\begin{displaymath}\boldsymbol{x}(t + \Delta t)
= \boldsymbol{x}(t) + \Delta t
\boldsymbol{f}(\boldsymbol{x}(t + \Delta t), t + \Delta t)
\end{displaymath} (4.92)

leads to a set of nonlinear algebraic equations that have to be solved iteratively, e.g., using the Newton-Raphson method, at every time step. In the general case this scheme is not guaranteed to be stable, but if the Jacobian $\partial \boldsymbol{f}/\partial\boldsymbol{x}$ is negative definite for every t, then the method is stable.
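As an illustration (not from the original notes; the equation and names are made up), here is a fully implicit Euler step for the scalar equation $dx/dt = -x^3$, with the nonlinear algebraic equation at each step solved by Newton-Raphson:

```python
# Illustration: fully implicit Euler for dx/dt = f(x) = -x**3.
# Each step solves g(y) = y - x - dt*f(y) = 0 for y = x(t + dt) with Newton.
def f(x):
    return -x**3

def implicit_step(x, dt, tol=1e-12, max_iter=50):
    y = x                               # initial Newton guess: the old value
    for _ in range(max_iter):
        g = y - x - dt * f(y)
        dg = 1.0 - dt * (-3.0 * y**2)   # g'(y) = 1 - dt f'(y)
        y_new = y - g / dg              # Newton-Raphson update
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

x, dt = 2.0, 0.5                        # a large step for this equation
for _ in range(20):
    x = implicit_step(x, dt)
print(x)                                # decays toward zero without blowing up
```

The Newton iteration converges in a handful of sweeps here because the scalar problem is benign; for a system, each Newton iteration requires solving a linear system with the matrix $\boldsymbol{1} - \Delta t\, \partial\boldsymbol{f}/\partial\boldsymbol{x}$ evaluated at the current iterate.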

If the time step $\Delta t$ is a short one, then

\begin{displaymath}\boldsymbol{f}(\boldsymbol{x}(t + \Delta t), t + \Delta t)
\approx \boldsymbol{f}(\boldsymbol{x}(t), t + \Delta t)
+ \frac{\partial\boldsymbol{f}}{\partial\boldsymbol{x}}
\bigg\vert_{\boldsymbol{x} = \boldsymbol{x}(t)}
\cdot \left(\boldsymbol{x}(t + \Delta t) - \boldsymbol{x}(t)\right)
\end{displaymath} (4.93)

and the solution to this equation is

\begin{displaymath}\boldsymbol{x}(t + \Delta t)
= \left(\boldsymbol{1} - \Delta t \frac{\partial\boldsymbol{f}}
{\partial\boldsymbol{x}}\bigg\vert_{\boldsymbol{x} = \boldsymbol{x}(t)}\right)^{-1}
\cdot \left(\boldsymbol{x}(t)
+ \Delta t \left(\boldsymbol{f}(\boldsymbol{x}(t), t + \Delta t)
- \frac{\partial\boldsymbol{f}}{\partial\boldsymbol{x}}
\bigg\vert_{\boldsymbol{x} = \boldsymbol{x}(t)}
\cdot \boldsymbol{x}(t)\right)\right)
\end{displaymath} (4.94)

So the only matrix we have to invert in this case is:

\begin{displaymath}\boldsymbol{1} - \Delta t \frac{\partial\boldsymbol{f}}
{\partial\boldsymbol{x}}\bigg\vert_{\boldsymbol{x} = \boldsymbol{x}(t)}
\end{displaymath} (4.95)

This is called a semi-implicit method, and its stability depends on the sign of the Jacobian $\partial \boldsymbol{f}/\partial\boldsymbol{x}$, as in the fully implicit case.
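For the same made-up scalar equation $dx/dt = -x^3$ as before, the semi-implicit step of equation (4.94) reduces to a single division per step, since the matrix to invert becomes the scalar $1 - \Delta t\, f'(x(t))$ (this sketch is an added illustration, not part of the original notes):

```python
# Illustration: semi-implicit (linearized implicit) Euler for dx/dt = -x**3.
# One linear solve per step, no inner Newton iteration.
def semi_implicit_step(x, dt):
    fx = -x**3                  # f(x(t))
    J = -3.0 * x**2             # Jacobian df/dx evaluated at x(t)
    # equation (4.94), scalar form: x(t+dt) = (1 - dt J)^{-1} (x + dt (f(x) - J x))
    return (x + dt * (fx - J * x)) / (1.0 - dt * J)

x, dt = 2.0, 0.5                # same large step as in the fully implicit case
for _ in range(20):
    x = semi_implicit_step(x, dt)
print(x)                        # decays toward zero, no Newton loop needed
```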

The implicit and semi-implicit methods discussed so far are, like the Euler method, only first-order accurate. Higher-order implicit methods exist too, but they are prohibitively costly and thus seldom used. It is usually cheaper to shorten the time step while enjoying the stability of a first-order implicit or semi-implicit method. For a longer time step the linearization behind a semi-implicit method may break down in the first place, and the resulting cost would be further compounded by having to fall back on the fully nonlinear scheme.

Semi-implicit methods are commonly used whenever radiative transfer has to be combined with CFD computations, because the time scales associated with radiative transfer phenomena are very short compared to CFD time scales. You will encounter such techniques in weather prediction codes and in ocean-atmosphere circulation codes. Another area is the simulation of astrophysical systems, such as accretion disks, supernova explosions, and collisions between neutron stars.

Implicit and semi-implicit methods applied to systems of ODEs and PDEs (the latter can be reduced to the former) require the use of matrix inversion methods, or, at the very least, the use of linear equation solvers. This is what we are going to focus on in the next chapter.


Zdzislaw Meglicki
2001-02-26