
Example Code

Here is a simple example code that implements the fourth-order Runge-Kutta method:

SUBROUTINE rk4(y, dydx, x, h, yout, derivs)

  USE nrtype; USE nrutil, ONLY : assert_eq
  IMPLICIT NONE

  REAL(sp), DIMENSION(:), INTENT(in) :: y, dydx
  REAL(sp), INTENT(in) :: x, h
  REAL(sp), DIMENSION(:), INTENT(out) :: yout

  INTERFACE
     SUBROUTINE derivs(x, y, dydx)
       USE nrtype
       REAL(sp), INTENT(in) :: x
       REAL(sp), DIMENSION(:), INTENT(in) :: y
       REAL(sp), DIMENSION(:), INTENT(out) :: dydx
     END SUBROUTINE derivs
  END INTERFACE

  INTEGER(i4b) :: ndum
  REAL(sp) :: h6, hh, xh
  REAL(sp), DIMENSION(SIZE(y)) :: dym, dyt, yt

  ndum = assert_eq(SIZE(y), SIZE(dydx), SIZE(yout), 'rk4')
  hh = h*0.5_sp
  h6 = h/6.0_sp
  xh = x + hh
  yt = y + hh * dydx          ! y_n + k1/2
  CALL derivs(xh, yt, dyt)    ! dyt = k2/h
  yt = y + hh * dyt           ! y_n + k2/2
  CALL derivs(xh, yt, dym)    ! dym = k3/h
  yt = y + h * dym            ! y_n + k3
  dym = dyt + dym             ! (k2 + k3)/h
  CALL derivs(x + h, yt, dyt) ! dyt = k4/h
  yout = y + h6 * (dydx + dyt + 2.0_sp * dym)

END SUBROUTINE rk4


The subroutine applies the Runge-Kutta solver to a system of m equations:

\begin{eqnarray*}
\frac{\textrm{d} y_1}{\textrm{d} x} &=&
\textrm{dydx}_1\left( y_1, y_2, \ldots, y_m, x \right) \\
&\vdots& \\
\frac{\textrm{d} y_m}{\textrm{d} x} &=&
\textrm{dydx}_m\left( y_1, y_2, \ldots, y_m, x \right)
\end{eqnarray*}

The computation is carried out in parallel across the m components, assuming a parallelising Fortran compiler: each array assignment operates on the whole vector at once.

Note that here the lower index numbers the equations, not the time steps. Furthermore, the notation itself is y(x) rather than x(t).
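Before walking through the code line by line, it may help to see the same algorithm transcribed into Python with NumPy (a sketch of my own, not part of the nrutil library; the names mirror the Fortran, except that the result is returned rather than written into an INTENT(out) argument):

```python
import numpy as np

def rk4(y, dydx, x, h, derivs):
    """One fourth-order Runge-Kutta step for a system of ODEs.

    y      -- vector of values y(x)
    dydx   -- derivs(x, y), precomputed by the caller, as in the Fortran code
    x, h   -- current "time" and step length
    derivs -- callable derivs(x, y) returning dy/dx as an array
    """
    hh = 0.5 * h             # Delta t / 2
    h6 = h / 6.0             # Delta t / 6
    xh = x + hh              # t_n + Delta t / 2
    yt = y + hh * dydx       # y_n + k1/2
    dyt = derivs(xh, yt)     # k2 / Delta t
    yt = y + hh * dyt        # y_n + k2/2
    dym = derivs(xh, yt)     # k3 / Delta t
    yt = y + h * dym         # y_n + k3
    dym = dyt + dym          # (k2 + k3) / Delta t
    dyt = derivs(x + h, yt)  # k4 / Delta t
    return y + h6 * (dydx + dyt + 2.0 * dym)
```

As in the Fortran version, the caller supplies the first-stage derivative dydx = derivs(x, y) itself, which avoids recomputing it when the routine is embedded in an adaptive stepper.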

The arguments that must be passed to the subroutine are:

y
   the vector of initial values of yn (in our notation this would be xn) at ``time'' xn (in our notation this would be tn);

dydx
   the vector of what in our notation would be f(xn, tn); in this program's notation, these are the values that the right-hand side assumes at ``time'' xn;

x
   the value of time, in our notation tn;

h
   the length of the time step, in our notation $\Delta t$;

yout
   a place for the new, updated values y(x + h); in our notation this would be $\boldsymbol{x}_{n+1} = \boldsymbol{x}(t + \Delta t)$;

derivs
   the name of the subroutine that corresponds to our function f(x, t), and whose job is to evaluate dydx at various stages of the computation.

The first operation simply checks whether the arrays y, dydx, and yout are of matching sizes; if not, an appropriate error message is written to standard output. Otherwise the common size is assigned to a dummy variable ndum that is not used any more:

  ndum = assert_eq(SIZE(y), SIZE(dydx), SIZE(yout), 'rk4')
The next three statements:
  hh = h*0.5_sp
  h6 = h/6.0_sp
  xh = x + hh
evaluate $\frac{1}{2}\Delta t$ (hh), $\frac{1}{6}\Delta t$ (h6), and $t_n + \frac{1}{2}\Delta t$, which goes into xh.

And now we evaluate

\begin{displaymath}x_n + \frac{k_1}{2} = x_n + f(x_n, t_n) \frac{\Delta t}{2}, \end{displaymath}

and store it in yt. This will go into k2 later:
  yt = y + hh * dydx

The next step is to evaluate

\begin{displaymath}k_2 = f\left(x_n + f\left(x_n, t_n\right) \frac{\Delta t}{2},
t_n + \frac{\Delta t}{2}\right)\Delta t, \end{displaymath}

where $x_n + k_1/2$ is already stored in yt, and $t_n + \Delta t/2$ is stored in xh. Observe that subroutine derivs reverses the order of arguments compared with our notation, i.e., time (xh) goes first, then position (yt), and the last argument carries the answer once the subroutine returns:
  CALL derivs(xh, yt, dyt)
What this call actually returns is not k2 but $k_2 / \Delta t$, because we haven't multiplied dyt by anything yet.

This multiplication takes place in the next line, when we evaluate

  yt = y + hh * dyt
But remember that hh is really $\Delta t/2$, so what we are evaluating here is, in fact:

\begin{displaymath}x_n + \frac{k_2}{2}, \end{displaymath}

so that the next call:
  CALL derivs(xh, yt, dym)

evaluates

\begin{displaymath}f\left( x_n + \frac{k_2}{2}, t_n + \frac{\Delta t}{2}\right), \end{displaymath}

which really is $k_3/\Delta t$, and returns it in dym.

Now we evaluate $x_n + k_3$, as follows:

  yt = y + h * dym
Remember that whereas hh stands for $\Delta t/ 2$, the single h is just $\Delta t$, so h * dym is indeed plain k3, whereas x + h stands for, in our notation, $t_n + \Delta t$. With these two we can now evaluate k4:
  CALL derivs(x + h, yt, dyt)
What's returned in dyt is $k_4/\Delta t$.
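The correspondence between the Fortran variables and the classical stages $k_1, \ldots, k_4$ can be checked directly. In the following Python sketch (my notation, for a single scalar equation), dydx, dyt, and dym are computed exactly as in the subroutine, and each equals the matching stage divided by $\Delta t$:

```python
# Scalar test problem: dy/dx = y, y(0) = 1, one step of length h.
f = lambda x, y: y
x, y, h = 0.0, 1.0, 0.2

# Classical Runge-Kutta stages.
k1 = h * f(x, y)
k2 = h * f(x + h / 2, y + k1 / 2)
k3 = h * f(x + h / 2, y + k2 / 2)
k4 = h * f(x + h, y + k3)

# The quantities the Fortran code keeps: each stage divided by h.
dydx = f(x, y)                          # k1 / h
dyt = f(x + h / 2, y + (h / 2) * dydx)  # k2 / h
dym = f(x + h / 2, y + (h / 2) * dyt)   # k3 / h
```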

Observe that just before calling derivs we saved the value of dyt + dym in dym. That value was $k_2/\Delta t + k_3 / \Delta t$. So this means that h6 * 2.0_sp * dym stands for

\begin{displaymath}\frac{\Delta t}{6} \, 2
\left( \frac{k_2}{\Delta t} + \frac{k_3}{\Delta t} \right)
= \frac{k_2}{3} + \frac{k_3}{3}, \end{displaymath}

In turn, h6 * (dydx + dyt) stands for

\begin{displaymath}\frac{\Delta t}{6} \left(
\frac{k_1}{\Delta t} + \frac{k_4}{\Delta t} \right)
= \frac{k_1}{6} + \frac{k_4}{6}. \end{displaymath}

Consequently, you can now see clearly that:
  yout = y + h6 * (dydx + dyt + 2.0_sp * dym)

\begin{displaymath}x_n + \frac{k_1}{6} + \frac{k_2}{3} + \frac{k_3}{3}
+ \frac{k_4}{6}, \end{displaymath}

which is the Runge-Kutta formula.
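As a quick sanity check (my own sketch, not part of the original routine): the one-step error of this formula is of order $(\Delta t)^5$, so halving the step should shrink the error of a single step roughly 32-fold. A few lines of Python confirm this for $\textrm{d}y/\textrm{d}x = y$:

```python
import math

def rk4_step(f, x, y, h):
    """One classical fourth-order Runge-Kutta step for a scalar ODE."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + k1 / 6 + k2 / 3 + k3 / 3 + k4 / 6

f = lambda x, y: y  # exact solution: y(x) = exp(x)

def step_error(h):
    return abs(rk4_step(f, 0.0, 1.0, h) - math.exp(h))

# Error ratio when the step is halved; should be close to 2**5 = 32.
ratio = step_error(0.1) / step_error(0.05)
```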

Zdzislaw Meglicki