next up previous index
Next: Diagonalization of Hermitian Matrices Up: The QR and QL Previous: QL Algorithm with Implicit

Example Code

SUBROUTINE tqli(d,e,z)

  USE nrtype; USE nrutil, ONLY : assert_eq, nrerror
  USE nr, ONLY : pythag

  IMPLICIT NONE

  REAL(sp), DIMENSION(:), INTENT(inout) :: d, e
  REAL(sp), DIMENSION(:,:), OPTIONAL, INTENT(inout) :: z

  INTEGER(i4b) :: i, iter, l, m, n, ndum
  REAL(sp) :: b, c, dd, f, g, p, r, s
  REAL(sp), DIMENSION(SIZE(e)) :: ff

  n = assert_eq(SIZE(d), SIZE(e), 'tqli: n')
  IF(PRESENT(z)) ndum=assert_eq(n, SIZE(z, 1), SIZE(z, 2), 'tqli: ndum')
  e(:)=EOSHIFT(e(:),1)            ! move the valid off-diagonal terms to e(1:n-1)
  DO l=1, n
     iter=0
     iterate: DO
        DO m=l, n-1               ! look for a negligible off-diagonal element
           dd=ABS(d(m)) + ABS(d(m+1))
           IF(ABS(e(m))+dd == dd) EXIT
        END DO
        IF (m==l) EXIT iterate    ! e(l) already negligible: d(l) has converged
        IF(iter == 30) CALL nrerror('too many iterations in tqli')
        iter = iter+1
        g=(d(l+1)-d(l))/(2.0_sp*e(l))             ! theta
        r = pythag(g, 1.0_sp)                     ! sqrt(theta**2 + 1)
        g = d(m) - d(l) + e(l) / (g + SIGN(r, g)) ! d(m) - k_s, the shifted pivot
        s = 1.0_sp
        c = 1.0_sp
        p = 0.0_sp
        DO i = m-1, l, -1         ! one implicit QL sweep, bottom to top
           f = s*e(i)
           b = c*e(i)
           r = pythag(f, g)
           e(i+1) = r
           IF(r == 0.0_sp) THEN   ! degenerate rotation: finish this sweep early
              d(i+1) = d(i+1) - p
              e(m) = 0.0_sp
              CYCLE iterate
           END IF
           s = f/r
           c = g/r
           g = d(i + 1) - p
           r = (d(i) - g) * s + 2.0_sp * c * b
           p = s * r
           d(i + 1) = g + p
           g = c * r - b
           IF(PRESENT(z)) THEN    ! accumulate the rotation in the eigenvectors
              ff(1:n) = z(1:n,i+1)
              z(1:n, i+1) = s*z(1:n,i) + c*ff(1:n)
              z(1:n,i) = c*z(1:n, i) - s*ff(1:n)
           END IF
        END DO
        d(l) = d(l) - p
        e(l) = g
        e(m) = 0.0_sp
     END DO iterate
  END DO
END SUBROUTINE tqli

This subroutine takes two arrays of length n on input: d, which contains the diagonal terms of a tridiagonal symmetric matrix A, and e, which contains the sub- (and, by symmetry, super-) diagonal terms of A. The third parameter, z, is optional. If it is provided, the eigenvectors that correspond to the diagonalizing transformation Q (and thus the transformation itself) are returned in it, so its dimensions must be $n\times n$. On output the subroutine returns the eigenvalues in d, whereas the contents of e are destroyed. Because there are only n-1 sub-diagonal terms, e(1) is ignored and the valid terms are assumed to live in e(2) through e(n).
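The storage convention is easy to get wrong, so here is a small sketch of it in Python (for illustration only; NumPy's eigensolver stands in for tqli, and all values are hypothetical). In 0-based Python indexing the ignored slot is e[0]:

```python
import numpy as np

# d holds the n diagonal terms; e holds the n-1 off-diagonal terms
# in e[1:] (e[0] is the ignored slot), mirroring the Fortran convention.
n = 5
rng = np.random.default_rng(0)
d = rng.normal(size=n)
e = np.empty(n)
e[1:] = rng.normal(size=n - 1)   # valid sub/super-diagonal terms
e[0] = 0.0                       # ignored slot

# The tridiagonal symmetric matrix A these two arrays describe:
A = np.diag(d) + np.diag(e[1:], 1) + np.diag(e[1:], -1)

# tqli would overwrite d with the eigenvalues of A; here NumPy's
# eigensolver is used as an independent reference.
eigenvalues = np.sort(np.linalg.eigvalsh(A))
```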

The first two statements check whether arrays d and e are properly dimensioned. If they are, their common length is written on n; if not, an error message containing the string 'tqli: n' is written on standard output and the subroutine aborts. Then, if the third argument z is present, we check that it is correctly dimensioned, i.e., $n\times n$, and write its dimension on ndum, a dummy variable that is not used again, because the real n already lives in n:

  n = assert_eq(SIZE(d), SIZE(e), 'tqli: n')
  IF(PRESENT(z)) ndum=assert_eq(n, SIZE(z, 1), SIZE(z, 2), 'tqli: ndum')

The next step is to shift array e to the left by one place for convenience:

  e(:)=EOSHIFT(e(:),1)
so that the first valid element of it goes into e(1) and the last goes into e(n-1), i.e., the matrix A now looks as follows:

\begin{displaymath}\left(
\begin{array}{ccccccc}
d_1 & e_1 & 0 & 0 & \ldots & 0 & 0 \\
e_1 & d_2 & e_2 & 0 & \ldots & 0 & 0 \\
0 & e_2 & d_3 & e_3 & \ldots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & 0 & \ldots & d_{n-1} & e_{n-1} \\
0 & 0 & 0 & 0 & \ldots & e_{n-1} & d_n
\end{array} \right)
\end{displaymath}
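The EOSHIFT can be mimicked in Python (for illustration) with a pair of slice assignments; in 0-based indexing the valid terms end up in e[0] through e[n-2]:

```python
import numpy as np

# EOSHIFT(e, 1) shifts e left by one place and fills the vacated
# last slot with zero.
e = np.array([0.0, 2.0, 3.0, 4.0, 5.0])  # e[0] is the ignored slot
e[:-1] = e[1:]
e[-1] = 0.0
# e is now [2, 3, 4, 5, 0]: valid terms occupy e[0]..e[n-2]
```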

Now the real fun begins. The main body of the subroutine is a large DO loop, with another DO loop inside and a number of conditionals:

  DO l=1, n
     iter=0
     iterate: DO
        DO m=l, n-1
           dd=ABS(d(m)) + ABS(d(m+1))
           IF(ABS(e(m))+dd == dd) EXIT
        END DO
        IF (m==l) EXIT iterate
        IF(iter == 30) CALL nrerror('too many iterations in tqli')
        iter = iter+1
        
        blah... blah... blah...

        DO i = m-1, l, -1

           blah... blah... blah...

           IF(r == 0.0) THEN
              d(i+1) = d(i+1) - p
              e(m) = 0.0
              CYCLE iterate
           END IF

           blah... blah... blah...

        END DO

        blah... blah... blah...     

        d(l) = d(l) - p
        e(l) = g
        e(m) = 0.0
     END DO iterate
  END DO
There are several ideas here. First, we move downwards: we start with l = 1, work on the corresponding off-diagonal element until it is annihilated, then deflate the matrix A by crossing out the first row and the first column, repeat the same operation on the smaller matrix, and keep going until the whole of A is done.

As you remember, the QL algorithm generates eigenvalues in the order of diminishing absolute value. So once the first such value pops up in the upper left corner, that part of the job is finished and the matrix can be deflated.

Within each such deflated submatrix, as l moves towards n, we try to split the submatrix of A at a very small subdiagonal element $e_m$, so small that

\begin{displaymath}
\vert d_m\vert + \vert d_{m+1}\vert + \vert e_m\vert = \vert d_m\vert + \vert d_{m+1}\vert
\end{displaymath}

within the accuracy of floating point operations. Assuming that we have found such an element, we are eventually going to set it to zero formally (at the end of the iterate loop), while performing some mathematics on the elements between m and l. If we hit on such an element at the very beginning, i.e., if m = l, then for this l the off-diagonal term is already negligible and we can move on to l+1 right away. If we don't find such an element at all, then m simply runs off the end of the DO loop and is left equal to n, i.e., we'll just have to deal with the whole submatrix, that is, the deflated A.
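The splitting test is worth seeing in isolation; a minimal Python sketch (the function name and the numbers are hypothetical):

```python
# The split test: e[m] is "negligible" when adding it to
# |d[m]| + |d[m+1]| does not change the floating-point sum.
def negligible(em, dm, dm1):
    dd = abs(dm) + abs(dm1)
    return abs(em) + dd == dd

# A tiny off-diagonal element drowns in the diagonal terms...
print(negligible(1e-20, 1.0, 2.0))   # True: far below machine precision
# ...while one of comparable magnitude does not.
print(negligible(1e-3, 1.0, 2.0))    # False
```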

Now, recall the shifting strategy:

Choose $k_s$ equal to the eigenvalue of the leading $2\times2$ submatrix that is closer to $d_l$.
OK, so we have this little submatrix:

\begin{displaymath}\left(
\begin{array}{cc}
d_l & e_l \\
e_l & d_{l+1}
\end{array} \right)
\end{displaymath}

and how do we find its eigenvalue? Well, we simply hit it with the Jacobi rotation:

\begin{eqnarray*}
\theta &=& \frac{d_{l+1} - d_l}{2 e_l} \\
t &=& \frac{\hbox{sign}(\theta)}{\vert\theta\vert + \sqrt{\theta^2 + 1}} \\
d'_l &=& d_l - t\, e_l
\end{eqnarray*}


and this $d'_l$ is now going to be our $k_s$, i.e., the shift.

In the code this is implemented as:

        g=(d(l+1)-d(l))/(2.0_sp*e(l))
        r = pythag(g, 1.0_sp)
        g = d(m) - d(l) + e(l) / (g + SIGN(r, g))
where g is initially $\theta$, r is $\sqrt{\theta^2 + 1}$, the shift $k_s = d'_l$ becomes d(l) - e(l) / (g + SIGN(r, g)), and the second instance of g becomes $d_m - k_s$.
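As a quick numerical check (Python for illustration; the values of $d_l$, $d_{l+1}$ and $e_l$ are hypothetical), the shift computed this way really is the eigenvalue of the $2\times2$ block closest to $d_l$:

```python
import numpy as np

# The shift k_s, computed exactly as in the code.
dl, dl1, el = 1.0, 4.0, 0.5
g = (dl1 - dl) / (2.0 * el)             # theta
r = np.hypot(g, 1.0)                    # pythag(g, 1) = sqrt(theta^2 + 1)
ks = dl - el / (g + np.copysign(r, g))  # the shift d'_l

# Compare with the eigenvalues of the 2x2 submatrix:
evals = np.linalg.eigvalsh([[dl, el], [el, dl1]])
closest = evals[np.argmin(np.abs(evals - dl))]
```

np.copysign plays the role of Fortran's SIGN, and np.hypot is an overflow-safe pythag.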

Now let us have a look at the lower right corner of our submatrix of a deflated original A that results from a successful splitting:

\begin{displaymath}\begin{array}{llllll}
\ddots & e_{m-2} & 0 & 0 & 0 & \cdots \\
e_{m-2} & d_{m-1} & e_{m-1} & 0 & 0 & \cdots \\
0 & e_{m-1} & d_m & 0 & 0 & \cdots \\
0 & 0 & 0 & d_{m+1} & e_{m+1} & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{array}\end{displaymath}

We choose our $\cos{\varphi}$ and $\sin{\varphi}$ to be

\begin{eqnarray*}
\sin{\varphi} &=& s =
\frac{e_{m-1}}{\sqrt{e_{m-1}^2 + \left(d_m - k_s\right)^2}} \quad (3.58) \\
\cos{\varphi} &=& c =
\frac{d_m - k_s}{\sqrt{e_{m-1}^2 + \left(d_m - k_s\right)^2}} \quad (3.59)
\end{eqnarray*}

It is easy to see that their squares add up to 1, so they are good $\cos$ and $\sin$.

But what if $\sqrt{e_{m-1}^2 + \left(d_m - k_s\right)^2} = 0$? This can happen only if $e_{m-1} = 0$ and $d_m - k_s = 0$, or both are so close to zero that the machine cannot tell the difference. A condition like that should not really arise on the first sweep, because we have already located the first m for which $e_m$ is negligible, and that m was not m-1!

This part of the computation is implemented thusly:

           f = s*e(i)
           b = c*e(i)
           r = pythag(f, g)
           e(i+1) = r
           IF(r == 0.0) THEN
              d(i+1) = d(i+1) - p
              e(m) = 0.0
              CYCLE iterate
           END IF
           s = f/r
           c = g/r
where s and c stand for $\sin$ and $\cos$ respectively.
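On the first pass of the inner loop (i = m-1) we have s = c = 1 on entry, so f becomes $e_{m-1}$ and g is $d_m - k_s$; the normalisation then reproduces equations (3.58) and (3.59). A Python sketch with hypothetical numbers:

```python
import numpy as np

em1, dm_ks = 0.3, 0.7       # e_{m-1} and d_m - k_s (hypothetical values)
f, g = em1, dm_ks           # what f and g hold on the first pass
r = np.hypot(f, g)          # pythag(f, g): overflow-safe sqrt(f^2 + g^2)
s, c = f / r, g / r         # equations (3.58) and (3.59)
```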

What are $\sin{\varphi}$ and $\cos{\varphi}$ chosen thusly going to do to $e_{m-1}$?

Remember that we are now looking for a Jacobi rotation that kills element $A_{m-1,m}$ when applied to matrix A from the left, i.e., $\boldsymbol{P}^T\cdot\boldsymbol{A}$:

\begin{displaymath}\left(
\begin{array}{cc}
c & -s \\
s & c
\end{array}\right)
\cdot
\left(
\begin{array}{cc}
d_{m-1} - k_s & e_{m-1} \\
e_{m-1} & d_m - k_s
\end{array}\right)
=
\left(
\begin{array}{cc}
? & c\, e_{m-1} - s \left(d_m - k_s\right) \\
? & ?
\end{array}\right)
\end{displaymath}

You can now see that choosing $\sin{\varphi}$ and $\cos{\varphi}$ as given by equations (3.58) and (3.59) will indeed do the job.
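This is easy to verify numerically; a Python sketch (hypothetical values for $e_{m-1}$, $d_m - k_s$ and $d_{m-1} - k_s$):

```python
import numpy as np

em1, dm_ks, dm1_ks = 0.3, 0.7, -0.2   # e_{m-1}, d_m - k_s, d_{m-1} - k_s
r = np.hypot(em1, dm_ks)
s, c = em1 / r, dm_ks / r             # equations (3.58) and (3.59)
P = np.array([[c, -s], [s, c]])
A = np.array([[dm1_ks, em1], [em1, dm_ks]])
B = P @ A                             # apply the rotation from the left
# B[0, 1] = c*e_{m-1} - s*(d_m - k_s), which the choice of s, c annihilates
```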

Now, let us have a look at the next part of the code:

           g = d(i + 1) - p
           r = (d(i) - g) * s + 2.0_sp * c * b
           p = s * r
           d(i + 1) = g + p
           g = c * r - b
To understand what happens here we reverse engineer the computation from the end, i.e., from the place where
           d(i + 1) = g + p
where p = s * r and i = m-1; in other words, this is

\begin{displaymath}d'_m = g + s\, r
\end{displaymath}

Looking three lines up we see that $g = d_m - p$, but initially p is simply 0, so

\begin{displaymath}d'_m = d_m + s\, r
\end{displaymath}

In turn

\begin{displaymath}r = \left(d_{m-1} - d_m\right) s + 2 c e_{m-1}
\end{displaymath}

so

\begin{displaymath}d'_m = d_m + s^2\left(d_{m-1} - d_m\right) + 2 s c e_{m-1}
= s^2 d_{m-1} + c^2 d_m + 2 s c e_{m-1}
\end{displaymath}

Now, if you compare this with equation (3.15) on page [*] in the section about Jacobi rotations (section 3.2), you will see that this is simply:

\begin{displaymath}a'_{qq} = s^2 a_{pp} + c^2 a_{qq} + 2 s c a_{pq}
\end{displaymath}

i.e., a plain Jacobi rotation. This is then followed by the corresponding update of the eigenvectors:
           IF(PRESENT(z)) THEN
              ff(1:n) = z(1:n,i+1)
              z(1:n, i+1) = s*z(1:n,i) + c*ff(1:n)
              z(1:n,i) = c*z(1:n, i) - s*ff(1:n)
           END IF
which mimics the following Jacobi formulae:

\begin{eqnarray*}v'_{rq} &=& s v_{rp} + c v_{rq} \\
v'_{rp} &=& c v_{rp} - s v_{rq}
\end{eqnarray*}
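The column update can be exercised in isolation; a Python sketch (hypothetical s, c and z), which also shows why the copy ff is needed before the columns are overwritten:

```python
import numpy as np

s, c = 0.6, 0.8                    # any pair with s^2 + c^2 = 1
z = np.arange(9, dtype=float).reshape(3, 3)
ff = z[:, 1].copy()                # save the old column i+1, like ff(1:n)
z[:, 1] = s * z[:, 0] + c * ff     # v'_{rq} = s v_{rp} + c v_{rq}
z[:, 0] = c * z[:, 0] - s * ff     # v'_{rp} = c v_{rp} - s v_{rq}
```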


But even though for i = m - 1 the operations indeed correspond to a Jacobi rotation, on the following iterations for $i = m - 2, m - 3, \ldots, l + 1, l$ the operations are somewhat different: they are Givens rotations, whose purpose is to restore the tridiagonal form.
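The claim that the i = m-1 pass is a plain Jacobi rotation can itself be checked numerically. The following Python sketch (all values hypothetical) traces the code's updates for that first pass and compares the result with the Jacobi formula for $d'_m$:

```python
import numpy as np

dm1, dm, em1, ks = 2.0, 5.0, 0.8, 1.3   # d_{m-1}, d_m, e_{m-1}, k_s
s0, c0, p = 1.0, 1.0, 0.0               # initial values in the code
f = s0 * em1                            # f = s*e(i)
b = c0 * em1                            # b = c*e(i)
g = dm - ks                             # g as set up before the loop
r = np.hypot(f, g)
s, c = f / r, g / r
g2 = dm - p                             # g = d(i+1) - p
r2 = (dm1 - g2) * s + 2.0 * c * b       # r = (d(i) - g)*s + 2*c*b
new_dm = g2 + s * r2                    # d(i+1) = g + p, with p = s*r

# The Jacobi formula d'_m = s^2 d_{m-1} + c^2 d_m + 2 s c e_{m-1}:
jacobi = s ** 2 * dm1 + c ** 2 * dm + 2 * s * c * em1
```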

At the end of this process, as we go all the way back to l, $d_l$ and $e_l$ themselves will become affected and $e_m$ will be annihilated, whereas the matrix will remain tridiagonal. This part of the code closes the iterate loop:

        d(l) = d(l) - p
        e(l) = g
        e(m) = 0.0

The innermost DO loop corresponds to subjecting an ever-shrinking submatrix of A to a series of QL iterations. The iterations stop once the element $e_l$ has been annihilated.
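To see the whole routine in action, here is a direct Python transliteration of tqli (a sketch for illustration, not the NR source; indices are 0-based, and NumPy's eigensolver is used only as an independent check):

```python
import numpy as np

def tqli(d, e, z=None):
    """QL with implicit shifts for a symmetric tridiagonal matrix.
    d: diagonal (overwritten with eigenvalues); e: off-diagonal with
    e[0] ignored; z: optional matrix accumulating the eigenvectors."""
    n = len(d)
    e = np.append(e[1:], 0.0)                  # the EOSHIFT(e, 1)
    for l in range(n):
        it = 0
        while True:
            for m in range(l, n - 1):          # look for a negligible e[m]
                dd = abs(d[m]) + abs(d[m + 1])
                if abs(e[m]) + dd == dd:
                    break
            else:
                m = n - 1                      # none found: whole submatrix
            if m == l:                         # d[l] has converged
                break
            if it == 30:
                raise RuntimeError('too many iterations in tqli')
            it += 1
            g = (d[l + 1] - d[l]) / (2.0 * e[l])              # theta
            r = np.hypot(g, 1.0)                              # pythag(g, 1)
            g = d[m] - d[l] + e[l] / (g + np.copysign(r, g))  # d[m] - k_s
            s = c = 1.0
            p = 0.0
            restart = False
            for i in range(m - 1, l - 1, -1):  # one implicit QL sweep
                f = s * e[i]
                b = c * e[i]
                r = np.hypot(f, g)
                e[i + 1] = r
                if r == 0.0:                   # degenerate rotation
                    d[i + 1] -= p
                    e[m] = 0.0
                    restart = True
                    break
                s, c = f / r, g / r
                g = d[i + 1] - p
                r = (d[i] - g) * s + 2.0 * c * b
                p = s * r
                d[i + 1] = g + p
                g = c * r - b
                if z is not None:              # accumulate eigenvectors
                    ff = z[:, i + 1].copy()
                    z[:, i + 1] = s * z[:, i] + c * ff
                    z[:, i] = c * z[:, i] - s * ff
            if restart:
                continue
            d[l] -= p
            e[l] = g
            e[m] = 0.0

# Check against NumPy on a random symmetric tridiagonal matrix:
rng = np.random.default_rng(1)
n = 6
d = rng.normal(size=n)
e = np.empty(n)
e[1:] = rng.normal(size=n - 1)
e[0] = 0.0                                     # the ignored slot
A = np.diag(d) + np.diag(e[1:], 1) + np.diag(e[1:], -1)
dd, z = d.copy(), np.eye(n)
tqli(dd, e, z)
```

The eigenvalues land in dd (in no particular order) and the k-th column of z is the eigenvector belonging to dd[k].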


Zdzislaw Meglicki
2001-02-26