Next: Convergence of the Jacobi Up: Eigensystems Previous: Introduction

# Jacobi Transformations of a Symmetric Matrix

The orthogonal transformations $P_{pq}$ annihilate element $(p,q)$ of an object matrix.

Successive transformations undo previously set zeros, but the off-diagonal terms nevertheless get smaller and smaller, until the matrix is diagonal to machine precision and all that is left is the diagonal with the eigenvalues on it.

Taking the product of all the transformations $P_{pq}$ yields the matrix of eigenvectors.

The method is absolutely foolproof for real symmetric matrices. But it is slow, painfully so for all but small matrices.

It is a simple and stable algorithm though, and it parallelises well too.

The Jacobi rotation matrix has the form:

$$
P_{pq} =
\begin{pmatrix}
1 & & & & & & \\
& \ddots & & & & & \\
& & c & \cdots & s & & \\
& & \vdots & 1 & \vdots & & \\
& & -s & \cdots & c & & \\
& & & & & \ddots & \\
& & & & & & 1
\end{pmatrix}
\tag{3.5}
$$

All diagonal elements are 1 with the exception of $(p,p)$ and $(q,q)$, which are $c$. The $(p,q)$ element is $s$ and the $(q,p)$ element is $-s$. All other elements are 0. Furthermore:

$$c = \cos\varphi \tag{3.6}$$
$$s = \sin\varphi \tag{3.7}$$

hence

$$c^2 + s^2 = 1 \tag{3.8}$$

Because $P_{pq}$ is orthogonal, $P_{pq}^{-1} = P_{pq}^T$, and so

$$A' = P_{pq}^T \, A \, P_{pq} \tag{3.9}$$

This operation will affect only rows $p$ and $q$ and columns $p$ and $q$, leaving the rest of the matrix unchanged.
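As a quick numerical sanity check (a NumPy sketch; the helper name `jacobi_rotation` is mine, not the text's), one can verify both the orthogonality of $P_{pq}$ and the fact that the similarity transform touches only rows and columns $p$ and $q$:

```python
import numpy as np

def jacobi_rotation(n, p, q, phi):
    """Build the n-by-n Jacobi rotation of equation (3.5): identity
    everywhere except P[p,p] = P[q,q] = c, P[p,q] = s, P[q,p] = -s."""
    c, s = np.cos(phi), np.sin(phi)
    P = np.eye(n)
    P[p, p] = P[q, q] = c
    P[p, q] = s
    P[q, p] = -s
    return P

n, p, q = 5, 1, 3
P = jacobi_rotation(n, p, q, 0.7)

# Orthogonality: P^{-1} = P^T, so P^T P = I.
assert np.allclose(P.T @ P, np.eye(n))

# The similarity transform A' = P^T A P, equation (3.9), leaves every
# element outside rows/columns p and q unchanged.
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
A = A + A.T                      # make it symmetric
Ap = P.T @ A @ P
mask = np.ones((n, n), dtype=bool)
mask[[p, q], :] = False
mask[:, [p, q]] = False
assert np.allclose(Ap[mask], A[mask])
```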

It is quite easy to see what the effect of equation (3.9) is going to be on selected terms.

First we need to come up with an expression that describes a generic term of matrix $P_{pq}$ in terms of Kronecker deltas:

$$
P_{ij} = \delta_{ij} + (c - 1)\left(\delta_{ip}\delta_{jp} + \delta_{iq}\delta_{jq}\right) + s\left(\delta_{ip}\delta_{jq} - \delta_{iq}\delta_{jp}\right) \tag{3.10}
$$

Now we can evaluate $a'_{rp}$ for, say, $r \neq p$ and $r \neq q$ (assume summation over dummy indices $i$ and $j$). For such $r$, equation (3.10) reduces to $P_{ir} = \delta_{ir}$, while $P_{jp} = c\,\delta_{jp} - s\,\delta_{jq}$, so that

$$
a'_{rp} = P^{-1}_{ri}\, a_{ij}\, P_{jp}
= P_{ir}\, a_{ij}\, P_{jp}
= \delta_{ir}\, a_{ij}\left(c\,\delta_{jp} - s\,\delta_{jq}\right)
= a_{rj}\left(c\,\delta_{jp} - s\,\delta_{jq}\right)
= c\, a_{rp} - s\, a_{rq} \tag{3.11}
$$

In summary:

$$a'_{rp} = c\, a_{rp} - s\, a_{rq} \tag{3.12}$$
$$a'_{rq} = c\, a_{rq} + s\, a_{rp} \tag{3.13}$$
$$a'_{pp} = c^2 a_{pp} + s^2 a_{qq} - 2 s c\, a_{pq} \tag{3.14}$$
$$a'_{qq} = s^2 a_{pp} + c^2 a_{qq} + 2 s c\, a_{pq} \tag{3.15}$$
$$a'_{pq} = \left(c^2 - s^2\right) a_{pq} + s c \left(a_{pp} - a_{qq}\right) \tag{3.16}$$
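These five update formulas, (3.12) through (3.16), can be checked numerically against an explicit $P^T A P$ (an illustrative NumPy sketch, not code from the text):

```python
import numpy as np

# Random symmetric test matrix and an arbitrary rotation angle.
rng = np.random.default_rng(1)
n, p, q = 6, 2, 4
A = rng.standard_normal((n, n))
A = A + A.T
phi = 0.3
c, s = np.cos(phi), np.sin(phi)

# Explicit Jacobi rotation, equation (3.5).
P = np.eye(n)
P[p, p] = P[q, q] = c
P[p, q], P[q, p] = s, -s
Ap = P.T @ A @ P

r = 0  # any index with r != p and r != q
assert np.isclose(Ap[r, p], c*A[r, p] - s*A[r, q])                        # (3.12)
assert np.isclose(Ap[r, q], c*A[r, q] + s*A[r, p])                        # (3.13)
assert np.isclose(Ap[p, p], c**2*A[p, p] + s**2*A[q, q] - 2*s*c*A[p, q])  # (3.14)
assert np.isclose(Ap[q, q], s**2*A[p, p] + c**2*A[q, q] + 2*s*c*A[p, q])  # (3.15)
assert np.isclose(Ap[p, q], (c**2 - s**2)*A[p, q] + s*c*(A[p, p] - A[q, q]))  # (3.16)
```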

The purpose of the Jacobi rotation $P_{pq}$ is to kill $a'_{pq}$. Thus:

$$\left(c^2 - s^2\right) a_{pq} + s c \left(a_{pp} - a_{qq}\right) = 0 \tag{3.17}$$

which implies

$$\frac{c^2 - s^2}{s c} = \frac{a_{qq} - a_{pp}}{a_{pq}} \tag{3.18}$$

Now, observe that:

$$\frac{c^2 - s^2}{2 s c} = \frac{\cos^2\varphi - \sin^2\varphi}{2 \sin\varphi \cos\varphi} = \frac{\cos 2\varphi}{\sin 2\varphi} = \cot 2\varphi$$

This means that if we divide both sides of equation (3.18) by 2, we'll get:

$$\cot 2\varphi = \frac{a_{qq} - a_{pp}}{2\, a_{pq}} \tag{3.19}$$

Let us call this quantity $\theta$ for convenience.

Next recall the following simple algebraic identity:

$$\cot 2\varphi = \frac{1 - \tan^2\varphi}{2 \tan\varphi} \tag{3.20}$$

This is easy to see. Since $\cot 2\varphi = \left(c^2 - s^2\right)/(2 s c)$, dividing both numerator and denominator by $c^2$ we have:

$$\cot 2\varphi = \frac{c^2 - s^2}{2 s c} = \frac{1 - s^2/c^2}{2\, s/c} = \frac{1 - \tan^2\varphi}{2 \tan\varphi} \tag{3.21}$$

Now, denote $\tan\varphi$ by $t$, so that the equation looks as follows:

$$\theta = \frac{1 - t^2}{2 t}\,, \qquad \text{i.e.} \qquad t^2 + 2 \theta t - 1 = 0 \tag{3.22}$$

Solving equation (3.22) with respect to $t$ yields the following:

$$t = -\theta \pm \sqrt{\theta^2 + 1}$$

Now, this solution can be rewritten in a computationally more convenient form. Consider the $+$ case:

$$t = -\theta + \sqrt{\theta^2 + 1} = \frac{\left(\sqrt{\theta^2 + 1} - \theta\right)\left(\sqrt{\theta^2 + 1} + \theta\right)}{\sqrt{\theta^2 + 1} + \theta} = \frac{1}{\theta + \sqrt{\theta^2 + 1}}$$

The $-$ case yields:

$$t = -\theta - \sqrt{\theta^2 + 1} = -\frac{1}{\sqrt{\theta^2 + 1} - \theta}$$

If we take the $+$ case for positive $\theta$ and the $-$ case for negative $\theta$, we'll end up with the smaller root $t$, the one that corresponds to an angle $|\varphi| \leq \pi/4$. This will yield the most stable reduction. So we can rewrite the formula for $t$ as follows:

$$t = \frac{\operatorname{sgn}(\theta)}{|\theta| + \sqrt{\theta^2 + 1}} \tag{3.23}$$
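The root selection can be checked numerically (a NumPy sketch; `theta` below is an arbitrary sample value standing in for $\theta$):

```python
import numpy as np

# Both roots of t^2 + 2*theta*t - 1 = 0, equation (3.22), plus the
# rationalized "+" form used in (3.23).
theta = 3.0
t_plus = -theta + np.sqrt(theta**2 + 1.0)
t_minus = -theta - np.sqrt(theta**2 + 1.0)

# Both candidates satisfy the quadratic:
for t in (t_plus, t_minus):
    assert np.isclose(t**2 + 2.0 * theta * t - 1.0, 0.0)

# The rationalized form equals the "+" root:
assert np.isclose(t_plus, 1.0 / (theta + np.sqrt(theta**2 + 1.0)))

# The "+" root is the smaller one in magnitude, i.e. |phi| <= pi/4:
assert abs(t_plus) <= 1.0 < abs(t_minus)
```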

Once we have the $t$, we get $c$ and $s$ as follows:

$$c = \frac{1}{\sqrt{t^2 + 1}} \tag{3.24}$$
$$s = c\, t \tag{3.25}$$
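Formulas (3.19) and (3.23) through (3.25) translate directly into code. A sketch (the helper name `rotation_params` is mine; it assumes $a_{pq} \neq 0$), with a check that the resulting rotation does annihilate the $(p,q)$ element per (3.16):

```python
import numpy as np

def rotation_params(app, aqq, apq):
    """Stable choice of t = tan(phi), then c and s, per equations
    (3.19) and (3.23)-(3.25). Assumes apq != 0."""
    theta = (aqq - app) / (2.0 * apq)                            # (3.19)
    t = np.sign(theta) / (abs(theta) + np.sqrt(theta**2 + 1.0))  # (3.23)
    if theta == 0.0:
        t = 1.0          # sign(0) == 0, so pick the "+" root explicitly
    c = 1.0 / np.sqrt(t**2 + 1.0)                                # (3.24)
    s = c * t                                                    # (3.25)
    return c, s

app, aqq, apq = 4.0, 1.0, 2.0
c, s = rotation_params(app, aqq, apq)

# The rotated (p,q) element, equation (3.16), should vanish:
new_apq = (c**2 - s**2) * apq + s * c * (app - aqq)
assert abs(new_apq) < 1e-12
```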

Equations (3.12) through (3.16) are now rewritten to minimize the round-off error and to make each new quantity look like the old one plus a small correction. And so,

$$a'_{pq} = a'_{qp} = 0 = \left(c^2 - s^2\right) a_{pq} + s c \left(a_{pp} - a_{qq}\right) \tag{3.26}$$

by definition. Hence:

$$a_{pp} - a_{qq} = \frac{s^2 - c^2}{s c}\, a_{pq} \tag{3.27}$$

Then we have

$$
a'_{pp} = c^2 a_{pp} + s^2 a_{qq} - 2 s c\, a_{pq}
= a_{pp} - s^2 \left(a_{pp} - a_{qq}\right) - 2 s c\, a_{pq}
= a_{pp} - \frac{s^2 \left(s^2 - c^2\right)}{s c}\, a_{pq} - 2 s c\, a_{pq}
= a_{pp} - \frac{s \left(s^2 + c^2\right)}{c}\, a_{pq}
= a_{pp} - t\, a_{pq} \tag{3.28}
$$

Similarly, from the same equation (3.26) we get:

$$a_{qq} - a_{pp} = \frac{c^2 - s^2}{s c}\, a_{pq} \tag{3.29}$$

and then

$$
a'_{qq} = s^2 a_{pp} + c^2 a_{qq} + 2 s c\, a_{pq}
= a_{qq} - s^2 \left(a_{qq} - a_{pp}\right) + 2 s c\, a_{pq}
= a_{qq} - \frac{s^2 \left(c^2 - s^2\right)}{s c}\, a_{pq} + 2 s c\, a_{pq}
= a_{qq} + \frac{s \left(s^2 + c^2\right)}{c}\, a_{pq}
= a_{qq} + t\, a_{pq} \tag{3.30}
$$

For $a'_{rp}$ and $a'_{rq}$ the computation is simpler:

$$a'_{rp} = c\, a_{rp} - s\, a_{rq} = a_{rp} + (c - 1)\, a_{rp} - s\, a_{rq} = a_{rp} - s \left(a_{rq} + \tau\, a_{rp}\right) \tag{3.31}$$

and
$$a'_{rq} = c\, a_{rq} + s\, a_{rp} = a_{rq} + (c - 1)\, a_{rq} + s\, a_{rp} = a_{rq} + s \left(a_{rp} - \tau\, a_{rq}\right) \tag{3.32}$$

where

$$\tau = \frac{s}{1 + c} \tag{3.33}$$

so that $c - 1 = -\left(1 - c^2\right)/(1 + c) = -s^2/(1 + c) = -s\,\tau$.
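Putting the pieces together, a minimal sketch of a cyclic Jacobi sweep built from formulas (3.23) through (3.33) might look as follows in NumPy (the function name, the fixed sweep count, and the threshold are my own choices; production code would monitor the off-diagonal norm for convergence):

```python
import numpy as np

def jacobi_eigenvalues(A, sweeps=10):
    """Cyclic Jacobi method for a real symmetric A, using the
    round-off-minimizing updates (3.28)-(3.33). Returns the eigenvalues
    (the diagonal after convergence) and the accumulated rotations V,
    whose columns approximate the eigenvectors."""
    A = A.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)                    # product of all rotations P_pq
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-30:
                    continue
                theta = (A[q, q] - A[p, p]) / (2.0 * A[p, q])             # (3.19)
                t = np.sign(theta) / (abs(theta) + np.hypot(theta, 1.0))  # (3.23)
                if theta == 0.0:
                    t = 1.0          # sign(0) == 0: pick the "+" root
                c = 1.0 / np.hypot(t, 1.0)                                # (3.24)
                s = c * t                                                 # (3.25)
                tau = s / (1.0 + c)                                       # (3.33)
                apq = A[p, q]
                A[p, p] -= t * apq                                        # (3.28)
                A[q, q] += t * apq                                        # (3.30)
                A[p, q] = A[q, p] = 0.0                                   # (3.26)
                for r in range(n):
                    if r == p or r == q:
                        continue
                    arp, arq = A[r, p], A[r, q]
                    A[r, p] = A[p, r] = arp - s * (arq + tau * arp)       # (3.31)
                    A[r, q] = A[q, r] = arq + s * (arp - tau * arq)       # (3.32)
                # Accumulate the rotation into the eigenvector matrix.
                V[:, [p, q]] = V[:, [p, q]] @ np.array([[c, s], [-s, c]])
    return np.diag(A), V

rng = np.random.default_rng(42)
A = rng.standard_normal((5, 5))
A = A + A.T
w, V = jacobi_eigenvalues(A)
assert np.allclose(np.sort(w), np.linalg.eigvalsh(A))
assert np.allclose(V @ np.diag(w) @ V.T, A)
```

For a 5-by-5 matrix a handful of sweeps suffices; the off-diagonal norm shrinks quadratically once the rotations become small.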

Zdzislaw Meglicki
2001-02-26