
# Gauss-Jordan Elimination

Consider the following equation:

$$
\begin{pmatrix}
A_{11} & A_{12} & \cdots & A_{1n} \\
A_{21} & A_{22} & \cdots & A_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \cdots & A_{nn}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}
\tag{5.1}
$$

The equation can be solved as follows:
1. Divide the first row by $A_{11}$, so that the new $A_{11}$ becomes 1.
2. Subtract from the second row $A_{2i}$ the first row multiplied by $A_{21}$, so that after that subtraction $A_{21}$ becomes 0.
3. Subtract from the third row $A_{3i}$ the first row multiplied by $A_{31}$, so that after that subtraction $A_{31}$ becomes 0.
4. $\vdots$
5. Subtract from the row $A_{ni}$ the first row multiplied by $A_{n1}$, so that after that subtraction $A_{n1}$ becomes 0. Now the first column of matrix $A$ is $(1, 0, \dots, 0)^T$.
6. Divide the second row by $A_{22}$, so that the new $A_{22}$ becomes 1.
7. Subtract from the first row $A_{1i}$ the second row multiplied by $A_{12}$, so that after that subtraction $A_{12}$ becomes 0.
8. Subtract from the third row $A_{3i}$ the second row multiplied by $A_{32}$, so that after that subtraction $A_{32}$ becomes 0.
9. Subtract from the fourth row $A_{4i}$ the second row multiplied by $A_{42}$, so that after that subtraction $A_{42}$ becomes 0.
10. $\vdots$
11. Subtract from the row $A_{ni}$ the second row multiplied by $A_{n2}$, so that after that subtraction $A_{n2}$ becomes 0. Now the first two columns of matrix $A$ are $(1, 0, \dots, 0)^T$ and $(0, 1, 0, \dots, 0)^T$.
12. Proceed in the same manner with the remaining rows and columns, until $A$ becomes the identity matrix.

At the same time matching operations must be performed on vector $b$.
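The elimination sweep described above can be sketched as follows. This is a minimal illustration in Python (the book's example program may use a different language); the matrix and vector are made up for the example, and pivoting, which is discussed later, is omitted:

```python
# A minimal sketch of the steps above: Gauss-Jordan elimination
# without pivoting.  Divide row k by A[k][k], then clear column k
# in every other row, mirroring each operation on b.

def gauss_jordan_solve(A, b):
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n):
        # divide row k by A[k][k], so that the new A[k][k] becomes 1
        pivot = A[k][k]
        A[k] = [a / pivot for a in A[k]]
        b[k] /= pivot
        # subtract row k times A[i][k] from every other row i,
        # so that A[i][k] becomes 0
        for i in range(n):
            if i != k:
                f = A[i][k]
                A[i] = [a - f * c for a, c in zip(A[i], A[k])]
                b[i] -= f * b[k]
    return b   # A is now the identity, so b holds the solution x

# Illustrative system (not from the text)
x = gauss_jordan_solve([[2.0, 1.0, 1.0],
                        [1.0, 3.0, 2.0],
                        [1.0, 0.0, 0.0]],
                       [4.0, 5.0, 6.0])
# x is approximately [6.0, 15.0, -23.0]
```

Note that the routine never uses the entries of column $k$ below and above the diagonal after they have been cleared, which is why the same sweep can carry extra columns along, as discussed below.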

When the process is finished we get the following new equation:

$$
\begin{pmatrix}
1 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}
\tag{5.2}
$$

The coefficients $b_i$ are quite different now, but the equation can be solved trivially. And so we get:

$$
x_1 = b_1, \quad x_2 = b_2, \quad \dots, \quad x_n = b_n \tag{5.3}
$$

In matrix notation the operations performed on $A$ amount to having found a matrix $A^{-1}$ such that

$$
A^{-1} A = 1 \tag{5.4}
$$

Now, if during the computation we were to perform all these operations not only on vector $b$ but also on another matrix, initialized to the identity matrix $1$, we would end up with $A^{-1}$ in that matrix when the whole procedure is over.
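This inverse computation can be sketched in Python (an illustrative helper, again without pivoting): the second matrix starts as the identity and receives every row operation applied to $A$.

```python
# Sketch: carry a second matrix, initialized to the identity ("1"),
# through the same row operations; it ends up holding A^{-1}.
# No pivoting here; the text adds that refinement below.

def gauss_jordan_inverse(A):
    n = len(A)
    A = [row[:] for row in A]
    I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        p = A[k][k]
        A[k] = [a / p for a in A[k]]
        I[k] = [a / p for a in I[k]]
        for i in range(n):
            if i != k:
                f = A[i][k]
                A[i] = [a - f * c for a, c in zip(A[i], A[k])]
                I[i] = [a - f * c for a, c in zip(I[i], I[k])]
    return I   # A has been reduced to the identity; I now holds A^{-1}

Ainv = gauss_jordan_inverse([[2.0, 1.0],
                             [1.0, 1.0]])
# Ainv == [[1.0, -1.0], [-1.0, 2.0]]
```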

When these computations are carried out, solutions can be found simultaneously to systems of equations with various right-hand sides (but always the same left-hand side $A$). For this reason the vectors $b$ are often aligned into a matrix, which doesn't have to be square, and that matrix is then also accompanied by a square matrix initialized to the identity, so as to yield $A^{-1}$ as well.
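The augmented arrangement just described can be sketched as one sweep over the block $[\,A \mid B \mid 1\,]$; this is a hypothetical Python helper (names and matrices are made up, and pivoting is again omitted):

```python
# Sketch: one elimination sweep over [ A | B | I ] solves A X = B for
# several right-hand sides at once and produces A^{-1} at the same time.
# B has n rows and m columns; m need not equal n.

def gauss_jordan_block(A, B):
    n, m = len(A), len(B[0])
    # augmented rows: [ A | B | I ]
    aug = [A[i][:] + B[i][:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i in range(n)]
    for k in range(n):
        p = aug[k][k]
        aug[k] = [a / p for a in aug[k]]
        for i in range(n):
            if i != k:
                f = aug[i][k]
                aug[i] = [a - f * c for a, c in zip(aug[i], aug[k])]
    X    = [row[n:n + m] for row in aug]   # solutions, one per column of B
    Ainv = [row[n + m:]  for row in aug]   # the inverse of A
    return X, Ainv

X, Ainv = gauss_jordan_block([[2.0, 1.0], [1.0, 1.0]],
                             [[3.0, 0.0], [2.0, 1.0]])
# X == [[1.0, -1.0], [1.0, 2.0]] and Ainv == [[1.0, -1.0], [-1.0, 2.0]]
```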

Although the above comprises the heart of the method, there is one complication that we have to incorporate. It may happen that a particular diagonal term $A_{kk}$ is zero or very small. In that case dividing what is left of the row entries $A_{ki}$ by $A_{kk}$ may lead to overflows. If this is the case, the solution is to interchange the rows or the columns so as to place the largest element of $A_{ki}$, which is called a *pivot*, in the $A_{kk}$ position, and to get the small one in the pivot's old location. This is an essential part of the Gauss-Jordan elimination technique, and the program must never be written without pivoting.
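A sketch of the same sweep with partial pivoting added (an illustrative Python example using row interchanges only; column interchanges would also permute the unknowns):

```python
# Sketch: before dividing by the diagonal term A[k][k], swap in the row
# whose k-th entry is largest in magnitude, so we never divide by zero
# or by a very small number.

def gauss_jordan_pivot(A, b):
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        # choose the pivot: the largest |A[i][k]| among rows k..n-1
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]   # interchange the rows ...
        b[k], b[p] = b[p], b[k]   # ... and the matching entries of b
        piv = A[k][k]
        A[k] = [a / piv for a in A[k]]
        b[k] /= piv
        for i in range(n):
            if i != k:
                f = A[i][k]
                A[i] = [a - f * c for a, c in zip(A[i], A[k])]
                b[i] -= f * b[k]
    return b

# Here A[0][0] == 0, so the unpivoted sweep would divide by zero;
# with pivoting the system is solved without trouble.
x = gauss_jordan_pivot([[0.0, 1.0],
                        [1.0, 1.0]],
                       [2.0, 3.0])
# x == [1.0, 2.0]
```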

Zdzislaw Meglicki
2001-02-26