20 Indirect methods for linear equations


 

TABLE OF CONTENTS

1. Introduction

2. Vector Norms

2.1 Distance between vectors in ℝⁿ

2.2 Convergent sequences

3. Matrix norms and distances

4. The Jacobi iterative method

4.1 The matrix form

5. The Gauss-Seidel method

5.1 The matrix form

6. Convergence of iterative techniques

6.1 Criterion for convergence

 

LEARNING OBJECTIVES

1. Various vector norms are introduced, together with the associated notion of distance between vectors.

2. The idea of convergence is defined and the limit of a sequence of vectors is introduced.

3. Matrix norms and the associated distances are then defined.

4. The Jacobi iterative method is described and also put in matrix form.

5. Next the Gauss-Seidel method is described and likewise put in matrix form.

6. Convergence of iterative techniques is discussed and a simple criterion for convergence is described.

 

Indirect methods for the solution of linear equations

 

1. Introduction

In this unit we describe iterative techniques for solving linear systems. The Jacobi and the Gauss-Seidel iterative methods are the two classic methods in this class. Iterative techniques are seldom used for solving linear systems of small dimension, since the time required to reach sufficient accuracy exceeds that required by direct techniques such as Gauss elimination. For large systems with a high percentage of zero entries, however, these techniques are often much more efficient. Systems of this type arise frequently in circuit analysis and in the numerical solution of boundary-value problems and partial differential equations.

 

2. Vector norms

For a vector x = (x₁, x₂, …, xₙ) in ℝⁿ, the two norms used most frequently in this unit are the l2 norm, ‖x‖2 = (x₁² + x₂² + ⋯ + xₙ²)^(1/2), and the l∞ norm, ‖x‖∞ = max |xᵢ|, the maximum being taken over i = 1, 2, …, n. The l2 norm is what we called the Euclidean norm of the vector x. It is not difficult to show that both of these norms satisfy the properties required of a norm that we described in the unit on matrices.
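
As a minimal illustration (NumPy is an assumption of these sketches, and the vector is hypothetical), both norms can be computed directly or through numpy.linalg.norm:

import numpy as np

x = np.array([1.0, -2.0, 2.0])               # hypothetical vector in R^3
l2_norm = np.sqrt(np.sum(x**2))              # l2 (Euclidean) norm: sqrt(1 + 4 + 4) = 3
linf_norm = np.max(np.abs(x))                # l-infinity norm: max |x_i| = 2
print(l2_norm, np.linalg.norm(x, 2))         # both print 3.0
print(linf_norm, np.linalg.norm(x, np.inf))  # both print 2.0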

 

Example

 

2.1 Distance between vectors in ℝⁿ

The distance between two vectors is defined as the norm of the difference of the vectors. Thus, for x and y in ℝⁿ, the l∞ distance is ‖x − y‖∞ = max |xᵢ − yᵢ| (the maximum over i = 1, 2, …, n), and the l2 distance is ‖x − y‖2 = ((x₁ − y₁)² + ⋯ + (xₙ − yₙ)²)^(1/2).
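
A small hedged sketch (the vectors are hypothetical): the l∞ distance is the quantity we will later use to decide when successive iterates are close enough to stop:

import numpy as np

x = np.array([2.0, 1.0, 0.0])
y = np.array([1.9, 0.8, 0.1])
dist_inf = np.max(np.abs(x - y))         # l-infinity distance: max(0.1, 0.2, 0.1) = 0.2
dist_2 = np.sqrt(np.sum((x - y)**2))     # l2 distance
print(dist_inf, dist_2)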

 

Example

2.2 Convergent sequences

 

Theorem-1

 

Proof

The converse can be proved in a similar manner.

 

Example

 

Theorem-2

For each x ∈ ℝⁿ, ‖x‖∞ ≤ ‖x‖2 ≤ √n ‖x‖∞.

 

Proof

 

3. Matrix norms and distances

 

Theorem-3

 

Theorem-4

 

Proof

 

Example

 

Theorem-5

 

Proof of (ii)

 

Example-1

 

Example-2

 

4. The Jacobi iterative method
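
A brief description, since the derivation is not reproduced in this text: starting from an initial guess x^(0), the Jacobi method solves the i-th equation for the i-th unknown using only the previous iterate, xᵢ^(k+1) = (bᵢ − Σ aᵢⱼ xⱼ^(k)) / aᵢᵢ, the sum being taken over j ≠ i. The following minimal Python sketch carries out these sweeps; the 3 × 3 system, tolerance and iteration limit are illustrative assumptions, not taken from the example below.

import numpy as np

# Hypothetical strictly diagonally dominant system A x = b
A = np.array([[10.0, -1.0,  2.0],
              [-1.0, 11.0, -1.0],
              [ 2.0, -1.0, 10.0]])
b = np.array([6.0, 25.0, -11.0])

x = np.zeros(3)                              # initial guess x^(0) = 0
for k in range(25):                          # at most 25 Jacobi sweeps
    x_new = np.empty_like(x)
    for i in range(3):
        s = sum(A[i, j] * x[j] for j in range(3) if j != i)
        x_new[i] = (b[i] - s) / A[i, i]      # solve equation i for unknown i
    if np.max(np.abs(x_new - x)) < 1e-8:     # stop when successive iterates agree
        x = x_new
        break
    x = x_new
print(x)                                     # close to np.linalg.solve(A, b)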

 

Example

Consider the system of three equations

 

4.1 The matrix form
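
Since the derivation above is not reproduced here, the sketch below assumes the standard splitting A = D − L − U, where D is the diagonal part of A and −L and −U are its strictly lower and strictly upper triangular parts. Under that convention the Jacobi method takes the matrix form x^(k+1) = D⁻¹(L + U) x^(k) + D⁻¹b, often written x^(k+1) = Tⱼ x^(k) + cⱼ. A minimal Python sketch on the same hypothetical system as before:

import numpy as np

A = np.array([[10.0, -1.0,  2.0],
              [-1.0, 11.0, -1.0],
              [ 2.0, -1.0, 10.0]])
b = np.array([6.0, 25.0, -11.0])

D = np.diag(np.diag(A))              # diagonal part of A
L = -np.tril(A, -1)                  # strictly lower part (sign convention A = D - L - U)
U = -np.triu(A, 1)                   # strictly upper part
T_j = np.linalg.solve(D, L + U)      # Jacobi iteration matrix D^(-1)(L + U)
c_j = np.linalg.solve(D, b)          # D^(-1) b

x = np.zeros(3)
for k in range(25):
    x = T_j @ x + c_j                # one Jacobi step in matrix form
print(x)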

 

Example

The system of equations

 

5. The Gauss-Seidel Method

In the Jacobi method every component of the new iterate is computed from the old iterate alone. A possible improvement is to use each newly computed component as soon as it is available: when computing the i-th component of x^(k+1), the components already updated in the current sweep are used in place of the corresponding components of x^(k). This modification is called the Gauss-Seidel iterative method.
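
A minimal Python sketch of the Gauss-Seidel sweep (the 3 × 3 system and the stopping tolerance are hypothetical, not from the worked example); note that within a sweep the entries of x updated earlier are reused immediately:

import numpy as np

A = np.array([[10.0, -1.0,  2.0],
              [-1.0, 11.0, -1.0],
              [ 2.0, -1.0, 10.0]])
b = np.array([6.0, 25.0, -11.0])

x = np.zeros(3)
for k in range(25):
    x_old = x.copy()
    for i in range(3):
        # x[0..i-1] already hold the new (k+1)-th iterate values
        s = sum(A[i, j] * x[j] for j in range(3) if j != i)
        x[i] = (b[i] - s) / A[i, i]
    if np.max(np.abs(x - x_old)) < 1e-8:     # stop when successive iterates agree
        break
print(x)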

 

Example

Let us look at the earlier example once again:

For more accuracy than this, we need to calculate x^(4), x^(5), … as well.

 

5.1 The matrix form

Let us now put the Gauss-Seidel method also in matrix form. For this purpose we multiply equation (20) by aᵢᵢ and take all the (k + 1)-th iterate terms to the left-hand side:

Define the matrices D, L and U as we did for the Jacobi case. Then the matrix form of the Gauss-Seidel method is x^(k+1) = (D − L)⁻¹U x^(k) + (D − L)⁻¹b.
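
A minimal Python sketch of this matrix form, again on the hypothetical system used earlier (the splitting follows the assumed convention A = D − L − U):

import numpy as np

A = np.array([[10.0, -1.0,  2.0],
              [-1.0, 11.0, -1.0],
              [ 2.0, -1.0, 10.0]])
b = np.array([6.0, 25.0, -11.0])

D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)
T_gs = np.linalg.solve(D - L, U)     # Gauss-Seidel iteration matrix (D - L)^(-1) U
c_gs = np.linalg.solve(D - L, b)     # (D - L)^(-1) b

x = np.zeros(3)
for k in range(25):
    x = T_gs @ x + c_gs              # one Gauss-Seidel step in matrix form
print(x)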

 

6.  Convergence of iterative techniques

We will now study certain general properties of iterative techniques for the solution of linear systems, in particular the question of convergence of the sequence of iterates. The question of convergence is crucial for all problems where iterative techniques are employed. There is always the possibility that a sequence of iterates diverges, or converges so slowly that it is of no practical use.

First of all we define a convergent matrix. An n × n matrix A is said to be convergent if every entry of Aᵏ tends to zero as k → ∞, that is, if lim (k → ∞) (Aᵏ)ᵢⱼ = 0 for each i = 1, 2, …, n and j = 1, 2, …, n.
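
As a small hedged illustration (the 2 × 2 matrix is hypothetical), the powers of a matrix with spectral radius less than 1 shrink towards the zero matrix, which is precisely the defining property above:

import numpy as np

A = np.array([[0.5, 0.25],
              [0.0, 0.5]])                   # hypothetical matrix with rho(A) = 0.5 < 1
rho = np.max(np.abs(np.linalg.eigvals(A)))   # spectral radius
P = np.linalg.matrix_power(A, 50)            # A^50
print(rho, np.max(np.abs(P)))                # 0.5, and the entries of A^50 are essentially 0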

 

Theorem-6

For a convergent matrix, the following statements are equivalent (each follows from the others):

Now we come to the general iteration method, equation (21).

 

Theorem-7

First of all we have the following result: if the spectral radius of T satisfies ρ(T) < 1, then (I − T)⁻¹ exists and satisfies the relation (I − T)⁻¹ = I + T + T² + ⋯ = Σ (j = 0 to ∞) Tʲ.

 

Proof

If Tx = λx, then (I − T)x = (1 − λ)x. Thus if λ is an eigenvalue of T, then 1 − λ is an eigenvalue of I − T. But

|λ| ≤ ρ(T) < 1,

which implies that λ = 1 is not an eigenvalue of T, and hence 0 cannot be an eigenvalue of I − T. Therefore (I − T)⁻¹ exists.                                                                          QED

 

Theorem-8

 

Proof

We omit the proof of the converse of the theorem.

 

Theorem-9

 

Jacobi and Gauss-Seidel techniques

Both the Jacobi and the Gauss-Seidel techniques can be written in the form x^(k+1) = T x^(k) + c [see equations (16), (19), (21) and (22)]. If the sequence of iterates converges, its limit x satisfies x = T x + c, or equivalently

Ax = b.

We can demonstrate the result for the Gauss-Seidel method in the same fashion.
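
As a hedged numerical check, using the same hypothetical system as in the earlier sketches: the exact solution of Ax = b is a fixed point of both the Jacobi and the Gauss-Seidel iteration forms x = T x + c:

import numpy as np

A = np.array([[10.0, -1.0,  2.0],
              [-1.0, 11.0, -1.0],
              [ 2.0, -1.0, 10.0]])
b = np.array([6.0, 25.0, -11.0])
x_star = np.linalg.solve(A, b)                  # exact solution of A x = b

D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)
T_j = np.linalg.solve(D, L + U)
c_j = np.linalg.solve(D, b)
T_gs = np.linalg.solve(D - L, U)
c_gs = np.linalg.solve(D - L, b)

# x_star satisfies x = T x + c for both splittings
print(np.max(np.abs(T_j @ x_star + c_j - x_star)))    # essentially zero
print(np.max(np.abs(T_gs @ x_star + c_gs - x_star)))  # essentially zero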

 

6.1 Criterion for convergence

What we need is a practical, easily checked criterion for the convergence, and the rapidity of convergence, of the two techniques.

To provide such a criterion, we first introduce the notion of a strictly diagonally dominant matrix. An n × n matrix A is said to be strictly diagonally dominant if, for each i = 1, 2, …, n, |aᵢᵢ| > Σ |aᵢⱼ|, the sum being taken over j ≠ i.

If equality is also allowed in the above condition, then the matrix is said to be diagonally dominant. Strict diagonal dominance means that in each row the magnitude of the diagonal element is greater than the sum of the magnitudes of all the other elements in that row.
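
A small sketch of such a check (the function name is ours, chosen for illustration): it returns True exactly when every row satisfies the strict inequality above:

import numpy as np

def is_strictly_diagonally_dominant(A):
    A = np.asarray(A, dtype=float)
    for i in range(A.shape[0]):
        off_diag = np.sum(np.abs(A[i])) - abs(A[i, i])   # sum of |a_ij| over j != i
        if abs(A[i, i]) <= off_diag:
            return False
    return True

# The hypothetical matrix used in the earlier sketches is strictly diagonally dominant:
A = np.array([[10.0, -1.0,  2.0],
              [-1.0, 11.0, -1.0],
              [ 2.0, -1.0, 10.0]])
print(is_strictly_diagonally_dominant(A))    # True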

 

Theorem-10

If A is strictly diagonally dominant, then for any choice of the initial approximation x^(0) both the Jacobi and the Gauss-Seidel methods generate sequences {x^(k)} that converge to the unique solution of Ax = b.

 

SUMMARY

  • Before taking up the iterative methods of solving linear systems, we introduce various vector norms and the associated notion of vector distances.
  • Next we introduce the idea of convergence for vectors and the limit of a sequence of vectors.
  • Then we define matrix norms and distances in the same vein.
  • After these preliminaries, we describe the Jacobi iterative method for solving a linear system of equations. The equations are also put in a matrix form, which is more useful for theoretical analysis.
  • Next we describe the Gauss-Seidel method, which often, but not always, converges more rapidly than the Jacobi method. Here too the equations are put in matrix form.
  • Finally we briefly discuss the convergence of these iterative techniques and describe a simple criterion for convergence.