4 Iterative Methods to solve equation f (x) = 0: Method of False Position

Savita R. Gandhi

epgp books

 

3.1  Introduction 

 

In our previous module, we learned about the nature of iterative methods and their classification (bracketing or open) for finding roots of an equation f(x) = 0. We began with the Bisection method. It was observed that though the Bisection method is a very simple, steady and reliable method, it suffers from the drawback of being slow. It is therefore advisable to seek a modification that makes it faster. The regula falsi method, or method of false position, is an attempt in that direction. It too is a closed (bracketing) method. As in the Bisection method, at the beginning of the iterative process two end points a and b are found such that f(a)·f(b) < 0, which ensures that at least one root lies between a and b.

 

It was also pointed out, among the disadvantages of the Bisection method, that it does not take into account the magnitudes of the function values at the end points when generating the next estimate; it considers only their signs. This is what makes it slow.

 

Thus, in the method of false position, the function values at the end points a and b are taken into account to generate the estimate for the next iteration.

 

The method of false position differs from the Bisection method only in the way c is found. In the Bisection method, the interval was subdivided into two equal parts: c was the midpoint, given by (a + b)/2.

 

No account was taken of the function values at a and b. Since the function value is zero at the root, the end point where the function value is smaller in magnitude is expected to be nearer to the root. This can also be visualized from the graph below.

 

3.2 Iterative Process 

 

So, in the regula falsi method, to take the function values at a and b into consideration, a straight line is drawn joining (a, f(a)) and (b, f(b)).

 

The point where it cuts the X axis is the new estimate of the root. Mathematically, the formula for c can be derived as follows:

Equation of the straight line joining (a, f(a)) and (b, f(b)):

y - f(a) = [(f(b) - f(a)) / (b - a)] · (x - a)

Setting y = 0 and solving for x gives the point c where this line cuts the X axis:

c = a - f(a)·(b - a) / (f(b) - f(a)) = (a·f(b) - b·f(a)) / (f(b) - f(a))

If f(c) = 0, c is the root; otherwise, if f(a)·f(c) > 0, c takes the role of a, else c takes the role of b.
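As a quick check of the formula just derived, the chord intersection can be computed directly (a minimal sketch; the function name is ours):

```c
#include <math.h>

/* Point where the chord through (a, f(a)) and (b, f(b)) cuts the X axis:
   c = (a*f(b) - b*f(a)) / (f(b) - f(a)). */
double chord_root(double a, double fa, double b, double fb)
{
    return (a * fb - b * fa) / (fb - fa);
}
```

For Example 1 below (a = 0, f(a) = -2, b = 1, f(b) = 1.459698) this gives c = 0.578085, the first entry of the iteration table.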

 

3.3  Graphical illustration of the Iterative Process 

 

Case - 1: [figure of f(x) illustrating the iterative process for this case]

3.4  Different Stopping Criterion 

 

Now that we have observed the iterative process, let us focus on different stopping criteria. The sequence of ck's approaches the root. That is why, in the Bisection algorithm, the iterative process was stopped when

 

|ck - ck-1| < ε … (3)

 

This is called the x-tolerance criterion (X-TOL). The most recent ck is taken as the estimate of the root. There is another stopping criterion called function tolerance (F-TOL). At the root, the function value is zero, so if the estimate is quite near the root then the function value will be small. Hence, one would often like to stop when

 

|f(ck)| < δ … (4)

 

where δ is some small positive constant, like ε.

 

You may like to ask: which one is better, or on what basis should one choose a particular criterion? Well, it depends on the method, the nature of the function near the root, the objective of finding the root, and so on. Often a combination of different criteria is applied.

 

If the slope of the curve is small near the root, the curve is almost horizontal, and then function tolerance may not be an appropriate stopping criterion: because the curve is rising slowly, function values in the neighborhood of the root will be small, so one may stop even though the estimate c is not sufficiently near the root. This is visible from the following graph:

On the other hand, if the slope is expected to be high near the root, as in the case of an almost vertical graph, then function tolerance ensures that the estimate is a good approximation to the root.

In simple words, if f′(x) is large near the root, F-TOL is the better stopping criterion.
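In code, such a combined test might look like the following (a sketch; the function name and parameter names are ours, not from the module):

```c
#include <math.h>

/* Combined stopping test: stop only when BOTH the step size (X-TOL)
   and the residual (F-TOL) are small.  Requiring both guards against
   the flat-slope and steep-slope failure modes discussed above. */
int should_stop(double c_new, double c_old, double fc,
                double xtol, double ftol)
{
    return fabs(c_new - c_old) < xtol && fabs(fc) < ftol;
}
```

Using AND rather than OR here is the conservative choice: with OR, either criterion alone could stop the iteration prematurely in its bad case.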

3.5  Algorithm for Regula Falsi Method 

 

 

3.6  Examples 

Let us solve examples to find the root of an equation using the method of false position. For comparison, we shall find the roots of the same equations used to illustrate the Bisection Method.

 

Example 1:

 

f(x) = 3x - cos x - 1

 

Take a = 0, f (a) = -2.0, c0 = a

 

b = 1, f (b) = 1.459698

f(a)·f(b) < 0

 

k  a         f(a)       b         f(b)      ck        f(ck)     |ck - ck-1|
1 0.000000 -2.000000 1.000000 1.459698 0.578085 -0.103255 0.578085
2 0.578085 -0.103255 1.000000 1.459698 0.605959 -0.004081 0.027873
3 0.605959 -0.004081 1.000000 1.459698 0.607057 -0.000159 0.001099
4 0.607057 -0.000159 1.000000 1.459698 0.607100 -0.000006 0.000043
5 0.607100 -0.000006 1.000000 1.459698 0.607102 -0.000000 0.000002
6 0.607102 -0.000000 1.000000 1.459698 0.607102 -0.000000 0.000000

 

Example 2:

 

f(x) = x^3 + x - 1

 

Take a = 0, f(a) = -1, c0 = a = 0

 

b =1, f (b) =1

 

k  a         f(a)       b         f(b)      ck        f(ck)     |ck - ck-1|
1 0.000000 -1.000000 1.000000 1.000000 0.500000 -0.375000 0.500000
2 0.500000 -0.375000 1.000000 1.000000 0.636364 -0.105935 0.136364
3 0.636364 -0.105935 1.000000 1.000000 0.671196 -0.026428 0.034832
4 0.671196 -0.026428 1.000000 1.000000 0.679662 -0.006376 0.008466
5 0.679662 -0.006376 1.000000 1.000000 0.681691 -0.001525 0.002029
6 0.681691 -0.001525 1.000000 1.000000 0.682176 -0.000364 0.000485
7 0.682176 -0.000364 1.000000 1.000000 0.682292 -0.000087 0.000116
8 0.682292 -0.000087 1.000000 1.000000 0.682319 -0.000021 0.000028
9 0.682319 -0.000021 1.000000 1.000000 0.682326 -0.000005 0.000007
10 0.682326 -0.000005 1.000000 1.000000 0.682327 -0.000001 0.000002
11 0.682327 -0.000001 1.000000 1.000000 0.682328 -0.000000 0.000000

 

Programming Code in C:

The above output was obtained by running the following C program.

 

// Regula falsi method

#include <stdio.h>
#include <math.h>

#define fnx(x) (pow((x), 3) + (x) - 1)   /* f(x) = x^3 + x - 1 (Example 2) */

int main(void)
{
    float a, b, c, fa, fb, fc, error, co;
    int i = 0;
    float err = pow(10.0, -6.0) / 2;     /* X-TOL: 0.5e-6 */

    printf("\nEnter initial value of a = ");
    scanf("%f", &a);
    fa = fnx(a);
    printf("\n value of f(a) = %f", fa);

    printf("\nEnter initial value of b = ");
    scanf("%f", &b);
    fb = fnx(b);
    printf("\n value of f(b) = %f", fb);

    co = a;
    printf("\nNumber\t A\t F(A)\t B\t F(B)\t C\t F(C)\t Error");

    do
    {
        i++;
        c = (a * fb - b * fa) / (fb - fa);
        fc = fnx(c);
        if (fc == 0)
        {
            printf("\ndesired root is %f", c);
            break;
        }
        error = fabs(co - c);
        co = c;
        printf("\n\n%d\t %f\t %f\t %f\t %f\t %f\t %f\t %f",
               i, a, fa, b, fb, c, fc, error);
        if (fa * fc > 0)
        {
            a = c;      /* root lies in [c, b]: c replaces a */
            fa = fc;
        }
        else
        {
            b = c;      /* root lies in [a, c]: c replaces b */
            fb = fc;
        }
    } while (error > err);

    printf("\n\nThe root of the equation = %f", c);
    return 0;
}

 

3.7 Observations 

 

1) From the tables, it is clear that the same desired accuracy is obtained in fewer iterations. The question is: is this the case only for these two specific equations, or can we in general expect the False Position Method to perform better than the Bisection Method? Well, in general the method of false position is expected to converge to the root faster than the Bisection Method. That is to be expected, since to generate the next approximation it takes account of the function values through the formula

 

ck = (a·f(b) - b·f(a)) / (f(b) - f(a))

and ck is not just the midpoint of the interval: ck is the point where the straight line joining the two points (a, f(a)) and (b, f(b)) on the curve intersects the X axis. One very important measure of speed of convergence is the "Order of Convergence". We shall discuss the order of convergence in one of the next modules. The convergence of the Method of False Position is generally linear, because one end point tends to remain fixed.

 

It can be seen from Figures [2, 3, 4, 5, 6], as well as from the computational tables above, that one end point becomes fixed. As long as the curve does not change its shape from concave to convex (or vice versa) in the bracketing interval [a, b], the ck's in this method will always approach the root from one side. As a result, the ck's successively replace one of the two end points a or b, as the case may be, while the other remains fixed. For example, in Example 1 as well as Example 2, b remains constant from the very beginning. Figures for the other curve shapes are also presented below for completeness.

Case - 3: 

[a remains fixed throughout the process; only b changes, and the sequence of ck's (b's) approaches the root]

Case - 4: 

[b remains fixed, a keeps on changing, and the sequence of ck's (a's) approaches the root]

 

Thus, as the ck's approach the root from one side only, the interval length no longer approaches zero; it will always be greater than either |a - root| or |b - root|, depending on which end point is fixed.

e.g. if the point b is fixed, the bracketing subinterval length will always be > |b - root|.

Thus, the number of iterations needed to achieve a desired accuracy can no longer be predetermined.

2) There could be cases, though rare, where the Method of False Position may get slower as compared to the Bisection Method. For example, look at the following graph of the equation f(x) = x^10 - 1 = 0.

 

[ False position method is slower than Bisection method ]

 

In this example, the sequence of approximations converges slowly: ck - ck-1 becomes small even though ck is not yet sufficiently near the root. The reason is that the slope of the curve is large at, and in the neighborhood of, the root. That is why many solvers apply the F-TOL criterion in conjunction with X-TOL when finding roots by the Method of False Position. The following solution tables compare the Bisection Method and the Method of False Position on this specific example, which confirms that for this example the Bisection Method is better.

Table:

f(x) = x^10 - 1

a = 0, f(a) = -1.000000

b = 1.3, f(b) = 12.785842

Comparison Table 

k    Bisection Method         False Position Method
     ck        f(ck)          ck        f(ck)
1    0.650000  -0.986537      0.094300  -1.000000
2    0.975000  -0.223671      0.181759  -1.000000
3    1.137500   2.626718      0.262874  -0.999998
…
18   0.999999  -0.000013      0.928885  -0.521790
19   1.000001   0.000012      0.943436  -0.441369
…
59                            0.999999  -0.000014

 

Advantages:

 

1) It is faster than the Bisection Method.

2) It is also simple.

3) It guarantees convergence.

4) Only one new function evaluation per iteration is required.

 

Disadvantages:

 

1) It is not self-starting: one needs two initial guesses a and b such that f(a)·f(b) < 0.

2) Though faster than the Bisection Method, it is still regarded as slow.

3) In rare cases, it may even become slower than the Bisection Method.

 

—————–o————-o—————-o——————o—————–o————o

You can view a video on Iterative Methods to solve equation f(x) = 0: Method of False Position.
Suggested Reading:
1. Numerical Methods for Engineers by Steven C Chapra & Raymond P Canale, Fifth Edition, Tata McGraw Hill Publication, Special Indian Edition.
2. A Friendly Introduction to Numerical Analysis by Brian Bradie, Pearson Education.
3. Numerical Mathematics and Computing by Ward Cheney & David Kincaid, Fifth Edition, Cengage Learning.
4. Computer Oriented Numerical Methods by Dr. N Datta, Vikas Publication.
5. Numerical Methods with Programs in C by T Veerarajan & T Ramachandran, Second Edition, Tata McGraw Hill Publication.
6. Numerical Methods by V. Rajaraman, Third Edition, Prentice-Hall India Pvt. Ltd.
7. Numerical Methods with C++ Programming by RM Somasundaram & RM Chandrasekaran, Prentice-Hall India Pvt. Ltd.
8. Applied Numerical Analysis by C F Gerald & P O Wheatley, Seventh Edition, Pearson Education Asia, New Delhi.
9. Numerical Methods by Dr. V. N. Vedamurthy & Dr. N.Ch. S.N. Iyengar, Vikas Publication.
10. Numerical Analysis by Richard L. Burden, J. Douglas Faires, Cengage Publication.
11. Numerical Methods with Programs in BASIC, FORTRAN, Pascal and C++ by S. Balachandra Rao & C. K. Shantha, Revised Edition, Universities Press.
12. A Textbook of Computer Based Numerical and Statistical Techniques by A. K. Jaiswal & Anju Khandelwal, New Age International (P) Ltd, Publishers.
13. http://www.math.niu.edu/~dattab/MATH435.2013/
14. https://ece.uwaterloo.ca/~dwharder/NumericalAnalysis/10RootFinding/falseposition/