One-Variable Equation
In this section, we deal with equations in a single variable, most often denoted by $x$, whether they involve polynomial, non-linear, or mixed functions.
The Bisection Method
The bisection method is arguably the most basic and elementary approach to obtaining solutions. It is based on the intermediate value theorem discussed in the previous section on analysis, specifically on limits and continuity.
Let us revisit the intermediate value theorem.
Intermediate Value Theorem
Given a continuous function $f$ on $[a, b]$, where $f(a) \neq f(b)$, the intermediate value theorem warrants the existence of at least one number $c$ in $(a, b)$ such that $f(c) = k$, where $k$ is any value between $f(a)$ and $f(b)$.
Here, we let $k = 0$. Then, the intermediate value theorem becomes: if $f(a)$ and $f(b)$ have opposite signs, there exists at least one $c \in (a, b)$ such that $f(c) = 0$. Let us illustrate the iteration algorithm of the bisection method with the second motivating example.
1. Identify two points $a_1$ and $b_1$ such that $f(a_1) < 0$ and $f(b_1) > 0$.
Arbitrarily pick candidate points and check the sign of $f$ at each; label a point with a negative function value $a_1$ and a point with a positive function value $b_1$.
2. Set the first midpoint $m_1 = \frac{a_1 + b_1}{2}$ and compute $f(m_1)$.
Replace whichever endpoint has a function value of the same sign as $f(m_1)$ with $m_1$, and keep the other, so that the new interval $[a_2, b_2]$ still brackets a root.
3. Continue the iteration with $m_n = \frac{a_n + b_n}{2}$:
- if $f(m_n) = 0$, then we are done and conclude that the root is $c = m_n$;
- if $f(m_n) < 0$, then let $a_{n+1} = m_n$ and $b_{n+1} = b_n$, and continue the iteration;
- if $f(m_n) > 0$, then let $b_{n+1} = m_n$ and $a_{n+1} = a_n$, and continue the iteration.
4. Terminate when one of the stopping criteria is met:
- (a) $|m_n - m_{n-1}| < \epsilon$;
- (b) $\dfrac{|m_n - m_{n-1}|}{|m_n|} < \epsilon$, where $m_n \neq 0$;
- (c) $|f(m_n)| < \epsilon$,
where $\epsilon$ is some small tolerance we set in advance.
Stopping Criterion
There are no right or wrong criteria for stopping. Rather, the choice depends on how much we know about the behavior of $f$ and of its solution. However, if little is known about $f$, then we are better off taking the safest, most conservative approach, (b) $\frac{|m_n - m_{n-1}|}{|m_n|} < \epsilon$, where $m_n \neq 0$, since this criterion is closest to testing relative error. [7]
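To make the procedure concrete, here is a minimal Python sketch of the bisection iteration using the relative stopping criterion (b); the function name `bisect`, the tolerance `tol`, the iteration cap `max_iter`, and their default values are illustrative choices rather than anything specified in the text.

```python
def bisect(f, a, b, tol=1e-6, max_iter=100):
    """Bisection sketch: assumes f is continuous and f(a), f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    m_prev = a
    for n in range(1, max_iter + 1):
        m = (a + b) / 2                    # midpoint of the current bracket
        fm = f(m)
        if fm == 0:                        # landed exactly on a root
            return m, n
        # relative stopping criterion (b): |m_n - m_{n-1}| / |m_n| < tol
        if m != 0 and abs(m - m_prev) / abs(m) < tol:
            return m, n
        if fa * fm < 0:                    # sign change in [a, m]: root lies there
            b = m
        else:                              # otherwise the root lies in [m, b]
            a, fa = m, fm
        m_prev = m
    return m, max_iter                     # best estimate if tolerance never met
```

For instance, `bisect(lambda x: x**3 - 2, 1, 2)` (an arbitrary test function, not the text's motivating example) returns an approximation of $\sqrt[3]{2}$ together with the number of iterations used.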
Let us have a look at how the first ten iterations behave. Errors in the last column were defined as $|m_n - p|$, where $p$, a reasonably accurate estimate of the root, was assumed to be the solution of $f(x) = 0$.
We observe a decreasing trend in the errors, yet note that it is only a trend, not a monotone decrease. In fact, the errors increased from the eighth through the tenth iterations, though minimally.
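This behavior is consistent with the standard error bound for bisection (a textbook result, stated here for context): only the width of the bracketing interval is guaranteed to halve at each step, so the midpoint error $|m_n - p|$ need not decrease monotonically even though

$$|m_n - p| \le \frac{b_1 - a_1}{2^n}, \qquad n \ge 1.$$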
Newton’s Method
Now we turn to the famous Newton's (or Newton-Raphson) method, which is based on Taylor's theorem. Owing to its utility and power, this technique has naturally become well known.
Let us first have a look at Taylor’s theorem.
Theorem. (Taylor’s Theorem [7])
Suppose $f$ is an $n$-times continuously differentiable function on $[a, b]$, denoted $f \in C^n[a, b]$; $f^{(n+1)}$ is defined on $[a, b]$; and $x_0 \in [a, b]$. Then, for all $x \in [a, b]$, there exists a number $\xi(x)$ between $x_0$ and $x$ with

$$f(x) = P_n(x) + R_n(x),$$

where

$$P_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(x_0)}{k!} (x - x_0)^k$$

and

$$R_n(x) = \frac{f^{(n+1)}(\xi(x))}{(n+1)!} (x - x_0)^{n+1}.$$
In Taylor's theorem, $f(x)$ is decomposed into two different functions $P_n(x)$ and $R_n(x)$, which denote the $n$-th Taylor polynomial and the remainder term (or truncation error), respectively. By taking $n \to \infty$, we obtain the famous Taylor series, and the special case where $x_0 = 0$ is called a Maclaurin series.
Note that Taylor's theorem only ensures the existence of some number $\xi(x)$ in the interval between $x_0$ and $x$, but does not guarantee its explicit determination. Still, the warranted existence of $\xi(x)$ allows us to conduct an error analysis and determine an error bound.
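As a quick illustration (the particular function, $f(x) = \cos x$, is our choice rather than one from the text), take $x_0 = 0$ and $n = 2$. Taylor's theorem gives, for every $x$, some $\xi(x)$ between $0$ and $x$ with

$$\cos x = \underbrace{1 - \frac{x^2}{2}}_{P_2(x)} + \underbrace{\frac{\sin \xi(x)}{6}\, x^3}_{R_2(x)},$$

so even though $\xi(x)$ is unknown, on $|x| \le 0.1$ the truncation error is bounded by $|R_2(x)| \le \frac{(0.1)^3}{6} \approx 1.7 \times 10^{-4}$.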
To derive Newton's method, let us consider the Taylor expansion of $f$ expanded about $x_0$ and evaluated at $x$:

$$f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(\xi(x))}{2}(x - x_0)^2.$$

Let $f(x) = 0$, i.e., $x$ is the solution of our interest. Then,

$$0 = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(\xi(x))}{2}(x - x_0)^2.$$

It is our assumption that $|x - x_0|$ is small. Therefore, higher powers of $(x - x_0)$ would be even smaller, so the remainder term can be neglected. Hence,

$$0 \approx f(x_0) + f'(x_0)(x - x_0).$$

Solving for $x$, we have

$$x \approx x_0 - \frac{f(x_0)}{f'(x_0)}.$$

Generalizing for the $n$-th iteration,

$$x_n = x_{n-1} - \frac{f(x_{n-1})}{f'(x_{n-1})}, \qquad n \ge 1.$$
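This iteration translates into a short routine. The following is a minimal Python sketch, where the name `newton`, the derivative argument `df`, and the stopping rule based on successive iterates are our own illustrative choices.

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton's method sketch: df is the derivative f'; x0 is the initial guess."""
    x = x0
    for n in range(1, max_iter + 1):
        fx, dfx = f(x), df(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x) vanished; Newton step is undefined")
        x_new = x - fx / dfx          # x_n = x_{n-1} - f(x_{n-1}) / f'(x_{n-1})
        if abs(x_new - x) < tol:      # successive iterates agree to tolerance
            return x_new, n
        x = x_new
    return x, max_iter                # best estimate if tolerance never met
```

For example, `newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)` (again an arbitrary test function, not the one tabulated below) converges to $\sqrt{2}$ in a handful of iterations.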
Let us illustrate with an example. The following is the tabulated result of the first four iterations. As in the case of the previous example, errors were defined as $|x_n - p|$, where $p$ is a reasonably accurate estimate of the true root.
Note how rapidly the error decreases within only two iterations. One might suspect this is a coincidence, so let us run the iteration once more.
Again, note the rapid reduction in errors within only five iterations. In fact, this efficiency of approximation is the basis of the fame and wide application of Newton's method. Compared with the bisection method started from the same initial value, which produced merely a fair estimate at best after ten iterations, Newton's method yielded a far more accurate estimate in only two iterations.
So, can we say Newton's method dominates the bisection method? The answer is no. While Newton's method is substantially powerful, with many strengths, it is not universally applicable. To begin with, its main limitation is that $f'(x)$ has to be defined, i.e., the function of interest has to be differentiable, yet this is not always the case for a continuous function $f$.