The need to solve nonlinear equations that cannot be solved analytically has led to the development of numerical methods. One of the most commonly used numerical methods is called Newton's method or the Newton-Raphson method. The idea of Newton's method is relatively simple. Suppose you have a nonlinear equation of the form
f(x) = 0

where f is a differentiable function. Newton's method starts with an initial guess x0 for the root and uses the tangent line to f at x = x0 to approximate f. The equation for the tangent line appears below.
y = f(x0) + f'(x0)(x - x0)

If the tangent line is a good approximation to f, then the x-intercept of the tangent line should be a good approximation to the root. Call the value of x where the tangent line intersects the x-axis x1. Setting y = 0 in the equation above and solving for x gives the following formula.

x1 = x0 - f(x0)/f'(x0)
In practice, unless the starting point x0 is very close to the root, the value x1 is not close enough to the root we are seeking, so Newton's method is applied again using the tangent line at x = x1. This process can be repeated, leading to a sequence of values x0, x1, x2, ..., where the value of xn+1 is determined from the equation

xn+1 = xn - f(xn)/f'(xn)
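The iteration above is easy to express in code. Here is a short Python sketch (Python rather than Maple, purely for illustration; the function names and the sample equation x^2 - 2 = 0 are my own choices, not from the worksheet):

```python
def newton_step(f, fprime, x):
    # One step of the iteration: x_new = x - f(x)/f'(x)
    return x - f(x) / fprime(x)

# Illustrative example: approximate sqrt(2) as a root of
# f(x) = x^2 - 2, starting from the initial guess x0 = 1.
f = lambda x: x**2 - 2
fprime = lambda x: 2 * x

x = 1.0
for _ in range(6):
    x = newton_step(f, fprime, x)
print(x)  # close to 1.41421356...
```

Six iterations are already enough here to reach machine precision, which illustrates how quickly the method converges once the iterates get near a root.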
For example, consider the equation from above,

sin(x) = x/2

If you want to solve this equation using Newton's method, the first thing to do is to write it in standard form as

sin(x) - x/2 = 0
Then, plot the expression to get an idea of where the roots are. You may have to adjust the plot range to locate all of the roots. The following commands show that there are exactly three roots, one at x=0, and two others at about 2 and -2.
> f := x -> sin(x) - x/2;
> plot(f(x), x = -10..10);
It isn't very hard to write a Maple command that will do one step of Newton's method. The examples below show a very simple method for doing so, using the function f defined above and a starting value of x=2.2.
> newt := x -> x - f(x)/D(f)(x);
> x1 := newt(2.2);
> x2 := newt(x1);
> x3 := newt(x2);
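For comparison, the same one-step rule can be iterated in Python (a sketch for illustration only; the function and the starting value x = 2.2 come from the text, and the hand-written derivative plays the role of Maple's D(f)):

```python
import math

def f(x):
    return math.sin(x) - x / 2

def fprime(x):
    # Derivative of f, standing in for Maple's D(f)
    return math.cos(x) - 0.5

# Repeatedly apply the Newton step x -> x - f(x)/f'(x),
# starting from x = 2.2 as in the text.
x = 2.2
for _ in range(6):
    x = x - f(x) / fprime(x)
print(x)
```

The iterates settle down very quickly to the positive root near 1.9.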
One of the problems with Newton's method is knowing when to stop. With a numerical method, you can never get a root exactly, but only a numerical approximation to it. There are basically two measures of how good your approximation is. One is the absolute value of f(xn): if this number is less than a chosen tolerance, then you should have a good approximation to the root. The other measure is the change in the value of xn: if the values of xn and xn+1 are very close, then this can also be a criterion for stopping. You should go back to the example above and experiment with changing the number of iterations.
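Both stopping criteria can be combined in a single routine. The Python sketch below is an illustration (the tolerance 1e-10 and the iteration cap are arbitrary choices of mine): it stops as soon as either |f(xn)| is small or successive iterates agree.

```python
import math

def newton(f, fprime, x, tol=1e-10, max_iter=50):
    # Stop when either |f(x_new)| < tol (small residual) or
    # |x_new - x| < tol (successive iterates agree).
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(f(x_new)) < tol or abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Newton's method did not converge")

root = newton(lambda x: math.sin(x) - x / 2,
              lambda x: math.cos(x) - 0.5,
              2.2)
print(root)  # the positive root near 1.9
```

Capping the number of iterations matters in practice: without it, a run that fails to converge would loop forever instead of reporting a failure.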
The worst thing about Newton's method is that it may fail to converge. Here is an example that doesn't seem to be converging after five iterations. Try increasing the number of iterations and see what happens.
> g := x -> x^5-12*x^4+3*x^3+7*x^2-2*x-1;
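The worksheet does not show the starting value used for g, so the Python sketch below picks x0 = 1.0 arbitrarily and simply prints the first ten iterates so you can watch whether they settle down:

```python
def g(x):
    return x**5 - 12*x**4 + 3*x**3 + 7*x**2 - 2*x - 1

def gprime(x):
    # Derivative of g, standing in for Maple's D(g)
    return 5*x**4 - 48*x**3 + 9*x**2 + 14*x - 2

# x0 = 1.0 is an arbitrary choice for illustration; the
# worksheet's actual starting value is not shown.
x = 1.0
iterates = [x]
for _ in range(10):
    x = x - g(x) / gprime(x)
    iterates.append(x)
print(iterates)
```

Unlike the previous example, the early iterates here wander around rather than homing in immediately, which is exactly the kind of behavior the text asks you to investigate by increasing the number of iterations.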
William W. Farr