# Numerical Methods

Numerical methods consist of algorithms used to find approximate answers. This is extremely useful because finding exact answers can be complicated or even impossible. In situations where a close approximation works just as well as an exact answer, it is often easier to use such algorithms than to struggle through the work of finding an exact answer. Another benefit is that computers can be programmed to carry out these algorithms, so tedious computations do not have to be done by hand. Numerical methods have benefited many areas of study but are most applicable to engineering and science [9].

The use of numerical methods, or numerical algorithms, can be traced as far back as 1650 BC to the Egyptians’ method for finding roots of an equation [3]. Ancient Greek mathematicians also contributed to numerical methods. For example, the work of Eudoxus of Cnidus and later of Archimedes led to the development of the method of exhaustion, which can be used as a method for approximations [3].

Once calculus was developed, it provided ways of finding exact values for problems arising in many areas, such as business, engineering, medicine, and the physical sciences [3]. However, as mentioned before, many of these problems were extremely difficult to solve. This motivated the search for alternative methods that could produce close approximations of the exact solutions, which further developed the area of numerical methods [3]. The same need for simpler, less laborious computation led to logarithms, invented by the Scottish mathematician John Napier [3]. Values were converted to logarithms using special tables; computations then consisted of adding and subtracting these logarithms rather than multiplying and dividing long decimal numbers [3].
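The principle behind Napier's tables is the identity $\log(ab) = \log a + \log b$. A small Python sketch illustrates the idea (the two numbers here are our own illustrative choices, and we use the computer's `log10` in place of a printed table):

```python
import math

# Multiply two numbers by adding their base-10 logarithms:
# log10(a * b) = log10(a) + log10(b), so a * b = 10 ** (log10(a) + log10(b)).
a, b = 3.25, 47.8
log_sum = math.log10(a) + math.log10(b)   # the "table lookup plus addition" step
product = 10 ** log_sum                   # converting back recovers a * b (up to rounding)
```

With printed tables, the lookup replaced a long multiplication of decimal numbers by a single addition, at the cost of the (small) error in the tabulated logarithms.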

Isaac Newton, the same man who helped make calculus what it is today, also contributed to numerical methods [3]. One such contribution that we will discuss is the Newton-Raphson method, which is used to find roots of nonlinear equations. Others who made important contributions to numerical methods were Leonhard Euler, Joseph-Louis Lagrange, and Carl Friedrich Gauss [3]. We will discuss one method associated with Gauss, known as Gauss-Seidel iteration, which is a method for solving systems of linear equations.
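As a preview of the Newton-Raphson method, here is a minimal Python sketch. It implements the standard iteration $x_{n+1} = x_n - f(x_n)/f'(x_n)$; the function names, tolerance, and example equation are our own illustrative choices:

```python
def newton_raphson(f, f_prime, x0, tol=1e-10, max_iter=50):
    """Approximate a root of f starting from the guess x0,
    using the iteration x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / f_prime(x)
        x -= step
        if abs(step) < tol:   # stop once successive iterates barely change
            break
    return x

# Example: approximate sqrt(2) as the positive root of f(x) = x^2 - 2.
root = newton_raphson(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
```

Starting from $x_0 = 1$, the iterates $1.5$, $1.4167$, $1.41422, \ldots$ converge rapidly toward $\sqrt{2}$; we will see why when we discuss the method in detail.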

We will begin our discussion with Taylor series, which are used to approximate the values of a function. Additionally, we will examine how to calculate the error, to gauge how accurate our approximation is. We will then discuss the Trapezoid Rule and Simpson’s Rules, with their respective errors. These rules are used to approximate integrals, which we have seen in calculus. Next come the Newton-Raphson method and the Secant Method, which approximate the roots of a nonlinear equation with the derivative (also a calculus topic) or without it, respectively. Finally, we will wrap up this section by discussing the Gauss-Seidel iteration method for approximating solutions to systems of linear equations, an alternative to the linear algebra method of Gaussian elimination.

Before we begin, there is one more idea we need to be aware of: error. Knowing how to calculate the error of each approximation method is one of the most important skills in numerical methods. The whole point of numerical methods is to find approximations when exact values are extremely difficult to obtain, but those approximations are worth little if they differ vastly from the exact answers. Error analysis is what tells us whether an approximation is close enough to the exact answer to be usable. Numerical methods would be of no use to us if we could not somehow measure the accuracy of our approximations.
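Two standard ways of measuring error are the absolute error $|x_{\text{exact}} - x_{\text{approx}}|$ and the relative error, which divides by $|x_{\text{exact}}|$. A quick Python illustration, using the classic approximation $22/7$ for $\pi$ (an example of our own choosing):

```python
import math

# Absolute and relative error of the approximation 22/7 for pi.
exact = math.pi
approx = 22 / 7                       # 3.142857..., a classic rational approximation
abs_error = abs(exact - approx)       # about 0.00126
rel_error = abs_error / abs(exact)    # about 0.0004, i.e. 0.04%
```

The relative error is often more informative because it puts the error in proportion to the size of the quantity being approximated.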

Now that we understand the importance of calculating errors, let’s begin our journey into the interesting world of numerical methods.