34 Errors and Approximations

We begin this section with the study of numerical methods. Numerical methods are applied every day in scientific research and engineering, where approximations and their errors are calculated to guide development in the right direction. In the following subsections we will see familiar notation such as integrals, derivatives, and various functions from calculus. Much of what we learn in basic math courses such as algebra and calculus leads the way to more advanced concepts like the ones we cover here.

Across the concepts we cover, the main task is to calculate the best approximation within a given range. One of the many benefits of numerical methods is that they can be programmed for computer use; anyone can input data and run an established routine to obtain a quick approximation.

With any approximation there is room for error, and no programmed calculation is perfect. With this in mind, we begin by covering how to calculate errors and the various methods available, before moving on to numerical integration.

As already mentioned, errors exist in numerical methods because we are making approximations rather than computing an exact value. An approximation is rarely the exact solution, but the amount by which it is off, known as the error, is quantifiable.

Before we look at approximations, we need to understand the different errors and how to calculate them, so that we can measure the accuracy of each approximation as we go.

We begin with the most straightforward error measure, the true error.

True Error: Let A be the true value for f(x) at x=a using direct substitution. Let B be the approximate value for f(x) at x=a, using an approximation method. The true error is thus:

    \begin{equation*} E_{T}=|A-B|. \end{equation*}

We will see an example of the true error when we work through Taylor series examples. In later subsections we will meet approximations that begin at a point near the value of interest and improve with each successive calculation after the initial approximation. In such a setting we usually cannot compute the true value, so we can only calculate the error between successive approximations; by watching whether these approximate errors trend down or up, we can tell whether the approximations are converging toward a value or drifting away from it.

Approximate Error: Let A_{n} for some n\in \mathbb{N} be the previous value of an approximation and let A_{n+1} be the current value for the approximation. We calculate the approximate error as

    \begin{equation*} E_{A}=|A_{n+1}-A_{n}|. \end{equation*}
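As a small illustration, both error measures defined above can be written as one-line helpers in Python. This is only a sketch, and the function names true_error and approximate_error are our own rather than part of any library.

    def true_error(true_value, approx_value):
        # E_T = |A - B|: absolute difference between the true value and the approximation
        return abs(true_value - approx_value)

    def approximate_error(current, previous):
        # E_A = |A_{n+1} - A_n|: absolute difference between successive approximations
        return abs(current - previous)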

Notice the trend in the errors so far: each is calculated as an absolute value. This is because an approximation may be greater or less than the value it approximates, and we are only concerned with how large the difference is.

In day-to-day life we more often see errors expressed as percentages, which is why numerical methods also defines relative errors. For both the true and the approximate error we have the following relative error formulas.

  1. True Relative Error: Let A be the true value for f at point x and let B be the approximate value for f centered at x. Then the true relative error can be found as

    \begin{equation*} \varepsilon_{T}=\left|\frac{A-B}{A}\right|\cdot 100 \end{equation*}

  2. Approximate Relative Error: Let A_{n} be the previous approximate value for f centered at x and let A_{n+1} be the current approximate value for f centered at x. Then the approximate relative error can be found as

    \begin{equation*} \varepsilon_{A}=\left|\frac{A_{n+1}-A_{n}}{A_{n+1}}\right|\cdot 100 \end{equation*}
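Both relative errors can be written as short Python helpers in the same spirit as the sketch above; the function names are again our own.

    def true_relative_error(true_value, approx_value):
        # epsilon_T = |(A - B) / A| * 100, a percentage of the true value
        return abs((true_value - approx_value) / true_value) * 100

    def approximate_relative_error(current, previous):
        # epsilon_A = |(A_{n+1} - A_n) / A_{n+1}| * 100, a percentage of the current approximation
        return abs((current - previous) / current) * 100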

Now that we have established how to calculate the errors for our approximations, we can introduce the different approximation methods. In the next few subsections we will also see when different error calculations are applicable and when they are not. Let us begin with the Taylor series approximation and the appropriate method for finding its error.

34.1 Taylor Series

We have already covered series in a previous section; with this in mind we introduce the Taylor series. The Taylor series is an expansion of the function f(x) about a point x=a. We write the series as

    \begin{equation*} f(x)=f(a)+f'(a)(x-a)+\frac{f''(a)}{2!}(x-a)^{2}+\cdots+\frac{f^{(n)}(a)}{n!}(x-a)^{n}+R_{n} \end{equation*}

We can interpret this notation as follows. We want the best approximate value of the function f at the point x, so we start the Taylor series by evaluating f at a nearby point a. As we progress through the expansion, taking successive derivatives of f at a and inserting them as the series requires, we build on the approximation; each additional term makes the approximation more precise.
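To make this process concrete, the following rough Python sketch (assuming the SymPy library is available for symbolic differentiation; the helper name taylor_approx is our own) sums the terms of the degree-n Taylor polynomial. The call at the end matches the sine example worked later in this subsection.

    import sympy as sp
    from math import factorial

    def taylor_approx(f, a, x_value, n, x=sp.Symbol('x')):
        # Evaluate the degree-n Taylor polynomial of the SymPy expression f about x = a at x_value.
        total = 0.0
        for k in range(n + 1):
            deriv_at_a = sp.diff(f, x, k).subs(x, a)   # k-th derivative of f evaluated at a
            total += float(deriv_at_a) / factorial(k) * (x_value - a) ** k
        return total

    x = sp.Symbol('x')
    print(taylor_approx(sp.sin(x), float(sp.pi / 3), float(sp.pi / 2), 3))   # roughly 0.99715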

At the very end of the Taylor series formula we see R_{n}, the gap between the approximation and the true value. We call R_{n} the remainder term, since it is everything that remains between the degree-n approximation and the true value. The remainder is included because we wrote f(x)=, not f(x)\approx: the Taylor polynomial plus the remainder gives the exact value of f(x). The remainder is given by the following equation.

    \begin{equation*} R_{n}=\frac{f^{(n+1)}(z)}{(n+1)!} \cdot (x-a)^{n+1} \end{equation*}

Here z is some point with a\leq z\leq x. In the examples below we estimate the remainder by choosing z so that |f^{(n+1)}(z)| is as large as possible on the interval, which makes |R_{n}| an upper bound for the true error. Checking that |R_{n}| \geq |E_{T}| then gives a consistency check on our Taylor series calculations.
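Continuing the sketch above, we can estimate the remainder the same way the worked examples below do: evaluate |f^{(n+1)}| at whichever end of the interval [a, x] makes it largest. This endpoint choice mirrors the examples rather than performing a true maximization, and the helper name is again our own.

    import sympy as sp
    from math import factorial

    def remainder_bound(f, a, x_value, n, x=sp.Symbol('x')):
        # Estimate |R_n| by evaluating |f^(n+1)| at whichever endpoint of [a, x] makes it largest.
        deriv = sp.diff(f, x, n + 1)
        worst = max(abs(float(deriv.subs(x, a))), abs(float(deriv.subs(x, x_value))))
        return worst / factorial(n + 1) * abs(x_value - a) ** (n + 1)

    x = sp.Symbol('x')
    print(remainder_bound(sp.sin(x), float(sp.pi / 3), float(sp.pi / 2), 3))   # roughly 0.003132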

The Taylor series, together with the relative error, is most useful when f cannot be evaluated directly at the desired point x but we still want a value for f(x) there. That is not the situation in the following example: we could easily evaluate f at the desired x, but to show the accuracy of the Taylor series we use an example that can also be checked by direct substitution.

 

Example:

Given f(x)=\sin{x} approximate f(x) at x=\frac{\pi}{2} with a=\frac{\pi}{3} through the third derivative.

We start by calculating the first three derivatives for f(x).

    \begin{equation*} \begin{split} f(x)&=\sin{x}\\ f'(x)&=\cos{x}\\ f''(x)&=-\sin{x}\\ f^{(3)}(x)&=-\cos{x} \end{split} \end{equation*}

Next we evaluate each of these at a=\frac{\pi}{3}.

    \begin{equation*} \begin{split} f(a)&=\sin{\frac{\pi}{3}}=0.866025\\ f'(a)&=\cos{\frac{\pi}{3}}=0.5\\ f''(a)&=-\sin{\frac{\pi}{3}}=-0.866025\\ f^{(3)}(a)&=-\cos{\frac{\pi}{3}}=-0.5 \end{split} \end{equation*}

Next we calculate x-a=\frac{\pi}{2} - \frac{\pi}{3}=\frac{\pi}{6}. Now we can substitute these values into the Taylor series formula we saw earlier and find the solution, rounding each term to six decimal places.

    \begin{equation*} \begin{split} \sin \left( \frac{\pi}{2} \right) &\approx 0.866025+\left( 0.5\cdot \left( \frac{\pi}{6} \right) \right)-\left( \frac{0.866025}{2!} \cdot \left( \frac{\pi}{6} \right) ^{2} \right)- \left( \frac{0.5}{3!} \cdot \left( \frac{\pi}{6} \right) ^{3} \right)\\ &=0.866025+0.261799-0.118713-0.011962\\ &=0.997149 \end{split} \end{equation*}

To find the true error we use the true value \sin{\frac{\pi}{2}}=1, which gives

    \begin{equation*} E_{T}=|1-0.997149|=0.002851 \end{equation*}

Next we calculate R_{3}. We choose z in the interval \frac{\pi}{3}\leq z\leq \frac{\pi}{2} that maximizes |f^{(4)}(z)|=|\sin{z}|, which gives z=\frac{\pi}{2}.

    \begin{equation*} R_{3}=\frac{f^{(4)}(\frac{\pi}{2})}{4!}  \cdot \left( \frac{\pi}{6} \right)^{4}=0.003132 \end{equation*}

Since 0.003132 > 0.002851, the true error falls within the remainder bound, which is consistent with our calculations being correct.
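For readers who want to verify the arithmetic, the following few lines of Python simply re-compute the numbers above; the text rounds each term to six decimal places before summing, so the last digit may differ slightly.

    import math

    a, x = math.pi / 3, math.pi / 2
    h = x - a                                            # pi/6
    approx = (math.sin(a) + math.cos(a) * h
              - math.sin(a) / 2 * h**2
              - math.cos(a) / 6 * h**3)                  # roughly 0.99715
    e_t = abs(math.sin(x) - approx)                      # roughly 0.00285
    r_3 = abs(math.sin(x)) / math.factorial(4) * h**4    # roughly 0.003132, using z = pi/2
    print(approx, e_t, r_3)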

 

This rounds out our discussion on the Taylor series. Next we will cover the series utilized when we are approximating f using a=0, the Maclaurin Series.

34.2 Maclaurin Series

As previously mentioned, the Maclaurin series is simply the Taylor series with a=0. This should be straightforward, as we have already seen the formula for the Taylor series and for its remainder term; exactly the same formulas are used for the Maclaurin series. We will jump right into an example.

 

Example:

Given f(x)=\sec{x} approximate f(x) at x=\frac{\pi}{9} with a=0 through the fourth derivative [15].

First we begin by finding the derivatives for f through the fourth derivative.

    \begin{equation*} \begin{split} f(x)&=\sec{x}\\ f'(x)&=\sec{x} \tan{x} \\ f''(x)&=\sec{x} \tan^{2}{x} + \sec^{3}{x}\\ f^{(3)}(x)&=\sec{x} \tan^{3}{x} + 2\sec^{3}{x} \tan{x} + 3\sec^{3}{x} \tan{x}\\ &=\sec{x} \tan^{3}{x} + 5\sec^{3}{x} \tan{x}\\ f^{(4)}(x)&=\sec{x} \tan^{4}{x} +3\sec^{3}{x} \tan^{2}{x} +15\sec^{3}{x} \tan^{2}{x} +5\sec^{5}{x}\\ &=18\sec^{3}{x} \tan^{2}{x} + 5\sec^{5}{x} +\sec{x} \tan^{4}{x} \end{split} \end{equation*}

Next we calculate the value for each of the derivatives at a=0.

    \begin{equation*} \begin{split} f(a)&=\sec{0}=1\\ f'(a)&=\sec{0} \tan{0} = 0\\ f''(a)&=\sec{0} \tan^{2}{0} + \sec^{3}{0}=1\\ f^{(3)}(a)&=\sec{0} \tan^{3}{0} + 5\sec^{3}{0} \tan{0}=0\\ f^{(4)}(a)&=18\sec^{3}{0} \tan^{2}{0} + 5\sec^{5}{0} +\sec{0} \tan^{4}{0}=5 \end{split} \end{equation*}

Now we substitute these values into the Taylor series formula; the first- and third-order terms vanish since their coefficients are zero.

    \begin{equation*} \begin{split} \sec \left( \frac{\pi}{9} \right) &\approx 1+\frac{1}{2!} \left( \left( \frac{\pi}{9} \right) -0\right)^{2}+\frac{5}{4!} \left( \left( \frac{\pi}{9} \right)-0\right) ^{4}\\ &=1+0.060924+0.003093\\ &=1.064017 \end{split} \end{equation*}

Now we find the true error.

    \begin{equation*} E_{T}=| \left( \sec{ \left( \frac{\pi}{9} \right)} \right) -1.064017|=0.000161 \end{equation*}

Next we calculate R_{4}. We choose z in the interval 0\leq z\leq \frac{\pi}{9} that maximizes |f^{(5)}(z)|, which gives z=\frac{\pi}{9}.

    \begin{equation*} R_{4}=\frac{f^{(5)}(\frac{\pi}{9})}{5!}  \cdot \left( \frac{\pi}{9} \right)^{5}=0.001455. \end{equation*}

Since 0.001455 > 0.000161, the true error again falls within the remainder bound, consistent with correct calculations.
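Again, a few lines of Python re-compute the numbers. The text does not list the fifth derivative of \sec{x}, so the expression used for f^{(5)} below is our own differentiation of f^{(4)}.

    import math

    def sec(t):
        # secant is not in the math module, so define it from cosine
        return 1 / math.cos(t)

    h = math.pi / 9
    # Only the even-order terms survive at a = 0: f(0) = 1, f''(0) = 1, f''''(0) = 5.
    approx = 1 + h**2 / math.factorial(2) + 5 * h**4 / math.factorial(4)   # roughly 1.064017
    e_t = abs(sec(h) - approx)                                             # roughly 0.000161
    # Differentiating f^(4) by hand gives
    # f^(5)(x) = sec(x)tan(x)^5 + 58 sec(x)^3 tan(x)^3 + 61 sec(x)^5 tan(x), evaluated at z = pi/9.
    t = math.tan(h)
    f5 = sec(h) * t**5 + 58 * sec(h)**3 * t**3 + 61 * sec(h)**5 * t
    r_4 = f5 / math.factorial(5) * h**5                                    # roughly 0.001455
    print(approx, e_t, r_4)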

 

As we can see from this example, all the formulas apply exactly as in a general Taylor series; the only difference is that in a Maclaurin series a=0. The next subsection introduces further ways to measure potential inaccuracy in calculations.

34.3 Analysis

In the real world, when we take a measurement we must account for the fact that it is not perfect; there is always room for error, whether human or computational. With this in mind, numerical methods teaches exact and approximate analysis. This is not the analysis of a real analysis course, with its axioms of sets and topology; here we learn how to account for uncertainty in our calculations.

Exact Uncertainty

We begin with the uncertainty in the measured variables of a function. Let x be a variable of a function u; we write x= \bar{x} \pm e_{x}. The same format works for any variable, so a variable a is written a=\bar{a} \pm e_{a}. Here \bar{x} denotes the mean value and e_{x} the radius of uncertainty. We define exact analysis of uncertainty as follows.

Definition.

Let u be a function such that \bar{u} is the mean value for u and the minimum and maximum values are defined as u_{min}, u_{max} respectively. The uncertainty of u can then be defined as follows.

    \begin{equation*} e_{u} = \Delta u =\frac{(u_{max}-\bar{u})+(\bar{u} -u_{min})}{2} = \frac{u_{max} - u_{min}}{2} \end{equation*}

In order to fully understand the process of exact analysis we refer to an example.

 

Example:

Given u=x^{2} y, let x= 1.0 \pm 0.1 and y= 2.0 \pm 0.1. Find the uncertainty of u using exact analysis.

First, we can look at the relative uncertainty of x and y by the following calculation

    \begin{equation*} \begin{split} \frac{e_{x}}{\bar{x}}&= \frac{0.1}{1.0} \times 100 = 10 \% \\ \frac{e_{y}}{\bar{y}}&= \frac{0.1}{2.0} \times 100 = 5 \% \end{split} \end{equation*}

Now to find the uncertainty of u we first need to find \bar{u}.

    \begin{equation*} \begin{split} \bar{u}&= (\bar{x})^{2} \bar{y}\\ &=1.0^{2} \cdot 2.0\\ &=2 \end{split} \end{equation*}

Next we need to calculate the maximum possible value of u.

    \begin{equation*} \begin{split} u_{max}&= (\bar{x} + e_{x})^{2} (\bar{y}+e_{y})\\ &=(1.0 + 0.1)^{2} (2.0+0.1)\\ &=2.541 \end{split} \end{equation*}

We must also do the same for the minimum possible value of u.

    \begin{equation*} \begin{split} u_{min}&=(\bar{x} -e_{x})^{2} (\bar{y} -e_{y})\\ &=(1.0 - 0.1)^{2} (2.0 -0.1)\\ &=1.539 \end{split} \end{equation*}

Now we can substitute these values into the uncertainty formula for e_{u}.

    \begin{equation*} \begin{split} e_{u} &=\frac{2.541 - 1.539}{2}\\ &= 0.501 \end{split} \end{equation*}

Thus we conclude that the absolute uncertainty is written as u=2 \pm 0.501, and we find the relative uncertainty to be

    \begin{equation*} \frac{e_{u}}{\bar{u}} \times 100=\frac{0.501}{2} \times 100= 25.05\% \end{equation*}
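A short Python re-computation of this example follows; the helper u and the variable names are our own.

    def u(x, y):
        # u = x^2 * y
        return x**2 * y

    # Measured values: x = 1.0 +/- 0.1, y = 2.0 +/- 0.1
    x_bar, e_x = 1.0, 0.1
    y_bar, e_y = 2.0, 0.1

    u_bar = u(x_bar, y_bar)                  # 2.0
    u_max = u(x_bar + e_x, y_bar + e_y)      # 2.541
    u_min = u(x_bar - e_x, y_bar - e_y)      # 1.539
    e_u = (u_max - u_min) / 2                # 0.501
    print(u_bar, e_u, e_u / u_bar * 100)     # mean, radius of uncertainty, relative uncertainty in percent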

 

We see in this example that in order to analyze the uncertainty in the output of u we need the uncertainty of every variable involved. Once the maximum and minimum have been found, we can use them to measure the uncertainty of the value u produces.

Exact analysis uses the maximum and minimum to find the average radius of uncertainty; the next method instead estimates the greatest possible uncertainty.

Approximate Uncertainty

As mentioned previously, approximate uncertainty is another way to measure uncertainty, but this time we estimate the greatest possible uncertainty. The approach is slightly different: we take the partial derivative of u with respect to each variable, multiply by that variable's uncertainty, and sum the absolute values of these contributions. Using the same example as for exact analysis, we can see how approximate analysis is applied.

 

Example:

Given u=x^{2} y, let x= 1.0 \pm 0.1 and y= 2.0 \pm 0.1. Find the uncertainty of u using approximate analysis.

First, for approximate analysis we take the absolute value of each variable's contribution (its partial derivative times its uncertainty) and sum them, as we see here. Since u depends only on x and y, only those two terms appear.

    \begin{equation*} \begin{split} e_{u} = \Delta u &= \left| \frac{\partial u}{\partial x} e_{x} \right| + \left| \frac{\partial u}{\partial y} e_{y} \right|\\ &=\left| (2\bar{x} \bar{y})(0.1) \right| + \left| \bar{x}^{2} (0.1) \right|\\ &=\left| 2(1.0)(2.0)(0.1) \right| + \left| (1.0)^{2} (0.1) \right|\\ &=\left|0.4 \right| + \left| 0.1 \right|\\ &=0.5 \end{split} \end{equation*}

Next we calculate \bar{u} the same way we did previously, in the exact analysis example.

    \begin{equation*} \begin{split} \bar{u}&=\bar{x}^{2} \bar{y}\\ &=(1.0)^{2} (2.0) \\ &=2.0 \end{split} \end{equation*}

Thus we have the absolute uncertainty of u as u = 2.0 \pm 0.5 and we can calculate the relative uncertainty the same as the previous example.

    \begin{equation*} \begin{split} \frac{e_{u}}{\bar{u}} \times 100 &= \frac{0.5}{2}\\ &=0.25 \times 100\\ &= 25\% \end{split} \end{equation*}

From this analysis we can also see that the greatest contribution to the uncertainty of u comes from x, whose contribution is 0.4, whereas the contribution from y is only 0.1.
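The corresponding check in Python for the approximate analysis is below; the partial derivatives of u are written out by hand and the variable names are our own.

    # Approximate analysis of u = x^2 * y with x = 1.0 +/- 0.1 and y = 2.0 +/- 0.1
    x_bar, e_x = 1.0, 0.1
    y_bar, e_y = 2.0, 0.1

    # Partial derivatives of u evaluated at the mean values
    du_dx = 2 * x_bar * y_bar            # 4.0
    du_dy = x_bar**2                     # 1.0

    contrib_x = abs(du_dx * e_x)         # 0.4, the dominant contribution
    contrib_y = abs(du_dy * e_y)         # 0.1
    e_u = contrib_x + contrib_y          # 0.5
    u_bar = x_bar**2 * y_bar             # 2.0
    print(e_u, e_u / u_bar * 100)        # 0.5 and 25.0 percent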

 

The previous example shows that approximate analysis has a significant benefit. If we need to find the so-called “weakest link” in a calculation, we can run an approximate analysis to identify which variable is contributing the most uncertainty to the final output. In real-life situations this is extremely useful for improving the accuracy of particular measurements in scientific research or engineering.

This concludes our discussion of errors and approximations. In the next section we return to integrals, which we first saw in calculus; this time we will use them to approximate an area.
