# Numerical methods

Calculating integrals analytically can be painful for complicated functions. In some cases, it's not even possible, since not all functions have elementary antiderivatives. When faced with such an issue, we are glad to have numerical methods at our service.

## Intro

How do engineers make sure that huge cargo ships minimize their water resistance to save fuel, or, even more importantly, don't sink? For such matters, the builders analyze the vessel's hull to find its total (wetted) surface area and the center of buoyancy. This involves integrals, and often complicated ones too. Depending on how the body is designed, they may not even have an exact solution. Luckily, we have numerical methods to the rescue.

Apart from saving us from drowning, techniques like Simpson's rule are what makes SpaceX able to ferry crews to the International Space Station. Furthermore, sophisticated numerical methods, along with powerful NVIDIA GPUs, have allowed scientists at the University of Illinois to simulate living cells.

## Concept

Some integrals, or rather, most integrals, can't be computed by hand. And if the integral can be computed by hand, the calculation may be pretty lengthy.

For example, have a look at this one:

$$\int e^{-x^2} \, dx$$

There's no way you'll be able to compute this by hand: the integrand has no elementary antiderivative. Those integration tools we covered are of no use here.

But integrals are used pretty much everywhere. We're in desperate need of a tool to approximate integrals.

## Math

Up until now we have been approximating integrals by using rectangles. A better way of approximating integrals is to use trapezoids instead of rectangles.

The trapezoids are just rectangles with a triangle on top. The area of one trapezoid is the base times the average of the two heights: $A = b \cdot \frac{h_1 + h_2}{2}$. For example, the integral above can be approximated by two trapezoids:
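The two-trapezoid computation can be sketched in a few lines of code. Since the document's example integral isn't reproduced here, $f(x) = e^{-x^2}$ on $[0, 1]$ is used as a stand-in:

```python
# A minimal sketch of the two-trapezoid approximation, using
# f(x) = exp(-x**2) on [0, 1] as an illustrative example.
import math

def f(x):
    return math.exp(-x**2)

a, b = 0.0, 1.0
mid = (a + b) / 2

# Each trapezoid's area: base times the average of the two heights.
area = (mid - a) * (f(a) + f(mid)) / 2 + (b - mid) * (f(mid) + f(b)) / 2
print(area)  # roughly 0.73, not far from the true value of about 0.7468
```

Even with just two trapezoids, the approximation lands within a few percent of the true value.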

## Newton's method

### A very brief introduction to numerical methods

Sometimes, using the methods we have seen to differentiate or integrate is ridiculously hard, or even impossible.

Then, we need numerical methods. These methods use a finite dataset to approximate a numerical solution to a problem. The notes of this section will introduce a selection of common numerical methods. So buckle up and enjoy!

When we use a numerical method, we often begin at some start value, which can be an educated guess. Then we take steps towards the solution. The closer to the actual value we need to get, the more steps we need to take. It's up to us to choose how close we need to be: the price we pay is more computational power.

### Newton's method for finding roots

Given a function $f$, we can use Newton's method or, as it's also called, the Newton-Raphson method for finding its roots. It requires that the function is differentiable, and uses the equation for the tangent line to find a root.

Newton's method finds roots by repeatedly traveling down tangent lines

This is the method: we start by guessing there's a root close to some value $x_0$. We draw the tangent line at $x_0$. Next, we go for a ride down the tangent line to the $x$-axis, calling this new value $x_1$.

Going straight up to the function curve again, we repeat the procedure, drawing the tangent line at $x_1$. Sliding down this tangent line to the $x$-axis, we find what we'll call $x_2$.

If we keep repeating this procedure, we can get arbitrarily close to the root. Check it out for $f(x) = x^2 - 2$:

Note that here, putting $f(x) = x^2 - 2$ into the method gives:

$$x_{n+1} = x_n - \frac{x_n^2 - 2}{2x_n}$$

Thus, as long as we choose an appropriate starting point, we can use the method to find a numerical value for $\sqrt{2}$!

### The method explained

We derive the method more formally using the equation for the tangent line.

Recall that the tangent line to $f$ at $x_0$ is:

$$y = f(x_0) + f'(x_0)(x - x_0)$$

Sliding down the tangent line is equivalent to plugging the point $(x_1, 0)$ into the tangent line equation. Since we know $x_0$, this lets us determine $x_1$. By shuffling around terms, we obtain:

$$x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}$$

The formula follows the same pattern for all the subsequent points. This is the general formula for Newton's method:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
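The iteration above translates almost directly into code. This is a minimal sketch, with the function, starting point, and tolerance all being illustrative choices:

```python
# A minimal sketch of Newton's method: x_{n+1} = x_n - f(x_n) / f'(x_n).
# The example function f(x) = x**2 - 2 has the root sqrt(2).

def newton(f, df, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)  # fails if df(x) == 0, as the Hazards section warns
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
print(root)  # approximately 1.41421356...
```

Notice how few iterations are needed: near a simple root, Newton's method roughly doubles the number of correct digits with every step.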

### Hazards

The method won't work if either the starting point or some point between the start and the root is a critical point. As the formula has the derivative at the current point in the denominator, this would mean dividing by zero, which is a big no-no.

We may also run into trouble if the function has vertical asymptotes: there is no way we can jump over such a gap using Newton's method. In general, it's a good idea to use an initial guess as close as possible to the root we seek.

If the above pitfalls are avoided, then as long as $f'$ is continuous close to the root and the sequence of approximations converges, the method will be your humble servant.

## Trapezoid rule

Integrals can be really complex. There are a few hacks to compute integrals, like substitution and partial fraction decomposition, but both of these hacks may involve laborious computations. To compute the integral of a complicated rational function, you might well need several sheets of paper. And you won't get some beautiful expression. Instead, you'll get some hotchpotch of logarithms and arctangents. Yuck.

While doing your computations, as the diligent student you are, you don't even know if you'll find a solution. It's quite depressing, especially in an exam situation. Some integrals, like:

$$\int \frac{\sin x}{x} \, dx$$

can't be solved with the methods covered in this course.

But integrals are super important! They're used everywhere: in finance, biology, physics etc. Integrals are an integral part of the modern world.

We need to be able to calculate complex integrals somehow; even unconventional methods are allowed. The most basic method for calculating integrals is the trapezoid rule. Look at the picture above. The integral can be approximated by the shaded trapezoid area:

$$\int_a^b f(x) \, dx \approx \frac{b - a}{2}\left(f(a) + f(b)\right)$$

To make our approximation more accurate, let's use the same reasoning on several subintervals of width $h = \frac{b - a}{n}$. Since the interior points get summed up twice, we end up with:

$$\int_a^b f(x) \, dx \approx \frac{h}{2}\left(f(x_0) + 2f(x_1) + 2f(x_2) + \dots + 2f(x_{n-1}) + f(x_n)\right)$$
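The composite sum above can be sketched as a short function. The test integrand $\sin x$ on $[0, \pi]$ is an illustrative choice, picked because the exact answer is $2$:

```python
# A minimal sketch of the composite trapezoid rule on n subintervals.
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    # Endpoints count once; every interior point effectively counts twice.
    total = (f(a) + f(b)) / 2
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

approx = trapezoid(math.sin, 0, math.pi, n=100)
print(approx)  # close to the exact value 2
```

With 100 subintervals the result agrees with the exact value to about four decimal places.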

## Simpson's rule

### Why do we need Simpson's rule?

Don't you think the trapezoid rule is kind of... basic? You chop up the integral into trapezoids and sum up their areas. That's all there is to it. It's the most basic way to approximate an integral, really. Couldn't all those brainy mathematicians come up with something more fun? The trapezoid rule is a bit like pasta pesto without any cheese or pine nuts. Basic, right?

The trapezoid rule works, but it converges slowly towards the correct value: halving the step length only cuts the error to about a quarter. You need to decrease the step length by quite a bit for a marked improvement in accuracy. This means you'll need more function evaluations.

If you've got limited computational power and you need cutting-edge precision, the trapezoid rule won't be enough. Of course, there's nothing stopping you from setting the step length to something absurdly small. But your grandchildren will have died by the time your computer finishes. Or your computer might explode or something.

As a stock broker, let's say you'd like to compute a horrible integral. The integral gives the expected value of some stock, so there's a lot at stake. Instead of using the trapezoid rule, you'd better use Simpson's rule.

### Alright, but what is Simpson's rule?

The idea here is to approximate the function with a quadratic polynomial through the endpoints and the midpoint of the interval. Then the integral of the polynomial approximates the original integral, and you get:

$$\int_a^b f(x) \, dx \approx \frac{b - a}{6}\left(f(a) + 4f\left(\frac{a + b}{2}\right) + f(b)\right)$$

Now let's chop up the $x$-axis and use Simpson's rule on each little piece. Notice that each midpoint is multiplied by $4$. When we add up all the pieces, the endpoints are multiplied by $2$, except for the outermost endpoints. The entire integral can be approximated by the sum of the individual approximations. Put formally, with $h = \frac{b - a}{n}$, we get:

$$\int_a^b f(x) \, dx \approx \frac{h}{3}\left(f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + \dots + 4f(x_{n-1}) + f(x_n)\right)$$

Above we assumed that $n$ was even, in order to write our sum in a clear way.
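The alternating $4, 2, 4, \dots$ pattern is easy to implement. This sketch uses the same illustrative integrand as before, $\sin x$ on $[0, \pi]$, whose exact integral is $2$:

```python
# A minimal sketch of composite Simpson's rule; n must be even.
import math

def simpson(f, a, b, n):
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # Odd-index points (midpoints) get weight 4, even-index points weight 2.
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return h * total / 3

val = simpson(math.sin, 0, math.pi, n=10)
print(val)  # already very close to 2, with only 10 subintervals
```

Compare this to the trapezoid rule, which needs far more subintervals for the same accuracy: Simpson's error shrinks like the fourth power of the step length rather than the square.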

This is an even more accurate approximation. Now you're ready to go off and trade stocks!