Taylor polynomials

Any sufficiently well-behaved (analytic) function $f(x)$ can be written as a series of this special form: $$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n$$ This so-called Taylor series has infinitely many terms, but a good approximation is often obtained by keeping just enough of them and letting the reference point $a$ be close to the point $x$ we are interested in. Terms involving higher-order derivatives can then be disregarded without significant loss of accuracy.

    Intro

    Colin Maclaurin was an absolute genius, no question about it. For a very long time, he held the record as the world's youngest professor, appointed by the University of Aberdeen in 1717 when he was only nineteen.

    A topic of particular interest to him was the ability to construct functions from infinitely long polynomials, for which he coined the term Taylor series after his fellow mathematician Brook Taylor.

    Maclaurin later also got his name recognized in the field of mathematics: a special case of the Taylor series is today known as the Maclaurin series.

    Despite what the names suggest, though, neither Maclaurin nor Taylor was the first to use polynomials to evaluate other functions.

    In fact, the great Indian mathematician Madhava of Sangamagrama had done precisely that for trigonometric functions as early as the fourteenth century.

    Concept

    There are many wild functions out there. How could you, for example, make sense of some complicated expression built from exponentials and trigonometric functions? Does it increase? Does it decrease? Does it oscillate? You really can't tell, unless you're some kind of genius.

    But we can approximate the whole thing with a polynomial. And polynomials are easy to deal with. They're easy to evaluate, easy to differentiate. We can increase the degree of the polynomial to improve our approximation.

    This is actually how calculators work. They don't have built-in functions for $\sin$, $\cos$, $e^x$ and the like. Instead, a calculator will approximate the value of something like $\sin(0.5)$ with a polynomial.
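
    As a rough illustration of the idea (a sketch, not how any particular calculator is actually wired), here is how a Maclaurin polynomial for $\sin(x)$ could be evaluated in Python:

    ```python
    import math

    def sin_poly(x, n_terms=5):
        """Approximate sin(x) with the first n_terms nonzero terms of its Maclaurin series."""
        total = 0.0
        for k in range(n_terms):
            # k-th nonzero term: (-1)^k * x^(2k+1) / (2k+1)!
            total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
        return total

    x = 0.5
    print(sin_poly(x))  # ≈ 0.4794255386
    print(math.sin(x))  # ≈ 0.4794255386
    ```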

    Math

    The Taylor series for a function $f$ is defined as follows at the point $x = a$:

    $$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n$$

    If we instead put an upper bound $N$ on our sum, we find an approximation to the function around $x = a$:

    $$f(x) \approx \sum_{n=0}^{N} \frac{f^{(n)}(a)}{n!}(x-a)^n$$

    More concretely, we could put $N = 2$; then we find that:

    $$f(x) \approx f(a) + f'(a)(x-a) + \frac{f''(a)}{2}(x-a)^2$$

    If we do this for a function, say $\sin(x)$, we get a better and better approximation. Below we show an animation of what happens when we increase the number of terms we use:
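
    The same experiment works numerically. A short Python sketch, printing how the error at a fixed point shrinks as terms of higher degree are included:

    ```python
    import math

    x = 2.0
    approx = 0.0
    for k in range(6):
        # add the next nonzero term of the sine series, of degree 2k + 1
        approx += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
        print(f"up to degree {2 * k + 1}: error = {abs(approx - math.sin(x)):.2e}")
    ```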

    We can also calculate the series for $\sin(x)$ around $x = 0$ and set the upper limit of the summation to 7:

    $$\sin(x) \approx x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!}$$

    Maclaurin polynomials

    A Maclaurin polynomial for $\sin(x)$

    Sometimes, we wish to rewrite a function as a polynomial. Strangely enough, we can actually do this so that the function remains exactly the same, at least close to a point.

    This comes in handy for computing certain limits, solving messy equations and so on.

    As it's so easy to integrate and differentiate polynomials, the trick can make our lives considerably easier.

    We try it out with $\sin(x)$. We would like to find a polynomial for $\sin(x)$ around $x = 0$, to make it a bit easier. $\sin(x)$ is an odd function, so we stick to the odd polynomials, starting with $p(x) = x$.

    That's actually not so bad around zero, if we are interested only in approximating the value. However, adding a $-\frac{x^3}{3!}$ term to the polynomial makes the approximation of the convexity better:

    We can keep adding terms and make the polynomial a better and better match.

    This is the essence of Maclaurin polynomials. As we add more and more terms, the polynomial will eventually match the function.

    Sometimes, the match is limited to an interval, but for some functions, we obtain a perfect match for the whole domain.

    The Maclaurin polynomial is a special case of the Taylor polynomial: whereas Taylor polynomials describe the function in the vicinity of any point $x = a$, the Maclaurin polynomial describes the function in the vicinity of $0$. We'll look at Taylor polynomials in the next section.

    Finding Maclaurin polynomials

    So, how do we find the magical polynomial?

    Well, the first term uses the function value at $x = 0$ to approximate the function value close to $0$. The second term approximates the slope, the derivative, using the derivative at $0$. The third approximates the convexity, the second derivative. And so on.

    It all follows a formula:

    $$f(x) = f(0) + f'(0)x + \frac{f''(0)}{2!}x^2 + \frac{f'''(0)}{3!}x^3 + \dots$$

    This is an infinite series. We call it the Maclaurin series of $f$, and wherever the series converges to the function, it is exactly equal to $f(x)$.
    If we only write the first $n + 1$ terms, then it's called the Maclaurin polynomial of degree $n$, and it is an approximation of the function.
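
    If you want a machine to do the differentiating, the sympy library can produce these expansions symbolically; a minimal sketch, assuming sympy is installed:

    ```python
    import sympy as sp

    x = sp.symbols('x')
    # Maclaurin expansion of sin(x); terms of degree 8 and above are folded into O(x**8)
    print(sp.series(sp.sin(x), x, 0, 8))
    # x - x**3/6 + x**5/120 - x**7/5040 + O(x**8)
    ```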

    The Maclaurin polynomial is an approximation of the function

    This is the Maclaurin polynomial of degree $3$ for $\sin(x)$:

    $$p_3(x) = x - \frac{x^3}{3!}$$

    That's just two terms in the example, but the function value and the second derivative at $x = 0$ are both $0$, so those terms are invisible. They still count though!

    When calculating with Maclaurin polynomials, we choose the degree of the polynomial depending on its application.

    The sigma notation

    This is a compact way of writing the Maclaurin polynomial of degree $n$ for $f$:

    $$p_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(0)}{k!}x^k$$

    If you aren't familiar with it, the funky Greek letter $\Sigma$ is the sum symbol, adding together the terms to the right for all $k$ between $0$ and $n$.

    Letting $n$ go all the way to infinity, we get the Maclaurin series for $f$.
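
    The sigma formula translates directly into code. A minimal sketch for $e^x$, whose derivatives at $0$ are all equal to $1$ (the helper name `maclaurin_exp` is just illustrative):

    ```python
    import math

    def maclaurin_exp(x, n):
        # p_n(x) = sum_{k=0}^{n} f^(k)(0) / k! * x^k, and every derivative of e^x at 0 is 1
        return sum(x ** k / math.factorial(k) for k in range(n + 1))

    print(maclaurin_exp(1.0, 10))  # ≈ 2.7182818011
    print(math.exp(1.0))           # ≈ 2.7182818285
    ```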

    Some useful Maclaurin series

    Though it's not hard in general to compute Maclaurin polynomials, it becomes a time-consuming craft for polynomials of higher degree.

    Thus, these good-looking and common Maclaurin series are worth remembering:

    $$e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!} \qquad \sin(x) = \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k+1}}{(2k+1)!} \qquad \cos(x) = \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k}}{(2k)!}$$

    $$\frac{1}{1-x} = \sum_{k=0}^{\infty} x^k \qquad \ln(1+x) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1} x^k}{k}$$

    Taylor polynomials

    Taylor vs Maclaurin

    The approximation of a function value using Maclaurin polynomials gets worse and worse as we step away from $x = 0$.

    In some cases, not even the infinite Maclaurin series tends to the function value, if we go looking for function values too far from $0$. For example, the Maclaurin series for $\ln(1+x)$ does not converge to the function value if $x$ doesn't lie inside $(-1, 1]$.
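
    You can watch this failure numerically. A small Python sketch, using the $\ln(1+x)$ series from above at $x = 2$, outside $(-1, 1]$: instead of settling at $\ln 3$, the partial sums blow up:

    ```python
    import math

    x = 2.0  # outside (-1, 1], so the series cannot converge here
    partial = 0.0
    for k in range(1, 16):
        partial += (-1) ** (k + 1) * x ** k / k
        if k % 5 == 0:
            print(f"{k:2d} terms: partial sum = {partial:12.2f}")

    print(f"target value ln(3) = {math.log(3):.4f}")
    ```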

    But do not despair: what Maclaurin polynomials do around zero, Taylor polynomials can do anywhere.

    Say we want to approximate a function around some point $x = a$ far from zero. Then, we'd like to approximate the function using values closer to $a$. This is what Taylor polynomials do.

    Taylor polynomials approximate the function around any point

    Taylor polynomials

    The Taylor series around a point $x = a$ for $f$ looks like this:

    $$f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + \dots$$

    Notice that if we plug in $a = 0$, we're back with good old Maclaurin. Thus, Maclaurin series are but a special case of Taylor!

    Cutting the series off after the term of degree $n$, analogously with the Maclaurin case, we get a Taylor polynomial of degree $n$.

    Ok so, the Taylor series doesn't look as nice as the Maclaurin series. But, look: as $x$ gets closer and closer to $a$, the function value approaches $f(a)$. And, close to $a$, the $(x-a)^n$ terms all get pretty small, assuring that the function value dominates over the derivative terms. That seems pretty reasonable, right?

    With the sigma notation, the Taylor polynomial of degree $n$ is:

    $$p_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(x-a)^k$$

    Like with Maclaurin, replacing $n$ by $\infty$ gives the Taylor series.

    An example

    Let's find the Taylor polynomial of degree $n$ for $e^x$ around a point $x = a$.

    As the derivative of the exponential function is itself, computing the derivatives is easy peasy. So, around $x = a$:

    $$e^x \approx \sum_{k=0}^{n} \frac{e^a}{k!}(x-a)^k = e^a\left(1 + (x-a) + \frac{(x-a)^2}{2!} + \dots + \frac{(x-a)^n}{n!}\right)$$
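
    Here is a minimal Python sketch of that expansion (the function `exp_taylor` and its parameters are just illustrative names, not from any library):

    ```python
    import math

    def exp_taylor(x, a, n):
        """Degree-n Taylor polynomial of e^x around a; every derivative at a equals e^a."""
        ea = math.exp(a)  # in a real calculator this value would itself be approximated
        return sum(ea * (x - a) ** k / math.factorial(k) for k in range(n + 1))

    # good near the center a = 2, worse far away from it
    print(exp_taylor(2.1, a=2, n=3), math.exp(2.1))  # ≈ 8.166 vs 8.166
    print(exp_taylor(4.0, a=2, n=3), math.exp(4.0))  # ≈ 46.80 vs 54.60
    ```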

    Some types of computer programs use Taylor series to solve equations. When we don't need an exact answer, this is a good choice for quick calculations.

    Taylor error term

    Let's be frank: doing Taylor expansions is kind of tedious. It's just a matter of differentiating over and over again. Then you plug the derivatives into Taylor's formula, and voilà, that's your Taylor polynomial.

    For example, if you're asked to find the Taylor polynomial of $\sin(x)$ of degree seven, you'd have to differentiate the function seven times. Keep in mind that this is a kind function, whose derivatives don't grow more complicated as you go along. If you, however, were dealing with $\tan(x)$, things would quickly escalate in complexity.

    Unless the function is exceptionally kind, we won't bother calculating a Taylor expansion with a large number of terms. We've got to cut it short somewhere. But where?

    In most applications, our approximation doesn't need to be perfectly accurate. An engineer just needs to know an upper bound on the error of the approximation.

    If he thinks his construction will collapse when external forces exceed some critical number of newtons, he can sleep at night as long as he knows that the actual force, error included, stays below that number. As for the precise value, who cares?

    An example

    Let's say you've found the Taylor polynomial of degree $2$ for the function $e^x$ around $x = 0$, which is $p(x) = 1 + x + \frac{x^2}{2}$. Oh, by the way, since it's a Taylor polynomial around $0$, it's likewise a Maclaurin polynomial.

    Next, you approximate $e^x$ by computing $p(x)$. But remember, the Taylor polynomial is just an approximation.

    To find the error, you'd compute the term of degree $3$ of the Taylor series, with the derivative evaluated at some point $c$. Here, $c$ is some number between $0$ and $x$, the point at which we're evaluating $p$. The third derivative of $e^x$ is $e^x$ itself, so the error turns out to be

    $$E(x) = \frac{e^c}{3!}x^3.$$

    Let's phrase this more generally. To find the error term of the $n$th degree polynomial around $a$, compute the $(n+1)$th term. Instead of evaluating the derivative at $a$, evaluate it at $c$. In symbols, the error comes out to be:

    $$E(x) = \frac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1}$$

    where $c$ lies between $a$ and $x$.
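
    In code, this bound is easy to put to work. A minimal Python sketch, reusing the degree-2 example for $e^x$ from above, where $e^c \le e^x$ for $0 < c < x$ gives a computable upper bound:

    ```python
    import math

    def p2(x):
        return 1 + x + x ** 2 / 2  # degree-2 Maclaurin polynomial of e^x

    x = 0.5
    actual_error = abs(math.exp(x) - p2(x))
    # Lagrange bound: |f'''(c)| / 3! * x^3, and f'''(c) = e^c <= e^x for 0 < c < x
    bound = math.exp(x) * x ** 3 / math.factorial(3)
    print(actual_error, bound)  # ≈ 0.0237, safely below ≈ 0.0343
    ```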

    Big O notation

    In computer science, you might come across the concept of Big O. Two programs might return the same output, yet the first might be slow in comparison to the second, taking far more time.

    In order to compare different algorithms, computer scientists try estimating how long an algorithm takes to finish. A good algorithm has a short runtime, and the runtime doesn't increase much as you give the algorithm more input data.

    When computer scientists say an algorithm has runtime $O(n)$, where $n$ is the size of the input, they mean the runtime increases like the function $Cn$, with $C$ some constant, when $n$ gets really big. If the runtime is $O(n^2)$, it increases like $Cn^2$. Clearly, we'd prefer the first case. The Big O basically describes the dominating term as the input grows.
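
    As a toy illustration (hypothetical helper functions, not any standard library API), summing a list touches each element once, while comparing all pairs touches $n^2$ of them:

    ```python
    def total(xs):
        # O(n): one pass over the input
        s = 0
        for v in xs:
            s += v
        return s

    def equal_pairs(xs):
        # O(n^2): every element is compared against every element
        hits = 0
        for a in xs:
            for b in xs:
                hits += a == b
        return hits
    ```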

    Mathematicians also have this notion of Big O. In math land, the term $O(x^n)$ is a proxy for some function which behaves like $x^n$ for small $x$. When doing Taylor expansion, we'll use terms like $O(x^2)$, $O(x^3)$, etc. The idea of Big O is daunting at first, but all these O's are very friendly. The Big O is like a rug which conceals the remaining terms, so your expression doesn't look quite as messy.

    Let's say you're expanding $\sin(x)$ but are too lazy to go beyond the $x^3$ term. You could shove everything else under the Big O rug, writing:

    $$\sin(x) = x - \frac{x^3}{3!} + O(x^5)$$

    Or, if you don't even bother expanding up to the $x^3$ term, you'd get:

    $$\sin(x) = x + O(x^3)$$

    In contrast to the Big O in computer science, the dominating term here will be the one with the smallest exponent.

    As $x$ is small, the function $x^2$ is bigger than $x^3$. So if you had a function like $x^2 + x^3$, the $x^2$ term would dominate. In general, the $x^n$ term bosses around the terms of higher order. So if a function is $O(x^m)$, where $m \geq n$, it's also $O(x^n)$.
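
    A quick numeric sanity check of what such an $O$-term hides: for $\sin(x) = x - \frac{x^3}{3!} + O(x^5)$, the leftover divided by $x^5$ should approach a constant ($\frac{1}{120}$) as $x \to 0$:

    ```python
    import math

    for x in [0.5, 0.1, 0.01]:
        remainder = math.sin(x) - (x - x ** 3 / 6)  # the part hidden inside O(x^5)
        print(x, remainder / x ** 5)  # approaches 1/120 ≈ 0.008333 as x -> 0
    ```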
