Some examples of Taylor series:

e^x = 1 + x + (1/2!) x^2 + (1/3!) x^3 + (1/4!) x^4 + . . .
    = Σ[n = 0, ∞, (1/n!) x^n]

sin x = x - (1/3!) x^3 + (1/5!) x^5 - (1/7!) x^7 + . . .
      = Σ[n = 0, ∞, (-1)^n (1/(2n + 1)!) x^(2n + 1)]

cos x = 1 - (1/2!) x^2 + (1/4!) x^4 - (1/6!) x^6 + . . .
      = Σ[n = 0, ∞, (-1)^n (1/(2n)!) x^(2n)]

sinh x = x + (1/3!) x^3 + (1/5!) x^5 + (1/7!) x^7 + . . .
       = Σ[n = 0, ∞, (1/(2n + 1)!) x^(2n + 1)]

cosh x = 1 + (1/2!) x^2 + (1/4!) x^4 + (1/6!) x^6 + . . .
       = Σ[n = 0, ∞, (1/(2n)!) x^(2n)]

ln x = (x - 1) - (1/2) (x - 1)^2 + (1/3) (x - 1)^3 - (1/4) (x - 1)^4 + . . .
     = Σ[n = 1, ∞, (-1)^(n + 1) (1/n) (x - 1)^n]

1/(1 - x) = 1 + x + x^2 + x^3 + x^4 + . . .
          = Σ[n = 0, ∞, x^n] (a geometric series)

tan⁻¹ x = x - (1/3) x^3 + (1/5) x^5 - (1/7) x^7 + . . .
        = Σ[n = 0, ∞, (-1)^n (1/(2n + 1)) x^(2n + 1)]

These series can be used to find some rather strange patterns in otherwise normal functions, like Euler's equation (e^(iπ) = -1). Some other weird properties are the facts that cos ix = cosh x, and sin ix = i sinh x.
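For the skeptical, all of this is easy to check numerically. Here is a quick Python sketch (my own illustration; `partial_sum` and `sin_coeff` are just names I made up) comparing partial sums of the series above with the library functions, plus the Euler and cos/cosh identities via complex arithmetic:

```python
import math
import cmath

def partial_sum(coeff, x, terms=20):
    """Sum the first `terms` terms of the power series sum coeff(n) * x^n."""
    return sum(coeff(n) * x**n for n in range(terms))

x = 1.3

# e^x = sum x^n / n!
assert abs(partial_sum(lambda n: 1 / math.factorial(n), x) - math.exp(x)) < 1e-9

# sin x: only odd powers survive, with alternating signs
def sin_coeff(n):
    return (-1) ** (n // 2) / math.factorial(n) if n % 2 else 0.0

assert abs(partial_sum(sin_coeff, x) - math.sin(x)) < 1e-9

# Euler's equation e^(i*pi) = -1, and cos(ix) = cosh(x)
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12
assert abs(cmath.cos(1j * x) - math.cosh(x)) < 1e-12
```

Twenty terms is already far more than enough at x = 1.3, since the factorials in the denominators crush the terms very quickly.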

Taylor Series

Definition:
A Taylor Series is a polynomial with an infinite number of terms, expressed as an Infinite Series. A Taylor Series can represent a function exactly, as long as the function is analytic. If the function is not infinitely differentiable, a Taylor polynomial of finite degree can still be used to approximate values of the function. Either way, the approximation is only guaranteed to be accurate on a certain interval of convergence.

Taylor Series Basics
To understand Taylor Series, let's first construct a polynomial: P(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + ... + a_n x^n + ... or, in other words

``` ∞
---    n
\   a x
/    n
---
n=0
```
But what do we use to represent a_n? First, let's take a step back and investigate the derivatives of this polynomial. Once you take a few derivatives, you'll find that the following pattern appears: P^(n)(0) = a_n · n!. With this knowledge, we can now...

Construct a Basic Taylor Series
Perhaps the most basic Taylor Series is that of ƒ(x) = e^x. We'll use this function to derive our first Taylor Series, centered at x = 0. Our objective is to make the polynomial we constructed above resemble this function. But how are we going to do this? Easy! Take the derivatives of both P(x) and ƒ(x) and set them equal.
First, we must start "cranking out" derivatives. This is easy, as all the derivatives of ƒ(x) = e^x are e^x!

Work:
```
ƒ(x)     = e^x                           ƒ(0)     = 1
ƒ'(x)    = e^x                           ƒ'(0)    = 1
ƒ''(x)   = e^x                           ƒ''(0)   = 1
ƒ'''(x)  = e^x                           ƒ'''(0)  = 1
ƒ⁽⁴⁾(x)  = e^x                           ƒ⁽⁴⁾(0)  = 1
```

Now, we can set P(0) = ƒ(0), P'(0) = ƒ'(0), P''(0) = ƒ''(0), and so forth. But this comes with a catch. Remember how above we found that P^(n)(0) = a_n · n!? This means we must divide the nth derivative of ƒ at 0 by n! to get a_n. Thus, we get the general formula for a Taylor series centered at x = 0:

``` ∞
---     n      n
\      ƒ (0) x
/     ---------
---       n!
n=0
```

Congratulations! You just constructed your first Taylor Series for ƒ(x) = e^x, centered at x = 0. Since all of its derivatives at 0 are 1, the sigma notation for this series is:

``` ∞
---      n
\       x
/      ---
---     n!
n=0
```

If you graph this, you will see that the polynomial curve starts to fit the graph of e^x, and fits even better as you add more terms to the polynomial. This is essentially how your calculator performs advanced operations (integrals, etc.) on complicated functions: polynomials are much easier to work with than whatever function you may provide.
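You can watch the fit improve without graphing anything. A small Python sketch (illustrative; `exp_partial` is a name I invented) shows the error of the partial sums shrinking as terms are added:

```python
import math

def exp_partial(x, n_terms):
    """Maclaurin polynomial of e^x truncated to n_terms terms."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

x = 2.0
errors = [abs(exp_partial(x, n) - math.exp(x)) for n in (2, 4, 8, 16)]

# Each partial sum fits better than the last
assert errors == sorted(errors, reverse=True)
```

At x = 2 the error drops from about 4.4 with two terms to essentially nothing with sixteen.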

We just created a special type of Taylor Series, because we chose to center our approximation at x = 0. This type of series is specifically known as a Maclaurin Series, named after the mathematician Colin Maclaurin. The general formula for a Taylor Series centered at x = a is:

``` ∞
---     n          n
\      ƒ (a) (x-a)
/     -------------
---         n!
n=0
```

Constructing other Taylor Series from known Taylor series

Now that we know the Taylor Series for ƒ(x) = e^x centered at x = 0, let's construct a series for g(x) = e^x - 1, centered at x = 0. This is quite easily done: all we have to do is take the original Taylor Series for ƒ(x) and subtract 1. Now say we wanted to construct a Taylor Series for h(x) = (e^x - 1)/x, centered at x = 0. All we have to do for this is take the series for g(x) and divide by x. Both of these can still be easily represented with sigma notation:

```
        ∞
       ---     n
g(x) = \     x
       /    ----
       ---   n!
       n=1

        ∞
       ---     n-1
h(x) = \     x
       /    -----
       ---    n!
       n=1
```
This technique is valid for all algebraic and calculus operations and especially useful for derivatives and integrals. For more information, read on!
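A quick Python sketch of the manipulation above (the function names are just for illustration) confirms that subtracting 1 and dividing by x really do give series for g and h:

```python
import math

def g_series(x, terms=20):
    """e^x - 1: the e^x series with the n = 0 term (the leading 1) removed."""
    return sum(x**n / math.factorial(n) for n in range(1, terms))

def h_series(x, terms=20):
    """(e^x - 1)/x: the g series with every power of x knocked down by one."""
    return sum(x**(n - 1) / math.factorial(n) for n in range(1, terms))

x = 0.7
assert abs(g_series(x) - (math.exp(x) - 1)) < 1e-12
assert abs(h_series(x) - (math.exp(x) - 1) / x) < 1e-12
```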

Common Taylor Series Useful in Forming Other More Complex Series

Nota Bene: All these series are centered at x = 0.

S(x) = sin(x)

```
        ∞
       ---          (2n+1)
S(x) = \        n  x
       /    (-1)  ---------
       ---          (2n+1)!
       n=0
```

C(x) = cos(x)

```
        ∞
       ---          (2n)
C(x) = \        n  x
       /    (-1)  -------
       ---          (2n)!
       n=0
```

L(x) = 1/(1+x)

```
        ∞
       ---       n  n
L(x) = \     (-1)  x
       /
       ---
       n=0
```

Delving a Bit Deeper: Intervals of Convergence

Once you start using these Taylor Series, you will notice that some fit the curve of the actual function better and better as you increase the number of polynomial terms, others fit better only up to a point, and some only match the function at the single point chosen to center the series on. To further understand this, we must analyze the interval of convergence.

Let

```
        ∞
       ---       n
P(z) = \     a  z
       /      n
       ---
       n=0
```

be a power series. Then there is an extended real number R (0 ≤ R ≤ ∞) such that:

1.) P(z) converges for all z ∈ ℂ such that |z| < R
and
2.) P(z) diverges for all z ∈ ℂ such that |z| > R

Now that we have that established, just how do we find that number R? Going back to our basic knowledge of mathematical series, we have a veritable cornucopia of options for testing for convergence. Some of the best for power series are the alternating series test (useful when the terms alternate signs, as in sin(x) and cos(x)), and there's always the good old Ratio Test. The Ratio Test is usually the best choice for Taylor Series because they contain exponentials and/or factorials. Let's now find the interval of convergence for ƒ(x) = e^x using the Ratio Test.

```
      |    n+1        |
      |   x       n!  |
lim   |  ------ * --- |  <  1
n→∞   |  (n+1)!    n  |
      |           x   |
```
```
      |       |
lim   |   x   |
n→∞   | ----- |  <  1
      | (n+1) |
```

Thus, we get 0 < 1. Since this is ALWAYS true, our R is ∞! This case means that the fit of the Taylor Polynomial will increasingly get better as more polynomial terms are added. As the number of terms approaches infinity, the EXACT function will appear!
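Here is a tiny numeric illustration (Python, my own sketch) of why the ratio test comes out to 0: the ratio of consecutive terms of the e^x series simplifies to |x|/(n + 1), which falls below 1 and keeps shrinking no matter how large x is:

```python
# Term ratio for the e^x series: (|x|^(n+1)/(n+1)!) / (|x|^n/n!) = |x|/(n+1).
# Even for a huge x it eventually falls below 1 and shrinks toward 0.
x = 100.0
ratios = [x / (n + 1) for n in range(0, 1000, 100)]

assert all(a > b for a, b in zip(ratios, ratios[1:]))  # strictly decreasing
assert ratios[-1] < 1                                  # already below 1 by n = 900
```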

There are two other cases that arise:

A.) The interval of convergence is |x| ≤ k. This is the case mentioned earlier where an increasing number of polynomial terms can be added, but a point is reached where adding more terms does not make a better fit. This is because the series diverges from the actual graph of the function beyond x = k and x = -k.

B.) The interval of convergence is the single point x = a. This is the last case mentioned above, where the actual graph of the function only matches the Taylor Polynomial where the two graphs intersect. This occurs at the point where the Taylor Polynomial was centered, x = a. Such a series is only good for calculating the value of the function at that one point.

Taylor series were actually discovered by James Gregory, who published the Maclaurin series for tan x, sec x, arctan x and arcsec x. Independently, Nicolaus Mercator discovered the Maclaurin series for log(1 + x).

Then in 1715, Brook Taylor came along and published Methodus incrementorum directa et inversa, repeating Gregory's earlier work. However, Taylor's book was not well known until Colin Maclaurin quoted him in Treatise of Fluxions in 1742. Thus general polynomial power series came to be known as Taylor series, and Taylor series around zero came to be known as Maclaurin series.

On the other hand, Maclaurin invented the method for solving linear equations which is now called Cramer's rule.

While all the previous writeups have concentrated on the fact that you can calculate a Taylor series by calculating the appropriate derivatives, this is often not the best solution, especially if you just need a few terms. After all, you would be finding the nth derivative of f as a function of x, when all you need is the nth derivative of f evaluated at a. Seems to me like you are wasting a lot of information and time! It is far easier if you forget that the coefficients can be obtained by differentiation.

In real life, when you are working with Taylor series you often do not need (or may not be able to obtain) the general formula for the series. Instead you will only have the first n terms. Before we go any further, a few basic facts and theorems on Taylor series (which I will not prove). Everything will be centered at 0.

In all the following, f will be an n-times continuously differentiable function.

Saying that I have an nth degree Taylor series centered at 0 of f means that I have real numbers a_0, a_1, a_2, ..., a_n such that f(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n + o(x^n).
The important part here is o(x^n). This means "something which is very small compared to x^n when x is small". To be entirely precise it means a function g such that g/x^n tends to 0 when x tends to 0. More details on this in little o notation.

The little o notation is a little ambiguous. Depending on the context it can mean the set of such functions, or it can mean one of those functions. The use of the little o notation makes these local Taylor series, I am only giving information on what happens round a point. If I were to provide an upper bound on the value of the little o (using Lagrange's or the integral remainder for example) then I would have a global Taylor series.

Some theorems

• The first important theorem is the uniqueness of the coefficients. This means that:
• There is only one set of coefficients for, say, my 3rd degree expansion
• If I later decide that I want a 5th degree series, I haven't wasted my time, since the coefficients up to the 3rd degree are the same as the coefficients of my 3rd degree expansion.
• The second important theorem is the integration theorem. This says that if I have a Taylor series for f then I can integrate that Taylor series term by term, and the result will be a Taylor series for an antiderivative of f. As you may have guessed this is very useful, as our integrated series has degree one higher than the original series. You might like to note that when you integrate o(x^n) you get o(x^(n+1)).
• The third theorem is that you can multiply and add Taylor series together, but with one important caveat: you must watch the o(x^n) carefully. It's not that difficult to understand why: if after doing your adding up, your series ends with o(x^4) + x^5 + o(x^6), then something is obviously wrong: x is small, so x^5 is smaller than x^4. So we actually have three things that are each very small compared to x^4, and their sum is still something small compared to x^4. All the above terms sum to o(x^4). (If you want to show this properly, use the definition of o(x^n) in terms of limits.) In practice, what you need to do is find the o(x^n) with the smallest n, and drop everything of higher degree.

You also need to know how to multiply the little o's. Two easy rules:

• o(x^n) * o(x^m) = o(x^(n+m))
• x^n * o(x^m) = o(x^(m+n))
To be 100% correct, the equal signs should be "is a member of" signs, with the o's on the left denoting elements of the set, and the o's on the right denoting sets.
• The final theorem is that if I have a series for f and a series for g then I can get a series for f(g) by replacing the x's in the series for f by the series for g. When doing this one must watch the little o's carefully. There is also a more subtle requirement but I won't go into that.
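These bookkeeping rules are exactly what you would implement to do truncated series arithmetic on a computer. A small Python sketch (my own illustration, using exact fractions): multiplying two series coefficient lists and dropping every product of degree higher than n is precisely the o(x^n) bookkeeping, as a sin·cos example shows:

```python
from fractions import Fraction

def mul(p, q, n):
    """Multiply two truncated series (lists of coefficients, index = power),
    dropping every product of degree > n -- that's the o(x^n) bookkeeping."""
    out = [Fraction(0)] * (n + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= n:
                out[i + j] += a * b
    return out

# sin x = x - x^3/6 + o(x^3),  cos x = 1 - x^2/2 + o(x^3)
sin3 = [Fraction(0), Fraction(1), Fraction(0), Fraction(-1, 6)]
cos3 = [Fraction(1), Fraction(0), Fraction(-1, 2), Fraction(0)]

# sin x * cos x = sin(2x)/2 = x - (2/3) x^3 + o(x^3)
prod = mul(sin3, cos3, 3)
assert prod == [0, 1, 0, Fraction(-2, 3)]
```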

Now that we have got this out of the way, an example. I'm going to choose tan. First of all we note that the first term is easy to guess. If we take a 0 degree series, we have f(x) = a_0 + o(1), i.e. a_0 is the limit of f as x tends to 0. In the case of tan we know this to be 0.

We have tan(x) = 0 + o(1)
You probably know that tan has the property that tan'(x) = (tan x)^2 + 1. We can use this to our advantage:
Let's find a Taylor series for (tan x)^2 + 1. Squaring the above gives (tan x)^2 + 1 = 1 + o(1)*o(1) = 1 + o(1) (using our theorems on multiplying and adding).
We can then use the integration theorem:
tan x = a + x + o(x), where a is a constant that comes from integration. However from our uniqueness theorem we know that a = 0.

Hence tan x = x + o(x).
Nothing stops us from continuing:

```
(tan x)^2 + 1 = x^2 + 2x*o(x) + o(x)*o(x) + 1
              = x^2 + 1 + o(x^2)

Integrating gives tan x = x + x^3/3 + o(x^3).
```
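This square-add-one-integrate bootstrap is mechanical enough to hand to a computer. A Python sketch (illustrative; `bootstrap_step` is a name I invented) runs the loop with exact fractions and recovers the familiar coefficients:

```python
from fractions import Fraction

def bootstrap_step(t, n):
    """Given coefficients t for tan x, form (tan x)^2 + 1 truncated at
    degree n, then integrate term by term to gain one degree."""
    sq = [Fraction(0)] * (n + 1)
    for i, a in enumerate(t):
        for j, b in enumerate(t):
            if i + j <= n:
                sq[i + j] += a * b
    sq[0] += 1                                    # (tan x)^2 + 1
    # integrate: the x^k coefficient c becomes the x^(k+1) coefficient c/(k+1)
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(sq)]

t = [Fraction(0)]                                 # tan x = 0 + o(1)
for n in range(6):
    t = bootstrap_step(t, n)

# tan x = x + x^3/3 + 2x^5/15 + o(x^6)
assert t[:6] == [0, 1, 0, Fraction(1, 3), 0, Fraction(2, 15)]
```

Each pass through the loop is exactly one round of the hand derivation above, so the x^5 coefficient 2/15 comes out for free.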

We could of course continue, and it would not be very difficult to work out a recursion relationship for these coefficients. The beauty of this method is that I have not calculated anything that I did not use entirely. In case you are not convinced, if you are thinking that I was lucky because of the special property of tan, I propose another example: 1/(1 + tan x). You can probably see that differentiating this is quickly going to become a mess.

I will use the series for 1/(1+x) = 1 - x + x^2 - ... + (-1)^n x^n + o(x^n)

```
tan x = x + o(x).
1/(1+x) = 1 - x + o(x)
1/(1+tan x) = 1 - (x + o(x)) + o(x + o(x)) = 1 - x + o(x)
```
There is no point in adding extra terms from the series of 1/(1+x), as they will be lost because of the o(x) in the second term: if we want more terms we must improve our series for tan.
```
tan x = x + x^3/3 + o(x^3)
1/(1+x) = 1 - x + x^2 + o(x^2)
1/(1+tan x) = 1 - (x + x^3/3 + o(x^3)) + (x + x^3/3 + o(x^3))^2 + o((x + x^3/3 + o(x^3))^2)
```
We can see that we are going to get a o(x2), so when evaluating this we can forget all terms with higher powers.
Hence 1/(1+tan x) = 1 - x + x^2 + o(x^2)

As you can see, getting the extra term was fairly painless. I won't go into the details, but it took me only a minute or so (including the time to get the next term in the series for tan) with pen and paper to get the next term, -4x^3/3, whereas calculating the 3rd derivative of 1/(1+tan x) would have taken me forever and I would probably have made a mistake somewhere (I calculated it with a CAS package and it ain't pretty...). I would probably feel like killing myself too, watching all those carefully calculated terms disappear when I set x to 0... Verifying that this result is the same as when calculating the derivatives is left as an exercise to the very patient reader. If you're still not convinced, then you might want to try calculating 1/(1 - 1/(1+tan x)). Using this method, the calculation is almost identical to the previous one, but obtaining the coefficients by differentiation is a mess.
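If you want to check the -4x^3/3 without the pain, the substitution is easy to mechanize with the same truncated arithmetic. A Python sketch (illustrative, exact fractions) composes 1/(1+u) with u = tan x to degree 3:

```python
from fractions import Fraction

def mul(p, q, n):
    """Series product truncated at degree n (higher terms vanish into o(x^n))."""
    out = [Fraction(0)] * (n + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= n:
                out[i + j] += a * b
    return out

N = 3
tan = [Fraction(0), Fraction(1), Fraction(0), Fraction(1, 3)]  # x + x^3/3 + o(x^3)

# 1/(1 + u) = 1 - u + u^2 - u^3 + o(u^3), with u = tan x (no constant term,
# so o(u^3) = o(x^3) and truncating at degree 3 is legitimate)
u2 = mul(tan, tan, N)
u3 = mul(u2, tan, N)
series = [(1 if k == 0 else 0) - tan[k] + u2[k] - u3[k] for k in range(N + 1)]

assert series == [1, -1, 1, Fraction(-4, 3)]  # 1 - x + x^2 - 4x^3/3 + o(x^3)
```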

As you can see this is a quick and easy way of finding Taylor series, even if it requires a little more theory. You can even use this method for calculating derivatives at specific points, for example if you needed the 3rd derivative of 1/(1+tan x) at 0.

In a more general context, Taylor series have many uses, basically anytime an approximation for a function is needed, for example for finding a power series solution to an awkward differential equation. They can also be used for finding limits, e.g. (sin x)/x at 0. The Taylor series for this is 1 + o(x), which shows that the limit is indeed 1.

All in all Taylor series are an extremely useful mathematical tool to have around.
