
Section 7.6 Taylor Polynomials and Taylor Series

Motivating Questions
  • What is a Taylor polynomial? For what purposes are Taylor polynomials used?

  • What is a Taylor series?

  • What is the connection between power series and Taylor series?

  • How do we determine the accuracy when we use a Taylor polynomial to approximate a function?

Polynomial functions are the simplest possible functions in mathematics, in part because they require only addition and multiplication to evaluate. Consequently, in practical applications, it is often useful to approximate complicated functions using polynomials. In this section we will learn how to obtain polynomial approximations of functions, and how to determine how good an approximation is.

As an example, consider the geometric series

\begin{equation} 1 + x + x^2 + \cdots + x^k + \cdots = \sum_{k=0}^{\infty} x^k\text{.}\label{E-geomx}\tag{7.21} \end{equation}

Here we see something very interesting: because a geometric series converges whenever its ratio \(r\) satisfies \(|r|\lt 1\text{,}\) and the sum of a convergent geometric series is \(\frac{a}{1-r}\text{,}\) we can say that for \(|x| \lt 1\text{,}\)

\begin{equation} 1 + x + x^2 + \cdots + x^k + \cdots = \frac{1}{1-x}\text{.}\label{E-geomxsummed}\tag{7.22} \end{equation}

Equation (7.22) states that the non-polynomial function \(\frac{1}{1-x}\) on the right is equal to the infinite polynomial expression on the left. Because the terms on the left get very small as \(k\) gets large, we can truncate the series and say, for example, that

\begin{equation*} 1 + x + x^2 + x^3 \approx \frac{1}{1-x} \end{equation*}

for small values of \(x\text{.}\) This shows one way that a polynomial function can be used to approximate a non-polynomial function; such approximations are one of the main themes in this section and the next.
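To see this numerically, here is a short Python sketch (an illustration we add here, not part of the original text) comparing the cubic truncation with \(\frac{1}{1-x}\) at a few small values of \(x\text{.}\)

```python
# Illustrative check: how well does 1 + x + x^2 + x^3 approximate 1/(1 - x)
# for small x?  (The sample values of x are chosen for illustration only.)
for x in [0.01, 0.1, 0.3]:
    truncated = 1 + x + x**2 + x**3
    exact = 1 / (1 - x)
    print(f"x = {x}: truncation = {truncated:.6f}, 1/(1-x) = {exact:.6f}, "
          f"error = {abs(exact - truncated):.2e}")
```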

In Example 7.52, we begin our exploration of approximating functions with polynomials.

Example 7.52

Example 7.20 showed how we can approximate the number \(e\) using linear, quadratic, and other polynomial functions; we then used similar ideas in Example 7.35 to approximate \(\ln(2)\text{.}\) In this example, we review and extend the process to find the best quadratic approximation to the exponential function \(e^x\) around the origin. Let \(f(x) = e^x\) throughout this example.

  1. Find a formula for \(P_1(x)\text{,}\) the linearization of \(f(x)\) at \(x=0\text{.}\) (We label this linearization \(P_1\) because it is a first degree polynomial approximation.) Recall that \(P_1(x)\) is a good approximation to \(f(x)\) for values of \(x\) close to \(0\text{.}\) Plot \(f\) and \(P_1\) near \(x=0\) to illustrate this fact.

  2. Since \(f(x) = e^x\) is not linear, the linear approximation eventually is not a very good one. To obtain better approximations, we want to develop a different approximation that bends to make it more closely fit the graph of \(f\) near \(x=0\text{.}\) To do so, we add a quadratic term to \(P_1(x)\text{.}\) In other words, we let

    \begin{equation*} P_2(x) = P_1(x) + c_2x^2 \end{equation*}

    for some real number \(c_2\text{.}\) We need to determine the value of \(c_2\) that makes the graph of \(P_2(x)\) best fit the graph of \(f(x)\) near \(x=0\text{.}\)

    Remember that \(P_1(x)\) was a good linear approximation to \(f(x)\) near \(0\text{;}\) this is because \(P_1(0) = f(0)\) and \(P'_1(0) = f'(0)\text{.}\) It is therefore reasonable to seek a value of \(c_2\) so that

    \begin{align*} P_2(0) \amp = f(0)\text{,} \amp P'_2(0) \amp = f'(0)\text{,} \amp \text{and }P''_2(0) \amp = f''(0)\text{.} \end{align*}

    Remember, we are letting \(P_2(x) = P_1(x) + c_2x^2\text{.}\)

    1. Calculate \(P_2(0)\) to show that \(P_2(0) = f(0)\text{.}\)

    2. Calculate \(P'_2(0)\) to show that \(P'_2(0) = f'(0)\text{.}\)

    3. Calculate \(P''_2(x)\text{.}\) Then find a value for \(c_2\) so that \(P''_2(0) = f''(0)\text{.}\)

    4. Explain why the condition \(P''_2(0) = f''(0)\) will put an appropriate bend in the graph of \(P_2\) to make \(P_2\) fit the graph of \(f\) around \(x=0\text{.}\)

Solution
  1. We know that

    \begin{equation*} P_1(x) = f(0) + f'(0)x = 1+x\text{.} \end{equation*}

    Since \(P_1(0) = f(0) = 1\) and \(P'_1(0) = f'(0) = 1\text{,}\) the graphs of \(P_1\) and \(f\) agree at \(x=0\) and have the same slope at \(x=0\) (which means they go in the same direction at \(x=0\)). This is why \(P_1(x)\) is a good approximation to \(f(x)\) for values of \(x\) close to \(0\text{.}\)

    1. Since

      \begin{equation*} P_2(x) = P_1(x) + c_2x^2 = f(0) + f'(0)x + c_2x^2 \end{equation*}

      we have that

      \begin{equation*} P_2(0) = 1 = f(0) \end{equation*}

      as desired.

    2. A simple calculation shows \(P'_2(x) = P'_1(x) + 2c_2x\text{.}\) So \(P'_2(0) = P'_1(0) = 1 = f'(0)\) as desired.

    3. A simple calculation shows \(P''_2(x) = 2c_2\text{.}\) So \(P''_2(0) = 2c_2\text{.}\) To have \(P''_2(0) = f''(0)\) we must have \(2c_2 = f''(0)\) or \(c_2 = \frac{f''(0)}{2} = \frac{1}{2}\text{.}\)

    4. The second derivative of a function tells us the concavity of the function. Concavity measures how the slopes of the tangent lines to the graph of the function are changing. This tells us how much bend there is in the graph. So if \(P''_2(0) = f''(0)\text{,}\) then \(P_2\) will have the same bend in it at \(x=0\) as \(f\) does. This will make the graph of \(P_2\) mold to the graph of \(f\) around \(x=0\text{.}\)

Subsection Taylor Polynomials

Example 7.52 illustrates the first steps in the process of approximating functions with polynomials. Using this process we can approximate trigonometric, exponential, logarithmic, and other nonpolynomial functions as closely as we like (for certain values of \(x\)) with polynomials. This is extraordinarily useful because it allows us to calculate values of these functions to whatever precision we like using only the operations of addition, subtraction, multiplication, and division, which can easily be programmed on a computer.

We next extend the approach in Example 7.52 to arbitrary functions at arbitrary points. Let \(f\) be a function that has as many derivatives as we need at a point \(x=a\text{.}\) Recall that \(P_1(x)\) is the tangent line to \(f\) at \((a,f(a))\) and is given by the formula

\begin{equation*} P_1(x) = f(a) + f'(a)(x-a)\text{.} \end{equation*}

\(P_1(x)\) is the linear approximation to \(f\) near \(a\) that has the same slope and function value as \(f\) at the point \(x = a\text{.}\)

We next want to find a quadratic approximation

\begin{equation*} P_2(x) = P_1(x) + c_2(x-a)^2 \end{equation*}

so that \(P_2(x)\) more closely models \(f(x)\) near \(x=a\text{.}\) Consider the following calculations of the values and derivatives of \(P_2(x)\text{:}\)

\begin{align*} P_2(x) \amp = P_1(x) + c_2(x-a)^2 \amp P_2(a) \amp = P_1(a) = f(a)\\ P'_2(x) \amp = P'_1(x) + 2c_2(x-a) \amp P'_2(a) \amp = P'_1(a) = f'(a)\\ P''_2(x) \amp = 2c_2 \amp P''_2(a) \amp = 2c_2\text{.} \end{align*}

To make \(P_2(x)\) fit \(f(x)\) better than \(P_1(x)\text{,}\) we want \(P_2(x)\) and \(f(x)\) to have the same concavity at \(x=a\text{,}\) in addition to having the same slope and function value. That is, we want to have

\begin{equation*} P''_2(a) = f''(a)\text{.} \end{equation*}

This implies that

\begin{equation*} 2c_2 = f''(a) \end{equation*}

and thus

\begin{equation*} c_2 = \frac{f''(a)}{2}\text{.} \end{equation*}

Therefore, the quadratic approximation \(P_2(x)\) to \(f\) centered at \(x=a\) is

\begin{equation*} P_2(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2\text{.} \end{equation*}

This approach extends naturally to polynomials of higher degree. We define polynomials

\begin{align*} P_3(x) \amp = P_2(x) + c_3(x-a)^3\text{,}\\ P_4(x) \amp = P_3(x) + c_4(x-a)^4\text{,}\\ P_5(x) \amp = P_4(x) + c_5(x-a)^5\text{,} \end{align*}

and in general

\begin{equation*} P_n(x) = P_{n-1}(x) + c_n(x-a)^n\text{.} \end{equation*}

The defining property of these polynomials is that for each \(n\text{,}\) \(P_n(x)\) and all its first \(n\) derivatives must agree with those of \(f\) at \(x = a\text{.}\) In other words we require that

\begin{equation*} P^{(k)}_n(a) = f^{(k)}(a) \end{equation*}

for all \(k\) from 0 to \(n\text{.}\)

To see the conditions under which this happens, suppose

\begin{equation*} P_n(x) = c_0 + c_1(x-a) + c_2(x-a)^2 + \cdots + c_n(x-a)^n\text{.} \end{equation*}

Then

\begin{align*} P^{(0)}_n(a) \amp = c_0\\ P^{(1)}_n(a) \amp = c_1\\ P^{(2)}_n(a) \amp = 2c_2\\ P^{(3)}_n(a) \amp = (2)(3)c_3\\ P^{(4)}_n(a) \amp = (2)(3)(4)c_4\\ P^{(5)}_n(a) \amp = (2)(3)(4)(5)c_5 \end{align*}

and, in general,

\begin{equation*} P^{(k)}_n(a) = (2)(3)(4) \cdots (k-1)(k)c_k = k!c_k\text{.} \end{equation*}

So having \(P^{(k)}_n(a) = f^{(k)}(a)\) means that \(k!c_k = f^{(k)}(a)\) and therefore

\begin{equation*} c_k = \frac{f^{(k)}(a)}{k!} \end{equation*}

for each value of \(k\text{.}\) Using this expression for \(c_k\text{,}\) we have found the formula for the polynomial approximation of \(f\) that we seek. Such a polynomial is called a Taylor polynomial.

Taylor Polynomials

The \(n\)th order Taylor polynomial of \(f\) centered at \(x = a\) is given by

\begin{align*} P_n(x) =\mathstrut \amp f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n\\ =\mathstrut \amp \sum_{k=0}^n \frac{f^{(k)}(a)}{k!}(x-a)^k\text{.} \end{align*}

This degree \(n\) polynomial approximates \(f(x)\) near \(x=a\) and has the property that \(P_n^{(k)}(a) = f^{(k)}(a)\) for \(k = 0, 1, \ldots, n\text{.}\)
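For readers who want to experiment, the following SymPy sketch (our own illustration; the helper name taylor_polynomial is ours) builds \(P_n(x)\) directly from this definition.

```python
# A minimal sketch: construct the nth order Taylor polynomial
# P_n(x) = sum_{k=0}^{n} f^{(k)}(a)/k! * (x - a)^k using SymPy.
import sympy as sp

x = sp.symbols('x')

def taylor_polynomial(f, a, n):
    """Return the nth order Taylor polynomial of the expression f (in x) centered at a."""
    return sum(f.diff(x, k).subs(x, a) / sp.factorial(k) * (x - a)**k
               for k in range(n + 1))

# Example: the second order Taylor polynomial of e^x centered at 0 is 1 + x + x^2/2.
print(sp.expand(taylor_polynomial(sp.exp(x), 0, 2)))
```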

Example 7.53

Determine the third order Taylor polynomial for \(f(x) = e^x\text{,}\) as well as the general \(n\)th order Taylor polynomial for \(f\) centered at \(x=0\text{.}\)

Solution

We know that \(f'(x) = e^x\) and so \(f''(x) = e^x\) and \(f'''(x) = e^x\text{.}\) Thus,

\begin{equation*} f(0) = f'(0) = f''(0) = f'''(0) = 1\text{.} \end{equation*}

So the third order Taylor polynomial of \(f(x) = e^x\) centered at \(x=0\) is

\begin{align*} P_3(x) \amp = f(0) + f'(0)(x-0) + \frac{f''(0)}{2!}(x-0)^2 + \frac{f'''(0)}{3!}(x-0)^3\\ \amp = 1 + x + \frac{x^2}{2} + \frac{x^3}{6}\text{.} \end{align*}

In general, for the exponential function \(f\) we have \(f^{(k)}(x) = e^x\) for every positive integer \(k\text{.}\) Thus, the \(k\)th term in the \(n\)th order Taylor polynomial for \(f(x)\) centered at \(x=0\) is

\begin{equation*} \frac{f^{(k)}(0)}{k!}(x-0)^k = \frac{1}{k!}x^k\text{.} \end{equation*}

Therefore, the \(n\)th order Taylor polynomial for \(f(x) = e^x\) centered at \(x=0\) is

\begin{equation*} P_n(x) = 1+x+\frac{x^2}{2!} + \cdots + \frac{1}{n!}x^n = \sum_{k=0}^n \frac{x^k}{k!}\text{.} \end{equation*}
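A quick numeric check (ours, in plain Python) shows these partial sums approaching \(e\) when evaluated at \(x = 1\text{.}\)

```python
# Evaluate P_n(1) = sum_{k=0}^{n} 1/k! and compare with e.
import math

def P(n, x):
    return sum(x**k / math.factorial(k) for k in range(n + 1))

for n in [1, 2, 4, 8]:
    print(f"P_{n}(1) = {P(n, 1):.10f},  error = {abs(math.e - P(n, 1)):.2e}")
```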
Example 7.54

We have just seen that the \(n\)th order Taylor polynomial centered at \(a = 0\) for the exponential function \(e^x\) is

\begin{equation*} \sum_{k=0}^{n} \frac{x^k}{k!}\text{.} \end{equation*}

In this example, we determine small order Taylor polynomials for several other familiar functions, and look for general patterns.

  1. Let \(f(x) = \frac{1}{1-x}\text{.}\)

    1. Calculate the first four derivatives of \(f(x)\) at \(x=0\text{.}\) Then find the fourth order Taylor polynomial \(P_4(x)\) for \(\frac{1}{1-x}\) centered at \(0\text{.}\)

    2. Based on your results from part (i), determine a general formula for \(f^{(k)}(0)\text{.}\)

  2. Let \(f(x) = \cos(x)\text{.}\)

    1. Calculate the first four derivatives of \(f(x)\) at \(x=0\text{.}\) Then find the fourth order Taylor polynomial \(P_4(x)\) for \(\cos(x)\) centered at \(0\text{.}\)

    2. Based on your results from part (i), find a general formula for \(f^{(k)}(0)\text{.}\) (Think about how \(k\) being even or odd affects the value of the \(k\)th derivative.)

  3. Let \(f(x) = \sin(x)\text{.}\)

    1. Calculate the first four derivatives of \(f(x)\) at \(x=0\text{.}\) Then find the fourth order Taylor polynomial \(P_4(x)\) for \(\sin(x)\) centered at \(0\text{.}\)

    2. Based on your results from part (i), find a general formula for \(f^{(k)}(0)\text{.}\) (Think about how \(k\) being even or odd affects the value of the \(k\)th derivative.)

Answer
    1. \(f^{(k)}(0) = k! \text{.}\)

    2. \begin{equation*} P_n(x) = \sum_{k=0}^n x^k\text{.} \end{equation*}
    1. \(f^{(k)}(0) = 0\) if \(k\) is odd, and \(f^{(2k)}(0) = (-1)^k\text{.}\)

    2. \(P_n(x) = 1 - \frac{x^2}{2} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots + (-1)^{n/2}\frac{x^n}{n!}\) if \(n\) is even and \(P_n(x) = 1 - \frac{x^2}{2} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots + (-1)^{(n-1)/2}\frac{x^{n-1}}{(n-1)!}\) if \(n\) is odd.

    1. \(f^{(k)}(0) = 0\) if \(k\) is even and \(f^{(2k+1)}(0) = (-1)^k\text{.}\)

    2. \(P_n(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots + (-1)^{(n-1)/2}\frac{x^n}{n!}\) if \(n\) is odd and \(P_n(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots + (-1)^{n/2+1}\frac{x^{n-1}}{(n-1)!}\) if \(n\) is even.

Solution
    1. The first four derivatives of \(f(x)\) at \(x=0\) are

      \begin{align*} f(x) \amp = \frac{1}{1-x} \amp f(0) \amp = 1\\ f'(x) \amp = \frac{1}{(1-x)^2} \amp f'(0) \amp = 1\\ f''(x) \amp = \frac{2}{(1-x)^3} \amp f''(0) \amp = 2\\ f^{(3)}(x) \amp = \frac{3!}{(1-x)^4} \amp f^{(3)}(0) \amp = 3!\\ f^{(4)}(x) \amp = \frac{4!}{(1-x)^5} \amp f^{(4)}(0) \amp = 4!\text{.} \end{align*}

      It appears that the pattern is

      \begin{equation*} f^{(k)}(0) = k!\text{.} \end{equation*}
    2. The \(n\)th order Taylor polynomial for \(f\) at \(x=0\) is

      \begin{equation*} \sum_{k=0}^n \frac{f^{(k)}(0)}{k!} x^k = \sum_{k=0}^n \frac{k!}{k!} x^k = \sum_{k=0}^n x^k\text{.} \end{equation*}

      This makes sense since \(f(x)\) is the sum of the geometric series with ratio \(x\text{,}\) so the \(n\)th order Taylor polynomial should just be the \(n\)th partial sum of this geometric series.

    1. The first four derivatives of \(f(x)\) at \(x=0\) are

      \begin{align*} f(x) \amp = \cos(x) \amp f(0) \amp = 1\\ f'(x) \amp = -\sin(x) \amp f'(0) \amp = 0\\ f''(x) \amp = -\cos(x) \amp f''(0) \amp = -1\\ f^{(3)}(x) \amp = \sin(x) \amp f^{(3)}(0) \amp = 0\\ f^{(4)}(x) \amp = \cos(x) \amp f^{(4)}(0) \amp = 1\text{.} \end{align*}

      It appears that the odd derivatives of \(f(x)\) are all plus or minus \(\sin(x)\) and so have values of 0 at \(x=0\text{,}\) and the even derivatives are \(\pm \cos(x)\) and have alternating values of 1 and \(-1\) at \(x=0\text{.}\) Since the even numbers can be represented in the form \(2k\) where \(k\) is an integer, we have \(f^{(k)}(0) = 0\) if \(k\) is odd and \(f^{(2k)}(0) = (-1)^k\text{.}\)

    2. Based on the previous part of this problem the \(n\)th order Taylor polynomial for \(\cos(x)\) is

      \begin{equation*} 1 - \frac{x^2}{2} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots + (-1)^{n/2}\frac{x^n}{n!} \end{equation*}

      if \(n\) is even and

      \begin{equation*} 1 - \frac{x^2}{2} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots + (-1)^{(n-1)/2}\frac{x^{n-1}}{(n-1)!} \end{equation*}

      if \(n\) is odd.

    1. The first four derivatives of \(f(x)\) at \(x=0\) are

      \begin{align*} f(x) \amp = \sin(x) \amp f(0) \amp = 0\\ f'(x) \amp = \cos(x) \amp f'(0) \amp = 1\\ f''(x) \amp = -\sin(x) \amp f''(0) \amp = 0\\ f^{(3)}(x) \amp = -\cos(x) \amp f^{(3)}(0) \amp = -1\\ f^{(4)}(x) \amp = \sin(x) \amp f^{(4)}(0) \amp = 0\text{.} \end{align*}

      It appears that the even derivatives of \(f(x)\) are all plus or minus \(\sin(x)\) and so have values of 0 at \(x=0\text{,}\) and the odd derivatives are \(\pm \cos(x)\) and have alternating values of 1 and \(-1\) at \(x=0\text{.}\) Since the odd numbers can be represented in the form \(2k+1\) where \(k\) is an integer, we have \(f^{(k)}(0) = 0\) if \(k\) is even and \(f^{(2k+1)}(0) = (-1)^k\text{.}\)

    2. Based on the previous part of this problem the \(n\)th order Taylor polynomial for \(\sin(x)\) is

      \begin{equation*} x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots + (-1)^{(n-1)/2}\frac{x^n}{n!} \end{equation*}

      if \(n\) is odd and

      \begin{equation*} x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots + (-1)^{n/2+1}\frac{x^{n-1}}{(n-1)!} \end{equation*}

      if \(n\) is even.

It is possible that an \(n\)th order Taylor polynomial is not a polynomial of degree \(n\text{;}\) that is, the order of the approximation can be different from the degree of the polynomial. For example, from our work in Example 7.54 it follows that the second order Taylor polynomial \(P_2(x)\) centered at \(0\) for \(\sin(x)\) is \(P_2(x) = x\text{.}\) In this case, the second order Taylor polynomial is a degree 1 polynomial.

Subsection Taylor Series

In Example 7.54 we saw that the fourth order Taylor polynomial \(P_4(x)\) for \(\sin(x)\) centered at \(0\) is

\begin{equation*} P_4(x) = x - \frac{x^3}{3!}\text{.} \end{equation*}

The pattern we found for the derivatives \(f^{(k)}(0)\) describes the higher-order Taylor polynomials, e.g.,

\begin{align*} P_5(x) \amp= x - \frac{x^3}{3!} + \frac{x^5}{5!}\text{,}\\ P_7(x) \amp= x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!}\text{,}\\ P_9(x) \amp= x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \frac{x^9}{9!}\text{,} \end{align*}

and so on. It is instructive to consider the graphical behavior of these functions; Figure 7.55 shows the graphs of a few of the Taylor polynomials centered at \(0\) for the sine function.

Figure 7.55 The order 1, 5, 7, and 9 Taylor polynomials centered at \(x = 0\) for \(f(x) = \sin(x)\text{.}\)
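A plot like Figure 7.55 can be reproduced with the sketch below (our own illustration, assuming NumPy and Matplotlib are available; the helper name sin_taylor is ours).

```python
# Plot sin(x) together with its Taylor polynomials of order 1, 5, 7, and 9 centered at 0.
import math
import numpy as np
import matplotlib.pyplot as plt

def sin_taylor(x, n):
    """Evaluate the nth order Taylor polynomial of sin centered at 0."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n // 2 + 1) if 2*k + 1 <= n)

xs = np.linspace(-3 * np.pi, 3 * np.pi, 400)
plt.plot(xs, np.sin(xs), 'k', label='sin(x)')
for n in [1, 5, 7, 9]:
    plt.plot(xs, [sin_taylor(x, n) for x in xs], label=f'P_{n}')
plt.ylim(-3, 3)
plt.legend()
plt.show()
```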

Notice that \(P_1(x)\) is close to the sine function only for values of \(x\) that are close to \(0\text{,}\) but as we increase the degree of the Taylor polynomial the Taylor polynomials provide a better fit to the graph of the sine function over larger intervals. This illustrates the general behavior of Taylor polynomials: for any sufficiently well-behaved function, the sequence \(\{P_n(x)\}\) of Taylor polynomials converges to the function \(f\) on larger and larger intervals (though those intervals may not necessarily increase without bound). If the Taylor polynomials ultimately converge to \(f\) on its entire domain, we write

\begin{equation*} f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}(x-a)^k\text{.} \end{equation*}
Taylor Series

Let \(f\) be a function all of whose derivatives exist at \(x=a\text{.}\) The Taylor series for \(f\) centered at \(x=a\) is the series \(T_f(x)\) defined by

\begin{equation*} T_f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}(x-a)^k\text{.} \end{equation*}

In the special case where \(a=0\text{,}\) the Taylor series is also called the Maclaurin series for \(f\text{.}\)

From Example 7.53 we know the \(n\)th order Taylor polynomial centered at \(0\) for the exponential function \(e^x\text{;}\) thus, the Maclaurin series for \(e^x\) is

\begin{equation*} \sum_{k=0}^{\infty} \frac{x^k}{k!}\text{.} \end{equation*}
Example 7.56

In Example 7.54 we determined small order Taylor polynomials for a few familiar functions, and also found general patterns in the derivatives evaluated at \(0\text{.}\) Use that information to write the Taylor series centered at \(0\) for the following functions.

  1. \(f(x) = \frac{1}{1-x}\)

  2. \(f(x) = \cos(x)\) (You will need to carefully consider how to indicate that many of the coefficients are 0. Think about a general way to represent an even integer.)

  3. \(f(x) = \sin(x)\) (You will need to carefully consider how to indicate that many of the coefficients are \(0\text{.}\) Think about a general way to represent an odd integer.)

  4. Determine the \(n\)th order Taylor polynomial for \(f(x) = \frac{1}{1-x}\) centered at \(x=0\text{.}\)

Answer
  1. \(P(x) = 1 + x + x^2 + x^3 + \cdots + x^n + \cdots\)

  2. \(P(x) = 1 - \frac{1}{2!}x^2 + \frac{1}{4!}x^4 - \cdots + (-1)^{n}\frac{1}{(2n)!}x^{2n} + \cdots \text{.}\)

  3. \(P(x) = x - \frac{1}{3!}x^3 + \frac{1}{5!}x^5 - \cdots + (-1)^{n}\frac{1}{(2n+1)!}x^{2n+1} + \cdots \text{.}\)

  4. \(P_n(x) = 1 + x + x^2 + x^3 + \cdots + x^n\)

Solution
  1. For \(f(x) = \frac{1}{1-x}\text{,}\) its Taylor series is

    \begin{equation*} P(x) = 1 + x + x^2 + x^3 + \cdots + x^n + \cdots \end{equation*}
  2. For \(f(x) = \cos(x)\text{,}\) its Taylor series is

    \begin{equation*} P(x) = 1 - \frac{1}{2!}x^2 + \frac{1}{4!}x^4 - \cdots + (-1)^{n}\frac{1}{(2n)!}x^{2n} + \cdots\text{.} \end{equation*}
  3. For \(f(x) = \sin(x)\text{,}\) its Taylor series is

    \begin{equation*} P(x) = x - \frac{1}{3!}x^3 + \frac{1}{5!}x^5 - \cdots + (-1)^{n}\frac{1}{(2n+1)!}x^{2n+1} + \cdots\text{.} \end{equation*}
  4. For \(f(x) = \frac{1}{1-x}\text{,}\)

    \begin{equation*} P_n(x) = 1 + x + x^2 + x^3 + \cdots + x^n \end{equation*}
Example 7.57

Many of the examples we consider in this section are for Taylor polynomials and series centered at 0, but Taylor polynomials and series can be centered at any value of \(a\text{.}\) Here, we look at more examples of such Taylor polynomials and series.

  1. Let \(f(x) = \sin(x)\text{.}\) Find the Taylor polynomials up through order four of \(f\) centered at \(x = \frac{\pi}{2}\text{.}\) Then find the Taylor series for \(f(x)\) centered at \(x = \frac{\pi}{2}\text{.}\) Why should you have expected the result?

  2. Let \(f(x) = \ln(x)\text{.}\) Find the Taylor polynomials up through order four of \(f\) centered at \(x = 1\text{.}\) Then find the Taylor series for \(f(x)\) centered at \(x = 1\text{.}\)

Answer
  1. \begin{align*} P_1(x) &= 1 + 0\left(x - \frac{\pi}{2} \right) = 1\\ P_2(x) &= 1 + 0\left(x - \frac{\pi}{2} \right) - \frac{1}{2!}\left(x - \frac{\pi}{2} \right)^2 = 1-\frac{1}{2!}\left(x - \frac{\pi}{2} \right)^2\\ P_3(x) &= 1 + 0\left(x - \frac{\pi}{2} \right) - \frac{1}{2!}\left(x - \frac{\pi}{2} \right)^2 + \frac{0}{3!}\left(x - \frac{\pi}{2} \right)^3\\ &= 1-\frac{1}{2!}\left(x - \frac{\pi}{2} \right)^2 = P_2(x)\\ P_4(x) &= 1 + 0\left(x - \frac{\pi}{2} \right) - \frac{1}{2!}\left(x - \frac{\pi}{2} \right)^2 + \frac{0}{3!}\left(x - \frac{\pi}{2} \right)^3 + \frac{1}{4!}\left(x - \frac{\pi}{2} \right)^4\\ &= 1 - \frac{1}{2!}\left(x - \frac{\pi}{2} \right)^2 + \frac{1}{4!}\left(x - \frac{\pi}{2} \right)^4\\ P(x) &= 1 - \frac{1}{2!}\left(x - \frac{\pi}{2} \right)^2 + \frac{1}{4!}\left(x - \frac{\pi}{2} \right)^4 - \frac{1}{6!}\left(x - \frac{\pi}{2} \right)^6 + \cdots \end{align*}
  2. \begin{equation*} P_4(x) = 0 + 1(x-1) - \frac{1}{2!}(x-1)^2 + \frac{2}{3!}(x-1)^3 - \frac{6}{4!}(x-1)^4\text{.} \end{equation*}
    \begin{equation*} P(x) = 1(x-1) - \frac{1}{2}(x-1)^2 + \frac{1}{3}(x-1)^3 - \frac{1}{4}(x-1)^4 + \frac{1}{5}(x-1)^5 - \cdots \end{equation*}
Solution
  1. For \(f(x) = \sin(x)\text{,}\) \(f'(x) = \cos(x)\text{,}\) \(f''(x) = -\sin(x)\text{,}\) \(f'''(x) = -\cos(x)\text{,}\) and \(f^{(4)}(x) = \sin(x)\text{.}\) Thus, \(f\left(\frac{\pi}{2} \right) = 1\text{,}\) \(f'\left(\frac{\pi}{2} \right) = 0\text{,}\) \(f''\left(\frac{\pi}{2} \right) = -1\text{,}\) \(f'''\left(\frac{\pi}{2} \right) = 0\text{,}\) and \(f^{(4)}\left(\frac{\pi}{2} \right) = 1\text{.}\) It follows that the first four Taylor polynomials of \(f\) are

    \begin{align*} P_1(x) &= 1 + 0\left(x - \frac{\pi}{2} \right) = 1\\ P_2(x) &= 1 + 0\left(x - \frac{\pi}{2} \right) - \frac{1}{2!}\left(x - \frac{\pi}{2} \right)^2 = 1-\frac{1}{2!}\left(x - \frac{\pi}{2} \right)^2\\ P_3(x) &= 1 + 0\left(x - \frac{\pi}{2} \right) - \frac{1}{2!}\left(x - \frac{\pi}{2} \right)^2 + \frac{0}{3!}\left(x - \frac{\pi}{2} \right)^3\\ &= 1-\frac{1}{2!}\left(x - \frac{\pi}{2} \right)^2 = P_2(x)\\ P_4(x) &= 1 + 0\left(x - \frac{\pi}{2} \right) - \frac{1}{2!}\left(x - \frac{\pi}{2} \right)^2 + \frac{0}{3!}\left(x - \frac{\pi}{2} \right)^3 + \frac{1}{4!}\left(x - \frac{\pi}{2} \right)^4\\ &= 1 - \frac{1}{2!}\left(x - \frac{\pi}{2} \right)^2 + \frac{1}{4!}\left(x - \frac{\pi}{2} \right)^4 \end{align*}

    From the pattern, the Taylor series for \(f(x)\) centered at \(x = \frac{\pi}{2}\) is

    \begin{equation*} P(x) = 1 - \frac{1}{2!}\left(x - \frac{\pi}{2} \right)^2 + \frac{1}{4!}\left(x - \frac{\pi}{2} \right)^4 - \frac{1}{6!}\left(x - \frac{\pi}{2} \right)^6 + \cdots \end{equation*}

    which is expected because of the repeating patterns in the derivatives of the sine function evaluated at \(\frac{\pi}{2}\text{.}\)

  2. For \(f(x) = \ln(x)\text{,}\) \(f'(x) = x^{-1}\text{,}\) \(f''(x) = -x^{-2}\text{,}\) \(f'''(x) = 2x^{-3}\text{,}\) and \(f^{(4)}(x) = -6x^{-4}\text{.}\) It follows that \(f(1) = 0\text{,}\) \(f'(1) = 1\text{,}\) \(f''(1) = -1\text{,}\) \(f'''(1) = 2\text{,}\) and \(f^{(4)}(1) = -6\text{.}\) Thus, the fourth Taylor polynomial (in which we can see the polynomials of lower degree) is

    \begin{equation*} P_4(x) = 0 + 1(x-1) - \frac{1}{2!}(x-1)^2 + \frac{2}{3!}(x-1)^3 - \frac{6}{4!}(x-1)^4\text{.} \end{equation*}

    Simplifying the coefficients and seeing the pattern, it follows that the Taylor series for \(f(x)\) centered at \(x = 1\) is

    \begin{equation*} P(x) = 1(x-1) - \frac{1}{2}(x-1)^2 + \frac{1}{3}(x-1)^3 - \frac{1}{4}(x-1)^4 + \frac{1}{5}(x-1)^5 - \cdots \end{equation*}
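As an optional check of part 2, this short SymPy sketch (ours, not part of the text) rebuilds the fourth order Taylor polynomial of \(\ln(x)\) centered at \(1\) directly from the definition.

```python
# Construct P_4(x) for ln(x) centered at 1 from the derivatives of ln at 1.
import sympy as sp

x = sp.symbols('x')
f = sp.log(x)
P4 = sum(f.diff(x, k).subs(x, 1) / sp.factorial(k) * (x - 1)**k for k in range(5))
print(P4)   # equal to (x-1) - (x-1)**2/2 + (x-1)**3/3 - (x-1)**4/4
```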
Example 7.58
  1. Plot the graphs of several of the Taylor polynomials centered at \(0\) (of order at least 5) for \(e^x\) and convince yourself that these Taylor polynomials converge to \(e^x\) for every value of \(x\text{.}\)

  2. Draw the graphs of several of the Taylor polynomials centered at \(0\) (of order at least 6) for \(\cos(x)\) and convince yourself that these Taylor polynomials converge to \(\cos(x)\) for every value of \(x\text{.}\) Write the Taylor series centered at \(0\) for \(\cos(x)\text{.}\)

  3. Draw the graphs of several of the Taylor polynomials centered at \(0\) for \(\frac{1}{1-x}\text{.}\) Based on your graphs, for what values of \(x\) do these Taylor polynomials appear to converge to \(\frac{1}{1-x}\text{?}\) How is this situation different from what we observe with \(e^x\) and \(\cos(x)\text{?}\) In addition, write the Taylor series centered at \(0\) for \(\frac{1}{1-x}\text{.}\)

Answer
  1. It appears that as we increase the order of the Taylor polynomials, they fit the graph of \(f\) better and better over larger intervals.

  2. It appears that as we increase the order of the Taylor polynomials, they fit the graph of \(f\) better and better over larger intervals.

  3. The Taylor polynomials converge to \(\frac{1}{1-x}\) only on the interval \((-1,1)\text{.}\)

Solution
  1. The graphs of the 10th (magenta), 20th (blue), and 30th (green) Taylor polynomials centered at \(0\) for \(e^x\) are shown below along with the graph of \(f(x)\) in red:

    It appears that as we increase the order of the Taylor polynomials, they fit the graph of \(f\) better and better over larger intervals. So it looks like the Taylor polynomials converge to \(e^x\) for every value of \(x\text{.}\)

  2. The graphs of the 10th (magenta), 20th (blue), and 30th (green) Taylor polynomials centered at \(0\) for \(\cos(x)\) are shown below along with the graph of \(f(x)\) in red:

    It appears that as we increase the order of the Taylor polynomials, they fit the graph of \(f\) better and better over larger intervals. So it looks like the Taylor polynomials converge to \(\cos(x)\) for every value of \(x\text{.}\) Based on the \(n\)th order Taylor polynomials we found earlier for \(\cos(x)\text{,}\) the Taylor series for \(f(x)\) centered at \(0\) is

    \begin{equation*} \sum_{k=0}^{\infty} \frac{x^{2k}}{(2k)!}\text{.} \end{equation*}
  3. The graphs of the 10th (magenta), 20th (blue), and 30th (green) Taylor polynomials centered at \(0\) for \(\frac{1}{1-x}\) are shown below along with the graph of \(f(x)\) in red:

    It appears that as we increase the order of the Taylor polynomials, they only fit the graph of \(f\) better and better over the interval \((-1,1)\) and appear to diverge outside that interval. So it looks like the Taylor polynomials converge to \(\frac{1}{1-x}\) only on the interval \((-1,1)\text{.}\)

    Based on the \(n\)th order Taylor polynomials we found earlier for \(\frac{1}{1-x}\text{,}\) the Taylor series for \(f(x)\) centered at \(0\) is

    \begin{equation*} \sum_{k=0}^{\infty} x^k\text{.} \end{equation*}

The Maclaurin series for \(e^x\text{,}\) \(\sin(x)\text{,}\) \(\cos(x)\text{,}\) and \(\frac{1}{1-x}\) will be used frequently, so we should be certain to know and recognize them well.

Subsection Relating Power Series and Taylor Series

There is an important connection between power series and Taylor series. This is illustrated in the following example.

Example 7.59

Suppose \(f\) is defined by a power series centered at 0 so that

\begin{equation*} f(x) = \sum_{k=0}^{\infty} a_kx^k\text{.} \end{equation*}
  1. Determine the first 4 derivatives of \(f\) evaluated at 0 in terms of the coefficients \(a_k\text{.}\)

  2. Show that \(f^{(n)}(0) = n!a_n\) for each positive integer \(n\text{.}\)

  3. Explain how the result of (b) tells us the following:

    On its interval of convergence, a power series is the Taylor series of its sum.

Answer
  1. \begin{align*} f(0) \amp = a_0\\ f'(0) \amp = a_1\\ f''(0) \amp = 2!a_2\\ f^{(3)}(0) \amp = 3!a_3\\ f^{(4)}(0) \amp = 4!a_4 \end{align*}
  2. \begin{align*} f'(x) \amp = \sum_{k=1}^{\infty} ka_kx^{k-1}\\ f''(x) \amp = \sum_{k=2}^{\infty} k(k-1)a_kx^{k-2}\\ f^{(3)}(x) \amp = \sum_{k=3}^{\infty} k(k-1)(k-2)a_kx^{k-3}\\ \vdots \amp \ \qquad \vdots\\ f^{(n)}(x) \amp = \sum_{k=n}^{\infty} k(k-1)(k-2) \cdots (k-n+1) a_kx^{k-n}\\ \vdots \amp \ \qquad \vdots \end{align*}

    So

    \begin{align*} f(0) \amp = a_0\\ f'(0) \amp = a_1\\ f''(0) \amp = 2!a_2\\ f^{(3)}(0) \amp = 3!a_3\\ \vdots \amp \ \qquad \vdots\\ f^{(k)}(0) \amp = k!a_k\\ \vdots \amp \ \qquad \vdots \end{align*}

    and

    \begin{equation*} a_k = \frac{f^{(k)}(0)}{k!} \end{equation*}

    for each \(k \geq 0\text{.}\) But these are just the coefficients of the Taylor series expansion of \(f\text{,}\) which leads us to the following observation.

Solution
  1. Observe that

    \begin{align*} f'(x) \amp = \sum_{k=1}^{\infty} ka_kx^{k-1}\\ f''(x) \amp = \sum_{k=2}^{\infty} k(k-1)a_kx^{k-2}\\ f^{(3)}(x) \amp = \sum_{k=3}^{\infty} k(k-1)(k-2)a_kx^{k-3}\\ f^{(4)}(x) \amp = \sum_{k=4}^{\infty} k(k-1)(k-2)(k-3)a_kx^{k-4} \end{align*}

    and therefore

    \begin{align*} f(0) \amp = a_0\\ f'(0) \amp = a_1\\ f''(0) \amp = 2!a_2\\ f^{(3)}(0) \amp = 3!a_3\\ f^{(4)}(0) \amp = 4!a_4\text{.} \end{align*}
  2. Since

    \begin{equation*} f^{(n)}(x) = \sum_{k=n}^{\infty} k(k-1)(k-2) \cdots (k-n+1) a_kx^{k-n} \end{equation*}

    every term of this series vanishes at \(x = 0\) except the first. Thus it follows \(f^{(n)}(0) = n(n-1)(n-2) \cdots (1) a_n\text{,}\) so \(f^{(n)}(0) = n! a_n\text{.}\)

  3. Since \(a_k = \frac{f^{(k)}(0)}{k!}\) for each \(k \geq 0\text{,}\) the coefficients of the power series are exactly the coefficients of the Taylor series for the function \(f\) that the power series defines. In other words, the power series is the Taylor series of its own sum.

Thus, on its interval of convergence, every power series is in fact the Taylor series of the function it defines.
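The following SymPy sketch (our own check, not part of the text) illustrates this for the geometric series: the Taylor coefficients \(f^{(k)}(0)/k!\) of the sum \(\frac{1}{1-x}\) recover the power series coefficients \(a_k = 1\text{.}\)

```python
# Verify that f^(k)(0)/k! = 1 for f(x) = 1/(1-x), matching the coefficients of sum x^k.
import sympy as sp

x = sp.symbols('x')
f = 1 / (1 - x)
for k in range(6):
    coeff = f.diff(x, k).subs(x, 0) / sp.factorial(k)
    print(f"k = {k}: f^({k})(0)/k! = {coeff}")   # each value is 1
```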

Subsection The Interval of Convergence of a Taylor Series

In the previous subsection (in Figure 7.55 and Example 7.58) we observed that the Taylor polynomials centered at \(0\) for \(e^x\text{,}\) \(\cos(x)\text{,}\) and \(\sin(x)\) converged to these functions for all values of \(x\) in their domain, but that the Taylor polynomials centered at \(0\) for \(\frac{1}{1-x}\) converge to \(\frac{1}{1-x}\) on the interval \((-1,1)\) and diverge for all other values of \(x\text{.}\) So the Taylor series for a function \(f(x)\) does not need to converge for all values of \(x\) in the domain of \(f\text{.}\)

Our observations suggest two natural questions: can we determine the values of \(x\) for which a given Taylor series converges? And does the Taylor series for a function \(f\) actually converge to \(f(x)\text{?}\)

Example 7.60

Graphical evidence suggests that the Taylor series centered at \(0\) for \(e^x\) converges for all values of \(x\text{.}\) To verify this, use the Ratio Test to determine all values of \(x\) for which the Taylor series

\begin{equation} \sum_{k=0}^{\infty} \frac{x^k}{k!}\label{eq-8-5-exponential}\tag{7.23} \end{equation}

converges absolutely.

Solution

Recall that the Ratio Test applies only to series of nonnegative terms. In this example, the variable \(x\) may have negative values. But we are interested in absolute convergence, so we apply the Ratio Test to the series

\begin{equation*} \sum_{k=0}^{\infty} \left| \frac{x^k}{k!} \right| = \sum_{k=0}^{\infty} \frac{| x |^k}{k!}\text{.} \end{equation*}

Now, observe that

\begin{align*} \lim_{k \to \infty} \frac{a_{k+1}}{a_k} \amp = \lim_{k \to \infty} \frac{\frac{| x |^{k+1}}{(k+1)!} }{ \frac{| x |^k}{k!} }\\ \amp = \lim_{k \to \infty} \frac{| x |^{k+1}k!}{ | x |^{k}(k+1)! }\\ \amp = \lim_{k \to \infty} \frac{| x |}{k+1}\\ \amp = 0 \end{align*}

for any value of \(x\text{.}\) So the Taylor series (7.23) converges absolutely for every value of \(x\text{,}\) and thus converges for every value of \(x\text{.}\)

One question still remains: while the Taylor series for \(e^x\) converges for all \(x\text{,}\) what we have done does not tell us that this Taylor series actually converges to \(e^x\) for each \(x\text{.}\) We'll return to this question when we consider the error in a Taylor approximation near the end of this section.

As we did for power series, we define the interval of convergence of a Taylor series to be the set of values of \(x\) for which the series converges. And as we did with power series, we typically use the Ratio Test to find the values of \(x\) for which the Taylor series converges absolutely, and then check the endpoints separately if the radius of convergence is finite.

Example 7.61
  1. Use the Ratio Test to explicitly determine the interval of convergence of the Taylor series for \(f(x) = \frac{1}{1-x}\) centered at \(x=0\text{.}\)

  2. Use the Ratio Test to explicitly determine the interval of convergence of the Taylor series for \(f(x) = \cos(x)\) centered at \(x=0\text{.}\)

  3. Use the Ratio Test to explicitly determine the interval of convergence of the Taylor series for \(f(x) = \sin(x)\) centered at \(x=0\text{.}\)

Answer
    1. The interval \((-1,1)\text{.}\)

    2. \((-\infty, \infty)\text{.}\)

    3. \((-\infty, \infty)\text{.}\)

Solution
  1. Using the Ratio Test with the \(k\)th term \(|x|^{k}\) we get

    \begin{equation*} \lim_{k \to \infty} \frac{ |x|^{k+1} }{ |x|^{k} } = \lim_{k \to \infty} |x| = |x|\text{.} \end{equation*}

    So the series \(\sum_{k=0}^{\infty} x^k\) converges absolutely when \(|x| \lt 1\text{,}\) that is, for \(-1 \lt x \lt 1\text{,}\) and diverges when \(|x| \gt 1\text{.}\) Since the Ratio Test doesn't tell us what happens when \(|x|=1\text{,}\) we need to check the endpoints separately.

    • When \(x=1\) we have the series \(\sum_{k=0}^{\infty} 1\text{,}\) which diverges since \(\lim_{k \to \infty} 1 \neq 0\text{.}\)

    • When \(x=-1\) we have the series \(\sum_{k=0}^{\infty} (-1)^k\text{,}\) which diverges since \(\lim_{k \to \infty} (-1)^k\) does not exist.

    Therefore, the interval of convergence of the Taylor series for \(f(x) = \frac{1}{1-x}\) centered at \(x=0\) is \((-1,1)\text{.}\)

  2. Using the Ratio Test with the \(k\)th term \(\frac{|x|^{2k}}{(2k)!}\) we get

    \begin{align*} \lim_{k \to \infty} \frac{ \frac{|x|^{2(k+1)}}{(2(k+1))!} }{ \frac{|x|^{2k}}{(2k)!} } \amp = \lim_{k \to \infty} \frac{|x|^{2(k+1)}(2k)!}{|x|^{2k}(2(k+1))!}\\ \amp = \lim_{k \to \infty} \frac{|x|^{2}}{(2k+2)(2k+1)}\\ \amp = 0\text{.} \end{align*}

    So the interval of convergence of the Taylor series for \(f(x) = \cos(x)\) centered at \(x=0\) is \((-\infty, \infty)\text{.}\)

  3. Using the Ratio Test with the \(k\)th term \(\frac{|x|^{2k+1}}{(2k+1)!}\) we get

    \begin{align*} \lim_{k \to \infty} \frac{ \frac{|x|^{2(k+1)+1}}{(2(k+1)+1)!} }{ \frac{|x|^{2k+1}}{(2k+1)!} } \amp = \lim_{k \to \infty} \frac{|x|^{2(k+1)+1}(2k+1)!}{|x|^{2k+1}(2(k+1)+1)!}\\ \amp = \lim_{k \to \infty} \frac{|x|^{2}}{(2k+3)(2k+2)}\\ \amp = 0\text{.} \end{align*}

    So the interval of convergence of the Taylor series for \(f(x) = \sin(x)\) centered at \(x=0\) is \((-\infty, \infty)\text{.}\)

The Ratio Test allows us to determine the set of \(x\) values for which a Taylor series converges absolutely. However, just because a Taylor series for a function \(f\) converges, we cannot be certain that the Taylor series actually converges to \(f(x)\text{.}\) To show why and where a Taylor series does in fact converge to the function \(f\text{,}\) we next consider the error that is present in Taylor polynomials.

Subsection Error Approximations for Taylor Polynomials

We now know how to find Taylor polynomials for functions such as \(\sin(x)\text{,}\) and how to determine the interval of convergence of the corresponding Taylor series. We next develop an error bound that will tell us how well an \(n\)th order Taylor polynomial \(P_n(x)\) approximates its generating function \(f(x)\text{.}\) This error bound will also allow us to determine whether a Taylor series on its interval of convergence actually equals the function \(f\) from which the Taylor series is derived. Finally, we will be able to use the error bound to determine the order of the Taylor polynomial \(P_n(x)\) needed to ensure that \(P_n(x)\) approximates \(f(x)\) to the desired degree of accuracy.

For this argument, we assume throughout that we center our approximations at \(0\) (but a similar argument holds for approximations centered at \(a\)). We define the exact error, \(E_n(x)\text{,}\) that results from approximating \(f(x)\) with \(P_n(x)\) by

\begin{equation*} E_n(x) = f(x) - P_n(x)\text{.} \end{equation*}

We are particularly interested in \(|E_n(x)|\text{,}\) the distance between \(P_n\) and \(f\text{.}\) Because

\begin{equation*} P^{(k)}_n(0) = f^{(k)}(0) \end{equation*}

for \(0 \leq k \leq n\text{,}\) we know that

\begin{equation*} E^{(k)}_n(0) = 0 \end{equation*}

for \(0 \leq k \leq n\text{.}\) Furthermore, since \(P_n(x)\) is a polynomial of degree less than or equal to \(n\text{,}\) we know that

\begin{equation*} P_n^{(n+1)}(x) = 0\text{.} \end{equation*}

Thus, since \(E^{(n+1)}_n(x) = f^{(n+1)}(x) - P_n^{(n+1)}(x)\text{,}\) it follows that

\begin{equation*} E^{(n+1)}_n(x) = f^{(n+1)}(x) \end{equation*}

for all \(x\text{.}\)

Suppose that we want to approximate \(f(x)\) at a number \(c\) close to \(0\) using \(P_n(c)\text{.}\) If we assume \(|f^{(n+1)}(t)|\) is bounded by some number \(M\) on \([0, c]\text{,}\) so that

\begin{equation*} \left|f^{(n+1)}(t)\right| \leq M \end{equation*}

for all \(0 \leq t \leq c\text{,}\) then we can say that

\begin{equation*} \left|E^{(n+1)}_n(t)\right| = \left|f^{(n+1)}(t)\right| \leq M \end{equation*}

for all \(t\) between \(0\) and \(c\text{.}\) Equivalently,

\begin{equation} -M \leq E^{(n+1)}_n(t) \leq M\label{E-ErrorIneq}\tag{7.24} \end{equation}

on \([0, c]\text{.}\) Next, we integrate the three terms in Inequality (7.24) from \(t = 0\) to \(t = x\text{,}\) and thus find that

\begin{equation*} \int_0^x -M \ dt \leq \int_0^x E^{(n+1)}_n(t) \ dt \leq \int_0^x M \ dt \end{equation*}

for every value of \(x\) in \([0, c]\text{.}\) Since \(E^{(n)}_n(0) = 0\text{,}\) the First FTC tells us that

\begin{equation*} -Mx \leq E^{(n)}_n(x) \leq Mx \end{equation*}

for every \(x\) in \([0, c]\text{.}\)

Integrating this last inequality, we obtain

\begin{equation*} \int_0^x -Mt \ dt \leq \int_0^x E^{(n)}_n(t) \ dt \leq \int_0^x Mt \ dt \end{equation*}

and thus

\begin{equation*} -M\frac{x^2}{2} \leq E^{(n-1)}_n(x) \leq M\frac{x^2}{2} \end{equation*}

for all \(x\) in \([0, c]\text{.}\)

Integrating \(n\) times, we arrive at

\begin{equation*} -M\frac{x^{n+1}}{(n+1)!} \leq E_n(x) \leq M\frac{x^{n+1}}{(n+1)!} \end{equation*}

for all \(x\) in \([0, c]\text{.}\) This enables us to conclude that

\begin{equation*} \left|E_n(x)\right| \leq M\frac{|x|^{n+1}}{(n+1)!} \end{equation*}

for all \(x\) in \([0, c]\text{,}\) and we have found a bound on the approximation's error, \(E_n\text{.}\)

Our work above was based on the approximation centered at \(a = 0\text{;}\) the argument may be generalized to hold for any value of \(a\text{,}\) which results in the following theorem.

The Lagrange Error Bound for \(P_n(x)\)

Let \(f\) be a continuous function with \(n+1\) continuous derivatives. Suppose that \(M\) is a positive real number such that \(\left|f^{(n+1)}(x)\right| \le M\) on the interval \([a, c]\text{.}\) If \(P_n(x)\) is the \(n\)th order Taylor polynomial for \(f(x)\) centered at \(x=a\text{,}\) then

\begin{equation*} \left|P_n(c) - f(c)\right| \leq M\frac{|c-a|^{n+1}}{(n+1)!}\text{.} \end{equation*}

We can use this error bound to tell us important information about Taylor polynomials and Taylor series, as we see in the following examples and activities.

Example 7.62

Determine how well the 10th order Taylor polynomial \(P_{10}(x)\) for \(\sin(x)\text{,}\) centered at \(0\text{,}\) approximates \(\sin(2)\text{.}\)

Solution

To answer this question we use \(f(x) = \sin(x)\text{,}\) \(c = 2\text{,}\) \(a=0\text{,}\) and \(n = 10\) in the Lagrange error bound formula. We also need to find an appropriate value for \(M\text{.}\) Note that the derivatives of \(f(x) = \sin(x)\) are all equal to \(\pm \sin(x)\) or \(\pm \cos(x)\text{.}\) Thus,

\begin{equation*} \left| f^{(n+1)}(x) \right| \leq 1 \end{equation*}

for any \(n\) and \(x\text{.}\) Therefore, we can choose \(M\) to be \(1\text{.}\) Then

\begin{equation*} \left|P_{10}(2) - f(2)\right| \leq (1)\frac{|2-0|^{11}}{(11)!} = \frac{2^{11}}{(11)!} \approx 0.00005130671797\text{.} \end{equation*}

So \(P_{10}(2)\) approximates \(\sin(2)\) to within at most \(0.00005130671797\text{.}\) A computer algebra system tells us that

\begin{equation*} P_{10}(2) \approx 0.9093474427 \ \ \text{ and } \ \ \sin(2) \approx 0.9092974268 \end{equation*}

with an actual difference of about \(0.0000500159\text{.}\)
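This computation is easy to verify directly; the sketch below (ours; the helper name sin_taylor is our own) compares the Lagrange bound with the actual error.

```python
# Compare the Lagrange bound 2^11/11! (with M = 1) against |P_10(2) - sin(2)|.
import math

def sin_taylor(x, n):
    """Evaluate the nth order Taylor polynomial of sin centered at 0."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n // 2 + 1) if 2*k + 1 <= n)

bound = 2**11 / math.factorial(11)
actual = abs(sin_taylor(2, 10) - math.sin(2))
print(f"Lagrange bound: {bound:.12f}")   # about 5.13e-05
print(f"actual error:   {actual:.12f}")  # about 5.00e-05
```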

Example 7.63

Let \(P_n(x)\) be the \(n\)th order Taylor polynomial for \(\sin(x)\) centered at \(x=0\text{.}\) Determine how large we need to choose \(n\) so that \(P_n(2)\) approximates \(\sin(2)\) to \(20\) decimal places.

Answer

\(n \ge 27\text{.}\)

Solution

In this example, if we can find a value of \(n\) so that

\begin{equation*} M\frac{|2-0|^{n+1}}{(n+1)!} \lt 10^{-20} \end{equation*}

then we will have

\begin{equation*} |P_n(2) - f(2)| \leq M\frac{|2-0|^{n+1}}{(n+1)!} \lt 10^{-20}\text{.} \end{equation*}

Again we use \(f(x) = \sin(x)\text{,}\) \(c = 2\text{,}\) \(a=0\text{,}\) and \(M = 1\) from the previous example. So we need to find \(n\) to make

\begin{equation*} \frac{2^{n+1}}{(n+1)!} \leq 10^{-20}\text{.} \end{equation*}

There is no good way to solve equations involving factorials, so we simply use trial and error, evaluating \(\frac{2^{n+1}}{(n+1)!}\) at different values of \(n\) until we find one that is small enough.

\(n\) \(\frac{2^{n+1}}{(n+1)!}\)
\(10\) \(5.130671797 \times 10^{-5}\)
\(20\) \(4.104743250 \times 10^{-14}\)
\(25\) \(1.664028884 \times 10^{-19}\)
\(26\) \(1.232613988 \times 10^{-20}\)
\(27\) \(8.804385630 \times 10^{-22}\)

So we need to use an \(n\) of at least 27 to ensure accuracy to 20 decimal places.

A computer algebra system gives

\begin{align*} P_{27}(2)\amp \approx 0.9092974268256816953960\\ \sin(2)\amp \approx 0.9092974268256816953960 \end{align*}

and we can see that these agree to 20 places.
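The trial-and-error search is easy to automate; the sketch below (ours, not part of the text) uses exact rational arithmetic so that the very small bounds are not distorted by floating point, and it finds \(n = 27\text{.}\)

```python
# Find the smallest n with 2^(n+1)/(n+1)! < 10^(-20).
from fractions import Fraction
from math import factorial

target = Fraction(1, 10**20)
n = 1
while Fraction(2**(n + 1), factorial(n + 1)) >= target:
    n += 1
print(n)   # 27
```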

Example 7.64

Show that the Taylor series for \(\sin(x)\) actually converges to \(\sin(x)\) for all \(x\text{.}\)

Solution

Recall from the previous example that since \(f(x) = \sin(x)\text{,}\) we know

\begin{equation*} \left| f^{(n+1)}(x) \right| \leq 1 \end{equation*}

for any \(n\) and \(x\text{.}\) This allows us to choose \(M = 1\) in the Lagrange error bound formula. Thus,

\begin{equation} |P_n(x) - \sin(x)| \leq \frac{|x|^{n+1}}{(n+1)!}\label{E-ErrorIneqSine}\tag{7.25} \end{equation}

for every \(x\text{.}\)

We showed in earlier work that the Taylor series \(\sum_{k=0}^{\infty} \frac{x^k}{k!}\) converges for every value of \(x\text{.}\) Because the terms of any convergent series must approach zero, it follows that

\begin{equation*} \lim_{n \to \infty} \frac{x^{n+1}}{(n+1)!} = 0 \end{equation*}

for every value of \(x\text{.}\) Thus, taking the limit as \(n \to \infty\) in Inequality (7.25), it follows that

\begin{equation*} \lim_{n \to \infty} |P_n(x) - \sin(x)| = 0\text{.} \end{equation*}

As a result, we can now write

\begin{equation*} \sin(x) = \sum_{n=0}^{\infty} \frac{(-1)^nx^{2n+1}}{(2n+1)!} \end{equation*}

for every real number \(x\text{.}\)

Example 7.65
  1. Show that the Taylor series centered at \(0\) for \(\cos(x)\) converges to \(\cos(x)\) for every real number \(x\text{.}\)

  2. Next we consider the Taylor series for \(e^x\text{.}\)

    1. Show that the Taylor series centered at \(0\) for \(e^x\) converges to \(e^x\) for every nonnegative value of \(x\text{.}\)

    2. Show that the Taylor series centered at \(0\) for \(e^x\) converges to \(e^x\) for every negative value of \(x\text{.}\)

    3. Explain why the Taylor series centered at \(0\) for \(e^x\) converges to \(e^x\) for every real number \(x\text{.}\) Recall that we earlier showed that the Taylor series centered at \(0\) for \(e^x\) converges for all \(x\text{,}\) and we have now completed the argument that the Taylor series for \(e^x\) actually converges to \(e^x\) for all \(x\text{.}\)

  3. Let \(P_n(x)\) be the \(n\)th order Taylor polynomial for \(e^x\) centered at \(0\text{.}\) Find a value of \(n\) so that \(P_n(5)\) approximates \(e^5\) correct to \(8\) decimal places.

Answer
  1. Compare Example 7.64.

  2. For the Taylor series of \(e^x\text{:}\)

    1. Use the fact that \(|f^{(n)}(x)| \le e^c\) on the interval \([0,c]\) for any fixed positive value of \(c\text{.}\)

    2. Repeat the argument in (a) but replace \(e^c\) with \(1\text{;}\) everything else holds in the same way.

    3. Combine the results of (a) and (b).

  3. \(n = 28\text{.}\)

Solution
  1. Compare Example 7.64.

  2. For the Taylor series of \(e^x\text{:}\)

    1. Let \(x \ge 0\text{.}\) Since \(f(x) = e^x\text{,}\) \(f^{(n)}(x) = e^x\) for every natural number \(n\text{.}\) Since \(e^x\) is an increasing function, we know that \(|f^{(n)}(x)| \le e^c\) on the interval \([0,c]\) for any fixed positive value of \(c\text{.}\) Thus, by the Lagrange error formula, we can say that

      \begin{equation*} |P_n(x) - e^x| \le e^c \frac{x^{n+1}}{(n+1)!}\text{.} \end{equation*}

      Since the series \(\sum \frac{x^{n}}{n!}\) converges for every \(x\text{,}\) its terms must go to \(0\text{,}\) and thus \(\frac{x^{n+1}}{(n+1)!} \to 0\) as \(n \to \infty\) for every \(x\) in \([0,c]\text{.}\) Further, since \(e^c\) is a constant independent of \(n\text{,}\) \(e^c \frac{x^{n+1}}{(n+1)!} \to 0\) as well. Thus,

      \begin{equation*} \lim_{n \to \infty} |P_n(x) - e^x| = 0\text{,} \end{equation*}

      as desired.

    2. When \(x \lt 0\text{,}\) we know \(e^x \lt 1\text{.}\) Thus, we can repeat our argument in (a) but replace \(e^c\) with \(1\text{,}\) and everything else holds in the same way.

    3. Because we have shown that the Taylor series for \(e^x\) converges to \(e^x\) for both every nonnegative \(x\)-value and for every negative \(x\)-value, it follows that we have convergence for every value of \(x\text{.}\)

  3. Since \(e^x\) is increasing on \([0,5]\text{,}\) we know that \(e^x \lt e^5\) on \([0,5]\text{.}\) Now \(e^5 \lt 243\text{,}\) so

    \begin{equation*} \left|P_n(5) - e^5\right| \leq 243\frac{|5|^{n+1}}{(n+1)!}\text{.} \end{equation*}

    We want a value of \(n\) that makes this error term less than \(10^{-8}\text{.}\) Testing various values of \(n\) gives

    \begin{equation*} 243\frac{|5|^{28+1}}{(28+1)!} \approx 5.119146745 \times 10^{-9} \end{equation*}

    so we can choose \(n = 28\text{.}\) A computer algebra system shows that \(P_{28}(5) \approx 148.413159102551\) while \(e^5 \approx 148.413159102577\) and we can see that these two approximations agree to 8 decimal places.

Subsection Summary

  • We can use Taylor polynomials to approximate functions. This allows us to approximate values of functions using only addition, subtraction, multiplication, and division of real numbers. The \(n\)th order Taylor polynomial centered at \(x=a\) of a function \(f\) is

    \begin{align*} P_n(x) =\mathstrut \amp f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n\\ =\mathstrut \amp \sum_{k=0}^n \frac{f^{(k)}(a)}{k!}(x-a)^k\text{.} \end{align*}
  • The Taylor series centered at \(x=a\) for a function \(f\) is

    \begin{equation*} \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}(x-a)^k\text{.} \end{equation*}

    The \(n\)th order Taylor polynomial centered at \(a\) for \(f\) is the \(n\)th partial sum of its Taylor series centered at \(a\text{.}\) So the \(n\)th order Taylor polynomial for a function \(f\) is an approximation to \(f\) on the interval where the Taylor series converges; for the values of \(x\) for which the Taylor series converges to \(f\) we write

    \begin{equation*} f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}(x-a)^k\text{.} \end{equation*}
  • The connection between power series and Taylor series is that they are essentially the same thing: on its interval of convergence a power series is the Taylor series of its sum.

  • We can often assume a solution to a given problem can be written as a power series, then use the information in the problem to determine the coefficients in the power series. This method allows us to approximate solutions to certain problems using partial sums of the power series; that is, we can find approximate solutions that are polynomials.

  • The Lagrange Error Bound shows us how to determine the accuracy in using a Taylor polynomial to approximate a function. More specifically, if \(P_n(x)\) is the \(n\)th order Taylor polynomial for \(f\) centered at \(x=a\) and if \(M\) is an upper bound for \(\left|f^{(n+1)}(x)\right|\) on the interval \([a, c]\text{,}\) then

    \begin{equation*} \left|P_n(c) - f(c)\right| \leq M\frac{|c-a|^{n+1}}{(n+1)!}\text{.} \end{equation*}

Subsection Exercises

In this exercise we investigate the Taylor series of polynomial functions.

  1. Find the 3rd order Taylor polynomial centered at \(a = 0\) for \(f(x) = x^3-2x^2+3x-1\text{.}\) Does your answer surprise you? Explain.

  2. Without doing any additional computation, find the 4th, 12th, and 100th order Taylor polynomials (centered at \(a = 0\)) for \(f(x) = x^3-2x^2+3x-1\text{.}\) Why should you expect this?

  3. Now suppose \(f(x)\) is a degree \(m\) polynomial. Completely describe the \(n\)th order Taylor polynomial (centered at \(a = 0\)) for each \(n\text{.}\)

In this exercise, we will build the Taylor series expansion of the binomial function \(f(x)=(1+x)^p\text{,}\) where \(p\) is any constant.

  1. Find the first three derivatives of \(f(x)\text{,}\) \(f'(x),f''(x),f'''(x)\text{.}\)

  2. Build the third-degree Taylor polynomial for \(f(x)\text{.}\)

  3. Using your Taylor polynomial, you should be able to see that the full Taylor series looks like:

    \begin{equation*} 1 + px + \frac{p(p-1)}{2!} x^2 + \frac{p(p-1)(p-2)}{3!}x^3 + \frac{p(p-1)(p-2)(p-3)}{4!}x^4 + \cdots \end{equation*}

    Compute the radius of convergence of this series.

  4. Use this general rule to find the Taylor series about 0 for the function \(g(x)=\sqrt{1+x}\text{.}\)

Based on the examples we have seen, we might expect that the Taylor series for a function \(f\) always converges to the values \(f(x)\) on its interval of convergence. We explore that idea in more detail in this exercise. Let \(f(x) = \begin{cases}e^{-1/x^2} \amp \text{ if } x \neq 0, \\ 0 \amp \text{ if } x = 0. \end{cases}\)

  1. Show, using the definition of the derivative, that \(f'(0) = 0\text{.}\)

  2. It can be shown that \(f^{(n)}(0) = 0\) for all \(n \geq 2\text{.}\) Assuming that this is true, find the Taylor series for \(f\) centered at 0.

  3. What is the interval of convergence of the Taylor series centered at 0 for \(f\text{?}\) Explain. For which values of \(x\) in the interval of convergence of the Taylor series does the Taylor series converge to \(f(x)\text{?}\)