Motivating Questions
What is a Taylor series?
What is the connection between power series and Taylor series?
In Section 7.8, we began looking at Taylor polynomials as a means of approximating a function. If we continue this approximation with polynomials of higher and higher orders, we can construct a power series, called the Taylor series, which yields an even more accurate representation of the function. In this section, we examine the Taylor series, its connections to Taylor polynomials, the convergence of the Taylor series, and error approximations for Taylor polynomials.
In Example 7.54 we saw that the fourth order Taylor polynomial \(P_4(x)\) for \(\sin(x)\) centered at \(0\) is
The pattern we found for the derivatives \(f^{(k)}(0)\) describes the higher-order Taylor polynomials, e.g.,
and so on. It is instructive to consider the graphical behavior of these functions; Figure 7.55 shows the graphs of a few of the Taylor polynomials centered at \(0\) for the sine function.
Notice that \(P_1(x)\) is close to the sine function only for values of \(x\) near \(0\text{,}\) but as we increase the degree of the Taylor polynomial, the polynomials provide a better fit to the graph of the sine function over larger intervals. This illustrates the general behavior of Taylor polynomials: for any sufficiently well-behaved function, the sequence \(\{P_n(x)\}\) of Taylor polynomials converges to the function \(f\) on larger and larger intervals (though those intervals may not necessarily increase without bound). If the Taylor polynomials ultimately converge to \(f\) on its entire domain, we write
Let \(f\) be a function all of whose derivatives exist at \(x=a\text{.}\) The Taylor series for \(f\) centered at \(x=a\) is the series \(T_f(x)\) defined by
In the special case where \(a=0\text{,}\) the Taylor series is also called the Maclaurin series for \(f\text{.}\)
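Taylor polynomials are just the partial sums of the Taylor series, so we can watch them converge numerically. The following Python sketch (an illustration, not part of the text; the function name is ours) evaluates the Maclaurin partial sums for \(\sin(x)\text{,}\) whose pattern of derivatives we found in Example 7.54:

```python
import math

def sin_maclaurin(x, n):
    # nth partial sum of the Maclaurin series for sin(x):
    # sum_{k=0}^{n} (-1)^k x^(2k+1) / (2k+1)!
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n + 1))

# The partial sums approach math.sin(x) as n grows.
for n in (1, 3, 5):
    print(n, sin_maclaurin(1.0, n))
print(math.sin(1.0))
```

Already with five terms the partial sum agrees with \(\sin(1)\) to many decimal places, a numerical preview of the convergence results later in this section.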
From Example 7.53 we know the \(n\)th order Taylor polynomial centered at \(0\) for the exponential function \(e^x\text{;}\) thus, the Maclaurin series for \(e^x\) is
In Example 7.54 we determined low-order Taylor polynomials for a few familiar functions, and also found general patterns in the derivatives evaluated at \(0\text{.}\) Use that information to write the Taylor series centered at \(0\) for the following functions.
\(f(x) = \frac{1}{1-x}\)
\(f(x) = \cos(x)\) (You will need to carefully consider how to indicate that many of the coefficients are 0. Think about a general way to represent an even integer.)
\(f(x) = \sin(x)\) (You will need to carefully consider how to indicate that many of the coefficients are \(0\text{.}\) Think about a general way to represent an odd integer.)
Determine the \(n\)th order Taylor polynomial for \(f(x) = \frac{1}{1-x}\) centered at \(x=0\text{.}\)
\(P(x) = 1 + x + x^2 + x^3 + \cdots + x^n + \cdots\)
\(P(x) = 1 - \frac{1}{2!}x^2 + \frac{1}{4!}x^4 - \cdots + (-1)^{n}\frac{1}{(2n)!}x^{2n} + \cdots \text{.}\)
\(P(x) = x - \frac{1}{3!}x^3 + \frac{1}{5!}x^5 - \cdots + (-1)^{n}\frac{1}{(2n+1)!}x^{2n+1} + \cdots \text{.}\)
\(P_n(x) = 1 + x + x^2 + x^3 + \cdots + x^n\)
For \(f(x) = \frac{1}{1-x}\text{,}\) its Taylor series is
For \(f(x) = \cos(x)\text{,}\) its Taylor series is
For \(f(x) = \sin(x)\text{,}\) its Taylor series is
For \(f(x) = \frac{1}{1-x}\text{,}\)
Many of the examples we consider in this section are for Taylor polynomials and series centered at 0, but Taylor polynomials and series can be centered at any value of \(a\text{.}\) Here, we look at more examples of such Taylor polynomials and series.
Let \(f(x) = \sin(x)\text{.}\) Find the Taylor polynomials up through order four of \(f\) centered at \(x = \frac{\pi}{2}\text{.}\) Then find the Taylor series for \(f(x)\) centered at \(x = \frac{\pi}{2}\text{.}\) Why should you have expected the result?
Let \(f(x) = \ln(x)\text{.}\) Find the Taylor polynomials up through order four of \(f\) centered at \(x = 1\text{.}\) Then find the Taylor series for \(f(x)\) centered at \(x = 1\text{.}\)
For \(f(x) = \sin(x)\text{,}\) \(f'(x) = \cos(x)\text{,}\) \(f''(x) = -\sin(x)\text{,}\) \(f'''(x) = -\cos(x)\text{,}\) and \(f^{(4)}(x) = \sin(x)\text{.}\) Thus, \(f\left(\frac{\pi}{2} \right) = 1\text{,}\) \(f'\left(\frac{\pi}{2} \right) = 0\text{,}\) \(f''\left(\frac{\pi}{2} \right) = -1\text{,}\) \(f'''\left(\frac{\pi}{2} \right) = 0\text{,}\) and \(f^{(4)}\left(\frac{\pi}{2} \right) = 1\text{.}\) It follows that the first four Taylor polynomials of \(f\) are
From the pattern, the Taylor series for \(f(x)\) centered at \(x = \frac{\pi}{2}\) is
which is expected because of the repeating patterns in the derivatives of the sine function evaluated at \(\frac{\pi}{2}\text{.}\)
For \(f(x) = \ln(x)\text{,}\) \(f'(x) = x^{-1}\text{,}\) \(f''(x) = -x^{-2}\text{,}\) \(f'''(x) = 2x^{-3}\text{,}\) and \(f^{(4)}(x) = -6x^{-4}\text{.}\) It follows that \(f(1) = 0\text{,}\) \(f'(1) = 1\text{,}\) \(f''(1) = -1\text{,}\) \(f'''(1) = 2\text{,}\) and \(f^{(4)}(1) = -6\text{.}\) Thus, the fourth order Taylor polynomial (in which we can see the polynomials of lower degree) is
Simplifying the coefficients and seeing the pattern, it follows that the Taylor series for \(f(x)\) centered at \(x = 1\) is
Plot the graphs of several of the Taylor polynomials centered at \(0\) (of order at least 5) for \(e^x\) and convince yourself that these Taylor polynomials converge to \(e^x\) for every value of \(x\text{.}\)
Draw the graphs of several of the Taylor polynomials centered at \(0\) (of order at least 6) for \(\cos(x)\) and convince yourself that these Taylor polynomials converge to \(\cos(x)\) for every value of \(x\text{.}\) Write the Taylor series centered at \(0\) for \(\cos(x)\text{.}\)
Draw the graphs of several of the Taylor polynomials centered at \(0\) for \(\frac{1}{1-x}\text{.}\) Based on your graphs, for what values of \(x\) do these Taylor polynomials appear to converge to \(\frac{1}{1-x}\text{?}\) How is this situation different from what we observe with \(e^x\) and \(\cos(x)\text{?}\) In addition, write the Taylor series centered at \(0\) for \(\frac{1}{1-x}\text{.}\)
It appears that as we increase the order of the Taylor polynomials, they fit the graph of \(f\) better and better over larger intervals.
It appears that as we increase the order of the Taylor polynomials, they fit the graph of \(f\) better and better over larger intervals.
The Taylor polynomials converge to \(\frac{1}{1-x}\) only on the interval \((-1,1)\text{.}\)
The graphs of the 10th (magenta), 20th (blue), and 30th (green) Taylor polynomials centered at \(0\) for \(e^x\) are shown below along with the graph of \(f(x)\) in red:
It appears that as we increase the order of the Taylor polynomials, they fit the graph of \(f\) better and better over larger intervals. So it looks like the Taylor polynomials converge to \(e^x\) for every value of \(x\text{.}\)
The graphs of the 10th (magenta), 20th (blue), and 30th (green) Taylor polynomials centered at \(0\) for \(\cos(x)\) are shown below along with the graph of \(f(x)\) in red:
It appears that as we increase the order of the Taylor polynomials, they fit the graph of \(f\) better and better over larger intervals. So it looks like the Taylor polynomials converge to \(\cos(x)\) for every value of \(x\text{.}\) Based on the \(n\)th order Taylor polynomials we found earlier for \(\cos(x)\text{,}\) the Taylor series for \(f(x)\) centered at \(0\) is
The graphs of the 10th (magenta), 20th (blue), and 30th (green) Taylor polynomials centered at \(0\) for \(\frac{1}{1-x}\) are shown below along with the graph of \(f(x)\) in red:
It appears that as we increase the order of the Taylor polynomials, they only fit the graph of \(f\) better and better over the interval \((-1,1)\) and appear to diverge outside that interval. So it looks like the Taylor polynomials converge to \(\frac{1}{1-x}\) only on the interval \((-1,1)\text{.}\)
Based on the \(n\)th order Taylor polynomials we found earlier for \(\frac{1}{1-x}\text{,}\) the Taylor series for \(f(x)\) centered at \(0\) is
The Maclaurin series for \(e^x\text{,}\) \(\sin(x)\text{,}\) \(\cos(x)\text{,}\) and \(\frac{1}{1-x}\) will be used frequently, so we should be certain to know and recognize them well.
There is an important connection between power series and Taylor series. This is illustrated in the following example.
Suppose \(f\) is defined by a power series centered at 0 so that
Determine the first 4 derivatives of \(f\) evaluated at 0 in terms of the coefficients \(a_k\text{.}\)
Show that \(f^{(n)}(0) = n!a_n\) for each positive integer \(n\text{.}\)
Explain how the result of (b) tells us the following:
On its interval of convergence, a power series is the Taylor series of its sum.
So
and
for each \(k \geq 0\text{.}\) But these are just the coefficients of the Taylor series expansion of \(f\text{,}\) which leads us to the following observation.
Observe that
and therefore
Since
every term of this series vanishes at \(x = 0\) except the first. Thus it follows that \(f^{(n)}(0) = n(n-1)(n-2) \cdots (1) a_n\text{,}\) so \(f^{(n)}(0) = n! a_n\text{.}\)
Since \(a_k = \frac{f^{(k)}(0)}{k!}\) for each \(k \geq 0\text{,}\) we see that these are just the coefficients of the Taylor series expansion of \(f\text{,}\) and thus we get the unsurprising result that a power series is identical to the Taylor series of the function it defines.
Thus, on its interval of convergence, every power series is in fact the Taylor series of the function it defines.
In the previous section (in Figure 7.55 and Example 7.58) we observed that the Taylor polynomials centered at \(0\) for \(e^x\text{,}\) \(\cos(x)\text{,}\) and \(\sin(x)\) converged to these functions for all values of \(x\) in their domain, but that the Taylor polynomials centered at \(0\) for \(\frac{1}{1-x}\) converge to \(\frac{1}{1-x}\) on the interval \((-1,1)\) and diverge for all other values of \(x\text{.}\) So the Taylor series for a function \(f(x)\) does not need to converge for all values of \(x\) in the domain of \(f\text{.}\)
Our observations suggest two natural questions: can we determine the values of \(x\) for which a given Taylor series converges? And does the Taylor series for a function \(f\) actually converge to \(f(x)\text{?}\)
Graphical evidence suggests that the Taylor series centered at \(0\) for \(e^x\) converges for all values of \(x\text{.}\) To verify this, use the Ratio Test to determine all values of \(x\) for which the Taylor series
converges absolutely.
Recall that the Ratio Test applies only to series of nonnegative terms. In this example, the variable \(x\) may have negative values. But we are interested in absolute convergence, so we apply the Ratio Test to the series
Now, observe that
for any value of \(x\text{.}\) So the Taylor series (7.23) converges absolutely for every value of \(x\text{,}\) and thus converges for every value of \(x\text{.}\)
One question still remains: while the Taylor series for \(e^x\) converges for all \(x\text{,}\) what we have done does not tell us that this Taylor series actually converges to \(e^x\) for each \(x\text{.}\) We'll return to this question when we consider the error in a Taylor approximation near the end of this section.
As we did for power series, we define the interval of convergence of a Taylor series to be the set of values of \(x\) for which the series converges. And as we did with power series, we typically use the Ratio Test to find the values of \(x\) for which the Taylor series converges absolutely, and then check the endpoints separately if the radius of convergence is finite.
Use the Ratio Test to explicitly determine the interval of convergence of the Taylor series for \(f(x) = \frac{1}{1-x}\) centered at \(x=0\text{.}\)
Use the Ratio Test to explicitly determine the interval of convergence of the Taylor series for \(f(x) = \cos(x)\) centered at \(x=0\text{.}\)
Use the Ratio Test to explicitly determine the interval of convergence of the Taylor series for \(f(x) = \sin(x)\) centered at \(x=0\text{.}\)
\((-\infty, \infty)\text{.}\)
\((-\infty, \infty)\text{.}\)
The interval \((-1,1)\text{.}\)
Using the Ratio Test with the \(k\)th term \(\frac{|x|^{2k}}{(2k)!}\) we get
So the interval of convergence of the Taylor series for \(f(x) = \cos(x)\) centered at \(x=0\) is \((-\infty, \infty)\text{.}\)
Using the Ratio Test with the \(k\)th term \(\frac{|x|^{2k+1}}{(2k+1)!}\) we get
So the interval of convergence of the Taylor series for \(f(x) = \sin(x)\) centered at \(x=0\) is \((-\infty, \infty)\text{.}\)
Using the Ratio Test with the \(k\)th term \(|x|^{k}\) we get
So the series \(\sum_{k=0}^{\infty} x^k\) converges absolutely when \(|x| \lt 1\text{,}\) that is, for \(-1 \lt x \lt 1\text{,}\) and diverges when \(|x| \gt 1\text{.}\) Since the Ratio Test is inconclusive when \(|x| = 1\text{,}\) we need to check the endpoints \(x = 1\) and \(x = -1\) separately.
When \(x=1\) we have the series \(\sum_{k=0}^{\infty} 1\) which diverges since \(\lim_{k \to \infty} 1 \neq 0\text{.}\)
When \(x=-1\) we have the series \(\sum_{k=0}^{\infty} (-1)^k\) which diverges since \(\lim_{k \to \infty} (-1)^k\) does not exist.
Therefore, the interval of convergence of the Taylor series for \(f(x) = \frac{1}{1-x}\) centered at \(x=0\) is \((-1,1)\text{.}\)
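We can see this finite interval of convergence numerically. The following Python sketch (our illustration; the function name is ours) evaluates partial sums of the geometric series at points inside and outside \((-1,1)\text{:}\)

```python
# Partial sums of sum_{k=0}^{n} x^k, which should converge to
# 1/(1-x) only when |x| < 1.
def geom_partial(x, n):
    return sum(x**k for k in range(n + 1))

for x in (0.5, 0.9, 1.5):
    print(x, [geom_partial(x, n) for n in (5, 10, 20)])
```

For \(x = 0.5\) and \(x = 0.9\) the partial sums settle toward \(\frac{1}{1-x}\) (namely \(2\) and \(10\)), while for \(x = 1.5\) they grow without bound.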
The Ratio Test allows us to determine the set of \(x\) values for which a Taylor series converges absolutely. However, just because a Taylor series for a function \(f\) converges, we cannot be certain that the Taylor series actually converges to \(f(x)\text{.}\) To show why and where a Taylor series does in fact converge to the function \(f\text{,}\) we next consider the error that is present in Taylor polynomials.
We now know how to find Taylor polynomials for functions such as \(\sin(x)\text{,}\) and how to determine the interval of convergence of the corresponding Taylor series. We next develop an error bound that will tell us how well an \(n\)th order Taylor polynomial \(P_n(x)\) approximates its generating function \(f(x)\text{.}\) This error bound will also allow us to determine whether a Taylor series on its interval of convergence actually equals the function \(f\) from which the Taylor series is derived. Finally, we will be able to use the error bound to determine the order of the Taylor polynomial \(P_n(x)\) that will ensure that \(P_n(x)\) approximates \(f(x)\) to the desired degree of accuracy.
For this argument, we assume throughout that we center our approximations at \(0\) (but a similar argument holds for approximations centered at \(a\)). We define the exact error, \(E_n(x)\text{,}\) that results from approximating \(f(x)\) with \(P_n(x)\) by
We are particularly interested in \(|E_n(x)|\text{,}\) the distance between \(P_n\) and \(f\text{.}\) Because
for \(0 \leq k \leq n\text{,}\) we know that
for \(0 \leq k \leq n\text{.}\) Furthermore, since \(P_n(x)\) is a polynomial of degree less than or equal to \(n\text{,}\) we know that
Thus, since \(E^{(n+1)}_n(x) = f^{(n+1)}(x) - P_n^{(n+1)}(x)\text{,}\) it follows that
for all \(x\text{.}\)
Suppose that we want to approximate \(f(x)\) at a number \(c\) close to \(0\) using \(P_n(c)\text{.}\) If we assume \(|f^{(n+1)}(t)|\) is bounded by some number \(M\) on \([0, c]\text{,}\) so that
for all \(0 \leq t \leq c\text{,}\) then we can say that
for all \(t\) between \(0\) and \(c\text{.}\) Equivalently,
on \([0, c]\text{.}\) Next, we integrate the three terms in Inequality (7.24) from \(t = 0\) to \(t = x\text{,}\) and thus find that
for every value of \(x\) in \([0, c]\text{.}\) Since \(E^{(n)}_n(0) = 0\text{,}\) the First FTC tells us that
for every \(x\) in \([0, c]\text{.}\)
Integrating this last inequality, we obtain
and thus
for all \(x\) in \([0, c]\text{.}\)
Integrating \(n\) times, we arrive at
for all \(x\) in \([0, c]\text{.}\) This enables us to conclude that
for all \(x\) in \([0, c]\text{,}\) and we have found a bound on the approximation's error, \(E_n\text{.}\)
Our work above was based on the approximation centered at \(a = 0\text{;}\) the argument may be generalized to hold for any value of \(a\text{,}\) which results in the following theorem.
Let \(f\) be a continuous function with \(n+1\) continuous derivatives. Suppose that \(M\) is a positive real number such that \(\left|f^{(n+1)}(x)\right| \le M\) on the interval \([a, c]\text{.}\) If \(P_n(x)\) is the \(n\)th order Taylor polynomial for \(f(x)\) centered at \(x=a\text{,}\) then
We can use this error bound to tell us important information about Taylor polynomials and Taylor series, as we see in the following examples and activities.
Determine how well the 10th order Taylor polynomial \(P_{10}(x)\) for \(\sin(x)\text{,}\) centered at \(0\text{,}\) approximates \(\sin(2)\text{.}\)
To answer this question we use \(f(x) = \sin(x)\text{,}\) \(c = 2\text{,}\) \(a=0\text{,}\) and \(n = 10\) in the Lagrange error bound formula. We also need to find an appropriate value for \(M\text{.}\) Note that the derivatives of \(f(x) = \sin(x)\) are all equal to \(\pm \sin(x)\) or \(\pm \cos(x)\text{.}\) Thus,
for any \(n\) and \(x\text{.}\) Therefore, we can choose \(M\) to be \(1\text{.}\) Then
So \(P_{10}(2)\) approximates \(\sin(2)\) to within \(0.00005130671797\text{.}\) A computer algebra system tells us that
with an actual difference of about \(0.0000500159\text{.}\)
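This example can be reproduced directly. The Python sketch below (our illustration; the function name is ours) computes \(P_{10}(2)\text{,}\) the actual error, and the Lagrange bound \(\frac{2^{11}}{11!}\text{:}\)

```python
import math

def sin_taylor(x, order):
    # Taylor polynomial for sin centered at 0, keeping the terms
    # (-1)^k x^(2k+1)/(2k+1)! with 2k+1 <= order.
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range((order - 1) // 2 + 1))

p10 = sin_taylor(2.0, 10)
bound = 2**11 / math.factorial(11)   # Lagrange bound with M = 1, c = 2
err = abs(math.sin(2.0) - p10)
print(p10, err, bound)               # the actual error stays under the bound
```

Note that \(P_{10} = P_9\) for the sine function, since the coefficient of \(x^{10}\) is \(0\text{.}\)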
Let \(P_n(x)\) be the \(n\)th order Taylor polynomial for \(\sin(x)\) centered at \(x=0\text{.}\) Determine how large we need to choose \(n\) so that \(P_n(2)\) approximates \(\sin(2)\) to \(20\) decimal places.
\(n \ge 27\text{.}\)
In this example, if we can find a value of \(n\) so that
then we will have
Again we use \(f(x) = \sin(x)\text{,}\) \(c = 2\text{,}\) \(a=0\text{,}\) and \(M = 1\) from the previous example. So we need to find \(n\) to make
There is no good way to solve equations involving factorials, so we simply use trial and error, evaluating \(\frac{2^{n+1}}{(n+1)!}\) at different values of \(n\) until we find one that is small enough.
| \(n\) | \(\frac{2^{n+1}}{(n+1)!}\) |
|---|---|
| \(10\) | \(5.130671797 \times 10^{-5}\) |
| \(20\) | \(4.104743250 \times 10^{-14}\) |
| \(25\) | \(1.664028884 \times 10^{-19}\) |
| \(26\) | \(1.232613988 \times 10^{-20}\) |
| \(27\) | \(8.804385630 \times 10^{-22}\) |
So we need to use an \(n\) of at least 27 to ensure accuracy to 20 decimal places.
A computer algebra system gives
and we can see that these agree to 20 places.
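Instead of tabulating values by hand, we can let a loop do the trial and error. This Python sketch (our illustration; the function name is ours) finds the smallest \(n\) for which the bound \(\frac{2^{n+1}}{(n+1)!}\) falls below \(10^{-20}\text{:}\)

```python
import math

def bound(n):
    # Lagrange error bound for approximating sin(2) by P_n(2),
    # with M = 1: 2^(n+1)/(n+1)!
    return 2**(n + 1) / math.factorial(n + 1)

n = 0
while bound(n) >= 1e-20:   # stop at the first n with bound below 10^-20
    n += 1
print(n, bound(n))
```

The loop stops at \(n = 27\text{,}\) matching the table: \(n = 26\) gives a bound just above \(10^{-20}\) while \(n = 27\) dips below it.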
Show that the Taylor series for \(\sin(x)\) actually converges to \(\sin(x)\) for all \(x\text{.}\)
Recall from the previous example that since \(f(x) = \sin(x)\text{,}\) we know
for any \(n\) and \(x\text{.}\) This allows us to choose \(M = 1\) in the Lagrange error bound formula. Thus,
for every \(x\text{.}\)
We showed in earlier work that the Taylor series \(\sum_{k=0}^{\infty} \frac{x^k}{k!}\) converges for every value of \(x\text{.}\) Because the terms of any convergent series must approach zero, it follows that
for every value of \(x\text{.}\) Thus, taking the limit as \(n \to \infty\) in Inequality (7.25), it follows that
As a result, we can now write
for every real number \(x\text{.}\)
Show that the Taylor series centered at \(0\) for \(\cos(x)\) converges to \(\cos(x)\) for every real number \(x\text{.}\)
Next we consider the Taylor series for \(e^x\text{.}\)
Show that the Taylor series centered at \(0\) for \(e^x\) converges to \(e^x\) for every nonnegative value of \(x\text{.}\)
Show that the Taylor series centered at \(0\) for \(e^x\) converges to \(e^x\) for every negative value of \(x\text{.}\)
Explain why the Taylor series centered at \(0\) for \(e^x\) converges to \(e^x\) for every real number \(x\text{.}\) Recall that we earlier showed that the Taylor series centered at \(0\) for \(e^x\) converges for all \(x\text{,}\) and we have now completed the argument that the Taylor series for \(e^x\) actually converges to \(e^x\) for all \(x\text{.}\)
Let \(P_n(x)\) be the \(n\)th order Taylor polynomial for \(e^x\) centered at \(0\text{.}\) Find a value of \(n\) so that \(P_n(5)\) approximates \(e^5\) correct to \(8\) decimal places.
Compare Example 7.64.
Use the fact that \(|f^{(n)}(x)| \le e^c\) on the interval \([0,c]\) for any fixed positive value of \(c\text{.}\)
Repeat the argument in (a) but replace \(e^c\) with \(1\text{,}\) and everything else holds in the same way.
Combine the results of (a) and (b).
\(n = 28\text{.}\)
Compare Example 7.64.
Let \(x \ge 0\text{.}\) Since \(f(x) = e^x\text{,}\) \(f^{(n)}(x) = e^x\) for every natural number \(n\text{.}\) Since \(e^x\) is an increasing function, we know that \(|f^{(n)}(x)| \le e^c\) on the interval \([0,c]\) for any fixed positive value of \(c\text{.}\) Thus, by the Lagrange error formula, we can say that
Since the series \(\sum \frac{x^{n}}{n!}\) converges for every \(x\text{,}\) \(\frac{x^{n}}{n!} \to 0\) as \(n \to \infty\text{,}\) and thus \(\frac{x^{n+1}}{(n+1)!} \to 0\) as \(n \to \infty\) for every \(x\) in \([0,c]\text{.}\) Further, since \(e^c\) is a constant independent of \(n\text{,}\) \(e^c \frac{x^{n+1}}{(n+1)!} \to 0\) as well. Thus,
as desired.
When \(x \lt 0\text{,}\) we know \(e^x \lt 1\text{.}\) Thus, we can repeat our argument in (a) but replace \(e^c\) with \(1\text{,}\) and everything else holds in the same way.
Because we have shown that the Taylor series for \(e^x\) converges to \(e^x\) for both every nonnegative \(x\)-value and for every negative \(x\)-value, it follows that we have convergence for every value of \(x\text{.}\)
Since \(e^x\) is increasing on \([0,5]\) we know that \(e^x \lt e^5\) on \([0,5]\text{.}\) Now \(e^5 \lt 243\text{,}\) so
We want a value of \(n\) that makes this error term less than \(10^{-8}\text{.}\) Testing various values of \(n\) gives
so we can choose \(n = 28\text{.}\) A computer algebra system shows that \(P_{28}(5) \approx 148.413159102551\) while \(e^5 \approx 148.413159102577\) and we can see that these two approximations agree to 8 decimal places.
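This computation can also be checked numerically. The Python sketch below (our illustration; the function name is ours) evaluates the bound \(243 \cdot \frac{5^{n+1}}{(n+1)!}\) at \(n = 28\) and compares \(P_{28}(5)\) with \(e^5\text{:}\)

```python
import math

def bound(n):
    # Lagrange error bound for e^5 on [0, 5], using M = 243 > e^5.
    return 243 * 5**(n + 1) / math.factorial(n + 1)

n = 28
p = sum(5.0**k / math.factorial(k) for k in range(n + 1))  # P_28(5)
print(bound(n), p, math.exp(5))
```

The bound at \(n = 28\) is below \(10^{-8}\) (while at \(n = 27\) it is not), and the partial sum agrees with \(e^5\) well past eight decimal places.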
The Taylor series centered at \(x=a\) for a function \(f\) is
The \(n\)th order Taylor polynomial centered at \(a\) for \(f\) is the \(n\)th partial sum of its Taylor series centered at \(a\text{.}\) So the \(n\)th order Taylor polynomial for a function \(f\) is an approximation to \(f\) on the interval where the Taylor series converges; for the values of \(x\) for which the Taylor series converges to \(f\) we write
The connection between power series and Taylor series is that they are essentially the same thing: on its interval of convergence a power series is the Taylor series of its sum.
We can often assume a solution to a given problem can be written as a power series, then use the information in the problem to determine the coefficients in the power series. This method allows us to approximate solutions to certain problems using partial sums of the power series; that is, we can find approximate solutions that are polynomials.
The Lagrange Error Bound shows us how to determine the accuracy in using a Taylor polynomial to approximate a function. More specifically, if \(P_n(x)\) is the \(n\)th order Taylor polynomial for \(f\) centered at \(x=a\) and if \(M\) is an upper bound for \(\left|f^{(n+1)}(x)\right|\) on the interval \([a, c]\text{,}\) then
In this exercise we investigate the Taylor series of polynomial functions.
Find the 3rd order Taylor polynomial centered at \(a = 0\) for \(f(x) = x^3-2x^2+3x-1\text{.}\) Does your answer surprise you? Explain.
Without doing any additional computation, find the 4th, 12th, and 100th order Taylor polynomials (centered at \(a = 0\)) for \(f(x) = x^3-2x^2+3x-1\text{.}\) Why should you expect this?
Now suppose \(f(x)\) is a degree \(m\) polynomial. Completely describe the \(n\)th order Taylor polynomial (centered at \(a = 0\)) for each \(n\text{.}\)
In this exercise, we will build the Taylor series expansion of the binomial function \(f(x)=(1+x)^p\text{,}\) where \(p\) is any constant.
Find the first three derivatives of \(f(x)\text{:}\) \(f'(x)\text{,}\) \(f''(x)\text{,}\) and \(f'''(x)\text{.}\)
Build the third-degree Taylor polynomial for \(f(x)\text{.}\)
Using your Taylor polynomial, you should be able to see that the full Taylor series looks like:
Compute the radius of convergence of this series.
Use this general rule to find the Taylor series about 0 for the function \(g(x)=\sqrt{1+x}\text{.}\)
Based on the examples we have seen, we might expect that the Taylor series for a function \(f\) always converges to the values \(f(x)\) on its interval of convergence. We explore that idea in more detail in this exercise. Let \(f(x) = \begin{cases}e^{-1/x^2} & \text{ if } x \neq 0, \\ 0 & \text{ if } x = 0. \end{cases}\)
Show, using the definition of the derivative, that \(f'(0) = 0\text{.}\)
It can be shown that \(f^{(n)}(0) = 0\) for all \(n \geq 2\text{.}\) Assuming that this is true, find the Taylor series for \(f\) centered at 0.
What is the interval of convergence of the Taylor series centered at 0 for \(f\text{?}\) Explain. For which values of \(x\) in the interval of convergence does the Taylor series converge to \(f(x)\text{?}\)