To understand that numerical algorithms such as Euler’s method allow us to approximate solutions to initial value problems, and that there are more efficient algorithms than Euler’s method, such as the Runge-Kutta methods.
To understand that Taylor’s Theorem is a very useful tool for studying differential equations.
To understand that error analysis of the rate of convergence is very important for any numerical algorithm.
Just as numerical algorithms are useful when finding the roots of polynomials, numerical methods will prove very useful in our study of ordinary differential equations. Consider the polynomial \(f(x) = x^2 - 2\text{.}\) We do not need a numerical algorithm to see that the roots of this polynomial are \(x = \sqrt{2}\) and \(x = - \sqrt{2}\text{.}\) However, a numerical method such as the Newton-Raphson Algorithm is very useful for approximating \(\sqrt{2}\) as a decimal. (See any calculus text for a description of the Newton-Raphson Algorithm.)
Similarly, it may be easier to generate a numerical solution for differential equations if our goal is simply to plot a solution. In addition, there will be differential equations for which it is impossible to find a solution in terms of elementary functions such as polynomials, trigonometric functions, and exponential functions.
Subsection 1.4.1 Euler’s Method
Suppose that we wish to solve the initial value problem
\begin{align}
y' & = y + t,\tag{1.4.1}\\
y(0) & = 1.\tag{1.4.2}
\end{align}
The equation \(y' = y + t\) is not separable, and separation of variables is currently the only analytic technique at our disposal. However, we can try to find a numerical approximation for the solution. A numerical approximation is simply a table (possibly very large) of \(t\) and \(y\) values.
We will attempt to find a numerical solution for (1.4.1)–(1.4.2) on the interval \([0, 1]\text{.}\) Even with the use of a computer, we cannot approximate the solution at every single point on an interval. For the initial value problem
\begin{align*}
y' & = f(t, y)\\
y(t_0) & = y_0,
\end{align*}
we might be able to find approximations at \(a = t_0, t_1, t_2, \ldots, t_N = b\) in \([a, b]\) at best. If we choose \(t_1, t_2, \ldots, t_N\) to be equally spaced on \([a, b]\text{,}\) we can write
\begin{equation*}
t_k = t_0 + kh,
\end{equation*}
where \(h = (b - a)/N\) and \(k = 1, 2, \ldots, N\text{.}\) We say that \(h\) is the step size for our approximation.
Given an approximation \(Y_k\) for the solution \(y_k = y(t_k)\text{,}\) the question is how to find an approximate solution \(Y_{k+1}\) at \(t_{k+1}\text{.}\) To generate the second approximation, we will construct a tangent line to the solution at \(y(t_0) = y_0\text{.}\) If we use the slope of the solution curve at \(t_0\text{,}\) then
the estimate for our solution at \(t_1 = t_0 + h\) is
\begin{equation*}
Y_1 = Y_0 + h f(t_0, Y_0).
\end{equation*}
Similarly, the approximation at \(t_2 = t_0 + 2h\) will be
\begin{equation*}
Y_2 = Y_1 + h f(t_1, Y_1).
\end{equation*}
Our general algorithm is
\begin{equation*}
Y_{k+1} = Y_k + h f(t_k, Y_k).
\end{equation*}
The idea is to compute tangent lines at each step and use this information to get our next approximation.
The algorithm that we have described is known as Euler’s method. Let us estimate a solution to (1.4.1)–(1.4.2) on the interval \([0, 1]\) with step size \(h = 0.1\text{.}\) Since \(y(0) = 1\text{,}\) we can make our first approximation exact,
\begin{equation*}
Y_0 = y(0) = 1.
\end{equation*}
To generate the second approximation, we will construct a tangent line to the solution at \(y(0) = 1\text{.}\) If we use the slope of the solution curve at \(t_0 = 0\text{,}\) then
\begin{equation*}
Y_1 = Y_0 + h f(t_0, Y_0) = 1 + (0.1)(1 + 0) = 1.1.
\end{equation*}
Continuing in this fashion, we obtain the remaining approximations.
The initial value problem (1.4.1)–(1.4.2) is, in fact, solvable analytically with solution \(y(t) = 2e^t - t - 1\text{.}\) We can compare our approximation to the exact solution in Table 1.4.1. We can also see graphs of the approximate and exact solutions in Figure 1.4.2. Notice that the error grows as we get further away from our initial value. In fact, the graph of the approximation for \(h = 0.001\) is obscured by the graph of the exact solution. In addition, a smaller step size gives us a more accurate approximation (Table 1.4.3).
Table 1.4.1. Euler’s approximation for \(y' = y + t\)
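To see the algorithm in action, here is a minimal Python sketch of Euler’s method applied to (1.4.1)–(1.4.2) with \(h = 0.1\text{.}\) The function names euler and f are ours, not from any library; the loop is exactly \(Y_{k+1} = Y_k + h f(t_k, Y_k)\text{,}\) and the printed values agree with the tables in this section.

    import math

    def f(t, y):
        # right-hand side of y' = y + t
        return y + t

    def euler(f, t0, y0, h, N):
        # Euler's method: Y_{k+1} = Y_k + h f(t_k, Y_k)
        ts, Ys = [t0], [y0]
        for k in range(N):
            Ys.append(Ys[-1] + h * f(ts[-1], Ys[-1]))
            ts.append(t0 + (k + 1) * h)
        return ts, Ys

    ts, Ys = euler(f, 0.0, 1.0, 0.1, 10)
    for t, Y in zip(ts, Ys):
        y_exact = 2 * math.exp(t) - t - 1   # analytic solution y(t) = 2e^t - t - 1
        print(f"{t:.1f}  {Y:.4f}  {y_exact:.4f}  {abs(y_exact - Y):.4f}")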
Activity 1.4.1.
(a)
Use separation of variables to solve the initial value problem.
(b)
Compute \(y(x)\) for \(x = 0, 0.2, 0.4, \ldots, 1\text{.}\)
(c)
Use Euler’s method to approximate solutions to the initial value problem for \(x = 0, 0.2, 0.4, \ldots, 1\text{.}\)
(d)
Compare the exact values of the solution (Task 1.4.1.b) to the approximate values of the solution (Task 1.4.1.c) and comment on what happens as \(x\) varies from \(0\) to \(1\text{.}\)
Subsection 1.4.2 Finding an Error Bound
To fully understand Euler’s method, we will need to recall Taylor’s theorem from calculus: if \(y\) has \(n + 1\) continuous derivatives on an interval containing \(t\) and \(t + h\text{,}\) then
\begin{equation*}
y(t + h) = y(t) + h y'(t) + \frac{h^2}{2!} y''(t) + \cdots + \frac{h^n}{n!} y^{(n)}(t) + \frac{h^{n+1}}{(n+1)!} y^{(n+1)}(\xi)
\end{equation*}
for some \(\xi\) between \(t\) and \(t + h\text{.}\) Euler’s method arises from keeping only the first two terms,
\begin{equation*}
y(t + h) \approx y(t) + h y'(t) = y(t) + h f(t, y(t)).
\end{equation*}
The terms that we are omitting all contain powers of \(h\) of at least degree two. If \(h\) is small, then \(h^n\) for \(n = 2, 3, \ldots\) will be very small, and these terms will not matter much.
We can actually estimate the error incurred by Euler’s method if we make use of Taylor’s Theorem.
Theorem 1.4.5.
Let \(y\) be the unique solution to the initial value problem
\begin{align*}
y' & = f(t, y)\\
y(a) & = y_0
\end{align*}
on the interval \([a, b]\text{,}\) and suppose that there exists a constant \(L\) such that
\begin{equation*}
|f(t, y_1) - f(t, y_2)| \leq L |y_1 - y_2|
\end{equation*}
whenever \((t, y_1)\) and \((t, y_2)\) are in \(D = [a, b] \times {\mathbb R}\text{.}\) Also assume that there exists an \(M\) such that
\begin{equation*}
|y''(t)| \leq M
\end{equation*}
for all \(t \in [a, b]\text{.}\) If \(Y_0, \ldots, Y_N\) are the approximations generated by Euler’s method for some positive integer \(N\text{,}\) then
\begin{equation*}
|y_k - Y_k| \leq \frac{hM}{2L} \left[ e^{L(t_k - a)} - 1 \right]
\end{equation*}
for each \(k = 0, 1, \ldots, N\text{.}\)
A condition such as
\begin{equation*}
|f(t, y_1) - f(t, y_2)| \leq L |y_1 - y_2|
\end{equation*}
whenever \((t, y_1)\) and \((t, y_2)\) are in \(D = [a, b] \times {\mathbb R}\) is called a Lipschitz condition. Many of the functions that we will consider satisfy such a condition. If the condition is satisfied, we can usually say a great deal about the function.
Table 1.4.6. Error bound and actual error
\begin{equation*}
\begin{array}{cccccc}
k & t_k & Y_k & y_k = y(t_k) & |y_k - Y_k| & \text{Estimated Error} \\ \hline
0 & 0.0 & 1.0000 & 1.0000 & 0.0000 & 0.0000 \\
1 & 0.1 & 1.1000 & 1.1103 & 0.0103 & 0.0286 \\
2 & 0.2 & 1.2200 & 1.2428 & 0.0228 & 0.0602 \\
3 & 0.3 & 1.3620 & 1.3997 & 0.0377 & 0.0951 \\
4 & 0.4 & 1.5282 & 1.5836 & 0.0554 & 0.1337 \\
5 & 0.5 & 1.7210 & 1.7974 & 0.0764 & 0.1763 \\
6 & 0.6 & 1.9431 & 2.0442 & 0.1011 & 0.2235 \\
7 & 0.7 & 2.1974 & 2.3275 & 0.1301 & 0.2756 \\
8 & 0.8 & 2.4872 & 2.6511 & 0.1639 & 0.3331 \\
9 & 0.9 & 2.8159 & 3.0192 & 0.2033 & 0.3968 \\
10 & 1.0 & 3.1875 & 3.4366 & 0.2491 & 0.4671
\end{array}
\end{equation*}
We can now compare the estimated error from our theorem to the actual error of our example. We first need to determine \(M\) and \(L\text{.}\) Since
\begin{equation*}
|f(t, y_1) - f(t, y_2)| = |(y_1 + t) - (y_2 + t)| = |y_1 - y_2|,
\end{equation*}
we can take \(L\) to be one. Since \(y'' = 2e^t\text{,}\) we can bound \(y''\) on the interval \([0, 1]\) by \(M = 2e\text{.}\) Thus, we can bound the error by
\begin{equation*}
|y_k - Y_k| \leq \frac{hM}{2L} \left[ e^{L(t_k - a)} - 1 \right] = 0.1 e \left( e^{t_k} - 1 \right)
\end{equation*}
for \(h = 0.1\text{.}\) Our results are in Table 1.4.6.
Subsection 1.4.3 Improving Euler’s Method
If we wish to improve upon Euler’s method, we could add more terms of the Taylor series. For example, we can obtain a more accurate approximation by using a quadratic Taylor polynomial,
\begin{equation*}
y(t_0 + h) \approx y(t_0) + h f(t_0, y_0) + \frac{h^2}{2} y''(t_0).
\end{equation*}
However, we need to know \(y''(t_0)\) in order to use this approximation. Using the chain rule from multivariable calculus, we can differentiate both sides of \(y' = f(t, y)\) to obtain
\begin{equation*}
y'' = \frac{\partial f}{\partial t} + \frac{\partial f}{\partial y}\, y' = f_t + f f_y.
\end{equation*}
The problem is that some preliminary analytic work must be done. That is, before we can write a program to compute our solution, we must find \(\partial f/\partial t\) and \(\partial f / \partial y\text{,}\) although this is less of a problem with the availability of computer algebra systems.
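For instance, with \(f(t, y) = y + t\) we have \(\partial f/\partial t = 1\) and \(\partial f/\partial y = 1\text{,}\) so \(y'' = 1 + y + t\text{.}\) Here is a Python sketch of the resulting second-order Taylor method, with the derivatives worked out by hand; the function name taylor2 is ours.

    import math

    def taylor2(t0, y0, h, N):
        # second-order Taylor method for y' = y + t, using y'' = f_t + f_y f = 1 + y + t
        t, Y = t0, y0
        values = [(t, Y)]
        for _ in range(N):
            yp = Y + t         # y'  = f(t, y) = y + t
            ypp = 1 + Y + t    # y'' = 1 + y + t
            Y += h * yp + h**2 / 2 * ypp
            t += h
            values.append((t, Y))
        return values

    for t, Y in taylor2(0.0, 1.0, 0.1, 10):
        print(f"{t:.1f}  {Y:.4f}  {2 * math.exp(t) - t - 1:.4f}")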
Around 1900, two German mathematicians, Carl Runge and Martin Wilhelm Kutta, independently invented several numerical algorithms to solve differential equations. These methods, known as Runge-Kutta methods, estimate the higher-order terms of the Taylor series to find an approximation that does not depend on computing derivatives of \(f(t, y)\text{.}\)
Since
\begin{equation}
y(t_1) - y(t_0) = \int_{t_0}^{t_1} y'(t) \, dt = \int_{t_0}^{t_1} f(t, y(t)) \, dt\tag{1.4.3}
\end{equation}
by the Fundamental Theorem of Calculus. In Euler’s method, we approximate the right-hand side of (1.4.3) by
\begin{equation*}
y_1 - y_0 = f(t_0, y_0) h.
\end{equation*}
In terms of the definite integral, this is simply a left-hand sum. In the improved Euler’s method or the second-order Runge-Kutta method, we estimate the right-hand side of (1.4.3) using the trapezoid rule from calculus,
\begin{equation}
y_1 - y_0 \approx \frac{h}{2}\left[ f(t_0, y_0) + f(t_1, y_1) \right].\tag{1.4.4}
\end{equation}
However, we have a problem, since \(y_1\) appears in the right-hand side of our approximation. To get around this difficulty, we will replace \(y_1\) in the right-hand side of (1.4.4) with the Euler approximation for \(y_1\text{.}\) Thus,
\begin{equation*}
y_1 = y_0 + \frac{h}{2}\left[ f(t_0, y_0) + f(t_1, y_0 + h f(t_0, y_0)) \right].
\end{equation*}
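A minimal Python sketch of this improved Euler (second-order Runge-Kutta) step, again applied to our example \(y' = y + t\text{;}\) the function name improved_euler is ours.

    import math

    def f(t, y):
        return y + t

    def improved_euler(f, t0, y0, h, N):
        # average the slope at the left endpoint with the slope at the Euler prediction
        t, Y = t0, y0
        values = [(t, Y)]
        for _ in range(N):
            slope_left = f(t, Y)
            slope_right = f(t + h, Y + h * slope_left)  # uses Euler's approximation for y_1
            Y += h / 2 * (slope_left + slope_right)     # trapezoid-rule average
            t += h
            values.append((t, Y))
        return values

    for t, Y in improved_euler(f, 0.0, 1.0, 0.1, 10):
        print(f"{t:.1f}  {Y:.4f}  {2 * math.exp(t) - t - 1:.4f}")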
To understand that the second-order Runge-Kutta method is actually an improvement over the traditional Euler’s method, we will need to use the Taylor approximation for a function of two variables. Let us assume that \(f(x,y)\) is defined on some rectangle and that all of its partial derivatives are continuous. Then
\begin{equation*}
f(x + h, y + k) = \sum_{n = 0}^{\infty} \frac{1}{n!} \left( h \frac{\partial}{\partial x} + k \frac{\partial}{\partial y} \right)^n f(x, y).
\end{equation*}
As in the case of the single variable Taylor series, we can write a Taylor polynomial if the Taylor series is truncated,
\begin{align*}
f(x + h, y + k) & = \sum_{n = 0}^{N} \frac{1}{n!} \left( h \frac{\partial}{\partial x} + k \frac{\partial}{\partial y} \right)^n f(x, y)\\
& + \frac{1}{(N+1)!} \left( h \frac{\partial}{\partial x} + k \frac{\partial}{\partial y} \right)^{N+1} f(\overline{x}, \overline{y} ),
\end{align*}
where the second term is the remainder term and \((\overline{x}, \overline{y} )\) lies on the line segment joining \((x, y)\) and \((x + h, y + k)\text{.}\)
In the improved Euler’s method, we adopt a formula
\begin{equation*}
y(t + h) = y(t) + w_1 F_1 + w_2 F_2,
\end{equation*}
where
\begin{align*}
F_1 & = h f(t, y)\\
F_2 & = h f(t + \alpha h, y + \beta F_1).
\end{align*}
That is,
\begin{equation}
y(t + h) = y(t) + w_1 h f(t, y) + w_2 h f(t + \alpha h, y + \beta h f(t, y)).\tag{1.4.5}
\end{equation}
The idea is to choose the constants \(w_1\text{,}\) \(w_2\text{,}\) \(\alpha\text{,}\) and \(\beta\) as accurately as possible in order to duplicate as many terms as possible in the Taylor series
\begin{equation}
y(t + h) = y(t) + h y'(t) + \frac{h^2}{2} y''(t) + \cdots.\tag{1.4.6}
\end{equation}
We can make equations (1.4.5) and (1.4.6) agree if we choose \(w_1 = 1\) and \(w_2 = 0\text{.}\) Since \(y' = f\text{,}\) we obtain Euler’s method.
If we are more careful about choosing our parameters, we can obtain agreement up through the \(h^2\) term. If we use the two variable Taylor series to expand \(f(t + \alpha h, y + \beta h f)\text{,}\) we have
\begin{equation*}
f(t + \alpha h, y + \beta h f) = f + \alpha h f_t + \beta h f f_y + {\mathcal O}(h^2),
\end{equation*}
where \({\mathcal O}(h^2)\) means that all of the subsequent terms have a factor of \(h^n\) with \(n \geq 2\text{.}\) Using this expression, we obtain a new form for (1.4.5),
\begin{equation*}
y(t + h) = y(t) + (w_1 + w_2) h f + w_2 h^2 \left( \alpha f_t + \beta f f_y \right) + {\mathcal O}(h^3).
\end{equation*}
Comparing this expression with the Taylor series (1.4.6) and recalling that \(y'' = f_t + f f_y\text{,}\) we see that the terms through \(h^2\) agree exactly when \(w_1 + w_2 = 1\text{,}\) \(w_2 \alpha = 1/2\text{,}\) and \(w_2 \beta = 1/2\text{.}\) The improved Euler’s method corresponds to the choice \(w_1 = w_2 = 1/2\) and \(\alpha = \beta = 1\text{.}\)
The improved Euler’s method or the second-order Runge-Kutta method is thus a more sophisticated algorithm that is less prone to error due to the step size \(h\text{:}\) Euler’s method is based on truncating the Taylor series after the linear term, while the improved Euler’s method matches the Taylor series through the \(h^2\) term. If we instead match terms in the Taylor series through degree four, we can improve our accuracy up to \(h^4\text{.}\) The idea is exactly the same, but the algebra becomes much more tedious. This method is known as the Runge-Kutta method of order 4 and is given by
\begin{equation*}
y(t + h) = y(t) + \frac{1}{6}(F_1 + 2F_2 + 2F_3 + F_4),
\end{equation*}
where
\begin{align*}
F_1 & = h f(t, y)\\
F_2 & = hf\left( t + \frac{1}{2} h, y + \frac{1}{2} F_1 \right)\\
F_3 & = hf\left( t + \frac{1}{2} h, y + \frac{1}{2} F_2 \right)\\
F_4 & = hf(t + h, y + F_3).
\end{align*}
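Here is one possible Python implementation of the fourth-order Runge-Kutta step, again applied to \(y' = y + t\text{,}\) \(y(0) = 1\text{;}\) the function name rk4 is ours, and this is a sketch rather than a production solver.

    import math

    def f(t, y):
        return y + t

    def rk4(f, t0, y0, h, N):
        # fourth-order Runge-Kutta: a weighted average of four slope samples per step
        t, Y = t0, y0
        values = [(t, Y)]
        for _ in range(N):
            F1 = h * f(t, Y)
            F2 = h * f(t + h / 2, Y + F1 / 2)
            F3 = h * f(t + h / 2, Y + F2 / 2)
            F4 = h * f(t + h, Y + F3)
            Y += (F1 + 2 * F2 + 2 * F3 + F4) / 6
            t += h
            values.append((t, Y))
        return values

    for t, Y in rk4(0.0, 1.0, 0.1, 10):
        print(f"{t:.1f}  {Y:.4f}  {2 * math.exp(t) - t - 1:.4f}")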
Subsection 1.4.4 Important Lessons
We can use Euler’s method to find an approximate solution to the initial value problem
\begin{align*}
y' & = f(t, y)\\
y(a) & = \alpha
\end{align*}
on an interval \([a, b]\text{.}\) If we wish to find approximations at \(N\) equally spaced points \(t_1, \ldots, t_N\text{,}\) where \(h = (b-a)/N\) and \(t_i = a + i h\text{,}\) our approximations should be
\begin{align*}
Y_0 & = \alpha,\\
Y_1 & = Y_0 + h f(\alpha, Y_0),\\
Y_2 & = Y_1 + h f(t_1, Y_1),\\
& \vdots\\
Y_{k+1} & = Y_k + h f(t_k, Y_k),\\
Y_N & = Y_{N-1} + h f(t_{N-1}, Y_{N-1}).
\end{align*}
In practice, no one uses Euler’s method. The Runge-Kutta methods are better algorithms.
Taylor’s Theorem is a very useful tool for studying differential equations. If \(x \gt x_0\text{,}\) then
\begin{equation*}
f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)}{2!}(x - x_0)^2 + \cdots + \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n + \frac{f^{(n+1)}(\xi)}{(n+1)!}(x - x_0)^{n+1}
\end{equation*}
for some \(\xi \in (x_0, x)\text{.}\)
Error analysis of the rate of convergence is very important for any numerical algorithm. Our approximation is more accurate for smaller values of \(h\text{.}\) Under reasonable conditions we can also bound the error by
\begin{equation*}
|y_k - Y_k| \leq \frac{hM}{2L} \left[ e^{L(t_k - a)} - 1 \right],
\end{equation*}
where \(M\) is a bound on \(|y''|\) and \(L\) is a Lipschitz constant for \(f\text{.}\)
A condition such as
\begin{equation*}
|f(t, y_1) - f(t, y_2)| \leq L |y_1 - y_2|
\end{equation*}
whenever \((t, y_1)\) and \((t, y_2)\) are in \(D = [a, b] \times {\mathbb R}\) is called a Lipschitz condition.
Using Taylor series, we can develop better numerical algorithms to compute solutions of differential equations. The Runge-Kutta methods are an important class of these algorithms. The Runge-Kutta method of order 4 is given by
\begin{equation*}
y(t + h) = y(t) + \frac{1}{6}(F_1 + 2F_2 + 2F_3 + F_4),
\end{equation*}
where
\begin{align*}
F_1 & = h f(t, y)\\
F_2 & = hf\left( t + \frac{1}{2} h, y + \frac{1}{2} F_1 \right)\\
F_3 & = hf\left( t + \frac{1}{2} h, y + \frac{1}{2} F_2 \right)\\
F_4 & = hf(t + h, y + F_3),
\end{align*}
with the error bound depending on \(h^4\text{.}\)
Reading Questions 1.4.5
1.
We can use Taylor polynomials to approximate a function \(f(x)\) near a point \(x_0\text{.}\) Explain why this approximation can only be expected to be accurate near \(x_0\text{.}\)
2.
Should we always use Euler’s method when approximating a solution to an initial value problem? Why or why not?
(a)
Find the exact solution of the initial value problem.
(b)
Use Euler’s method with step size \(h = 0.5\) to approximate the solution to the initial value problem on the interval \([0,2]\text{.}\) Your solution should include a table of approximate values of the dependent variable as well as the exact values of the dependent variable. Make sure that your approximations are accurate to four decimal places.
(c)
Sketch the graphs of the approximate and exact solutions.
(d)
Use the error bound theorem (Theorem 1.4.5) to estimate the error at each approximation. Your solution should include a table of the approximate values of the dependent variable, the exact values of the dependent variable, the error estimates, and the actual error. Make sure that your approximations are accurate to four decimal places.
7.
In this series of exercises, we will prove the error bound theorem for Euler’s method (Theorem 1.4.5).
Use Taylor’s Theorem to show that for all \(x \geq -1\) and any positive \(m\text{,}\)
\begin{equation*}
0 \leq (1 + x)^m \leq e^{mx}.
\end{equation*}
Use part (1) and geometric series to prove the following statement: If \(s\) and \(t\) are positive real numbers, and \(\{ a_i \}_{i = 0}^k\) is a sequence satisfying \(a_0 \geq -t/s\) and
\begin{equation*}
a_{i+1} \leq (1 + s) a_i + t, \qquad i = 0, 1, \ldots, k - 1,
\end{equation*}
then
\begin{equation*}
a_{i+1} \leq e^{(i+1)s} \left( a_0 + \frac{t}{s} \right) - \frac{t}{s}.
\end{equation*}