To understand that a linear map $T$ converts solutions of $d\mathbf{y}/dt = (T^{-1}AT)\mathbf{y}$ to solutions of $d\mathbf{x}/dt = A\mathbf{x}$, and, conversely, that the inverse of a linear map takes solutions of $d\mathbf{x}/dt = A\mathbf{x}$ to solutions of $d\mathbf{y}/dt = (T^{-1}AT)\mathbf{y}$.
To understand that a change of coordinates converts the system $d\mathbf{x}/dt = A\mathbf{x}$ to one of the following special cases,
$$\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}, \qquad \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix}, \qquad \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}.$$
Although it may seem that we have limited ourselves by attacking only a very small part of the problem of finding solutions for $d\mathbf{x}/dt = A\mathbf{x}$, we are actually very close to providing a complete classification of all solutions. We will now show that we can transform any system of first-order linear differential equations with constant coefficients into one of these special systems by using a change of coordinates.
First, we need to add a few things to our knowledge of matrices and linear algebra. A linear map or linear transformation on $\mathbb{R}^2$ is a function $T: \mathbb{R}^2 \to \mathbb{R}^2$ that is defined by a matrix. That is,
$$T\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}.$$
We will say that $T$ is an invertible linear map if we can find a second linear map $S$ such that $T \circ S = S \circ T = I$, where $I$ is the identity transformation. In terms of matrices, this means that we can find a matrix $S$ such that
$$TS = ST = I,$$
where $I$ is the $2 \times 2$ identity matrix. In this case we write $S = T^{-1}$.
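Numerically, invertibility is equivalent to having a nonzero determinant. The following sketch (in NumPy, with a hypothetical matrix chosen purely for illustration) checks this and verifies that $TS = ST = I$:

```python
import numpy as np

# Hypothetical 2x2 matrix, chosen only for illustration.
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# T is invertible exactly when det(T) != 0.
assert abs(np.linalg.det(T)) > 1e-12

S = np.linalg.inv(T)               # the matrix S with TS = ST = I

assert np.allclose(T @ S, np.eye(2))
assert np.allclose(S @ T, np.eye(2))
```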
In Subsection 5.1.2, we discussed what a basis was along with the coordinates with respect to a particular basis. Any two linearly independent vectors $\mathbf{v}_1$ and $\mathbf{v}_2$ form a basis for $\mathbb{R}^2$. Indeed, if $\mathbf{v} \in \mathbb{R}^2$, then we can write
Suppose we wish to convert the coordinates with respect to one basis to a new set of coordinates with respect to a different basis; that is, we wish to do a change of coordinates. Observe that
then the coordinates of $\mathbf{v}$ with respect to the basis $\{\mathbf{v}_1, \mathbf{v}_2\}$ are given by $(c_1, c_2)$. If we are given the coordinates $(c_1, c_2)$ with respect to the basis $\{\mathbf{v}_1, \mathbf{v}_2\}$ for a vector, we simply need to multiply by the matrix $T$ whose columns are $\mathbf{v}_1$ and $\mathbf{v}_2$.
Now suppose that we wish to find the coordinates $\mathbf{c} = (c_1, c_2)$ with respect to the basis $\{\mathbf{v}_1, \mathbf{v}_2\}$ if we know a vector $\mathbf{v}$. Since $\mathbf{v} = T\mathbf{c}$, we need only multiply both sides of the equation by $T^{-1}$ to get $\mathbf{c} = T^{-1}\mathbf{v}$. In our example,
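We can check both conversions numerically. In the sketch below, the basis vectors are hypothetical choices rather than the ones from the example above; multiplying by $T$ recovers standard coordinates from basis coordinates, and multiplying by $T^{-1}$ goes back:

```python
import numpy as np

# Hypothetical basis vectors; the columns of T are v1 and v2.
v1 = np.array([2.0, 1.0])
v2 = np.array([1.0, 2.0])
T = np.column_stack([v1, v2])

# Coordinates (c1, c2) with respect to the basis {v1, v2} ...
c = np.array([3.0, -1.0])
x = T @ c                          # standard coordinates: x = c1*v1 + c2*v2
assert np.allclose(x, 3.0 * v1 - 1.0 * v2)

# ... and back again: c = T^{-1} x.
c_back = np.linalg.inv(T) @ x
assert np.allclose(c_back, c)
```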
The idea now is to use a change of coordinates to convert an arbitrary system $d\mathbf{x}/dt = A\mathbf{x}$ into one of the special systems mentioned at the beginning of the section (6.1.1), solve the new system, and then convert the new solution back to a solution of the original system using another change of coordinates.
Consider the system $d\mathbf{x}/dt = A\mathbf{x}$, where $A$ has two real, distinct eigenvalues $\lambda_1$ and $\lambda_2$ with eigenvectors $\mathbf{v}_1$ and $\mathbf{v}_2$, respectively. Let $T$ be the matrix with columns $\mathbf{v}_1$ and $\mathbf{v}_2$. If $\mathbf{e}_1 = (1, 0)$ and $\mathbf{e}_2 = (0, 1)$, then $T\mathbf{e}_i = \mathbf{v}_i$ for $i = 1, 2$. Consequently, $T^{-1}\mathbf{v}_i = \mathbf{e}_i$ for $i = 1, 2$. Thus, we have
$$(T^{-1}AT)\mathbf{e}_i = T^{-1}A\mathbf{v}_i = T^{-1}(\lambda_i\mathbf{v}_i) = \lambda_i\mathbf{e}_i$$
for $i = 1, 2$.
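In other words, $T^{-1}AT$ is the diagonal matrix with $\lambda_1$ and $\lambda_2$ on its diagonal. A quick numerical check of this fact, using a hypothetical matrix with distinct real eigenvalues:

```python
import numpy as np

# Hypothetical matrix with two real, distinct eigenvalues (3 and -1).
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

evals, evecs = np.linalg.eig(A)
T = evecs                          # columns of T are the eigenvectors v1, v2

# T^{-1} A T should be the diagonal matrix diag(lambda1, lambda2).
D = np.linalg.inv(T) @ A @ T
assert np.allclose(D, np.diag(evals), atol=1e-10)
```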
The eigenvalues of are and and the associated eigenvectors are and , respectively. In this case, our matrix is
If and , then for . Consequently, for , where
Thus,
The eigenvalues of the matrix
are and with eigenvectors and , respectively. Thus, the general solution of
is
Hence, the general solution of
is
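The change-of-coordinates recipe can itself be tested numerically: solving the decoupled system and mapping back with $T$ amounts to computing $e^{At} = Te^{Dt}T^{-1}$, where $D = T^{-1}AT$ is diagonal. The sketch below compares this with a truncated power series for $e^{At}$; the matrix and the time $t$ are hypothetical choices:

```python
import numpy as np

# Hypothetical matrix with distinct real eigenvalues (4 and 2).
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
t = 0.7

evals, T = np.linalg.eig(A)        # columns of T are eigenvectors

# Solve the decoupled system dy/dt = (T^{-1} A T) y, whose solution is
# y(t) = diag(e^{lambda_i t}) y(0), then change coordinates back with x = T y.
expAt = T @ np.diag(np.exp(evals * t)) @ np.linalg.inv(T)

# Cross-check against the truncated power series I + At + (At)^2/2! + ...
series = np.zeros((2, 2))
term = np.eye(2)
for k in range(1, 30):
    series = series + term
    term = term @ (A * t) / k
assert np.allclose(expAt, series, atol=1e-8)
```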
The linear map $T$ converts the phase portrait of the system $d\mathbf{y}/dt = (T^{-1}AT)\mathbf{y}$ (Figure 6.1.3) to the phase portrait of the system $d\mathbf{x}/dt = A\mathbf{x}$ (Figure 6.1.4).
Figure 6.1.3. Phase portrait for $d\mathbf{y}/dt = (T^{-1}AT)\mathbf{y}$: a direction field with solution curves in each quadrant approaching the horizontal and vertical axes for large values.
Figure 6.1.4. Phase portrait for $d\mathbf{x}/dt = A\mathbf{x}$: a direction field with solution curves approaching the straight-line solutions for large values.
Now calculate the corresponding solution of the original system and compare it with the one that you obtained in Activity 5.2.1.
Of course, we have much quicker ways of solving a system with distinct real eigenvalues. The goal of this section is to show that we have covered all possible cases for systems of linear differential equations, not to invent new methods of solution.
where $\mathbf{v}_1$ and $\mathbf{v}_2$ are real vectors, then the vectors $\mathbf{v}_1$ and $\mathbf{v}_2$ are linearly independent.
Proof.
If $\mathbf{v}_1$ and $\mathbf{v}_2$ are not linearly independent, then $\mathbf{v}_2 = c\mathbf{v}_1$ for some $c \in \mathbb{R}$. On one hand, we have
$$A\mathbf{v} = A(\mathbf{v}_1 + i\mathbf{v}_2) = A(\mathbf{v}_1 + ic\mathbf{v}_1) = (1 + ic)A\mathbf{v}_1.$$
However,
$$A\mathbf{v} = \lambda\mathbf{v} = \lambda(1 + ic)\mathbf{v}_1.$$
In other words, $A\mathbf{v}_1 = \lambda\mathbf{v}_1$. However, this is a contradiction, since the left-hand side of the equation is a real vector, while the right-hand side is complex because $\lambda = \alpha + i\beta$ with $\beta \neq 0$. Thus, $\mathbf{v}_1$ and $\mathbf{v}_2$ are linearly independent.
The system $d\mathbf{y}/dt = (T^{-1}AT)\mathbf{y}$ is in one of the canonical forms and has a phase portrait that is a spiral sink ($\alpha < 0$), a center ($\alpha = 0$), or a spiral source ($\alpha > 0$). After a change of coordinates, the phase portrait of $d\mathbf{x}/dt = A\mathbf{x}$ is equivalent to a spiral sink, center, or spiral source.
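Both facts, the linear independence of $\operatorname{Re}(\mathbf{v})$ and $\operatorname{Im}(\mathbf{v})$ and the canonical form itself, can be verified numerically. A sketch with a hypothetical matrix whose eigenvalues are $1 \pm 2i$:

```python
import numpy as np

# Hypothetical matrix with complex eigenvalues 1 +/- 2i.
A = np.array([[1.0, -2.0],
              [2.0, 1.0]])

evals, evecs = np.linalg.eig(A)
lam, v = evals[0], evecs[:, 0]     # lambda = alpha + i*beta, eigenvector v
alpha, beta = lam.real, lam.imag

# Re(v) and Im(v) are linearly independent, so T is invertible ...
T = np.column_stack([v.real, v.imag])
assert abs(np.linalg.det(T)) > 1e-12

# ... and T brings A to the canonical form (alpha, beta; -beta, alpha).
C = np.linalg.inv(T) @ A @ T
assert np.allclose(C, np.array([[alpha, beta], [-beta, alpha]]))
```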
Example 6.1.7.
Suppose that we wish to find the solutions of the second-order equation
This particular equation might model a damped harmonic oscillator. If we rewrite this second-order equation as a first-order system, we have
or equivalently $d\mathbf{x}/dt = A\mathbf{x}$, where
The eigenvalues of are
The eigenvalue has an eigenvector
respectively. Therefore, we can take $T$ to be the matrix
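Since the example's coefficients are not reproduced above, here is a sketch with a hypothetical damped oscillator $x'' + 2x' + 5x = 0$; it confirms that the eigenvalues of the first-order system's matrix are exactly the roots of the characteristic equation $r^2 + br + c = 0$:

```python
import numpy as np

# Hypothetical damped oscillator x'' + b x' + c x = 0 with b = 2, c = 5
# (the example's actual coefficients are not reproduced here).
b, c = 2.0, 5.0

# Setting v = x' rewrites the second-order equation as dX/dt = A X.
A = np.array([[0.0, 1.0],
              [-c, -b]])

# The eigenvalues of A are the roots of r^2 + b r + c = 0.
evals = np.linalg.eigvals(A)
roots = np.roots([1.0, b, c])
assert np.allclose(sorted(evals, key=lambda z: z.imag),
                   sorted(roots, key=lambda z: z.imag))
```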
If $A$ has a single real eigenvalue $\lambda$ and a pair of linearly independent eigenvectors, then $A$ must be of the form
$$A = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}.$$
Proof.
Suppose that $\mathbf{v}_1$ and $\mathbf{v}_2$ are linearly independent eigenvectors for $A$, and let $T$ be the matrix whose first column is $\mathbf{v}_1$ and second column is $\mathbf{v}_2$. That is, $T\mathbf{e}_1 = \mathbf{v}_1$ and $T\mathbf{e}_2 = \mathbf{v}_2$. Since $\mathbf{v}_1$ and $\mathbf{v}_2$ are linearly independent, $\det(T) \neq 0$ and $T$ is invertible. So, it must be the case that
$$A = T\begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}T^{-1} = \lambda TT^{-1} = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}.$$
Suppose that $A$ has a single eigenvalue $\lambda$. If $\mathbf{v}$ is an eigenvector for $\lambda$ and any other eigenvector for $A$ is a multiple of $\mathbf{v}$, then there exists a matrix $T$ such that
$$T^{-1}AT = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}.$$
Proof.
If $\mathbf{w}$ is another vector in $\mathbb{R}^2$ such that $\mathbf{v}$ and $\mathbf{w}$ are linearly independent, then $A\mathbf{w}$ can be written as a linear combination of $\mathbf{v}$ and $\mathbf{w}$,
$$A\mathbf{w} = \alpha\mathbf{v} + \beta\mathbf{w}.$$
We can assume that $\alpha \neq 0$; otherwise, we would have a second linearly independent eigenvector $\mathbf{w}$. We claim that $\beta = \lambda$. If this were not the case, then
$$A\left(\mathbf{w} + \frac{\alpha}{\beta - \lambda}\mathbf{v}\right) = \beta\left(\mathbf{w} + \frac{\alpha}{\beta - \lambda}\mathbf{v}\right)$$
and $\beta$ would be an eigenvalue distinct from $\lambda$. Thus, $\beta = \lambda$. If we let $\mathbf{u} = \mathbf{w}/\alpha$, then
$$A\mathbf{u} = \frac{1}{\alpha}(\alpha\mathbf{v} + \lambda\mathbf{w}) = \mathbf{v} + \lambda\mathbf{u}.$$
We now define $T\mathbf{e}_1 = \mathbf{v}$ and $T\mathbf{e}_2 = \mathbf{u}$. Since
$$A\mathbf{v} = \lambda\mathbf{v} \quad\text{and}\quad A\mathbf{u} = \mathbf{v} + \lambda\mathbf{u},$$
we have
$$T^{-1}AT\mathbf{e}_1 = \lambda\mathbf{e}_1 \quad\text{and}\quad T^{-1}AT\mathbf{e}_2 = \mathbf{e}_1 + \lambda\mathbf{e}_2.$$
Therefore,
$$T^{-1}AT = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix};$$
that is, $d\mathbf{x}/dt = A\mathbf{x}$ is in canonical form after a change of coordinates.
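The construction in the proof can be carried out numerically. The sketch below uses a hypothetical matrix with the single eigenvalue $\lambda = 2$; it extracts $\alpha$ from $A\mathbf{w} = \alpha\mathbf{v} + \lambda\mathbf{w}$, rescales to get $\mathbf{u}$, and verifies the canonical form:

```python
import numpy as np

# Hypothetical matrix with the single eigenvalue lambda = 2 and (up to scaling)
# only one eigenvector v = (1, -1).
A = np.array([[3.0, 1.0],
              [-1.0, 1.0]])
lam = 2.0
v = np.array([1.0, -1.0])          # A v = 2 v
w = np.array([1.0, 0.0])           # any vector independent of v

# By the proof, A w - lam w is a multiple alpha of v; extract alpha by projection.
r = A @ w - lam * w
alpha = (r @ v) / (v @ v)
u = w / alpha                      # then A u = v + lam u

T = np.column_stack([v, u])        # T e1 = v, T e2 = u
C = np.linalg.inv(T) @ A @ T
assert np.allclose(C, np.array([[lam, 1.0], [0.0, lam]]))
```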
Example 6.1.13.
Consider the system $d\mathbf{x}/dt = A\mathbf{x}$, where
The characteristic polynomial of $A$ has a repeated root, so we have only a single eigenvalue $\lambda$ with eigenvector $\mathbf{v}$. Any other eigenvector for $\lambda$ is a multiple of $\mathbf{v}$. If we choose a vector $\mathbf{w}$ that is not a multiple of $\mathbf{v}$, then $\mathbf{v}$ and $\mathbf{w}$ are linearly independent. Furthermore,
So we can let $\mathbf{u} = \mathbf{w}/\alpha$. Therefore, the matrix $T$ that we seek is
and
From Section 5.3, we know that the general solution to the system
is
Therefore, the general solution to
is
This solution agrees with the solution that we found in Example 5.5.5.
In practice, we find solutions to linear systems using the methods that we outlined in Sections 5.2–5.4. What we have demonstrated in this section is that those solutions are exactly the ones that we want.
For each of the matrices $A$ in Exercise Group 6.1.9.1–6, find (1) the eigenvalues $\lambda_1$ and $\lambda_2$; (2) for each eigenvalue $\lambda_1$ and $\lambda_2$, find an eigenvector $\mathbf{v}_1$ and $\mathbf{v}_2$, respectively; and (3) construct the matrix $T = \begin{pmatrix} \mathbf{v}_1 & \mathbf{v}_2 \end{pmatrix}$ and calculate $T^{-1}AT$.
For each of the matrices $A$ in Exercise Group 6.1.9.7–10, find (1) an eigenvalue $\lambda = \alpha + i\beta$; (2) an eigenvector $\mathbf{v} = \operatorname{Re}(\mathbf{v}) + i\operatorname{Im}(\mathbf{v})$ for $\lambda$; and (3) construct the matrix $T = \begin{pmatrix} \operatorname{Re}(\mathbf{v}) & \operatorname{Im}(\mathbf{v}) \end{pmatrix}$ and calculate $T^{-1}AT$. Compare your result to $\begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix}$.
For each of the matrices $A$ in Exercise Group 6.1.9.11–16, find (1) the single eigenvalue $\lambda$ and an eigenvector $\mathbf{v}$ for $\lambda$; and (2) choose a vector $\mathbf{w}$ that is linearly independent of $\mathbf{v}$ and compute $A\mathbf{w}$. You should find that