
Coordinated Differential Equations

Section 5.1 Linear Algebra in a Nutshell

Linear algebra and matrices provide a convenient notation for representing the \(2 \times 2\) system
\begin{align*} \frac{dx}{dt} & = a x + b y,\\ \frac{dy}{dt} & = c x + d y. \end{align*}
If we let
\begin{equation*} A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \quad\text{and}\quad {\mathbf x}(t) = \begin{pmatrix} x(t) \\ y(t) \end{pmatrix}, \end{equation*}
then we can rewrite our system as
\begin{equation*} \begin{pmatrix} x'(t) \\ y'(t) \end{pmatrix} = \begin{pmatrix} ax(t) + b y(t) \\ cx(t) + d y(t) \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x(t) \\ y(t) \end{pmatrix}. \end{equation*}
In other words, we can write our system as
\begin{equation*} \frac{d \mathbf x}{dt} = A {\mathbf x}, \end{equation*}
where
\begin{equation*} \mathbf x' = \frac{d \mathbf x}{dt} = \begin{pmatrix} x'(t) \\ y'(t) \end{pmatrix}. \end{equation*}
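In this vector form, the system can be handed directly to a numerical solver. Below is a minimal sketch, assuming NumPy and SciPy are available; the particular matrix and initial condition are illustrative choices, not taken from the text.

import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[1.0, 2.0],
              [4.0, 3.0]])          # the coefficient matrix A (an example choice)

def rhs(t, x):
    return A @ x                    # dx/dt = Ax, exactly as in the matrix form above

sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0])   # solve on 0 <= t <= 1 from x(0) = (1, 0)
print(sol.y[:, -1])                 # the approximate value of (x(1), y(1))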

Subsection 5.1.1 Matrices and Systems of Linear Equations

A short review of linear algebra and \(2 \times 2\) matrices is useful at this point. Recall that any system of two equations in two variables,
\begin{align*} ax + by & = \alpha,\\ cx + dy & = \beta, \end{align*}
can be written as a matrix equation
\begin{equation} \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} ax + by \\ cx + dy \end{pmatrix} = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}.\tag{5.1.1} \end{equation}
We will denote the \(2 \times 2\) coefficient matrix by \(A\text{.}\) That is,
\begin{equation*} A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}. \end{equation*}
If a solution for the system (5.1.1) exists, it is easy to find. A unique solution will occur exactly when the matrix \(A\) is invertible (or nonsingular). The unique solution is given by
\begin{equation*} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} \begin{pmatrix} \alpha \\ \beta \end{pmatrix}, \end{equation*}
where
\begin{equation*} A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}. \end{equation*}
The matrix \(A\) is invertible if and only if its determinant is nonzero,
\begin{equation*} \det(A) = ad - bc \neq 0. \end{equation*}
If \(\det(A) = 0\text{,}\) then we either have no solution or infinitely many solutions.
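For instance, here is a minimal numerical sketch of solving such a system, assuming NumPy is available; the particular numbers are illustrative, not from the text.

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])            # the right-hand side (alpha, beta)

if abs(np.linalg.det(A)) > 1e-12:   # det(A) = ad - bc is nonzero, so A is invertible
    print(np.linalg.solve(A, b))    # the unique solution; preferred over forming A^{-1}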
Let us consider the special case
\begin{equation*} A \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \end{equation*}
If \(\det(A) \neq 0\text{,}\) we have exactly one solution, \(x = 0\) and \(y = 0\text{.}\) On the other hand, if \(\det(A) = 0\text{,}\) we have infinitely many solutions. Suppose that \(a \neq 0\text{.}\) Then \(x = - (b/a)y\text{,}\) and
\begin{equation*} -c \left( \frac{b}{a}\right) y + dy = 0. \end{equation*}
Therefore, \((ad - bc) y =0\text{.}\) Since \(\det(A) = ad - bc = 0\text{,}\) the variable \(y\) can assume any value and \(x = - (b/a)y\text{.}\) Thus, the solutions to our system lie along a line through the origin. In fact, we will always get a line of solutions through the origin as long as at least one entry in our matrix is nonzero.
 1  We will not worry about the \(2 \times 2\) zero matrix, since it will not play a role in our study of linear equations.
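To see the line of solutions concretely, here is a small symbolic sketch, assuming SymPy is available; the singular matrix below is an illustrative choice.

import sympy as sp

A = sp.Matrix([[1, 2],
               [2, 4]])             # det(A) = 1*4 - 2*2 = 0, so A is singular
print(A.det())                      # 0
print(A.nullspace())                # [Matrix([[-2], [1]])]: every solution is t*(-2, 1),
                                    # a line through the origin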

Subsection 5.1.2 Linear Independence

We say that two vectors \({\mathbf x}\) and \({\mathbf y}\) in \({\mathbb R}^2\) are linearly independent if they do not lie on the same line through the origin. If, on the other hand, they do lie on the same line, then the vectors are linearly dependent. Equivalently, two vectors are linearly dependent if one vector is a multiple of the other. We leave the proof of the following theorem as an exercise.

Theorem 5.1.1.

Let \({\mathbf x} = (x_1, x_2)\) and \({\mathbf y} = (y_1, y_2)\text{.}\) Then \({\mathbf x}\) and \({\mathbf y}\) are linearly independent if and only if
\begin{equation*} \det \begin{pmatrix} x_1 & y_1 \\ x_2 & y_2 \end{pmatrix} \neq 0. \end{equation*}
If we have a pair of linearly independent vectors in \({\mathbb R}^2\text{,}\) then we can write any vector in \({\mathbb R}^2\) as a unique linear combination of the two vectors. That is, given two linearly independent vectors \({\mathbf x} = (x_1, x_2)\) and \({\mathbf y} = (y_1, y_2)\text{,}\) we can write \({\mathbf z} = (z_1, z_2)\) as
\begin{equation*} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \alpha \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \beta \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}, \end{equation*}
where \(\alpha\) and \(\beta\) are unique. To see why this is true, we must solve the equations
\begin{align*} z_1 & = \alpha x_1 + \beta y_1\\ z_2 & = \alpha x_2 + \beta y_2 \end{align*}
for \(\alpha\) and \(\beta\text{.}\) However, this system has a unique solution since
\begin{equation*} \det \begin{pmatrix} x_1 & y_1 \\ x_2 & y_2 \end{pmatrix} \neq 0. \end{equation*}
Two vectors are said to be a basis for \({\mathbb R}^2\) if we can write any vector in \({\mathbb R}^2\) as a linear combination of these two vectors. By our arguments above, any two linearly independent vectors will form a basis for \({\mathbb R}^2\text{.}\)

Example 5.1.2.

The vectors \({\mathbf e}_1 = (1, 0)\) and \({\mathbf e}_2 = (0, 1)\) form a basis for \({\mathbb R}^2\text{.}\) Indeed, if \({\mathbf z} = (z_1, z_2)\text{,}\) then we can write
\begin{equation*} {\mathbf z} = z_1 {\mathbf e}_1 + z_2 {\mathbf e}_2. \end{equation*}
The vectors \({\mathbf e}_1\) and \({\mathbf e}_2\) are called the standard basis for \({\mathbb R}^2\text{.}\)

Example 5.1.3.

Let \({\mathbf v}_1 = (2,1)\) and \({\mathbf v}_2 = (3, 2)\text{.}\) Since
\begin{equation*} \det \begin{pmatrix} 2 & 3 \\ 1 & 2 \end{pmatrix} \neq 0, \end{equation*}
these vectors form a basis for \({\mathbb R}^2\text{.}\) If \({\mathbf z} = (-5, -4)\text{,}\) then we can write
\begin{equation*} {\mathbf z} = 2 {\mathbf v}_1 - 3 {\mathbf v}_2. \end{equation*}
We say that the coordinates of \({\mathbf z}\) are \((2, -3)\) with respect to the basis \(\{ {\mathbf v}_1, {\mathbf v}_2 \}\text{.}\)
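We can check these coordinates numerically (a sketch assuming NumPy): putting \({\mathbf v}_1\) and \({\mathbf v}_2\) into the columns of a matrix, the coordinates of \({\mathbf z}\) are the unique solution of the resulting linear system.

import numpy as np

v1 = np.array([2.0, 1.0])
v2 = np.array([3.0, 2.0])
z = np.array([-5.0, -4.0])

M = np.column_stack([v1, v2])        # columns are the basis vectors
alpha, beta = np.linalg.solve(M, z)  # unique since det(M) != 0
print(alpha, beta)                   # 2.0 -3.0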

Example 5.1.4.

The vectors \((1, 1)\) and \((-1, -1)\) do not form a basis for \({\mathbb R}^2\) since these two vectors lie along the same line.
If \(2 \times 2\) matrices and the rest of what we have described above make you nervous, you should work through the exercises at the end of this section. Below are some examples of matrix operations.

Example 5.1.5.

Compute \(Ay\text{,}\) where \(A\) is the matrix \(\begin{pmatrix}2 & 4 \\ -5 & -6\end{pmatrix}\) and \(y\) is the vector \(\begin{pmatrix}10 \\ 1\end{pmatrix}\text{.}\)
To solve this, we multiply each row of \(A\) by the vector \(y\text{,}\) and obtain
\begin{equation*} Ay = \begin{pmatrix}2(10) + 4(1) \\ -5(10) + (-6)(1)\end{pmatrix} = \begin{pmatrix}24 \\ -56 \end{pmatrix}. \end{equation*}

Example 5.1.6.

Compute \(AB\text{,}\) where \(A = \begin{pmatrix}2 & 3 \\ -1 & 5\end{pmatrix}, B = \begin{pmatrix}-4 & -7 \\ 6 & 0\end{pmatrix}\text{.}\)
We multiply each row of \(A\) by each column of \(B\text{,}\) which gives
\begin{equation*} AB = \begin{pmatrix}2(-4) + 3(6) & 2(-7) + 3(0) \\ -1(-4) + 5(6) & (-1)(-7) + 5(0)\end{pmatrix} = \begin{pmatrix}10 & -14 \\ 34 & 7\end{pmatrix}. \end{equation*}

Example 5.1.7.

Compute \(A^{-1}\text{,}\) where \(A = \begin{pmatrix}5 & 7 \\ 4 & 6\end{pmatrix}\text{.}\)
Using the formula \(\begin{pmatrix}a & b \\ c & d\end{pmatrix}^{-1} = \frac{1}{ad - bc}\begin{pmatrix}d & -b \\ -c & a\end{pmatrix}\) we find
\begin{equation*} A^{-1} = \frac{1}{5(6) - 7(4)}\begin{pmatrix}6 & -7 \\ -4 & 5\end{pmatrix} = \frac{1}{2}\begin{pmatrix}6 & -7 \\ -4 & 5\end{pmatrix} = \begin{pmatrix}3 & -\frac{7}{2} \\ -2 & \frac{5}{2} \end{pmatrix}. \end{equation*}
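These three computations are easy to verify by machine. Here is a sketch assuming NumPy is available:

import numpy as np

# Example 5.1.5: a matrix-vector product
A = np.array([[2, 4], [-5, -6]])
print(A @ np.array([10, 1]))         # [ 24 -56]

# Example 5.1.6: a matrix-matrix product
A = np.array([[2, 3], [-1, 5]])
B = np.array([[-4, -7], [6, 0]])
print(A @ B)                         # [[ 10 -14]  [ 34   7]]

# Example 5.1.7: a matrix inverse
A = np.array([[5.0, 7.0], [4.0, 6.0]])
print(np.linalg.inv(A))              # [[ 3.  -3.5]  [-2.   2.5]]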

Activity 5.1.1. Matrix Operations.

Given the matrices and vectors
\begin{equation*} A = \begin{pmatrix} 5 & 3 \\ -6 & 4 \end{pmatrix}, \quad B = \begin{pmatrix} 2 & 3 \\ 1 & 2 \end{pmatrix}, \quad \mathbf x = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad \mathbf y = \begin{pmatrix} 2 \\ -1 \end{pmatrix}, \end{equation*}
compute each of the following expressions.
(a)
\(AB\text{,}\) \(BA\)
(b)
\(A^{-1}\text{,}\) \(B^{-1}\text{,}\) \((AB)^{-1}\text{,}\) \(B^{-1}A^{-1}\)
(c)
\(\det(A), \det(B), \det(AB), \det(A^{-1})\)
(d)
\(A \mathbf x\text{,}\) \(A \mathbf y\text{,}\) \(\mathbf y^T \mathbf x\text{,}\) \(\mathbf x \mathbf y^T\text{,}\) where \(\mathbf y^T = (2, -1)\)
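If you would like to check your answers, the sketch below (assuming NumPy is available) computes several of the requested expressions; the outputs are deliberately not shown here.

import numpy as np

A = np.array([[5, 3], [-6, 4]])
B = np.array([[2, 3], [1, 2]])
x = np.array([1, 0])
y = np.array([2, -1])

print(A @ B, B @ A)                                               # part (a)
print(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))  # part (b)
print(np.linalg.det(A), np.linalg.det(B))                         # part (c)
print(A @ x, A @ y, y @ x, np.outer(x, y))                        # part (d): y @ x is y^T x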

Subsection 5.1.3 Finding Eigenvalues and Eigenvectors

A nonzero vector \({\mathbf v}\) is an eigenvector of \(A\) if \(A {\mathbf v} = \lambda {\mathbf v}\) for some \(\lambda \in {\mathbb R}\text{.}\) The constant \(\lambda\) is called an eigenvalue of \(A\text{.}\) Letting
\begin{equation*} A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \quad \text{and} \quad \mathbf v = \begin{pmatrix} x \\ y \end{pmatrix} \neq \mathbf 0, \end{equation*}
we have \(A \mathbf v = \lambda \mathbf v\) or \(A \mathbf v - \lambda \mathbf v = \mathbf 0\text{.}\) In matrix form this is
\begin{align*} \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} - \lambda \begin{pmatrix} x \\ y \end{pmatrix} & = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} - \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}\\ & = \begin{pmatrix} a- \lambda & b \\ c & d - \lambda \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}\\ & = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \end{align*}
This matrix equation is certainly true if \((x, y) = (0, 0)\text{.}\) However, we seek nonzero solutions to this system. This will occur exactly when the determinant of
\begin{equation*} A - \lambda I = \begin{pmatrix} a- \lambda & b \\ c & d - \lambda \end{pmatrix} \end{equation*}
is zero. In this case
\begin{equation*} \det(A - \lambda I) = \det\begin{pmatrix} a - \lambda & b \\ c & d - \lambda \end{pmatrix} = \lambda^2 - (a + d) \lambda + (ad - bc). \end{equation*}
We say that
\begin{equation*} \det(A - \lambda I) = \lambda^2 - (a + d) \lambda + (ad - bc) \end{equation*}
is the characteristic polynomial of \(A\text{.}\) We summarize the results of this discussion in the following theorem.

Theorem 5.1.8.

The roots of the characteristic polynomial of \(A\) are the eigenvalues of \(A\text{.}\) Given an eigenvalue \(\lambda\) of \(A\text{,}\) the eigenvectors associated with \(\lambda\) are the nonzero solutions of the system of equations
\begin{equation*} (A - \lambda I) \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \end{equation*}
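As a quick symbolic sanity check (a sketch assuming SymPy is available, not part of the text), we can have a computer expand \(\det(A - \lambda I)\) for a general \(2 \times 2\) matrix:

import sympy as sp

a, b, c, d, lam = sp.symbols('a b c d lambda')
A = sp.Matrix([[a, b], [c, d]])            # a general 2 x 2 matrix
char_poly = (A - lam * sp.eye(2)).det()    # det(A - lambda*I)
print(sp.expand(char_poly))                # lambda**2 - (a + d)*lambda + (a*d - b*c),
                                           # up to the order of the terms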

Example 5.1.9.

Suppose that we wish to find the eigenvalues and associated eigenvectors of
\begin{equation*} A = \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix}. \end{equation*}
To find the eigenvalues and eigenvectors for \(A\text{,}\) we must solve the equation
\begin{equation*} A \begin{pmatrix} x \\ y \end{pmatrix} = \lambda \begin{pmatrix} x \\ y \end{pmatrix}. \end{equation*}
If we let \(I\) denote the \(2 \times 2\) identity matrix,
\begin{equation*} I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \end{equation*}
we can rewrite this equation in the form
\begin{equation} (A - \lambda I) \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.\tag{5.1.2} \end{equation}
We know that \(A - \lambda I\) is a \(2 \times 2\) matrix and that this system will only have nonzero solutions if \(\det(A - \lambda I) = 0\text{.}\) In our example,
\begin{align*} \det(A - \lambda I) & = \det\begin{pmatrix} 1 - \lambda & 2 \\ 4 & 3 - \lambda \end{pmatrix} \\ & = (1 - \lambda) (3 - \lambda ) - 8\\ & = \lambda^2 - 4\lambda - 5\\ & = (\lambda - 5)(\lambda +1 ). \end{align*}
Thus, \(\lambda = 5\) or \(-1\text{.}\)
To see this from a different perspective, we will rewrite equation (5.1.2) as
\begin{align*} x + 2 y & = \lambda x\\ 4 x + 3 y & = \lambda y. \end{align*}
This system is equivalent to
\begin{align*} (1 - \lambda) x + 2 y & = 0\\ 4 x + (3 - \lambda) y & = 0 \end{align*}
which can be reduced to
\begin{align*} (1 - \lambda) x + 2 y & = 0\\ (\lambda^2 - 4\lambda - 5) y & = 0. \end{align*}
Therefore, we must have \(\lambda = 5\) or \(\lambda = -1\) to obtain a nonzero solution.
  • If \(\lambda = 5\text{,}\) the first equation in the system becomes \(-2x + y = 0\text{,}\) and the eigenvectors corresponding to this eigenvalue are the nonzero solutions of this equation. That is, a vector must be a nonzero multiple of \((1, 2)\) to be an eigenvector of \(A\) corresponding to \(\lambda = 5\text{.}\)
  • If \(\lambda = -1\text{,}\) then the corresponding eigenvectors are the nonzero multiples of \((1, -1)\text{.}\)
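As a quick cross-check (a sketch assuming NumPy is available), a numerical eigensolver reproduces these results up to scaling:

import numpy as np

A = np.array([[1.0, 2.0], [4.0, 3.0]])
evals, evecs = np.linalg.eig(A)
print(evals)    # 5 and -1 (the order is not guaranteed)
print(evecs)    # the columns are unit-length eigenvectors, i.e. scalar
                # multiples of (1, 2) and (1, -1)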
There is also a short-cut to finding the eigenvectors of a \(2 \times 2\) matrix once you have the eigenvalues, illustrated in the following example.

Example 5.1.10.

Find the eigenvalues of the matrix \(A\) below, and for each eigenvalue, find an associated eigenvector.
\begin{equation*} A = \begin{pmatrix}1 & 10 \\ 12 & 8\end{pmatrix} \end{equation*}
The characteristic polynomial is \((1 - \lambda)(8 - \lambda) - 10(12) = \lambda^2 - 9\lambda - 112\text{;}\) setting this equal to zero and solving gives \(\lambda = -7, 16\text{.}\)
To find the eigenvectors, we look at each eigenvalue individually, subtracting it from the main diagonal; let’s start with \(\lambda = -7\text{.}\) We get
\begin{equation*} A - (-7)I = \begin{pmatrix}1 - (-7) & 10 \\ 12 & 8 - (-7)\end{pmatrix} = \begin{pmatrix}8 & 10 \\ 12 & 15\end{pmatrix}. \end{equation*}
We can "read off" an eigenvector as follows: take one of the rows of the new matrix, put its entries into a column, swap the two numbers, and change exactly one of the signs. For example, we could take the top row \(\begin{pmatrix}8 & 10\end{pmatrix}\) and find that \(\begin{pmatrix}-10 \\ 8\end{pmatrix}\) and \(\begin{pmatrix}10 \\ -8\end{pmatrix}\) are valid eigenvectors. Likewise, we could have used the bottom row \(\begin{pmatrix}12 & 15\end{pmatrix}\) and gotten \(\begin{pmatrix}-15 \\ 12\end{pmatrix}\) and \(\begin{pmatrix}15 \\ -12\end{pmatrix}\text{.}\)
Now, working with \(\lambda = 16\text{,}\) we see
\begin{equation*} A - 16I = \begin{pmatrix}1 - 16 & 10 \\ 12 & 8 - 16\end{pmatrix} = \begin{pmatrix}-15 & 10 \\ 12 & -8\end{pmatrix}. \end{equation*}
Flipping a row and changing one sign could give us \(\begin{pmatrix}-10 \\ -15\end{pmatrix}, \begin{pmatrix}10 \\ 15\end{pmatrix}, \begin{pmatrix}8 \\ 12\end{pmatrix}\text{,}\) or \(\begin{pmatrix}-8 \\ -12\end{pmatrix}\) as eigenvectors.
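We can verify that the read-off vectors really are eigenvectors (a sketch assuming NumPy is available), since each must satisfy \(A{\mathbf v} = \lambda {\mathbf v}\text{:}\)

import numpy as np

A = np.array([[1, 10], [12, 8]])
v1 = np.array([-10, 8])     # read off from A - (-7)I
v2 = np.array([10, 15])     # read off from A - 16I
print(A @ v1, -7 * v1)      # both print [ 70 -56]
print(A @ v2, 16 * v2)      # both print [160 240]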

Activity 5.1.2. Finding Eigenvalues and Eigenvectors.

For each of the following matrices (1) find the characteristic polynomial, (2) find all of the eigenvalues, and (3) find an eigenvector for each eigenvalue.
(a)
\begin{equation*} A = \begin{pmatrix} 1 & 3 \\ 1 & -1 \end{pmatrix}. \end{equation*}
(b)
\begin{equation*} A = \begin{pmatrix} -8 & 2 \\ -15 & 3 \end{pmatrix}. \end{equation*}
(c)
\begin{equation*} A = \begin{pmatrix} 4 & 3 \\ -6 & -5 \end{pmatrix}. \end{equation*}
(d)
\begin{equation*} A = \begin{pmatrix} 7 & 4 \\ -10 & -5 \end{pmatrix}. \end{equation*}
(e)
\begin{equation*} A = \begin{pmatrix} 3 & 1 \\ -1 & 1 \end{pmatrix}. \end{equation*}

Subsection 5.1.4 Important Lessons

  • A matrix \(A\) is invertible (or nonsingular) if there exists a matrix \(A^{-1}\) such that \(AA^{-1} = A^{-1}A = I\text{,}\) where \(I\) is the identity matrix. In the case of \(2 \times 2\) matrices,
    \begin{equation*} I =\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \end{equation*}
  • If
    \begin{equation*} A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \end{equation*}
    then
    \begin{equation*} A^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}. \end{equation*}
  • A matrix \(A\) is invertible if and only if its determinant is nonzero,
    \begin{equation*} \det(A) = ad - bc \neq 0. \end{equation*}
  • We say that two vectors \({\mathbf x}\) and \({\mathbf y}\) in \({\mathbb R}^2\) are linearly independent if they do not lie on the same line through the origin. If, on the other hand, they do lie on the same line, then the vectors are linearly dependent. Equivalently, two vectors are linearly dependent if one vector is a multiple of the other.
  • Let \({\mathbf x} = (x_1, x_2)\) and \({\mathbf y} = (y_1, y_2)\text{.}\) Then \({\mathbf x}\) and \({\mathbf y}\) are linearly independent if and only if
    \begin{equation*} \det \begin{pmatrix} x_1 & y_1 \\ x_2 & y_2 \end{pmatrix} \neq 0. \end{equation*}
  • If we have a pair of linearly independent vectors in \({\mathbb R}^2\text{,}\) then we can write any vector in \({\mathbb R}^2\) as a unique linear combination of the two vectors. That is, given two linearly independent vectors \({\mathbf x} = (x_1, x_2)\) and \({\mathbf y} = (y_1, y_2)\text{,}\) we can write \({\mathbf z} = (z_1, z_2)\) as
    \begin{equation*} \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \alpha \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \beta \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}, \end{equation*}
    where \(\alpha\) and \(\beta\) are unique.
  • Two vectors are a basis for \({\mathbb R}^2\) if we can write any vector in \({\mathbb R}^2\) as a linear combination of these two vectors. Any two linearly independent vectors will form a basis for \({\mathbb R}^2\text{.}\)
  • The roots of the characteristic polynomial, \(\det(A - \lambda I)\text{,}\) of a matrix \(A\) are the eigenvalues of \(A\text{.}\) Given a specific eigenvalue, \(\lambda\text{,}\) for a matrix \(A\text{,}\) the eigenvectors associated with \(\lambda\) are the nonzero solutions of the system of equations
    \begin{equation*} (A - \lambda I) \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}. \end{equation*}
  • If \({\mathbf v}_1\) and \({\mathbf v}_2\) are eigenvectors of two distinct real eigenvalues of a matrix \(A\text{,}\) then \({\mathbf v}_1\) and \({\mathbf v}_2\) are linearly independent.

Reading Questions 5.1.5 Reading Questions

1.

Explain what it means for two vectors to be linearly independent.

2.

Explain what it means for a matrix to be nonsingular.

3.

What is an eigenvalue and an eigenvector?

Exercises 5.1.6 Exercises

1.

Given a column vector
\begin{equation*} \mathbf x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, \end{equation*}
we define the transpose of \(\mathbf x\) to be
\begin{equation*} \mathbf x^T = \begin{pmatrix} x_1 & x_2 \end{pmatrix}. \end{equation*}
If
\begin{equation*} A = \begin{pmatrix} 3 & -2 \\ 0 & -1 \end{pmatrix}, \mathbf x = \begin{pmatrix} 4 \\ 1 \end{pmatrix}, \quad \text{and} \quad \mathbf y = \begin{pmatrix} -2 \\ 3 \end{pmatrix}, \end{equation*}
find each of the following.
  1. \(\displaystyle A \mathbf x\)
  2. \(\displaystyle A \mathbf y\)
  3. \(\displaystyle \mathbf x^T \mathbf y\)
  4. \(\displaystyle \mathbf y^T \mathbf x\)

2.

If
\begin{equation*} A = \begin{pmatrix} 1 & -2 \\ 3 & 1 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 4 & 1 \\ -1 & -2 \end{pmatrix}, \end{equation*}
find each of the following.
  1. \(\displaystyle A + B\)
  2. \(\displaystyle 2A - 3B\)
  3. \(\displaystyle AB\)
  4. \(\displaystyle BA\)
  5. \(\displaystyle A^{-1}\)
  6. \(\displaystyle B^{-1}\)

3.

If
\begin{equation*} A = \begin{pmatrix} 2 & 1 - i \\ 2 - i & 2 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 4i & 1 - i \\ 1 + 3i & -2 -i \end{pmatrix}, \end{equation*}
find each of the following.
  1. \(\displaystyle A + B\)
  2. \(\displaystyle 3A - 2B\)
  3. \(\displaystyle AB\)
  4. \(\displaystyle BA\)

Finding Determinants.

Find the determinant of each of the matrices \(A\) in Exercise Group 5.1.6.4–13.
4.
\(\displaystyle A = \begin{pmatrix} 1 & 4 \\ 2 & 3 \end{pmatrix}\)
5.
\(\displaystyle A = \begin{pmatrix} 6 & 3 \\ -4 & -1 \end{pmatrix}\)
6.
\(\displaystyle A = \begin{pmatrix} 2 & -6 \\ -2 & 4 \end{pmatrix}\)
7.
\(\displaystyle A = \begin{pmatrix} -1 & 6 \\ -2 & 6 \end{pmatrix}\)
8.
\(\displaystyle A = \begin{pmatrix} 3 & 1 \\ -2 & 0 \end{pmatrix}\)
9.
\(\displaystyle A = \begin{pmatrix} 1 & -2 \\ 1 & 3 \end{pmatrix}\)
10.
\(\displaystyle A = \begin{pmatrix} 2 & 0 \\ 0 & -3 \end{pmatrix}\)
11.
\(\displaystyle A = \begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix}\)
12.
\(\displaystyle A = \begin{pmatrix} 2 & 1 \\ 1 & 3 \end{pmatrix}\)
13.
\(\displaystyle A = \begin{pmatrix} 1 & -2 \\ -3 & 2 \end{pmatrix}\)

Finding Inverses.

Find the inverse (if it exists) of each of the matrices \(A\) in Exercise Group 5.1.6.14–21; that is, find the matrix \(A^{-1}\) such that \(A A^{-1} = A^{-1} A = I\text{,}\) where
\begin{equation*} I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \end{equation*}
14.
\(\displaystyle A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}\)
15.
\(\displaystyle A = \begin{pmatrix} 6 & 3 \\ -4 & -1 \end{pmatrix}\)
16.
\(\displaystyle A = \begin{pmatrix} 3 & -7 \\ -2 & 5 \end{pmatrix}\)
17.
\(\displaystyle A = \begin{pmatrix} 8 & 7 \\ 2 & 2 \end{pmatrix}\)
18.
\(\displaystyle A = \begin{pmatrix} -3 & 4 \\ -2 & 5 \end{pmatrix}\)
19.
\(\displaystyle A = \begin{pmatrix} 3 & 2 \\ 4 & 3 \end{pmatrix}\)
20.
\(\displaystyle A = \begin{pmatrix} 4 & 0 \\ 0 & -3 \end{pmatrix}\)
21.
\(\displaystyle A = \begin{pmatrix} 1 & 2 \\ -3 & -6 \end{pmatrix}\)

Finding Eigenvalues and Eigenvectors.

For each of the matrices \(A\) in Exercise Group 5.1.6.22–31:
  1. Find the characteristic polynomial of \(A\text{.}\)
  2. Find all of the eigenvalues of \(A\text{.}\)
  3. Find an eigenvector for each eigenvalue of \(A\text{.}\)
22.
\(\displaystyle A = \begin{pmatrix} 3 & 4 \\ 2 & 1 \end{pmatrix}\)
23.
\(\displaystyle A = \begin{pmatrix} 6 & 3 \\ -4 & -1 \end{pmatrix}\)
24.
\(\displaystyle A = \begin{pmatrix} 3 & 1 \\ -1 & 1 \end{pmatrix}\)
25.
\(\displaystyle A = \begin{pmatrix} -1 & 6 \\ -2 & 6 \end{pmatrix}\)
26.
\(\displaystyle A = \begin{pmatrix} 3 & 1 \\ -2 & 0 \end{pmatrix}\)
27.
\(\displaystyle A = \begin{pmatrix} 1 & -2 \\ 1 & 3 \end{pmatrix}\)
28.
\(\displaystyle A = \begin{pmatrix} 2 & 0 \\ 0 & -3 \end{pmatrix}\)
29.
\(\displaystyle A = \begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix}\)
30.
\(\displaystyle A = \begin{pmatrix} 2 & 1 \\ 1 & 3 \end{pmatrix}\)
31.
\(\displaystyle A = \begin{pmatrix} 1 & -2 \\ -3 & 2 \end{pmatrix}\)

32.

For what values of \(a\) are the vectors \((2, a)\) and \((4,-1)\) linearly independent?

33.

We define the trace of a \(2 \times 2\) matrix to be the sum of its diagonal entries. That is, the trace of
\begin{equation*} A =\begin{pmatrix} a & b \\ c & d \end{pmatrix} \end{equation*}
is \(\trace(A) = a + d\text{.}\) Show that \(\trace(AB) = \trace(BA)\) for any \(2 \times 2\) matrices \(A\) and \(B\text{.}\)

34.

Let \(A\) and \(B\) be two \(2 \times 2\) matrices. Show that \(\det(AB) = \det(A) \det(B)\text{.}\)

35.

Let \(A\) be an invertible \(2 \times 2\) matrix. Show that \(\det(A^{-1}) = 1/\det(A)\text{.}\)

36.

Define the \(2 \times 2\) identity matrix to be
\begin{equation*} I =\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \end{equation*}
Show that \(AI = IA = A\) for any \(2 \times 2\) matrix.

37.

An upper triangular matrix \(A\) is a matrix of the form
\begin{equation*} A =\begin{pmatrix} \alpha & \gamma \\ 0 & \beta \end{pmatrix}. \end{equation*}
Show that \(A\) has eigenvalues \(\alpha\) and \(\beta\text{.}\)

38.

Let \({\mathbf x} = (x_1, x_2)\) and \({\mathbf y} = (y_1, y_2)\text{.}\) Prove that \({\mathbf x}\) and \({\mathbf y}\) are linearly independent if and only if
\begin{equation*} \det \begin{pmatrix} x_1 & y_1 \\ x_2 & y_2 \end{pmatrix} \neq 0. \end{equation*}