library(tidyverse)
library(dasc2594)
21 The Characteristic Equation
The characteristic equation/polynomial encodes information about the eigenvalues of a matrix. In the previous chapter, we showed how to decide whether a scalar \(\lambda\) is an eigenvalue of a matrix and how to find the eigenvectors associated with that eigenvalue. However, we did not learn how to find the eigenvalues themselves (other than guessing values of \(\lambda\) and checking). The characteristic equation/polynomial allows us to determine the eigenvalues \(\lambda\).
Definition 21.1 Let \(\mathbf{A}\) be an \(n \times n\) matrix. The characteristic equation/polynomial of \(\mathbf{A}\) is the function \(f(\lambda)\) given by
\[ \begin{aligned} f(\lambda) = \det(\mathbf{A} - \lambda \mathbf{I}) \end{aligned} \]
While not obvious, the function \(f(\lambda)\) is a polynomial in \(\lambda\); evaluating it requires computing the determinant of the matrix \(\mathbf{A} - \lambda \mathbf{I}\), which contains the unknown value \(\lambda\).
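For a generic \(2 \times 2\) matrix, the polynomial can be written out explicitly, which shows why \(f(\lambda)\) is a degree-2 polynomial in \(\lambda\):
\[ \begin{aligned} \mathbf{A} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \Rightarrow f(\lambda) = \det \begin{pmatrix} a - \lambda & b \\ c & d - \lambda \end{pmatrix} = (a - \lambda)(d - \lambda) - bc = \lambda^2 - (a + d) \lambda + (ad - bc) \end{aligned} \]
In general, for an \(n \times n\) matrix, \(f(\lambda)\) is a polynomial of degree \(n\).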
Example 21.1 Find the characteristic equation of the matrix \(\mathbf{A} = \begin{pmatrix} 3 & 5 \\ 2 & -1 \end{pmatrix}\)
- do in class
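As a quick numerical check on the in-class expansion (a sketch, not part of the assigned work; f_det and f_hand are just helper names for this check): expanding \(\det(\mathbf{A} - \lambda \mathbf{I}) = (3 - \lambda)(-1 - \lambda) - (5)(2)\) gives \(\lambda^2 - 2\lambda - 13\), and the two versions can be compared at a few values of \(\lambda\).
A <- matrix(c(3, 2, 5, -1), 2, 2)                      # A = (3 5; 2 -1), filled by column
f_det <- function(lambda) det(A - lambda * diag(2))    # definition of the characteristic polynomial
f_hand <- function(lambda) lambda^2 - 2 * lambda - 13  # hand expansion to verify
lambdas <- c(-3, 0, 1, 4)
cbind(f_det = sapply(lambdas, f_det), f_hand = sapply(lambdas, f_hand))  # columns should agree (up to floating point error)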
Example 21.2 Find the characteristic equation of the matrix \(\mathbf{A} = \begin{pmatrix} 0 & 6 & 8 \\ \frac{1}{2} & 0 & 0 \\ 0 & \frac{1}{2} & 0 \end{pmatrix}\)
- do in class (expand cofactors along the third column)
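For this matrix, expanding cofactors along the third column gives \(f(\lambda) = 8 \cdot \tfrac{1}{4} - \lambda(\lambda^2 - 3) = -\lambda^3 + 3\lambda + 2\). A numerical sketch to check that expansion (again, f_det and f_hand are just helper names):
A <- matrix(c(0, 1/2, 0, 6, 0, 1/2, 8, 0, 0), 3, 3)    # A filled by column
f_det <- function(lambda) det(A - lambda * diag(3))
f_hand <- function(lambda) -lambda^3 + 3 * lambda + 2
lambdas <- c(-2, 0, 1, 3)
cbind(f_det = sapply(lambdas, f_det), f_hand = sapply(lambdas, f_hand))  # columns should agree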
Once the characteristic equation is defined, we can use the equation to solve for the eigenvalues.
Theorem 21.1 Let \(\mathbf{A}\) be an \(n \times n\) matrix and let \(f(\lambda) = \det(\mathbf{A} - \lambda \mathbf{I})\) be its characteristic polynomial. Then the number \(\lambda_0\) is an eigenvalue of \(\mathbf{A}\) if and only if \(f(\lambda_0) = 0\).
Example 21.3 Using the characteristic equation of the matrix \(\mathbf{A} = \begin{pmatrix} 3 & 5 \\ 2 & -1 \end{pmatrix}\), solve for the eigenvalues and find a basis for the \(\lambda\)-eigenspaces
- do in class
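One way to check the in-class work for this example (a sketch; the eigenvector below is read off from the first row of \(\mathbf{A} - \lambda \mathbf{I}\)): the quadratic formula applied to \(\lambda^2 - 2\lambda - 13 = 0\) gives \(\lambda = 1 \pm \sqrt{14}\), which can be compared against R's numerical eigenvalues.
A <- matrix(c(3, 2, 5, -1), 2, 2)
c(1 + sqrt(14), 1 - sqrt(14))                  # roots from the quadratic formula
eigen(A)$values                                # numerical eigenvalues, should match
lambda1 <- 1 + sqrt(14)
(A - lambda1 * diag(2)) %*% c(5, lambda1 - 3)  # candidate eigenspace basis vector; result should be (numerically) the zero vector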
Example 21.4 Using the characteristic equation of the matrix \(\mathbf{A} = \begin{pmatrix} 0 & 6 & 8 \\ \frac{1}{2} & 0 & 0 \\ 0 & \frac{1}{2} & 0 \end{pmatrix}\), solve for the eigenvalues and find a basis for the \(\lambda\)-eigenspaces
- do in class (expand cofactors along the third column)
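For this example, the cubic factors as \(-\lambda^3 + 3\lambda + 2 = -(\lambda - 2)(\lambda + 1)^2\), so the candidate eigenvalues are \(2\) and \(-1\). A numerical sketch to check this:
A <- matrix(c(0, 1/2, 0, 6, 0, 1/2, 8, 0, 0), 3, 3)
f <- function(lambda) det(A - lambda * diag(3))
c(f(2), f(-1))     # both values should be (numerically) zero
eigen(A)$values    # numerical eigenvalues: near 2 and a repeated root near -1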
21.1 Similarity
The idea behind similar matrices is to understand how the linear transformations implied by the matrices behave. Two matrices are similar if their transformation behavior (rotation, expansion/contraction, etc.) is the same but the coordinates on which the matrices operate are different.
Definition 21.2 The matrices \(\mathbf{A}\) and \(\mathbf{B}\) are said to be similar if there exists an invertible matrix \(\mathbf{P}\) such that
\[ \begin{aligned} \mathbf{A} = \mathbf{P} \mathbf{B} \mathbf{P}^{-1} \end{aligned} \]
or equivalently
\[ \begin{aligned} \mathbf{P}^{-1} \mathbf{A} \mathbf{P}= \mathbf{B} \end{aligned} \]
Therefore, it is possible to change \(\mathbf{A}\) into \(\mathbf{B}\) with an invertible (one-to-one and onto) transformation.
Example 21.5 Consider the following example with matrices \(\mathbf{A}\), \(\mathbf{B}\), and \(\mathbf{P}\) defined as below:
A <- matrix(c(3, 0, 0, -2), 2, 2)
A
[,1] [,2]
[1,] 3 0
[2,] 0 -2
B <- matrix(c(-12, -10, 15, 13), 2, 2)
B
[,1] [,2]
[1,] -12 15
[2,] -10 13
P <- matrix(c(-2, 1, 3, -1), 2, 2)
P
[,1] [,2]
[1,] -2 3
[2,] 1 -1
P %*% B %*% solve(P)
[,1] [,2]
[1,] 3 0
[2,] 0 -2
solve(P) %*% A %*% P
[,1] [,2]
[1,] -12 15
[2,] -10 13
Theorem 21.2 If \(\mathbf{A}\) and \(\mathbf{B}\) are similar \(n \times n\) matrices, then \(\mathbf{A}\) and \(\mathbf{B}\) have the same characteristic polynomial and therefore the same eigenvalues.
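The reason is a short determinant calculation: if \(\mathbf{A} = \mathbf{P} \mathbf{B} \mathbf{P}^{-1}\), then
\[ \begin{aligned} \det(\mathbf{A} - \lambda \mathbf{I}) = \det(\mathbf{P} \mathbf{B} \mathbf{P}^{-1} - \lambda \mathbf{P} \mathbf{P}^{-1}) = \det(\mathbf{P} (\mathbf{B} - \lambda \mathbf{I}) \mathbf{P}^{-1}) = \det(\mathbf{P}) \det(\mathbf{B} - \lambda \mathbf{I}) \det(\mathbf{P}^{-1}) = \det(\mathbf{B} - \lambda \mathbf{I}) \end{aligned} \]
A quick numerical illustration using the matrices from Example 21.5 (a sketch; f_A and f_B are just helper names):
A <- matrix(c(3, 0, 0, -2), 2, 2)
B <- matrix(c(-12, -10, 15, 13), 2, 2)
f_A <- function(lambda) det(A - lambda * diag(2))
f_B <- function(lambda) det(B - lambda * diag(2))
lambdas <- c(-2, 0, 1, 3)
cbind(f_A = sapply(lambdas, f_A), f_B = sapply(lambdas, f_B))  # columns should agree
eigen(A)$values    # 3 and -2
eigen(B)$values    # also 3 and -2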
21.2 The geometric interpretation of similar matrices
In general, similar matrices do the same thing in different spaces, where the different spaces correspond to different bases (coordinate systems).