33 Eigenvalues

Eigenvalues are very important and have many applications, a couple of which are modeling population growth and solving certain types of differential equations seen in engineering and science [16]. They grew out of the need to solve the eigenvalue problem.

Larson and Falvo state the eigenvalue problem as follows: “If A is an n\times n matrix, do there exist n\times 1 nonzero matrices x such that A\textbf{x} is a scalar multiple of x?” [16]. Note that the eigenvalue problem applies only to n\times n matrices.

The scalar is denoted \lambda and is called an eigenvalue, and \textbf{x} is a corresponding eigenvector [16]. So, we want to know which eigenvalues and eigenvectors satisfy the equation

    \[A\textbf{x}=\lambda\textbf{x},\]

which can be rewritten as

    \[(\lambda I-A)\textbf{x}=\textbf{0}.\]

To find the eigenvalues, we solve the characteristic equation

    \[\text{det}(\lambda I-A)=|\lambda I-A|=0,\]

for \lambda. This is because the homogeneous equation (\lambda I-A)\textbf{x}=\textbf{0} has a nonzero solution if and only if the determinant of \lambda I-A equals zero [16]. In other words, we want to find the values of \lambda for which (\lambda I-A)\textbf{x}=\textbf{0} has a nonzero solution. After finding \lambda, we use the equation

    \[(\lambda I-A)\textbf{x}=\textbf{0},\]

where \textbf{0} is the zero vector, to find the corresponding eigenvectors. Note that we do not let \textbf{x} equal \textbf{0}, because \textbf{x}=\textbf{0} is just the trivial solution [16]. Let’s now look at an example from Larson and Falvo [16] that I completed for Elementary Linear Algebra.


Example 67
Find the eigenvalues and the corresponding eigenvectors for A=\begin{bmatrix} -2&4\\ 2&5 \end{bmatrix}.

Solution
Using the characteristic equation and solving for \lambda, we have

    \begin{align*} 0&=\left|\lambda\begin{bmatrix} 1&0\\ 0&1 \end{bmatrix}+(-1)\begin{bmatrix} -2&4\\ 2&5 \end{bmatrix}\right|\\ &=\left|\begin{bmatrix} \lambda&0\\ 0&\lambda \end{bmatrix}+\begin{bmatrix} 2&-4\\ -2&-5 \end{bmatrix}\right|\\ &=\left|\begin{bmatrix} \lambda+2&-4\\ -2&\lambda-5 \end{bmatrix}\right|\\ &=(\lambda+2)(\lambda-5)-(-2)(-4)\\ &=\lambda^2-3\lambda-18\\ &=(\lambda+3)(\lambda-6). \end{align*}

This implies that \lambda=-3 or \lambda=6. To find the corresponding eigenvectors, we have to solve (\lambda I-A)\textbf{x}=\textbf{0} for \textbf{x} when \lambda=-3 and when \lambda=6. For \lambda=-3,

    \begin{align*} \lambda I-A&=-3\begin{bmatrix} 1&0\\ 0&1 \end{bmatrix}+(-1)\begin{bmatrix} -2&4\\ 2&5 \end{bmatrix}\\ &=\begin{bmatrix} -3&0\\ 0&-3 \end{bmatrix}+\begin{bmatrix} 2&-4\\ -2&-5 \end{bmatrix}\\ &=\begin{bmatrix} -1&-4\\ -2&-8 \end{bmatrix} \end{align*}

Using elementary row operations, we get \begin{bmatrix} 1&4\\ 0&0 \end{bmatrix}.
The solutions to the equation

    \[\begin{bmatrix} 1&4\\ 0&0 \end{bmatrix}\textbf{x}=\textbf{0},\]

which can be written as

    \[\begin{bmatrix} 1&4\\ 0&0 \end{bmatrix} \begin{bmatrix} x_{11}\\ x_{21} \end{bmatrix}=\begin{bmatrix} 0\\ 0\\ \end{bmatrix},\]

are 2\times 1 matrices that satisfy

    \[1\cdot x_{11}+4\cdot x_{21}=0\]

by matrix multiplication. If we let x_{21}=t, where t is a nonzero real number, we have eigenvectors of the form \textbf{x}=\begin{bmatrix} -4t\\ t \end{bmatrix}. For \lambda=6,

    \begin{align*} \lambda I-A&=6\begin{bmatrix} 1&0\\ 0&1 \end{bmatrix}+(-1)\begin{bmatrix} -2&4\\ 2&5 \end{bmatrix}\\ &=\begin{bmatrix} 6&0\\ 0&6 \end{bmatrix}+\begin{bmatrix} 2&-4\\ -2&-5 \end{bmatrix}\\ &=\begin{bmatrix} 8&-4\\ -2&1 \end{bmatrix} \end{align*}

Using elementary row operations, we get \begin{bmatrix} -2&1\\ 0&0 \end{bmatrix}.
The solutions to the equation

    \[\begin{bmatrix} -2&1\\ 0&0 \end{bmatrix}\textbf{x}=\textbf{0},\]

which can be written as

    \[\begin{bmatrix} -2&1\\ 0&0 \end{bmatrix} \begin{bmatrix} x_{11}\\ x_{21} \end{bmatrix}=\begin{bmatrix} 0\\ 0 \end{bmatrix},\]

are 2\times 1 matrices that satisfy

    \[-2\cdot x_{11}+1\cdot x_{21}=0\]

by matrix multiplication. If we let x_{11}=t, where t is a nonzero real number, we have eigenvectors of the form \textbf{x}=\begin{bmatrix} t\\ 2t \end{bmatrix}.
Therefore, the eigenvalues are

    \[\lambda=-3 \,\,\text{and}\,\,\lambda=6\]

with corresponding eigenvectors that are nonzero scalar multiples of

    \[\begin{bmatrix} -4\\ 1 \end{bmatrix}\,\, \text{and}\,\, \begin{bmatrix} 1\\ 2 \end{bmatrix}.\]
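As a quick numerical check (a sketch using NumPy, which is not part of the original solution), the eigenpairs found above can be verified directly against the defining equation A\textbf{x}=\lambda\textbf{x}:

```python
import numpy as np

# The matrix from Example 67.
A = np.array([[-2.0, 4.0],
              [ 2.0, 5.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# The eigenvectors found by hand, taking t = 1.
x1 = np.array([-4.0, 1.0])  # eigenvalue -3
x2 = np.array([ 1.0, 2.0])  # eigenvalue  6

# A x = lambda x must hold for each pair.
assert np.allclose(A @ x1, -3 * x1)
assert np.allclose(A @ x2,  6 * x2)
```

Note that NumPy scales its eigenvectors to unit length, so they are nonzero scalar multiples of the hand-computed ones rather than identical to them.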

The eigenvalue problem is not the only such problem in linear algebra. Another, known as the diagonalization problem, is the subject of the following section.

Diagonalization

The diagonalization problem goes like this: “For a square matrix A, does there exist an invertible matrix P such that P^{-1}AP is diagonal?” [16]. A diagonal matrix is a square matrix in which every entry above and below the main diagonal is zero, so one diagonal matrix differs from another only in the entries along its main diagonal. We will see that diagonalizing a matrix A, when it can be done, produces a diagonal matrix with the eigenvalues of A along the main diagonal [16].

Let’s look at the formal definition of a diagonalizable matrix as given by Larson and Falvo [16].


Definition VI.12
An n\times n matrix A is diagonalizable if A is similar to a diagonal matrix. That is, A is diagonalizable if there exists an invertible matrix P such that P^{-1}AP is a diagonal matrix.

Once the eigenvalues and corresponding eigenvectors of a matrix A are found, it is not difficult to determine whether A is diagonalizable. It is also not difficult to find the diagonal matrix if A is diagonalizable. We simply have to follow the steps that are given below by Larson and Falvo [16].

Let A be an n\times n matrix.

  1. Find n linearly independent eigenvectors \textbf{p}_1,\textbf{p}_2,...,\textbf{p}_n for A with corresponding eigenvalues \lambda_1, \lambda_2,...,\lambda_n. If n linearly independent eigenvectors do not exist, A is not diagonalizable.
  2. If A has n linearly independent eigenvectors, let P be the n\times n matrix whose columns are these eigenvectors.
  3. The diagonal matrix D=P^{-1}AP will have the eigenvalues \lambda_1, \lambda_2,...,\lambda_n on its main diagonal (and zeros elsewhere). Additionally, the order of the eigenvectors used to form P will correspond to the order in which the eigenvalues appear on the main diagonal of D.
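The three steps can also be carried out numerically. The following Python sketch (using NumPy, an assumption on my part since the text works by hand) builds P from the eigenvectors of a matrix and checks that P^{-1}AP is diagonal:

```python
import numpy as np

# The matrix from Example 67, reused here for illustration.
A = np.array([[-2.0, 4.0],
              [ 2.0, 5.0]])

# Step 1: find n linearly independent eigenvectors.
# np.linalg.eig returns the eigenvectors as the columns of a matrix.
eigenvalues, P = np.linalg.eig(A)

# A is diagonalizable only if P is invertible, i.e. has full rank.
assert np.linalg.matrix_rank(P) == A.shape[0]

# Steps 2-3: D = P^{-1} A P is diagonal, with the eigenvalues on
# the main diagonal in the same order as their eigenvectors in P.
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(eigenvalues))
```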

Let’s now work through an example demonstrating these steps. This is a problem I wrote for this paper.


Example 68
Find a matrix P such that P^{-1}AP is diagonal where

    \[A=\begin{bmatrix} -2&4\\ 2&5 \end{bmatrix}\]

and find the matrix P^{-1}AP.

Solution
By Example 67, the eigenvalues of A are -3 and 6 with corresponding eigenvectors

    \[\begin{bmatrix} -4\\ 1 \end{bmatrix}\,\, \text{and}\,\, \begin{bmatrix} 1\\ 2 \end{bmatrix},\]

taking t=1 in each family of eigenvectors. Two vectors are linearly dependent exactly when one is a scalar multiple of the other, and neither of these eigenvectors is a scalar multiple of the other. Thus, they are linearly independent, and A is diagonalizable. This concludes the first step.

The second step says to let P be the matrix whose columns are these eigenvectors. Taking t=1 for each, we have

    \[P=\begin{bmatrix} -4&1\\ 1&2 \end{bmatrix}.\]

By step 3, the diagonal matrix P^{-1}AP has the eigenvalues of A along its main diagonal, appearing in the same order as their corresponding eigenvectors appear in P. So,

    \[P^{-1}AP=\begin{bmatrix} -3&0\\ 0&6 \end{bmatrix}.\]

The solution to the diagonalization problem for A is that there does exist an invertible matrix P such that P^{-1}AP is diagonal, namely

    \[P=\begin{bmatrix} -4&1\\ 1&2 \end{bmatrix}.\]
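As a final check (a NumPy sketch, not part of the original solution, taking t=1 in P), multiplying out P^{-1}AP recovers the diagonal matrix found above:

```python
import numpy as np

# The matrix A and the matrix P from Example 68 with t = 1;
# the columns of P are the eigenvectors of A.
A = np.array([[-2.0, 4.0],
              [ 2.0, 5.0]])
P = np.array([[-4.0, 1.0],
              [ 1.0, 2.0]])

D = np.linalg.inv(P) @ A @ P

# D should be the diagonal matrix with -3 and 6 on the diagonal.
assert np.allclose(D, np.diag([-3.0, 6.0]))
```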

License

Portfolio for Bachelor of Science in Mathematics Copyright © by Abigail E. Huettig. All Rights Reserved.
