
Eigenvalues & Eigenvectors

Eigenvalues and eigenvectors show up across mathematics and the sciences. Thinking in terms of vectors and linear maps, a matrix \(A\) encodes a linear transformation on a vector space. (For background, see Vector Spaces.)

Throughout, scalars lie in a field \(K\) (typically \(\mathbb{R}\) or \(\mathbb{C}\)), \(A\) is an \(n\times n\) matrix over \(K\), and \(I\) denotes the \(n\times n\) identity matrix.

Motivation

A general matrix may stretch, compress, shear, rotate, or reflect vectors depending on their direction. We would like directions that transform in the simplest possible way. These are the eigendirections: directions that are only scaled by \(A\).

Definition

A nonzero vector \(v\) is an eigenvector of \(A\) with associated eigenvalue \(\lambda\in K\) if

\[ A v = \lambda v . \]

Equivalently, moving all terms to one side,

\[ (A - \lambda I) v = 0. \]

For a nontrivial solution \(v\neq 0\) to exist, the matrix \(A-\lambda I\) must be singular, i.e. its determinant vanishes:

\[ \det(A-\lambda I)=0. \]
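This singularity condition is easy to check numerically. Below is a minimal sketch with a hypothetical matrix (not from the text): \(A=\begin{bmatrix}2&1\\1&2\end{bmatrix}\) has eigenvalues \(1\) and \(3\), so \(\det(A-\lambda I)\) vanishes at those values and nowhere else.

```python
import numpy as np

# Hypothetical example: A = [[2, 1], [1, 2]] has eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
I = np.eye(2)

for lam in (1.0, 3.0):
    # det(A - lam*I) vanishes exactly when lam is an eigenvalue
    print(lam, np.linalg.det(A - lam * I))

# A non-eigenvalue such as lam = 2 gives a nonzero determinant,
# so (A - 2I)v = 0 has only the trivial solution v = 0.
print(2.0, np.linalg.det(A - 2.0 * I))
```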

Characteristic Polynomial

The polynomial \(p_A(\lambda)=\det(A-\lambda I)\) is the characteristic polynomial. For a 2×2 matrix

\[ A = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{bmatrix}, \quad A-\lambda I = \begin{bmatrix} a_{11}-\lambda & a_{12}\\ a_{21} & a_{22}-\lambda \end{bmatrix}. \]

Then

\[ p_A(\lambda)=\det(A-\lambda I)=(a_{11}-\lambda)(a_{22}-\lambda)-a_{12}a_{21} = \lambda^2 - (a_{11}+a_{22})\lambda + (a_{11}a_{22}-a_{12}a_{21}). \]
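In the \(2\times 2\) case the coefficients are just the trace and determinant of \(A\), so the eigenvalues come straight from the quadratic formula. A sketch with a hypothetical matrix, checked against NumPy's eigenvalue routine:

```python
import numpy as np

# Hypothetical example: for a 2x2 matrix the characteristic polynomial
# is lambda^2 - tr(A)*lambda + det(A); its roots are the eigenvalues.
A = np.array([[4.0, 2.0],
              [1.0, 3.0]])

tr = A[0, 0] + A[1, 1]                        # a11 + a22
det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]   # a11*a22 - a12*a21

# Roots of lambda^2 - tr*lambda + det via the quadratic formula
disc = np.sqrt(tr**2 - 4 * det)
roots = sorted([float((tr - disc) / 2), float((tr + disc) / 2)])

print(roots)                          # here: 2 and 5
print(sorted(np.linalg.eigvals(A)))  # agrees up to rounding
```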

In general (for \(n\times n\)),

\[ A-\lambda I = \begin{bmatrix} a_{11}-\lambda & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22}-\lambda & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn}-\lambda \end{bmatrix}. \]

Finding Eigenpairs

  1. Compute the roots \(\lambda\) of \(p_A(\lambda)=0\) (the eigenvalues).
  2. For each \(\lambda\), solve \((A-\lambda I)v=0\) for a nonzero \(v\) (an eigenvector).
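The two steps above can be sketched in NumPy, which performs them internally: `np.linalg.eig` returns the eigenvalues together with one eigenvector per eigenvalue (as the columns of the second return value). The matrix here is a hypothetical example, and we verify \(Av=\lambda v\) directly:

```python
import numpy as np

# Hypothetical example matrix with eigenvalues 1 and 3
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eig(A)

for i, lam in enumerate(vals):
    v = vecs[:, i]                      # eigenvector for eigenvalue lam
    assert np.allclose(A @ v, lam * v)  # check A v = lambda v

print(sorted(vals.tolist()))  # the eigenvalues, 1 and 3 up to rounding
```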

What Do Eigenvalues Tell Us?

Each eigenvalue \(\lambda\) is the factor by which \(A\) scales its eigendirection: \(|\lambda|>1\) stretches, \(|\lambda|<1\) compresses, and \(\lambda<0\) reverses orientation along that direction. Counted with multiplicity (over an algebraically closed field such as \(\mathbb{C}\)), the eigenvalues sum to \(\operatorname{tr}(A)\) and multiply to \(\det(A)\).
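A quick numerical check, using a hypothetical upper-triangular matrix (whose eigenvalues are its diagonal entries): counted with multiplicity, the eigenvalues of \(A\) sum to \(\operatorname{tr}(A)\) and multiply to \(\det(A)\).

```python
import numpy as np

# Hypothetical upper-triangular example: eigenvalues are 2, 3, 4
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 4.0]])

vals = np.linalg.eigvals(A)
print(np.isclose(vals.sum(), np.trace(A)))        # sum equals the trace
print(np.isclose(vals.prod(), np.linalg.det(A)))  # product equals the determinant
```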

Eigenbasis

An eigenbasis is a basis consisting of eigenvectors of \(A\). When such a basis exists (e.g., \(A\) is diagonalizable), computations simplify drastically.

Diagonal Matrices

If \(A\) is diagonal, its eigenvectors are the standard basis vectors and its eigenvalues are the diagonal entries. More generally, if \(A\) is diagonalizable, then

\[ A = P D P^{-1}, \quad D=\operatorname{diag}(\lambda_1,\dots,\lambda_n), \]

where the columns of \(P\) are eigenvectors. Powers and matrix exponentials are then easy to compute: \(A^k=P D^k P^{-1}\) and \(e^{tA}=Pe^{tD}P^{-1}\), since \(D^k\) and \(e^{tD}\) act entrywise on the diagonal.
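The decomposition can be verified numerically. A minimal sketch with a hypothetical symmetric matrix: diagonalize with `np.linalg.eig`, reconstruct \(A=PDP^{-1}\), and compute \(A^5\) by powering only the diagonal entries.

```python
import numpy as np

# Hypothetical diagonalizable example (symmetric, so eigenvectors exist)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, P = np.linalg.eig(A)   # columns of P are eigenvectors
D = np.diag(vals)

# Reconstruct A from its eigendecomposition: A = P D P^{-1}
assert np.allclose(A, P @ D @ np.linalg.inv(P))

# A^5 = P D^5 P^{-1}: only the diagonal entries are raised to the 5th power
A5 = P @ np.diag(vals**5) @ np.linalg.inv(P)
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```

The same pattern gives \(e^{tA}=Pe^{tD}P^{-1}\) by replacing `vals**5` with `np.exp(t * vals)`.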