Short Notes: Eigendecomposition of a Matrix

A vector $v_i \in \mathbb{C}^n$ is called the $i$-th eigenvector of a matrix $A \in \mathbb{R}^{n \times n}$ if it satisfies the simple equation

$$
A v_i = \lambda_i v_i, \tag{1}
$$

for a scalar value $\lambda_i \in \mathbb{C}$, called an eigenvalue. (Even when the matrix $A$ is real-valued, its eigenvalues and eigenvectors may still be complex.) Let us further assume that the $n$ eigenvectors of matrix $A$ are linearly independent; this holds, for example, whenever $A$ has $n$ distinct eigenvalues.
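
As a quick numerical sanity check of Eq. (1), here is a minimal NumPy sketch; the matrix $A$ below is just an arbitrary example, not anything from the derivation:

```python
import numpy as np

# An arbitrary example matrix (real-valued, with distinct eigenvalues).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns
# are the corresponding eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# Verify A v_i = lambda_i v_i for every eigenpair (Eq. (1)).
for i in range(A.shape[0]):
    v_i = eigenvectors[:, i]
    lam_i = eigenvalues[i]
    assert np.allclose(A @ v_i, lam_i * v_i)
```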

We can now ‘horizontally’ stack the eigenvectors into a matrix $Q \in \mathbb{C}^{n \times n}$:

$$
Q = [v_1, v_2, \dots, v_n]. \tag{2}
$$

Multiplying $A$ with $Q$ gives us:

$$
AQ = [A v_1, A v_2, \dots, A v_n]. \tag{3}
$$

If we compare Eq. (3) with Eq. (1), we can see that:

$$
AQ = [\lambda_1 v_1, \lambda_2 v_2, \dots, \lambda_n v_n]. \tag{4}
$$

If we now define a diagonal matrix carrying the eigenvalues $\lambda_i$ as

$$
\Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}, \tag{5}
$$

we see that

$$
Q\Lambda = [\lambda_1 v_1, \lambda_2 v_2, \dots, \lambda_n v_n], \tag{6}
$$

which is equal to Eq. (4):

$$
Q\Lambda = AQ. \tag{7}
$$

One final rearrangement, post-multiplying Eq. (7) with $Q^{-1}$, and we are done (note that $Q$ is invertible precisely because its columns, the eigenvectors, are assumed to be linearly independent):

$$
A = Q \Lambda Q^{-1}. \tag{8}
$$

Eq. (8) is also called the eigendecomposition of matrix $A$.
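
The whole derivation is easy to reproduce numerically. Below is a minimal sketch, again using NumPy and the same arbitrary example matrix as above (which is diagonalizable, so the assumption of linearly independent eigenvectors holds):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigenvalues, Q = np.linalg.eig(A)  # columns of Q are the eigenvectors, Eq. (2)
Lam = np.diag(eigenvalues)         # the diagonal matrix Lambda of Eq. (5)

# Q Lambda equals A Q, Eq. (7) ...
assert np.allclose(Q @ Lam, A @ Q)

# ... and post-multiplying with Q^{-1} recovers A, Eq. (8).
assert np.allclose(Q @ Lam @ np.linalg.inv(Q), A)
```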



