When you study eigenvalues and eigenvectors, you always run into the expression Av = λv, which means the matrix A stretches the eigenvector v (a vector) by an amount specified by the eigenvalue λ (a scalar).
An eigenvector is a vector whose direction is not changed by the transform. It may be stretched, but it still points in the same direction. Each eigenvector has a corresponding eigenvalue that gives the scaling factor by which the transform scales the eigenvector. In fancy words, the eigenvector is the rotational characteristic and the eigenvalue is the scaling characteristic. Thus, these two guys can give you information about the scaling and rotational characteristics of a matrix. Which means we can guess the result of a geometrical transformation of a vector without doing lots of matrix-vector multiplications.
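Here is a minimal Matlab sketch of that idea; the matrix A below is just a made-up example, not anything special. Transforming an eigenvector only scales it:

```matlab
% A minimal sketch: verify A*v = lambda*v for a small example matrix.
A = [2 1; 1 2];        % hypothetical 2x2 matrix, chosen only for illustration
[V, D] = eig(A);       % columns of V are eigenvectors, diag(D) are eigenvalues
v = V(:, 1);           % take the first eigenvector
lambda = D(1, 1);      % and its corresponding eigenvalue
disp(A * v)            % the transformed vector ...
disp(lambda * v)       % ... matches the scaled vector: same direction, scaled by lambda
```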
To sum up, an eigenvector gives a direction, and (when the matrix is the covariance matrix of your data) its eigenvalue tells you how much variance lies in that direction (how spread out the data is along that line). So the eigenvector with the highest eigenvalue is the principal component, and ta-daa, here's PCA!
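If you want to see that with actual numbers, here is a small sketch of PCA via the eigendecomposition of a covariance matrix; the data X below is made up purely for illustration:

```matlab
% A minimal PCA sketch on hypothetical 2-D data (X - mean(X) needs R2016b+).
X = randn(500, 2) * [2 1; 0 0.5];   % made-up correlated data, 500 points
Xc = X - mean(X);                   % center the data
C = cov(Xc);                        % 2x2 covariance matrix
[V, D] = eig(C);                    % eigenvectors = directions, eigenvalues = variances
[~, idx] = max(diag(D));            % pick the direction with the most variance
pc1 = V(:, idx);                    % the principal component
scores = Xc * pc1;                  % data projected onto that direction
```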
If we leave all the fancy math words aside, we can use eigenvalues and eigenvectors to make some accurate guesses in computer graphics. For example, we can predict the final look of an image without doing all the shading calculations. We can decide to sample some parts of the scene more or less. We can remove some unnecessary parameters to reduce computational or storage cost. Because remember: the eigenvector is a direction, and the eigenvalue tells you how much variance lies in that direction.
However, rectangular matrices (mxn matrices with m ≠ n) do not have eigenvalues; they have singular values instead. Luckily, we can get the same kind of information from singular values that eigenvalues give us.
We can write an mxn matrix B as B = USV' using the singular value decomposition (SVD); here ' denotes transpose. SVD can be applied to matrices of any size. U is an mxm matrix with orthonormal columns (UU' = I, where I is the mxm identity matrix). S is an mxn matrix containing the singular values on its diagonal; S scales the dimensions. V is an nxn orthonormal matrix, so V' = inv(V).
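As a quick sanity check, here is a sketch using Matlab's built-in svd on a made-up rectangular matrix; it also shows the rank-k truncation trick for throwing away unnecessary parameters, as mentioned above:

```matlab
% A minimal SVD sketch on a hypothetical 4x3 matrix B.
B = rand(4, 3);                  % any rectangular matrix will do
[U, S, V] = svd(B);              % U: 4x4, S: 4x3, V: 3x3
disp(norm(B - U * S * V'))       % ~0: the factors reconstruct B
disp(norm(U' * U - eye(4)))      % ~0: U has orthonormal columns
disp(norm(V' * V - eye(3)))      % ~0: V is orthonormal, so V' = inv(V)

% Keep only the k largest singular values: the best rank-k approximation of B.
k = 1;
Bk = U(:, 1:k) * S(1:k, 1:k) * V(:, 1:k)';
```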
So yeah, that's all, folks. Now you can get the SVD of your matrix, check the diagonal of S, and decide what to do. I will even give you a mini Matlab function that takes a matrix and the matrix's name, and plots the singular values:
```matlab
function S = getPlotSVD(M, name)
% S is the singular values (vector), M is the matrix we'll use, name is the
% name of the matrix used in the graphs

singulars = svd(M); % Get singular values

% Plot singular values
figure;
plot(singulars);
title(strcat('Singular values of ', {' '}, name));

% Plot singular values (log10 scale)
figure;
plot(log10(singulars));
title(strcat('Singular values of ', {' '}, name, ' (log10 scale)'));

% Plot cumulative percent
figure;
plot(cumsum(singulars) / sum(singulars));
title(strcat('Cumulative Percent of total Sigmas of ', {' '}, name));

S = singulars;
end
```
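For example, assuming a made-up low-rank matrix like the one below, you could call it like this; the cumulative-percent plot should flatten out after the fifth singular value:

```matlab
M = rand(100, 5) * rand(5, 80);  % hypothetical 100x80 matrix of rank (at most) 5
S = getPlotSVD(M, 'M');
```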
If you feel you need more information about eigenvalues, eigenvectors, singular values and SVD, you can check the Internet. You will find lots of fancy mathematical definitions. Also, you may want to check the websites below:
- Cool Linear Algebra: Singular Value Decomposition, Andrew Gibiansky (there are even some graphs, pictures, and Matlab code)
- Matrix - Eigenvector and Eigenvalue, sharetechnote.com
- Principal Component Analysis 4 Dummies: Eigenvectors, Eigenvalues and Dimension Reduction, George Dallas