
Equivalent matrices

Eigenvectors and eigenvalues together suffice to define a matrix. We have seen that whether two matrices share a common set of eigenvectors depends on whether they commute. One could equally wonder whether there is an analogous relationship for matrices which have common sets of eigenvalues. Since the eigenvectors define a basis, the question is essentially one of recognizing pairs of matrices which differ only in the coordinate system in which they are expressed. Suppose that $M$ and $N$ are two square matrices, and that $O$ defines a mapping between vector spaces (not necessarily the same one), in which case $M$ and $N$ could even have different dimensions.

\begin{eqnarray*}
& O & \\
{\rm Space}\ 1 & \longrightarrow & {\rm Space}\ 2 \\
N \downarrow \quad & & \quad \downarrow M \\
{\rm Space}\ 1 & \longrightarrow & {\rm Space}\ 2 \\
& O &
\end{eqnarray*}



The required relationship is that

\begin{eqnarray*}
O N & = & M O.
\end{eqnarray*}



with the further consequence that

\begin{eqnarray*}
N & = & O^{-1} M O
\end{eqnarray*}



whenever $O$ is invertible. This is the usual representation for the new matrix after a change of basis.
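As a minimal numerical sketch (with NumPy, using matrices chosen arbitrarily for illustration), the change of basis $N = O^{-1} M O$ leaves the spectrum unchanged:

```python
import numpy as np

# A hypothetical matrix M (upper triangular, so its eigenvalues 2, 3, 5
# sit on the diagonal) and an arbitrary invertible change of basis O.
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])
O = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# N = O^{-1} M O is the same mapping expressed in the new basis.
N = np.linalg.inv(O) @ M @ O

# The spectra coincide (up to ordering and round-off).
eig_M = np.sort(np.linalg.eigvals(M))
eig_N = np.sort(np.linalg.eigvals(N))
print(np.allclose(eig_M, eig_N))
```

The sort is needed only because the eigenvalue routine makes no promise about ordering.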

So far this only requires that $M$ and $N$ produce the same results, independently of the stage at which $O$ is introduced, and nothing has been said about eigenvalues. Nevertheless, note that if $N X = \lambda X$, we would have $O (N X) = \lambda (O X)$ whilst $(O N) X = M (O X)$ (the parentheses indicating a use of the associative law). Altogether, $M (O X) = \lambda (O X)$, so that $M$ and $N$ can be expected to have matched eigenvectors with the same eigenvalue unless a singularity of $O$ intervenes. To that extent, $M$ and $N$ have the same eigenvalues.
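The argument that $O$ carries eigenvectors of $N$ to eigenvectors of $M$ can be checked numerically; in this sketch the matrices and the eigenpair are chosen by hand for illustration:

```python
import numpy as np

# A diagonal N with a known eigenpair, an invertible O, and M = O N O^{-1},
# so that O N = M O holds by construction.
N = np.diag([1.0, 4.0, 9.0])
O = np.array([[2.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
M = O @ N @ np.linalg.inv(O)

lam = 4.0
x = np.array([0.0, 1.0, 0.0])   # N x = 4 x
y = O @ x                       # candidate eigenvector of M

# M (O x) = lambda (O x): O maps eigenvectors of N to eigenvectors of M.
print(np.allclose(M @ y, lam * y))
```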

To observe the correspondence of the whole set of eigenvalues, suppose that $U$ diagonalizes $M$ to $\Lambda$ and that $V$ diagonalizes $N$ to $K$:

\begin{eqnarray*}
M U & = & U \Lambda, \\
N V & = & V K,
\end{eqnarray*}



and that the eigenvalues correspond. They need not match in order, but they should have the same multiplicities; some permutation of the diagonal elements of $K$ then yields the diagonal elements of $\Lambda$. The mismatch could be remedied by introducing a permutation matrix, but it is just as well to take advantage of the ambiguity in defining $U$ and $V$, whose columns can be arranged in any order we please, to ensure that corresponding eigenvalues were listed in the same order from the beginning. That makes $K = \Lambda$, so

\begin{eqnarray*}
M = U \Lambda U^{-1} & = & (U V^{-1}) (V \Lambda U^{-1}), \\
N = V \Lambda V^{-1} & = & (V \Lambda U^{-1}) (U V^{-1}).
\end{eqnarray*}
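The factorization above can be verified directly; in this sketch $U$, $V$, and the shared spectrum are chosen arbitrarily for illustration:

```python
import numpy as np

# Two hypothetical matrices built to share the spectrum {1, 2, 3}
# via different (invertible) eigenvector matrices U and V.
Lam = np.diag([1.0, 2.0, 3.0])
U = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
V = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

M = U @ Lam @ np.linalg.inv(U)
N = V @ Lam @ np.linalg.inv(V)

# The factors from the text: A = U V^{-1}, B = V Lam U^{-1}.
A = U @ np.linalg.inv(V)
B = V @ Lam @ np.linalg.inv(U)

# M = A B and N = B A, as claimed.
print(np.allclose(M, A @ B), np.allclose(N, B @ A))
```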



Of course, $\Lambda$ could be inserted elsewhere in a similar product, and other variants exist. The essential point is that there is a pair of matrices $A$ and $B$ such that $M = A B$ and $N = B A$. The existence of such a factorization could be taken as the test of whether $M$ and $N$ have a common set of eigenvalues. Furthermore, $A$ and $B$ play the role of $O$ in the previous discussion because

\begin{displaymath}M A = (A B) A = A (B A) = A N,\end{displaymath}

and similarly in the other direction.
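Both claims, the shared spectrum of $AB$ and $BA$ and the intertwining relation $M A = A N$, hold for arbitrary square factors; a minimal sketch with random matrices:

```python
import numpy as np

# Arbitrary square factors; M = A B and N = B A.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

M = A @ B
N = B @ A

# M and N share a spectrum (sorted to ignore ordering)...
print(np.allclose(np.sort(np.linalg.eigvals(M)), np.sort(np.linalg.eigvals(N))))

# ...and A intertwines them: M A = (A B) A = A (B A) = A N.
print(np.allclose(M @ A, A @ N))
```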


Pedro Hernandez 2004-02-28