Continuing to speculate on the eigenvectors and eigenvalues of a sum of two matrices, consider the case where the second is small relative to the first, perhaps on account of multiplying it by a small parameter: say $M + \varepsilon V$, with $\varepsilon$ small. Maybe a small change of coordinates, depending on the same parameter, could account for the changed matrix; suppose then that
\[ (I - \varepsilon S)\,(M + \varepsilon V)\,(I + \varepsilon S) = M, \]
with the intention of disregarding anything multiplying $\varepsilon^{2}$. First,
\[ (I - \varepsilon S)(M + \varepsilon V)(I + \varepsilon S) = M + \varepsilon\,(V + MS - SM) + O(\varepsilon^{2}), \]
so the task becomes solving for the commutator,
\[ [S, M] = SM - MS = V, \]
which is a special case of a more general first order equation (linear in the unknown, although not of the simple form $Ax = b$) involving an unknown $X$ and given matrices $A$ and $B$,
\[ XA - AX = B, \]
or even $A$, $B$, and $C$ (instead of the second $A$),
\[ XA - CX = B. \]
As usual, the first step is to diagonalize $M$, but it is reasonable to suppose that that has already been done, since it is only a question of the coordinate system. Once that is done, say $M = \mathrm{diag}(\lambda_{1}, \lambda_{2}, \ldots, \lambda_{n})$, and the equation reduced to components, we find
\[ (\lambda_{j} - \lambda_{i})\,s_{ij} = v_{ij}, \]
or for reference,
\[ s_{ij} = \frac{v_{ij}}{\lambda_{j} - \lambda_{i}}, \qquad i \neq j. \]
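To make the first-order bookkeeping concrete, here is a minimal numerical sketch in Python with NumPy; the eigenvalues, the perturbation, and the value of $\varepsilon$ are all invented for illustration. It solves the commutator equation componentwise and checks that the transformed matrix differs from $M$ only at order $\varepsilon^{2}$.
\begin{verbatim}
import numpy as np

eps = 1.0e-4                        # small parameter (illustrative)
lam = np.array([1.0, 2.0, 5.0])     # distinct eigenvalues of the diagonal M
M = np.diag(lam)

# A symmetric perturbation V with zero diagonal, as assumed in the text.
rng = np.random.default_rng(0)
V = rng.standard_normal((3, 3))
V = V + V.T
np.fill_diagonal(V, 0.0)

# Solve [S, M] = V componentwise: (lam_j - lam_i) * s_ij = v_ij.
S = np.zeros_like(V)
for i in range(3):
    for j in range(3):
        if i != j:
            S[i, j] = V[i, j] / (lam[j] - lam[i])

I = np.eye(3)
residue = (I - eps * S) @ (M + eps * V) @ (I + eps * S) - M
print(np.max(np.abs(residue)))      # of order eps**2, not eps
\end{verbatim}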
As always, a problem arises when $\lambda_{i} = \lambda_{j}$, which is traditionally resolved by making still further preparations, namely choosing coordinates for which the corresponding $v_{ij}$ vanishes, avoiding the necessity of division by zero; just leave that part of the equation alone. The new eigenvectors can now be read off from the columns of $I + \varepsilon S$ and the rows of $I - \varepsilon S$.
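As a check on reading the eigenvectors off from $I + \varepsilon S$, this sketch (same invented data as above, a sketch rather than a definitive implementation) compares those columns with the eigenvectors that numpy.linalg.eigh computes for $M + \varepsilon V$.
\begin{verbatim}
import numpy as np

eps = 1.0e-3
lam = np.array([1.0, 2.0, 5.0])
M = np.diag(lam)

rng = np.random.default_rng(1)
V = rng.standard_normal((3, 3))
V = V + V.T
np.fill_diagonal(V, 0.0)

# s_ij = v_ij / (lam_j - lam_i); the added eye() guards the diagonal from 0/0.
S = V / (lam[None, :] - lam[:, None] + np.eye(3))
np.fill_diagonal(S, 0.0)

approx = np.eye(3) + eps * S        # columns: first-order eigenvectors
vals, vecs = np.linalg.eigh(M + eps * V)

# eigh sorts eigenvalues in ascending order, matching lam here; eigenvectors
# are defined only up to sign, so align signs before comparing.
for k in range(3):
    exact = vecs[:, k] * np.sign(vecs[k, k])
    column = approx[:, k] / np.linalg.norm(approx[:, k])
    print(np.max(np.abs(column - exact)))   # of order eps**2
\end{verbatim}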
Not only is degeneracy an obstacle to this derivation, there is the implicit assumption that $V$ has no diagonal elements, avoiding $\lambda_{i} - \lambda_{i} = 0$ as a divisor. Consequently this procedure cannot change the eigenvalues of $M$, just its eigenvectors. If it is necessary to change the eigenvalues of $M$ as well, that has to be done independently of applying the operator $I + \varepsilon S$. Why is such a subterfuge necessary? Because for symmetric $V$ the solution satisfies $s_{ji} = -s_{ij}$, so that
\[ (I + \varepsilon S)^{T}(I + \varepsilon S) = I + \varepsilon\,(S + S^{T}) + O(\varepsilon^{2}) = I + O(\varepsilon^{2}), \]
making the transformation orthogonal to first order. That is a rotation, which will not change the lengths of the semiaxes of an ellipsoid, which are the eigenvalues.
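That antisymmetry, and the resulting first-order orthogonality, can also be verified numerically; in this sketch (again with invented data) $S + S^{T}$ vanishes up to rounding because $V$ is symmetric, while $I + \varepsilon S$ fails the orthogonality test only at order $\varepsilon^{2}$.
\begin{verbatim}
import numpy as np

eps = 1.0e-3
lam = np.array([1.0, 2.0, 5.0])

rng = np.random.default_rng(2)
V = rng.standard_normal((3, 3))
V = V + V.T
np.fill_diagonal(V, 0.0)

S = V / (lam[None, :] - lam[:, None] + np.eye(3))
np.fill_diagonal(S, 0.0)

# s_ji = v_ji / (lam_i - lam_j) = -s_ij for symmetric V: S is antisymmetric.
print(np.max(np.abs(S + S.T)))               # zero up to rounding

T = np.eye(3) + eps * S
print(np.max(np.abs(T.T @ T - np.eye(3))))   # of order eps**2
\end{verbatim}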
This whole scheme is sometimes called Primas' method.