
Confluence

Consider the fate of a Vandermonde determinant when two of its columns coincide, for example when $a$ and $b$ are nearly equal.

\begin{displaymath}\left\vert \begin{array}{ccccc}
1 & 1 & 1 & 1 & 1 \\
x & a & b & c & d \\
x^2 & a^2 & b^2 & c^2 & d^2 \\
x^3 & a^3 & b^3 & c^3 & d^3 \\
f(x) & f(a) & f(b) & f(c) & f(d)
\end{array} \right\vert = 0. \end{displaymath}
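For reference, expanding this determinant along its first column and solving for $f(x)$ recovers the cubic Lagrange interpolation polynomial,

\begin{displaymath}f(x) = f(a)\frac{(x-b)(x-c)(x-d)}{(a-b)(a-c)(a-d)}
+ f(b)\frac{(x-a)(x-c)(x-d)}{(b-a)(b-c)(b-d)} + \cdots, \end{displaymath}

so the vanishing of the determinant simply asserts that $f$ agrees at $x$ with its interpolant through $a$, $b$, $c$, $d$.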

Starting from the determinant, we could subtract the $a$ column from the $b$ column,

\begin{displaymath}\left\vert \begin{array}{ccccc}
1 & 1 & 0 & 1 & 1 \\
x & a & b - a & c & d \\
x^2 & a^2 & b^2 - a^2 & c^2 & d^2 \\
x^3 & a^3 & b^3 - a^3 & c^3 & d^3 \\
f(x) & f(a) & f(b) - f(a) & f(c) & f(d)
\end{array} \right\vert = 0. \end{displaymath}

To save writing all the steps one by one, observe that $(b - a)$ is now a common factor of every entry in the third column except the last, which on division would leave $(f(b) - f(a))/(b - a)$ and a determinant whose nearness to zero depends on this factor. That suggests discarding the factor and recognizing a derivative. The result is called the confluent form of the Vandermonde determinant, and it leads to Hermite interpolation when used for that purpose. It is not excluded that several pairs of points coalesce; that just means using derivatives as well as values at all those points. If three points condense, the second derivative can also be extracted from the assemblage, and so on for as many clusters as might appear.
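Carried out explicitly for the pair $a$, $b$: dividing the third column by $(b - a)$ and passing to the limit $b \rightarrow a$ turns it into a derivative column,

\begin{displaymath}\left\vert \begin{array}{ccccc}
1 & 1 & 0 & 1 & 1 \\
x & a & 1 & c & d \\
x^2 & a^2 & 2a & c^2 & d^2 \\
x^3 & a^3 & 3a^2 & c^3 & d^3 \\
f(x) & f(a) & f'(a) & f(c) & f(d)
\end{array} \right\vert = 0, \end{displaymath}

whose solution for $f(x)$ matches $f$ and $f'$ at $a$ as well as the values at $c$ and $d$, which is Hermite interpolation.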

A similar limiting process recommends itself when the characteristic polynomial of a matrix has multiple roots, because the spectral decomposition of the matrix is essentially a Lagrange interpolation over the eigenvalues. Derivatives ought to begin to make their appearance.
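For reference, with distinct eigenvalues $\lambda_i$ the spectral decomposition of a matrix $M$ can be written as Sylvester's interpolation formula,

\begin{displaymath}f(M) = \sum_i f(\lambda_i) \prod_{j \neq i}
\frac{M - \lambda_j I}{\lambda_i - \lambda_j}, \end{displaymath}

and when two eigenvalues coalesce, the same confluence argument replaces the offending quotients by terms containing $f'(\lambda)$.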

To see how this works, pick out a simple $2 \times 2$ matrix, such as

\begin{displaymath}\left[ \begin{array}{cc}
1 & 1 \\
\varepsilon^2 & 1
\end{array} \right], \end{displaymath}

whose characteristic equation is

\begin{eqnarray*}
(\lambda - 1)^2 - \varepsilon^2 & = & 0,
\end{eqnarray*}



with roots $1 + \varepsilon$ and $1 - \varepsilon$. Consequently, if $\varepsilon$ were near zero, there would be an approximate degeneracy.
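As a check, solving $(M - \lambda I) v = 0$ for each root, with $\lambda = 1 \pm \varepsilon$,

\begin{displaymath}\left[ \begin{array}{cc}
\mp \varepsilon & 1 \\
\varepsilon^2 & \mp \varepsilon
\end{array} \right] \left[ \begin{array}{c}
1 \\
\pm \varepsilon
\end{array} \right] = \left[ \begin{array}{c}
0 \\
0
\end{array} \right], \end{displaymath}

shows that the right eigenvectors are $(1, \varepsilon)^T$ and $(1, -\varepsilon)^T$.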

Two matrices of eigenvectors, left and right, are

\begin{displaymath}2 \varepsilon U^{-1} =\left[ \begin{array}{cc}
\varepsilon & 1 \\
\varepsilon & -1
\end{array} \right], \hspace{1cm} U = \left[ \begin{array}{cc}
1 & 1 \\
\varepsilon & - \varepsilon
\end{array} \right], \end{displaymath}

from which it appears that there is only one eigenvector when $\varepsilon = 0$, and that the left eigenvector would be orthogonal to the only right eigenvector, precluding its normalization to unit projection.
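To make the failure concrete, the spectral projector belonging to $1 + \varepsilon$ is the product of the right eigenvector with the corresponding row of $U^{-1}$,

\begin{displaymath}P_{1+\varepsilon} = \frac{1}{2 \varepsilon}
\left[ \begin{array}{c}
1 \\
\varepsilon
\end{array} \right] \left[ \begin{array}{cc}
\varepsilon & 1
\end{array} \right] = \frac{1}{2 \varepsilon}
\left[ \begin{array}{cc}
\varepsilon & 1 \\
\varepsilon^2 & \varepsilon
\end{array} \right], \end{displaymath}

which diverges as $\varepsilon \rightarrow 0$; the vanishing inner product between left and right eigenvectors is exactly what forces the factor $1/(2 \varepsilon)$.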

Figure: Confluence of eigenvectors in the Jordan Normal Form, showing how the left eigenvector becomes orthogonal to its own right eigenvector.
[figure: degen.eps]

