
Lagrange interpolation polynomials

By far the best approach to constructing such a table is to observe that the $G_i$'s are Lagrange interpolation polynomials without their normalization factor (which was provided by the inner product in the denominator of the column-by-row formulation), with which they ought to be equipped for the sake of greater consistency. Doing so, one gets

\begin{eqnarray*}
G_i(M) & = &
\frac{\prod_{j\ne i}^n(M - \lambda_j I)}
{\prod_{j\ne i}^n(\lambda_i - \lambda_j)},
\end{eqnarray*}



Once this detail is accommodated, the $G_i$'s are equal to their own squares, leaving a multiplication table resembling a unit matrix. The Lagrange polynomials enter directly into the verification of the table, without any need to examine the $G_i$'s in detail, because they are completely defined by their values over a set of distinct points. Multiplying basis polynomials which take only the values zero and unity leads to polynomials of the same kind. If at least one of the two factors contributes a zero everywhere, the product must be the constant zero. That is just what happens when, for $i \ne j$, $G_i$ contributes the factor which $G_j$ was lacking to complete the characteristic polynomial: their product contains all $n$ factors $(M - \lambda_k I)$, and so vanishes by the Cayley-Hamilton theorem.
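For readers who want to experiment, a short numerical sketch (the particular matrix, and the use of Python with numpy, are assumptions made purely for illustration) builds each $G_i$ from the formula above and checks the multiplication table directly:

\begin{verbatim}
import numpy as np

# An illustrative matrix with distinct eigenvalues 2, 3, 5
# (read off the diagonal, since it is triangular).
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])
lam = np.linalg.eigvals(M)
n = len(lam)
I = np.eye(n)

def G(i):
    """G_i(M) = product over j != i of (M - lam_j I)/(lam_i - lam_j)."""
    P = I.copy()
    for j in range(n):
        if j != i:
            P = P @ (M - lam[j] * I) / (lam[i] - lam[j])
    return P

# Multiplication table: G_i G_j equals G_i when i == j, zero otherwise.
for i in range(n):
    for j in range(n):
        target = G(i) if i == j else np.zeros((n, n))
        assert np.allclose(G(i) @ G(j), target)
\end{verbatim}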

On the other hand, if both factors take the value $1$ in the same places, the product still takes the value $1$ there, so it is the same polynomial again, modulo the characteristic polynomial; in other words, $G_i^2 = G_i$. It is even true that

\begin{eqnarray*}
I & = & \sum_{i=1}^n G_i
\end{eqnarray*}



on account of having created an interpolation for the constant $1$.
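As a quick illustration (the matrix here is chosen arbitrarily), take

\begin{eqnarray*}
M & = & \left(\begin{array}{cc} 0 & 1 \\ 2 & 1 \end{array}\right),
\end{eqnarray*}

whose eigenvalues are $\lambda_1 = 2$ and $\lambda_2 = -1$. Then

\begin{eqnarray*}
G_1(M) & = & \frac{M + I}{3}
\ = \ \frac{1}{3}\left(\begin{array}{cc} 1 & 1 \\ 2 & 2 \end{array}\right), \\
G_2(M) & = & \frac{2 I - M}{3}
\ = \ \frac{1}{3}\left(\begin{array}{cc} 2 & -1 \\ -2 & 1 \end{array}\right),
\end{eqnarray*}

and their sum is visibly $I$, while $\lambda_1 G_1(M) + \lambda_2 G_2(M) = M$, anticipating the interpolation of the identity function below.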

Not only can the constant function $1$ be interpolated, but also the identity function $f(x) = x$, in the form

\begin{eqnarray*}
M & = & \sum_{i=1}^n \lambda_i\, G_i(M),
\end{eqnarray*}



and even more generally results such as

\begin{eqnarray*}
f(M) & = & \sum_{i=1}^n f(\lambda_i)\, G_i(M), \\
M^{-1} & = & \sum_{i=1}^n \lambda_i^{-1}\, G_i(M),\ \left[\lambda_i\ {\rm nonzero}\right].
\end{eqnarray*}



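Continuing the numerical sketch from above (the same assumed matrix and helper function), the inverse formula can be checked against a direct inversion, and any scalar function defined on the spectrum can be applied in the same way:

\begin{verbatim}
# Inverse via interpolation; valid since no eigenvalue of M is zero.
M_inv = sum((1.0 / lam[i]) * G(i) for i in range(n))
assert np.allclose(M_inv, np.linalg.inv(M))

# Any scalar function defined on the spectrum works the same way,
# for example the exponential:
exp_M = sum(np.exp(lam[i]) * G(i) for i in range(n))
\end{verbatim}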
The function formula even gives a mechanism for calculating square roots, at least for matrices with nonnegative eigenvalues. Such a quantity is required to map a matrix symmetrically into its transpose:

\begin{eqnarray*}
\surd(M) & = & \sum_{i=1}^n \surd(\lambda_i)\, G_i(M).
\end{eqnarray*}



Note that failing to insist on positive roots of positive eigenvalues inevitably leads to a great multiplicity of square roots for any particular matrix, because of all the independent binary sign choices at the nonzero roots.
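To make the sign count concrete (again only a sketch, reusing the assumed helpers from the earlier fragments), every choice of signs for the $\surd(\lambda_i)$ squares back to $M$, so $n$ distinct nonzero eigenvalues yield $2^n$ square roots:

\begin{verbatim}
from itertools import product

# Eigenvalues 2, 3, 5 are positive, so their real square roots exist;
# each of the 2**n sign patterns gives a genuine square root of M.
for signs in product([1.0, -1.0], repeat=n):
    R = sum(signs[i] * np.sqrt(lam[i]) * G(i) for i in range(n))
    assert np.allclose(R @ R, M)
\end{verbatim}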

