next up previous contents
Next: Confluence Up: Contours for <PLOT> Previous: Interpolation

The Vandermonde matrix

The Vandermonde matrix and its determinant have such a regular structure that they are worth examining in further detail. One starting point is the companion matrix

\begin{displaymath}D = \left[ \begin{array}{rrr}
0 & 1 & 0 \\
0 & 0 & 1 \\
-c & -b & -a
\end{array} \right], \end{displaymath}

whose column eigenvectors satisfy the equation

\begin{displaymath}\left[ \begin{array}{rrr}
0 & 1 & 0 \\
0 & 0 & 1 \\
-c & -b & -a
\end{array} \right]
\left[ \begin{array}{c}1\\ u\\ v\end{array}\right] =
\lambda \left[ \begin{array}{c}1\\ u\\ v\end{array}\right]. \end{displaymath}

Accordingly the components of the eigenvector satisfy

\begin{displaymath}\begin{array}{rcll}
u & = & \lambda, & (4)\\
v & = & \lambda u, & (5)\\
-c-bu-av & = & \lambda v, & (6)
\end{array}\end{displaymath}

which means that $\lambda$ is a root of the characteristic equation
\begin{displaymath}\chi(\lambda) = \lambda^3+a\lambda^2+b\lambda+c=0, \qquad (7)\end{displaymath}

and that the eigenvectors are columns of the Vandermonde matrix formed from the eigenvalues. In turn it follows that the coefficients in the bottom row of D are sums of homogeneous products of the eigenvalues, as befits the coefficients of a polynomial.
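This relation is easy to verify numerically. The following sketch (a hypothetical example: eigenvalues 1, 2, 3, hence $a=-6$, $b=11$, $c=-6$; exact rational arithmetic so the comparisons are exact) confirms that each Vandermonde column $(1,\lambda,\lambda^2)^{\rm T}$ is a column eigenvector of D:

```python
from fractions import Fraction

# Hypothetical example: eigenvalues 1, 2, 3, so that
# chi(l) = (l-1)(l-2)(l-3) = l^3 - 6 l^2 + 11 l - 6,
# i.e. a = -6, b = 11, c = -6 in the notation of the text.
a, b, c = Fraction(-6), Fraction(11), Fraction(-6)
roots = [Fraction(r) for r in (1, 2, 3)]

# the companion matrix D
D = [[0, 1, 0],
     [0, 0, 1],
     [-c, -b, -a]]

def matvec(M, v):
    # matrix times column vector
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

for lam in roots:
    v = [Fraction(1), lam, lam**2]   # a column of the Vandermonde matrix
    assert matvec(D, v) == [lam * x for x in v]
```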

The advantage of this discovery is that the row eigenvectors of D are equally easy to deduce, which gives a painless way to invert the Vandermonde matrix, thanks to the biorthogonality of the two sets of eigenvectors.

\begin{displaymath}\left[ \begin{array}{ccc}u&v&1\end{array} \right]
\left[ \begin{array}{rrr}
0 & 1 & 0 \\
0 & 0 & 1 \\
-c & -b & -a
\end{array} \right] =
\lambda \left[ \begin{array}{ccc}u&v&1\end{array}\right]. \end{displaymath}

This time the components of the eigenvector satisfy

\begin{displaymath}\begin{array}{rcll}
-c & = & \lambda u, & (8)\\
u-b & = & \lambda v, & (9)\\
v-a & = & \lambda. & (10)
\end{array}\end{displaymath}
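These row relations can be checked in the same way as the column ones. A sketch, again with the hypothetical eigenvalues 1, 2, 3 ($a=-6$, $b=11$, $c=-6$), solving equations (8) and (9) for u and v and then verifying the full row-eigenvector equation:

```python
from fractions import Fraction

# Hypothetical example with eigenvalues 1, 2, 3 (a = -6, b = 11, c = -6).
a, b, c = Fraction(-6), Fraction(11), Fraction(-6)
D = [[0, 1, 0],
     [0, 0, 1],
     [-c, -b, -a]]

def vecmat(w, M):
    # row vector times matrix
    return [sum(w[i] * M[i][j] for i in range(3)) for j in range(3)]

for lam in [Fraction(r) for r in (1, 2, 3)]:
    u = -c / lam           # equation (8)
    v = (u - b) / lam      # equation (9)
    assert v - a == lam    # equation (10) then holds automatically
    w = [u, v, Fraction(1)]
    assert vecmat(w, D) == [lam * x for x in w]
```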

Again the eigenvector components are defined by a recursive process, which can be followed once the constants are understood to be homogeneous product sums of the eigenvalues. Thus c, which is (up to sign) the product of all the eigenvalues, is divided by one of them (say $\lambda_i$) to obtain u. To obtain v, this product with $\lambda_i$ omitted is subtracted from the sum of all products in which one factor at a time is omitted; every surviving term contains the factor $\lambda_i$, which can be divided out. The process continues in similar fashion, producing a vector whose components are the coefficients of the polynomial

\begin{displaymath}\begin{array}{rcll}
\xi_i(\lambda) & = & \displaystyle\frac{\chi(\lambda)}{\lambda-\lambda_i}, & (11)\\[1ex]
 & = & \xi_{i,3}\lambda^2+\xi_{i,2}\lambda+\xi_{i,1}. & (12)
\end{array}\end{displaymath}
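The recursion is just synthetic division of $\chi$ by $(\lambda-\lambda_i)$. A sketch (same hypothetical example, eigenvalues 1, 2, 3) computing the coefficients of $\xi_i$ this way, checking them against the product of the remaining factors, and confirming that the value $\xi_i(\lambda_i)$ coincides with $\chi'(\lambda_i)$:

```python
from fractions import Fraction

# Hypothetical example with eigenvalues 1, 2, 3 (a = -6, b = 11, c = -6).
a, b, c = Fraction(-6), Fraction(11), Fraction(-6)
roots = [Fraction(r) for r in (1, 2, 3)]

def xi_coeffs(lam):
    # synthetic division of chi(l) = l^3 + a l^2 + b l + c by (l - lam);
    # the quotient is xi_i(l) = xi3 l^2 + xi2 l + xi1
    xi3 = Fraction(1)
    xi2 = a + lam * xi3
    xi1 = b + lam * xi2
    return xi1, xi2, xi3

def chi_prime(l):
    # chi'(l) = 3 l^2 + 2 a l + b
    return 3 * l**2 + 2 * a * l + b

for lam in roots:
    xi1, xi2, xi3 = xi_coeffs(lam)
    others = [r for r in roots if r != lam]
    # the coefficients reproduce the product of the remaining factors ...
    assert xi1 == others[0] * others[1]
    assert xi2 == -(others[0] + others[1])
    # ... and the normalizing constant xi_i(lambda_i) equals chi'(lambda_i)
    assert xi3 * lam**2 + xi2 * lam + xi1 == chi_prime(lam)
```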

The inverse of the Vandermonde matrix requires normalized row eigenvectors, obtainable by dividing the i-th row by $\xi_i(\lambda_i)$, which turns out to be $\chi'(\lambda_i)$. The polynomials $\pi_i$, whose coefficients form the rows of the inverse matrix, are usually written in factored form and are widely known as Lagrange interpolation polynomials; they are now written as a sum of powers to fit the matrix format. For n+1 data points,

\begin{displaymath}\begin{array}{rcll}
\pi_i(\lambda) & = & \displaystyle\frac{(\lambda-\lambda_1)(\lambda-\lambda_2)\cdots(\lambda-\lambda_{n+1})}
{(\lambda_i-\lambda_1)(\lambda_i-\lambda_2)\cdots(\lambda_i-\lambda_{n+1})}, & (13)\\[1ex]
 & = & \pi_{i,n+1}\lambda^n+\pi_{i,n}\lambda^{n-1}+\cdots+\pi_{i,1}. & (14)
\end{array}\end{displaymath}

For the quotient to make sense, it is understood that the i-th factor is omitted from both numerator and denominator. The terms $\pi_{i,j}$ are directly the elements of the inverse Vandermonde matrix; for our example
\begin{displaymath}\begin{array}{rcll}
M^{-1} & = & \left[ \begin{array}{ccc}
\pi_{11} & \pi_{12} & \pi_{13} \\
\pi_{21} & \pi_{22} & \pi_{23} \\
\pi_{31} & \pi_{32} & \pi_{33}
\end{array} \right] & (15)\\[1ex]
 & = & \left[ \begin{array}{ccc}
\frac{x_2x_3}{(x_1-x_2)(x_1-x_3)} & -\frac{x_2+x_3}{(x_1-x_2)(x_1-x_3)} & \frac{1}{(x_1-x_2)(x_1-x_3)} \\
\frac{x_3x_1}{(x_2-x_1)(x_2-x_3)} & -\frac{x_3+x_1}{(x_2-x_1)(x_2-x_3)} & \frac{1}{(x_2-x_1)(x_2-x_3)} \\
\frac{x_1x_2}{(x_3-x_1)(x_3-x_2)} & -\frac{x_1+x_2}{(x_3-x_1)(x_3-x_2)} & \frac{1}{(x_3-x_1)(x_3-x_2)}
\end{array} \right]. & (16)
\end{array}\end{displaymath}
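The biorthogonality can be confirmed by multiplying the two matrices. A sketch with the hypothetical nodes $x_1, x_2, x_3 = 1, 2, 3$, building M from Vandermonde columns and $M^{-1}$ from Lagrange coefficients, and checking that the product is the identity:

```python
from fractions import Fraction

# Hypothetical example with nodes x1, x2, x3 = 1, 2, 3.
xs = [Fraction(r) for r in (1, 2, 3)]
n = len(xs)

# M has the Vandermonde columns (1, x_j, x_j^2): M[i][j] = x_j^i.
M = [[xs[j]**i for j in range(n)] for i in range(n)]

# Row i of the inverse holds the coefficients of the Lagrange
# polynomial pi_i, lowest power first, as in equation (16).
Minv = []
for i in range(n):
    d = Fraction(1)
    for j in range(n):
        if j != i:
            d *= xs[i] - xs[j]
    p, q = [xs[j] for j in range(n) if j != i]
    Minv.append([p * q / d, -(p + q) / d, 1 / d])

# The product M^{-1} M is the identity: entry (i, j) is pi_i(x_j).
prod = [[sum(Minv[i][k] * M[k][j] for k in range(n)) for j in range(n)]
        for i in range(n)]
assert prod == [[1 if i == j else 0 for j in range(n)] for i in range(n)]
```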

Of course, inserting these results into $y = Y^{\rm T}M^{-1}X$ yields Lagrange's formula,

\begin{displaymath}y = \sum_{i=1}^{n+1}y_i\,\pi_i(x). \qquad (17)\end{displaymath}
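As a final check, Lagrange's formula should reproduce any polynomial of degree at most n from n+1 samples. A sketch using the hypothetical data $y = x^2 - 1$ at the nodes 1, 2, 3, with $\pi_i$ in the factored form of equation (13):

```python
from fractions import Fraction

# Hypothetical data: sample y = x^2 - 1 at the nodes 1, 2, 3.
xs = [Fraction(r) for r in (1, 2, 3)]
ys = [x**2 - 1 for x in xs]

def pi(i, x):
    # Lagrange polynomial pi_i(x), with the i-th factor omitted
    # from numerator and denominator
    p = Fraction(1)
    for j in range(len(xs)):
        if j != i:
            p *= (x - xs[j]) / (xs[i] - xs[j])
    return p

def lagrange(x):
    # equation (17): y = sum of y_i pi_i(x)
    return sum(ys[i] * pi(i, x) for i in range(len(xs)))

# the interpolant passes through the data ...
assert all(lagrange(xs[i]) == ys[i] for i in range(len(xs)))
# ... and, since the data is quadratic, reproduces it everywhere
assert lagrange(Fraction(5)) == 24   # 5^2 - 1
```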
