The largest eigenvalue of a positive matrix is unique and is associated with eigenvectors of positive components, both the row eigenvector and the column eigenvector. Any other eigenvector must therefore have components of mixed sign, in its real part and in its imaginary part if one exists. Knowledge of the relative magnitude of the second eigenvalue of a matrix is often convenient, for example in judging the rate of convergence of most procedures for diagonalizing the matrix. The traditional way to obtain this information is to subtract the contribution of the largest eigenvalue from the matrix and then estimate the largest eigenvalue of the remaining matrix. There will often be a pair of second largest eigenvalues, complex conjugates because they belong to a real matrix; if so, they must be considered together.
Removal of the largest eigenvalue requires an explicit knowledge of both the largest eigenvalue and its eigenvectors. It is convenient to avoid this explicit knowledge by supposing the matrix to have been previously transformed to stochastic form; if it is already stochastic, then no preparation is required. The transformation to a column stochastic matrix is accomplished by incorporating the elements of its row eigenvector into the matrix itself by column divisions and row multiplications, finally dividing the entire matrix by its eigenvalue.
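The transformation just described can be sketched numerically. This is a minimal illustration, assuming NumPy and an arbitrary positive example matrix: the row (left) Perron eigenvector v supplies the row multiplications and column divisions, and the final division by the Perron eigenvalue makes every column sum to one.

```python
import numpy as np

def to_column_stochastic(A):
    """Rescale a positive matrix A to column-stochastic form using its
    Perron row eigenvector v (with v A = rho v):
    S = (1/rho) * diag(v) @ A @ diag(1/v)."""
    # Row eigenvectors of A are column eigenvectors of A.T.
    w, V = np.linalg.eig(A.T)
    k = np.argmax(w.real)            # the Perron eigenvalue is real and largest
    rho = w[k].real
    v = np.abs(V[:, k].real)         # the Perron eigenvector has uniform sign
    # Multiply row i by v[i], divide column j by v[j], divide by rho.
    S = (v[:, None] * A / v[None, :]) / rho
    return S, rho

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])           # arbitrary positive matrix
S, rho = to_column_stochastic(A)
print(S.sum(axis=0))                 # each column now sums to 1
```

The rescaling is a similarity transformation followed by a scalar division, so the spectrum of S is that of A divided by rho; in particular the largest eigenvalue of S is 1.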
Even so, we would still need to know the components of the column eigenvector to achieve the traditional reduction. By substituting a more readily available vector for the unknown column eigenvector we can still obtain useful estimates. If the matrix were stochastic to start with, no additional knowledge would be required. A plausible choice, which would still leave matrix elements of uniform sign, would be to form a vector by taking the least element in each row, or alternatively, the greatest element in each row.
To recapitulate, suppose that M is a column stochastic matrix, so that 1 is its largest eigenvalue, with u, the row of ones, as its eigenvector. We are seeking an upper bound for λ, which is another eigenvalue with column eigenvector X, and we have formed the column C by taking the least element from each row of M. Since

uX = 0,

we know that both

(M - Cu)X = λX   and   (M + Cu)X = λX.
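The relations above can be checked numerically. In this sketch, with an arbitrary hypothetical column-stochastic matrix, C holds the row minima and Cu is the rank-one product of the column C with the row of ones; every eigenvalue of M other than 1 should also appear as an eigenvalue of M - Cu.

```python
import numpy as np

# Hypothetical 3x3 column-stochastic example (columns sum to 1).
M = np.array([[0.5, 0.2, 0.1],
              [0.3, 0.5, 0.3],
              [0.2, 0.3, 0.6]])
u = np.ones((1, 3))                   # the row of ones
C = M.min(axis=1).reshape(3, 1)       # least element of each row
shifted = M - C @ u                   # subtract the rank-one matrix Cu

eig_M = np.linalg.eigvals(M)
eig_S = np.linalg.eigvals(shifted)
# Each eigenvalue of M other than 1 survives the subtraction.
for lam in eig_M:
    if not np.isclose(lam, 1.0):
        assert np.min(np.abs(eig_S - lam)) < 1e-9
print(sorted(eig_M.real), sorted(eig_S.real))
```

For this example M has eigenvalues 1, 0.4 and 0.2, and both 0.4 and 0.2 reappear among the eigenvalues of M - Cu.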
Either of these two matrices can be used to obtain bounds for λ, since it is an eigenvalue of each. Taking Gerschgorin disks from the column sums, we see that the column sums of M are unity and those of Cu are all equal, each being the sum of the minimal elements of the rows. Thus

|λ| ≤ 1 - Σ_i min_j M_ij.
If maximal elements were utilized, there would be a reversal of sign and

|λ| ≤ Σ_i max_j M_ij - 1.
Obviously one would choose the more restrictive of the two bounds.
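Both bounds are cheap to compute. The following sketch, assuming NumPy and an arbitrary 2x2 column-stochastic example, forms each bound from the row minima and row maxima and takes the more restrictive of the two; for this particular matrix both bounds equal 0.3, which happens to match |λ₂| exactly.

```python
import numpy as np

def second_eigenvalue_bounds(M):
    """Bounds on |lambda| for any eigenvalue lambda != 1 of a
    column-stochastic matrix M, from row minima and row maxima."""
    lower_sum = M.min(axis=1).sum()   # sum of the least element of each row
    upper_sum = M.max(axis=1).sum()   # sum of the greatest element of each row
    return 1.0 - lower_sum, upper_sum - 1.0

M = np.array([[0.6, 0.3],
              [0.4, 0.7]])            # column stochastic
bmin, bmax = second_eigenvalue_bounds(M)
bound = min(bmin, bmax)               # the more restrictive of the two
eigs = sorted(np.abs(np.linalg.eigvals(M)), reverse=True)
print(bound, eigs[1])                 # here both are 0.3
```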
If the matrix elements of M vary considerably in size or are unfavorably distributed, the bounds provided by these two inequalities may not be very useful; for example the probabilistic de Bruijn matrices generally contain zeroes in every row and column, so that the minimal inequality is vacuous. Likewise, the concentration of the nonzero elements makes them so large that the maximal inequality gives worse limits than the mere knowledge that the matrices are stochastic. Nevertheless, powers of the de Bruijn matrices are better behaved, and are more suitable for estimates.
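The point about powers can be illustrated with a small sketch. Here a hypothetical 4-state probabilistic de Bruijn matrix on two symbols is built column-stochastically (column j = (a,b) feeds states (b,c)); the transition probabilities p are an arbitrary choice, not taken from the text. Every row of M contains zeroes, so the minimal bound on M itself is the vacuous 1, but M squared is strictly positive and bounds |λ₂|², hence |λ₂| after a square root.

```python
import numpy as np

N = 4
p = np.array([0.3, 0.6, 0.4, 0.8])  # hypothetical transition probabilities
M = np.zeros((N, N))
for j in range(N):                  # state j = (a, b) with b = j % 2
    b = j % 2
    M[2 * b, j] = p[j]              # successor (b, 0)
    M[2 * b + 1, j] = 1 - p[j]      # successor (b, 1)

def minimal_bound(S):
    # 1 minus the sum of the row minima, as in the text.
    return 1.0 - S.min(axis=1).sum()

M2 = M @ M                          # strictly positive for this graph
print(minimal_bound(M))             # 1.0: vacuous, every row of M has a zero
print(np.sqrt(minimal_bound(M2)))   # nontrivial bound on |lambda_2| via M squared
```

Since the eigenvalues of M squared are the squares of those of M, the minimal bound for M squared applies to |λ₂|², and its square root bounds |λ₂| itself.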