The presence of zeroes in an otherwise positive matrix complicates the derivation of its properties. The sporadic presence of zeroes does not greatly complicate the derivations nor change the results; by contrast, certain systematic patterns of zeroes can alter the conclusions considerably. Fortunately it is relatively easy to predict the influence of zeroes and to state the correct conclusions. Displaying the graph associated with the matrix leads to about the most concise description that seems possible.
The essential point is that zero elements are tolerable in a matrix if they disappear from some power of the matrix. Since powers of a matrix have the same eigenvectors as the original, and eigenvalues which are the corresponding powers of the original eigenvalues, the necessary details of proofs can be worked out for the matrix power and then attributed to the matrix itself. Such circumvention can fail in two evident ways: either the zeroes persist through all the powers, or they simply keep shifting around. Both circumstances are fairly evident from an inspection of the graph of the matrix.
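As a small numerical sketch of the first point (the matrix here is an invented example, not one from the text), a non-negative matrix with several zero entries can have a strictly positive power, and that power keeps the eigenvectors of the original while raising its eigenvalues to the corresponding power:

```python
import numpy as np

# An invented non-negative matrix with several zero entries.
M = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])

# The zeroes disappear from the fourth power: every entry is positive.
assert (np.linalg.matrix_power(M, 4) > 0).all()

# Powers keep the same eigenvectors, with eigenvalues raised to that power.
w, V = np.linalg.eig(M)
M2 = M @ M
for lam, x in zip(w, V.T):
    assert np.allclose(M2 @ x, lam ** 2 * x)
```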
In the former case, there are nodes in the graph which are not accessible from one another, so the basic requirement is a strongly connected graph. If the nodes are grouped according to their mutual accessibility, inaccessible nodes are reflected in a block triangular structure of the matrix, which can be handled by partitioning the matrix and applying the relevant theorems to the resulting submatrices. Thus the solution is to treat disconnected portions of the graph separately, just as it is well to separate out transient parts of the graph and treat them apart from the others.
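A minimal numerical illustration (the matrix is invented for the purpose): when one group of nodes cannot be reached from another, ordering the nodes by accessibility gives a block triangular matrix, and the spectrum is just the union of the spectra of the diagonal blocks:

```python
import numpy as np

# Invented example: nodes {0, 1} cannot be reached from nodes {2, 3},
# so the matrix is block triangular in this ordering.
T = np.array([[0.5, 0.5, 0.2, 0.1],
              [0.3, 0.7, 0.0, 0.4],
              [0.0, 0.0, 0.6, 0.4],
              [0.0, 0.0, 0.5, 0.5]])

upper = T[:2, :2]   # links within the first group
lower = T[2:, 2:]   # links within the second group

# The spectrum of T is the union of the spectra of the diagonal blocks.
spec_T = np.sort_complex(np.linalg.eigvals(T))
spec_blocks = np.sort_complex(np.concatenate(
    [np.linalg.eigvals(upper), np.linalg.eigvals(lower)]))
assert np.allclose(spec_T, spec_blocks)
```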
Even if the graph is connected, it may happen that paths of only certain lengths exist between given nodes; consider for example a graph with two subsets of nodes such that the only direct links run between nodes of different sets, none between two nodes of the same set. This structure signifies a partitioned matrix whose diagonal submatrices are zero. Its square would have nonzero diagonal submatrices but zero antidiagonal submatrices, while its cube would once again have the original form. Thus no power at all would be free of blocks of zeroes. Fortunately this alternative has a particularly elegant solution.
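The alternating block pattern can be checked numerically; the blocks A and B below are arbitrary, chosen only for illustration:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.5, 1.0],
              [1.5, 2.0]])
Z = np.zeros((2, 2))

# Two sets of nodes with links only between the sets: zero diagonal blocks.
M = np.block([[Z, A],
              [B, Z]])

M2 = M @ M
M3 = M2 @ M

# The square has nonzero diagonal blocks and zero antidiagonal blocks...
assert np.allclose(M2[:2, :2], A @ B) and np.allclose(M2[:2, 2:], 0)
assert np.allclose(M2[2:, 2:], B @ A) and np.allclose(M2[2:, :2], 0)
# ...while the cube regains the original pattern of zeroes.
assert np.allclose(M3[:2, :2], 0) and np.allclose(M3[2:, 2:], 0)
```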
For example, the structure just cited refers to a matrix of the form

\[ M = \begin{pmatrix} 0 & A \\ B & 0 \end{pmatrix}; \]
but there exists another matrix

\[ J = \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix} \]
for which MJ = -JM. If

\[ X = \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} \]

is an eigenvector of M belonging to the eigenvalue λ, then JX is another, belonging to the eigenvalue -λ, as the use of the anticommutation relation readily shows: M(JX) = -J(MX) = -λ(JX). Thus all nonzero eigenvalues occur in negative pairs, with eigenvectors paired so that corresponding components differ at most in sign, in appropriate places.
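A quick numerical check of this pairing (with random blocks, purely for illustration): MJ = -JM holds, and each eigenpair (λ, X) of M yields the eigenpair (-λ, JX):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((2, 2))
B = rng.random((2, 2))
Z = np.zeros((2, 2))
I = np.eye(2)

M = np.block([[Z, A], [B, Z]])
J = np.block([[I, Z], [Z, -I]])

# The anticommutation relation MJ = -JM.
assert np.allclose(M @ J, -J @ M)

# Each eigenpair (lam, x) of M yields the eigenpair (-lam, J x).
w, V = np.linalg.eig(M)
for lam, x in zip(w, V.T):
    assert np.allclose(M @ (J @ x), -lam * (J @ x))
```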
The squared matrix is block diagonal, of the form

\[ M^2 = \begin{pmatrix} AB & 0 \\ 0 & BA \end{pmatrix}, \]
but the diagonal components AB and BA are equivalent as long as A is square and non-singular, since BA = A^{-1}(AB)A. If it is not, a subspace belonging to eigenvalue 0 can be partitioned off, and the conclusions applied to the complementary space.
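The equivalence of AB and BA can be confirmed numerically (random square blocks for illustration; a random A is almost surely invertible):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((3, 3))   # almost surely non-singular
B = rng.random((3, 3))

# When A is invertible, BA = A^{-1}(AB)A, so AB and BA are similar.
assert np.allclose(np.linalg.inv(A) @ (A @ B) @ A, B @ A)

# In particular they share the same eigenvalues.
wAB = np.linalg.eigvals(A @ B)
wBA = np.linalg.eigvals(B @ A)
for lam in wAB:
    assert np.min(np.abs(wBA - lam)) < 1e-8
```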
More generally, M might contain a longer superdiagonal of submatrices, higher roots of unity would occupy the diagonal of J, and the canonical form would require k equivalent copies of a matrix of more tractable form. All eigenvalues would occur in cycles of length k, related by kth roots of unity.
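A sketch of the general case for k = 3 (blocks invented for illustration): with the submatrices on a superdiagonal cycle, the cube of M is block diagonal, and the spectrum is invariant under multiplication by a primitive cube root of unity, so the eigenvalues come in cycles of three:

```python
import numpy as np

rng = np.random.default_rng(2)
k, n = 3, 2
B0, B1, B2 = (rng.random((n, n)) for _ in range(k))
Z = np.zeros((n, n))

# Cyclic structure of index 3: blocks on the superdiagonal and the corner.
M = np.block([[Z,  B0, Z],
              [Z,  Z,  B1],
              [B2, Z,  Z]])

# Every eigenvalue lam of M comes in a cycle lam, omega*lam, omega^2*lam,
# where omega is a primitive cube root of unity.
w = np.linalg.eigvals(M)
omega = np.exp(2j * np.pi / k)
for lam in w:
    assert np.min(np.abs(w - omega * lam)) < 1e-6

# The cube is block diagonal: three equivalent copies of a simpler matrix.
M3 = np.linalg.matrix_power(M, 3)
assert np.allclose(M3[:n, n:2 * n], 0) and np.allclose(M3[:n, 2 * n:], 0)
```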