It is convenient to begin with matrices with integer elements, and even more precisely with the connection matrices for diagrams, such as those which form the evolution matrices or the de Bruijn matrices. There is a mutual relationship between matrices and diagrams. To begin with, a diagram can be described in various ways by matrices, one of the most natural being that in which the rows and columns of the matrix are indexed by the nodes of the diagram. The matrix elements are then one or zero, according to whether or not the node indexing the row is linked to the node indexing the column. Such a matrix need not be symmetric, inasmuch as the links may be directed rather than running in both directions.
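As a concrete sketch of this correspondence, the following fragment builds a connection matrix from a small directed diagram; the node labels and links are hypothetical, chosen purely for illustration.

```python
import numpy as np

# A small directed diagram given as a list of (source, target) links;
# the labels and links are hypothetical, for illustration only.
nodes = ["a", "b", "c"]
links = [("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")]

index = {n: i for i, n in enumerate(nodes)}
M = np.zeros((len(nodes), len(nodes)), dtype=int)
for src, dst in links:
    M[index[src], index[dst]] += 1   # += accommodates multiple parallel links

print(M)
```

Note that the matrix is not symmetric: the link from a to b appears at row a, column b, but there is no reverse entry at row b, column a.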

It is even possible to accommodate multiple links between nodes, by allowing the matrix elements to be positive integers rather than confining them to just the two values zero and one. This interpretation is useful because it allows powers of the connection matrix to be interpreted in terms of the number of composite paths joining two nodes, the power of the matrix determining the number of links involved. Whether or not one would want to suppose that negative matrix elements denote a link in the reverse direction depends upon whether one expects to cancel a path by retracing it or not, a complication which we shall avoid.

Conversely it is possible to represent a square matrix by a diagram, by setting up nodes to correspond to the dimensions of the matrix, and then introducing a link wherever there are non-zero matrix elements. Of course, if none of the matrix elements is zero, every node will then be linked to every other. Although the diagrammatic representation would be most interesting for disentangling the relationships between the elements of sparse matrices, complete mutual accessibility is nevertheless an important special case. In particular, given a finite diagram in which every node can be reached from any other node through a long enough chain of links, there will be a corresponding power of the connection matrix in which no zeroes remain.
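Both observations can be checked numerically; the three-node diagram below is a hypothetical example in which every node can reach every other, so some power of the connection matrix has no zeroes remaining.

```python
import numpy as np

# Connection matrix of a hypothetical diagram with links
# a->b, a->c, b->c, c->a; every node can reach every other.
M = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]])

# (M^2)[i, j] counts the two-link paths from node i to node j;
# e.g. the entry for (a, a) is 1, from the path a->c->a.
M2 = M @ M
print(M2)

# Since the diagram is strongly connected, a high enough power
# of M is strictly positive.
P = np.linalg.matrix_power(M, 5)
print((P > 0).all())
```

For this particular matrix the fifth power is the first with no zeroes at all, reflecting the longest detour needed to connect the least accessible pair of nodes.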

Diagonal elements of the connectivity matrix represent nodes linked to themselves. It might seem at first sight that a node would naturally be linked to itself, but it should be borne in mind that links and nodes are two different things, and that even null links have to be shown explicitly. Diagonal elements of powers of the connection matrix are probably easier to understand; aside from possible null links, they would represent loops in which a node was connected to a succession of others, the final link nevertheless returning to the original node.

Generally, the **ij**th element of the **k**th power of the connectivity
matrix **M** counts the number of paths with exactly **k** links leading from
node **i** to node **j**. The trace,

$$\mathrm{Tr}(M^k) = \sum_i (M^k)_{ii},$$

gives the total number of paths leading back to their point of origin, but under favorable
circumstances each path is counted once for each of the **k** nodes which
it contains. This is the case for loops which do not retrace a part of
themselves, but not otherwise. Thus

$$\ell(k) = \frac{1}{k}\,\mathrm{Tr}(M^k)$$

is a function which would represent the number of *loops* of length
**k** in the diagram whenever the non-redundancy criterion was met.
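A minimal numerical sketch of the loop count, using a plain three-node cycle as a hypothetical example:

```python
import numpy as np

# Connection matrix of a single three-node cycle a->b->c->a;
# its only loop of length 3 traverses the whole cycle once.
M = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])

def loop_count(M, k):
    # Tr(M^k) counts each closed k-link path once per starting node;
    # dividing by k collapses the k rotations of a loop into one count.
    return np.trace(np.linalg.matrix_power(M, k)) / k

print(loop_count(M, 3))   # 1.0
```

The caveat about retraced paths shows up immediately: for k = 6 the formula yields 1/2 rather than an integer, because the only closed paths of length 6 run around the cycle twice, violating the non-redundancy criterion.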

Sometimes useful information can be gained by constructing a power series whose coefficients count some quantity or other, particularly if the count can be related to a convolution of the coefficients in some other series. We might begin by forming the series

$$s(t) = \sum_{k=0}^{\infty} t^k M^k.$$

By either ignoring convergence, or hoping for the best, the series is seen to be equivalent to

$$s(t) = (I - tM)^{-1},$$

which is a variant form of the resolvent of **M**. It can be used as a generating function for the number of random walks through the diagram of **M**, but a more interesting generating function results from counting loops rather than random walks. Thus consider the series z(t) defined by

$$z(t) = \sum_{k=1}^{\infty} \frac{\mathrm{Tr}(M^k)}{k}\, t^k,$$

whose coefficients count loops. Since the trace is a linear operator, we can apply it after summing the matrix series, which this time is seen to correspond to a logarithm---the indefinite integral of the series for $(I - tM)^{-1}$. Thus we have

$$z(t) = -\mathrm{Tr}\,\ln(I - tM).$$
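The hope for convergence can be checked numerically for a small matrix: as long as **t** is small enough (smaller in magnitude than the reciprocal of the largest eigenvalue of **M**), the partial sums of the geometric series do approach the resolvent.

```python
import numpy as np

# Numerical check that sum_k t^k M^k reproduces (I - tM)^{-1},
# for one small hypothetical matrix and a t inside the radius
# of convergence.
M = np.array([[0., 1., 1.],
              [0., 0., 1.],
              [1., 0., 0.]])
t = 0.2
I = np.eye(3)

partial = sum(t**k * np.linalg.matrix_power(M, k) for k in range(60))
resolvent = np.linalg.inv(I - t * M)

print(np.allclose(partial, resolvent))   # True
```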
Since the logarithm is an awkward function to compute for matrices, it is fortunate that there is an identity relating exponentials of traces to determinants of exponentials, namely

$$e^{\mathrm{Tr}(A)} = \det\left(e^A\right),$$

which can be used to define the *zeta function*,

$$\zeta(t) = e^{z(t)} = \frac{1}{\det(I - tM)}.$$
Because the loop counts are embedded in an exponential they must be
evaluated by forming logarithmic derivatives of the zeta function at
the origin, but this is a small price to pay for working with such a
convenient quantity as the determinant of the "resolvent" (note that
**t** is not in its accustomed place).
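The determinant identity can be verified numerically: exponentiating the partial sums of the loop series z(t) agrees with 1/det(I - tM). The matrix and the value of **t** below are hypothetical, chosen to keep the series convergent.

```python
import numpy as np

# Check that exp(sum_k Tr(M^k) t^k / k) agrees with 1 / det(I - tM)
# for one small hypothetical matrix, with t inside the radius of
# convergence of the series.
M = np.array([[0., 1., 1.],
              [0., 0., 1.],
              [1., 0., 0.]])
t = 0.2
I = np.eye(3)

z = sum(np.trace(np.linalg.matrix_power(M, k)) * t**k / k
        for k in range(1, 80))
zeta = 1.0 / np.linalg.det(I - t * M)

print(np.isclose(np.exp(z), zeta))   # True
```

The determinant on the right-hand side is a polynomial in **t**, which is what makes the zeta function so much more tractable than the logarithmic series defining it.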

Certain properties of the zeta function can be deduced from its
representation as a determinant. Although the determinant of a product
is a product of determinants, the involvement of the variable **t** in
the resolvent prevents resolvents from sharing the same property. However,
a matrix will sometimes have a block diagonal structure allowing the
determinant of the matrix to be related to the determinants of the
blocks; an example would be the connection matrix for two independent
diagrams.

E-mail: mcintosh@servidor.unam.mx