The approximations used to derive the Gerschgorin limits would seem to be fairly generous. If any one of the eigenvalues actually lies on the circumference of a Gerschgorin disk, the implied equality for that one equation could presumably be used to draw additional conclusions; even close proximity to the boundary might possibly be a source of further information.
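The disks themselves are easy to exhibit numerically; the following NumPy sketch (with a matrix invented purely for illustration) computes the row disks and confirms that every eigenvalue falls inside at least one of them:

```python
import numpy as np

def gershgorin_disks(A):
    """Return (center, radius) for each Gerschgorin row disk of A."""
    A = np.asarray(A, dtype=complex)
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
    return list(zip(centers, radii))

# A small matrix chosen only for the example.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

disks = gershgorin_disks(A)
eigvals = np.linalg.eigvals(A)

# Every eigenvalue lies in at least one disk.
for lam in eigvals:
    assert any(abs(lam - c) <= r + 1e-9 for c, r in disks)
```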
Consider the point in the derivation at which the ratios were replaced by 1, and recall that i was the index of the largest component of the eigenvector; this substitution would surely alter the sum unless the matrix element multiplying the ratio were zero. If all the remaining off-diagonal matrix elements were zero, we could stop and think about reducing the matrix; otherwise we have forced a number of components of the eigenvector to be equal in absolute value. But this means that there are other equations in which this same largest component occurs, but with other indices.
The same argument forces equality for any further components related through non-zero matrix elements, and we arrive at the eventual conclusion that all the components of the eigenvector have the same absolute value, and that all the Gerschgorin disks intersect at least in the eigenvalue in question. The only exception would arise from a matrix whose diagram (the graph of its non-zero elements) were not connected, so that either the class of matrices involved must be restricted, or the conclusion must be confined to the components belonging to connected parts of the matrix.
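These conclusions can be checked on a small example. Here a connected positive matrix with uniform row sums (again invented for illustration) has its boundary eigenvalue on the circumference of every disk, and the associated eigenvector has components of equal absolute value:

```python
import numpy as np

# A connected positive matrix with uniform row sums (illustrative choice):
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

w, V = np.linalg.eig(A)
k = np.argmax(w.real)          # the boundary eigenvalue, here 4
v = V[:, k]

# Its eigenvector has components of equal absolute value...
assert np.allclose(np.abs(v), np.abs(v[0]))

# ...and the eigenvalue lies on the boundary of every Gerschgorin disk.
centers = np.diag(A)
radii = A.sum(axis=1) - centers
assert np.allclose(np.abs(w[k] - centers), radii)
```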
Bearing this restriction in mind, we conclude that the Gerschgorin bounds can only be realized for matrices having uniform row sums, and thus any nonuniformity is certain evidence that the limits cannot be reached. Nevertheless, we have only derived a necessary---not a sufficient---condition, and it may well happen that the eigenvalues of matrices with uniform row sums lie well within the Gerschgorin disks. Examining simple matrices with elements of mixed signs will quickly confirm this statement. The lack of sufficiency results from not yet having taken into account the liberalizing influence of the triangle inequality on Gerschgorin's inequalities; but if all matrix elements were positive, the uniformity of the row sums would immediately establish the vector with unit components as an eigenvector and the row sum as its eigenvalue.
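The lack of sufficiency is easy to see numerically. In the following sketch (a mixed-sign matrix chosen for the purpose), the row sums are uniform, so the vector with unit components is still an eigenvector, yet every eigenvalue sits strictly inside the disks, well short of the Gerschgorin limits:

```python
import numpy as np

# Uniform row sums (all equal to 1) but mixed signs; values chosen for illustration.
A = np.array([[ 1.0, 1.0, -1.0],
              [ 1.0, 1.0, -1.0],
              [-1.0, 1.0,  1.0]])

assert np.allclose(A.sum(axis=1), 1.0)          # uniform row sums
assert np.allclose(A @ np.ones(3), np.ones(3))  # so (1,1,1) is an eigenvector

# Yet every eigenvalue lies strictly inside the disks |z - 1| <= 2:
w = np.linalg.eigvals(A)
assert np.all(np.abs(w - 1.0) < 2.0 - 1e-9)
```

The mixed signs make each radius (a sum of absolute values) larger than anything the actual eigenvalue equation can deliver, which is exactly the "liberalizing influence of the triangle inequality" at work.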
Given that our arguments are insensitive to scalar multiplication of the whole matrix by a complex factor, we cannot postulate that we are working with a positive matrix. However, if we note that the sum of a collection of vectors of fixed lengths cannot attain its maximum length unless they are all parallel, we see that the only way that the natural eigenvalue equation and the one resulting from inserting absolute values can hold simultaneously is for all the terms to have a common phase.
We can begin by removing phase factors from the components of the eigenvector by performing a similarity transformation via diagonal matrices bearing the phase factors. The rows of the resulting matrix must then have constant phase, which is necessarily the phase of the eigenvalue. If the eigenvalue were not zero, this would be the phase of the whole matrix, and could be discarded by treating it as a constant factor.
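This two-stage phase removal can be sketched directly (the positive matrix and the phases below are arbitrary choices for illustration): we disguise a positive matrix by putting phases on the eigenvector components and an overall phase on the whole matrix, then recover it by the diagonal similarity transformation and a scalar division:

```python
import numpy as np

rng = np.random.default_rng(0)

# Start from a positive matrix with uniform row sums (boundary eigenvalue 4):
P = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

# Disguise it: random phases on the eigenvector components, plus an
# overall (eigenvalue) phase on the whole matrix.
phases = np.exp(1j * rng.uniform(0, 2 * np.pi, 3))
theta = np.exp(1j * 0.7)
D = np.diag(phases)
A = theta * (D @ P @ np.linalg.inv(D))   # similarity preserves the spectrum

# Undo: strip the eigenvector phases with D^{-1} A D, then divide out
# the common phase of the rows, which is the phase of the eigenvalue.
B = np.linalg.inv(D) @ A @ D / theta
assert np.allclose(B, P)                  # effectively a positive matrix
```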
Thus non-uniform row sums guarantee that the largest row sum is not an eigenvalue, but even uniformity will only make the common row sum an eigenvalue when the matrix is effectively a positive matrix.
Even though we did not assume that a possible boundary eigenvalue was the largest eigenvalue, that conclusion seems inescapable, inasmuch as we have seen that a connected matrix possessing a boundary eigenvalue is essentially positive, for which summing row elements with anything other than equal weights will necessarily produce a smaller sum and with it a smaller eigenvalue. Likewise, we seem to be led to the uniqueness of this eigenvalue since only one eigenvector can be associated with it.
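As a final numerical check (matrix again chosen for illustration): for a connected positive matrix with uniform row sums, the boundary eigenvalue is indeed the largest in modulus, and it is simple:

```python
import numpy as np

# A connected positive matrix with uniform row sums (all equal to 4):
A = np.array([[1.0, 2.0, 1.0],
              [2.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

w = np.linalg.eigvals(A)
idx = np.argsort(-np.abs(w))

# The boundary eigenvalue (the common row sum, 4) dominates in magnitude...
assert np.isclose(w[idx[0]].real, 4.0) and np.isclose(w[idx[0]].imag, 0.0)

# ...and is simple: the next eigenvalue is strictly smaller in modulus.
assert np.abs(w[idx[1]]) < 4.0 - 1e-9
```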