We had some further discussion of cellular automata this past weekend which
might just as well be of general interest. The reference of interest is:
Francisco Jim\'enez Morales,
"Evolving three-dimensional cellular automata to perform
a quasi-period-3 collective behavior task,"
Physical Review E 60 (4), 4934-4940 (1999).
This is a kind of automaton for which Wuensche was able to give a nice
demonstration with his compact but fast computer. In past years we have
looked at these Chat\'e-Manneville automata, because the probability (but
not the individual configurations) goes through a cycle of 3, in contrast
to what people expect from statistical mechanics and the law of large numbers.
It was purportedly proven that such things can't happen, but then they seem to.
That is why you see adjectives like quasi- or pseudo-.
The article has an abundance of acronyms, which gave an opportunity to think
about how to write a paper. There is a balance between what people are willing
to read and what they can remember. Acronyms are short, but when used in
abundance, create the problem of remembering which is which. On the other
hand, it is easy to become entangled in long descriptive phrases.
The solution is to declare, already in the introduction, that some common and
short terms are being used in a certain way in the article, and again if and
when it seems opportune. Mathematicians like to use lots of special symbols,
but even they can become all tangled up in what was x sub zero, zeta prime,
and so on, if too many are used all at once in an article.
Well, that is a quibble, although worth mentioning lest one be tempted to
follow suit. The novelty of the article was using a genetic algorithm to find
some rules meeting the requirement, which depends on Hanson, Crutchfield, and
a whole body of their work. That is one way of proceeding, although trying to
understand what lies behind the phenomenon is another. It turns out they had
found four rules, Hemmingsson's and three others.
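For orientation, here is a minimal sketch of what such a search looks like, in
Python, using a toy one-dimensional radius-2 setting of my own invention
rather than the article's three-dimensional task or its actual fitness
function: rule tables are 32-bit integers, and the fitness rewards a
cell-density series that repeats with period 3 but not period 1.

    import random

    N, T, POP, GENS = 64, 96, 20, 30   # lattice, time steps, population, generations
    random.seed(0)

    def step(row, rule):
        # One step of a binary radius-2 rule; the 5-cell neighborhood,
        # read as a 5-bit number, indexes into the 32-bit rule table.
        n = len(row)
        return [(rule >> sum(row[(i + j) % n] << (j + 2)
                             for j in range(-2, 3))) & 1
                for i in range(n)]

    def fitness(rule):
        # Reward density sequences with period 3 but not period 1
        # (a crude, noisy stand-in for the quasi-period-3 task).
        row = [random.randint(0, 1) for _ in range(N)]
        dens = []
        for _ in range(T):
            row = step(row, rule)
            dens.append(sum(row) / N)
        tail = dens[T // 2:]             # discard the transient
        p3 = sum(abs(tail[t + 3] - tail[t]) for t in range(len(tail) - 3))
        p1 = sum(abs(tail[t + 1] - tail[t]) for t in range(len(tail) - 1))
        return p1 - 3 * p3               # want p1 large, p3 small

    pop = [random.getrandbits(32) for _ in range(POP)]
    for g in range(GENS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]                     # truncation selection
        children = []
        for _ in range(POP - len(parents)):
            a, b = random.sample(parents, 2)
            mask = random.getrandbits(32)            # uniform crossover
            child = (a & mask) | (b & ~mask)
            child ^= 1 << random.randrange(32)       # point mutation
            children.append(child)
        pop = parents + children
    pop.sort(key=fitness, reverse=True)
    print('best rule table: %08x' % pop[0])

Nothing guarantees that this toy finds anything; the point is only the shape
of the procedure: a population of rule tables, a statistic extracted from
sample evolutions, and selection pressure toward the desired collective
behavior.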
What had slipped from my memory, at least, was whether these rules were for
von Neumann or for Moore neighborhoods, whether they were totalistic or not,
and similar details. In the past it has been proposed that if it could be shown
that a cellular automaton followed mean field theory (or some other scheme,
but supposing that it really followed it), a combination of Bernstein
polynomials could be constructed with any desired return map; in particular
one with a period of 3, as in iteration theory. The problem with this is that
such
rules tend to be totalistic, and totalistic rules follow their own customs, not
mean field theory. One way to adhere to mean field theory might be to use
von Neumann neighborhoods in spaces of high dimension.
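As a concrete sketch of that proposal (my own illustration, with the logistic
map chosen as the target): any continuous return map on [0,1] can be
approximated by a combination of Bernstein polynomials, whose coefficients
play the part of a hypothetical totalistic rule's response to a neighborhood
containing k live cells. Taking the logistic map at r = 3.83, which sits
inside a period-3 window, whether the 3-cycle survives the approximation at a
given degree can be settled numerically:

    from math import comb

    def bernstein_map(f, n):
        # Degree-n Bernstein approximation of f on [0,1]:
        #   B_n(f)(p) = sum_k f(k/n) C(n,k) p^k (1-p)^(n-k).
        # The coefficients f(k/n) are the weights a totalistic rule
        # would assign to a neighborhood with k live cells out of n.
        coeffs = [f(k / n) for k in range(n + 1)]
        def B(p):
            return sum(c * comb(n, k) * p**k * (1 - p)**(n - k)
                       for k, c in enumerate(coeffs))
        return B

    logistic = lambda p: 3.83 * p * (1 - p)  # period-3 window of the logistic map

    B = bernstein_map(logistic, 200)  # raise the degree until the 3-cycle survives
    p = 0.3
    for i in range(306):
        p = B(p)
        if i >= 300:                  # a period-3 orbit repeats every third line
            print(round(p, 4))

Whether any actual rule honors such a mean field curve is, as said above, the
sticking point.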
Another theme in the literature relates to the stability of surfaces, and
of the boundaries of liquid droplets; one variant of this is the Swiss-cheese
model, although it has never been particularly well defined. There was another
viewpoint, whose details have now somewhat escaped me, which had to do with
nilpotent or idempotent rules. There was a kind of reversal symmetry in rules
such as Hemmingsson's Rule 33, which could be manipulated to show a period 3.
Anyway, it might not be a bad idea to introduce all these ideas and their
relationships, supposing that it has been established that the phenomenon is
real and that it can be produced reliably. One thing which I liked in
Wuensche's demonstration was the way the three dimensional model degenerated
through a thin slice to a two dimensional rule which is almost, but not quite,
quasiperiodic.
Anyway, it is worth rereading these old articles after a lapse of time, to see
whether they offer a better perspective now that time and other interests have
intervened.
This information mechanics, or computational mechanics, of Crutchfield and
associates is something which we ought to try to understand, if only to be
able to make sense of all the subsequent articles which refer to it. It is not
quite the same as genetic algorithms, which are also worth knowing about. The
former worries about the movement of information in a medium, which could be
ballistic, as with gliders, or diffusive, as with rules like Rule 18 or Rule
54, and is somewhat illustrated by Wuensche's filtering as an attempt to
disregard backgrounds. But he isn't the only one.
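As an illustration of the filtering idea, here is a simplified frequency-based
variant (not necessarily Wuensche's exact procedure): evolve Rule 18 from a
random configuration, count the small blocks occurring in the space-time
pattern, and blank every cell opening one of the commonest blocks. What
survives is, roughly, the diffusing kinks between domains.

    from collections import Counter
    import random

    RULE, W, T, K = 18, 100, 50, 5     # rule, width, time steps, block size

    def step(row):
        # One step of an elementary rule on a cyclic row.
        n = len(row)
        return [(RULE >> (4 * row[i - 1] + 2 * row[i] + row[(i + 1) % n])) & 1
                for i in range(n)]

    random.seed(1)
    row = [random.randint(0, 1) for _ in range(W)]
    history = [row]
    for _ in range(T):
        row = step(row)
        history.append(row)

    # Tally every K-block in the space-time pattern (cyclic in space).
    counts = Counter(tuple(r[(i + j) % W] for j in range(K))
                     for r in history for i in range(W))
    common = {b for b, _ in counts.most_common(8)}   # the "background" blocks

    # Cells opening a common block print as dots; the rest stand out.
    for r in history:
        print(''.join('.' if tuple(r[(i + j) % W] for j in range(K)) in common
                      else ('#' if r[i] else ' ')
                      for i in range(W)))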
Lacking any general ideas in that regard, and having started to work through
the Rule 110 de Bruijn diagrams in case Wolfram's book justifies the effort,
I have spent several more days on that project. It is eating up megabytes, and
turns up a tidbit now and then. Playing around to get a feeling for tiling
problems, it is possible to try to stack triangles diagonally, which ought to
produce some light speed lattices, more or less. Some triangles just can't
be stacked without violating the abutting upper margin prohibition. But they
can be cushioned from one another by a T3, for example, and that does seem
to work. In ten generations, I've seen up to T10 and T11. But I haven't listed
all the 210 combinations of shift-period that the current NXLCAU21 can do.
It seems reasonable that you can't have big triangles in lattices with short
periods, but that isn't necessarily so. High superluminal velocities could
fit them in by giving them huge sidewise displacements. Another approach would
claim that huge triangles would exceed the area of a unit cell, but again:
the period determines the height, while the width is governed by the length
of loops in the de Bruijn diagram, and that is exponential in the generation
number.
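For anyone wanting to experiment without NXLCAU21, here is a minimal sketch of
the construction for an elementary rule: configurations satisfying f^p(x) = x
shifted by d correspond to cycles in the subdiagram whose edges are the
(2p+1)-blocks that evolve, over p generations, into the correctly shifted
center cell. (The restriction |d| <= p means this sketch does not reach the
superluminal shifts just mentioned.)

    from itertools import product

    RULE = 110

    def step(cells):
        # One synchronous step on a finite block; the result is two cells
        # shorter, since the end cells lose their context.
        return tuple((RULE >> (4 * a + 2 * b + c)) & 1
                     for a, b, c in zip(cells, cells[1:], cells[2:]))

    def debruijn_subdiagram(p, d):
        # Nodes are 2p-blocks; an edge is a (2p+1)-block surviving the
        # condition that its p-step evolution equals the cell d places
        # from its center.  Requires |d| <= p.
        edges = {}
        for block in product((0, 1), repeat=2 * p + 1):
            cells = block
            for _ in range(p):
                cells = step(cells)
            if cells[0] == block[p + d]:
                edges.setdefault(block[:-1], []).append(block[1:])
        return edges

    # Still lifes of Rule 110: period 1, shift 0.
    for src, dsts in sorted(debruijn_subdiagram(1, 0).items()):
        for dst in dsts:
            print(src, '->', dst)

Run at period 1, shift 0, the only cycle that survives is the self-loop at
(0,0), the zero lattice; an instance of the holes mentioned next.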
The de Bruijn chart has a lot of holes, where only the zero lattice works,
or only simple cycles are possible. Of course, there can be several of them
coexisting. Next in order of complexity are fuses, or other arrangements with
ideals. Finally there is the thing which can permit "gliders", which is to
have two cycles linked back and forth to each other. With Rule 110, it is
interesting that there are several combinations which, although they have huge
de Bruijn diagrams, consist of two simple cycles with quite long filaments
connecting them. Looking at a random (but still shift-periodic) evolution, the
effect shows up quite clearly as parallel stripes in the space-time diagram.
Remarkable what human vision can do for recognizing patterns. You'd hardly
suspect them from the mere diagram.
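Continuing that sketch, the classification can be partly automated: the
strongly connected components of the surviving subdiagram separate the
transients (fuses and other arrangements with ideals) from the cycles, and any
component containing a node with two internal out-edges necessarily carries
more than one cycle, which is exactly the linked-cycles situation that permits
gliders. A rough pass, reusing debruijn_subdiagram from above, with period 3
and shift 1 as arbitrary example parameters:

    import sys

    def tarjan_scc(edges):
        # Tarjan's algorithm: strongly connected components of the subdiagram.
        sys.setrecursionlimit(20000)
        index, low, onstack, stack, sccs = {}, {}, set(), [], []
        def visit(v):
            index[v] = low[v] = len(index)
            stack.append(v); onstack.add(v)
            for w in edges.get(v, ()):
                if w not in index:
                    visit(w)
                    low[v] = min(low[v], low[w])
                elif w in onstack:
                    low[v] = min(low[v], index[w])
            if low[v] == index[v]:       # v roots a component
                comp = set()
                while True:
                    w = stack.pop(); onstack.discard(w); comp.add(w)
                    if w == v:
                        break
                sccs.append(comp)
        for v in list(edges):
            if v not in index:
                visit(v)
        return sccs

    edges = debruijn_subdiagram(3, 1)
    for comp in tarjan_scc(edges):
        internal = {v: [w for w in edges.get(v, ()) if w in comp] for v in comp}
        if len(comp) == 1 and not internal[next(iter(comp))]:
            continue                     # transient node: no cycle through it
        branching = any(len(ws) > 1 for ws in internal.values())
        print(len(comp), 'nodes:',
              'linked cycles (glider-capable)' if branching
              else 'one simple cycle')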
Unfortunately carrying out all this analysis for Rule 110 raises the question
of whether it is typical of other rules, or if not, in what respect. Just to
get a feel for Rule 54, I worked out some instances, but each shift-period
combination takes a good part of an hour, what with waiting for the diagram,
editing it, making up a page with DRAW, and collecting samples of evolution
from each of the components. If the full treatment takes even half of the
210 hours, that is a fortnight of eight-hour workdays.
The Blue Books have something of this for Rule 22, but that was mostly done
with cycle diagrams, not the de Bruijn diagrams. And I think I only saved the
interesting results, not all the statistics. And there is still a gap and
density theorem to be proven, that every gap increases unless the configuration
belongs to a certain de Bruijn diagram. That came up while trying all possible
initial configurations, and finding that it wasn't worth waiting for the ones
where the gaps weren't very big.
Of course, what is really lacking is to take this seriously and dedicate some
time to writing a better program. The fact that in- and out- linkages are so
easily calculated means that several old ideas could be revived and put to
work. Although it would involve numerous double clicks, either the in-neighbors
or the out-neighbors could be brought right up next to the current node. I've
forgotten whether that isn't already in the subset diagram editor, at least in
one direction. So much for not having documented these programs more thoroughly
(or at all).
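The linkage calculation itself is what makes the idea cheap: in a binary
de Bruijn diagram the neighbors of a node are just its two one-symbol shifts,
so a sketch (with tuple-of-bits nodes as in the fragments above) amounts to:

    def out_neighbors(node):
        # Successors: drop the left symbol, append either new symbol.
        return [node[1:] + (s,) for s in (0, 1)]

    def in_neighbors(node):
        # Predecessors: drop the right symbol, prepend either new symbol.
        return [(s,) + node[:-1] for s in (0, 1)]

    print(out_neighbors((1, 0)))   # [(0, 0), (0, 1)]
    print(in_neighbors((1, 0)))    # [(0, 1), (1, 1)]

Only the edges surviving the shift-period condition would actually be
displayed, but fetching the candidates is this simple.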