Saturday, April 13
Cellular Automata Lecture
CINVESTAV
Professor: Harold V. McIntosh.




CELLULAR AUTOMATA (9)




We had some further discussion of cellular automata this past weekend which might just as well be of general interest. The reference of interest is:

Francisco Jiménez Morales, Evolving three-dimensional cellular automata to perform a quasi-period-3 collective behavior task, Physical Review E 60 (4), 4934-4940 (1999).

This is a kind of automaton for which Wuensche was able to give a nice demonstration with his compact but fast computer. In past years we have looked at these Chaté-Manneville automata, because the probability (but not the individual configurations) goes through a cycle of 3, in contrast to what people expect from statistical mechanics and the law of large numbers. It was purportedly proven that such things can't happen, but then they seem to. That is why you see adjectives like quasi- or pseudo-.

The article has an abundance of acronyms, which gave an opportunity to think about how to write a paper. There is a balance between what people are willing to read and what they can remember. Acronyms are short, but when used in abundance they create the problem of remembering which is which. On the other hand, it is easy to become entangled in long descriptive phrases.

The solution is to qualify some common and short terms as being used in a certain way in the article, already in the introduction, and again if and when it seems opportune. Mathematicians like to use lots of special symbols, but even they can become all tangled up in what was x sub zero, zeta prime, ... if they are all used at once in an article.

Well, that is a quibble, although worth mentioning lest one be tempted to follow suit. The novelty of the article was using a genetic algorithm to find some rules meeting the requirement, which depends on Hansen, Crutchfield, and a whole body of their work. It is one way of proceeding, although trying to understand what's behind the phenomenon is another. It turns out they had found four rules, Hemmingsson's and three others.

What had slipped from my memory, at least, was whether these rules were for von Neumann or for Moore neighborhoods, whether they were totalistic or not, and similar details. In the past it has been proposed that if it could be shown that a cellular automaton followed mean field theory (or some other scheme, but supposing that it really followed it), a combination of Bernstein polynomials could be constructed with any desired return map; in particular one with a period of 3, as in iteration theory. The problem with this is that such rules tend to be totalistic, and totalistic rules follow their own customs, not mean field theory. One way to adhere to mean field theory might be to use von Neumann neighborhoods in spaces of high dimension.
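
Just as a reminder of what that construction amounts to (my own notation, not taken from the article): writing p for the density of ones and n for the neighborhood size, the mean field return map is a combination of Bernstein polynomials,

    p_{t+1} \;=\; \sum_{k=0}^{n} b_k \binom{n}{k}\, p_t^{\,k}\,(1-p_t)^{\,n-k}, \qquad 0 \le b_k \le 1,

where b_k is the fraction of weight-k neighborhoods that the rule maps to one. A totalistic rule forces each b_k to be 0 or 1, which is exactly the constraint complained about above; a free choice of the b_k is what would let the combination approximate any desired return map, a period-3 one included.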

Another theme in the literature relates to the stability of surfaces, and of the boundaries of liquid droplets; one variant of this is the Swiss-cheese model, although it has never been particularly well defined. There was another viewpoint, whose details have now somewhat escaped me, which had to do with nilpotent or idempotent rules. There was a kind of reversal symmetry in rules such as Hemmingsson's Rule 33, which could be manipulated to show a period 3.

Anyway, it might not be a bad idea to introduce all these ideas and their relationships, supposing that it has been established that the phenomenon is real and that it can be produced reliably. One thing which I liked in Wuensche's demonstration was the way the three dimensional model degenerated, through a thin slice, to a two dimensional rule which is almost, but not quite, quasiperiodic.

Anyway, it is worth rereading these old articles after a lapse of time to see whether they offer a better perspective now that time and other interests have intervened.

This information mechanics or computational mechanics of Crutchfield and associates is something which we ought to try to understand, if only to be able to make sense of all the subsequent articles which refer to it. It is not quite the same as genetic algorithms, which are also worth knowing about. The former worries about information movement in a medium, which could be ballistic, as with gliders, or diffusive, as with rules like Rule 18 or Rule 54, and somewhat illustrated by Wuensche's filtering as an attempt to disregard backgrounds. But he isn't the only one.

Lacking any general ideas in that regard, and having started to work through the Rule 110 de Bruijn diagrams in case Wolfram's book justifies the effort, I have spent several more days on that project. It is eating up megabytes, and turns up a tidbit now and then. Playing around to get a feeling for tiling problems, it is possible to try to stack triangles diagonally, which ought to produce some light speed lattices, more or less. Some triangles just can't be stacked without violating the abutting upper margin prohibition. But they can be cushioned from one another by a T3, for example, and that does seem to work. In ten generations, I've seen up to T10 and T11. But I haven't listed all the 210 combinations of shift-period that the current NXLCAU21 can do.
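
For anyone wanting to experiment, here is a minimal sketch (my own illustration, not NXLCAU21) of the smallest case, the one-generation de Bruijn diagram of Rule 110: nodes are the four two-cell overlaps, links are the eight three-cell windows, and a link survives when the rule's value copies the cell appropriate to the shift. Cycles among the surviving links are the shift-periodic lattices.

    #include <stdio.h>

    /* Rule 110 lookup: index is 4a+2b+c for the window (a,b,c) */
    static const int rule110[8] = {0, 1, 1, 1, 0, 1, 1, 0};

    int main(void)
    {
        const char *label[3] = {"shift left", "still life", "shift right"};
        for (int s = 0; s < 3; s++) {
            printf("%s:\n", label[s]);
            for (int w = 0; w < 8; w++) {          /* the eight 3-cell windows */
                int a = (w >> 2) & 1, b = (w >> 1) & 1, c = w & 1;
                int target = (s == 0) ? c : (s == 1) ? b : a;
                if (rule110[w] == target)          /* link (ab) -> (bc) survives */
                    printf("  %d%d -> %d%d  (window %d%d%d)\n",
                           a, b, b, c, a, b, c);
            }
        }
        return 0;
    }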

It seems reasonable that you can't have big triangles in lattices with short periods, but that isn't necessarily so. High superluminal velocities could fit them in by giving them huge sidewise displacements. Another approach would claim that huge triangles would exceed the area of a unit cell, but again, while the period determines the height, the length is a function of the length of loops in the de Bruijn diagram, and that is exponential in the generation number.

The de Bruijn chart has a lot of holes, where only the zero lattice works, or only simple cycles are possible. Of course, there can be several of them coexisting. Next in order of complexity are fuses, or other arrangements with ideals. Finally there is the thing which can permit "gliders", which is to have two cycles linked back and forth to each other. With Rule 110, it is interesting that there are several combinations which, although they have huge de Bruijn diagrams, consist of two simple cycles with quite long filaments connecting them. Looking at a random (but still shift-periodic) evolution, the effect shows up quite clearly as parallel stripes in the space-time diagram. Remarkable what human vision can do for recognizing patterns. You'd hardly suspect them from the mere diagram.
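
To make the "two cycles joined by filaments" picture concrete, here is a small sketch (an illustration of the idea only, with a made-up adjacency matrix, not one of the existing programs) that computes reachability, marks the nodes lying on cycles, and calls everything that both follows one cycle and precedes another a connecting filament.

    #include <stdio.h>

    #define N 6   /* hypothetical diagram: two 2-cycles joined by 1 -> 2 -> 3 -> 4 */

    int main(void)
    {
        int adj[N][N] = {
            {0,1,0,0,0,0}, {1,0,1,0,0,0}, {0,0,0,1,0,0},
            {0,0,0,0,1,0}, {0,0,0,0,0,1}, {0,0,0,0,1,0},
        };
        int reach[N][N];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                reach[i][j] = adj[i][j];
        for (int k = 0; k < N; k++)              /* Warshall transitive closure */
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    if (reach[i][k] && reach[k][j]) reach[i][j] = 1;
        for (int i = 0; i < N; i++) {
            if (reach[i][i]) { printf("node %d lies on a cycle\n", i); continue; }
            int from = 0, to = 0;                /* does a cycle precede and follow it? */
            for (int j = 0; j < N; j++) {
                if (reach[j][j] && reach[j][i]) from = 1;
                if (reach[j][j] && reach[i][j]) to = 1;
            }
            printf("node %d is %s\n", i, (from && to) ? "on a connecting filament"
                                                      : "on a fuse or a dead end");
        }
        return 0;
    }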

Unfortunately carrying out all this analysis for Rule 110 raises the question of whether it is typical of other rules, or if not, in what respect. Just to get a feel for Rule 54 I worked out some instances, but each shift-period combination takes a good part of an hour, what with waiting for the diagram, editing it, making up a page with DRAW, and collecting samples of evolution from each of the components. If the full treatment takes even half of the 210 hours, that is a fortnight of eight-hour workdays.

The Blue Books have something of this for Rule 22, but that was mostly done with cycle diagrams, not the de Bruijn diagrams. And I think I only saved the interesting results, not all the statistics. And there is still a gap and density theorem to be proven, that every gap increases unless the configuration belongs to a certain de Bruijn diagram. That came up while trying all possible initial configurations, and finding that it wasn't worth waiting for the ones where the gaps weren't very big.

Of course, what is really lacking is to take this seriously and dedicate some time to writing a better program. The fact that in- and out-linkages are so easily calculated means that several old ideas could be revived and put to work. Although it would involve numerous double clicks, either the in-neighbors or the out-neighbors could be brought right up next to the current node. I've forgotten whether that isn't already in the subset diagram editor, at least in one direction. So much for not having documented these programs more thoroughly (or at all).
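
The remark about linkages being easy to calculate amounts to a pair of one-liners. A minimal sketch (the packing into integers and the names are my own, not those of any existing program), with node words stored as n-digit base-k integers:

    #include <stdio.h>

    /* the k out-neighbors of node v: drop the leading digit, append s */
    int out_neighbor(int v, int s, int k, int kn)   /* kn = k to the n */
    {
        return (k * v) % kn + s;
    }

    /* the k in-neighbors of node v: drop the trailing digit, prepend s */
    int in_neighbor(int v, int s, int k, int kn)
    {
        return v / k + s * (kn / k);
    }

    int main(void)   /* the four two-cell nodes of a binary de Bruijn diagram */
    {
        for (int v = 0; v < 4; v++)
            for (int s = 0; s < 2; s++)
                printf("node %d: out %d, in %d\n",
                       v, out_neighbor(v, s, 2, 4), in_neighbor(v, s, 2, 4));
        return 0;
    }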


CELLULAR AUTOMATA (10)


Thanks for the samples of solution pairs in SERO. Once one is familiar with SERO, it is simply a matter of putting in a second array. Foreseeing the possibility of running several trajectories simultaneously, it might be just as well to index the trajectory array and make it two dimensional, besides the fact that each one holds two-dimensional vectors, of course.
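
In case it helps, a minimal sketch of the storage that suggestion amounts to (the names and dimensions are mine, not SERO's):

    /* several trajectories at once, each a sequence of two-component
       solution vectors; the dimensions are purely illustrative        */
    #define NTRAJ   8      /* trajectories run simultaneously        */
    #define NSTEPS  1000   /* integration steps kept per trajectory  */

    double trajectory[NTRAJ][NSTEPS][2];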

Putting a second control for the second array will be harder to generalize, of course, but the full control is probably not necessary anyway. And besides, it has the form it does in SERO because of laziness, of not finding out how to use the triangles which can be inserted into the sliders in the window viewing controls.

But anyway, that adjustment is there to get fine control of the energy level, which is important for locating bound states. But for wave packets, the side solutions only have to have a certain spacing, and for the moment it is not too critical that every member of the packet has its own fine tuning. Right now the interest is in beats, just to see them, and not to get a very precise description of the envelope of the wave packet.

Also, as the sample I got showed, there is a difference between wave packets for bound states and wave packets for free waves. In the latter, the packet forms from nearby energies, whereas for bound states, they have to be accepted as they are without variation. So the only thing to do is choose their coefficients, which can be done through an array rather than by mouse.
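
Choosing the coefficients through an array comes down to the superposition psi(x,t) = sum over n of c_n phi_n(x) exp(-i E_n t), with hbar = 1. A minimal sketch of evaluating its real part on a grid (my own illustration, nothing to do with SERO's actual internals):

    #include <math.h>

    #define NSTATES 4     /* bound states in the packet (illustrative) */
    #define NX      512   /* spatial grid points        (illustrative) */

    /* phi[n][i]: n-th bound state on the grid, E[n]: its energy,
       c[n]: the packet coefficients chosen through an array      */
    void packet_real_part(double phi[NSTATES][NX], double E[NSTATES],
                          double c[NSTATES], double t, double out[NX])
    {
        for (int i = 0; i < NX; i++) {
            out[i] = 0.0;
            for (int n = 0; n < NSTATES; n++)
                out[i] += c[n] * phi[n][i] * cos(E[n] * t);
        }
    }

The beats one wants to see come from the difference frequencies E_m - E_n between the chosen states.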

Of course, it is the Dirac wave packets, even for a free particle with zero potential, that are interesting. That variant doesn't exist in SERO as it stands, but is very easy to insert using the page of potential definitions. It is just a matter of adjusting the matrix to include the mass, which is a global variable and not transmitted through arguments. So the only thing which has to be changed is the coding for the potential; one could even take the code for the Dirac Harmonic Oscillator and remove the x squared potential term.
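
For reference, the adjustment being described lives in the coupled first-order form of the one-dimensional Dirac equation, which (up to sign conventions, and in units with hbar = c = 1; this is the standard form, not a transcription of SERO's code) reads

    \frac{d}{dx}\begin{pmatrix}\psi_1\\ \psi_2\end{pmatrix}
      = \begin{pmatrix} 0 & E - V(x) + m \\ -\bigl(E - V(x) - m\bigr) & 0 \end{pmatrix}
        \begin{pmatrix}\psi_1\\ \psi_2\end{pmatrix}.

Setting V(x) to zero still leaves the mass sitting in the matrix, which is why only the coding of the potential needs to change.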

If all these changes are possible in time for the class on Saturday, we can hope that we can see zitterbewegung and that everybody can look at it.

Work on de Bruijn pages has yielded a good collection of pages, which is naturally taking up megabytes on the disk. But generations 7, 8, 9, 10 are there, and what's missing in the first six can be filled in quickly (don't I wish!). However there is something strange in nxten.c, which does the ten generation diagrams. Namely, the global variable which I have been using to suppress the zero-zero link, and which works for other generations, doesn't deliver its value. I spent a couple of hours or so on this, but without finding out what might be going wrong. But the linkage array needs 2 megabytes, and I am wondering whether the compiler is handling that gracefully.
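
One thing worth trying in nxten.c, supposing the linkage array is at present declared statically or on the stack (an assumption on my part), is to take it from the heap and check the result, so that at least a silent failure would announce itself:

    #include <stdio.h>
    #include <stdlib.h>

    #define LINKSIZE (2L * 1024 * 1024)   /* roughly the 2 megabytes mentioned above */

    unsigned char *linkage;               /* in place of a huge static array */

    int init_linkage(void)
    {
        linkage = calloc(LINKSIZE, 1);    /* zeroed, so stale links cannot lurk */
        if (linkage == NULL) {
            fprintf(stderr, "nxten: could not allocate the linkage array\n");
            return 0;
        }
        return 1;
    }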

With so many de Bruijn diagrams to look at, it is possible to wonder about the tiling problem again. I made up some sheets in DRAW to play with building up surroundings consisting of only triangles up to a certain size. It starts off well enough; there only seems to be one way to make a covering out of T1's, and as the Rule 110 ... document already comments, there isn't much choice for the smallest triangles. From then on it is the subject of a search program, and it is a question of who will want to write it.

Waiting, and recent discussions, give some better idea of how to proceed; one idea is to consider only triangles up to a limit, another would be to fill in the whole circumference of the initial triangle first, and then to work only with the circumference of *that*, and so on until either a contradiction, a periodicity, or the end of patience occurs.
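
In the spirit of that program, a skeleton of the layer-by-layer search might run as below; only the control structure is meant seriously, and every tiling-specific routine is a placeholder with a dummy body so that the sketch compiles (the names are hypothetical, not from any existing program).

    #include <stdio.h>

    enum { MAXLAYER = 4, MAXTRI = 6 };                 /* patience and size limits */

    static int  fits(int layer, int pos, int tri)      /* placeholder test         */
    { (void)layer; (void)pos; return tri == 1; }
    static void place(int layer, int pos, int tri)     { (void)layer; (void)pos; (void)tri; }
    static void unplace(int layer, int pos, int tri)   { (void)layer; (void)pos; (void)tri; }
    static int  frontier_size(int layer)               /* cells ringing the layer  */
    { return 3 * (layer + 1); }

    /* fill one ring of the surrounding, then recurse outward to the next ring */
    static int grow(int layer, int pos)
    {
        if (layer == MAXLAYER) return 1;               /* end of patience           */
        if (pos == frontier_size(layer))
            return grow(layer + 1, 0);                 /* ring closed, move outward */
        for (int tri = 1; tri <= MAXTRI; tri++) {      /* triangles up to a limit   */
            if (!fits(layer, pos, tri)) continue;
            place(layer, pos, tri);
            if (grow(layer, pos + 1)) return 1;        /* keep this placement       */
            unplace(layer, pos, tri);                  /* contradiction: back up    */
        }
        return 0;                                      /* nothing fits at this cell */
    }

    int main(void)
    {
        printf(grow(0, 0) ? "covering grew to the patience limit\n"
                          : "contradiction before the limit\n");
        return 0;
    }

Detecting a periodicity is not in the sketch; the natural place for it would be where a ring closes.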



- hvm
