Saturday, February 02
Cellular Automata Lecture
Professor: Harold V. McIntosh.


Embedding Rule 110 in a five-cell neighborhood

One of the items in the discussion on Saturday afternoon was whether new things were to be learned about Rule 110, and one suggestion was to create some mutants by embedding it in a five-cell neighborhood. However, since we haven't really discussed cellular automata and weren't planning to emphasize them, it might be a good idea to quickly run through their history.

Automata as such existed even in antiquity; think of Hero's steam engine, which may actually have been used to lift fuel in the lighthouse of Alexandria. The idea is not just to have machines, but machines capable of performing a variety of actions and even of deciding on their sequence. At one time some ingenious mechanisms were constructed; one might think of Swiss music boxes, and even Babbage's mechanical calculator. His Analytical Engine, which was to be programmable, was never constructed, and the whole series of projects is as interesting a lesson in the management of ambitious projects as any of its technical accomplishments.

With electrical circuitry, in the form of relays and later vacuum tubes, it finally became possible to create large scale calculators and to think of computers. The difference is that calculators perform individual operations whereas computers can be programmed and decide their own sequences. John von Neumann got interested in the extent to which computers could really act like living organisms, which can reproduce, and particularly like humans, who can make calculations and think about them. He finally settled on cellular automata, a purely mathematical construct, after deciding that physical models would be too cumbersome. It helps to recall that this was in an era in which people knew about genes and chromosomes, and even DNA, but Watson and Crick were still ten or fifteen years from deciphering the molecular structure of DNA.

So von Neumann worked on self reproducing cellular automata, giving lectures in various places and on various occasions, which set off a wave of interest in the topic and produced, among other things, Moore's Garden of Eden theorem.

After a lapse of about fifteen years, John Conway was reviewing the field and decided to see what could be done with a binary automaton. Von Neumann's was two dimensional with a five-cell neighborhood and 29 states (4 x 7 + 1: seven operations such as reading, writing, and moving, each in four directions, plus a quiescent state). Conway found an interesting rule in a nine-cell neighborhood which was publicized in Scientific American and set off a new burst of activity which ended up with a self-reproducing configuration and numerous interesting details. For reasons having probably to do with the spirit of the times (little green trisexual microbes) his invention was called "The Game of Life," or simply "Life."

Of course, neither von Neumann's nor Conway's automata were ever actually constructed, being extraordinarily large and cumbersome, but the details were sufficiently clear and convincing that their operation is generally accepted. Their studies both depended on selecting one particular rule set for the automaton and following out the consequences in minute detail. The rule was chosen among various possibilities to do the job (von Neumann) or for its intrinsic elegance (Conway), and thereafter no attempt was made to change the rule. Little, anyway; there were some ``Alien Life'' variants.

In contrast, another fifteen years later, Stephen Wolfram had greater access to a computer than had previously been the custom, and set out to try all the possibilities, albeit with smaller neighborhoods and in one dimension. One result of this activity was to decide that there were two kinds of rules --- those for which nothing happened, and those for which everything happened. But he may have been influenced by some current fashions in differential equations and dynamical systems theory, so he postulated four classes of automata. Class I, fixed points; Class II, limit cycles; Class III, chaos; and Class IV, ``islands of chaos in a sea of tranquility.'' Nowadays we combine Classes I and II, and consider Class IV as a boundary or transition between this combined class and Class III.

Just to be precocious, he also decided that the automata of Class IV would be ``capable of Universal Computation.'' And then he went on to include ``Birds singing in the trees, the wind sighing in the boughs, brooks burbling on their way to the sea, ..., all these are performing universal computations.''

But for all the hoopla, Rule 110 deserves to be taken seriously, and is the subject of a story of its own. It should be mentioned that the nomenclature {Rule 110} is another invention of Wolfram's: make a list of the images of all the neighborhoods in order, then treat the list as a binary number and convert it to decimal. It works for binary automata with three neighbors and a few others, but eventually any description of a rule becomes hard, nay, impossible, to implement with short words or expressions.
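
The scheme is easy to make concrete. Here is a short sketch in Python (my illustration, not anything of Wolfram's) which lists the images of the eight neighborhoods of Rule 110 in descending order and reads the list off as a binary number:

```python
# Wolfram's rule-numbering scheme for binary, three-neighbor automata:
# list the images of the neighborhoods 111, 110, ..., 000 in order,
# then read the list as a binary number and convert to decimal.

# Rule 110's images, indexed by the (left, center, right) neighborhood.
images = {(1,1,1): 0, (1,1,0): 1, (1,0,1): 1, (1,0,0): 0,
          (0,1,1): 1, (0,1,0): 1, (0,0,1): 1, (0,0,0): 0}

bits = ''.join(str(images[(a, b, c)])
               for a in (1, 0) for b in (1, 0) for c in (1, 0))
print(bits, '->', int(bits, 2))   # 01101110 -> 110
```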

Cellular automata are subject to a phenomenon called shift-periodicity, which makes them a special object of study. Automata in general are objects with states, interconnected in ways which could be called neighborhoods. But there need be no uniformity, either in the number of states or in the connections. Such regularity, when present, and especially when spread out over a crystallographic lattice, splits the behavioral studies into local parts, which can then be copied globally. Shift-periodicity results when the configuration of the whole automaton repeats after a certain time, although it may reappear at another place in the lattice, which is the shift. If a small part of the configuration is observed to move that way, it is called a glider. The name even applies to moving pieces where the rest of the configuration changes less noticeably, but their detection is usually possible in the simpler context.
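
On a cyclic lattice, shift-periodicity can be hunted by brute force. The following sketch (my own illustration) evolves every width-14 ring under Rule 110 for seven steps and collects the configurations that return as a rotation of themselves; the period-7 ether background mentioned later in these notes is among them:

```python
# Search width-14 rings for Rule 110 configurations that are
# shift-periodic with period 7: after 7 steps the whole ring
# reappears, possibly rotated (the rotation being the shift).
RULE = 110
W, T = 14, 7

def step(cells):
    n = len(cells)
    return [(RULE >> (cells[(i-1) % n]*4 + cells[i]*2 + cells[(i+1) % n])) & 1
            for i in range(n)]

shift_periodic = []
for x in range(2**W):
    cells = [(x >> i) & 1 for i in range(W)]
    if sum(cells) in (0, W):
        continue                        # skip the uniform rings
    state = cells
    for _ in range(T):
        state = step(state)
    for s in range(W):                  # is the result a rotation?
        if state == cells[s:] + cells[:s]:
            shift_periodic.append((x, s))
            break

print(len(shift_periodic), "rings return (possibly shifted) after 7 steps")
```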

In fact, it seems that Conway coined the term {glider} from crystallographic glide planes in his excitement over observing them for the first time in the rule he was formulating.

Gliders are the most visible aspect of Class IV automata, and of course gliders can collide with one another, to either disappear, cross over each other, or initiate some other consequences. When collisions are clean enough it is possible to think about computing, as for example in the two-dimensional automaton called WireWorld. But having gliders is a long way from concluding that computing is going on.

The reason that this particular rule, Rule 110, has aroused such interest is that it is about the simplest automaton meeting the requirements for computation, if in fact it does. First, it is a binary rule; second, it has three neighbors. A one-neighbor automaton is trivial, whereas two neighbors take their rules from the Boolean functions of two variables, whose behavior is thoroughly familiar. Third, among the three-neighbor automata, it is one which has a large number of recognizable gliders.

Actually it has gliders of two kinds. The general gliders do not seem to be especially noteworthy, but one set moves relative to a background of small triangles in a way which is complicated yet holds out hope of being understood. There are between eight and twelve gliders, some of which exist in alternate forms. They have periods in the tens, twenties, and thirties, which means that it takes a reasonable time for them to go through their cycles. They move at different velocities, but well under light velocity, which means that their shifts are around half as far as their periods, but still an appreciable distance. One family is static with period seven, which coincides with the period of the background. The background has a 14 x 7 unit cell, which is noticeably large, although the background triangle fits into a 5 x 5 square.

The triangles are, in fact, a defining feature of Rule 110. Altering the rule in the sense of Wuensche's mutations always produces a defect in the triangles, which are right isosceles triangles with the right angle at top left. If it were not for a supplementary condition of never aligning two top margins and allowing them to touch, Rule 110 would be an exercise in tiling the plane (or at least a half-plane) with integer triangles on a square grid.

Another characteristic of Rule 110 is that it has a high membrane and macrocell index, meaning that all but one of the rules respect the spine of the triangle, that is, its left edge. The exception, of course, is 111 -> 0, needed in defining the top edge. So the only way to interrupt a vertical column of 1's is to place 1's on both sides of it. Were it not for this exception, a vertical line would be a membrane, and all evolution would be confined to its interior. So it could be said that we have a semipermeable membrane, and the macrocells never get high enough to be well defined.
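
The membrane property can be read directly off the rule table; a minimal check (mine), assuming the usual Wolfram ordering of neighborhoods:

```python
# Under Rule 110, a cell holding 1 keeps its 1 in every neighborhood
# except 111 -- so a vertical column of 1's (a spine) can only be
# interrupted by placing 1's on both sides of it.
RULE = 110

def image(l, c, r):
    return (RULE >> (l*4 + c*2 + r)) & 1

survivors = [(l, r) for l in (0, 1) for r in (0, 1) if image(l, 1, r) == 1]
print(survivors)   # every (l, r) pair except (1, 1)
```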

Of course, one might suspect that further tampering might degrade the tiling, but it is easy enough to do, and is one of the first new suggestions that we have seen in quite a while. And there is one interesting artifact: we can have something akin to 30 degree triangles. Or more strictly, they are still 45 degree triangles, but with the hypotenuse on top rather than on the right.

When Rule 110 is simply embedded into a (2,2) automaton, by ignoring the outer cells to the left and to the right, the result is Rule 3CFC3CFC (expressed in hexadecimal which is shorter and more convenient than decimal). There are 32 rules at Hamming Distance 1 from this base rule, whose basins are one of the things which Wuensche's program computes; but it also shows sample evolutions for them and other things as well.
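
The hexadecimal name is easy to reproduce. Here is a sketch (my own check, not NXLCAU22's code) that applies Rule 110 to the middle three cells of each five-cell neighborhood and packs the 32 images into one rule number:

```python
# Embed Rule 110 in a five-cell neighborhood by ignoring the two
# outer cells: the image of (a,b,c,d,e) is Rule 110's image of (b,c,d).
RULE110 = 110

table = 0
for n in range(32):             # n encodes cells a..e, a most significant
    middle = (n >> 1) & 7       # the bits for b, c, d
    bit = (RULE110 >> middle) & 1
    table |= bit << n
print(hex(table))               # 0x3cfc3cfc
```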

No matter whether these mutants are useful rules, examining them can increase one's understanding of Rule 110 itself. And who knows? Maybe there is something interesting there.

The mutations proceed by selecting a neighborhood and flipping the action of Rule 110, classified according to the margins of the original neighborhood within the extended neighborhood. This is just one of many ways of introducing a flip.
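
Generating the 32 mutants is then just a matter of flipping one bit of the embedded table at a time; a sketch (mine) printing the hexadecimal names that appear below:

```python
# The 32 Hamming-distance-1 mutants of the embedded rule 0x3CFC3CFC:
# flip the image of one five-cell neighborhood at a time.
BASE = 0x3CFC3CFC

mutants = [BASE ^ (1 << n) for n in range(32)]
for n, m in enumerate(mutants):
    # five-cell neighborhood, its new image, and the mutant's hex name
    print(f"{n:05b} -> {(m >> n) & 1} : {m:08X}")
```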

I. Neighborhood 000 -> 0, which is responsible for conserving the interior of triangles. Flipping it essentially destroys large triangles.

3CFC3CFD wherein 00000 -> 1. No large triangles are possible and there is a general rightward drift indicating a high shift index. T1, T2, T3, T4 still behave normally.

3CFC3CFE wherein 00001 -> 1. Right margins move left 2 cells rather than 1, turning large triangles into 30 degree triangles and closing them in fewer generations. Well defined gliders and glider collisions.

3CFD3CFC wherein 10000 -> 1. Triangles are shortened from the left and furthermore lose the vertical spine.

3CFE3CFC wherein 10001 -> 1. The bottoms of T3's and larger lose the next to last line, which in particular eliminates T3's as Rule 110 knows them, and so disrupts the ether background. A T1 lattice seems to take its place.

II. Neighborhood 001 -> 1, which is responsible for left expansivity and therefore the hypotenuse of Rule 110's triangles. Flipping it will tend to stretch triangles downwards.

3CFC3CF8 wherein 00010 -> 0. Setting in a 1 creates a semipermeable membrane in Rule 110, so this mutation shortens large T's but jams in a new triangle which depends on cells to the right.

3CFC3CF4 wherein 00011 -> 0. The bottoms drop out of large T's, sometimes for quite a distance.

3CF83CFC wherein 10010 -> 0. This mutation only affects places where one triangle is nestled under the hypotenuse of another, by creating a two-column ghost.

3CF43CFC wherein 10011 -> 0. Another bottom-dropper, to which the etheric T3's are unconditionally susceptible, as are the rest.

III. Neighborhood 010 -> 1, which lets T's snuggle up to the spine of another triangle from the left, or be snuggled up to from the right.

3CFC3CEC wherein 00100 -> 0. Cancels the spine of T2 or greater; makes ghosts from upward staggered T's.

3CFC3CDC wherein 00101 -> 0. Mostly seems to create an offset diagonal strip for certain T combinations.

3CEC3CFC wherein 10100 -> 0. Cancels the spine for some snuggles.

3CDC3CFC wherein 10101 -> 0. A rare combination which only interferes with closing the bottoms of some T's.

IV. Neighborhood 011 -> 1, which ensures a two cell thickness to hypotenuses.

3CFC3CBC wherein 00110 -> 0. Permits thin hypotenuses but still keeps the vertex cell where some T contacts the hypotenuse.

3CFC3C7C wherein 00111 -> 0. Permits thin hypotenuses but drops the vertex cell where some T contacts the hypotenuse.

3CBC3CFC wherein 10110 -> 0. Appears to favor a T1 background in which there are light-velocity gliders ((2,2) light velocity is twice as fast as (2,1) light velocity) and half-light-velocity gliders, and little else.

3C7C3CFC wherein 10111 -> 0. Permits strings of small T's to form a large T, and thus gives a field with more large triangles than customary in Rule 110.

V. Neighborhood 100 -> 0, which allows a triangle to close down to a point.

3CFC3DFC wherein 01000 -> 1. The altitude of the right triangle moves right, creating a right triangle with vertex at the bottom.

3CFC3EFC wherein 01001 -> 1. Clips the bottom of some T's.

3DFC3CFC wherein 11000 -> 1. The altitude of a triangle drifts right.

3EFC3CFC wherein 11001 -> 1. Clips triangles, but the effect seems to be further reaching, leading to a high shift index and small triangles.

VI. Neighborhood 101 -> 1, which closes a triangle at its point.

3CFC38FC wherein 01010 -> 0. The rule is inoperative, given that this is a Garden of Eden sequence in Rule 110. Triangles get closed or left dripping by other rules.

3CFC34FC wherein 01011 -> 0. This will keep a triangle from closing, but it seems to inhibit large triangles in general, probably because this sequence would normally generate the top of a larger triangle.

38FC3CFC wherein 11010 -> 0. Prolongs bottoms but probably also breaks up tops of triangles. A unique background makes an appearance.

34FC3CFC wherein 11011 -> 0. Drops some bottoms, facilitates some T's.

VII. Neighborhood 110 -> 1, which continues a spine.

3CFC2CFC whereby 01100 -> 0. Breaks spine leaving expansive 1.

3CFC1CFC whereby 01101 -> 0. No T can abut another; degenerate rule.

2CFC3CFC whereby 11100 -> 0. Apparently degenerates into shifting.

1CFC3CFC whereby 11101 -> 0. Only some T's fail to connect, leaving many diagonal streamers.

VIII. Neighborhood 111 -> 0, which creates the empty space under the top edge of a triangle.

3CFC7CFC wherein 01110 -> 1. This would prevent the formation of T1's, which alternate along the diagonal in Rule 110, and participate in many triangle constructions.

3CFCBCFC wherein 01111 -> 1. Foreshortens triangles.

7CFC3CFC wherein 11110 -> 1. Similar effect from the other end.

BCFC3CFC wherein 11111 -> 1, effectively preventing the formation of large triangles.

Well, it took the major part of a day or two just to run out samples of all this. NXLCAU22 is notably slower than NXLCAU21, especially in generating screens of evolution. So trying to get a lot of cases on one page will require running things overnight, or on a machine nobody is using, or whatever.

Also, the analyses given to Rule 110 in terms of the de Bruijn diagram and the subset diagram are so slow that only a couple of generations are possible, but we need at least 10. Of course there is no reason to bother if the rule doesn't look promising in the first place. If one does look promising, it would be interesting to know if the prohibition on large T's holds, whether it has the same glider structure, and so on.

The cycle, or basin, diagrams run faster, more or less as for the three-neighbor automata, but they haven't been especially useful in analyzing Rule 110, and probably won't be for the mutants either.

- hvm


Solutions of simple wave equations

The simplest wave equations are partial differential equations, equating derivatives of one function of two variables to one another. One way to work with partial differential equations is to try separation of variables, in which f(x,y) = X(x)Y(y). In German, such a trial form is called an Ansatz, and with a little algebra and luck, you get a function of one variable alone equal to a function of the other variable alone. That is only possible if both are constant, and the common value is called the constant of separation. In the case of wave equations, it is a frequency, but for the Dirac or Schroedinger equations it is called an energy, due to its meaning in the applications.
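
Written out for the simplest case, the separation step looks like this (a standard computation, added here for concreteness, taking u_xx = u_tt as the wave equation):

```latex
% Ansatz u(x,t) = X(x) T(t) in the wave equation u_xx = u_tt:
%
%     X''(x) T(t) = X(x) T''(t)
%
% Dividing through by X(x) T(t) separates the variables:
\[
  \frac{X''(x)}{X(x)} \;=\; \frac{T''(t)}{T(t)} \;=\; -k^{2},
\]
% a function of x alone equal to a function of t alone, hence a
% constant, here written -k^2.  Each factor then satisfies a
% constant-coefficient equation, so the product solutions are
% complex exponentials:
\[
  u(x,t) \;=\; e^{\pm i k x}\, e^{\pm i k t}.
\]
```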

When distance and time are the variables and there is no explicit dependence on the time, some time derivative of the time function is proportional to the function itself, so the solution is a complex exponential. In the simplest equations the same thing happens for the space variable, which leaves a product of two exponentials, one in time with an energy in the exponent, the other in space with a wave number. The two names mean the same thing, but different words are used to distinguish between them when it comes to solving the rest of the equation, or more complicated equations.

If the equation is linear, any linear combination of solutions is a solution, so it is worth looking for a basis and thinking in terms of linear algebra with eigenvalues and eigenvectors. Due to the simplicity of the time part, this usually means finding a basis for the space part, multiplying by the time exponential, and summing (or integrating, as the case may be).

In the meantime, it is worth noting that f(x,t) = phi(x-t) will solve the equation where the two derivatives are equal, and phi(alpha x - beta t) will do the job when there are some constant coefficients. When they are not constant, pretending that they are is one way to get an approximate solution. The point of this observation is that solutions can be constant along lines of a fixed slope, which leads to the idea of things moving at constant velocity, even though the thing that actually moves (the envelope of the solution) can be fairly complicated.
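
The claim is quick to verify symbolically; a sketch using sympy (my check, with alpha and beta standing for the constant coefficients):

```python
# Check that u(x,t) = phi(alpha*x - beta*t) is constant along lines
# of slope beta/alpha: it satisfies beta*u_x + alpha*u_t = 0, and
# hence also the wave equation u_tt = (beta/alpha)^2 * u_xx.
import sympy as sp

x, t, alpha, beta = sp.symbols('x t alpha beta')
phi = sp.Function('phi')
u = phi(alpha*x - beta*t)

transport = beta*sp.diff(u, x) + alpha*sp.diff(u, t)
wave = sp.diff(u, t, 2) - (beta/alpha)**2 * sp.diff(u, x, 2)
print(sp.simplify(transport), sp.simplify(wave))   # 0 0
```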

When coefficients are not constant, motion occurs at different velocities for different parts of the solution, and that is what is called dispersion; the connection between frequency and wave number is called a dispersion relation (or, for light, an index of refraction; think of prisms, where different colors move at different velocities and rays get bent by the prism).

With electrical, acoustical, or optical waves, the mathematics has all become relatively familiar, so that there are not only methods of solution, but ways of talking about them.

For the Schroedinger equation, the only real difference arises in working with probabilities, and even then the problem is not so much with using them as with understanding why; but that lies with the philosophical foundations, not with applications. Even then, it would not be so bad if solutions were normalizable. For bound states they are, and that can be used as a sort of boundary condition. However, there is a whole theory of linear ordinary differential equations which lets you treat everything from the same systematic theory. Trouble is, neither quantum mechanics nor differential equations are often taught that way. Among other things complicating the applications is the fact that higher order differential equations can be reduced to lower order differential equations by introducing new variables and working with systems of equations. The easiest substitution is to make the higher derivatives into new variables and end up with their defining equations and the original equation forming the system, in which only first derivatives appear. The process can also be reversed; by calculating lots of derivatives, everything can sometimes be crammed into one single equation of high order.

What happens is that the linear algebra approach gives a basis for solutions via the Sturm-Liouville theory, but the basis is for the vectors of solutions together with their derivatives, not for individual solutions. When a higher order system was reduced, the derivatives ended up as part of the solution on account of the differential equation, not because you get the solution and then take derivatives. In fact, numerically the derivatives could turn out slightly wrong because of roundoff errors. In that case, a basis of solutions, or a vector basis of solutions together with their derivatives, gives the same result, so there is no need to worry about the distinction.

Because of the way Dirac derived his equation, the starting point is a pair of first order equations rather than a single second order equation, and the two components are not derivatives of one another. You need the square root of the sum of their squares, not one of them separately, to fit the initial equation when solving the space-time equation by separation of variables. There are quite a few ways to pick components, all giving the same length vector, and that is what is behind the paper of Wigner and Newton, as well as the paper of Foldy and Wouthuysen. However, they don't quite see it that way, and they also work with 3 + 1 dimensions, rather than 1 + 1, so spin gets involved as well.

To read these papers, it would be best to begin with simple wave packets, studying the formation of a packet from the interference of just a few waves, and getting a clear idea of the difference between phase velocity (the f(x - t), above) and group velocity (which is how the combination behaves). The page of Malincrodt has nice examples in Java, and they were on demonstration at the meeting on Saturday. Because of the importance of Wuensche's visit, it may be that few people saw them, but they should be on exhibition around the CIEA; in principle everyone could download them for themselves. It would be worth getting the source code, if possible, because we will want to work with ever more complicated combinations as time goes on.
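
Even before reaching for the applets, the two velocities can be computed directly from a dispersion relation; a sketch (my own example) using the relativistic-looking relation omega(k) = sqrt(1 + k^2), in units where the constants are 1:

```python
# Phase velocity v_p = omega/k versus group velocity v_g = d(omega)/dk
# for the dispersion relation omega(k) = sqrt(1 + k^2) (units with
# m = c = hbar = 1).  Note v_p > 1 while v_g < 1, and v_p * v_g = 1.
import math

def omega(k):
    return math.sqrt(1.0 + k*k)

def v_phase(k):
    return omega(k) / k

def v_group(k, h=1e-6):                  # numerical derivative d(omega)/dk
    return (omega(k + h) - omega(k - h)) / (2*h)

for k in (0.5, 1.0, 2.0):
    print(k, v_phase(k), v_group(k), v_phase(k) * v_group(k))
```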

We will see more about free particle wave packets when we begin to look at the Heisenberg uncertainty relation and at coherent states, but for the moment experimentation with the very simplest is in order. Joining two or three sectors in each of which the potential is constant gives a very popular form of demonstration, mainly because of the interesting effects which arise when a wave packet makes the transition from one region to another. Diffraction and reflection take place, with a lot of squiggles visible in the wave packets. The article of Goldberg, Schey and Schwartz, ``Computer-Generated Motion Pictures of One-Dimensional Quantum Mechanical Transmission and Reflection Problems,'' is a classic, having been written at a time when computer graphics were still very expensive and required substantial work to create. Even the numerical analysis was a feat in itself; not so much for originality as for carrying it out adequately in a practical setting. We used to have a copy of the film, back at the ESFM, but eventually it got loaned out without being returned, so it is hard to know where it is now, if it even still exists.

Such pictures formed an important part of Saxon's book on introductory quantum mechanics. By now it is out of print, although I think we still have our copy. I'm just not sure who's using it right now. But anyway, it is now very easy to create one's own computer graphics, as well as to find hundreds of web sites. It is one thing to look at the pretty pictures; another to be sure that you understand the velocities and dispersions, the meaning of the ripples (including their wave length) where the collisions occur, how much of the packet is transmitted, trapped, or reflected, and so on.

The recent article by Nitta, Kudo and Minowa, ``Motion of a wave packet in the Klein paradox,'' represents a similar computation, but in the context of the Dirac equation. In the article they remark on the long time elapsed since Goldberg, Schey and Schwartz without anyone having made a similar study for the Dirac equation.

Since I have been combing the literature for items to present in the course, I can say that they seem to be right. To begin with, hardly anyone seems to be willing to consider the Dirac equation in one dimension, where there is no spin. But that is actually an advantage, because {spin} and {negative energy states} are actually two quite different phenomena, which just happen to coexist in the three dimensional Dirac equation.

However, Bernd Thaller, author of the book ``Visual Quantum Mechanics,'' has some one-dimensional Dirac items on his website, promised for volume II of his book. I hope that he has better luck meeting deadlines than Wolfram with his ``New Science''; nevertheless he seems to be two years overdue.

Nitta, Kudo and Minowa show three different ways to try to form an initial wave packet, to give an idea that it is not such a well understood enterprise. The article of Foldy and Wouthuysen goes to the heart of the matter, although they too choose to work in three dimensions rather than one, thereby making the analysis more complicated, which is probably the reason they didn't go ahead and show some examples.

Part of what they do depends on the fact that you can not only form linear combinations of the separated wave functions, but you can form the linear combination with a matrix. They choose the option of diagonalizing the operator which picks out states belonging to negative eigenvalues, which proceeds without further ado for plane waves. But it is still not all that clear just what they have done, and we need to try out some variations on the theme. Here also you get this Zitterbewegung, without the necessity of having a collision with a potential step, which lack is also a feature of Thaller's demonstrations.
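
For a single plane wave the diagonalization is just a 2 x 2 eigenvalue problem; here is a numerical sketch (my own illustration, assuming one common representation of the free 1+1-dimensional Dirac Hamiltonian in momentum space):

```python
# For a plane wave of momentum k, the free 1+1-D Dirac Hamiltonian in
# momentum space can be written as a 2x2 matrix (units m = c = hbar = 1):
#     H(k) = [[ 1,  k],
#             [ k, -1]]
# Diagonalizing it separates the positive- and negative-energy states,
# with eigenvalues +/- sqrt(1 + k^2) -- the step Foldy and Wouthuysen
# carry out for every momentum component at once.
import numpy as np

k = 0.75
H = np.array([[1.0, k], [k, -1.0]])
energies, states = np.linalg.eigh(H)   # ascending order
print(energies)                        # [-1.25, 1.25] for k = 0.75
```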

The reason for the Newton - Wigner paper is that they get similar results from rather general relativistic arguments --- one idea is to try to find all the Lorentz-invariant differential equations and see if they have some use or other. Foldy-Wouthuysen worked out details; it all depends on a paper of Pryce, and it may go back to the thirties. It is a simple fact that understanding the Dirac equation has been a long time in coming, and we may still not know it so very well.

The remaining paper in the collection, Chisholm's ``Generalizing the Heisenberg Uncertainty Relation,'' is a recent version of attempts to understand the spreading of wave packets in a more general context than an exercise in the general properties of the Fourier Transform, although perhaps it offers new insight into that as well. As time goes on, we will want to look at several more articles of this general nature.


I have continued examining all the Hamming-distance-1 variants on Rule 110 as a (2,2) rule, but I am not sure that there is an improved Rule 110 there. It is worth recalling that Rule 110 itself is a mutant of the two-neighbor XOR, subject to turning into an OR when the left cell is 1, which is what creates the semipermeable membrane. But as the earlier analysis showed, Rule 110 is ideal for tiling the plane with isosceles integer right triangles, and the variants don't seem to do any better. In fact, in my survey of Rule 110, these variants were already considered and discarded.

I guess that Wuensche's programs are pretty good for isolating Class IV rules, but Rule 110 seems to lie in a class by itself. In any event, NXLCAU22 won't go very far in making the analysis via de Bruijn diagrams, subset diagrams, etc., which NXLCAU made for Rule 110. So those who haven't seen Rule 110, or automata in general, are welcome to play around with the program. Those who have already worked with it exhaustively may prefer to concentrate on the details of the gliders rather than proving that there are gliders. But for my part, I still haven't thought much about how to turn the superstable - superneutral criterion into an equivalent when there is an ether background. Which is clearly essential for Rule 110.

In that respect, several of the mutants seem to prefer T1 ethers, and have gliders therein, but it is not clear whether there is a sufficient variety.

- hvm


In the end, it turns out that the Zitterbewegung is a sort of Gibbs phenomenon, and this can be deduced from Sakurai's book on Advanced Quantum Mechanics. What a simple solution to something which has been bothering people for 75 years or so, as is evident from the articles of Newton-Wigner and Foldy-Wouthuysen which everyone has to study. It is not that I think they (the two articles) are particularly understandable, especially if one does not have a background in quantum mechanics. Nevertheless they can be read from the point of view of how the authors seem to understand their subject, and how well they have succeeded in presenting it. The algebra is very nice, but I never thought that it was particularly clear what it all meant.

It will be interesting to see what Bernd Thaller does with the topic, in that second edition that we are waiting for.

The Wolfram counter has incremented by two more months in at least one on-line bookstore.

I got a bounce on the last message from someone who had exceeded the Yahoo! quota. That may be from trying to download things found on the internet. Would it be better for everyone to try and get a CIEA address?

- hvm