Saturday, May 25
Cellular Automata Lecture
CINVESTAV
Professor: Harold V. McIntosh.
CELLULAR AUTOMATA (14)
May 14
This has to be one of the best spam weekends ever - 90 messages of which
maybe ten are serious. Also, I am getting bounces which imply that I have
been sending things, which is probably an example of the creative use of
false sender addresses.
Browsing through google and yahoo after a long absence, I note that a lot
of links to delta and/or cinvestav don't work. Has the machine been down
over the weekend? In any event, ``flexagons'' and ``Rule 110'' aren't getting
the response they used to.
Weisstein in particular has us on a dead-link list. Haven't we sent him
corrections? Or is he slow in getting around to fixing them, or updating his
dead-link list, or whatever?
Amazon, but not Amazon-UK nor Barnes and Noble, offers a look at the index
of ``New Science'' although you have to read all 200 pages to see the middle
or front of it. I printed a few, but they came out pretty fuzzy. They were
in light grey and I was using the color printer. But anyway, he mentions Cook
by name, and has a whole column - 1/3 of a page - on Rule 110. The universality
is crammed in at the very end, and only listed as a sketch. In fact it looks
like this important topic is crammed into the last dozen pages or thereabouts,
which may indicate that they haven't resolved the problem after all.
Reconstructing the book from the index is one thing, but presumably they really
mean it this time, that it's going to be published. There are a few hints, but
probably it is - wait and see. It will be interesting if any of the
booksellers put up some independent reviews, and if so, whether the reviewers
say anything. Star Wars is getting better treatment, because now is the time to
push the movie in theatres.
I have just finished an updated ``Concordance for Rule 110'' and it will be
forwarded as soon as Pedro can process it. T12 onwards hasn't changed, except
I took out a few empty pages. However, for the small triangles, the ten
generation de Bruijn diagram has been summarized. Not to the extent of doing
the de Bruijn diagrams, because I still have to fix up the graph drawer, but at
least listing the periodic lattices according to where their largest triangles
occur.
This is a followup to the discussion on tiling as a shift of finite type and
corresponding analysis of which clusters of triangles can be found. This still
isn't clusters, but it does give something to start with, in listing all the
lattices with fairly small unit cells. And it gives an idea of the extent
to which several obvious strings can be continued.
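As a rough illustration of what listing the lattices with small unit cells could involve, here is a brute-force sketch of my own devising (not the concordance's method): evolve Rule 110 on small cyclic rings, a ring of width w standing for a unit cell repeated forever, and record the period and shift at which each ring recurs.

```python
# Brute-force survey of spatially periodic Rule 110 lattices.
# A ring of width w represents a unit cell repeated forever, so the
# (period, shift) at which the ring recurs as a rotation of itself
# describes a small periodic lattice. A sketch only; names are my own.

RULE = 110  # Wolfram number; bit k is the successor of neighbourhood k

def step(ring):
    """One generation of Rule 110 on a cyclic ring of 0/1 cells."""
    w = len(ring)
    return tuple((RULE >> (ring[i - 1] * 4 + ring[i] * 2 + ring[(i + 1) % w])) & 1
                 for i in range(w))

def recurrence(ring, tmax=200):
    """Return (period, shift) such that the ring recurs as a rotation
    of itself, or None if not seen within tmax generations."""
    start, cur = tuple(ring), tuple(ring)
    for t in range(1, tmax + 1):
        cur = step(cur)
        for s in range(len(cur)):
            if cur[s:] + cur[:s] == start:
                return t, s
    return None
```

For example, the all-zero ring recurs immediately with period 1 and shift 0, while the width-2 ring 01 falls into the quiescent state and never returns to itself.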
- hvm
CELLULAR AUTOMATA (15)
May 15
Well, today is Wolfram's magic day. Briefly his book was sales rank #1
at Amazon, no doubt due to liquidating their backlog of orders. Some poor
boy has put a user review at B&N, but I suppose it will be a while before
reviews, if any, come in, and they will probably run along the lines of ``what
pretty seashells ... .'' What will count for more will be reviews in places
like Bulletin of the AMS or Physical Review. Normally publishers send out
advance copies for review, but that may not have been what Wolfram wanted.
The web page for the book offers a browsable index, unlike the one at
Amazon which had to be read sequentially. Nevertheless, there is not
terribly much information to be gleaned; there are some references to
tilings, and a variety of programming schemes, but it remains that the
Rule 110 universality proof, if any, is at the end and so can't be very
long.
I haven't ordered the book on purpose. If anybody buys a copy or sees one in
a book store, I would be willing to look at it.
According to Pedro, the TeX file for the concordance should now be at the
CIEA and if so it can be read and printed if anyone wants it. As usual, the
transformation to PDF or .gz will probably take some days more.
In thinking of how to arrange the results, it turns out that there are quite
a few stripes where a certain triangle is strung out with a given relative
displacement, and there are families according to the size of the basic
triangle.
Of course, it is probably possible to invent more complicated stripes and
try them out, even though they don't show up on the de Bruijn diagrams which it
has been possible to check. Although this is kind of playing around, it should
help in trying to set up some grammars which will generate friezes which can
be embedded in a Rule 110 plane. It also would be worth keeping the Bresenham
lines in mind, relative to the ether.
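Since the Bresenham lines keep coming up, it may be worth having the standard algorithm at hand. The sketch below is the ordinary integer Bresenham line, nothing specific to Rule 110, useful for generating the discrete lines against which glider trajectories can be compared.

```python
def bresenham(x0, y0, x1, y1):
    """Standard integer Bresenham line: the lattice points of the
    closest discrete approximation to the segment (x0,y0)-(x1,y1)."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    points = []
    while True:
        points.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:          # step in x
            err += dy
            x0 += sx
        if e2 <= dx:          # step in y
            err += dx
            y0 += sy
    return points
```

A line of slope 1/3, say, comes out as runs of horizontal steps of nearly equal length, which is the kind of stair-step pattern to compare against a glider's track through the ether.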
In working out that first layer surrounding a given triangle, it would seem
that there are three vertices and three edges to consider. Along the diagonal
any triangle can make contact, and from there on there are some wedge-filling
restrictions, mainly that there can't be big triangles in little wedges.
Along the top or along the side, a T1 can occupy 2 cells, but everything else
can only occupy three consecutive cells. Thus a horizontal or vertical rim is a
succession of pairs or triples of cells. However those cannot be the corners
of just any triangles, but whenever a large triangle occurs, there is a wedge
to be filled with small triangles. Up to a point the contents are forced;
where they are not means a branch in a search program.
The vertices are a little bit messier, because the variety of triangles which
can be found there is not as constrained.
We have looked at all these things somewhat in the past. What is needed is to
sit down and describe and/or enumerate them more carefully. In some ways, big
triangles seem to be easier to work with than small triangles, just because
the edges are much longer while the confusion around vertices remains the same.
I got the Landauer articles. It seems that he has been guilty of publishing
substantially the same paper in various places, and in none of them have I been
able to see why you have to erase the memory AFTER the computation. It seems
to me that it would be much more reasonable to erase it beforehand; maybe
there's a problem in knowing how much you will need so you wait until the end
to find out.
Likewise, I still don't see why you can't use a cannon ball or a bowling ball
to run Szilard's engine, rather than a gas molecule. Or let's say, cows in a
pasture.
- hvm
CELLULAR AUTOMATA (16)
May 16
Glad to hear the concordance is visible, although I wonder if anybody is ready
to start looking yet. Levy's review is posted, but it doesn't say anything.
Probably all the reviews are going to run along the lines of what a great
genius Wolfram is, how outrageous his theories are, and how pretty his tigers
and sea shells, without ever delving into what he is actually saying. While
browsing the internet I found a couple of other commentaries, but they are
similar.
As for tilings, it seems that any triangle can abut any other triangle, but
care must be taken at the vertices to use a triangle of the right parity. As we
discussed last month, the best way to have unique tiles and at the same time
solve the parity problem is to keep two sets, with alternate diagonal cells
omitted. That also eliminates the need of ever talking about T0's. Very well,
for triangles sitting on top, the retained cell (which always holds a 1) must
be on the bottom, except for the last tile to the right, which must have the
notch cut out. Similar argument for tiles sitting on the left. On the diagonal,
a tile may be placed wherever there is a notch.
That means that along the vertical or horizontal edge there is space for
(length of edge)/3 triangles, give or take adjustments for where the edge
begins and for the terminal triangle. Along the diagonal there is space
for (length)/2 triangles, but T1's must alternate, so the effective number of
triangles is more like (length)/4.
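The spacings just described reduce to a one-line calculation. The helper below is hypothetical bookkeeping of my own, using the divisors stated above (3 cells per corner along a horizontal or vertical edge, an effective 4 along the diagonal once the T1's alternate) and ignoring the boundary corrections.

```python
def edge_capacity(length, edge):
    """Rough count of triangle corners fitting along one edge of a
    large triangle: 3 cells per corner on a horizontal or vertical
    edge, an effective 4 on the diagonal (T1 alternation included).
    Hypothetical helper; boundary corrections are ignored."""
    spacing = {"horizontal": 3, "vertical": 3, "diagonal": 4}
    return length // spacing[edge]
```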
That is just for touching triangles. To place a third or subsequent triangle
there are restrictions due to the wedges, and when the restriction does not
apply any triangle could be placed. So it is worthwhile working up from smaller
to larger limits. Probably a worthwhile intermediate step is to fill in all
the obligatory placements, and discard anything which leads to an obvious
contradiction.
This is the really messy step, because the lists are bound to be fairly large,
the more so as additional tiles are placed around the periphery. That is what
we used to have programs like LISP for. (CENAC director Martinez Marquez: what
in the hell is LISP good for?). It would be interesting to see how you write
this in Mathematica.
Something I noticed looking over the figures a little while ago: Only T1's,
T2's, T4's (two relative alignments), and T6's (three alignments in C gliders
and a couple of other possible columns) seem to permit a vertical stack. Can
it be proven that there are no others?
Another classification which can be started, even though it also will soon
become quite messy, is to list possible lattices by the area of their unit
cell. One could start with the examples in the concordance, but given that Tn
has area approximately (n^2)/2, there are only limited ways that a given
integer can be written as a sum of such half-squares, and some combinations
may not even be possible. All in all, another exercise in numerology, but it
might show up some interesting tendencies.
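The enumeration can at least be started mechanically. The sketch below uses the n^2/2 approximation stated above, rounded down (so it is numerology about the approximation, not an exact tile count), and lists the multisets of triangle sizes whose areas sum to a given unit-cell area.

```python
def half_square_sums(target, nmax=None, smallest=2):
    """All multisets (as nondecreasing tuples) of triangle sizes n
    whose approximate areas n*n//2 sum exactly to `target`.
    Uses the n^2/2 approximation from the text, rounded down;
    under this rounding T1 has area 0, so sizes start at 2."""
    if nmax is None:
        nmax = int((2 * target) ** 0.5) + 1
    if target == 0:
        return [()]
    out = []
    for n in range(smallest, nmax + 1):
        a = n * n // 2
        if a > target:
            break
        for rest in half_square_sums(target - a, nmax, n):
            out.append((n,) + rest)
    return out
```

For instance, an area of 4 can be written as T2+T2 or as a single T3 under this rounding; as the target grows, the list shows how few ways a given integer splits into half-squares.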
If it's to be a new semester, I suppose that the cc-list ought to be revised,
both for subtracting and adding names.
- hvm
CELLULAR AUTOMATA (17)
May 17
Still not much activity chez Wolfram. The bookstores have lengthened their
delivery time, although it is not clear whether there was a surge in demand
or they didn't lay in a supply, disbelieving that it would be a best seller.
I have been scanning the index in greater detail, but the pickings on the
Universality of Rule 110 are meagre indeed; nevertheless he has a lot of
material on the growth of corals, quantum chromodynamics and the many names
of God. Nor is the Time book review activated yet.
What I did find via Google is:
Wim Hordijk, Cosma Shalizi and James Crutchfield,
Upper Bound on the Products of Particle Interactions in Cellular Automata,
arXiv:nlin.CG/0008038 v3 31 Jan 2001.
As the date indicates, it is over a year old, and maybe it has been published
somewhere. Also, I am not sure whether this is one of the articles which we
looked at last month and I simply failed to notice that it featured Rule 110
(along with Rule 54 and a couple of others). Or if I did, I didn't take it
seriously enough. I do remember discussing what this ``computational
mechanics'' is all about.
Basically, they offer a solution to one of Cook's challenges, to find the
number of ways that two gliders can collide. But they do it using a sort of
de Bruijn diagram approach, which seems to be what the computational mechanics
is about. It is nice to see a well reasoned approach to the subject, and the
article is certainly worth studying.
In mentioning Rule 110, they invent their own notation for the gliders. As near
as I can tell, mostly by looking at the pictures, their notation is that
Lambda^0 = ether
alpha = C3
keta = E1
kappa = G1
eta = F
w_(right) = A,
and they allege that there are three fundamentally distinct E-G collisions.
My list has only one E-G collision (I simply didn't try any others)
but it seems to be the one producing an F. Anyway, their parities all check
out. This is something that Genaro will probably want to look into. What they
have goes significantly beyond parity, but I'm not too sure how easy it is to
use. After all, we already have the (mod 14) rule for the ether background.
Shalizi's pages at Santa Fe have been showing a work in progress dedicated
to a thorough analysis of Rule 110 via computational mechanics. I think I'll
send him a copy of this message, in order to inquire whether a written
form is available yet.
- hvm
CELLULAR AUTOMATA (18)
May 21
It is possible to locate articles by Crutchfield, Shalizi, Hansen, and others
by looking around with Google. Also, the Santa Fe list of publications and
working papers has most of that and more. I found a couple that seem to capture
the essence of what we're interested in, and I will have copies at the next
class meeting (which I understand will be Friday, not Saturday).
In the end, this seems to be what they are doing: The main use of the de Bruijn
diagrams is to discover shift-periodic configurations, although the LCAU
programs also detect evolution-to-a-constant. In the first case, the results
are repetitive, and can be used generation after generation; in the second
case, once you get the result, interesting as it may be, that's it!
There are all kinds of predicates that could be applied to a de Bruijn diagram,
mostly unused because they're non-repetitive. One possibility would be to
accept the OR of two different shift-periods, which would likely yield a
chaotic mixture as well as being non-repetitive. However, amongst the included
configurations would be the approach or departure of a pair of well separated
gliders with respect to a common background, such as the ether. That much would
actually be repetitive, and remain so until the gliders came into contact. Or
in the case of separation, an extrapolation back to where an ancestor program
could be used.
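For concreteness, the smallest instance of this machinery for Rule 110 is period 1 with zero shift, the fixed configurations. Nodes of the de Bruijn diagram are two-cell windows, and an edge survives only when the centre cell of the three-cell neighbourhood reproduces itself; cycles in the surviving subgraph spell out the spatially periodic fixed configurations.

```python
# Minimal de Bruijn diagram for Rule 110 fixed configurations
# (period 1, shift 0). Nodes are 2-cell windows (a, b); the edge
# (a,b) -> (b,c) survives only if the centre cell reproduces itself,
# i.e. rule110(a, b, c) == b. Cycles in the surviving subgraph are
# exactly the spatially periodic fixed configurations.

RULE = 110

def rule110(a, b, c):
    return (RULE >> (a * 4 + b * 2 + c)) & 1

edges = {(a, b): [(b, c) for c in (0, 1) if rule110(a, b, c) == b]
         for a in (0, 1) for b in (0, 1)}
```

Inspecting the surviving edges shows the only cycle is the self-loop at (0,0): Rule 110 has no fixed configuration other than the all-zero one. Larger periods and shifts work the same way with windows of 2p cells compared against the shifted result, which is presumably what the LCAU programs do internally.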
At first that doesn't sound useful, but considering the least common multiple
of the two periods, the gliders will move yet stand in the same phase relative
to one another, which defines an equivalence relation on the eventual
collisions. That in turn gives such things as the maximum number of products
of a collision, or the minimum number of phases required to produce a given
pair of retreating gliders.
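The bookkeeping behind that equivalence is elementary and may as well be written down. The periods and displacements below are hypothetical placeholders, not measured Rule 110 gliders.

```python
from math import gcd

def common_period(p1, d1, p2, d2):
    """Two gliders with (temporal period, displacement per period)
    (p1, d1) and (p2, d2): after L = lcm(p1, p2) generations both
    stand in their original phases, and their separation has changed
    by a fixed net amount. Returns (L, change in separation per L).
    Hypothetical inputs, not tied to specific Rule 110 gliders."""
    L = p1 * p2 // gcd(p1, p2)
    shift1 = d1 * (L // p1)   # displacement of glider 1 over L generations
    shift2 = d2 * (L // p2)   # displacement of glider 2 over L generations
    return L, shift1 - shift2
```

Since the separation changes by a fixed amount every lcm(p1, p2) generations, only finitely many distinct relative phases can occur before contact, which is what bounds the number of ways a given pair can collide.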
It is an interesting question, whether there is a combination in some de Bruijn
diagram small enough to check the results. By phrasing their results in terms
of finite automata and transducers, the folks at Santa Fe avoid using the full
de Bruijn diagram; but in any event the results solve one of Cook's challenges,
to enumerate the number of ways that a given pair of gliders can collide.
I wonder if the Bresenham lines are common to all automata with gliders, or if
they are unique to Rule 110 on account of its being a tiling problem?
Another point of curiosity, is how the process might apply to multiple
collisions, with respect to the least common multiple of all their periods.
Although the calculation would be complicated, it ought to be possible to
derive a set of inequalities relating the before- and after-states.
-hvm