Saturday, April 27
Cellular Automata Lecture
CINVESTAV
Professor: Harold V. McIntosh.




CELLULAR AUTOMATA (12)




As discussed in Saturday's class, it seems that the Einstein-Podolsky-Rosen paradox, Bell's inequality, and similar items have had enough written about them that some conclusions can be formed. The problem is that the discussion really divides into three parts:

I. The philosophical context and background.
II. The mathematical aspects.
III. Whether a mathematical analysis resolves the philosophy.

The background is easy enough to find and we have discussed it, although one could consult biographies and writings on political and scientific history to try to find some of the deeper motives and issues. On the surface, Einstein felt that a probabilistic theory ought to have something behind it generating the probabilities, much as Newton's or Maxwell's equations lie behind ordinary statistical mechanics.

As far as I can tell, probability has always been a loose end in quantum mechanics. The only specific statement which I have found in an extensive but not exhaustive search is a paper on scattering theory by Born in which he proposes the absolute value of the wave function as a probability and in an `added in proof' remark decides that it should be the square of the absolute value.
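
For the record, the rule as it finally stood (the standard textbook statement, not anything quoted from Born's paper itself): the probability of finding the particle near x goes as

    $$ P(x) = |\psi(x)|^2, \qquad \int |\psi(x)|^2 \, dx = 1, $$

rather than as the absolute value itself; among other things, only the squared version has its normalization preserved by the time evolution.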

After Heisenberg, Schroedinger and Dirac had had a go at proposing foundations and methods of procedure, von Neumann came along with his book (originally in German, 1932) ``Mathematical Foundations of Quantum Mechanics'' in which he took violent exception to Dirac's delta function, and to the assumption that all operators were self-adjoint. In a way, that is a statement of the mathematical basis of the problem.

So according to von Neumann, the results of physical measurements were eigenvalues of Hermitian operators (and thereby necessarily real), but even from the first there was some doubt as to which Hermitian operators. They ought to have a `complete' set of eigenfunctions, which led to the concept of self-adjoint extensions, deficiency indices and the like. In terms with which we are supposedly more familiar, the question of whether Weyl's limit point or limit circle case applies is involved, which in turn decides what has to be done with boundary conditions.
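
To recall Weyl's alternative in symbols (this is just the standard statement): for the equation

    $$ -u'' + q(x)\,u = \lambda u, \qquad \Im \lambda \neq 0, $$

on a half line, either every solution is square integrable near infinity (the limit circle case, where a boundary condition must be imposed there) or exactly one is (the limit point case, where none is needed). Which case holds fixes the deficiency indices, and with them the possible self-adjoint extensions.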

Anyway the discussion was going around that there could be causality behind the probability, so von Neumann decided that zero variance should eliminate probabilistic uncertainty, and derived a condition using the usual formula for variance and the square of the operator. Now an eigenfunction means that you get the same result when you repeat the experiment, but of course from the squared operator you get the square of the previous result. For a result to equal its own square it had better be zero or one (you've got the particle, or whatever, or you don't), which calls for a certain kind of operator --- namely an idempotent.
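
Spelled out, the criterion reads: a state is dispersion-free for an observable A when

    $$ \langle A^2 \rangle - \langle A \rangle^2 = 0. $$

Applied to a projection E, for which E^2 = E, this gives \langle E \rangle (1 - \langle E \rangle) = 0, so the expectation must be exactly 0 or 1: the yes-or-no alternative just mentioned.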

For finite matrices idempotents are easy enough to acquire, because of Sylvester's theorem. It won't work immediately on Hilbert space (as Dirac would have liked) because you are mainly dealing with unbounded operators and the Jordan normal form may become quite involved. Just how involved was essentially the content of von Neumann's work on functional analysis in the early and middle thirties. This is where the work of Mackey and Gleason comes in, because in the fifties and sixties they published work on the structure of such operators.
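
For a finite matrix A with distinct eigenvalues \lambda_1, ..., \lambda_n, Sylvester's theorem delivers the idempotents explicitly, as Lagrange interpolation polynomials in A:

    $$ E_i = \prod_{j \neq i} \frac{A - \lambda_j I}{\lambda_i - \lambda_j}, \qquad E_i^2 = E_i, \quad E_i E_j = 0 \ (i \neq j), \quad A = \sum_i \lambda_i E_i. $$

For an unbounded operator there is no such finite product, and replacing it is precisely what the spectral theorem and its attendant machinery are for.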

The essence of Bell's article, the one passed out in Saturday's class, was that he found a slight error in von Neumann's derivation (an additivity assumption which is unjustified for noncommuting matrices) but in the end he confirmed von Neumann's conclusion about nonzero variance. That was apparently not completely to his liking, and the search for ways to circumvent probability in quantum mechanics continues.

However, the paper of Einstein, Podolsky and Rosen was somewhat more complicated than that, because they wanted to have two particles created under certain conditions and then examined after they had become separated. That complicates the analysis because it is now necessary to have a two-particle wave function, but presumably the same analysis of variance can be performed.
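
In the spin version usually analyzed nowadays (Bohm's variant of the original position-momentum argument), the two-particle wave function is the singlet

    $$ \psi = \frac{1}{\sqrt{2}} \left( |{\uparrow}\rangle_1 |{\downarrow}\rangle_2 - |{\downarrow}\rangle_1 |{\uparrow}\rangle_2 \right), $$

which cannot be written as a product of one-particle states; that nonfactorability is the entire source of the correlations between the separated measurements.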

Moreover, in the intervening years it has become possible to actually perform such an experiment as they described and look at some of the strange things that quantum mechanics actually predicts, probability and all. This isn't exactly quantum computing, but it leads people to take `quantum cryptography' seriously. And in fact, quantum teleportation is one of the operations which you might want a quantum computer to perform.

One of the features of this subject is the vocabulary which the practitioners have introduced. Teleportation isn't exactly the teleportation of Star Trek or even of the earlier science fiction; it simply means transferring a quantum state, phase and all, from one place to another. One quantum state does not a Schroedinger Cat make.

Of course, there is the question of what all this has to do with what we are supposed to be studying in the course. All of these discussions begin with wave functions, such as are solutions of differential equations, and it seems to lead to glorious confusion if you go ahead and discuss the physics without understanding the differential equation. Mathematics before Physics, or Theory before Applications, or however you want to describe it.

So the basic question is how to represent wave packets (meaning square integrable elements of Hilbert space) in terms of the solutions of a differential equation. We haven't looked too much at the Heisenberg uncertainty relation and coherent states because that is relatively straightforward. But it has to be done, which is why several reprints were passed out earlier in the term. They should be studied.
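
For reference, the coherent states in question are Gaussian packets, essentially

    $$ \psi(x) \propto \exp\left( -\frac{(x - x_0)^2}{4\sigma^2} + \frac{i p_0 x}{\hbar} \right), $$

which saturate the Heisenberg bound \Delta x \, \Delta p = \hbar/2 and, in a harmonic oscillator, follow the classical motion without spreading.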

Something which ought to be new and interesting to work with is to see how this should be done with the Dirac equation. If the answer is known and published in the literature, I don't know where to find it. It may well be that nobody was ever particularly concerned with the question. Thaller's forthcoming picture book may or may not give the analysis I'm looking for, but that shouldn't mean that we can't find it for ourselves.

And then there is Huang's claim that particle spin is a consequence of zitterbewegung. I don't think so, which means examining his derivation to see whether the Gordon decomposition doesn't work without the negative energy states. But it is worth resolving the negative energy states first before taking up spin.
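
For reference, the decomposition in question (in its standard free-particle form) splits the Dirac current into a convection part and a spin part:

    $$ \bar\psi \gamma^\mu \psi = \frac{1}{2m}\left[ \bar\psi\,(i\partial^\mu \psi) - (i\partial^\mu \bar\psi)\,\psi \right] + \frac{1}{2m}\,\partial_\nu\left( \bar\psi\,\sigma^{\mu\nu} \psi \right), $$

and the examination would have to show what becomes of each term when the negative energy components are projected away.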

But that's part of what comes next. For the moment, the conclusion which should be taken from Saturday's class is that Bell's inequality is another statistical property of a wave function, improving von Neumann's claim that you can't make dispersion-free states (which is what he calls zero variance). I will try to locate the paper which describes this in terms of mutual entropy.
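
In its CHSH form (the version most often quoted) the inequality bounds a combination of correlations at detector settings a, a', b, b':

    $$ |E(a,b) + E(a,b') + E(a',b) - E(a',b')| \leq 2 $$

for any local hidden variable model, while the singlet state gives E(a,b) = -\cos(a - b) and reaches 2\sqrt{2} at suitable angles; it is in that precise sense a statistical property of the wave function.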

Going on from there to the Einstein-Podolsky-Rosen paradox implies reading some more of the literature. The series of articles by Mermin puts some results of applying the Bell inequality to experimental setups in terms of functions as sets of mappings (instruction sets carried by the particles). Those who are interested can look into the literature; meanwhile suffice it to say that discrepancies can be found which imply some elements of chance. It's all kind of a digression from the course, but something which I'll have to get clear in time for the Summer School.
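
The instruction-set argument can even be checked by brute force. The following is a minimal sketch of my own (the names are hypothetical, and it is not Mermin's code): each particle carries the same mapping from the three detector settings to the two colors, and enumeration shows that with independently random settings the detectors must flash the same color at least 5/9 of the time, whereas quantum mechanics predicts exactly 1/2 for the appropriate state.

    from itertools import product
    from fractions import Fraction

    SETTINGS = (1, 2, 3)     # three switch positions on each detector
    COLORS = ('R', 'G')      # the two lamps

    # An instruction set assigns a color to each setting; both particles
    # carry identical sets, so equal settings always give equal colors.
    worst = Fraction(1)
    for instr in product(COLORS, repeat=3):       # all 8 instruction sets
        same = sum(1 for a, b in product(SETTINGS, repeat=2)
                   if instr[a - 1] == instr[b - 1])
        worst = min(worst, Fraction(same, 9))     # fraction of agreeing pairs

    print(worst)   # prints 5/9; no instruction set can do lower

The gap between 5/9 and 1/2 is the sort of discrepancy the experiments measure.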

One wonders how simple wave functions end up involving all the subtleties of functional analysis. It may be that functional analysis is too general, and that a simpler example will reveal the structure without having to worry about all the possible pathology. It's kind of like going through distribution theory in order to do some simple Fourier transforms. And maybe, if there weren't such an insistence on overturning quantum mechanics, von Neumann's result could have been corrected sooner and put to use long before now. Or is contemporary electronic instrumentation and technology an essential ingredient in the recent results?

- hvm

CELLULAR AUTOMATA (13)




Still no news of ``New Science'' on Google, but Eric Weisstein has taken up the cause --- ``Many People Believe that Stephen Wolfram Invented Cellular Automata.'' No doubt true; take away the first two words, and it becomes false. Seems they are going to sell a Mathematica videogame along with the book; I wonder if it has good graphics?

Continuing to work on the concordance, I've built up some more de Bruijn diagrams with samples, although I'm still too lazy to put in needed fixes to the program.

Because of the difficulty of filling the space under the hypotenuse, there aren't many lattices constructed from a single triangle: the unique T1 lattice, and the two T2 lattices, ether and antiether.

T4's can be strung out diagonally, but again the nooks and crannies prevent doing that for bigger triangles. With T4's, and in general, quite a bit can be done by adding T1's, but nothing larger. The T4's can be stacked in columns, with the up-down choice giving many lattices. Specific cases have their own shift periodicities, so T4's occur with regularity in the de Bruijn map. The diagonal strings extend further with T1's as fillings, but not indefinitely.

Just playing around trying to invent lattices, it is probably simplest to begin with strips, then fill them out so their edges will fit together if possible. We saw in a previous class that sometimes that works and sometimes it doesn't. One reason for going over the de Bruijn map systematically is to make sure of not missing any of the simpler ones.
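
For concreteness, here is a minimal sketch in Python of the de Bruijn construction (my own sketch, with my own names, not the concordance program). For a pattern reproducing itself shifted s cells after p generations of Rule 110, the nodes are the 2p-cell windows and the edges the (2p+1)-cell windows; an edge survives when p generations of evolution give back the correctly shifted cell, and the cycles among surviving edges are exactly the shift-periodic lattices. The node count 4^p is what makes period 11 or 12 expensive.

    from itertools import product

    RULE = 110

    def step(cells):
        # One generation, radius 1, no wraparound: output is 2 cells shorter.
        return tuple((RULE >> (4*cells[i-1] + 2*cells[i] + cells[i+1])) & 1
                     for i in range(1, len(cells) - 1))

    def de_bruijn_edges(p, s):
        # Surviving edges of the (shift s, period p) de Bruijn diagram.
        # The edge through the (2p+1)-cell window w runs from node w[:-1]
        # to node w[1:]; it survives when p generations of w reproduce the
        # cell s places right of the window's center (s < 0 for patterns
        # moving right).
        edges = {}
        for w in product((0, 1), repeat=2*p + 1):
            cells = w
            for _ in range(p):
                cells = step(cells)
            if cells[0] == w[p + s]:
                edges.setdefault(w[:-1], []).append(w[1:])
        return edges

    # Example: period 2, zero shift; any cycle read off from `edges`
    # is a spatially periodic configuration recurring after 2 steps.
    edges = de_bruijn_edges(2, 0)
    print(len(edges), "nodes with outgoing edges")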

When T4's are mixed with things other than T1's, some chaotic combinations seem possible. Not to forget the whole family of D gliders and their spacings. Of course, bringing in T1's alongside T3's gives all the A's and B's, specific combinations of which have their own lattice periodicities and keep recurring all across the map. As I recall, some plausible gliders degenerate into instances of A's and B's without giving anything new. Nor are there ABar's nor ABarBar's.

T1's and T2's seem incompatible in a single lattice, as do T2's and T3's. Of course, the small T's all mix with larger T's, of which we have many examples. While cataloging the T4 diagonal and column arrangements I neglected to include other combinations with T4, thinking there weren't many. Well, there are some, so it's a matter of reviewing the map and adding them to the various indexes which will accompany it.

What I have been cataloging are the T5 lattices. Not counting that some of them have glider format and hence numerous variants, there are a dozen or so. When T6's and beyond are included, the number will increase still further.

The glider-format concept turns up in the map. There is one case where two different ideals connect to the zero ideal without connecting to each other.

Zero-in-6 is an example: both T1 columns and T2 columns can join to the zero field, but not to each other. Similar things must happen when the periodicities get high enough to include the C's and the T4 columns.

So far, the ``glider'' concept has depended on having two cycles, with links connecting them in both directions. That allows fields having two parallel stripes, which can be laid out alongside one another. If one is used once, or sparsely, relative to a large-scale use of the other, the result is what is visually recognizable as a glider.

One thing to search for is some combination in which there are three stripes. But this has to be defined carefully; some of the examples have multiple phases of one of the stripes, which show up as multiple loops; so what is wanted is non-isomorphic loops, say all of different lengths. So far I don't think I've seen any.

Also, that would not be a glider collision, because for that, three different shift-periods would have to coexist in the same lattice. The paragraph above calls for distinctive stripes of the same shift-period.

So far, the larger T's are relatively scarce. Mainly, it is hard to have a large triangle with a small period, unless they are scattered laterally. That is why some large triangles showed up in the highly superluminal lattices. But in the normal region the triangles would have half the period, and the studies only go up to period 10. Some programming to reindex the nodes in the de Bruijn diagram would give 11, maybe 12, generations. Is it worth the trouble? Right now, right away, I mean.

The biggest shift-periodic triangles I've seen so far are one T11 at 10-right-in-8, a couple of T10's at right-9 in 3 and 10 generations respectively, and maybe some I've missed for not looking carefully. Each combination is adding to the megabytes; where there are simple cycles it is easy to extract the needed information, but the big de Bruijn diagrams with loops in them need further work on the editor.

Now and then I have been looking at the degeneration of periodic lattices due to a defect. Actually it is a ``feature'' of the screen when using full screen evolution, which I don't do often. It also happens when using ``random'' to choose a node for graphing; the search program doesn't look very far to synchronize itself. Those from previous years will recall that that is how some of the large T's were discovered, and in any event it produces what is supposed to be the front cover of the forthcoming book. Just 10 more days!

- hvm
