In modern physics, it has become accepted theory that there are fundamental units of length and time: the Planck length and the Planck time, respectively. By fundamental, it is meant that there is no possibility of any distance or period of time smaller than either. Extending this (and assuming quantum interpretations are "real"), one might consider the universe to be made up of blocks that are one Planck length on each side. Viewed this way, a given quantum particle can occupy either one block or the one next to it -- there is no way for it to be "in between." Further, the time it takes light to move from one block to the next in a vacuum is the Planck time. Since that is the fastest possible movement over the shortest possible distance, no smaller amount of time need be considered.
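The arithmetic behind that last claim can be checked directly: the Planck time is just the light-travel time across one Planck length. A quick sketch, using CODATA values that are supplied here as assumptions (they do not appear in the text above):

```python
# Check that Planck time = Planck length / c.
# Numeric values are CODATA 2018 figures, assumed for illustration.
PLANCK_LENGTH = 1.616255e-35  # meters
PLANCK_TIME = 5.391247e-44    # seconds
C = 2.99792458e8              # speed of light in vacuum, m/s (exact by definition)

light_travel_time = PLANCK_LENGTH / C
# The two agree to better than one part in 100,000.
assert abs(light_travel_time - PLANCK_TIME) / PLANCK_TIME < 1e-5
```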
Compare this to the cellular automaton. Cellular automata are made up of sites, which are discrete from one another and can each hold one of two or more values. At each time step, every site takes a new value based on the values of itself and its neighboring sites. This process happens simultaneously at all sites and depends only on the values at the previous step; no examination further into the past is necessary. Hopefully the parallel is obvious: in a three-dimensional cellular automaton, consider each site to be one Planck length across, each time step to be one Planck time, and different quantum particle energies to be represented by different site values.
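The update scheme just described can be sketched in a few lines. A one-dimensional, two-state automaton stands in here for the three-dimensional case, and the particular rule (elementary rule 90, where each site becomes the XOR of its two neighbors) is only an illustrative choice:

```python
# Minimal sketch of a synchronous cellular automaton update: every site
# takes a new value at once, based only on itself and its neighbors at
# the previous step. Rule 90 (new value = left XOR right) is an
# arbitrary example rule; boundaries wrap around.
def step(sites):
    """One synchronous update of a 1D two-state automaton."""
    n = len(sites)
    # Reads only the previous generation; writes a whole new one.
    return [sites[(i - 1) % n] ^ sites[(i + 1) % n] for i in range(n)]

row = [0, 0, 0, 1, 0, 0, 0]  # a single "particle" in an empty universe
for _ in range(3):
    row = step(row)
```

Note that `step` never modifies the old row in place: it reads only the previous generation and emits an entirely new one, which is what makes the update "simultaneous at all sites."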
There is at least one problem with this simplistic view, though. J. S. Bell showed that the Einstein-Podolsky-Rosen paradox holds: for two photons entangled with one another, the collapse of one into a given state means the other must be found in the opposite state if measured. Hence, depending on interpretation, state information is communicated between two entangled photons (or other quantum particles) faster than light could travel the distance between them. This, in turn, may pose a problem for the cellular automaton view of the universe, since a cellular automaton has no way to communicate faster than its cells can change. More widely accepted is the view that the collapsed state is predetermined when the entangled particles are created, which would cause no problems for a cellular interpretation.
Even if a faster-than-light communication vector existed, there would still be one way to salvage the idea. Stephen Wolfram has shown that a class 4 cellular automaton is capable of acting as a universal computer, which in turn is capable of carrying out any computational process. The values of the blocks that make up the universe could then be explained as memory sites in a universal computer. Further, that computer could run a program that changes the sites according to the laws of the universe, including those laws that explain quantum entanglement. This is something of a conceptual nightmare: the computer exists entirely within a cellular automaton itself, and is running a program simulating something like a cellular automaton, but with bizarre rules.
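For the curious, rule 110 is the standard concrete example of a class 4 automaton known to be computation-universal (the proof is due to Matthew Cook, building on Wolfram's conjecture). A sketch of one update step, with the rule number decoded into a lookup table:

```python
# Elementary rule 110: a cell's new value is determined by its 3-cell
# neighborhood (left, center, right), looked up in the rule number's
# binary expansion: 110 = 0b01101110.
RULE_110 = [(110 >> i) & 1 for i in range(8)]

def step_110(sites):
    """One synchronous rule-110 update with periodic boundaries."""
    n = len(sites)
    return [RULE_110[4 * sites[(i - 1) % n] + 2 * sites[i] + sites[(i + 1) % n]]
            for i in range(n)]
```

Run from random initial rows, rule 110 produces the long-lived, interacting localized structures ("gliders") characteristic of class 4 behavior, and Cook's universality construction encodes computation in collisions between them.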
An interesting thought experiment is to apply this picture to the big bang. The initial singularity would be the initial contents of the computer's memory before it began running. In the first few generations of the simulation, the pattern would expand as a sphere within which no complex structures (corresponding to particles or forces) had yet emerged from the noise. After a long enough time, structures would develop on their own, organizing according to the rules running in the computer. These would correspond, from our internal point of view, to hadrons and leptons, then later protons and neutrons, eventually whole atoms, and so forth into the universe we know today.
As with all flavors of metaphysics, this hasn't any practical ramifications. Still, it's fun to think about, and amazing to consider how the activity any of us can see in a class 4 automaton might correspond to the automaton's own internally coherent universe. And further, how it might look to the intelligent beings within that structure considering their own universe....
Update 2002-07-13: In his book A New Kind of Science, Wolfram examines and extends some of the ideas mentioned in this writeup, coming up with a fairly interesting way to solve the problem of distant states being updated simultaneously. Instead of viewing all of the states as updating simultaneously, he suggests that only a single cell may update at a time, as though an agent -- or a Turing machine! -- were visiting each one in sequence. Because a given observer exists within this system, it is itself only occasionally fully updated, so its view must show discrete events rather than the underlying "continuous" updating procedure. Wolfram goes on to show how this updating forms a network of causality, in which local changes can be caused not only by local events but also by spatially distant ones, provided they are connected in the network. This handily solves the faster-than-light problem mentioned above, and several others as well, as explicated in detail in the book.
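The sequential-update picture can be sketched as follows. The left-to-right scan order and the XOR rule are illustrative assumptions here, not Wolfram's actual construction:

```python
# Sequential updating: instead of every cell changing at once, a single
# "head" visits one cell per tick and updates only that cell, like a
# Turing machine walking over the tape. Scan order and rule are
# illustrative choices.
def sequential_run(sites, rule, ticks):
    """Update one cell per tick, scanning left to right cyclically."""
    sites = list(sites)
    n = len(sites)
    for t in range(ticks):
        i = t % n  # the single cell touched on this tick
        left, right = sites[(i - 1) % n], sites[(i + 1) % n]
        sites[i] = rule(left, sites[i], right)
    return sites
```

Running this on a given row generally yields a different result than one synchronous update of the same rule, because later cells in a sweep already see their left neighbor's new value -- which is exactly why an observer embedded in the system, itself updated only piecemeal, perceives something different from the underlying cell-by-cell procedure.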