The theory of assumed existence can be seen in many ways to be a product of photography and later cinema, each of which allows a portion of reality (or more properly its representation) to be stripped out of the temporal present so that the two appear to exist in parallel. As media technologies become more complex and include more of the elements which we typically use to define reality, the notion that they run in parallel with reality rather than being subsumed by it grows stronger. What is perhaps most interesting about the cinematic portrayals of VR and associated parallel realities in films like The Matrix is the way in which cinema and cinematic techniques of representation are always implicitly taken to be the truest embodiment of such realities.

I've been writing a Perl module to play the Game of Life across a network of peers, and this gave me some reasons to believe that we do, indeed, live in a Matrix world.

Suppose that a technologically advanced civilisation wished to create a virtual world, existing as a software simulation. They could build a single huge supercomputer to do it. But I think it is a defensible assumption that they would be more likely, for reasons of cost, to use a network of mass-produced computers, linked together.

Of course, to create a single world within a disparate set of computers, the computers need to talk to each other. For example, in the Game of Life world, each square needs to know how many of its neighbours are alive. A single computer can handle many squares, but it will regularly need to check with other computers for the life status of the cells along the borders of its domain.
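
Here's roughly what that looks like from one computer's point of view, sketched in Perl. This isn't my actual module - the grid is a toy three-by-three blinker, and the fetch_from_peer call is imaginary, standing in for whatever network protocol the peers would really use:

    use strict;
    use warnings;

    # A toy view of the board from one computer's perspective. Cells in
    # @grid belong to this machine; anything off the edge belongs to a
    # neighbouring machine and would have to be fetched over the network.
    my @grid = (
        [0, 1, 0],
        [0, 1, 0],
        [0, 1, 0],
    );
    my ($rows, $cols) = (scalar @grid, scalar @{ $grid[0] });

    # Count the live neighbours of the local cell ($r, $c). The
    # out-of-range case is exactly where a peer-to-peer version would
    # have to ask a neighbouring computer instead.
    sub live_neighbours {
        my ($r, $c) = @_;
        my $count = 0;
        for my $dr (-1, 0, 1) {
            for my $dc (-1, 0, 1) {
                next if $dr == 0 && $dc == 0;
                my ($nr, $nc) = ($r + $dr, $c + $dc);
                if ($nr < 0 || $nr >= $rows || $nc < 0 || $nc >= $cols) {
                    # Border cell owned by another computer:
                    # $count += fetch_from_peer($nr, $nc);  # imaginary call
                    next;
                }
                $count += $grid[$nr][$nc];
            }
        }
        return $count;
    }

    print live_neighbours(1, 1), "\n";   # the middle of the blinker: 2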

This might create a problem. If all the computers in our network had to communicate with all the other computers, there would be a Gnutella-like chaos of messages being passed around, growing quadratically with each new node added to the network (every new computer has to talk to every computer already there).

The Game of Life doesn't have this problem, because each square is only affected by its neighbours. By extension, each computer only needs to talk to computers which handle adjoining regions. Instead of increasing quadratically, the number of messages being passed increases linearly with each new computer. For example, if each computer handles a board of 100 x 100 GoL squares, new computer N in the diagram below only needs to talk to computers B, D, F and G, as the sketch after the diagram shows. A, C and E can remain blissfully unaware.

 - -
|A|B|
 - - -
|C|D|N|
 - - -
|E|F|G|
 - - -
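
Here's a little Perl sketch of that adjacency rule, with the diagram's layout hard-coded (note the deliberately empty slot above N, just as in the picture):

    use strict;
    use warnings;

    # The layout from the diagram, keyed by "column,row" slots. There is
    # no computer in the top-right slot, just as in the picture.
    my %name = ('0,0' => 'A', '1,0' => 'B',
                '0,1' => 'C', '1,1' => 'D', '2,1' => 'N',
                '0,2' => 'E', '1,2' => 'F', '2,2' => 'G');

    # Which peers must the computer at ($x, $y) talk to? At most eight,
    # however large the network grows - hence linear message growth.
    sub peers {
        my ($x, $y) = @_;
        my @out;
        for my $dx (-1, 0, 1) {
            for my $dy (-1, 0, 1) {
                next if $dx == 0 && $dy == 0;
                my $key = ($x + $dx) . ',' . ($y + $dy);
                push @out, $name{$key} if exists $name{$key};
            }
        }
        return sort @out;
    }

    print join(',', peers(2, 1)), "\n";   # B,D,F,G - A, C and E never hear from N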

To put it another way, the Game of Life forbids action at a distance. Sound familiar? Well, our physics too forbids action at a distance. It's a core rule that two bodies can only interact if they are contiguous, or through the intervention of a third body which passes between them.

However, this by itself wouldn't solve the message growth problem for our world. Why not? Well, the Game of Life proceeds at a constant speed: each cell affects its own neighbours exactly once per turn. Our world, however, has variable speeds. So even if a computer simulation of our world forbade action at a distance, there would still be the danger that a very fast-moving object might arrive from the space simulated by a non-neighbour computer.

What our future civilisation would need is some kind of maximum speed limit on objects in the simulation. This speed limit would guarantee that no object could move through the entire region of space governed by a single computer before that computer could inform its neighbours of the object's existence. The speed limit would ensure that each computer could talk only to its neighbours, but still remain certain that it knew about everything that could affect space in its region.
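
The numbers make this concrete. Sticking with the 100 x 100 regions from the example above, and using the Game of Life's own speed limit of one cell per turn, a quick sketch shows how much slack the computers actually have:

    use strict;
    use warnings;

    # Back-of-the-envelope version of the guarantee. If nothing can move
    # faster than $max_speed cells per tick, and neighbours swap border
    # news every tick, then nothing can sneak across a whole region
    # unannounced as long as the region is wider than the speed limit.
    my $region_size = 100;   # cells per side, per computer (from the example)
    my $max_speed   = 1;     # GoL's own speed limit: 1 cell per tick

    my $ticks_to_cross = $region_size / $max_speed;
    print "Crossing a region takes $ticks_to_cross ticks, ",
          "but neighbours hear about arrivals after just 1.\n";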

Of course, our universe famously does have such a speed limit.

So putting all the evidence together, our universe looks suspiciously suitable for running inside a massive network of computers.

  • It's divisible into regions of space - so each computer can run the simulation for a particular region
  • The laws of physics are invariant between these regions - so the software running on the computers can always be the same
  • There's no action at a distance - so computers only need to talk to their neighbours
  • There's a maximum speed limit for objects in the universe - so computers needn't worry about objects zipping between regions faster than the network can pass messages.

Well, you have to wonder, don't you?

Disclaimers

IANAP. I Am Not A Physicist. Maybe someone can tell me that "no action at a distance" is all bunk according to the latest quantum experiments.

I can't prove it would be easier or cheaper to simulate us with a network of computers rather than a single supercomputer. It just seems likely.

I don't know if the problems of network communication are artefacts of our physical universe. If the advanced civilisation's universe had no speed-of-light restriction, would infinite amounts of communication between computers be possible?

This hypothesis is really fun to play with when you know just enough computer and physical science to get yourself into trouble.

Beyond requiring a universal speed limit, any simulated model of the universe would also require that no measured value lie on an infinitely divisible scale. This is because if a measurable value in our universe could take any arbitrary value, the simulator would need infinite data to store that value.

Consider wanting to measure something that is less than a meter long, with a meter-stick of course. You can first take a measurement in tenths of a meter and represent this approximate value with the one-digit numbers 0-9. Measure in hundredths of a meter and you can approximate with the two-digit numbers 00-99. But no matter how many times you divide your unit of measure by ten, the true length of the object can still fall between the two closest measurable values. Thus with any finite number of digits you cannot guarantee a truly accurate measurement.

The obvious solution for the simulator is to define some absolute minimum distance. Then any length or position must fall perfectly on a multiple of this distance and can be perfectly represented by a finite value (unless you have an infinite length or distance).
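
In code, the trick is simply that an integer count of minimum-distance units is exact and finite. A tiny Perl sketch - the 42 is arbitrary, and the constant is the Planck length I'll get to in a moment:

    use strict;
    use warnings;

    # Store every length as an integer count of some minimum distance.
    # The integer is exact and finite; only the choice of minimum unit
    # is physics.
    my $min_dist = 1.616e-35;      # meters, approximately (Planck length)

    my $length_in_units  = 42;     # exact, finite, arbitrary for the demo
    my $length_in_meters = $length_in_units * $min_dist;
    printf "%d units is about %.3e m\n", $length_in_units, $length_in_meters;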

And guess what... Some guy called Max Planck showed, indirectly, that there is such a minimum distance*, now known as the Planck length, which is about 1.6 × 10^-35 meters. Or 1.6 divided by ten 35 times - small enough that you probably haven't bumped into this limit recently.

The fun with physics continues if you consider the implications of having both a maximum speed and a minimum possible distance. Even at the speed of light, it still takes about 5.4 × 10^-44 seconds (5.4 divided by ten 44 times) to cross the Planck length, and this length of time is called, you guessed it, the Planck time. If two moments in time are closer together than this interval, then by definition nothing can have moved between them. Thus perfectly accurate time values can be stored as finite values, as long as time itself is finite.
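
You can sanity-check that figure in a couple of lines of Perl:

    use strict;
    use warnings;

    # Sanity-checking the figure: Planck time = Planck length / c.
    my $planck_length = 1.616e-35;   # meters
    my $c             = 2.998e8;     # meters per second
    printf "%.2e seconds\n", $planck_length / $c;   # prints ~5.39e-44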

Mass too must have a minimum unit, since special relativity tells us that mass and energy are interchangeable (remember E = mc^2) and quantum mechanics gets its name from the fact that a quantum is the smallest possible unit of energy.

Conveniently, with basic values for mass/energy, distance and time you can derive all other units of measure.** This means that any value with a non-infinite range of possible values can be accurately stored as a finite number.

Science tells us that space may in fact be finite, but we don't know about time yet. Luckily it doesn't matter! The beings in charge of the simulator don't need to store the exact state of the universe at all points in time. All they need to do is store five values for each and every particle in the universe: what type of particle it is, its X, Y and Z coordinates, and its current velocity (velocity being its speed AND direction). Then you just iterate from one moment of time to the next by your unit of Planck time forever, throwing out all the old data each step.
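
Here's that bookkeeping sketched in Perl - every name and number below is made up, but it shows how little state one moment of the universe actually needs:

    use strict;
    use warnings;

    # One record per particle; positions in Planck lengths. Step the
    # whole universe forward one Planck time per iteration, discarding
    # the old state as we go.
    my @particles = (
        { type => 'electron', x => 0, y => 0, z => 0,
          v    => [1, 0, 0] },   # displacement per tick (made-up value)
    );

    sub tick {
        for my $p (@particles) {
            $p->{x} += $p->{v}[0];
            $p->{y} += $p->{v}[1];
            $p->{z} += $p->{v}[2];
            # (A real simulator would also apply forces and collisions here.)
        }
    }

    tick() for 1 .. 3;
    printf "after 3 ticks: x=%d y=%d z=%d\n", @{ $particles[0] }{qw(x y z)};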

Physics, what a beautiful thing.

PS - All this is based on A) my interpretation of what smart people have written and B) really confusing stuff.

I think that the easiest way to make sense of this is to consider all objects as moving at constant speed through 4-D spacetime. Thus the faster you travel through the physical dimensions, the slower you travel through time. If you quantize the "distance" you travel in spacetime and take into account relativistic time and length dilation, then things work out, well... beautifully.
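
For what it's worth, here's that claim written out properly (in LaTeX, since plain text mangles the symbols - and this is my amateur transcription of the standard relativity result): the four-velocity of any massive object always has the same magnitude, c.

    % Four-velocity magnitude is always c (tau is the object's proper time):
    \[
      c^2\left(\frac{dt}{d\tau}\right)^2
      - \left(\frac{dx}{d\tau}\right)^2
      - \left(\frac{dy}{d\tau}\right)^2
      - \left(\frac{dz}{d\tau}\right)^2
      = c^2
    \]
    % When the spatial terms grow, dt/dtau must grow to keep the balance:
    % less proper time passes per tick of coordinate time, which is
    % exactly time dilation.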

* I think it's more accurate to say that any two objects that are separated by less than this distance cannot be compared to each other in normal ways. It would be useless in ALL practical senses to view either object as being in front of, behind, above, below, windward, or leeward of the other object.

** I think, I'm not really sure why. Probably that pesky common sense thing.
