Varying Speed of Light?

There have been theories going around in the undergrowth of theoretical physics for some years that propose that c, the speed of light in a vacuum, may have varied over the course of cosmological evolution. Jacob Bekenstein, John Moffat, Joao Magueijo, John Barrow and Andreas Albrecht have all contributed to "VSL", which is sometimes touted as an alternative to inflation to solve the horizon problem and other puzzles of cosmology. The basic idea is that light was much faster in the early Universe - many orders of magnitude faster - and then at some point jumped down to its current value. Thus, faraway parts of the observed Universe would have had time to communicate with each other, which could explain why they appear so uniform (homogeneous and isotropic).

Recently, measurements of varying alpha (alpha being the fine structure constant) by John Webb and the team at UNSW have revived speculation about varying c, since alpha is defined as e^2/(4 pi epsilon_0 h-bar c). A veritable storm of media coverage resulted, with headlines like "Was Einstein Wrong?", "Light is Slowing Down", "Is Light strolling along at hot-summer-day-pace?", etc.
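
For reference, this is just the standard SI definition: alpha is the dimensionless combination

    \alpha = \frac{e^2}{4\pi\epsilon_0 \hbar c} \approx \frac{1}{137.036}

Being a pure number, its value - and any change in its value - is the same whatever units you measure e, h-bar and c in. Keep that in mind for what follows.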

Problems

However, both VSL cosmologies and the interpretation of varying alpha as "varying c" suffer from a big problem - that of units. As you will have learnt by reading the other writeups in this node, the metre is defined as the distance travelled by light in 1/299,792,458 of a second. Thus, using this definition, c always takes the same value. What does "varying c" mean now?

Also, suppose you now take a different set of units: the standard second, and the length of a piece of metal containing a certain number of atoms laid end-to-end. The length of the piece of metal in metres will depend on the fine structure constant, because the sizes of the atoms - and hence of the bar - are set by electromagnetic interactions. So, if alpha varies over time, the speed of light in bits-of-metal per second will also vary. You could also take a different standard of time: a pendulum clock, say. But for every different set of units, the apparent variation in c will be different! Clearly the variation in c is not a physically well-defined thing.
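
To make this concrete, here is a toy calculation. The numbers for the bar (how many atoms it contains, and the size of the hypothetical change in alpha) are invented purely for illustration; the only physics input is that atomic sizes scale with the Bohr radius, a0 = h-bar/(m_e c alpha).

    # Toy illustration: the "speed of light" measured in bar-lengths per second
    # tracks alpha, even though c in metres per second is fixed by definition.
    C_SI = 299_792_458.0       # m/s, exact by definition of the metre
    A0_NOW = 5.29177e-11       # Bohr radius today, in metres

    def bar_length(alpha_ratio, n_atoms=1e10):
        """Length (in metres) of a bar of n_atoms atoms laid end to end.
        Atomic sizes go like a0 ~ hbar/(m_e c alpha), so if alpha were
        bigger by a factor alpha_ratio, the bar would be shorter."""
        return n_atoms * A0_NOW / alpha_ratio

    def c_in_bar_lengths_per_second(alpha_ratio):
        return C_SI / bar_length(alpha_ratio)

    print(c_in_bar_lengths_per_second(1.00))  # with today's alpha
    print(c_in_bar_lengths_per_second(1.01))  # alpha 1% bigger: c "varies" by 1%

Swap in a pendulum clock for the second and the answer changes again; none of these is "the" variation of c.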

The underlying principle is that only the variation in dimensionless numbers can be measured unambiguously. This is exactly what we do when we use units: we can say that the dimensionless ratio of the length of a table to the standard metre is 1.43. If the number turns out to be 1.45 tomorrow, something is clearly a bit odd, but you can't say definitely whether the table is bigger or the metre is smaller.

As I indicated, only variations in dimensionless numbers such as alpha can be measured unambiguously, and they are all you need to relate measurements made in one system of units to those made in another. This interconvertibility extends to the so-called VSL theories. Indeed, John Barrow has a paper in which he tells us that any theory of varying alpha that includes electromagnetism and general relativity can be written either as "varying c" or as "varying e".
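
To spell the interconvertibility out, take the definition of alpha given above and perturb it (holding epsilon_0 fixed):

    \frac{\delta\alpha}{\alpha} = 2\,\frac{\delta e}{e} - \frac{\delta\hbar}{\hbar} - \frac{\delta c}{c}

A measured delta-alpha can be booked entirely as a change in e, entirely as a change in c, or shared out among e, h-bar and c, depending on which dimensionful quantities your choice of units holds fixed. Only the left-hand side is independent of that choice.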

Is Einstein dead?

Now, when all that is said, varying alpha is still very strange. In a theory which had just EM and GR, it would not happen. So it tells us there is something beyond these two. But we knew this already - since the Standard Model cannot be the theory of everything. If we stick in some scalar fields, then it is relatively easy to get a theory that still respects the principles of general relativity but allows a solution where alpha and other stuff vary. Heck, temperature and density already vary over the evolution of the Universe - on a large scale, Lorentz invariance is already broken by these effects of the Big Bang. To say this again, the underlying theory may have Lorentz invariance, but we undoubtedly live in a solution that does not.
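
To give a flavour of what "sticking in some scalar fields" means, here is a schematic action along the lines of Bekenstein's varying-e proposal (the actual papers differ in details and coefficients):

    S = \int d^4x \, \sqrt{-g} \left[ \frac{R}{16\pi G}
        - \frac{1}{2}\,\partial_\mu\psi\,\partial^\mu\psi
        - \frac{1}{4}\, e^{-2\psi} F_{\mu\nu}F^{\mu\nu} \right]

The scalar psi controls the effective electromagnetic coupling, so alpha goes like e^{2 psi} and can vary as psi evolves over cosmological time, while the action itself remains perfectly generally covariant - no laws get rewritten by hand.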

When we come to the more radical VSL-type proposals, they hit another minefield: they are non-covariant. This means that Lorentz invariance is broken right there in the definition of the theory. Instead of using a dynamical, varying scalar field to induce varying alpha and (maybe) solve some cosmological problems, one introduces an arbitrary function c(t) which suddenly at some point jumps by several orders of magnitude. All sorts of principles like conservation of energy are broken at this point. There is no dynamical mechanism put forward to explain why it should happen this way: it's the cosmological equivalent of a deus ex machina. Technically, the theory is not a closed system of equations (it leaves open the choice of c(t)), so at the "jump" one has to make up a series of rules that are supposed to describe how the matter and radiation and stuff react to the sudden and enormous changes taking place. Sometimes the proponents of the theory wave their hands and talk about phase transitions, but no definite explanation has emerged from this as yet.
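
For concreteness, the sort of thing these proposals do (schematically, in the style of the Albrecht-Magueijo model; details vary from paper to paper) is to write the cosmological equations with a hand-picked function c(t), so that the Friedmann equation becomes

    \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k\,c(t)^2}{a^2}

with c(t) chosen to drop by many orders of magnitude at some epoch. Because c(t) is an external input rather than a dynamical field with its own equation of motion, the usual conservation law for rho picks up extra source terms wherever c changes - which is exactly the breakdown of energy conservation and covariance complained about above.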

Now remember that the value of c can always be set to the same number by a choice of units. When you have done this, the theory looks like a sudden, gigantic discontinuity in the contents of the Universe which happens for no apparent reason and solves the cosmological problems. The entire content of the proposal is in these rules for what happens at the jump - rules which can't be derived from an underlying action (a functional of the fields in the theory that determines its entire behaviour) but are put together according to what the authors think might be reasonable. This doesn't sound like theoretical physics to me.