Scientists describe the behavior of our Universe in terms of the laws of physics, which have helped us understand much of the world around us, from the workings of our own bodies to the structure of the stars to how the Universe as we know it came into being[1]. These laws depend on certain constants, which are numbers in the physical laws not predicted by the theory itself, like the mass of the proton or the fine structure constant (which describes the strength of electric and magnetic forces).[2] What many scientists find interesting (or troubling) is that only a very small range of values for these numbers would allow for life as we know it to exist[3]. Some (including some scientists) then argue that this clearly means it is very improbable that life should have come to exist.[4] Some people believe that this shows that a divine intelligence exists (e.g. God) that arranged the laws of nature for the purpose of allowing life. This particular argument is sometimes referred to as the strong anthropic principle[5] (though there is some argument over whether that's the correct use of the term). Whatever you call it, many proponents of intelligent design have adopted it as one of their arguments. However, it turns out that there's really no good basis for saying that it's unlikely for the physical constants to have values that allow life.
There are a number of possible objections you could raise to the claim that life requires an improbable "fine tuning" of the physical constants. For one, you could question whether we really know enough about what all possible forms of life might be to say what conditions are required. You could also ask what the different possible laws of physics might have been, since you could obviously alter far more in the laws of physics than just the values of a few constants.[6] But let's leave aside those objections for the moment and just try to answer the question: How probable is it that the physical constants should have values that allow for life as we know it? It turns out that this is simply not a well-posed question of statistics, and there is no sensible way to say what the probability is. To illustrate why, let's take a simple example.
Imagine that there is a constant of nature, foo. This could be, for example, a time scale or a mass. Furthermore, suppose that foo must have a value between 0.01 and 0.1 in order for life as we know it to exist. Now, the local intelligent design guru says, "Look, foo has to be sooo finely tuned for life to exist. The universe must have been designed that way, because it's far too improbable to have happened by chance." Naturally, that might seem reasonable, but let's think about actually calculating the probability that foo takes on a value that allows the existence of life.
First, what is the range of possible values foo can take on? Who knows? We only have one universe with one value of foo.[7] Suppose that we can plug any positive value of foo into our equations and still get something that basically makes sense (for example, if foo is a mass), so we'll say that it could have any value foo > 0. But what's the probability distribution? How likely is foo to lie in a given range of values? Again, we have no way of saying. Maybe we can assume that all values of foo are equally likely? No, we can't: foo can take on values out to infinity, and a probability distribution that gives an equal chance of lying anywhere from zero to infinity can't add up to 1 (or any finite, non-zero number), so it isn't well-defined.[8] Let's be generous, though, and make the situation even simpler; let's suppose that we know that foo has to lie in the interval 0.01 < foo < 100. Now we can assume that all values are equally likely and calculate a probability for foo having a value that allows life to exist, namely P(0.01 < foo < 0.1) = 0.09/99.99 = 1/1111 ≈ 0.1%. From this we'd conclude that it's rather unlikely that life should have come about. But I claim that this reasoning is still faulty.
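Under these (generous, and arbitrary) assumptions, the arithmetic is straightforward. Here's a minimal sketch, with all the values taken from the made-up example above:

```python
# Assume (arbitrarily, as in the example) that foo is uniformly
# distributed on (0.01, 100), and that life requires 0.01 < foo < 0.1.
# Both ranges are invented purely for illustration.
foo_min, foo_max = 0.01, 100.0
life_min, life_max = 0.01, 0.1

# For a uniform distribution, the probability is just the length of the
# favorable interval divided by the length of the whole interval.
p_life = (life_max - life_min) / (foo_max - foo_min)
print(p_life)  # 0.09 / 99.99 ≈ 0.0009, i.e. about 0.1%
```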
The problem is that we chose a uniform distribution of probability (where every value is equally likely) completely arbitrarily. We could just as easily have chosen a different distribution, for example one that is peaked in the middle or on one side. Saying that all values are equally likely is saying something very specific about this constant, something we have no evidence for. And the outcome of our calculation depends entirely on this arbitrary choice. Still, you might object that a uniform distribution is somehow the most "natural" choice when we don't know anything about the value. There is another way to see that there really isn't any one natural choice.
In fact, using the logic above with the uniform distribution, our result depends entirely on the arbitrary choice of how we write our equation. The laws of nature we're discussing were written in terms of the constant foo, but we could just as easily have defined things in terms of a constant bar, where foo = 1/bar. If foo were a time scale, then bar would be a frequency, and if foo were a mass (and you're working in particle physics units where hbar = c = 1) then bar would be a length. Each is a valid, reasonable way to write down the laws of nature.[9] Which you choose is just arbitrary. We said before that foo must lie between 0.01 and 100 and life could only occur if 0.01 < foo < 0.1, so this implies that 0.01 < bar < 100 and life requires 10 < bar < 100 (just using the fact that bar = 1/foo).
Now, suppose that we'd never heard of foo and had only ever seen the laws of physics written in terms of bar. If we had started out with our laws of physics in terms of bar and used the same logic as before, we would have assumed that every allowed value of bar is equally likely. In that case we'd find that the probability of a value consistent with life is P(10 < bar < 100) = 90/99.99 = 1000/1111 ≈ 90%. We would now conclude that life is quite likely to have come about, and the only thing that changed was how we were writing down the law of nature before applying our statistical reasoning. Our conclusion changed because assuming a uniform probability distribution for foo implies a completely different, non-uniform probability distribution for bar, and vice versa. In fact, our arbitrary choice of how to write down the physical constant before applying a uniform probability distribution is completely equivalent to just choosing an arbitrary probability distribution in the first place.[10] This shows that the logic of assigning a uniform probability distribution (where every value is equally likely) to a constant whose distribution is not known is flawed: it gives a different probability depending on how we write down the physical law and choose to define the constant, so any answer we might get would be completely determined by our arbitrary choice.
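To make the contrast concrete, here's a small sketch running both calculations side by side, deriving the bar ranges directly from the foo ranges (all numbers are the invented ones from the example):

```python
def uniform_prob(lo, hi, a, b):
    """P(a < x < b) for x uniform on (lo, hi), with (a, b) inside (lo, hi)."""
    return (b - a) / (hi - lo)

# Invented ranges from the example, written in terms of foo.
foo_lo, foo_hi = 0.01, 100.0   # assumed possible range of foo
foo_life = (0.01, 0.1)         # assumed life-permitting range

# The same physics written in terms of bar = 1/foo.  Inverting flips
# the endpoints of each interval.
bar_lo, bar_hi = 1 / foo_hi, 1 / foo_lo        # (0.01, 100.0)
bar_life = (1 / foo_life[1], 1 / foo_life[0])  # (10.0, 100.0)

p_foo = uniform_prob(foo_lo, foo_hi, *foo_life)  # ≈ 0.0009 (about 0.1%)
p_bar = uniform_prob(bar_lo, bar_hi, *bar_life)  # ≈ 0.9001 (about 90%)
print(p_foo, p_bar)
```

Same physics, same life-permitting condition, wildly different "probabilities"; the only thing that changed is which variable got the uniform distribution.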
So it seems that even being very generous and granting a lot of things, we find no reasonable way to assign a probability to how likely the fundamental constants are to lie in a range that allows life, and as I mentioned, this doesn't even take into account the other objections that might be raised. Fundamentally, if we don't know anything about what values are possible and the relative likelihood of each, there's no way we can say how likely a certain range of values is. If a physical constant has been defined so that the range of values necessary for life happens to be small compared to 1, then we're tempted to assume the probability of this occurring is small without actually trying to reason out the value of the probability as we did above, and this is the root of the mistake. We can then conclude that any talk of the "fine tuning" of the fundamental physical constants to allow for life just isn't justified by fact and reason.
You could ask a related question, though: Why do the fundamental constants have the values they do? Many scientists are interested in this question, but what we've talked about here doesn't help answer it at all. One answer that some people have tried to give is in terms of the weak anthropic principle. But that's a whole different discussion; you should check out the anthropic principle node if you want to learn more about it.
1. By "the Universe as we know it" I mean atoms, stars, etc.
2. Depending on the theory you're working with and the way you choose to write it down, which are the fundamental constants and which are derived values may change (e.g. the Higgs mass might be the fundamental constant that determines other particles' masses). Here I'm concerned with the fundamental constants in whatever form of the laws is under consideration.
3. See, for example, remarks in the essay "Is the End in Sight for Theoretical Physics?" in Stephen Hawking's Black Holes and Baby Universes.
4. See, for example, physicist Michio Kaku's book Hyperspace (pp. 258-259).
5. One example is the previously mentioned section of Kaku's book.
6. For starters, why not make the gravitational force proportional to 1/r³?
7. I'm leaving aside here scenarios in which there are many "universes" with different constants, as those are, at present, without any strong evidence. Furthermore, we have to be quite skeptical about the possibility of substantiating any hypothesis involving alternate universes with which we can't interact.
8. You can see there's something screwy with the idea in the following way: if there's an equal chance that the value lies anywhere between 0 and 10, then the probability it lies in an interval of length 1 is 1/10. If the value is equally likely to be between 0 and 100, then the probability it will lie in a length-1 interval is 1/100. The larger you make the set of possible values, the smaller the probability of it lying in any particular interval. If you try to keep the total probability equal to 1 (or any finite value), then the probability of lying in any particular interval goes to zero. But the probability can't be zero everywhere!
9. Physicists switch between things like this all the time. For example, they may talk alternately about the Planck length or the Planck mass, which are related in essentially this way.
10. If foo is uniformly distributed, then any random variable on the probability space may be obtained as bar = g(foo) for an appropriate measurable function g. You might worry that if foo has dimensions, then it only makes sense to apply a function g if it's homogeneous. You can get around this by defining foo = C*x, where C is a fixed, arbitrary constant with the dimensions of foo and x is a dimensionless number; then the value of foo is determined by x alone, and any mapping y = f(x) makes sense.
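As a quick numerical sanity check of the point in note 10, here's a sketch (using the invented ranges from the example in the main text): if foo is drawn uniformly from (0.01, 100), then bar = 1/foo is very far from uniform.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible
# Draw foo uniformly from the assumed range (0.01, 100) and invert.
bars = [1 / random.uniform(0.01, 100.0) for _ in range(100_000)]

# If bar were also uniform on (0.01, 100), roughly half the samples
# would exceed 50.  In fact bar > 50 requires foo < 0.02, which has
# probability (0.02 - 0.01) / 99.99 ≈ 0.0001 under the uniform prior
# on foo.
frac_above_50 = sum(b > 50 for b in bars) / len(bars)
print(frac_above_50)  # tiny -- nowhere near 0.5
```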