A version of the Law of the iterated logarithm can be stated as follows:

Let {a_n} be a sequence of i.i.d. random variables with mean zero and variance σ^2, and let S_n be its partial sum, i.e., S_n = a_1 + a_2 + ... + a_n. Then

lim sup_{n -> ∞} |S_n| / (σ*sqrt(2*n*log(log(n)))) = 1, almost surely.

Here log(x) means the natural logarithm of x.
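
To get a feel for what the statement measures, here is a minimal Python sketch of my own (the function name and parameters are just illustrative choices): it simulates one fair ±1 random walk, for which σ = 1, and records how large the ratio |S_n|/sqrt(2*n*log(log(n))) gets. A finite run can only hint at the law, since the lim sup is a statement about n -> ∞, and the log(log(n)) term makes convergence extremely slow.

    import math
    import random

    def max_lil_ratio(n_steps, n_min=10**4, seed=0):
        """Max of |S_n| / sqrt(2*n*log(log(n))) over n_min <= n <= n_steps,
        for one fair +/-1 random walk (so sigma = 1)."""
        rng = random.Random(seed)
        s = 0
        best = 0.0
        for n in range(1, n_steps + 1):
            s += 1 if rng.random() < 0.5 else -1
            if n >= n_min:  # skip small n, where log(log(n)) is tiny or undefined
                best = max(best, abs(s) / math.sqrt(2 * n * math.log(math.log(n))))
        return best

    # The law says the ratio gets arbitrarily close to 1 infinitely often (and
    # eventually never goes much above 1), but in a run of "only" 10**6 steps
    # this maximum usually comes out well below 1 -- convergence is very slow.
    print(max_lil_ratio(10**6))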

To put it more plainly, suppose you do a random walk starting from the origin: every second you take one step right or left with equal probability, each step independent of the earlier ones. Intuitively, as time goes by, you will reach places farther and farther from the origin. Indeed, according to the central limit theorem, after n seconds (for very large n) your position will approximately follow a Gaussian distribution with a standard deviation of sqrt(n) steps. In other words, after n seconds you have about a 5% chance of being more than 1.96*sqrt(n) steps from the origin, about a 0.1% chance of being more than 3.29*sqrt(n) steps from the origin, and so on. However, it is always possible that you end up as far as 10*sqrt(n) or even 100*sqrt(n) steps from the origin (though never farther than n steps); it is just very unlikely.
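
If you want to check those percentages, here is a small Monte Carlo sketch of my own (the function name and parameters are illustrative, and it assumes the fair ±1 walk just described): it estimates the chance of ending up more than 1.96*sqrt(n) or 3.29*sqrt(n) steps from the origin after n steps, which by the central limit theorem should come out near 5% and 0.1%.

    import math
    import random

    def tail_probability(n, multiple, trials=20000, seed=1):
        """Estimate P(|S_n| > multiple*sqrt(n)) for a fair +/-1 walk of n steps."""
        rng = random.Random(seed)
        threshold = multiple * math.sqrt(n)
        hits = 0
        for _ in range(trials):
            s = sum(1 if rng.random() < 0.5 else -1 for _ in range(n))
            if abs(s) > threshold:
                hits += 1
        return hits / trials

    n = 1000
    # 20 million coin flips per estimate, so this takes a little while; the
    # second probability is tiny, so its estimate is also rather noisy.
    print(tail_probability(n, 1.96))  # should be roughly 0.05
    print(tail_probability(n, 3.29))  # should be roughly 0.001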

This theorem, in a sense, puts a limit on this variation of your distance from the origin. It says that in almost every such random walk ("almost surely" means "with probability one", so it can fail to happen in principle but not in real life), you are going to get farther than sqrt(n), or 10*sqrt(n), or 100*sqrt(n), etc., at some (possibly very large) time n. Indeed, you will get farther than sqrt(1.999*log(log(n)))*sqrt(n) at infinitely many times n. Getting farther than sqrt(2.001*log(log(n)))*sqrt(n), however, is a rare event: it happens only a finite number of times, after which you will never get this far again during this random walk (note that "this far" itself keeps growing with the time n)!
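
To make "possibly very large" concrete, here is a back-of-the-envelope sketch of my own: it simply evaluates the factor sqrt(2*log(log(n))) by which the law multiplies sqrt(n), to show how long you would have to walk before that boundary itself reaches 10*sqrt(n). (This is only a rough indication of the timescales involved, not a computation of when the walk actually first gets that far.)

    import math

    # The law's boundary is about sqrt(2*log(log(n))) times sqrt(n).
    for n in (10**3, 10**6, 10**12, 10**100):
        print("n = 10^%d: sqrt(2*log(log(n))) = %.2f"
              % (round(math.log10(n)), math.sqrt(2 * math.log(math.log(n)))))

    # For the factor to reach 10 we need log(log(n)) = 50, i.e. n = exp(exp(50)),
    # a number with roughly 2.25e21 decimal digits.
    print("n needed for a factor of 10: exp(exp(50)), about 10**(2.25e21)")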

Sounds a little complex? But it has to be, since simpler propositions, such as "you will never get farther than 5*sqrt(n) before time n", obviously cannot be true.

The fascinating part of this law is the log(log(n)) in it, a function which does not show up very often in fundamental mathematics (except perhaps in the complexity analysis of algorithms), yet here it appears in a very strong and important result about a problem rather close to everyday life.

References:

  • http://en.wikipedia.org/wiki/Law_of_the_iterated_logarithm

My knowledge of advanced probability theory is not very solid, so please correct me where needed.
