A real number is called normal if you can "find" every finite digit sequence in its expansion. There are several variant definitions in use, and it's often important to determine which one is meant.
First, there's the question of how often each sequence must occur.
- The weakest demand is that the digit sequence simply occur somewhere in the expansion of the number.
- Next, we can demand that the digit sequence appear infinitely many times in the expansion.
- Finally, we can demand that the digit sequence appear with the same density it would have in a truly random, independent sequence of digits: in decimal, every digit sequence of length k must appear with density 10^-k (a counting sketch follows this list).
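For the last definition, here is a minimal sketch in Python of what "density" means for a finite prefix: count every length-k block that occurs and compare against the expected 10^-k. The `block_densities` name and the choice of prefix (the opening decimals of π, whose normality is conjectured but unproven) are my own illustrative choices, and no finite prefix can actually establish normality, which is a statement about the limit.

```python
from collections import Counter

def block_densities(digits: str, k: int) -> dict:
    """Empirical density of every length-k block occurring in a finite
    decimal digit string. Normality (density sense, base 10) asks that
    each of the 10**k possible blocks tend to density 10**-k as the
    prefix grows without bound."""
    total = len(digits) - k + 1
    counts = Counter(digits[i:i + k] for i in range(total))
    return {block: c / total for block, c in counts.items()}

# The first 50 decimals of pi; its normality is conjectured, not proven.
prefix = "14159265358979323846264338327950288419716939937510"
for digit, density in sorted(block_densities(prefix, 1).items()):
    print(digit, round(density, 2), "vs expected", 0.1)
```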
Additionally, there's a question of bases:
- We can demand any of the above in base 10 (decimal) only, or in binary, or in some other single prespecified base; or
- We can demand the above in every base at once (a sketch of computing digits in an arbitrary base follows this list).
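Whichever base is chosen, checking any of these conditions means looking at the digit expansion in that base, and the expansion changes completely from base to base. A small sketch of extracting digits in an arbitrary base (the `digits_in_base` name is mine; exact Fraction arithmetic is used only to dodge floating-point error):

```python
from fractions import Fraction

def digits_in_base(x: Fraction, base: int, n: int) -> list:
    """First n digits after the point of x (with 0 <= x < 1) in the
    given base: repeatedly multiply by the base and peel off the
    integer part."""
    out = []
    for _ in range(n):
        x *= base
        d = int(x)
        out.append(d)
        x -= d
    return out

# The same number looks very different in different bases:
print(digits_in_base(Fraction(1, 3), 10, 8))  # [3, 3, 3, 3, 3, 3, 3, 3]
print(digits_in_base(Fraction(1, 3), 2, 8))   # [0, 1, 0, 1, 0, 1, 0, 1]
```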
No matter which choice we make, almost every real number (in the sense of Lebesgue measure) turns out to be normal: a randomly chosen number in the interval [0,1) is normal with probability 1. But proving that a particular number is normal is usually very hard; only a few explicitly constructed examples are known to be normal.
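One of those few explicit examples is Champernowne's constant 0.123456789101112..., formed by concatenating the positive integers; Champernowne proved it normal in base 10, though it is not known to be normal in every base. A rough empirical look at its single-digit frequencies (the helper below is my own sketch, and the convergence to density 0.1 per digit is quite slow):

```python
from collections import Counter
from itertools import count

def champernowne_prefix(n: int) -> str:
    """First n decimal digits (after the point) of Champernowne's
    constant, built by concatenating 1, 2, 3, ... in base 10."""
    pieces, length = [], 0
    for i in count(1):
        s = str(i)
        pieces.append(s)
        length += len(s)
        if length >= n:
            break
    return "".join(pieces)[:n]

digits = champernowne_prefix(100_000)
freq = Counter(digits)
for d in "0123456789":
    # Proven to tend to 0.1 for every digit, but convergence is slow.
    print(d, round(freq[d] / len(digits), 3))
```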
Note that rational numbers have an eventually periodic expansion in every base, so only finitely many distinct blocks of each length ever appear in them; they fail even the weakest definition above, and so are never normal (see the sketch below).
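For instance, 1/7 = 0.142857142857...: only the blocks occurring inside the repeating cycle 142857 ever show up, so the digit 3 and the block 99, among many others, never appear at all. A quick sketch (the `decimal_period` helper is mine):

```python
def decimal_period(p: int, q: int) -> str:
    """Repeating block of the decimal expansion of p/q (0 < p < q),
    found by long division: only finitely many remainders are possible,
    so the remainders, and hence the digits, must eventually repeat."""
    seen, digits, r = {}, [], p
    while r not in seen:
        seen[r] = len(digits)
        r *= 10
        digits.append(str(r // q))
        r %= q
    return "".join(digits[seen[r]:])

period = decimal_period(1, 7)
print(period)                # 142857
print("3" in period * 2)     # False: the digit 3 never appears in 1/7
print("99" in period * 2)    # False: neither does the block 99
```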
AxelBoldt asks me to mention Gregory Chaitin's Omega. Chaitin's constant Ω, the halting probability of a universal Turing machine (one such Ω for each machine), is normal by every definition above. The hand-waving argument is that Ω is algorithmically random: any statistical bias in its digits would make them compressible, which would in turn let you predict the halting behaviour of programs better than chance. Unfortunately, Ω is also the consummate example of an uncomputable number (for essentially the same reason), so its value as an example is debatable.