This proof is about as simple as they come. Apparently Gregory Chaitin counts bits to get an information-theoretic proof, but that just complicates this argument by taking logarithms.

So here goes. Suppose there are only $k$ prime numbers, $2 = p_1 < \ldots < p_k$. Fix some $N$ (we'll see below how large it needs to be). Every number $n \le N$ has a representation

$n = p_1^{i_1} \cdot \ldots \cdot p_k^{i_k}$
(with integer exponents $i_j \ge 0$ -- this is the easy, existence half of the fundamental theorem of arithmetic; e.g. $360 = 2^3 \cdot 3^2 \cdot 5$). And by comparing sizes, since every $p_j \ge 2$, obviously
$0 \le i_j \le \log_{p_j} n \le \log_2 n \le \log_2 N.$
So there are at most $1 + \log_2 N$ possible values for each $i_j$ (the exponent $0$ counts too!), and no more than $(1 + \log_2 N)^k$ possible combinations for the whole tuple of exponents $\{i_j\}_j$ -- and each of the numbers $1, \ldots, N$ must be represented by one of these combinations.
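As a sanity check, here's a minimal Python sketch of this counting (my own illustration; the name count_products and the choice of primes {2, 3} are hypothetical, not part of the original argument). It enumerates every number $\le N$ expressible as a product of powers of a fixed set of primes, then compares that count against the $(1 + \log_2 N)^k$ bound and against $N$ itself:

    from math import log2

    def count_products(primes, N):
        """Count distinct n <= N of the form p_1^i_1 * ... * p_k^i_k, i_j >= 0."""
        nums = {1}                        # start from the empty product
        for p in primes:
            grown = set()
            for n in nums:
                m = n
                while m <= N:             # fold in p^0, p^1, p^2, ... while <= N
                    grown.add(m)
                    m *= p
            nums = grown
        return len(nums)

    primes = [2, 3]                       # pretend these were the only primes
    for N in (100, 10**4, 10**6):
        reachable = count_products(primes, N)
        bound = (1 + log2(N)) ** len(primes)
        print(f"N={N}: reachable={reachable}, bound={bound:.1f}")

Already at $N = 100$ only 20 of the 100 numbers are reachable (the bound allows about 58), and the gap only widens as $N$ grows.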

But for sufficiently large $N$,

$N > (1 + \log_2 N)^k$
(note that by our assumptions, $k$ here is some constant! Taking $\log_2$ of both sides, the claim is just that $\log_2 N > k \log_2(1 + \log_2 N)$, and any quantity eventually outgrows a constant multiple of its own logarithm). So for sufficiently large $N$ we simply don't have enough combinations of powers of $k$ primes to produce $N$ different numbers $\le N$... This contradiction shows we must have infinitely many prime numbers.
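To get a feel for the sizes involved (a numeric check of my own, not in the original): take $k = 10$ and $N = 2^{100}$. Then
$(1 + \log_2 N)^k = 101^{10} \approx 1.1 \times 10^{20}$, while $N = 2^{100} \approx 1.3 \times 10^{30}$,
so even ten primes would fall short of covering $1, \ldots, N$ by some ten orders of magnitude.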

In fact, we can reuse the above argument to get a rough lower bound on the number of primes $\le N$ -- a first step toward the prime number theorem.
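For instance (my own elaboration of that remark -- not spelled out above): run the same count with $k = \pi(N)$, the actual number of primes $\le N$. Every $n \le N$ factors over those primes, so
$N \le (1 + \log_2 N)^{\pi(N)}$, hence $\pi(N) \ge \log_2 N / \log_2(1 + \log_2 N)$,
which grows like $\log N / \log\log N$. That is far weaker than the $N/\log N$ of the prime number theorem, but it is already unbounded.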
