One of the problems traditionally used to build public key cryptography and digital signature systems. It is a so-called 'one-way function': a function that is easy to compute in one direction, but hard to reverse. It is distinct from secure hashes in that it is easy to build zero-knowledge proofs around it.

The discrete logarithm problem is to determine 'e' in the expression g^e mod p, given only the value of the expression, the value of g, and the value of p. Here g is a (randomly chosen) generator, p is a large prime, and e is an integer. This is believed to be hard.
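A minimal sketch of the asymmetry, using Python's built-in three-argument pow. The parameter values below are illustrative choices, not recommended cryptographic parameters:

```python
# Forward direction: computing g^e mod p is fast even for huge exponents.
p = 2**127 - 1          # a Mersenne prime, used here purely for illustration
g = 3                   # assumed generator of a large subgroup (unchecked here)
e = 0x1234567890ABCDEF  # the secret exponent

h = pow(g, e, p)        # easy: O(log e) modular multiplications

# Reverse direction: recovering e from (g, p, h) is the discrete
# logarithm problem -- no efficient general algorithm is known.
print(h)
```

The forward computation finishes instantly; nothing comparably fast is known for the reverse.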

Discrete logarithms are arguably harder to break than RSA. RSA is vulnerable to advances in the calculation of discrete logarithms, to advances in factoring, and to attacks specific to RSA itself. An oracle for solving RSA, on the other hand, is not known to help in solving either the discrete logarithm problem or factoring.

Diffie-Hellman key exchange is based on the discrete logarithm problem, as are ElGamal and DSS. A description of how to build a signature system from discrete logarithms is in Discrete Logarithm Signatures.

Suppose we have an associative multiplication operation "*" (a group always has one, but we can also find them in other places -- see e.g. the carry lookahead adder). Fix some g in the domain of the multiplication. Fast exponentiation lets us compute g^n in O(log n) multiplications. Since the size of n is log_2 n bits, this is a linear time algorithm for exponentiation. The other direction -- computing n from g^n -- is the "discrete logarithm problem", and turns out to be a lot harder in many cases.
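Fast exponentiation works for any associative operation, not just modular multiplication. A sketch of the square-and-multiply idea, generic over the operation (the function name and parameters are mine, for illustration):

```python
def fast_power(g, n, mul, identity):
    """Compute g * g * ... * g (n times) in O(log n) applications of
    mul, for any associative operation mul with the given identity."""
    result = identity
    base = g
    while n > 0:
        if n & 1:                    # this bit of n is set:
            result = mul(result, base)   # multiply in the current square
        base = mul(base, base)       # square: base = g^(2^i)
        n >>= 1
    return result

# Example: exponentiation modulo a small prime (illustrative values)
p = 101
print(fast_power(5, 13, lambda a, b: a * b % p, 1))
```

With 13 = 0b1101, only four squarings and three multiplications are needed, instead of twelve naive multiplications.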

This makes discrete logarithms an extremely elegant one-way function. Its security is known to be at least as good as the security of factorisation (the other leading brand of one-way function) -- if you can break discrete logarithms, you can factorise. The other direction -- whether it is no harder than factorisation -- is not known. Discrete logarithms are also appealing because the input and output have such similar structure -- both are typically bit strings of the same length. Diffie-Hellman key exchange, as well as DSA and DSS, are based on this one-way function.

We'll be working in a group G, since the problem there is hard enough. Since we're fixing g once and for all, only the elements {g^n : n ∈ Z} will be of interest¹. So only a cyclic group G, with g a generator (i.e. all elements of G are powers of g), shall be of interest. We consider only the finite case, so G is a finite cyclic group; assume |G| = N (N will be very large, of course, or the bad guys can solve the discrete logarithm problem merely by enumerating all elements of G).

As G is finite cyclic, it is isomorphic to C_N, the group of integers modulo N, with addition (modulo N) as the group operation, and 1 as the generator. Note that we've exchanged "multiplication" here for "addition", so "exponentiation" becomes "multiplication", and "logarithm" becomes "division". Put this way, the problem is TRIVIAL! For instance, say we take N = 2^32, and work in G = C_N with the generator 1. This is just computer arithmetic on 32-bit words. Even worse, our exponentiation is just adding together n 1's, and (of course) n × 1 ≡ n mod 2^32. If asked for the discrete log of 0x12345678, one can immediately see that adding together 0x12345678 1's will result in 0x12345678. OUCH. Picking other generators (any odd number will do for this N) doesn't make things much harder, either: we can find the multiplicative inverse modulo N of any odd number, and multiplying by that inverse solves the discrete log problem with that base.

To make discrete logs hard, we add a twist. The problem with C_N is that we know far too much about its structure. We'll still work with a cyclic group, but we'll obfuscate its structure in some way. G will still be isomorphic to C_N, but we won't know how to compute the isomorphism! Of course, we still need to know how to perform arithmetic on our G. It turns out that algebra supplies us with just what we need.

The easiest way is to pick some prime N = p, and work with F_p^*, the multiplicative group modulo p. This can be represented by the numbers 1, ..., p-1, with multiplication modulo p. And it turns out that this group is cyclic (more on that, and other cyclic groups, further on). Suppose g generates this group². Knowing n, we can easily compute g^n (in time linear in the size of n, see above!). But h = g^n is just some number among 1, ..., p-1, and (for large p) the binary representation of h offers no (known) hints to what n might be.
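For a group small enough to search, the log can still be found in about sqrt(p) steps. A sketch of Shanks' baby-step giant-step method, workable only for toy sizes (the function name and the small parameters are illustrative; 2 does generate F_101^*):

```python
from math import isqrt

def discrete_log(g, h, p):
    """Baby-step giant-step: find n with g^n = h (mod p), in
    O(sqrt(p)) time and memory -- fine for tiny p, hopeless at
    cryptographic sizes."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j
    giant = pow(g, -m, p)                        # g^(-m) mod p (Python 3.8+)
    y = h
    for i in range(m):                           # giant steps: h * g^(-i*m)
        if y in baby:
            return i * m + baby[y]               # n = i*m + j
        y = (y * giant) % p
    return None                                  # h not a power of g

p, g = 101, 2     # 2 generates F_101^*
n = 37
h = pow(g, n, p)
assert discrete_log(g, h, p) == n
```

Square-root time is still exponential in the bit length of p, which is why a 2048-bit prime remains out of reach.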

Another cyclic group we might pick is the multiplicative group of a finite field. An amazing fact is that the multiplicative group of a finite field is cyclic -- so we can use that. Note that a finite field has size p^k for some prime p; taking k=1, we're left with the previous case.

Here's some more about how we do this, but you'll need to know a bit of algebra for it. The structure of F_(p^k)^* is quite intricate, but we know enough about it to perform arithmetic there. The typical representation is as the extension field of degree k over F_p, the field of integers modulo p. If we pick a primitive polynomial p(x) of degree k over F_p, then our cyclic group is just

{x^n mod p(x) : n = 0, 1, ..., p^k - 2},
with multiplication modulo p(x) as the group operation. When p=2, we can multiply (and therefore exponentiate) and divide in F_(2^k)^* very easily, but discrete logarithms appear intractable.
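The "very easy" arithmetic in GF(2^k) can be sketched in a few lines: field elements are bit strings, addition is XOR, and multiplication is shift-and-XOR with reduction. The function name is mine; 0x11D is a standard primitive polynomial for GF(2^8):

```python
def gf2k_mul(a, b, poly, k):
    """Multiply two elements of GF(2^k), with elements encoded as
    integers whose bits are polynomial coefficients, reducing by the
    (assumed primitive) degree-k polynomial 'poly'. Coefficient
    addition is XOR -- there are no carries."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # add a into the result (XOR = GF(2) addition)
        b >>= 1
        a <<= 1
        if a >> k:          # a reached degree k: reduce modulo poly
            a ^= poly
    return r

# GF(2^8) with the primitive polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D)
poly, k = 0x11D, 8
xpow = 1
for _ in range(8):
    xpow = gf2k_mul(xpow, 0b10, poly, k)  # repeatedly multiply by x
# x^8 reduces to x^4 + x^3 + x^2 + 1, i.e. 0x1D
assert xpow == 0x1D
```

Exponentiation is then fast exponentiation over gf2k_mul; the discrete logarithm, as before, has no comparably cheap inverse.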


Don't believe everything you read on the WWW. Especially not if you don't trust the writer to know what he's talking about!

In fact, discrete logarithms in GF(2^k) are (barely) tractable using current technology. Lots of preprocessing is needed, but it's only needed once per group. Bruce Schneier recommends NOT using this field in any application. (GF(p) for large prime p is still safe, though, if p-1 has a large prime factor.)

Still other cyclic groups are used, but these two should give the basic taste of the problem.

Note that publishing the base g and the group G = <g> is thought to make no difference to the hardness of the problem. Even better, no useful attacks are known using precomputation -- so everyone can use the same g and G, and still consider themselves safe! Indeed, ISAKMP specifies a very small number of choices of groups and bases for Diffie-Hellman key exchange. The choice of group changes the size of N (there is a 512-bit group, a 1024-bit group, a 1536-bit group, etc.) and the computation time for key exchange. But cryptographers today consider discrete logarithms hard, even if everyone in the world used the same group.

If N (the size of the cyclic group) has only small factors, then we can solve the discrete logarithm problem quite quickly (this is the Pohlig-Hellman attack). When using F_p^*, which has N = p-1 elements, it is wise to choose a safe prime p -- one for which (p-1)/2 is also prime -- so that p-1 has a large prime factor and the problem stays hard. So using a well-vetted public prime p can actually make for better security than guessing your own.
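The safe-prime condition is easy to state in code. A toy sketch with trial-division primality testing (the function names are mine; real parameter generation uses probabilistic primality tests on much larger numbers):

```python
def is_prime(n):
    """Trial division -- adequate only for the tiny numbers used here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_safe_prime(p):
    """A safe prime p has p = 2q + 1 with q prime, so F_p^* contains a
    subgroup of large prime order q, blocking small-factor attacks."""
    return is_prime(p) and is_prime((p - 1) // 2)

print(is_safe_prime(23))    # 23 = 2*11 + 1, and 11 is prime
print(is_safe_prime(101))   # 100 = 2^2 * 5^2: only small factors
```

The standardised Diffie-Hellman primes are chosen with exactly this structure (among other properties).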

One of the things that (we think and hope) makes discrete logarithms so hard to compute is the discrete and unordered setting. Continuous logarithms are, by comparison, extremely easy to compute. For instance, we have the extremely useful power series

ln(1+x) = x - x^2/2 + x^3/3 - x^4/4 + ...,
which lets us evaluate (or at least approximate, but that's enough for practical algorithmic purposes) logarithms in the real numbers, in the p-adic rationals, and in other topological groups where we have norms and convergence. For the natural numbers, we can easily check whether something is a power of g ∈ N by using the ordering we have. If we had something similar for the discrete case, discrete logarithms would be doable. For instance, computing the inverse (which see) is often very easy, because we have an appropriate norm, and can use some power series or similar trick. Discrete logarithms are hard because (apparently!) the analytic machinery of calculus doesn't work.
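The contrast is stark in code: the continuous logarithm falls to a few terms of the series above (the function name and term count are illustrative choices):

```python
from math import log

def ln_series(x, terms=60):
    """Approximate ln(1+x) with the alternating power series
    x - x^2/2 + x^3/3 - x^4/4 + ...; converges for |x| < 1."""
    return sum((-1) ** (n + 1) * x ** n / n for n in range(1, terms + 1))

# The continuous logarithm yields to analysis, to any precision we like:
print(abs(ln_series(0.5) - log(1.5)) < 1e-12)
```

No analogous convergent series exists for recovering n from g^n in a finite cyclic group -- which is exactly the point.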


  1. However, in a larger group G, the question "is h ∈ G a power of g?" is also of interest (and hard). For current practical purposes, it is less important.
  2. It's even non-trivial to find a generator. However, it's somewhat easier to test whether a given g generates F_p^* or not... if you know the factorisation of p-1. In any case, cryptosystems use well-known, well-publicised choices of p and g; computing the discrete logarithm is (thought to be) hard enough, and (unless the choice of p and g is particularly bad!) picking random p's and g's probably adds little security.
