Karatsuba multiplication (sometimes also called Karatsuba-Ofman multiplication) is an algorithm developed by Anatolii Alekseevich Karatsuba and Yu. Ofman in 1962. It is a divide and conquer algorithm for multiplying two numbers together with fewer operations than the normal grade school algorithm.
The basis for Karatsuba multiplication is the following equation:
(a + b*2^n) * (c + d*2^n) = a*c + ((a+b)*(c+d) - a*c - b*d)*2^n + (b*d)*2^(2n)
where (a + b*2^n) is the first number we want to multiply (split into two parts), and (c + d*2^n) is our other number (also split in half).
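The identity is easy to check numerically. Here is a minimal sketch that splits two sample 32-bit values at a hypothetical split point of n = 16 bits and verifies that the Karatsuba expression equals the ordinary product:

```python
# Check the Karatsuba identity for one sample split (n = 16 is an
# arbitrary choice for this illustration).
n = 16
x, y = 0xDEADBEEF, 0xCAFEBABE

a, b = x & ((1 << n) - 1), x >> n   # x = a + b*2^n (low and high halves)
c, d = y & ((1 << n) - 1), y >> n   # y = c + d*2^n (low and high halves)

karatsuba = a*c + ((a+b)*(c+d) - a*c - b*d) * 2**n + b*d * 2**(2*n)
assert karatsuba == x * y
```

Note that (a+b)*(c+d) - a*c - b*d expands to a*d + b*c, the middle term of the ordinary expansion, which is why the identity holds.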
The important thing to realize about normal multiplication algorithms is that they are O(n^2): multiplying two n-word numbers together takes about n^2 word multiplications. Notice that the right-hand side of the equation above can be computed using only 3 multiplications of (n/2)-word numbers (since a*c and b*d each need to be computed only once), which reduces the number of multiplications required: 3*(n/2)^2 as compared to n^2. And indeed it does, but because of various overheads, such as the additions and subtractions, additional storage space, and so on, Karatsuba is only used for larger numbers; usually 160 to 500 bits is sufficiently large to overcome the overhead.
Now, this is where the divide and conquer part comes in. The subcomponents a*c, (a+b)*(c+d), and b*d themselves require multiplications, and in fact we can use Karatsuba to compute them as well, and then use Karatsuba again for the subcomponents of those multiplications. At some point we bottom out with either a standard O(n^2) multiplication, or something trickier like a Comba multiplication.
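The recursion described above can be sketched as follows. This is an illustrative implementation, not a tuned one: the cutoff of 64 bits is an arbitrary stand-in for the 160-to-500-bit threshold a real library would use, and Python's builtin multiply stands in for the O(n^2) base case.

```python
def karatsuba(x, y, cutoff=64):
    """Multiply two non-negative integers using Karatsuba's method.

    Below `cutoff` bits we fall back to the builtin multiply,
    which plays the role of the O(n^2) schoolbook base case.
    """
    if x < (1 << cutoff) or y < (1 << cutoff):
        return x * y                       # base case
    n = max(x.bit_length(), y.bit_length()) // 2
    a, b = x & ((1 << n) - 1), x >> n      # x = a + b*2^n
    c, d = y & ((1 << n) - 1), y >> n      # y = c + d*2^n
    ac = karatsuba(a, c, cutoff)
    bd = karatsuba(b, d, cutoff)
    # (a+b)*(c+d) - a*c - b*d == a*d + b*c, the middle term
    mid = karatsuba(a + b, c + d, cutoff) - ac - bd
    return ac + (mid << n) + (bd << (2 * n))
```

Each call performs three recursive multiplications on half-size numbers, which is where the 3*(n/2)^2 savings compounds into an overall running time of about O(n^1.585).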
It's not actually necessary that the two numbers be the same size. For example, say that one of the numbers is twice as large as the other. In that case, we could simply treat b (or d) as all zeros, and then continue on from there. It's usually not worth taking into account, but if you know that one of the subcomponent numbers is 0, you can eliminate one multiplication and one addition. This is a fairly rare case, however.
Karatsuba multiplication is extremely useful for algorithms which require a lot of multiplications of large numbers. A classic case for this is algorithms which perform public key cryptography, such as RSA, DSA, and Diffie-Hellman. The primary operation in these algorithms is exponentiation, which is usually implemented using a square and multiply algorithm. Performing an operation with a large RSA key might require between a dozen and several thousand multiplications; even the tiniest speedup can provide a noticeable increase in performance. Nearly all cryptographic and numerical libraries, such as OpenSSL and GNU MP, implement Karatsuba multiplication.
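To see why exponentiation generates so many big multiplications, here is a minimal sketch of a left-to-right square-and-multiply modular exponentiation. Every squaring and multiplication below is a full-width bignum multiply in a real library, which is exactly the operation Karatsuba accelerates:

```python
def square_and_multiply(base, exp, mod):
    """Left-to-right square-and-multiply modular exponentiation.

    For each bit of the exponent the accumulator is squared; when
    the bit is 1, it is also multiplied by the base. A w-bit
    exponent thus costs roughly w squarings plus up to w multiplies.
    """
    result = 1
    for bit in bin(exp)[2:]:        # exponent bits, most significant first
        result = (result * result) % mod
        if bit == '1':
            result = (result * base) % mod
    return result
```

With a 2048-bit RSA exponent this loop performs on the order of two to three thousand modular multiplications, which is where Karatsuba's per-multiply savings accumulates into a measurable speedup.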
A generalization of Karatsuba which is more efficient for very large (over 10000 bit) numbers is Toom-Cook multiplication. For even larger numbers, a method based around FFTs is useful, but rarely implemented, as current results in cryptanalysis suggest that keys in the range of 2048 to 4096 bits are sufficiently strong for the foreseeable future.