The device known as the logarithm was invented in the late 1500s by a Scottish baron, John Napier, as a tool to simplify arithmetic. It was useful because it replaced multiplication with addition, which it accomplished through this property of logarithms:

ln xy = ln x + ln y

You could multiply two positive numbers x and y by looking up their logarithms in a table, adding the logarithms, finding the sum in the body of the table, and reading the table backwards to find the product xy.
Of course, you needed an actual table to do this, and Napier spent the last 20 years of his life working on a table he never finished. The table was completed after Napier's death by his friend Henry Briggs.
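For anyone who wants to see the table trick in action, here is a rough sketch in Python. The table built below is only a crude stand-in for Napier's printed tables, and the function name is made up for illustration:

    import math

    # A crude stand-in for a printed table: common logs of 1.000, 1.001, ..., 9.999
    table = {n / 1000: math.log10(n / 1000) for n in range(1000, 10000)}

    def table_multiply(x, y):
        # Look up the logarithms of x and y and add them...
        log_sum = table[round(x, 3)] + table[round(y, 3)]
        # ...then read the table "backwards": find the entry whose logarithm
        # is closest to the sum; that entry is (approximately) the product x*y.
        return min(table, key=lambda k: abs(table[k] - log_sum))

    print(table_multiply(2.125, 1.882))   # close to 2.125 * 1.882 = 3.99925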

Definition of a logarithm:

if b^y = x, then log_b(x) = y
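As a quick sanity check of the definition, a minimal sketch using Python's standard math module:

    import math

    b, y = 2, 5
    x = b ** y                 # b^y = x, i.e. 2^5 = 32
    print(math.log(x, b))      # log_b(x) = y, prints 5.0 (up to floating-point rounding)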


Some useful properties of logarithms (where a and b stand for valid bases, i.e. positive numbers not equal to 1); a quick numeric sanity check in Python follows the list:

log_b(xy) = log_b(x) + log_b(y)

log_b(x/y) = log_b(x) - log_b(y)

log_b(x^p) = p log_b(x)

lim (x→0+) log_b(x) = -∞,  for b > 1

lim (x→+∞) log_b(x) = +∞,  for b > 1

lim (x→c) log_b(x) = log_b( lim (x→c) x ),  for c > 0

log_b(1) = 0

log_b(b) = 1

b^(log_b(x)) = x,  for x > 0

log_b(b^x) = x

log_b(x) = log_a(x) / log_a(b),  for a, b > 0 and a, b ≠ 1

y^x = b^(x log_b(y)),  for b > 0, b ≠ 1, and y > 0
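Here is the numeric sanity check promised above. It is only a sketch (plain Python, arbitrary sample values), not a proof, but it exercises each identity in the list:

    import math

    x, y, p = 3.7, 0.45, 2.5
    a, b = 10, 2                        # two admissible bases

    def log_b(v):
        return math.log(v, b)           # log base b

    checks = [
        (log_b(x * y),  log_b(x) + log_b(y)),                 # product rule
        (log_b(x / y),  log_b(x) - log_b(y)),                 # quotient rule
        (log_b(x ** p), p * log_b(x)),                        # power rule
        (log_b(x),      math.log(x, a) / math.log(b, a)),     # change of base
        (log_b(1), 0.0),                                      # log_b(1) = 0
        (log_b(b), 1.0),                                      # log_b(b) = 1
        (b ** log_b(x), x),                                   # b^(log_b x) = x
        (log_b(b ** x), x),                                   # log_b(b^x) = x
        (y ** x,        b ** (x * log_b(y))),                 # y^x = b^(x log_b y)
    ]

    for lhs, rhs in checks:
        assert math.isclose(lhs, rhs, abs_tol=1e-12), (lhs, rhs)
    print("all identities check out numerically")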

Proofs of Jakohn's first three properties:

  1. ln(pq) = ln(p) + ln(q)
  2. ln(p/q) = ln(p) - ln(q)
  3. ln(p^r) = r ln(p), for every rational number r

using log_e, or ln (the natural log):

1. If p > 0, then

   (d/dx)(ln(px)) = (1/(px)) · p = 1/x.

   Therefore, ln(px) and ln(x) are both antiderivatives of 1/x (∫(1/x)dx = ln|x|, but since x > 0, ∫(1/x)dx = ln(x)), so ln(px) = ln(x) + C for some constant C. Letting x = 1, we obtain ln(p) = ln(1) + C. Since ln(1) = 0, C = ln(p), and therefore ln(px) = ln(x) + ln(p). Substituting q for x, ln(pq) = ln(q) + ln(p).

   Example: ln[(x + 2)(3x - 5)] = ln(x + 2) + ln(3x - 5)

2. Using the formula ln(pq) = ln(p) + ln(q) with p = 1/q,

   ln(1/q) + ln(q) = ln((1/q)·q) = ln(1) = 0,

   so ln(1/q) = -ln(q). Consequently,

   ln(p/q) = ln(p·(1/q)) = ln(p) + ln(1/q) = ln(p) - ln(q).

   Example: ln((x + 2)/(3x - 5)) = ln(x + 2) - ln(3x - 5)

3. If r is a rational number and x > 0, then

   (d/dx)(ln(x^r)) = (1/x^r)(d/dx)(x^r) = (1/x^r) · r·x^(r-1) = r·(1/x) = r/x.

   Since ln(x^r) and r·ln(x) are both antiderivatives of r/x, ln(x^r) = r·ln(x) + C for some constant C. If we let x = 1, ln(1) = r·ln(1) + C. Since ln(1) = 0, C = 0 and, therefore, ln(x^r) = r·ln(x).

   Example: ln √(x + 1) = (1/2)·ln(x + 1)
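If you have SymPy handy, the derivative computations that drive these proofs can be double-checked symbolically. A sketch, assuming SymPy is installed (the symbols and the sample rational exponent are arbitrary choices):

    import sympy as sp

    x = sp.symbols('x', positive=True)
    p, q = sp.symbols('p q', positive=True)
    r = sp.Rational(3, 4)     # any fixed rational exponent will do

    # (d/dx) ln(px) = 1/x  -- the key step in proof 1
    assert sp.simplify(sp.diff(sp.log(p * x), x) - 1 / x) == 0

    # (d/dx) ln(x^r) = r/x  -- the key step in proof 3
    assert sp.simplify(sp.diff(sp.log(x ** r), x) - r / x) == 0

    # ...and the resulting identities themselves
    assert sp.expand_log(sp.log(p * q)) == sp.log(p) + sp.log(q)
    assert sp.expand_log(sp.log(x ** r)) == r * sp.log(x)
    print("symbolic checks pass")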

These properties are convenient when a derivative needs to be taken: applying them first often makes the chain rule, quotient rule, and product rule unnecessary, or at least easier to carry out.

Consider taking the derivative of the last example without first applying the property:


(d/dx)(ln(√(x + 1))) =

    1      1
-------- * - (x + 1)^(-1/2) =
√(x + 1)   2

    1      1      1
-------- * - * ------- =
√(x + 1)   2   √(x + 1)

   1 
--------
2(x + 1)

If the property is applied first, it becomes a simpler problem.


(d/dx)(ln(√(x + 1))) =

(d/dx)((1/2)ln(x + 1)) =

1     1     d(x + 1)
- * ----- * -------- =
2   x + 1      dx

    1
---------
2(x + 1)
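The same comparison can be checked mechanically. A small sketch, again assuming SymPy is available:

    import sympy as sp

    x = sp.symbols('x', positive=True)

    direct    = sp.diff(sp.log(sp.sqrt(x + 1)), x)              # chain-rule route
    rewritten = sp.diff(sp.Rational(1, 2) * sp.log(x + 1), x)   # property applied first

    print(sp.simplify(direct))                    # 1/(2*(x + 1)), possibly written as 1/(2*x + 2)
    print(sp.simplify(direct - rewritten) == 0)   # True: both routes give the same derivative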

The logarithm log has these two, seemingly very different, properties:

  1. log(a⋅b) = log(a) + log(b)
  2. ln(x) = the integral of dt/t from 1 to x, i.e. the area under the curve 1/x between 1 and x

What's the connection?

You can prove the first from the second. And you don't even need to muck around with integrals and equations. The same proof works for any base, not just e; it's just that the diagrams are neatest for the natural logarithm.

ln(x) is the area underneath the graph of f(x) = 1/x from 1 to x. The following beautiful ASCIIvision should make everything "clear":

   1 ++-----+-----+------+-----+------+-----+------+----++
     **     +     +      +     +      +     +   y = 1/x  +
     **                                                  |
     ***                                                 |
     ****                                                |
 0.8 +***                                               ++
     *****                                               |
     ******                                              |
     *******                                             |
     ********                                            |
 0.6 +********                                          ++
     ***********                                         |
     *************                                       |
     ***************                                     |
     ****************:                                   |
     ****************::::                                |
 0.4 +***************:::::::                            ++
     ****************:::::::::::                         |
     ****************:::::::::::::::::                   |
     ****************::::::::::::::::::::::::            |
     ****************::::::::::::::::::::::::xxxxxxxxx   |
 0.2 +***************::::::::::::::::::::::::         xxx|
     ****************::::::::::::::::::::::::            |
     ****************::::::::::::::::::::::::            |
     ****************::::::::::::::::::::::::            |
     ****************::::::::::::::::::::::::            |
   0 +******+*****+******+*****+******+*****+******+*****+
     1     1.5    2     2.5    3     3.5    4     4.5    5
The area filled with * is the logarithm of a (=2 1/8). The combined areas filled with * and : are the logarithm of a⋅b (=2 1/8 * 1 15/17 = 4). What's the area filled with :s?

Well, it's the area under the curve 1/x, from a to a⋅b. Your calculus textbook will probably have a formula or two for converting this to a more log-like formulation. But throw away your textbook, in favour of pretty pictures! Whom are you going to trust: Two tenured professors, or something you read on the Internet?

If we look at the curve a/x from a to ab, we have a curve that goes from a/a = 1 down to a/(ab) = 1/b, over a distance of ab - a = a(b-1). So it's really just the curve for ln(b) (that is, 1/x from 1 to b), stretched out a times wider (from a width of b-1 to a width of a(b-1)). Making a curve higher or wider by a factor of a increases the area under it by that same factor. So the curve a/x from a to ab encloses an area of a⋅ln(b); and since 1/x is just a/x scaled down in height by a factor of a, the curve 1/x from a to ab encloses an area of ln(b).

Putting the pieces together: the whole area from 1 to a⋅b is ln(a⋅b), and it splits into the * region, which is ln(a), plus the : region, which we have just seen is ln(b). This proves that

ln(a⋅b) = ln(a) + ln(b).
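And if pictures on the Internet still feel shaky, a brute-force numeric check is easy. This sketch uses only the standard library; the midpoint-rule integrator below is ad hoc, and a and b are the values from the picture:

    import math

    def area_under_reciprocal(lo, hi, steps=100_000):
        # Midpoint-rule approximation of the area under 1/x from lo to hi.
        h = (hi - lo) / steps
        return sum(1 / (lo + (i + 0.5) * h) for i in range(steps)) * h

    a, b = 17 / 8, 32 / 17        # a = 2 1/8, b = 1 15/17, so a*b = 4

    print(area_under_reciprocal(1, a), math.log(a))          # the * region  ~ ln(a)
    print(area_under_reciprocal(a, a * b), math.log(b))      # the : region  ~ ln(b)
    print(area_under_reciprocal(1, a * b), math.log(a * b))  # whole area    ~ ln(a) + ln(b) = ln(4)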


More Useless Information

  • The same argument also works in the other direction: If we want a function with the additive property of log, and we want it to be defined by an integral, pictures very similar to the one above (hint: take b = 1 + δ/a) suffice to show that it must be a constant multiple of the integral of dt/t from 1 to x.
  • Further up the mathematical pomposity scale, we've shown that log is an isomorphism of the multiplicative group ((0,∞),*) with the additive group (R,+). This is just a fancy-shmancy way of saying log(ab)=log(a)+log(b). But it also explains why slide rules work!
  • John Napier didn't know about isomorphisms or integrals. He just wanted a trick to convert multiplication (which is hard) to addition (which is easy(ier)). Did he really have no idea that he was onto something so profound?

ASCIImation by Gnuplot ("set term dumb"), retouched by hand in XEmacs' picture-mode.

Log"a*rithm (?), n. [Gr. logos word, account, proportion + arithmos number: cf. F. logarithme.] Math.

One of a class of auxiliary numbers, devised by John Napier, of Merchiston, Scotland (1550-1617), to abridge arithmetical calculations, by the use of addition and subtraction in place of multiplication and division. The relation of logarithms to common numbers is that of numbers in an arithmetical series to corresponding numbers in a geometrical series, so that sums and differences of the former indicate respectively products and quotients of the latter; thus

0    1    2     3      4       Indices or logarithms
1   10   100  1000  10,000     Numbers in geometrical progression

Hence, the logarithm of any given number is the exponent of a power to which another given invariable number, called the base, must be raised in order to produce that given number. Thus, let 10 be the base, then 2 is the logarithm of 100, because 10^2 = 100, and 3 is the logarithm of 1,000, because 10^3 = 1,000.

Arithmetical complement of a logarithm, the difference between a logarithm and the number ten. -- Binary logarithms. See under Binary. -- Common logarithms, or Brigg's logarithms, logarithms of which the base is 10; -- so called from Henry Briggs, who invented them. -- Gauss's logarithms, tables of logarithms constructed for facilitating the operation of finding the logarithm of the sum or difference of two quantities from the logarithms of the quantities, one entry of those tables and two additions or subtractions answering the purpose of three entries of the common tables and one addition or subtraction. They were suggested by the celebrated German mathematician Karl Friedrich Gauss (died in 1855), and are of great service in many astronomical computations. -- Hyperbolic, or Napierian, logarithms [usually called natural logarithms], those logarithms (devised by John Speidell, 1619) of which the base is 2.7182818; -- so called from Napier, the inventor of logarithms. -- Logistic or Proportional logarithms. See under Logistic.

© Webster 1913.
