The Natural Logarithm function (denoted ln(`x`)) has all the properties described above. There are some interesting things that were missed, though.

Also, I apologize for my lack of humor in this writeup.

(NOTE: Sections 1 and 3 don't need calculus to be understood. They do need, however, knowledge of logarithms, algebra, and other basic mathematical ideas. So no, you do not need to understand advanced Lobachevsky topology or anything.)

The natural log function does have a taylor series expansion for approximating it... but it's only valid for `x` on the interval (0, 2] (0 < `x` ≤ 2). Outside this interval, the series diverges (meaning it gives you the reciprocal of jack-diddly for an answer).

ln(`x`) = ∑[`n`=1-->∞] (-1)^{n + 1} (`x` - 1)^{n}/`n` = (`x` - 1) - (`x` - 1)^{2}/2 + (`x` - 1)^{3}/3 - ... + (-1)^{n + 1} (`x` - 1)^{n}/`n` + ...

This actually comes from a similar Taylor series expansion:

ln(`x` + 1) = ∑[`n`=1-->∞] (-1)^{n + 1} `x`^{n}/`n`

where you plug in `x` - 1 for `x`. To be technical, the first formula is a Taylor series of ln(`x`) centered at `x` = 1, and its interval of convergence is (0, 2].

These kinds of things are useful if you're interested in things like, "How can a calculator know how to calculate ln(`x`)?" Well, that's the "magic" formula it uses. With some simple manipulation, you can use it for any number outside the interval. For example, if `x` > 2, you can find the log by evaluating -ln(1/`x`), since 1/`x` will then be less than 1 (and so on the appropriate interval), and since -ln(1/`x`) = ln(`x`) (which is important).
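For the curious, here's what that looks like in code. This is a minimal sketch, not how real calculators or math libraries actually do it (they use fancier tricks); the function name `taylor_ln` and the term count are my own choices.

```python
import math

def taylor_ln(x, terms=200):
    """Approximate ln(x) with the Taylor series centered at x = 1.
    The series only converges for 0 < x <= 2, so for bigger x we
    use the trick from above: ln(x) = -ln(1/x)."""
    if x <= 0:
        raise ValueError("ln(x) is undefined for x <= 0 (over the reals)")
    if x > 2:
        return -taylor_ln(1 / x, terms)   # 1/x lands inside (0, 2]
    total = 0.0
    for n in range(1, terms + 1):
        total += (-1) ** (n + 1) * (x - 1) ** n / n
    return total

print(taylor_ln(1.5), math.log(1.5))    # the two should agree closely
print(taylor_ln(10.0), math.log(10.0))  # works via the -ln(1/x) trick
```

Try feeding the raw series a value like `x` = 3 without the reciprocal trick and you'll watch the partial sums blow up — that's the divergence mentioned above.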

This is where you plebes need to know The Calculus. For this, you only need to know basic algebra, the concept of a "limit," and what "taking a derivative" means. You should go look those up if you're clueless. Otherwise, I will now prove that the derivative of ln(`x`) is 1/`x`.

Once, I was curious about something: "How do you prove that the derivative of this is 1/`x`? It seems odd that when you find the derivative of an irrational function, you get a rational function." Well, the best way is to use the difference quotient method. Let's see what happens, using some rudimentary notation.

EXPLANATION OF NOTATION: When I say "lim_`x`-->c f(`x`)", I am saying, "The limit of f(`x`) as `x` approaches c." It should be easy to figure out. That underscore separates the variable being "limited" and the "lim". Just so you're not confused.

f(`x`) = ln(`x`)

1. f'(`x`) = lim_`h`-->0 (f(`x` + `h`) - f(`x`))/`h`

Definition of a derivative

2. = lim_`h`-->0 (ln(`x` + `h`) - ln(`x`))/`h`

Plugging the function f(`x`) = ln(`x`)

3. = lim_`h`-->0 ln((`x` + `h`)/`x`)/`h`

Rule of logarithms: log(a) - log(b) = log(a/b)

4. = lim_`h`-->0 ln(1 + `h`/`x`)/`h`

Algebraic simplification: (`x` + `h`)/`x` = 1 + `h`/`x`

This is where a cool trick comes in. Pay close attention.

5. = lim_`h`-->0 ln(1 + `h`/`x`)⋅(`x`/`h`)⋅(1/`x`)

Algebraically, 1/`h` = (`x`/`h`)(1/`x`)

6. = 1/`x` ⋅ lim_`h`-->0 ln(1 + `h`/`x`)⋅(`x`/`h`)

1/`x` is a constant with respect to the variable

being "limited," so we can pull it out of the limit

Hmm... we're close. But gasp! How do we get rid of that ugly limit! If only we could show that the limit equals 1... (N.B. We will.)

7. = 1/`x` ⋅ lim_`h`-->0 ln((1 + `h`/`x`)^{x/h})

Rule of logs: log(a) ⋅ b = log(a^{b})

We're close... so what do we do now? Let's look at a definition of e using a limit:

e = lim_n-->∞ (1 + 1/n)^{n}

OR EQUIVALENTLY:

e = lim_n-->0 (1 + n)^{1/n}

Both these say the same thing. The latter is similar in form to (1 + `h`/`x`)^{x/h}. Here's the deal: as `h` approaches 0, the thing in the ln gets closer and closer to e:

7.5. lim_`h`-->0 (1 + `h`/`x`)^{x/h} = e

True from the definition of e (the `x` is irrelevant

since it's constant with respect to `h`)

1/`x` ⋅ lim_`h`-->0 ln((1 + `h`/`x`)^{x/h})

8. = 1/`x` ⋅ ln(e)

Follows from (7.5) applied to (7)

Since e is the base of ln: ln(e) = 1

9. = 1/`x`

Multiplying anything by 1 doesn't change it.

Q.E.D., yo.
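If you don't trust the algebra, you can at least watch both limits happen numerically. Here's a rough sketch (the sample points for `h` are arbitrary choices of mine):

```python
import math

x = 2.0   # any positive x works; it stays fixed while h shrinks

for h in (0.1, 0.001, 1e-6):
    inner = (1 + h / x) ** (x / h)                   # expression from step 7.5
    quotient = (math.log(x + h) - math.log(x)) / h   # the difference quotient
    print(h, inner, quotient)

print(math.e, 1 / x)   # the targets: inner -> e, quotient -> 1/x
```

As `h` shrinks, the first column closes in on e and the second on 1/`x`, exactly as steps 7.5 and 9 claim.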

You plebes no longer need to know The Calculus.

A bit more interesting: What if you want to find ln(-3) or ln(1 + 2i) or some other such whacko calculation (where i = √-1)?

...Oh, you've been taught that you can't take logarithms of negative numbers? Well, you've been taught wrong by commie pinko teachers. Maybe.

There's an interesting identity called Euler's Identity (more properly, Euler's formula), which states that:

e^{i x} = cos(`x`) + i sin(`x`)

I'll leave it up to you to explore this equation more. For now, understand that it's possible to express ANY complex number as the following:

z = a + b i = r ⋅ e^{i x}

where r = √(a^{2} + b^{2}) and `x` = arctan(b/a) (adjusted to the correct quadrant, since arctan alone can't tell a + b i apart from -a - b i)
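You can check that polar form numerically. A quick sketch, where I use math.atan2 in place of a bare arctan because it keeps track of the quadrant for you:

```python
import cmath
import math

a, b = 1.0, 2.0
r = math.hypot(a, b)        # sqrt(a**2 + b**2)
x = math.atan2(b, a)        # the angle, quadrant-aware
z = r * cmath.exp(1j * x)   # r * e^(i*x) should rebuild a + bi
print(z)
```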

So, if you want to find ln(a + b i), where a and b are any real numbers (so long as a and b aren't both 0) — that is, the natural log of any nonzero complex number — you apply the manipulations described above and use the following formula:

ln(a + b i) = ln(r) + i `x` + i 2π`k`

(The derivation of this follows easily from logarithm rules).

The last term in that equation comes as a consequence of Euler's Identity. The `k` in that equation can be any integer (positive, negative, or 0), meaning that the Natural Logarithm, when applied to complex numbers, is a multi-valued function. You have to incorporate it because any angle has infinitely many coterminal angles, and trig functions of coterminal angles give the same values.
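Here's a sketch of that formula in code, checked against Python's own complex log. The function name `complex_ln` is mine, and again I use math.atan2 instead of a bare arctan so the angle lands in the right quadrant:

```python
import cmath
import math

def complex_ln(a, b, k=0):
    """ln(a + bi) on branch k: ln(r) + i*x + i*2*pi*k."""
    r = math.hypot(a, b)        # sqrt(a**2 + b**2)
    x = math.atan2(b, a)        # the angle, quadrant-aware
    return complex(math.log(r), x + 2 * math.pi * k)

print(complex_ln(-3, 0))   # ln(-3): real part ln(3), imaginary part pi
print(cmath.log(-3))       # Python's principal branch (k = 0) agrees
print(complex_ln(1, 2), cmath.log(1 + 2j))
```

Every branch is a legitimate logarithm: raise e to complex_ln(1, 2, k=5) and you still get back 1 + 2i, because the extra i·2πk spins the angle through five full coterminal turns.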
That's all I have to say about that.

(I should have been working on my essay for AP Language).