Landau equivalents are used to characterize the behaviour of functions at given points. The relation is written "~" and is sometimes called asymptotic equivalence. Other Landau symbols include O and o.
When limits fail, think of equivalents
Say you and a friend of yours each own a discotheque and are counting the number of visitors coming to party each night. One day you decide to compare the numbers of entrances, but since your club had a few problems at the beginning, you will only consider what happened a long time after its creation. After all, the past doesn't matter and only your disco's future is important. Let f(x) be the number of happy clubbers on day x since the creation and g(x) be the frequentation of your friend's discotheque.
The easiest case is when the frequentation has a limit. That is, when x gets large enough, f(x) gets closer and closer to some number λ, the limit. Typical graphs for such functions are given below. It's easy to compare the two discotheques: you simply need to compare the limits. If both limits are the same (the limit MUST be non-zero - see the traps section), then the functions are said to be equivalent, and people are equally fond of both discotheques.
[Graphs: two visitors-vs-time curves, both levelling off at the same limit λ]
Unfortunately, not all functions have a limit and therefore can't be compared with this method. It is however possible to get an idea of their behaviour thanks to equivalents. Functions fall into three bins: those which admit a finite limit (e.g. x→1/x at +∞), those which diverge to ±∞ (in that case the limit is ±∞, e.g. x→x²) and those which don't have any limit at all (e.g. x→sin(x)).
Consider the following frequentation function graphs. Both admit +∞ as their limit, so how will you be able to compare them? Thanks to equivalents, you find out that your discotheque is more popular than your friend's, because the equivalent of yours is x² and your friend's is only x.
[Graphs: two visitors-vs-time curves both diverging to +∞, one growing like x², the other like x]
Here we have been computing equivalents at +∞, that is "a long time after the discotheque's creation". It is very important to mention at which point the equivalent is taken, because it means that elsewhere we do not worry about the function's evolution. The above graphs are equivalent to x² and x respectively at +∞, but they are not the graphs of x² and x.
Definition and properties
f ~ g means that f(x) - g(x) = o(f), that is, the difference f - g can be neglected in comparison to f, or in other words, the quotient (f-g)/f → 0, hence g/f → 1. The following definition is also widely used: f ~ g iff f/g → 1 (provided g does not vanish near the point considered).
"~" is of course an equivalence relation: it is symmetric (f~g iff g~f), transitive (f~g and g~h => f~h) and reflexive (f~f is always true).
This means that you can group functions by their equivalence classes and choose one function to represent each class. For instance, all the functions of the class of x → x are equivalent to one another.
Operations on equivalents - equivalents are easy
This relation is therefore used in mathematics to simplify complex functions. There are simple rules to keep in mind when computing equivalents. In math classes, we are taught how to compute them in our heads, and in many cases this can be done at a glance. For example the rational fraction (3*x³ + 8*x² - 1)/(4*x² + 3*x + 7) boils down to (3/4)*x at +∞. It is very easy indeed to do computations with equivalents:
- You can multiply by a non-zero scalar.
λ ≠ 0 and f~g => λ*f ~ λ*g
- The product of two functions is equivalent to the product of their respective equivalents.
a~b and f~g => af~bg
- The inverse of a function is equivalent to the inverse of its equivalent.
f~g => 1/f~1/g
- Most of the time, you mustn't add equivalents. See the traps section below.
- Most of the time, you can't compose equivalents. See the traps section below.
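These rules are easy to check numerically. A small sketch (f is the rational fraction from the example above, g its claimed equivalent (3/4)*x) watches the ratio f/g tend to 1:

```python
# f ~ g at +inf means f(x)/g(x) -> 1 as x grows.
def f(x):
    return (3*x**3 + 8*x**2 - 1) / (4*x**2 + 3*x + 7)

def g(x):
    return 3*x / 4

for x in (1e2, 1e4, 1e6):
    print(x, f(x) / g(x))    # ratio gets closer and closer to 1

# The scalar, product and inverse rules show up the same way:
x = 1e6
assert abs((2*f(x)) / (2*g(x)) - 1) < 1e-3        # λ*f ~ λ*g
assert abs((f(x)*f(x)) / (g(x)*g(x)) - 1) < 1e-3  # f*f ~ g*g
assert abs((1/f(x)) / (1/g(x)) - 1) < 1e-3        # 1/f ~ 1/g
```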
Series - equivalents are simple
Another typical use is in series expansions.
In a nutshell, the goal of a series expansion is to write a function as the sum of terms of a given form (for example a_n*x^n) in decreasing order of importance.
The equivalent of a function at 0 is the first non-zero term of its Taylor expansion, because all the others can be neglected in comparison. Computing the expansion on the one hand and the equivalent on the other hand is a first step, and a good way of detecting errors. Here is a list of the most useful equivalents at 0:
- e^x - 1 ~ x
- 1/(1-x) - 1 ~ x
- ln(1-x) ~ -x
- (1+x)^α - 1 ~ α*x
- cos(x) - 1 ~ -x²/2
- sin(x) ~ x
- tan(x) ~ x
- cosh(x) - 1 ~ x²/2
- sinh(x) ~ x
- tanh(x) ~ x
- asin(x) ~ x
- atan(x) ~ x
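This table can be sanity-checked numerically: for each pair, the ratio f(x)/g(x) should be close to 1 for a small non-zero x (the choices x = 1e-5 and α = 1/2 below are arbitrary):

```python
import math

# Check the table of equivalents at 0: each ratio should be near 1.
x = 1e-5
pairs = [
    (math.exp(x) - 1,      x),
    (1/(1 - x) - 1,        x),
    (math.log(1 - x),      -x),
    ((1 + x)**0.5 - 1,     0.5*x),   # alpha = 1/2
    (math.cos(x) - 1,      -x**2/2),
    (math.sin(x),          x),
    (math.tan(x),          x),
    (math.cosh(x) - 1,     x**2/2),
    (math.sinh(x),         x),
    (math.tanh(x),         x),
    (math.asin(x),         x),
    (math.atan(x),         x),
]
for f_x, g_x in pairs:
    assert abs(f_x / g_x - 1) < 1e-4, (f_x, g_x)
print("all twelve equivalents check out")
```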
Equivalents are introduced as a companion to Landau's big-oh and little-o comparison notations. They are mainly encountered to characterize the evolution of a function, as in the above discotheque example.
Growth - equivalents are powerful
Equivalents are very handy: the functions need not be differentiable, or even continuous. They can even be discrete (defined only at certain points, for example on integers as opposed to reals). A famous example of this is Gauss' Π() function: Π(n) is the number of prime numbers ≤ n. Π() is very difficult to compute, but an equivalent of Π() is known (the celebrated Prime Number Theorem): Π(n) ~ n/ln(n), n → +∞.
This means that the number of primes grows "like" the function x → x/ln(x) does. Beware though: it doesn't mean that the difference Π(n) - n/ln(n) → 0 as n gets big. If you recall the definition, it means that Π(n) - n/ln(n) is o(n/ln(n)). Because n/ln(n) → +∞, the difference Π(n) - n/ln(n) can grow indefinitely, and in fact it does. The equivalent only shows that the error can be neglected in comparison to n/ln(n):
Π(n) ~ n/ln(n) but Π(n) - n/ln(n) → +∞, n → +∞
This yields interesting results on the convergence of series. Is the series Σ 1/Π(n) = 1/Π(2) + 1/Π(3) + 1/Π(4) + 1/Π(5) + ... convergent? (Is its sum a finite number?) Since Π(n) ~ n/ln(n), 1/Π(n) ~ ln(n)/n. Two series with equivalent positive terms have the same nature, and Σ ln(n)/n is divergent (for n ≥ 3 its terms dominate those of the harmonic series Σ 1/n), therefore the series diverges. It is possible to state this thanks to the equivalent, and yet very little is known about the Π function. And guess what? The same method works with integrals.
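To see the equivalent of Π() at work, here is a small sketch (the bound n = 100 000 is an arbitrary choice) that counts primes with a sieve of Eratosthenes and compares Π(n) with n/ln(n):

```python
import math

# Count primes up to n with a sieve of Eratosthenes, then compare
# Pi(n) with its equivalent n/ln(n).
def prime_sieve(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            for q in range(p*p, n + 1, p):
                sieve[q] = False
    return sieve

n = 100_000
pi_n = sum(prime_sieve(n))        # Pi(n) = number of primes <= n
approx = n / math.log(n)
print(pi_n, round(approx), round(pi_n / approx, 3))
```

At n = 100 000 the ratio is still around 1.10: the convergence of Π(n)/(n/ln(n)) to 1 is very slow, which is exactly why the difference Π(n) - n/ln(n) can keep growing.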
Approximations - equivalents are cool
Equivalents are cool because they are simple. See: this horrible function sin(x) is equivalent to x at 0. They are used a lot in physics. For example, say a physicist wants to solve the differential equation:
y'' - λ*sin(y) = k, λ > 0 constant, -λ ≤ k ≤ λ.
Since the constant function x → M, with M = asin(-k/λ) (so that λ*sin(M) = -k), is a solution, he knows that M is a steady position. Now what happens if y is displaced a little from its steady position? He writes the equation for the displacement u = y - M; since M is constant and u is small, sin(M+u) - sin(M) ~ cos(M)*u as u → 0 (assuming cos(M) ≠ 0):
- u'' - λ*sin(M+u) = k
- u'' - λ*(sin(M) + cos(M)*u) = k
- u'' - λ*cos(M)*u = 0 (because λ*sin(M) = -k)
Wahou! A linear differential equation. The physicist is happy because he knows how to solve it. This proves how cool equivalents are.
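This kind of linearisation can be tested numerically. A minimal sketch (the parameters λ = 1, k = 0.5 and the displacement 0.01 are illustrative choices, not from the text) integrates the nonlinear equation with a classical Runge-Kutta scheme, starting slightly off the steady position M with λ*sin(M) = -k, and compares the result with the exact solution of the linearised equation u'' = λ*cos(M)*u:

```python
import math

# Illustrative parameters (arbitrary): lam = 1, k = 0.5.
lam, k = 1.0, 0.5
M = math.asin(-k / lam)       # steady position: lam*sin(M) + k = 0
omega2 = lam * math.cos(M)    # coefficient of the linearised equation

def rk4(f, y, v, t_end, dt=1e-3):
    """Integrate y'' = f(y) from (y, v) with a classical RK4 scheme."""
    t = 0.0
    while t < t_end:
        k1y, k1v = v,            f(y)
        k2y, k2v = v + dt/2*k1v, f(y + dt/2*k1y)
        k3y, k3v = v + dt/2*k2v, f(y + dt/2*k2y)
        k4y, k4v = v + dt*k3v,   f(y + dt*k3y)
        y += dt/6*(k1y + 2*k2y + 2*k3y + k4y)
        v += dt/6*(k1v + 2*k2v + 2*k3v + k4v)
        t += dt
    return y

u0 = 0.01                     # small initial displacement from M
y_end = rk4(lambda y: lam*math.sin(y) + k, M + u0, 0.0, 2.0)
u_end = u0 * math.cosh(math.sqrt(omega2) * 2.0)  # exact linear solution
print(abs((y_end - M) - u_end))   # tiny: both descriptions agree
```

Here the steady position happens to be unstable (the displacement grows like cosh), but the linearised equation tracks the nonlinear one very closely for as long as the displacement stays small.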
Traps - equivalents are sometimes tricky
Follow these golden rules to avoid complications:
Do not add equivalents. Most of the time, a+b ~ a or a+b ~ b, because either b = o(a) or a = o(b); one term of the sum can be neglected. For example, x² + x ~ x², x → +∞. But remember that it sometimes works: a trivial example is a + a ~ 2*a.
Never say f ~ 0. This would mean that f = o(f), which is only true for the null function. If this didn't convince you, consider the following example: if I could say 1/x ~ 0, x → +∞, I could also say 1/x² ~ 0, and therefore that 1/x ~ 1/x², which is totally insane.
Do not compose equivalents. It is unfortunate, but f~g doesn't imply that h(f)~h(g). For example, f~g implies that ln(f) - ln(g) = o(1), not ln(f) - ln(g) = o(ln(f)) as the definition would require. Likewise x+1 ~ x at +∞, yet e^(x+1) = e*e^x is never equivalent to e^x.
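The composition trap is easy to see numerically: the ratio (x+1)/x tends to 1, but the ratio of the exponentials is stuck at e. A tiny sketch:

```python
import math

# x+1 ~ x at +inf (the ratio tends to 1), but composing with exp
# breaks the equivalence: exp(x+1)/exp(x) = e for EVERY x.
for x in (10.0, 100.0, 500.0):
    print((x + 1) / x)                    # tends to 1
    print(math.exp(x + 1) / math.exp(x))  # stuck at e = 2.718...
```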
"~" is a very handy and powerful tool if you are aware of the typical traps mentioned above. Equivalents are generally easy to compute and have many, many applications (many more than the few shown above). The most important thing to remember is never to say that a function is equivalent to 0. Remember these words of wisdom:
If the function doesn't have a limit, give its equivalent, because equivalents are cool.
Part of the Maths for the Masses project.