The dimension of a space interacts with many facets of its behaviour. So far, we've only defined dimension (in that writeup, of which this is sort of a continuation) based on one "semi-numerical" behaviour: the number of vectors in a basis. Obviously, any definition based on counting isn't going to give a non-integral result -- and it turns out that the counting definition is too coarse to tell the whole story.
We turn instead to a definition based on measure theory. There is a very clear connection between scaling and measure in a given dimension: if we scale any 2-dimensional figure by λ, its area grows by λ^2, and generally if we scale any d-dimensional set by λ, its d-dimensional measure grows by λ^d. Unfortunately, the definition of Lebesgue measure is not sharp enough to produce anything further.
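As a quick sanity check of the λ^2 scaling (my own illustrative sketch, not part of the original argument), a Monte Carlo estimate compares the area of the unit disk with the area of the same disk scaled by λ = 3:

```python
import random

# Illustrative only: scaling a planar figure by lam multiplies its area
# by lam**2. Estimate both areas by sampling uniform points in a box
# that contains the scaled disk, and compare hit counts.
random.seed(0)
lam = 3.0
R = lam          # half-width of the sampling box
n = 200_000
hits_unit = hits_scaled = 0
for _ in range(n):
    x, y = random.uniform(-R, R), random.uniform(-R, R)
    r2 = x * x + y * y
    hits_unit += r2 <= 1.0          # point lands in the unit disk
    hits_scaled += r2 <= lam * lam  # point lands in the scaled disk
print(hits_scaled / hits_unit)  # close to lam**2 = 9
```

The same experiment in d dimensions would show the ratio approaching λ^d.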
Hausdorff's idea was to define, simultaneously for every candidate "dimension" δ, a δ-dimensional measure with this scaling property, and then use that family of measures to define the dimension of a set. So δ enters first as a parameter yielding a measure H_δ, the δ-dimensional Hausdorff measure; after some justification we will see that for any measurable body there is one particular value of δ which can rightly be called its "dimension". It's quite obvious from the definition that if we scale a set by λ then its δ-dimensional Hausdorff measure scales by λ^δ. The bigger problem is to show that Hausdorff measure is, in fact, a measure.
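For reference, here is a sketch of the standard construction (the prose above doesn't spell it out): cover S by countably many sets of small diameter, charge each covering set the δ-th power of its diameter, and tighten the covers:

```latex
H_{\delta,\varepsilon}(S)
  = \inf\Bigl\{ \sum_i (\operatorname{diam} U_i)^{\delta}
      \;:\; S \subseteq \bigcup_i U_i,\ \operatorname{diam} U_i \le \varepsilon \Bigr\},
\qquad
H_\delta(S) = \lim_{\varepsilon \to 0} H_{\delta,\varepsilon}(S).
```

Scaling S by λ multiplies every diameter by λ, hence every sum by λ^δ -- which is exactly the scaling property just claimed.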
(Since it is translation invariant, for integral δ Hausdorff measure agrees with Lebesgue measure, up to a normalising constant.)
Having done all that, we note that
Theorem. Let S ⊆ R^n be a measurable set, and let δ > δ'. Then
- H_δ(S) ≤ H_δ'(S).
- If 0 < H_δ(S), then H_δ'(S) = ∞.
- If H_δ'(S) < ∞, then H_δ(S) = 0.
In other words:
- H_δ(S) is a nonincreasing function of δ.
- Measuring S with too high a dimension δ (e.g. measuring a straight line with the 2-dimensional measure) gives measure 0.
- Measuring S with too low a dimension δ (e.g. measuring a cube with the 2-dimensional measure) gives measure ∞.
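The whole dichotomy comes from a one-line estimate on covers (a sketch, using the standard cover-based construction, in which H_{δ,ε} is the infimum of Σ_i (diam U_i)^δ over covers by sets of diameter at most ε): if δ > δ', then for any such cover

```latex
\sum_i (\operatorname{diam} U_i)^{\delta}
  \le \varepsilon^{\delta - \delta'} \sum_i (\operatorname{diam} U_i)^{\delta'},
\qquad\text{so}\qquad
H_{\delta,\varepsilon}(S) \le \varepsilon^{\delta - \delta'}\, H_{\delta',\varepsilon}(S).
```

Letting ε → 0, the factor ε^(δ−δ') vanishes: if H_δ'(S) is finite then H_δ(S) = 0, and contrapositively, if H_δ(S) > 0 then H_δ'(S) = ∞.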
So there exists a unique value d such that H_δ(S) = ∞ for all δ < d and H_δ(S) = 0 for all δ > d. This d is the Hausdorff dimension of S: d = dim_H(S).
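Equivalently (a standard reformulation, stated here for completeness), the Hausdorff dimension is the critical exponent

```latex
\dim_H(S)
  = \inf\{\, \delta \ge 0 : H_\delta(S) = 0 \,\}
  = \sup\{\, \delta \ge 0 : H_\delta(S) = \infty \,\}.
```

Note that nothing is said about H_d(S) at the critical value d itself: it can be 0, positive and finite, or ∞.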
Other fractal measures are possible, and exist (indeed, Hausdorff apparently invented two distinct measures). All these measures share the property that a critical value of the dimension exists, so they all give rise to fractal dimensions. Luckily, the critical values tend to coincide (even when the value of the measure itself at the critical value differs). As a result (thanks, unperson!), you will also see Hausdorff dimension called fractal dimension, capacity dimension, and box counting dimension (which can actually be defined just as easily by counting spheres, go figure...).
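Box counting is the easiest of these dimensions to compute. As a sketch (my own example, not from the writeup), take the middle-thirds Cantor set: after k subdivisions it meets exactly 2^k boxes of side 3^-k, so the box counting dimension is log 2 / log 3. Integer arithmetic keeps the count exact:

```python
import math

def cantor_intervals(depth):
    # Left endpoints of the 2**depth surviving intervals of the
    # middle-thirds Cantor set, in units of 3**-depth.
    lefts = [0]
    for d in range(depth):
        step = 3 ** (depth - d - 1)
        lefts = [x for a in lefts for x in (a, a + 2 * step)]
    return lefts

def box_count(lefts, depth, k):
    # Number of boxes of side 3**-k meeting the depth-`depth` stage.
    scale = 3 ** (depth - k)
    return len({a // scale for a in lefts})

depth = 12
lefts = cantor_intervals(depth)
for k in (4, 8, 12):
    n = box_count(lefts, depth, k)   # exactly 2**k boxes at scale 3**-k
    print(k, n, math.log(n) / math.log(3 ** k))
# The last column is log 2 / log 3 ≈ 0.6309 at every scale -- the
# Cantor set's fractal dimension.
```

For a generic set the ratio log N(ε) / log(1/ε) only converges as ε → 0; the Cantor set's self-similarity makes it exact at every triadic scale.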
Since the 1980s, sets of fractional dimension have been called fractals. The concept has been around for a lot longer than that, but those years saw much interest in these sets from people who study dynamics. Oh, and Benoit Mandelbrot (who coined the term), with the help of pretty pictures, managed to push a pretty esoteric term into the limelight. These pretty pictures include the Cantor set, the easily-drawn Koch snowflake, various gaskets (like the Sierpinski gasket and the Menger sponge), L-systems producing mostly pictures of (fractal) trees, the Mandelbrot set (which purists call "a Fatou set of the iteration z → z^2 + c"), Julia sets (which are related to the Fatou set of the same iteration), and many others.
A whole new world
Hausdorff (and related) dimensions are used in very different mathematical disciplines. The integral dimensions give a very rich algebraic and combinatorial structure. The study of manifolds is almost by definition a study of properties at the large scale: at a small scale, a manifold looks -- by definition! -- like a small piece of R^d. Integral dimensions are studied mostly on the global scale, with occasional excursions into smaller regions.
For fractional dimensions, there is no such finite combinatorial structure. Their definition involves counting how many small sets are required to cover the entire set. This is clearly the world of analysis, and that is indeed where this definition of dimension is used. Global properties are of almost no interest here -- subsets of manifolds can have fractional dimension, but there seems to be little interaction between the two. Sets of fractional dimension are chiefly interesting for their "infinitesimal" structure. The combinatorial aspects of this structure have a very different form from those of integral dimensions.
Back to analysis
Apart from the important science of pretty pictures on a computer screen, fractional dimension turns out to be relevant to dynamics. For any dynamical system, an attractor is a compact set K in the phase space, closed under the dynamics, such that any trajectory x(t) starting sufficiently close to K converges to K in the following sense:
∩_{t≥0} cl( { x(s) : s ≥ t } ) ⊆ K,
where cl(⋅) is the closure of a set.
The classic example -- a ball in a jar with friction -- has a one-point attractor: the ball standing still at the bottom of the jar.
Some systems have more complex attractors. It turns out that these attractors can have fractional dimension; the Lorenz attractor was the first attractor for which this was established. A system with a fractal attractor (usually known as a "strange attractor") is termed chaotic, and the branch of analysis dealing with such systems is chaos theory.
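As a rough sketch (my own example; the parameters are the classic ones, σ = 10, ρ = 28, β = 8/3), a few lines of crude Euler integration are enough to watch a trajectory fall onto the Lorenz attractor -- never settling down, yet never leaving a bounded region:

```python
# Euler integration of the Lorenz system with the classic parameters.
# The small step size is for stability of this naive scheme; a serious
# computation would use a proper ODE integrator.
def lorenz_step(state, dt=0.002, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

state = (1.0, 1.0, 1.0)
for _ in range(50_000):          # integrate to t = 100
    state = lorenz_step(state)
# After the transient the orbit wanders forever on the butterfly-shaped
# strange attractor, but it stays bounded.
print(all(abs(c) < 100 for c in state))  # True
```

The attractor these points trace out has a Hausdorff dimension slightly above 2 -- fractional, as promised.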