First stated by Camille Jordan in 1887, the Jordan curve theorem says that any simple closed curve in the plane divides the rest of the plane into exactly two connected regions: a bounded "inside" and an unbounded "outside". By "simple closed curve" we mean (roughly) a curve which does not cross itself but eventually joins itself; more formally, the theorem refers to any homeomorphic image of a circle.
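
Written out in modern notation (a standard restatement, not Jordan's own wording): if $C$ is the image of a continuous injection of the circle $S^1$ into the plane $\mathbb{R}^2$, then

    \[
      \mathbb{R}^2 \setminus C \;=\; U_{\mathrm{in}} \cup U_{\mathrm{out}},
    \]

where $U_{\mathrm{in}}$ and $U_{\mathrm{out}}$ are disjoint, open and connected, $U_{\mathrm{in}}$ is bounded, $U_{\mathrm{out}}$ is unbounded, and $C$ is the boundary of each.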

Whilst this seems obvious, the theorem is not at all trivial. In fact, Jordan's original proof was wrong, and it was 1905 before Oswald Veblen published a correct proof. The difficulty lies in the generality of the theorem's hypotheses: it is easy enough to prove the theorem for polygons or other piecewise smooth curves, but not so easy to generalise to nowhere-differentiable monsters such as the Koch snowflake.
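
For polygons the inside/outside distinction can even be computed directly, which gives some feeling for why that case is the easy one. The sketch below (Python, with illustrative names only, and ignoring degenerate cases such as the ray passing exactly through a vertex or the query point lying on the curve itself) is the classic even-odd ray-casting test: cast a horizontal ray from the point and count how many edges it crosses; an odd count means the point is inside.

    def point_in_polygon(point, vertices):
        # Even-odd ray-casting test: cast a ray from `point` towards +x and
        # count how many polygon edges it crosses.  Odd = inside, even = outside.
        x, y = point
        inside = False
        n = len(vertices)
        for i in range(n):
            x1, y1 = vertices[i]
            x2, y2 = vertices[(i + 1) % n]
            # Consider only edges that straddle the horizontal line through y.
            if (y1 > y) != (y2 > y):
                # x-coordinate at which the edge meets that horizontal line.
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x < x_cross:
                    inside = not inside
        return inside

    square = [(0, 0), (4, 0), (4, 4), (0, 4)]
    point_in_polygon((2, 2), square)   # True: inside
    point_in_polygon((5, 2), square)   # False: outside

That the answer does not depend on which ray is chosen is, in essence, the polygonal Jordan curve theorem; the hard part of the full theorem is showing that an "inside" and "outside" still exist for curves with no usable local structure at all.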

The theorem has some interesting generalisations. Luitzen Brouwer in 1912 proved a higher-dimensional analogue: that any imbedding of an (n-1)-dimensional sphere into n-dimensional Euclidean space divides the space into two disjoint regions. Arthur Schoenflies proved in 1906 that the inside and outside of any simple closed curve in the plane are homeomorphic to the inside and outside (respectively!) of a circle in the plane. This latter result, however, does not generalise to higher dimensions: James Alexander, after failing to find a proof, discovered "Alexander's horned sphere", an imbedding of a sphere in three-dimensional space whose outside is not simply connected, and therefore not homeomorphic to the outside of an ordinary sphere.
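
In the notation used above, Brouwer's result (the Jordan-Brouwer separation theorem, again a standard restatement rather than his wording) reads: if $S$ is the image of an imbedding of the sphere $S^{n-1}$ into $\mathbb{R}^n$, then

    \[
      \mathbb{R}^n \setminus S \;=\; U \cup V,
    \]

with $U$ and $V$ disjoint, open and connected, one bounded and one unbounded, and $\partial U = \partial V = S$; taking $n = 2$ recovers the Jordan curve theorem.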