In mathematical analysis, distributions are generalized functions that allow differentiation to be extended to objects which are not necessarily smooth. The theory of distributions was founded in the middle of the 20th century by the French analyst Laurent Schwartz, and independently by I. M. Gelfand and the Russian school; in Russian they are still called generalized functions.

Distributions on a manifold X are defined as continuous linear functionals on the space of smooth functions on X having compact support. In distribution theory this space is conventionally denoted by D(X) or C_c^∞(X), and the space of distributions accordingly by D′(X) or C^{-∞}(X). Because D(X) is not metrizable (unless X is compact) but only an LF space, the continuity condition is a little technical to state. (An LF space is the inductive limit of an ascending sequence of Fréchet spaces: roughly, the completeness condition is as good as that of a Fréchet space, but you only have a uniformity with which to state it rather than a metric.)

The advantage of distributions is that they can be differentiated indefinitely even though they may not be "smooth". We simply pretend that the integration by parts formula is valid (for smooth u and a test function φ of compact support, ∫ u′φ = -∫ uφ′, since the boundary terms vanish) and define the derivative of a distribution u by u′(φ) = -u(φ′). This notion of differentiation turns out to yield a calculus with the correct properties.
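
As a concrete illustration of this definition, here is a standard worked example (written in LaTeX notation, not taken from the paragraph above): the distributional derivative of the Heaviside step function H is the Dirac delta, computed directly from the formula u′(φ) = -u(φ′).

\[
H(x) = \begin{cases} 1, & x > 0, \\ 0, & x \le 0, \end{cases}
\qquad
H'(\varphi) = -H(\varphi') = -\int_0^\infty \varphi'(x)\,dx = \varphi(0) = \delta(\varphi).
\]

Classically H has no derivative at 0, but as a distribution its derivative is perfectly well defined.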

Note that the word distribution is used by statisticians to mean something very different.

For more information, consult a textbook on partial differential equations for pure mathematicians. I like the first volume of Michael Taylor's three-volume Partial Differential Equations.