**Theorem.** Let
0 ≤ f_{1} ≤ f_{2} ≤ ... ≤ f_{n} ≤ ...

be a (weakly) monotone increasing sequence of nonnegative measurable functions on some measure space. Define
f(x) = lim_{n→∞} f_{n}(x).

Then f(x) (defined almost everywhere, if each f_{n} is only defined almost everywhere) is itself **measurable** and
∫f(x)dx = lim_{n→∞} ∫f_{n}(x)dx.

Note that the above theorem explicitly allows f(x) = ∞; in that case the equality holds in the sense that both sides are ∞. As an example, take f_{n}(x) = n, so that f(x) = ∞ everywhere.
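To see the theorem in action numerically, here is a small sketch (the specific sequence f_n(x) = min(n, 1/√x) on (0,1] is my choice for illustration, not from the text above): the f_n are bounded, monotone increasing in n, and their limit f(x) = 1/√x is unbounded yet has finite integral 2, with ∫f_n = 2 − 1/n converging to it.

```python
import numpy as np

def f_n(x, n):
    # truncation of the unbounded limit f(x) = 1/sqrt(x) at height n
    return np.minimum(n, 1.0 / np.sqrt(x))

# midpoint rule on a fine grid of (0, 1]
xs = (np.arange(1_000_000) + 0.5) / 1_000_000

for n in [1, 2, 4, 8, 16]:
    approx = f_n(xs, n).mean()      # integral over [0,1] ~ average value
    print(n, approx, 2 - 1 / n)     # exact value of the integral is 2 - 1/n
```

The integrals increase monotonically to 2 = ∫_{0}^{1} x^{-1/2} dx, exactly as the theorem predicts.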

This is the basic convergence theorem for Lebesgue integrals. The monotone convergence theorem is a large part of what makes Lebesgue's integral so useful: you can exchange a Lebesgue integral with a limit (or with a summation) relatively easily.

With the Riemann integral, we cannot even write the theorem down: the limit f(x) is measurable, but it need not be Riemann integrable even when every f_{n} is. Lebesgue integrals really are *easier*!

**Example.**
Let q_{1},q_{2},... be an enumeration of the rational numbers on [0,1]. Define

f_{n}(x) = 1, if x ∈ {q_{1}, ..., q_{n}},

f_{n}(x) = 0, otherwise.

Clearly ∫_{0}^{1} f_{n}(x)dx = 0, since f_{n}(x) ≠ 0 only at finitely many points, a set of measure zero. These functions are nonnegative and monotone increasing, so the theorem applies. Indeed, the limit
f(x) = 1, if x is rational,

f(x) = 0, if x is irrational,

exists, is measurable (but not Riemann integrable), and ∫_{0}^{1} f(x)dx = 0.
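The example above can be sketched in code. The enumeration by denominator below is one convenient choice (the theorem works for any enumeration of the rationals); the Riemann-sum computation shows why the limit f fails to be Riemann integrable: tagging every subinterval at a rational point gives sum 1, while irrational tags give 0.

```python
from fractions import Fraction

def rationals_in_unit_interval(max_den):
    """Rationals in [0,1] with reduced denominator <= max_den (a finite set)."""
    return {Fraction(p, q) for q in range(1, max_den + 1) for p in range(q + 1)}

qs = rationals_in_unit_interval(50)

# f is the indicator of this finite set of rationals (evaluate on Fractions);
# each f_n vanishes outside a finite set, so its integral is 0.
f = lambda x: 1 if x in qs else 0

# A Riemann sum for the limit function tagged at rational points k/50:
rational_tagged_sum = sum(f(Fraction(k, 50)) for k in range(50)) / 50   # = 1.0
# Tagged at irrational points, every term is 0 and the sum is 0,
# so the Riemann sums of the limit do not converge to a single value.
```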

The reason this works so nicely is best seen by analogy with summation: changing the order of summation in a double series usually fails, but it always works if the terms are nonnegative. The same theorem, with summation replacing integration, is perhaps easier to grasp. And summation is just a special case of Lebesgue integration, namely integration with respect to counting measure.
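The summation analogue can be checked directly. The particular double series below (terms 2^{-i} 3^{-j}, truncated for computation) is my choice for illustration; since all terms are nonnegative, both iterated sums agree, here approximating (∑ 2^{-i})(∑ 3^{-j}) = 2 · 3/2 = 3.

```python
# Nonnegative double series: both orders of summation give the same value.
N = 20
a = [[1.0 / (2**i * 3**j) for j in range(N)] for i in range(N)]

rows_first = sum(sum(row) for row in a)
cols_first = sum(sum(a[i][j] for i in range(N)) for j in range(N))
# both iterated sums approximate 2 * 3/2 = 3
```

With terms of mixed sign, rearrangement can change the value of a series; nonnegativity is exactly what rescues the exchange, just as in the monotone convergence theorem.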