A concept in machine learning; the abbreviation stands for "Probably Approximately Correct." A class of functions is said to be PAC-learnable if there exists a learning algorithm such that, for every function in the class, every probability distribution on the function's domain, and any values of epsilon and delta (0 < epsilon, delta < 1), the algorithm, given enough random examples drawn according to that distribution, produces a hypothesis whose error exceeds epsilon with probability at most delta (equivalently: whose error is at most epsilon with probability at least 1 - delta). The error of a hypothesis is the probability that it disagrees with the target function on a random element of the domain, drawn according to the given distribution.
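In symbols (standard notation, none of it used in the entry above: A is the learner, S its sample of m examples, f the target function, D the distribution):

\[ \Pr_{S \sim D^m}\!\left[\, \mathrm{err}_D(A(S)) > \varepsilon \,\right] \le \delta, \qquad \text{where } \mathrm{err}_D(h) = \Pr_{x \sim D}\!\left[\, h(x) \ne f(x) \,\right]. \]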

Basically, this means that there is some way to learn a "pretty good" approximation to the target function: you can make the probability as high as you like that the error is as small as you like. (Of course, the tighter you make the bounds, the harder the learning algorithm is likely to have to work, and the more examples it will need.) A minimal sketch of the definition in action follows, assuming Python and the textbook example of threshold functions h_t(x) = 1 iff x >= t on [0, 1]; the uniform distribution, the target threshold 0.3, and the sample bound m >= (1/epsilon) ln(1/delta) (the standard bound for this particular class) are illustration choices, not part of the entry above:
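import math
import random

def learn_threshold(samples):
    """Return the smallest positively labelled point as the threshold:
    a consistent learner for the class h_t(x) = 1 iff x >= t."""
    positives = [x for x, y in samples if y == 1]
    return min(positives) if positives else 1.0

def pac_demo(epsilon=0.1, delta=0.05, trials=10_000, target=0.3):
    # Sample size sufficient for thresholds: m >= (1/eps) * ln(1/delta).
    m = math.ceil((1 / epsilon) * math.log(1 / delta))
    failures = 0
    for _ in range(trials):
        # Draw m labelled examples from the uniform distribution on [0, 1).
        samples = [(x, int(x >= target))
                   for x in (random.random() for _ in range(m))]
        t_hat = learn_threshold(samples)
        # Under the uniform distribution the hypothesis disagrees with the
        # target exactly on the interval between the two thresholds, so its
        # error is the length of that interval.
        if abs(t_hat - target) > epsilon:
            failures += 1
    print(f"m = {m} samples; empirical Pr[error > {epsilon}] = "
          f"{failures / trials:.4f} (PAC guarantee: <= {delta})")

if __name__ == "__main__":
    pac_demo()

Running this prints an empirical failure rate comfortably below delta; shrinking epsilon or delta drives m up, which is exactly the trade-off mentioned above.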