Predicate calculus is exactly what it says: a mathematical system to perform calculations on logical predicates.

A predicate is a logical statement. Unlike a proposition, a predicate is generic: it applies to a whole range of things. By filling in concrete values for these things, a predicate becomes a proposition.

So we can think of a predicate as a proposition with (free) variables.

Example:


  The sun shines.
                         (a proposition: either true or false)
  The sun shines at time t
                         (a predicate: true or false depending on t)
  The sun shines right now.
                         (a proposition)

If you find this confusing, it is because in natural language, there really is no such thing as a clear separation between propositions and predicates. Whether The sun shines. is true depends on the date and time of day, but also on the place on earth; it can even apply to a fictional situation in which time and place are not those on earth. These "hidden variables" aren't indicated in the sentence itself. It is easy to find sentences for which it would be difficult to agree on exactly which variables their truth values depend on.

In formal mathematical logic, things are different. First, we have the propositional calculus, which is concerned with the logical relationships between propositions expressible through logical connectives (and, or, implies, not, etcetera). Expressions in propositional calculus consist entirely of atomic propositions and these connectives. Its calculations represent logical inferences. For example, you have things like De Morgan's Law: for all propositions P and Q, it is the case that

  not (P and Q)   is true exactly when   not P or not Q
The calculus is a bunch of such rules; it allows sets of such propositions to be mechanically validated for consistency, brought into simpler equivalent forms, etcetera.
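This mechanical flavour is easy to demonstrate: over the two truth values, a law such as De Morgan's can be validated by brute enumeration of all assignments. A minimal sketch in Python (the function name is my own invention):

```python
from itertools import product

def de_morgan_holds():
    """Check not (P and Q) == (not P) or (not Q) for every truth assignment."""
    return all(
        (not (p and q)) == ((not p) or (not q))
        for p, q in product([False, True], repeat=2)
    )

print(de_morgan_holds())  # True: the law is a tautology
```

Any proposed law of propositional calculus can be validated the same way, since there are only finitely many assignments to check.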

Predicate calculus introduces the notion of variables mentioned above. For example, we can have

  S(t,p)
to represent the predicate "the sun shines at time t, place p".

What is more interesting, it allows propositions to be stated about these predicates, using quantifiers. There are two. The existential quantifier is traditionally denoted as a reversed E; I will use a normal E here:

  E.t: E.p: S(t,p)
This states: "there is a t such that there is a p such that S(t,p)", in plain English: "sometimes, the sun shines somewhere".

The universal quantifier, usually denoted with a reversed A, here with a normal A:

  A.t: E.p: S(t,p)
Literally: "for all t there is a p such that S(t,p)"; in English, "there is always a place where the sun shines", or shorter, "the sun always shines somewhere". We can also say

  E.p: A.t: S(t,p)
"there is a place where the sun always shines".
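Over a finite domain, these quantified formulas have direct counterparts in Python: E corresponds to any() and A to all(). A sketch, with an invented toy relation recording when and where the sun shines:

```python
# Toy domains and relation (invented for illustration).
times = ["morning", "noon", "evening"]
places = ["Lima", "Oslo"]
shines = {("morning", "Lima"), ("noon", "Lima"), ("evening", "Oslo")}

def S(t, p):
    """The predicate S(t,p): the sun shines at time t, place p."""
    return (t, p) in shines

# E.t: E.p: S(t,p) -- sometimes, the sun shines somewhere
print(any(S(t, p) for t in times for p in places))       # True

# A.t: E.p: S(t,p) -- the sun always shines somewhere
print(all(any(S(t, p) for p in places) for t in times))  # True

# E.p: A.t: S(t,p) -- there is a place where the sun always shines
print(any(all(S(t, p) for t in times) for p in places))  # False
```

Note how swapping the quantifier order changes the answer: every time has some sunny place, but no single place is sunny at every time.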

This simple system, combined with and, or, and not, does quite a good job of capturing the meaning of words such as "and", "or", "not", "some", "any", "all", "no", and the like, as used in English.

A very common addition is equality:

  A.t: A.p: A.q: not (S(t,p) and S(t,q)) or p = q
that is, at any time, there are no two places in which the sun shines unless they are the same place, or in plainer English: the sun always shines in one place.
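A sketch of checking this equality formula over a finite domain, with the universal quantification over p and q written out so that p = q is in scope (the toy data is invented):

```python
# Toy domains and relation (invented): at each time, at most one sunny place.
times = ["morning", "noon"]
places = ["Lima", "Oslo"]
shines = {("morning", "Lima"), ("noon", "Oslo")}

def S(t, p):
    return (t, p) in shines

# A.t: A.p: A.q: not (S(t,p) and S(t,q)) or p = q
at_most_one_place = all(
    (not (S(t, p) and S(t, q))) or p == q
    for t in times for p in places for q in places
)
print(at_most_one_place)  # True for this toy relation
```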

But wait a minute ... isn't "the sun always shines in one place" the same proposition as "the sun always shines somewhere"? I don't think so, but we could argue about that. The beauty of predicate logic is that we now have a way to state our arguments precisely. This is why it is indispensable in formal (mathematical) reasoning.

English and its speakers, on the other hand, tend to be unclear and sloppy. I feed "rains in one place" to Google, and the 4th hit returns

  It rarely rains in one place for long, and it's almost always sunny somewhere.
("Affordable Paradise", Hawaii). Aren't you tempted to read that when going to Hawaii, wherever you go, it will almost always be sunny there, and even in cases where it rains, the rain won't last? Well, technically, that is not implied: it might be the case that the islands have 99% rain, but there is this tiny patch of sun that wanders around, visiting every place with some regularity. These different interpretations correspond to different formulas of predicate logic, which can then be manipulated with calculus, for example, to see if one is always true when the other is. A job that isn't all that easy, so it's best done by machines.

(The quantifiers "rarely" and "almost always" could be added just like E and A.)

The rules of propositional calculus also apply to predicate calculus, but additional rules are needed to deal with quantifiers; for example, an equivalent of De Morgan's law:

  A.x: P(x)   is true exactly when   not E.x: not P(x)
that is, a predicate holds for everything exactly when there is nothing for which it does not hold.
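Over a finite domain, this quantifier form of De Morgan's law again becomes a relation between Python's all() and any(); a sketch with an arbitrary example predicate:

```python
domain = range(10)

def P(x):
    return x % 2 == 0  # an arbitrary example predicate

lhs = all(P(x) for x in domain)          # A.x: P(x)
rhs = not any(not P(x) for x in domain)  # not E.x: not P(x)
print(lhs == rhs)  # True: the two sides agree, whatever P is
```

Here both sides happen to be false (not every number under 10 is even), but the equivalence itself holds for any choice of P and domain.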

We speak of first order predicate calculus if predicates cannot be quantified over. A formula such as


  A.Q: E.x: (Q(x) and not P(x)) or (not Q(x) and P(x))

is a statement of second order predicate calculus.
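On a finite domain, second-order quantification can be made concrete by letting Q range over all subsets of the domain. A sketch (the predicate P is an invented example) that evaluates the formula above; it comes out false, since Q may be chosen equal to P itself:

```python
from itertools import chain, combinations

domain = [0, 1, 2]

def subsets(xs):
    """All subsets of xs, as tuples."""
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def P(x):
    return x > 0  # an arbitrary example predicate

# A.Q: E.x: (Q(x) and not P(x)) or (not Q(x) and P(x))
# "every predicate Q differs from P at some x"
stmt = all(
    any((x in Q and not P(x)) or (x not in Q and P(x)) for x in domain)
    for Q in (set(s) for s in subsets(domain))
)
print(stmt)  # False: taking Q = P, the inner E.x has no witness
```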

Relational calculus, or to be more exact domain calculus, is first order predicate calculus with use of the equality primitive = on variables. It is the basis of database languages such as SQL. In such languages, more primitives are present to express properties of the domains from which the variables are drawn, e.g. the < comparison on numerical variables.

First order predicate calculus is limited in expressive power. It is impossible to express, for instance, that a predicate P holds more often than not. The attempt


  A.x: E.y: P(x) and not P(y)

fails for the reason that the same y can be used for different x. A second order expression is required, e.g.

  not E.R: A.x: E.y: not R(x,y) or
      ( P(x) and not P(y) and A.x': not R(x',y) or x'=x )
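The limitation is one of the calculus, not of computation: over a finite domain, "more often than not" is a simple count, stated outside first order logic. A sketch with an invented toy predicate:

```python
domain = range(7)

def P(x):
    return x < 4  # invented toy predicate: holds for 4 of the 7 elements

# "P holds more often than not": a counting statement, not a first order one.
majority = sum(1 for x in domain if P(x)) > len(domain) / 2
print(majority)  # True: 4 out of 7 is a majority
```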

A common use for second order logic is the expression of transitive closure and other forms of iteration. For example, if P(p,c) expresses the relation "person p is a natural parent of person c", then you typically need second-order logic to say things about the ancestry relationship, for example, the circumstance that no person is his or her own ancestor.
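A sketch of this: computing the ancestry relation as the transitive closure of a small, invented parent relation, then checking that no person is his or her own ancestor:

```python
# Toy parent relation (names invented): (p, c) means p is a parent of c.
parent = {("alice", "bob"), ("bob", "carol"), ("carol", "dave")}

def is_ancestor(a, c):
    """True if a is an ancestor of c: transitive closure of `parent`."""
    frontier = {child for (p, child) in parent if p == a}
    seen = set()
    while frontier:
        if c in frontier:
            return True
        seen |= frontier
        frontier = {child for (p, child) in parent
                    if p in frontier and child not in seen}
    return False

print(is_ancestor("alice", "dave"))  # True
print(is_ancestor("dave", "alice"))  # False

# "no person is his or her own ancestor", checked on this finite relation:
people = {x for pair in parent for x in pair}
print(all(not is_ancestor(x, x) for x in people))  # True
```

The while loop is exactly the iteration that first order logic cannot express: it keeps applying the parent relation until nothing new is reached.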