(Programming:)

Name describing an implementation technique which achieves one (or both) of these aims:

  • Automatic reclamation of resources (typically memory) once the program can no longer use them;
  • A known, deterministic order of destruction, so that objects with destructors are finalised predictably.

As a technique for memory management (the use of the term "garbage collection" to describe reference counting is somewhat controversial; some people embrace it, while others refuse to consider it garbage collection at all), reference counting leaves much to be desired. Circular dependencies easily arise in real-life data structures (consider even a doubly linked list!), and reference counting cannot collect such structures.

Description

Reference counting manages resources (typically memory) which depend on other resources. In the memory management example (which we follow exclusively, although similar uses can be made of other types of resources), a block of memory contains pointers to other blocks of memory. A block may be freed by the program (or the run-time library) if it can no longer be used by the program; once freed, its memory can be reallocated for use elsewhere in the system. Assuming no pointer arithmetic (although this too can be accommodated with a smart compiler and some slight limitations on the language), the following rules determine which blocks may be freed safely:

  • Every block pointed to by a pointer held in a program variable is unsafe;
  • Every block pointed to by a pointer held in an unsafe block is unsafe;
  • Every block which is not unsafe may safely be released.

Like any garbage collection scheme, reference counting conservatively approximates the set of unsafe blocks (i.e. it will consider some safe blocks unsafe; this is better than the converse, but can still result in uncontrollable memory leaks, even in the simple doubly linked list example). We attach a reference count to each block, stating how many pointers there are to it (in particular, every block grows to have room for the count). Whenever a new pointer to the block is formed, the reference count is incremented.
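
In a lower-level language, the bookkeeping might look like this (a minimal C sketch; the Block type, its fixed-size pointer array, and the helper names are inventions for illustration, not any particular system's API):

#include <stdlib.h>

#define NPTRS 4                       /* pointers per block; arbitrary */

typedef struct Block {
    int           refcount;           /* how many pointers point here  */
    struct Block *ptr[NPTRS];         /* pointers held in this block   */
    /* ... payload ... */
} Block;

Block *block_new(void) {
    Block *b = calloc(1, sizeof *b);  /* all pointers start out null   */
    b->refcount = 1;                  /* the creator holds one pointer */
    return b;
}

void block_incref(Block *b) {
    if (b) b->refcount++;             /* a new pointer to b was formed */
}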

For example, the Perl statement
my $x = new Object;
results in a new block holding an Object, with a reference count of 1: exactly one pointer, $x, points at it.

If we now say

my $y = $x;
then the reference count becomes 2: both $x and $y point at our block.

Whenever a pointer is abandoned, the reference count is decremented; if it reaches 0, the block may safely be released. (The converse is not true: consider a doubly linked list somewhere in memory, and a program that holds no pointers into it. Assuming no C-like pointer arithmetic is allowed, the program can never legally access the list again, so it is safe to free -- yet the reference counts on all its nodes remain positive.)
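
Continuing the hypothetical sketch, abandoning a pointer is the mirror image (block_free, which abandons the dead block's own pointers, is sketched a little further down):

void block_free(Block *b);            /* dismantles a dead block; see below */

void block_decref(Block *b) {
    if (b && --b->refcount == 0)      /* the last pointer was abandoned */
        block_free(b);
}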

If we now say
$x = new OtherObject;
$y = 17;
(and assume no other pointers to the original Object were formed, so its reference count is still 2), the original Object has its reference count decremented as part of each of the two assignments (the assignment's RHS is constructed, and its reference count incremented, before the old referent's count is decremented -- otherwise $y = $y would free the Object!). So after $y = 17, its reference count is 0, and it is freed.
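
In the hypothetical C sketch, this ordering rule is easiest to see as an assignment helper: bump the new referent before dropping the old one.

void block_assign(Block **slot, Block *newval) {
    block_incref(newval);             /* count the new pointer first...      */
    block_decref(*slot);              /* ...or self-assignment could free it */
    *slot = newval;
}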

Naturally, before a block is freed, all pointers it contains are abandoned, and their reference counts accordingly decremented. This can lead to further freeing, complete with more pointers being abandoned; in such a cascade, a dependent object is freed after the objects it depends on.
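
In the sketch, this is exactly what block_free does: release the dead block's own pointers (possibly cascading), then its memory.

void block_free(Block *b) {
    for (int i = 0; i < NPTRS; i++)
        block_decref(b->ptr[i]);      /* may recursively free dependencies */
    free(b);                          /* the block itself goes last        */
}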

Suppose a Foo object is held in $x, and that the Foo object holds 2 pointers to a Bar object. Suppose further that no other references exist to either of these 2 objects. So the Foo's reference count is 1 ($x points to it) and the Bar's reference count is 2 (2 pointers from the Foo object).

If we abandon the Foo object by saying $x = 17, its reference count is decremented to 0, so it's freed. But first, the Bar's reference count is decremented by 2 as the Foo's pointers are abandoned, so it too is freed. Note that the order of destruction guarantees the Bar gets freed first.

Implementation details

Recursion
It appears that object destruction requires recursion through a possibly huge tree of dependencies. But in practice, a clever technique known as pointer reversal lets reference counting get by with only a constant amount of additional storage: the pointer fields of the dead blocks themselves are reused to hold the traversal state.
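
The full technique is beyond this writeup, but the key observation is easy to sketch: once a block's count hits 0 it is garbage, so its own pointer fields can be reused as scratch space. Here is a non-recursive variant of the hypothetical block_free above (a worklist-threading cousin of pointer reversal; the names are mine) that dismantles arbitrarily deep structures with constant extra storage:

static Block *dead_list = NULL;       /* dead blocks awaiting dismantling, */
                                      /* linked through their ptr[0] field */

static void push_dead(Block *b) {
    while (b) {
        Block *child = b->ptr[0];     /* save the real ptr[0] before reuse */
        b->ptr[0] = dead_list;        /* thread the list through b itself  */
        dead_list = b;
        /* ptr[0]'s referent is decremented now; if it dies, loop on it */
        b = (child && --child->refcount == 0) ? child : NULL;
    }
}

void block_free(Block *b) {
    push_dead(b);
    while (dead_list) {
        Block *cur = dead_list;
        dead_list = cur->ptr[0];      /* pop the next dead block */
        for (int i = 1; i < NPTRS; i++) {   /* ptr[0] was handled at push time */
            Block *child = cur->ptr[i];
            if (child && --child->refcount == 0)
                push_dead(child);
        }
        free(cur);
    }
}
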
Storage
An additional word is required to hold the reference count. For small objects, this is significant overhead! If we use only 2 bytes, we can have at most 65535 pointers to the object; that is actually very little for some uses. And using 4 bytes to reference count a 4-byte float is 100% bloat -- very poor.
Ease of implementation
Here reference counting shines. A simple implementation is very easy (but see below).
Known order of freeing
Again, an advantage. If objects have destructors, they get called in a correct, predictable order. Garbage collection makes no such guarantees.
Multithreading
All updates to the reference count must be atomic. See below for the horrific impact this has on performance.
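
In C11 terms, the hypothetical helpers would have to use atomic read-modify-write operations; a minimal sketch (the ABlock names are mine, and the memory orderings follow the common relaxed-increment, acquire/release-decrement pattern):

#include <stdatomic.h>

typedef struct ABlock {
    atomic_int refcount;
    /* ... pointers and payload as before ... */
} ABlock;

void ablock_free(ABlock *b);          /* dismantle, as in the earlier sketch */

void ablock_incref(ABlock *b) {
    /* relaxed ordering suffices: the caller already holds a reference */
    atomic_fetch_add_explicit(&b->refcount, 1, memory_order_relaxed);
}

void ablock_decref(ABlock *b) {
    /* fetch_sub returns the old value; 1 means we dropped the last ref */
    if (atomic_fetch_sub_explicit(&b->refcount, 1, memory_order_acq_rel) == 1)
        ablock_free(b);
}
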
Performance
Reference counting has remarkably poor performance. Almost every line of the program requires some reference count to be twiddled. Some optimisation is possible with a smart compiler, but not too much. A loop
my $q = $p;          # (1)
while (defined $q) {
  ++$q->{data};      # (2)
  $q = $q->{next}    # (3),(1)
}
(Perl to increment each data element of a linked list; no, Perl programmers don't do this) performs THREE increments/decrements for every node of the list: first (1) it increments the node's reference count when making $q point at it; then (2) it increments $q->{data}; finally (3) it decrements the node's reference count when $q leaves it. That's three times as much work as the code requires, and the extra work is all memory traffic (which is notoriously slow on modern computers).
Multithreaded performance
Add threads, and reference counting's performance becomes impossibly bad: every update to a reference count now requires a lock to be acquired and released. This is *s*-*l*-*o*-*w* in an inner loop.
Cyclic dependencies
As alluded to above, reference counting only works if the graph of pointers between objects is a directed acyclic graph (DAG). In particular, it can't be used for doubly linked lists, trees with an "up" pointer, and many other common structures.
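
To see the failure concretely with the hypothetical helpers sketched earlier: build a two-block cycle and abandon the only outside pointers. No count ever reaches 0.

void make_cycle(void) {
    Block *a = block_new();           /* refcount(a) = 1: our pointer        */
    Block *b = block_new();           /* refcount(b) = 1: our pointer        */
    block_assign(&a->ptr[0], b);      /* a -> b: refcount(b) = 2             */
    block_assign(&b->ptr[0], a);      /* b -> a: refcount(a) = 2             */
    block_decref(b);                  /* abandon our pointers: the counts    */
    block_decref(a);                  /* drop to 1 and 1 -- never 0. Leaked. */
}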

Improvements can be made when using the technique in a lower-level language like C to control only some objects. Reference count bumping can be performed only for major operations (e.g. there is no need to deal with reference counts at all in the linked-list increment loop above, since all reference counts in the list are at least 1; a smart programmer can take advantage of this to avoid the counting overhead for the entire loop, although of course this requires much care). Locks can be held for fewer objects in the multithreaded case: only an object which might be contested must be locked. All of which leads to faster but less clear and less safe code.
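
For instance, a C version of the increment loop above can traverse the list on a borrowed reference, touching no counts at all, because the caller's reference keeps every node alive for the duration (the Node type is hypothetical):

typedef struct Node { int data; struct Node *next; } Node;

void increment_all(Node *p) {         /* caller guarantees the list stays alive */
    for (Node *q = p; q != NULL; q = q->next)
        q->data++;                    /* no reference count traffic at all      */
}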

Reference counting is chiefly used due to its ease of implementation and its guaranteed order of destruction. In other regards, it is less than stellar.