Although the above writeups name a few loosely coupled distributed computing efforts, they do not explain the essence of distributed computing.
Update: It seems the "above writeups" have been removed. Never mind; it doesn't change the meaning of this node.
Perhaps the easiest way to explain what distributed computing is all about is to name a few of its properties:
- Distributed computing consists of a network of more or less autonomous nodes.
- The nodes do not share primary storage (RAM) or secondary storage (disk): no node can directly access another node's RAM or disk. Contrast this with a multiprocessor machine, where the different "nodes" (CPUs) share the same RAM and secondary storage over a common bus.
- A well-designed distributed system does not crash when a single node goes down.
- For a computing task that is parallel in nature, scaling the system by adding extra nodes is much cheaper than buying a single faster machine.
Of course, if your processing task is inherently sequential (each result depends on the previous one), a distributed computing system may not be very beneficial.
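The split-work-across-nodes idea above can be sketched in a toy example. This is only an illustration: Python processes on one machine stand in for the independent nodes, and the chunking scheme is an assumption of mine, not anything prescribed by a real cluster framework. In a genuine distributed system each chunk would be shipped over the network to a separate machine with its own RAM and disk.

```python
# Toy sketch: an embarrassingly parallel task (summing squares) split into
# independent chunks, the way a distributed system splits work across nodes.
# Here multiprocessing worker processes stand in for the nodes.
from multiprocessing import Pool

def sum_squares(chunk):
    # Each "node" works only on its own slice; nothing is shared.
    lo, hi = chunk
    return sum(i * i for i in range(lo, hi))

def split(n, nodes):
    # Divide the range [0, n) into one contiguous chunk per node.
    step = n // nodes
    bounds = list(range(0, n, step)) + [n]
    return list(zip(bounds[:-1], bounds[1:]))

if __name__ == "__main__":
    chunks = split(1_000_000, nodes=4)
    with Pool(4) as pool:
        partials = pool.map(sum_squares, chunks)  # fan work out to "nodes"
    total = sum(partials)                         # combine partial results
    print(total)
```

Adding more "nodes" here just means a longer `chunks` list, which is exactly why parallel tasks scale cheaply; a sequential task, by contrast, would leave every node but one idle while it waits for the previous result.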
See also, Beowulf Cluster and Distributed System.