The notion that a Beowulf cluster can outperform a traditional supercomputer at a fraction of the price is unfortunately very widespread, but wrong. The truth is that a Beowulf cluster is totally unfit for the class of problems that require real supercomputing, mainly because Ethernet has far too little bandwidth and, more importantly, far too much latency for the massive communication these problems require. This doesn't mean that the Beowulf concept of a cluster of workstations is useless: it's great when your computations are localized, but in that case you didn't need a real supercomputer in the first place.
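To put rough numbers on it (these are illustrative assumptions, not measurements of any particular machine): a small message over commodity Ethernet costs something on the order of 100 microseconds of latency, while a dedicated supercomputer interconnect delivers it in a few microseconds. Suppose a simulation node does 100 microseconds of useful arithmetic per timestep and then has to wait for, say, 10 small messages from its neighbours before it can continue. On the Ethernet cluster that's roughly 1000 microseconds of waiting per 100 microseconds of work, so the processor sits idle about 90% of the time; on a 2-microsecond interconnect the same exchanges cost about 20 microseconds, and the processor stays busy over 80% of the time. Same code, same CPUs, an order of magnitude difference in delivered performance, purely from latency.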



Rancid_Pickle: you misunderstood me. Let me clarify: yes, supercomputers do, of course, also use lots of processors. But the assumption that lots of fast processors can simply be thrown together to make a supercomputer (which lies at the heart of the "cluster of workstations == cheap supercomputer" notion) is false, or rather, is true only for some kinds of computational jobs. It's true for raytracing, brute-force encryption cracking, and the signal analysis that SETI@home does, because those jobs split into pieces that can be worked on independently. It's not true for other jobs, like physical or chemical simulations. On those problems a Beowulf cluster would perform very, very poorly, because the processors would spend 99% of their cycles idle, waiting for data, since each part of the problem depends on results just computed for other parts. Real supercomputing requires fast processors, lots of RAM for each of them, and (most importantly) an extremely fast, low-latency interconnection network. The ASCI machines like the Blue Mountain one have that; a cluster of workstations doesn't.
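
To make that dependency structure concrete, here is a minimal sketch of the kind of loop that dominates physical simulations: a 1D heat-diffusion solver parallelized with MPI. The grid size, step count, and constants are made up for illustration; this is not anyone's actual ASCI code. The point is the shape of it: every single timestep, every process must trade boundary values with its neighbours before it can compute anything, so every step pays the network's latency in full.

    /* Illustrative sketch: 1D heat diffusion with MPI halo exchange.
     * Compile with: mpicc -o heat heat.c */
    #include <mpi.h>
    #include <stdio.h>

    #define LOCAL_N 1024   /* grid points owned by each process (illustrative) */
    #define STEPS   1000   /* timesteps (illustrative) */

    int main(int argc, char **argv)
    {
        int rank, size;
        double u[LOCAL_N + 2];      /* local slab + one ghost cell per side */
        double unew[LOCAL_N + 2];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
        int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

        /* Initial condition: a hot spot on the first process. */
        for (int i = 0; i < LOCAL_N + 2; i++)
            u[i] = 0.0;
        if (rank == 0)
            u[1] = 100.0;

        for (int step = 0; step < STEPS; step++) {
            /* Halo exchange: send my edge cells, receive the neighbours'.
             * The computation below CANNOT start until these messages
             * arrive, so each step pays the network latency in full. */
            MPI_Sendrecv(&u[1],           1, MPI_DOUBLE, left,  0,
                         &u[LOCAL_N + 1], 1, MPI_DOUBLE, right, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&u[LOCAL_N],     1, MPI_DOUBLE, right, 1,
                         &u[0],           1, MPI_DOUBLE, left,  1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            /* Update: each new value depends on its neighbours, including
             * the ghost cells just received over the network. */
            for (int i = 1; i <= LOCAL_N; i++)
                unew[i] = u[i] + 0.25 * (u[i - 1] - 2.0 * u[i] + u[i + 1]);

            for (int i = 1; i <= LOCAL_N; i++)
                u[i] = unew[i];
        }

        if (rank == 0)
            printf("done: %d steps, 2 neighbour exchanges per step\n", STEPS);

        MPI_Finalize();
        return 0;
    }

You'd launch it with something like "mpirun -np 16 ./heat", and the same binary runs on both kinds of machines. But with two blocking exchanges per step for 1000 steps, a 100-microsecond network charges each process about 0.2 seconds of pure waiting, while a microsecond-class interconnect charges a few milliseconds. Real 3D simulation codes exchange far more data far more often per step, which is exactly where the cluster of workstations falls apart.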