Asynchronous logic is really cool. Old computers like ENIAC and MU5 were asynchronous - they weren't timed by clocks. Pretty much every computer since 1965 has been designed with synchronous logic. However, clock synchronization problems are becoming obvious: new CPUs have very little tolerance for clock skew, and making sure every part of the CPU core receives the clock pulse at the same instant is not an easy problem to solve. Asynchronous logic solves this - or rather simply avoids it - and throws in a nifty bonus by addressing another problem: heat dissipation. In a synchronous design, the clock network and the gates it drives keep switching on every cycle whether or not they're doing useful work, since it's not trivial to stop part of a chip and restart it later (there are hackish ways around this, cf. Transmeta). An asynchronous design simply leaves a gate idle when it isn't needed, and an idle CMOS gate dissipates almost no power. Other interesting properties of asynchronous design include faster processing at lower temperatures (overclock your CPU by putting an ice cube on it) and processing speed that scales roughly with power consumption.
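To make the no-clock idea a bit more concrete, here is a tiny, purely illustrative Python sketch of a Muller C-element, the state-holding primitive that many asynchronous (self-timed) designs are built from. The writeup above doesn't name any particular primitive, so treat this as one assumed example rather than how any specific chip works: the output changes only when both inputs agree, and holds otherwise, which is how self-timed stages wait for each other without a shared clock.

    # Minimal simulation of a Muller C-element (illustrative sketch only).
    # The output follows the inputs when they agree and holds its previous
    # value when they disagree - the basic "wait for both sides" behaviour
    # that replaces the clock edge in many asynchronous circuit styles.

    class CElement:
        def __init__(self, initial=0):
            self.out = initial

        def step(self, a, b):
            # Update the output only when both inputs agree.
            if a == b:
                self.out = a
            return self.out

    c = CElement()
    for a, b in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]:
        print(f"a={a} b={b} -> out={c.step(a, b)}")
    # Prints 0, 0 (holds), 1, 1 (holds), 0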

So why isn't asynchronous logic in use? It's not easy to design asynchronous circuits. Having a clock gives you a nice, deterministic view of the silicon world at any given clock tick. Doing away with the clock makes life difficult: without a clock edge to announce that the data is valid now, every storage element needs its own completion or handshake signalling, so things like memories, which are nice and simple flip-flops in the synchronous world, become much more challenging. Still, it is increasingly difficult to distribute >1GHz clocks across a whole chip, so more research in this area will probably be happening before too long.
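As a rough illustration of that contrast, here is a toy Python sketch (not a real circuit or HDL model; both functions and their names are invented for this example): the synchronous version advances every register on a shared tick, while the asynchronous version moves each value through a local request/acknowledge handshake, so nothing global has to be distributed.

    # Toy contrast between the two design styles (illustrative only).

    def sync_pipeline(inputs):
        # One global clock: every register updates on the same tick,
        # so data takes one tick per stage regardless of readiness.
        reg1 = reg2 = None
        outputs = []
        for x in inputs:              # each loop iteration = one clock tick
            reg2, reg1 = reg1, x      # all registers update simultaneously
            outputs.append(reg2)
        return outputs

    def async_pipeline(inputs):
        # No clock: the producer offers a value and raises "req"; the
        # consumer accepts it and lowers req as an acknowledge. Data moves
        # as soon as the handshake completes, not on a global schedule.
        outputs = []
        for x in inputs:
            latch, req = x, True      # producer offers data, raises req
            if req:                   # consumer sees req and accepts
                outputs.append(latch)
                req = False           # acknowledge: ready for the next item
        return outputs

    print(sync_pipeline([1, 2, 3, 4]))   # [None, 1, 2, 3] - fixed per-tick latency
    print(async_pipeline([1, 2, 3, 4]))  # [1, 2, 3, 4] - flows as handshakes complete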

Check out the AMULET project for an example of an ARM processor implemented asynchronously.
