Cache coherence is an important issue, if not the important issue, in multiprocessing machines with shared memory. The difficulty of handling cache coherence efficiently is what kept many early CPU designers from releasing multiprocessor chipsets.

Description
Cache coherence describes the problem of keeping a consistent view of memory between different processors that are each modifying various parts of memory. Now, anyone who is familiar with computers knows that there is a memory hierarchy. Cache is the fastest, smallest, and most expensive memory, and it sits right next to the CPU. The cache gets its data from the larger and slower main memory, which gets its data from the very large, very cheap, and extremely slow hard drive.

Now, the CPUs all have access to memory and the hard drive just like a single CPU would, but each processor has its own cache from which it performs normal operations. So what do you do if two processors are working on the same data? Or if data has changed in the cache of one processor, and another processor then requests that data from main memory? Traditionally, CPUs can access the same memory but NOT each other's caches. This is the problem of cache coherence.
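
To make the staleness concrete, here is a toy simulation in C. None of this is real hardware, and all the names (struct core, cached_x, and so on) are made up for illustration: each "core" pulls x into a private copy, core 0 writes its copy, and core 1 keeps happily reading its stale one.

    /* Toy illustration of the coherence problem: two "cores" each hold
     * a private cached copy of x, with no mechanism keeping them in sync. */
    #include <stdio.h>

    struct core {
        int cached_x;   /* this core's private cached copy of x */
        int valid;      /* does this core hold x in its cache? */
    };

    int main(void) {
        int memory_x = 0;                      /* main memory's copy of x */
        struct core c0 = { 0, 0 }, c1 = { 0, 0 };

        /* Both cores read x, pulling the value into their own caches. */
        c0.cached_x = memory_x; c0.valid = 1;
        c1.cached_x = memory_x; c1.valid = 1;

        /* Core 0 writes x = 42 in its cache (write-back: memory untouched). */
        c0.cached_x = 42;

        /* Core 1 hits in its own cache and never sees the new value. */
        printf("core 0 sees x = %d\n", c0.cached_x);  /* 42 */
        printf("core 1 sees x = %d\n", c1.cached_x);  /* 0 -- stale! */
        printf("memory has x = %d\n", memory_x);      /* 0 -- also stale */
        return 0;
    }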

Now to make the problem even worse, in shared memory machines there is only one memory bus. If each processor accesses memory frequently (instead of just its own cache), the bus becomes very busy and the machine slows down drastically. It is not realistic for a CPU to update main memory every time it touches its cache, because while one processor is using the memory bus, nobody else can.

Solutions
Possible solutions are numerous and vary from system to system. Depending on who builds your system, how many processors it has, and what resources are available, the solution to this problem will differ. The main method used, however, is snooping.

Snooping involves watching the memory bus: each time memory is read, the processor checks the bus to make sure it has the latest value, and each time a memory write occurs, the write is announced on the bus for other caches to see. There are various ways of doing this, and the protocol depends largely on the architecture of the cache, such as whether it is write-back or write-through.
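
As a rough sketch of the snooping idea (again, every name here -- bus_write, NCACHES, struct line -- is invented for illustration, not taken from any real interface): every cache watches each bus transaction and checks whether the address matches a line it holds. What it then does with that information is the subject of the next two paragraphs.

    /* Toy sketch of bus snooping: every cache other than the writer
     * inspects each bus write and notices when it holds that address. */
    #include <stdio.h>

    #define NCACHES 2

    struct line { int addr; int value; int valid; };
    struct line cache[NCACHES];

    void bus_write(int writer, int addr, int value) {
        (void)value;  /* what to do with the value (invalidate vs. update)
                         is the protocol choice described below */
        for (int i = 0; i < NCACHES; i++) {
            if (i == writer) continue;
            if (cache[i].valid && cache[i].addr == addr)
                printf("cache %d snooped a write to address %d it holds\n",
                       i, addr);
        }
    }

    int main(void) {
        cache[0] = (struct line){ 100, 7, 1 };  /* both caches hold addr 100 */
        cache[1] = (struct line){ 100, 7, 1 };
        bus_write(0, 100, 9);   /* cache 0 writes; cache 1 snoops it */
        return 0;
    }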

When two processors read the same value, there is obviously no problem. The problem occurs when two processors have the same memory value in their cache and one processor updates that value. Once this happens there are two main ways of handling the situation.

Write Invalidate: When a cache value is updated and stored to memory, the CPU sends a signal to all other processors requiring them to invalidate that memory location if they have it in their cache. This way, whenever a cached value is read it will be up to date, though the processor may have to go back to memory to get it.
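
A minimal sketch of write invalidate, assuming a write-through cache for simplicity (the function names write_x and read_x are illustrative, not from any real system): a write broadcasts an invalidate, so the other cache drops its copy and must re-fetch from memory on its next read.

    /* Minimal write-invalidate sketch over one shared location x. */
    #include <stdio.h>

    #define NCACHES 2
    struct line { int value; int valid; };
    struct line cache[NCACHES];
    int memory_x = 0;

    void write_x(int writer, int value) {
        cache[writer].value = value; cache[writer].valid = 1;
        memory_x = value;                    /* write-through to memory */
        for (int i = 0; i < NCACHES; i++)    /* broadcast: invalidate others */
            if (i != writer) cache[i].valid = 0;
    }

    int read_x(int reader) {
        if (!cache[reader].valid) {          /* miss: go back to memory */
            cache[reader].value = memory_x;
            cache[reader].valid = 1;
        }
        return cache[reader].value;
    }

    int main(void) {
        read_x(0); read_x(1);        /* both caches pull in x = 0 */
        write_x(0, 42);              /* cache 1's copy is invalidated */
        printf("core 1 reads x = %d\n", read_x(1));  /* 42, via memory */
        return 0;
    }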

Write Update: In this method, a CPU updates the copies in all other caches by broadcasting the new value on the bus. Any processor that holds that memory location in its cache updates its own local copy.
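
And a matching sketch of write update, with the same caveats as above (illustrative names, not a real implementation): the writer broadcasts the new value, and every cache holding the location updates its copy in place, so no later re-fetch is needed.

    /* Minimal write-update sketch over one shared location x. */
    #include <stdio.h>

    #define NCACHES 2
    struct line { int value; int valid; };
    struct line cache[NCACHES];

    void write_x(int writer, int value) {
        cache[writer].value = value; cache[writer].valid = 1;
        for (int i = 0; i < NCACHES; i++)    /* broadcast the new value */
            if (i != writer && cache[i].valid)
                cache[i].value = value;      /* update local copy in place */
    }

    int main(void) {
        cache[0] = (struct line){ 0, 1 };    /* both caches hold x = 0 */
        cache[1] = (struct line){ 0, 1 };
        write_x(0, 42);
        printf("core 1 reads x = %d\n", cache[1].value);  /* 42, no miss */
        return 0;
    }

The trade-off shows up even in these toys: invalidate forces a later cache miss and a trip back to memory, while update pushes the new value across the bus whether or not anyone ever wants it.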

On Intel dual-processor systems, the MESI protocol is used. Protocols like this add a few bits to each cache line to describe the status of that memory location. When a CPU accesses memory, it quickly polls the other CPUs to see whether they have a more recent value. It basically adds information to the data to keep track of whether or not it was modified. See the MESI protocol writeup if you would like more information.
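
Here is a rough sketch of just those status bits: the four MESI states and a few representative transitions. Real MESI handles more bus events than shown here (shared reads, write-backs of dirty data, and so on), and the function names are mine.

    /* The four MESI states a cache line can be in, plus representative
     * transitions for a local write, a snooped remote write, and a read miss. */
    #include <stdio.h>

    enum mesi { MODIFIED, EXCLUSIVE, SHARED, INVALID };

    /* Local write: from any state we become the owner of the line.
     * (From SHARED or INVALID this also broadcasts an invalidate.) */
    enum mesi on_local_write(enum mesi s) {
        (void)s;
        return MODIFIED;
    }

    /* Snooped write by another cache: our copy is now stale.
     * (A MODIFIED line would first write its dirty data back.) */
    enum mesi on_remote_write(enum mesi s) {
        (void)s;
        return INVALID;
    }

    /* Local read miss: EXCLUSIVE if no other cache holds the line,
     * SHARED if someone else does. */
    enum mesi on_read_miss(int others_have_copy) {
        return others_have_copy ? SHARED : EXCLUSIVE;
    }

    int main(void) {
        enum mesi s = on_read_miss(0);   /* fill the line: EXCLUSIVE */
        s = on_local_write(s);           /* write it: MODIFIED */
        s = on_remote_write(s);          /* another core writes: INVALID */
        printf("final state = %d (INVALID)\n", s);
        return 0;
    }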

It is also worth noting that all this memory traffic can create serious bus contention. The issue of synchronization is a large part of multiprocessing, and much of it comes down to deciding which processor gets the memory bus at any given time.