A Multiword Direct-Mapped Cache is a more sophisticated version of a normal Direct-Mapped Cache.
Looking at the weaknesses of a normal cache, one pretty soon arrives at these two observations:
1. A normal direct-mapped cache wastes cache memory (and cache memory is very expensive)
Looking at the example Direct-Mapped Cache makes this clear: every cache entry is 14 bits wide (8 data bits + 5 tag bits + 1 valid bit). This means 6/14 ≈ 43% of the cache is overhead.
2. The direct-mapped cache deals pretty well with temporal locality, but is bad at exploiting spatial locality
This means that if you step through an array, the cache produces a cache miss on the first access to every single element.
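The spatial-locality problem can be made concrete with a small sketch (the function name and addresses here are illustrative, not from the text): counting the unavoidable "cold" misses for a sequential scan, once with one-byte blocks and once with 4-byte blocks.

```python
# Sketch: count cold misses for a sequential array scan.
# With one byte per cache entry, every element is its own block,
# so the first access to each element always misses.

def cold_misses(addresses, block_size):
    seen_blocks = set()
    misses = 0
    for addr in addresses:
        block = addr // block_size
        if block not in seen_blocks:   # first touch of this block
            misses += 1
            seen_blocks.add(block)
    return misses

array = list(range(16))           # 16 consecutive byte addresses
print(cold_misses(array, 1))      # one miss per element
print(cold_misses(array, 4))      # one miss per 4-byte block
```

With 4-byte blocks the scan takes a quarter of the cold misses, which is exactly the spatial-locality win the multiword cache is after.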

Both problems have one solution: store more memory elements in one cache entry (normally words, though our example is actually a multibyte cache; that does not matter here, it just saves space). With a multiword cache you get fewer cache misses, as several elements are loaded at once. And since the tag and the valid bit are stored only once for all elements of a block, you also save space. Let's look at one multiword cache:
Memory addresses are 8 bits wide.
This cache is really small: it has only two entries (it is the cache from Direct-Mapped Cache made multibyte). The index is now only 1 bit wide. The two other former index bits are now called block index bits; they select the byte from the 4-byte block, using for example a multiplexor.
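The resulting address split (5 tag bits, 1 index bit, 2 block index bits) can be sketched like this; the function name is made up for illustration:

```python
# Sketch: split the 8-bit addresses of the example cache into
# tag (5 bits), index (1 bit) and block index (2 bits).

TAG_BITS, INDEX_BITS, OFFSET_BITS = 5, 1, 2

def split_address(addr):
    offset = addr & 0b11                          # lowest 2 bits pick the byte
    index = (addr >> OFFSET_BITS) & 0b1           # next bit picks the cache line
    tag = addr >> (OFFSET_BITS + INDEX_BITS)      # remaining 5 bits are the tag
    return tag, index, offset

print(split_address(0b00110011))   # tag 0b00110, line 0, byte 3
```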

Nr  V Tag     Data
0   1 00110   00000001 00001001 00010001 00001001
1   1 10000   00000010 00100001 00100001 00100001

This cache "wastes" only 6/38 ≈ 16% of its bits.
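The two overhead figures from the text can be checked with one helper (a made-up name, just for the arithmetic):

```python
# Sketch: fraction of a cache entry taken up by tag + valid bit.
def overhead(data_bits, tag_bits=5, valid_bits=1):
    total = data_bits + tag_bits + valid_bits
    return (tag_bits + valid_bits) / total

print(f"{overhead(8):.0%}")    # single-byte entry: 6/14, about 43%
print(f"{overhead(32):.0%}")   # 4-byte block:      6/38, about 16%
```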
Reading is pretty easy: use the index to select the block, then check whether the tag matches and the valid bit is set. If both check out, read the block and let the multiplexor pick the right data using the block index; if not, read the complete block from memory and write it to the cache.
Writing is, again, more complicated: use the index to select the block, then check whether the tag matches and the valid bit is set. If both check out, write the data to cache and memory; if not, first read the block from memory into the cache, then write the new data to both cache and memory.
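The read and write procedures above can be sketched as a tiny simulator. This is an assumed implementation of the described behaviour (fetch the whole block on a miss, write-through to memory), not code from the text, and all names are made up:

```python
# Sketch: 2-line, 4-byte-block direct-mapped cache with write-through.

class MultiwordCache:
    LINES, BLOCK = 2, 4

    def __init__(self, memory):
        self.memory = memory                      # backing store: list of bytes
        self.valid = [False] * self.LINES
        self.tag = [0] * self.LINES
        self.data = [[0] * self.BLOCK for _ in range(self.LINES)]

    def _split(self, addr):
        offset = addr % self.BLOCK                # block index bits
        index = (addr // self.BLOCK) % self.LINES # index bit
        tag = addr // (self.BLOCK * self.LINES)   # tag bits
        return tag, index, offset

    def _fill(self, tag, index, addr):
        # miss: read the complete block from memory into the cache line
        base = addr - addr % self.BLOCK
        self.data[index] = self.memory[base:base + self.BLOCK]
        self.tag[index], self.valid[index] = tag, True

    def read(self, addr):
        tag, index, offset = self._split(addr)
        if not (self.valid[index] and self.tag[index] == tag):
            self._fill(tag, index, addr)
        return self.data[index][offset]           # "multiplexor" picks the byte

    def write(self, addr, value):
        tag, index, offset = self._split(addr)
        if not (self.valid[index] and self.tag[index] == tag):
            self._fill(tag, index, addr)          # read block first on a miss
        self.data[index][offset] = value          # write cache ...
        self.memory[addr] = value                 # ... and memory (write-through)
```

A quick usage example: after `c = MultiwordCache(list(range(32)))`, the call `c.read(5)` misses, loads bytes 4..7 into line 1, and returns 5; a following `c.write(5, 99)` hits and updates both the cache line and the backing list.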

The example above is actually pretty bad at showing the advantages of a multiword cache. Instead it shows the disadvantages: if a direct-mapped cache has only a few times more entries than the block size, forget it. You get very few entries, which results in more cache misses. To make it even worse, a cache miss is more expensive now, as you have to read the whole block. But carefully designed multiword caches can reduce the miss rate by a factor of 2.
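The "very few entries" problem can be seen directly in the example cache's address arithmetic (helper name made up for illustration): any two addresses 8 bytes apart land on the same line with different tags, so alternating between them misses every single time.

```python
# Sketch: in a 2-line cache with 4-byte blocks, addresses 0 and 8
# share an index but differ in tag, so they evict each other.

def index_and_tag(addr, block=4, lines=2):
    return (addr // block) % lines, addr // (block * lines)

print(index_and_tag(0))   # line 0, tag 0
print(index_and_tag(8))   # line 0, tag 1: same line, so they conflict
```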

