Memory caching details vary with cache size. Assume that a cache maps references to addresses 0001 and 8001 to the same cache cell. If both addresses are in use at the same time, only one can be cached. In the worst case, they will compete for the cache cell and may actually slow down overall operation by thrashing -- pushing each other in and out of cache. To get around this, caches may allow several similar addresses to be cached simultaneously by splitting the cache into equal-size pieces with identical addressing. The information required to identify which part of the cache is in use comes from a separate fast memory called TAG memory. A very rough analogy would be a telephone system with two 1000-line PBXs. Numbers 0001 through 0999 are handled by one PBX; 1000 through 1999 are routed to the second. The final three digits of the phone number correspond to the cache address. The first digit selects which PBX/part of cache to use and corresponds to the TAG. This technique is called associative caching.
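The address split described above can be sketched in a few lines of Python. This is an illustrative simplification, not any specific CPU's scheme: it assumes a cache of 0x8000 cells, so the low bits of an address pick the cache cell and the remaining high bits become the TAG. Note that hex addresses 0001 and 8001 yield the same cell but different TAGs.

```python
# Illustrative sketch only: split an address into a cache-cell index
# and a TAG, assuming a (hypothetical) cache of 0x8000 cells.
CACHE_SIZE = 0x8000

def split(addr):
    index = addr % CACHE_SIZE    # low bits: which cache cell to use
    tag = addr // CACHE_SIZE     # high bits: stored in TAG memory
    return tag, index

print(split(0x0001))   # (0, 1)
print(split(0x8001))   # (1, 1) -- same cell, different TAG
```

Because the two addresses share an index, a cache without TAG information cannot tell them apart; the TAG is what disambiguates them.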
Since the TAG information has to be known in order to select the correct cache cell, TAG memory has to be even faster than cache memory. This can be accomplished either by using a technology for TAG that is faster than that used by the cache, or by waiting for the TAG data before selecting the cache cell -- which slows down cache access. In recent years, TAG memory for memory caching has been incorporated into the CPU rather than the cache subsystem in order to limit delays in cache accessing.
In the case of memory caching, the number of associative addresses is often two or four, leading to the terms 2-way or 4-way association. All things being equal, association doubles or quadruples the chance of cache address conflicts, but provides a mechanism for resolving most of the conflicts while keeping the data in cache. It is still possible for conflicts to force data out of the cache if too many cells with similar addresses are in use, but this will normally be much less likely than with non-associative caches. Associative caches are often a bit slower than non-associative caches for non-conflicting addresses because of delays in constructing addresses from two fragments stored in different places. But they have far fewer irresolvable address conflicts that require slow accesses to main memory for resolution. Overall, association is thought to speed up cache access, although the amount of benefit will vary with the situation.
Copyright 1994-2008 by Donald Kenney.