Using the Principle of Locality to improve performance while keeping the memory system affordable, we can pose four questions about any level of the memory hierarchy. We will answer those questions for one level in particular: the cache.
- Block Placement - Where should a block be placed in the cache?
- Block Identification - How is a block found if it is in the cache?
- Block Replacement - Which block frame in the cache should be replaced upon a miss?
- Interaction Policies with Main Memory - What happens when reads and writes are done to the cache?
A number of hardware schemes have been developed for translating main memory addresses to cache memory addresses. The user does not need to know much about the address translation circuitry; this has the advantage that cache memory enhancements can be introduced into a computer without a corresponding need to modify application software.
The number of cache lines is much smaller than the number of main memory blocks. As a result, an algorithm is needed for mapping main memory blocks into cache lines, along with a means for determining which main memory block currently occupies a cache line.
The choice of cache mapping scheme affects cost and performance, and there is no single best method that is appropriate for all situations. There are three block placement methods, namely direct mapped, fully associative, and set associative; a placement sketch follows below.
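As a minimal sketch (the cache dimensions and block address below are hypothetical), the three schemes constrain where a main memory block may be placed as follows:

```c
#include <stdio.h>

int main(void) {
    unsigned block_addr = 75;  /* hypothetical main memory block address */
    unsigned num_lines  = 8;   /* total cache lines                      */
    unsigned num_sets   = 4;   /* sets in a 2-way set-associative cache  */

    /* Direct mapped: the block can go in exactly one line. */
    printf("direct mapped     -> line %u\n", block_addr % num_lines);

    /* Set associative: the block maps to one set, any way within it. */
    printf("set associative   -> set %u (either way)\n", block_addr % num_sets);

    /* Fully associative: the block may go in any line. */
    printf("fully associative -> any of %u lines\n", num_lines);
    return 0;
}
```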
Block Identification
In general, a cache has two important parts: the cache data line and the cache tags. At a finer granularity, a cache line can be broken down as shown below; a C-structure sketch follows the list.
- Valid Bit : set to 1 when valid data is stored in the cache line.
- Dirty Bit : set to 1 when the data has been modified but not yet written back to main memory.
- Tag : identifies which memory address is held in this line.
- Data : the data fetched from main memory.
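These fields might be represented as the following C structure; the 64-byte block size and 32-bit tag are assumptions for illustration, not fixed by the text:

```c
#include <stdint.h>

#define BLOCK_SIZE 64      /* hypothetical block size in bytes */

/* One cache line, mirroring the fields listed above. */
struct cache_line {
    unsigned valid : 1;            /* 1 => line holds valid data            */
    unsigned dirty : 1;            /* 1 => modified, not yet written back   */
    uint32_t tag;                  /* identifies the memory block held here */
    uint8_t  data[BLOCK_SIZE];     /* the block fetched from main memory    */
};
```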
Cache has an address tag on each block frame that gives the block address. Therefore, the tag entry of every cache block is checked to see if it matches the block address from the CPU. As a rule, all possible tags are searched in parallel because speed is critical.
There must be a way to know whether a cache block holds valid information. The most common procedure is to add a valid bit to the tag to say whether or not this entry contains a valid address. If the bit is not set, there cannot be a match on this address. Accordingly, an address generated by the CPU (a main memory address) is divided as shown below:
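As a minimal sketch, assuming a hypothetical geometry of 32-bit addresses, 64-byte blocks, and 128 sets, the address splits into a tag, an index (which selects the set), and a block offset:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical layout: | tag (19 bits) | index (7 bits) | offset (6 bits) | */
#define OFFSET_BITS 6   /* log2(64-byte block) */
#define INDEX_BITS  7   /* log2(128 sets)      */

int main(void) {
    uint32_t addr   = 0x12345678;                       /* example address       */
    uint32_t offset = addr & ((1u << OFFSET_BITS) - 1); /* byte within the block */
    uint32_t index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1); /* set   */
    uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS); /* compared on lookup  */
    printf("tag=0x%x index=%u offset=%u\n", tag, index, offset);
    return 0;
}
```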
If the total cache size is kept the same, increasing associativity increases the number of blocks per set, thereby decreasing the size of the index and increasing the size of the tag.
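A small worked example may help; the 8 KB cache and 64-byte blocks below are hypothetical. With 128 blocks total, each doubling of associativity halves the number of sets, removing one bit from the index and adding it to the tag:

```c
#include <stdio.h>

int main(void) {
    unsigned cache_bytes = 8 * 1024;               /* hypothetical 8 KB cache     */
    unsigned block_bytes = 64;                     /* hypothetical 64-byte blocks */
    unsigned blocks = cache_bytes / block_bytes;   /* 128 blocks in total         */

    for (unsigned ways = 1; ways <= blocks; ways *= 2) {
        unsigned sets = blocks / ways;
        unsigned index_bits = 0;
        while ((1u << index_bits) < sets)
            index_bits++;                          /* index_bits = log2(sets) */
        printf("%3u-way: %3u sets, %u index bits\n", ways, sets, index_bits);
    }
    return 0;
}
```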
Block Replacement
When a miss occurs, the cache controller must select a block to be replaced with the desired data. A benefit of direct-mapped placement is that hardware decisions are simplified - in fact, so simple that there is no choice: Only one block frame is checked for a hit, and only that block can be replaced. With fully associative or set-associative placement, there are many blocks to choose from on a miss. There are three primary strategies employed for selecting which block to replace:
Random - To spread allocation uniformly, candidate blocks are randomly selected. Some systems generate pseudorandom block numbers to get reproducible behavior, which is particularly useful when debugging hardware.
Advantage : simple to implement in hardware
Disadvantage : ignores Principle of Locality
Least-recently used (LRU) - To reduce the chance of throwing out information that will be needed soon, accesses to blocks are recorded. Relying on the past to predict the future, the block replaced is the one that has been unused for the longest time. LRU relies on a corollary of locality: If recently used blocks are likely to be used again, then a good candidate for disposal is the least-recently used block.
Advantage : takes locality into account
Disadvantage : as the number of blocks to keep track of increases, LRU becomes more expensive (harder to implement, slower, and often just approximated); a sketch of exact LRU victim selection follows this list.
First In First Out (FIFO) - Because true LRU can be expensive to track, FIFO approximates it by evicting the oldest block rather than the least-recently used one.
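Below is a minimal sketch of exact LRU victim selection for one set, assuming a hypothetical 4-way cache with per-way access timestamps (real hardware usually settles for cheaper approximations):

```c
#include <stdint.h>
#include <stdio.h>

#define WAYS 4                      /* hypothetical 4-way set */

/* Bookkeeping for one cache set under an LRU policy. */
struct lru_set {
    uint32_t tag[WAYS];
    int      valid[WAYS];
    uint64_t last_used[WAYS];       /* timestamp of the most recent access */
};

/* Pick the way to evict: an invalid way if one exists,
 * otherwise the way with the oldest access timestamp. */
int choose_victim(const struct lru_set *s) {
    int victim = 0;
    for (int w = 0; w < WAYS; w++) {
        if (!s->valid[w])
            return w;                                  /* free slot, no eviction */
        if (s->last_used[w] < s->last_used[victim])
            victim = w;                                /* older access wins      */
    }
    return victim;
}

int main(void) {
    struct lru_set s = {
        .tag       = {10, 20, 30, 40},
        .valid     = {1, 1, 1, 1},
        .last_used = {5, 2, 9, 7},  /* way 1 was touched longest ago */
    };
    printf("evict way %d\n", choose_victim(&s));       /* prints: evict way 1 */
    return 0;
}
```

Random replacement, by contrast, needs only a pseudorandom number generator, and FIFO needs only a per-set record of which way was filled first.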
Check out Part 2: http://codingfreak.blogspot.in/2009/03/cache-memory-part2.html
References
1. Hennessy, J. L., and Patterson, D. A. Computer Architecture: A Quantitative Approach, Third Edition.