Computer Architecture CS 6810
These 17 pages of class notes were uploaded by Marian Kertzmann DVM on Monday, October 26, 2015. The notes belong to CS 6810 at the University of Utah, taught by Alan Davis in Fall.
Lecture 12: Cache Innovations

Today: cache access basics and innovations (Sections 5.1-5.2)

Accessing the Cache
- Byte address 101000: with 8-byte words, the low 3 bits are the block offset; 8 words in the cache imply 3 index bits.
- Direct-mapped cache: each address maps to a unique location (set) in the data array.

The Tag Array
- The remaining high-order bits of the byte address (e.g. 101000) form the tag.
- The index selects an entry in both the tag array and the data array; the stored tag is compared against the address tag to detect a hit.

Increasing Line Size
- A large cache line size gives a smaller tag array and fewer misses, because of spatial locality.
- Example: byte address 10100000 in a cache with 32-byte lines; the offset field grows with the line size (also called block size).

Associativity
- Set associativity: fewer conflicts, but wasted power because multiple ways of data and tags are read in parallel before the compare selects one.

Example
- A 32 KB 4-way set-associative data cache with 32-byte lines: how many sets? How many index bits, offset bits, and tag bits? How large is the tag array?

Cache Misses
- On a write miss, you may either choose to bring the block into the cache (write-allocate) or not (write-no-allocate).
- On a read miss, you always bring the block in (spatial and temporal locality). But which block do you replace?
  - no choice for a direct-mapped cache
  - randomly pick one of the ways to replace
  - replace the way that was least recently used (LRU)
  - FIFO replacement (round-robin)

Writes
- When you write into a block, do you also update the copy in L2?
  - write-through: every write to L1 is also a write to L2
  - write-back: mark the block dirty; when the block gets replaced from L1, write it to L2
- Write-back coalesces multiple writes to an L1 block into one L2 write.
- Write-through simplifies coherence protocols in a multiprocessor system, as the L2 always has a current copy of the data.

Reducing Cache Miss Penalty
- Multi-level caches
- Critical word first
- Priority for reads
- Victim caches

Multi-Level Caches
- The L2 and L3 have properties that are different from the L1:
  - access time is not as critical for L2 as it is for L1 (every load/store instruction accesses the L1)
  - the L2 is much larger and can consume more power per access
- Hence they can adopt alternative design choices: serial tag and data access, high associativity.

Read/Write Priority
- For write-back and write-through caches, writes to lower levels are placed in write buffers.
- On a read miss, we must look up the write buffer before checking the lower level.
- On a write miss, the write can merge with another entry in the write buffer, or it creates a new entry.
- Reads are more urgent than writes: the probability of an instruction waiting for the result of a read is close to 100%, while the probability of an instruction waiting for the result of a write is much smaller. Hence reads get priority, unless the write buffer is full.

Victim Caches
- A direct-mapped cache suffers from misses because multiple pieces of data map to the same location.
- The processor often tries to access data that it recently discarded; all discards are placed in a small victim cache (4 or 8 entries), which is checked before going to L2.
- Can be viewed as additional associativity for the few sets that tend to have the most conflicts.

Types of Cache Misses
- Compulsory misses: happen the first time a memory word is accessed; these are the misses even an infinite cache would incur.
- Capacity misses: happen because the program touched many other words before re-touching the same word; these are the misses of a fully-associative cache.
- Conflict misses: happen because two words map to the same location in the cache; these are the additional misses incurred when moving from a fully-associative to a direct-mapped cache.
- Sidenote: can a fully-associative cache have more misses than a direct-mapped cache of the same size?

What Influences Cache Misses?
- For each change below, consider its effect on compulsory, capacity, and conflict misses:
  - increasing cache capacity
  - increasing the number of sets
  - increasing block size
  - increasing associativity

Reducing Miss Rate
- Large block size: reduces compulsory misses, and reduces the miss penalty when there is spatial locality; but increases traffic between levels, space wastage, and conflict misses.
- Large caches: reduce capacity and conflict misses; access time penalty.
- High associativity: reduces conflict misses. Rule of thumb: a 2-way cache of capacity N/2 has about the same miss rate as a 1-way cache of capacity N. Access time penalty.
- Way prediction: by predicting the way, the access time is effectively that of a direct-mapped cache; can also reduce power consumption.

Tolerating Miss Penalty
- Out-of-order execution: can do other useful work while waiting for the miss, and can have multiple outstanding cache misses; the cache controller has to keep track of them (non-blocking cache).
- Hardware and software prefetching into prefetch buffers; aggressive prefetching can increase contention for buses.
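The sizing questions in the 32 KB 4-way example above can be checked with a short sketch. The `cache_geometry` helper is hypothetical (not from the notes), and the 32-bit address width is an assumption.

```python
# Hypothetical helper illustrating the cache-sizing arithmetic for the
# example above; assumes 32-bit byte addresses (an assumption, the notes
# do not state an address width).

def cache_geometry(capacity_bytes, ways, line_bytes, addr_bits=32):
    """Return (sets, offset_bits, index_bits, tag_bits)."""
    lines = capacity_bytes // line_bytes        # total cache lines
    sets = lines // ways                        # lines grouped into sets
    offset_bits = line_bytes.bit_length() - 1   # log2(line size)
    index_bits = sets.bit_length() - 1          # log2(number of sets)
    tag_bits = addr_bits - index_bits - offset_bits
    return sets, offset_bits, index_bits, tag_bits

# 32 KB, 4-way set-associative, 32-byte lines:
sets, off, idx, tag = cache_geometry(32 * 1024, 4, 32)
print(sets, off, idx, tag)          # 256 5 8 19

# Tag array size: one tag per line (plus valid/dirty bits, not counted here)
tag_array_bits = (32 * 1024 // 32) * tag
print(tag_array_bits)               # 1024 entries x 19 bits = 19456 bits
```

So the cache has 256 sets, addressed by 5 offset bits, 8 index bits, and 19 tag bits, and the tag array holds roughly 2.4 KB of tags.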
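To see conflict misses, and how associativity removes them, concretely, here is a minimal trace-driven sketch with LRU replacement. The `misses` function and the address trace are illustrative inventions, not from the notes.

```python
from collections import OrderedDict

def misses(addresses, sets, ways, line_bytes=32):
    """Count misses for a cache with `sets` sets of `ways` lines each, LRU."""
    cache = [OrderedDict() for _ in range(sets)]   # per-set: tag -> None, in LRU order
    miss = 0
    for addr in addresses:
        block = addr // line_bytes                 # strip the offset bits
        s, tag = block % sets, block // sets       # index and tag fields
        if tag in cache[s]:
            cache[s].move_to_end(tag)              # hit: refresh LRU position
        else:
            miss += 1
            cache[s][tag] = None                   # allocate the block
            if len(cache[s]) > ways:
                cache[s].popitem(last=False)       # evict the LRU way
    return miss

# Two blocks that map to the same set of an 8-set direct-mapped cache:
trace = [0x0000, 0x1000, 0x0000, 0x1000] * 4
print(misses(trace, sets=8, ways=1))   # direct-mapped: every access conflicts -> 16
print(misses(trace, sets=4, ways=2))   # same capacity, 2-way LRU -> 2 compulsory misses
```

This matches the rule of thumb above: the 2-way cache with half as many sets (same total capacity) turns all the conflict misses into hits, leaving only the two compulsory misses.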
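The claim that write-back coalesces multiple L1 writes into a single L2 write can be illustrated with a toy direct-mapped, write-allocate cache. `WriteBackCache` is a hypothetical sketch, not an implementation from the notes.

```python
class WriteBackCache:
    """Toy direct-mapped, write-allocate, write-back cache; counts L2 write traffic."""

    def __init__(self, sets, line_bytes=32):
        self.sets, self.line = sets, line_bytes
        self.tags = [None] * sets                  # one tag per set (direct-mapped)
        self.dirty = [False] * sets                # dirty bit per line
        self.l2_writes = 0

    def access(self, addr, is_write):
        block = addr // self.line
        s, tag = block % self.sets, block // self.sets
        if self.tags[s] != tag:                    # miss: allocate, even on writes
            if self.dirty[s]:
                self.l2_writes += 1                # write the dirty victim back to L2
            self.tags[s], self.dirty[s] = tag, False
        if is_write:
            self.dirty[s] = True                   # writes coalesce in the L1 line

c = WriteBackCache(sets=8)
for _ in range(10):
    c.access(0x40, is_write=True)                  # ten writes to the same block
c.access(0x40 + 8 * 8 * 32, is_write=False)        # conflicting block evicts it
print(c.l2_writes)                                 # write-back: 1 L2 write, not 10
```

Under write-through, the same ten stores would have produced ten L2 writes; here the dirty bit defers them all into one write-back at eviction time, at the cost of the L2 not always holding a current copy.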