Sunday, October 18, 2015

“Block Replacement Simulator” and “Cache Time Analysis”

Introduction

We are going to use the "Block Replacement Simulator" and "Cache Time Analysis" tools to perform the required calculations.




PART 1 (Use Block Replacement Simulator)

a. The query sequence for cache access is given as follows (numbers are in decimal):

16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 16 1 17 33 49 1 65

Apply the following cache organization schemes to the query sequence above for a cache size of 16 blocks:

Use LRU as the replacement policy and compare the three cache configurations. Which one is better in this specific case? Why?

b. Use the same query sequence above and apply 4-way set associative (4 sets) cache organization for FIFO and LRU replacement policies. Which one is better in this specific case? Why?

c. What is the difference between least recently used (LRU) and least frequently used (LFU) cache policies? Do research on this topic and explain it briefly.

Configuration 1 - Direct Mapped (16 sets)

Configuration 2 - 4-way Set Associative (4 sets)

Configuration 3 - Fully Associative (1 set)

These tests show that decreasing the number of sets (which, for a fixed cache size of 16 blocks, means increasing the associativity) produces fewer misses and a lower miss rate, because with many small sets the blocks that map to the same set keep evicting each other. The direct-mapped configuration produces only one hit; the 4-way set-associative configuration produces 2 hits and 5 misses over the tail of the sequence; and the fully associative configuration produces 3 hits. A hit means the data is already in the cache, so the fully associative configuration gives the best performance in this case.
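The comparison is easy to reproduce with a small simulation. Below is a minimal sketch in Python (our own illustration, not the course tool; the simulate function and its parameters are made up for this post). Direct mapped is the special case of one way per set, and fully associative is the case of a single set:

from collections import OrderedDict

def simulate(sequence, num_sets, ways, policy="LRU"):
    """Count hits/misses for a set-associative cache.

    Each set is an OrderedDict ordered from oldest to newest:
    for LRU a hit moves the block to the newest position,
    for FIFO the insertion order is never updated.
    """
    sets = [OrderedDict() for _ in range(num_sets)]
    hits = misses = 0
    for block in sequence:
        s = sets[block % num_sets]
        if block in s:
            hits += 1
            if policy == "LRU":
                s.move_to_end(block)      # refresh recency on a hit
        else:
            misses += 1
            if len(s) == ways:
                s.popitem(last=False)     # evict the oldest entry
            s[block] = True
    return hits, misses

seq = [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
       16, 1, 17, 33, 49, 1, 65]
print(simulate(seq, 16, 1))   # Configuration 1: direct mapped, 16 sets
print(simulate(seq, 4, 4))    # Configuration 2: 4-way set associative
print(simulate(seq, 1, 16))   # Configuration 3: fully associative

The exact counts depend on the replacement policy the simulator applies, so the numbers may differ slightly from the tool's screenshots.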

b) LRU (Least Recently Used)

FIFO (First In, First Out)
The number of misses and hits does not change between the two policies, which shows that for this sequence the replacement policy does not affect the miss rate or the hit rate. However, the resulting cache contents differ slightly. The second one (FIFO) would be better than the first for our case because it is more efficient in terms of using the cache.
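Using the same simulate sketch from Part 1a (again our own illustration, not the course tool), the two policies can be compared directly on the 4-way configuration:

for policy in ("LRU", "FIFO"):
    hits, misses = simulate(seq, 4, 4, policy=policy)
    print(policy, "hits:", hits, "misses:", misses)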


C) The main difference is that LRU considers only recency: it evicts the page that has gone unused for the longest time, looking only at when each page was last accessed. LFU, on the other hand, tracks how often each page is used and evicts the page with the lowest access frequency, no matter how recently it was touched.

Consider a 3-block cache managed by LRU, serving the accesses A, B, C, A, B, C, D (contents after each access, least recently used on the left):

[A]
[A, B]
[A, B, C]
[B, C, A]
[C, A, B]
[A, B, C]
[B, C, D]
When we look at this example, we can easily see that we could do better: given the higher expected chance of requesting A again in the future, we should not evict it even if it is the least recently used block. Suppose the overall request counts are:
A - 12
B - 2
C - 2
D - 1

LRU compares only the times of last use, which is what evicted the heavily used A above. LFU compares the usage counts instead: D is the least frequently used block, so under LFU it would be the one replaced when the next new block arrives.
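A short sketch makes the contrast concrete (the trace and the request counts are taken from the example above; the variable names are our own): after the same accesses, LRU evicts A while LFU keeps it and evicts a low-count block instead.

from collections import Counter, OrderedDict

accesses = ["A", "B", "C", "A", "B", "C"]  # trace from the example above
# LRU state: OrderedDict ordered from least to most recently used.
lru = OrderedDict()
for block in accesses:
    lru[block] = True
    lru.move_to_end(block)

# LFU state: overall request counts, seeded from the table above
# (A has been requested far more often than the others).
freq = Counter({"A": 12, "B": 2, "C": 2, "D": 1})

# A new block D arrives and one resident block must be evicted.
lru_victim = next(iter(lru))                  # least recently used
lfu_victim = min(lru, key=lambda b: freq[b])  # least frequently used

print("LRU evicts:", lru_victim)  # A, despite its heavy overall use
print("LFU evicts:", lfu_victim)  # B or C, whichever count is lower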





















PART 2 (Use Cache Time Analysis tool)
A cache with the following specifications is given:

● Cache Size: 256 KByte
● Associativity: Direct mapped (8 sets)
● Block Size: 64 Bytes
● % of Writes: 22
● % of Dirty Data: 10
● Miss Penalty: 40 cycles
● Hit Time: 1 cycle
● Mem. Write: 6 cycles

A) Write through (with no-write allocate)
B) Write through (with allocate on miss)
C) Write back (with no-write allocate)
D) Write back (with allocate on miss)

Discussion:

Let's first split the four schemes into two groups: write-through and write-back.

For write-through with no-write allocate, the result is 2.19; for allocate on miss, it is 2.216. The result is the average time per memory access. If we check the differences between the two, we see only one calculation differs: the write-miss contribution for allocate on miss uses (MissPenalty + MemWriteTime), so we pay an extra MissPenalty, which increases the time per memory access (indeed, 2.19 + 22% * 0.0029 * 40 ≈ 2.216).

If we compare the two write-back variants, no-write allocate is again faster than allocate on miss, as the sample calculation below shows.

Write-miss contribution:

for no-write allocate: %Writes * MissRate * MissPenalty
22% * 0.0029 * 40 = 0.02552

for allocate on miss: %Writes * MissRate * ((MissPenalty + HitTime) + (%Dirty * MissPenalty))
22% * 0.0029 * ((40 + 1) + (10% * 40)) = 0.02871
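The same arithmetic can be checked quickly with a few lines of Python (a minimal sketch using the figures above; MissRate = 0.0029 is taken as given from the tool's output):

writes, miss_rate, penalty, hit_time, dirty = 0.22, 0.0029, 40, 1, 0.10

# Write-miss contribution, write back with no-write allocate:
no_alloc = writes * miss_rate * penalty
# Write-miss contribution, write back with allocate on miss:
alloc = writes * miss_rate * ((penalty + hit_time) + dirty * penalty)

print(round(no_alloc, 5))  # 0.02552
print(round(alloc, 5))     # 0.02871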

For write-back, allocating on a miss likewise costs performance, as the calculation shows. As a result, allocate on miss lowered our memory access speed for both write-back and write-through.

When we compare write-back and write-through to pick the better scheme overall, write-back proves better. In write-through, every write must access and update both the cache and main memory, whereas in write-back only the cache location is updated, and memory is written later, when the dirty block is evicted. That is why the average number of clock cycles per memory access differs.
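A rough way to see the gap, under a deliberately simplified model of our own (not the tool's exact formulas): write-through pays the memory write time on every write, while write-back pays the miss penalty only when a dirty block is evicted.

writes, miss_rate, dirty, mem_write, penalty = 0.22, 0.0029, 0.10, 6, 40

# Simplified per-access write traffic to memory (illustrative only):
write_through = writes * mem_write        # every write goes to memory
write_back = miss_rate * dirty * penalty  # only dirty evictions write back

print(round(write_through, 4))  # 1.32 cycles per access
print(round(write_back, 4))     # 0.0116 cycles per access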
























