in CO and Architecture
0 votes

Can anyone please confirm whether the statement below is correct or not:

Statement 3 is false only for fully associative cache mapping; for direct and set-associative mapping, Statement 3 is true.

Because of this concept (answer):

https://gateoverflow.in/409971/made-easy-test-series-2024

 


3 Answers

2 votes

If the block size increases, the main-memory tag size decreases (because the word-offset size increases, and we are assuming that increasing the block size does not increase the overall size of RAM).
In a fully associative cache, CM tag size = MM tag size. So the CM tag size decreases => comparison time decreases => hit/miss TIME decreases.
For direct/set-associative mapping it will NOT affect the hit/miss time, as the tag size does not change, as shown by you in the link to the other question.
But S3 says ‘miss rate’, not miss time. So I think tag sizes are of no use to us in S3.
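
To make the tag-size point concrete, here is a quick Python sketch with made-up sizes (16-bit addresses, 1 KB cache; none of these numbers come from the question). It just shows that doubling the block size shrinks the fully associative tag by one bit while the direct-mapped tag stays the same:

```python
import math

ADDR_BITS = 16          # assumed: 64 KB byte-addressable main memory
CACHE_BYTES = 1024      # assumed: 1 KB cache

def tag_bits(block_bytes, mapping):
    offset = int(math.log2(block_bytes))          # block-offset field
    lines = CACHE_BYTES // block_bytes
    # fully associative has no index field; direct-mapped indexes every line
    index = 0 if mapping == "fully_associative" else int(math.log2(lines))
    return ADDR_BITS - index - offset

for block in (16, 32):   # double the block size
    print(f"{block} B block: FA tag = {tag_bits(block, 'fully_associative')} bits, "
          f"DM tag = {tag_bits(block, 'direct')} bits")
# 16 B block: FA tag = 12 bits, DM tag = 6 bits
# 32 B block: FA tag = 11 bits, DM tag = 6 bits
```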

When the block size increases, the number of cache lines decreases, so I thought that with fewer cache lines the miss rate would increase. BUT there is a catch…
When the block size increased, it did so for MM as well; there, too, the number of blocks decreased. So whether the miss rate decreases or not actually depends entirely on the access sequence.
We cannot say that the miss rate ALWAYS decreases.

E.g.:
Case 1 → CM: 4 lines, MM: 16 blocks; access sequence 3, 2, 5, 4, 6, 8, 14, 13, 7 → miss rate 9/9
After doubling the block size → CM: 2 lines, MM: 8 blocks; the access sequence becomes 1, 1, 2, 2, 3, 4, 7, 6, 3 → miss rate 7/9

Case 2 → CM: 4 lines, MM: 16 blocks; access sequence 0, 4, 8, 12 → miss rate 4/4
After doubling the block size → CM: 2 lines, MM: 8 blocks; the access sequence becomes 0, 2, 4, 6 → miss rate 4/4

In case 1, where the access sequence implies increased spatial locality of reference, we see a decrease in the miss rate. But when the accesses are far apart, as in case 2, we see no change. Hence nothing can be said for sure about the miss rate without looking at the access sequence; a quick simulation below reproduces these numbers.
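
Here is a tiny direct-mapped simulation (my own sketch, working directly on the block-number traces above, with the block doubling modelled as halving the block numbers) that reproduces these miss rates:

```python
def miss_rate(block_seq, num_lines):
    lines = [None] * num_lines        # one resident block number per line
    misses = 0
    for b in block_seq:
        idx = b % num_lines
        if lines[idx] != b:           # not resident -> miss, load it
            misses += 1
            lines[idx] = b
    return f"{misses}/{len(block_seq)}"

case1 = [3, 2, 5, 4, 6, 8, 14, 13, 7]
case2 = [0, 4, 8, 12]

for name, seq in (("case 1", case1), ("case 2", case2)):
    doubled = [b // 2 for b in seq]   # same addresses after doubling block size
    print(name, ":", miss_rate(seq, 4), "->", miss_rate(doubled, 2))
# case 1 : 9/9 -> 7/9
# case 2 : 4/4 -> 4/4
```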

Hence, S3 is false.
S1 and S2 are true.

PS: cool question, let's discuss thoughts about this if any… I may be wrong. What is the answer anyway?

1 comment

👍!
0 votes

(Conflict misses occur when it is not a cold miss and the blocks the address can map to are full, but the whole cache is not full. So increasing the associativity increases the number of blocks an address can map to, hence fewer conflict misses.)

S1: true
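
To illustrate the associativity point, here is a small Python sketch with a made-up trace (blocks 0 and 4 collide in a 4-line direct-mapped cache but coexist in a 2-way set-associative cache of the same total size):

```python
from collections import OrderedDict

def count_misses(block_seq, num_sets, ways):
    sets = [OrderedDict() for _ in range(num_sets)]   # LRU order per set
    misses = 0
    for b in block_seq:
        s = sets[b % num_sets]
        if b in s:
            s.move_to_end(b)            # hit: refresh LRU position
        else:
            misses += 1
            if len(s) == ways:
                s.popitem(last=False)   # evict least recently used block
            s[b] = True
    return misses

trace = [0, 4, 0, 4, 0, 4]              # made-up trace: 0 and 4 keep colliding
print("direct-mapped (4 sets x 1 way)  :", count_misses(trace, 4, 1), "misses")  # 6
print("2-way set-assoc (2 sets x 2 ways):", count_misses(trace, 2, 2), "misses")  # 2
```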

We access the whole block at a time and check whether it is present or not, so increasing the block size will not do anything for the miss rate or the miss penalty.

So,

S2: false

S3: false

4 Comments

Is my conclusion correct?
Give an example showing that increasing the block size (not increasing the number of blocks) will decrease the miss rate or increase the hit rate.

@dipanshu20 if the block size increases, then when there is a miss, wouldn't it take more time to fetch the block from main memory, increasing the miss penalty?
I think S2 is true.

0 votes

S1: True. Associativity increases => number of blocks in a set increases => fewer conflict misses.
S2: True. The miss penalty is the time taken to transfer a block from main memory to the cache. Larger block size => more time to transfer => miss penalty increases.
S3: False, because of the word "always". It is true that increasing the block size may decrease the miss rate, but beyond the point where the block size becomes too large, the miss rate starts increasing.

When the block size increases, the number of blocks decreases in both main memory and cache memory. But main memory is large compared to the cache, so as long as the block size is not too large, the miss rate decreases; after that, the miss rate starts increasing.
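
A rough way to see this "decreases up to a point, then increases" behaviour is a toy word-addressed trace that interleaves two sequential streams starting far apart, run against a fixed-capacity 16-word direct-mapped cache (all numbers here are my own assumptions, not from the question):

```python
CACHE_WORDS = 16                         # fixed total cache capacity (assumed)
A_BASE, B_BASE = 0, 1000                 # assumed start addresses of two streams

trace = []
for i in range(8):                       # a[0], b[0], a[1], b[1], ...
    trace += [A_BASE + i, B_BASE + i]

def miss_rate(trace, block_words):
    num_lines = CACHE_WORDS // block_words
    lines = [None] * num_lines           # resident block number per line
    misses = 0
    for addr in trace:
        block = addr // block_words
        if lines[block % num_lines] != block:
            misses += 1
            lines[block % num_lines] = block
    return f"{misses}/{len(trace)}"

for bs in (1, 2, 4, 8, 16):
    print(f"block = {bs:2d} words -> miss rate {miss_rate(trace, bs)}")
# miss rate falls (16/16, 8/16, 4/16, 2/16) and then jumps back to 16/16
# once a single huge block leaves too few lines and the two streams thrash.
```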
                  
