in CO and Architecture
2,279 views
0 votes
A : LRU replacement policy is not applicable to direct mapped caches
B : A unique memory page is associated with every cache page in direct mapped caches

Options:

1) Both True
2) Both False
3) A is True and B is false
4) B is True and A is false

21 Comments

Option 3?

A : True, as there are conflict misses. (It just does block number mod number of lines and replaces that line, even if other lines of the cache are empty.)

B : Not a unique page. All the pages of main memory having the same line number will be mapped to the same line.
1

I also got the same; their answer seems wrong.

0

@Shaik Masthan your view on above discussion.

0

LRU replacement policy is not applicable to direct mapped caches

In the direct mapping method, we follow the mod method ===> exactly one particular block is replaced when a conflict arises.

But in associative or set-associative methods, we have a set of blocks, any one of which can be replaced. So in these methods we need a policy for which block to replace ==> the policy can be LRU, FIFO, optimal, etc.

So, the given statement is true.
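To see why the policy only matters when there is a choice, here is a minimal sketch (class name and parameters are my own, not from the discussion) of a set-associative cache with LRU replacement. With ways=1 every set holds a single line, so "LRU" never has a choice of victim: that is exactly direct mapping.

```python
from collections import OrderedDict

class SetAssociativeCache:
    def __init__(self, num_sets, ways):
        self.num_sets = num_sets
        self.ways = ways
        # one OrderedDict per set; insertion order tracks recency (LRU first)
        self.sets = [OrderedDict() for _ in range(num_sets)]

    def access(self, block):
        """Return True on hit, False on miss (loading the block)."""
        s = self.sets[block % self.num_sets]
        if block in s:                 # hit: refresh recency
            s.move_to_end(block)
            return True
        if len(s) == self.ways:        # miss with full set: evict the LRU block
            s.popitem(last=False)
        s[block] = True
        return False

direct = SetAssociativeCache(num_sets=4, ways=1)   # behaves as direct mapped
assoc  = SetAssociativeCache(num_sets=2, ways=2)   # 2-way: LRU actually chooses
for b in [0, 4, 0, 4]:
    direct.access(b)   # blocks 0 and 4 both map to line 0: every access misses
```

In the 2-way cache, the same conflicting pair can coexist in one set, so LRU genuinely decides which of two resident blocks to evict; with one way, the "decision" is forced.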

 

A unique memory page is associated with every cache page in direct mapped caches

I think they misused the term page instead of block.

So, I am reading the statement as

A unique memory block is associated with every cache block in direct mapped caches

This is false; the correct version of the assertion is "A unique cache block is associated with every memory block in direct mapped caches".

9
Hmm, C is true.
0
Yes, D should be correct. A replacement algorithm is required in direct-mapped caches.
0

@Gupta731 https://gateoverflow.in/285714/me_test_series

But last time you answered that a direct-mapped cache doesn't require replacement.

0
edited by

@Shivam Kasat replacement policies are required for set-associative and fully associative mapping, not for direct cache mapping. In direct mapping, many memory blocks are mapped to a unique cache line/block; it is a many-to-one function.

0

@adarsh_1997 hence A is correct.

0

@Shivam Kasat read my comment and read the reason part again in the question

0

A unique memory page is associated with every cache page

So A is true and B is false.

0
Read my comment on that question again. I said the choice of replacement algorithm doesn't matter, but that doesn't mean you don't require one.
0

@Gupta731 but right now @adarsh_1997 said we don't require a replacement policy for direct mapping.

0

Since more than one memory block is mapped onto a given cache block position, contention may arise for that position even when the cache is not full. For example, instructions of a program may start in block 1 and continue in block 129, possibly after a branch. As this program is executed, both of these blocks must be transferred to the block-1 position in the cache. Contention is resolved by allowing the new block to overwrite the currently resident block. With direct mapping, the replacement algorithm is trivial. Placement of a block in the cache is determined by its memory address.


From Carl Hamacher.

It means replacement algorithms are insignificant in direct-mapped caches; they are of no use. Although you may apply one, it will have no effect, because the placement of a block in the cache is determined by its memory address.

0

@Gupta731 @adarsh_1997 Thanks a lot.

0

I am not getting your last line: "A unique cache block is associated with every memory block in direct mapped caches."

Why not "A unique memory block is associated with every cache block in direct mapped caches", because we are bringing a memory block into the cache?

0

I think it should be

"A unique cache block is associated with many memory blocks in direct mapped caches."

0

@eyeamgj

This means many memory blocks are mapped to a unique cache block.

Now read my previous statement!

 

@Gate Fever

I hope those are equivalent, aren't they?

But yes, your comment makes it clearer!

0

I am not getting the term "associated" here; I am very poor in DBMS.

0

Many memory blocks whose line-field bits match are mapped to the same cache line/block.

e.g. a direct-mapped cache with 4 lines ==> 2 bits for the line field, so the address will be

Tag | Line (2 bits) | BO

Let MM have 8 blocks (3-bit block number):

Bno | BO

000 | BO        blocks 000 and 100 mapped to Line 0

100 | BO

---------------

001 | BO        blocks 001 and 101 mapped to Line 1

101 | BO

---------------

010 | BO        blocks 010 and 110 mapped to Line 2

110 | BO

---------------

011 | BO        blocks 011 and 111 mapped to Line 3

111 | BO

Hope this helps.
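The table above can be reproduced with a tiny sketch (assumed sizes from the example: 8 memory blocks, 4 cache lines, direct mapped). The line number is just block mod 4, i.e. the low 2 bits of the block number, so exactly two memory blocks share every cache line.

```python
NUM_LINES = 4

mapping = {}
for block in range(8):
    line = block % NUM_LINES              # low 2 bits select the cache line
    mapping.setdefault(line, []).append(format(block, '03b'))

for line, blocks in sorted(mapping.items()):
    print(f"Line {line}: blocks {blocks}")
# Line 0: blocks ['000', '100']
# Line 1: blocks ['001', '101']
# Line 2: blocks ['010', '110']
# Line 3: blocks ['011', '111']
```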

0

@Shaik Sir, @Arjun Sir, I think the first statement is also false.

LRU replacement policy is not applicable to direct mapped caches.

In set-associative mapping we apply LRU, yes.

And we know that when the associativity is "n" (one set holding all the lines) it works as fully associative, and when the associativity is "1" it works as direct mapping. I think you agree with me.

Then we can also say that direct mapping is a kind of set-associative mapping, and if LRU is applicable to set-associative mapping, then it should also be applicable to direct mapping.

Please clear my doubt.

0

1 Answer

0 votes

Option 2 ---- Both are False

1 - For a direct-mapped cache, all replacement algorithms are trivial, because there is a fixed line number to which each address belongs.

So LRU is applicable in a direct-mapped cache, but it won't have any effect on hit/miss. So "not applicable" is the wrong word.

2 - As memory page size = cache page size, the data of one memory page will be part of only one cache page. In a direct-mapped cache, the memory address itself contains the cache page number, which is unique.

But one cache page is associated with more than one memory page.

Therefore the second statement is false.

If it had been "A unique cache page is associated with every memory page in a direct mapped cache", then it would have been true.
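The direction of the association can be checked with a short sketch (toy sizes of my own choosing: 16 memory pages, 4 cache pages). The map "memory page -> cache page" is a function, so each memory page does get a unique cache page; inverting it shows that each cache page is shared by several memory pages, so no cache page has a unique memory page.

```python
from collections import defaultdict

MEM_PAGES, CACHE_PAGES = 16, 4

# forward direction: a function, one cache page per memory page
cache_page_of = {m: m % CACHE_PAGES for m in range(MEM_PAGES)}

# reverse direction: invert the mapping
mem_pages_of = defaultdict(list)
for m, c in cache_page_of.items():
    mem_pages_of[c].append(m)

# each cache page is associated with 4 memory pages, not a unique one
print({c: pages for c, pages in sorted(mem_pages_of.items())})
```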

 

