in CO and Architecture
29,908 views
78 votes
The read access times and the hit ratios for different caches in a memory hierarchy are as given below:
$$\begin{array}{|l|c|c|} \hline \text {Cache} &  \text{Read access time (in nanoseconds)}& \text{Hit ratio} \\\hline \text{I-cache} & \text{2} & \text{0.8} \\\hline \text{D-cache} & \text{2} & \text{0.9}\\\hline \text{L2-cache} & \text{8} & \text{0.9} \\\hline \end{array}$$
The read access time of main memory is $90\;\text{nanoseconds}$. Assume that the caches use the referred-word-first read policy and the write-back policy. Assume that all the caches are direct mapped caches. Assume that the dirty bit is always $0$ for all the blocks in the caches. In execution of a program, $60\%$ of memory reads are for instruction fetch and $40\%$ are for memory operand fetch. The average read access time in nanoseconds (up to $2$ decimal places) is _________

4 Comments

@Arjun Sir

I have a doubt: when there is a miss, should we add the previous access times, as in

MAT = L1 Access Time + L1 Miss Rate [ L1 Access Time + L2 Access Time] + L1 Miss Rate * L2 Miss Rate[Memory Access time + L1 Access Time + L2 Access Time]

and

in which case do we multiply by the hit rate, as given in

https://www.geeksforgeeks.org/multilevel-cache-organisation/

0

@Manu Thakur, sorry for pointing out this typo, but sir, it should be 0.1 only two times in the operand fetch calculation. [Your approach is simple and elegant.]

1
Why is 0.1 added 3 times in the question? @manu
0

5 Answers

114 votes
Best answer

$L2$ cache is shared between Instruction and Data (is it always? see below)

So, average read time

$=$ Fraction of Instruction Fetch $\ast $ Average Instruction fetch time $+$ Fraction of Data Fetch $\ast$ Average Data Fetch Time

Average Instruction fetch Time $= L1$ access time $+ L1$ miss rate $\ast \;L2$ access time $+ L1$ miss rate $\ast\; L2$ miss rate $\ast $ Memory access time

$\quad= 2 + 0.2 \times 8 + 0.2 \times 0.1 \times 90$ 

$\quad= 5.4 \;\text{ns}$

Average Data fetch Time $= L1$ access time $+ L1$ miss rate $\ast \;L2$ access time $+ L1$ miss rate $\ast \;L2$ miss rate $\ast $ Memory access time

$\quad = 2 + 0.1 \times 8 + 0.1 \times 0.1 \times 90$ 

$\quad= 3.7\;\text{ns}$

So, average memory access time

$$= 0.6 \times 5.4 + 0.4 \times 3.7 = 4.72\; \text{ns}$$
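The calculation above can be sanity-checked with a short Python sketch (the function name `avg_read_time` is just illustrative):

```python
# Hierarchical read access: L1 is probed first, then L2 on an L1 miss,
# then main memory on an L2 miss.
def avg_read_time(h1, t1, h2, t2, t_mem):
    """t1 + P(L1 miss)*t2 + P(L1 miss and L2 miss)*t_mem."""
    return t1 + (1 - h1) * t2 + (1 - h1) * (1 - h2) * t_mem

t_instr = avg_read_time(0.8, 2, 0.9, 8, 90)  # I-cache path
t_data = avg_read_time(0.9, 2, 0.9, 8, 90)   # D-cache path
t_avg = 0.6 * t_instr + 0.4 * t_data
print(round(t_instr, 2), round(t_data, 2), round(t_avg, 2))  # 5.4 3.7 4.72
```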


Now, why must $L2$ be shared? Because otherwise we could use it for either Instructions or Data, and it is not logical to use it for only one. Ideally this should have been mentioned in the question, but it can also be safely assumed (not enough merit for Marks-to-All). Some more points on the question:

Assume that the caches use the referred-word-first read policy and the writeback policy

Writeback policy is irrelevant for solving the given question as we do not care for writes. Referred-word-first read policy means there is no extra time required to get the requested word from the fetched cache line.

Assume that all the caches are direct mapped caches.

Not really relevant as average access times are given

Assume that the dirty bit is always 0 for all the blocks in the caches

Dirty bits matter during cache replacement, which is not asked about in the given question. But this condition means there is no extra delay when a read miss in the cache leads to a possible cache line replacement. (In a write-back cache, when a replaced cache line is dirty, it must be written back to main memory.)


48 Comments

Sir, how can we know that the second cache is shared? I think the 2nd cache should be for data only.
6
^Good point - I took it from what is there in current systems. But one can argue with GATE regarding it.
1
Yes sir. Surely they missed the point, and the naming convention also can't tell what it is used for. They could have said "Data cache LEVEL 2" or "shared cache LEVEL 2". I think we should wait for the key and then argue on this.
3
I think the solution should go like this, shouldn't it?

EMAT = 0.6[0.8*2 + 0.2*(2+90)] + 0.4[0.9*2 + 0.1*0.9*(10) + 0.1*0.1*(90+8+2)] = 13.48 ns
1
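A sketch of this alternative reading (an assumption made in the comment above, not the official interpretation): L2 holds data only, so an I-cache miss goes straight to main memory.

```python
# Alternative reading: L2 is a data-only cache, so instruction fetches that
# miss the I-cache pay the full main-memory latency.
t_instr = 2 + 0.2 * 90                    # I-cache -> main memory
t_data = 2 + 0.1 * 8 + 0.1 * 0.1 * 90    # D-cache -> L2 -> main memory
t_avg = 0.6 * t_instr + 0.4 * t_data
print(round(t_avg, 2))  # 13.48
```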
@Tendua, what was your answer? I also considered the Level 2 cache as a D-cache, so I got 13.48 ns.
0
Mine was 13.48, considering the L2 cache to be a data-only cache.
2
As it isn't mentioned, I also considered the L2 cache as a D-cache. Is there any point in challenging GATE on this?
1
I don't think so, but no harm in trying.
1
I also think so :(
0
GATE Key also says $4.72$
0
@Arjun sir, should I write to IIT R regarding this? The assumption can be made either way, and it's definitely not a standard to use the L2 cache for both instructions and data. I may use it either way.
0
I think you are correct. It is not always necessary for the L2 cache to be shared; it can be a data-only cache. They needed to mention it. As they didn't, and there can be more than one implementation, how should one know whether L2 is shared or not?
0
Arjun sir, is hit ratio/miss ratio the same as hit rate/miss rate?
0
@Arjun sir, please explain why you used 2 ns directly in the equation. Why did you not consider the hit ratio for the I-cache and D-cache?

I think the cache system described is strictly hierarchical, so please explain why you did not multiply the read access time by the hit ratio.
1
Did you get the answer to why we didn't multiply by the hit ratio?
0
What is the referred-word-first read policy?
1
@sushmita During a cache miss, a block gets loaded in cache and then a word is transferred to CPU. "referred-word-first" read should mean that there is no extra delay for this word fetch once the block is loaded into the cache.
6
thanx sir..
0
How is the L1 miss rate 0.2 for instruction fetch and 0.1 for data fetch?
1
Hello Arjun sir, I have a doubt in your solution. The question itself says that the cache uses the write-back policy, so why are you calculating the memory access time like this? In write-back, a cache write is propagated to memory later (at the time of block replacement), so the memory access time formula should be:

$\text{MAT=hit ratio of I cache } \times \text{read access time of I cache }+$

$\text{miss ratio of I cache } \times\text{hit ratio of L2 cache } \times \text{read access time of L2 cache }+$

$\text{miss ratio of I cache } \times\text{miss ratio of L2 cache } \times \text{read access time of main memory}$

Same for write case.

P.S.: Ignore the instruction fetch and memory operand fetch; I have no issue in getting those terms.

My only issue is with the formula you are using.

Please clear it.
1
@arjun sir, the way the calculation is done is t1 + (1-h1)t2 + (1-h1)(1-h2)t3, which is simultaneous access,

but the question is about a hierarchical cache, so shouldn't it be t1 + (1-h1)(t1+t2) + (1-h1)(1-h2)(t1+t2+t3)?

Can you please help?
1
The hit ratios for instruction and data are 0.8 and 0.9 respectively, that's why.
0
@arjun sir, the AMAT you have mentioned is simultaneous access, but in the question it is mentioned as hierarchical?

Please help.
0
Is it really simultaneous access formula?
4
Sir, if the question says 30% of the blocks are dirty or something like that, then we will consider those bits too, because with a write-back cache, if the block to be replaced is dirty, we first need to move that block from cache to main memory. Here we ignore it because the dirty bits are 0. Is this correct?
0
@Rahul Yes.
0
@arjun sir, no, but it looks like that. I may be wrong. Can you please explain?
0
@sowmya

No, it isn't.

Look at it carefully: t1 + (1-h1)t2 + (1-h1)(1-h2)t3.

Simultaneous access has h1*t1 at the beginning, not just t1.

The above is a simplified version of the formula

h1*t1 + (1-h1)h2(t1+t2) + (1-h1)(1-h2)(t1+t2+t3)
6
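That the simplified and expanded hierarchical forms in this comment are algebraically identical can be checked numerically (the sample ranges below are arbitrary):

```python
# Verify that  t1 + (1-h1)*t2 + (1-h1)*(1-h2)*t3  equals
#   h1*t1 + (1-h1)*h2*(t1+t2) + (1-h1)*(1-h2)*(t1+t2+t3)
# for random hit ratios and access times.
import random

for _ in range(1000):
    h1, h2 = random.random(), random.random()
    t1, t2, t3 = random.uniform(1, 10), random.uniform(1, 20), random.uniform(10, 200)
    simplified = t1 + (1 - h1) * t2 + (1 - h1) * (1 - h2) * t3
    expanded = h1 * t1 + (1 - h1) * h2 * (t1 + t2) + (1 - h1) * (1 - h2) * (t1 + t2 + t3)
    assert abs(simplified - expanded) < 1e-9
print("forms agree")
```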
How do we come to know that we have to use hierarchical access here? Why not simultaneous?
4
Hi @arjun Sir, can you or anyone please explain the below:

You have used the hierarchical formula here. Could you please explain why we have used hierarchical not the simultaneous one?
0

The question mentions the phrase memory hierarchy, therefore go for the hierarchical approach.

5
@Arjun sir, why have you used the sequential access formula in this question? By default we assume simultaneous access, right?

Please correct me if I'm wrong, because I have done many questions so far with no mention of which one to use; I solved all of them the simultaneous way and they were correct. So I think that is the default method if nothing is mentioned?
0

@Arjun sir,

Please clarify the formula, a generalised one, or derive how you got it. Totally confused!

1
Always remember that you have to use hierarchical organization. Don't get confused.
2
It is the most general formula for hierarchical access. What is your doubt?
1
Referred-word-first read policy means there is no extra time required to get the requested word from the fetched cache line.

I think the referred-word-first read policy means that the referred word is first brought from memory to cache and can then be accessed without waiting for the entire block to be transferred.
0

@Arjun sir, is write back considered between L1 and L2 cache also or only between L1 and main memory?

0
Yes, now it's clear; I was not getting the referred-word policy. Thank you for the explanation.
1

Can someone explain this part please.

L1 miss rate * L2 miss rate * Memory access time

1
(1-h1)(1-h2) * memory access time,

where h1 is the hit rate of L1 and h2 is the hit rate of L2.
1
No, both are hierarchical access.

The first is a simplified form of the 2nd formula.
2
Why aren't we multiplying by the hit ratios of the I and D caches at the start, like: 0.8*2 + 0.2*8 + 0.2*0.1*90?
1
What if, say, 30% dirty blocks are present?

I think the write-back time would then be added to both the level-2 and main memory access times, since on either of those hits a block in the 1st level cache (i.e., the I-cache or D-cache here) may be replaced. Is that so?
0

@abhijit_m Even then you wouldn't add the write-back time. Write-back time is only considered when a particular block is being replaced and has dirty bit 1. For hit or miss penalty, the dirty bit has no effect (it matters only when a block is being replaced, and even then techniques such as write buffers are used, and the amortized analysis brings the miss penalty down to the same as if we were not writing the block back to memory at all). To bring the dirty-bit factor into the question, they would have to mention many more factors, such as the cache's associativity, the allocate or no-allocate scheme (for writes only), the percentage of conflict misses, and so on. In this question the dirty bit is just a diversion.


And as for the referred-word-first read policy, it means the "critical word", the word referred by the processor, will be brought first and then the other words of the same block will be transferred. But this comes into play only when the question states that the whole block cannot be transferred at once and word-by-word transfer is needed to bring the whole block into a cache line.

So, for these details to matter, they would have to mention many more things.

0
As Arjun Sir said, "Writeback policy is irrelevant for solving the given question as we do not care for writes." It is correct; however, even if there were writes in the question, it still wouldn't have mattered, as the dirty bit is 0 for all blocks at all times.
0
Do we have to know the architecture beforehand? Otherwise, how can we know that L2 is shared?
0
In computer science our main motto is to reduce delay as much as possible and get maximum efficiency,

so simultaneous access is the default here, because its delay is less. If level-order or hierarchical access is explicitly mentioned, then use hierarchical.
0

@saheb sarkar1997 bro you are wrong because default access method is hierarchical access. 

Source:- 

Simultaneous and Hierarchical Cache Accesses - GeeksforGeeks

1
8 votes

We use hierarchical access

 

Using I cache:

Tavg1= H1T1 + (1-H1)(H2)(T1 + T2) + (1-H1)(1-H2)(T1+T2+T3)

        = (0.8*2) + (0.2)(0.9)(10) + (0.2)(0.1)(100)

        = 5.4 ns

 

Using D cache,

Tavg2 = H1T1 + (1-H1)(H2)(T1 + T2) + (1-H1)(1-H2)(T1+T2+T3)

        = (0.9*2) + (0.1)(0.9)(10) + (0.1)(0.1)(100)

        = 3.7 ns

 

Now Tavg = (60% of Tavg1) + (40% of Tavg2)

              = 4.72 ns

0 votes
  • CPU connected to 2 caches (I cache and D cache) which are further connected to L2 cache and then Main Memory
  • Referred-word-first read policy means there is no extra time required to get the requested word from the fetched cache line
  • Write-back policy is irrelevant for the question as we do not care about writes. In write-back, when a block is replaced, it is written back to main memory only if it is dirty (dirty bit 1), and in the question the dirty bit is 0 for all blocks.
  • Direct mapped caches : Not really relevant as average access times are given

T_avg = 60% x T_Instr +  40% x T_operand

T_Instr = Hit ratio I cache x Time taken to access I cache  + Miss ratio of I cache  x Hit ratio L2 x Time taken to access L2 + Miss ratio of I cache x Miss ratio of L2 x Time taken to access Main memory 

        = (0.8*2) + (1-0.8)(0.9)(2+8) + (1-0.8)(1-0.9)(2+8+90) = 5.4 ns

T_operand  = Hit ratio D cache x Time taken to access D cache  + Miss ratio of D cache  x Hit ratio L2 cache x Time taken to access L2 cache+ Miss ratio of D cache x Miss ratio of L2 cache x Time taken to access Main memory 

        = (0.9*2) + (1-0.9)(0.9)(2+8) + (1-0.9)(1-0.9)(2+8+90) = 3.7 ns

T_avg = (0.6)(5.4) + (0.4)(3.7) = 4.72 ns
