121 votes

A processor uses $2$-level page tables for virtual to physical address translation. Page tables for both levels are stored in the main memory. Virtual and physical addresses are both $32$ bits wide. The memory is byte addressable. For virtual to physical address translation, the $10$ most significant bits of the virtual address are used as index into the first level page table while the next $10$ bits are used as index into the second level page table. The $12$ least significant bits of the virtual address are used as offset within the page. Assume that the page table entries in both levels of page tables are $4$ bytes wide. Further, the processor has a translation look-aside buffer (TLB), with a hit rate of $96\%$. The TLB caches recently used virtual page numbers and the corresponding physical page numbers. The processor also has a physically addressed cache with a hit rate of $90\%$. Main memory access time is $10$ ns, cache access time is $1$ ns, and TLB access time is also $1$ ns.

Assuming that no page faults occur, the average time taken to access a virtual address is approximately (to the nearest $0.5$ ns)

  1. $1.5$ ns
  2. $2$ ns
  3. $3$ ns
  4. $4$ ns

4 Comments

Those who still have not understood, go here -> https://www.youtube.com/watch?v=7NoRhQC_XgI&t=384s

2
2
Great explanation @palashbehera5 !!!
0
0
It is divided into two parts: first translate VA -> PA, then access the MM.

Case 1: Hit in TLB -> 1 ns

Case 2: Miss in TLB -> miss rate of TLB * (no. of levels * MM access time) = 0.04 * (2 * 10) = 0.8 ns

Case 3: Hit in cache -> 1 ns

Case 4: Miss in cache -> miss rate of cache * (MM access time) = 0.1 * 10 = 1 ns

Average memory access time = 1 + 0.8 + 1 + 1 = 3.8 ns
1
1

15 Answers

104 votes
Best answer

78. It's given that the cache is physically addressed, so address translation is needed for all memory accesses. (I assume the page table lookup happens only after a TLB miss, and the main memory lookup only after a cache miss.)

Average access time = Average address translation time + Average memory access time
= 1ns 
(TLB is accessed for all accesses)
+ 2*10*0.04 
(2 page tables accessed from main memory in case of TLB miss)
+ Average memory access time
= 1.8ns + Cache access time + Average main memory access time
= 1.8ns + 1 * 0.9 (90% cache hit) 
+ 0.1 * (10+1) (main memory is accessed for cache misses only)
= 1.8ns + 0.9 + 1.1
= 3.8ns
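For readers who want to verify the arithmetic, here is a minimal Python sketch of the above (page tables not cached; variable names are my own):

```python
# All times in ns; values from the question.
TLB, CACHE, MEM = 1, 1, 10        # TLB, cache, main memory access times
P_TLB, P_CACHE = 0.96, 0.90       # TLB and cache hit rates
LEVELS = 2                        # 2-level page table

# Address translation: TLB always, two page-table reads from memory on a TLB miss.
translation = TLB + (1 - P_TLB) * LEVELS * MEM                 # 1 + 0.8 = 1.8
# Data access: cache on a hit, cache + main memory on a miss.
memory = P_CACHE * CACHE + (1 - P_CACHE) * (MEM + CACHE)       # 0.9 + 1.1 = 2.0
print(translation + memory)                                    # 3.8 -> ~4 ns
```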

We assumed that the page tables are in main memory and not cached. The question says the page tables are stored in main memory, though it does not explicitly say that they are not cached. But in practice page table caching is common. So, in such a system,

Average address translation time 
= 1ns (TLB is accessed for all accesses) 
+ 2*0.04 * [0.9 * 1 + 0.1 * 10] 
(2 page tables accessed in case of TLB miss and they go through cache)

$= 1 \ ns + 1.9 \times .08$

$= 1.152 \ ns $

and average memory access time $= 1.152 \ ns + 2 \ ns = 3.152  \ ns$
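And a sketch of the page-tables-also-cached variant just above (following this answer's numbers, where the cache time is not added again on a cache miss; some comments below argue for 0.1*(10+1) instead):

```python
TLB, CACHE, MEM = 1, 1, 10
P_TLB, P_CACHE = 0.96, 0.90

# Each page-table read goes through the cache first.
pt_read = P_CACHE * CACHE + (1 - P_CACHE) * MEM                # 0.9 + 1.0 = 1.9
translation = TLB + (1 - P_TLB) * 2 * pt_read                  # 1 + 0.08*1.9 = 1.152
memory = P_CACHE * CACHE + (1 - P_CACHE) * (MEM + CACHE)       # 2.0
print(translation + memory)                                    # 3.152
```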

If the same question were asked now, probably you would get marks for both. 2003 is a long way back -- page table caching did not exist then, as noted in the SE answers. Since it exists now, IIT profs would make this clear in the question itself.


68 Comments

For the same question, if a "virtually addressed cache" were used instead of a "physically addressed cache", what would the answer have been?
2
2

I didn't get the explanation of 79... why 3 unique entries? What does "two code pages starting from 0x00000000 and two data pages starting from 0x00400000 have same first 10 bits" mean here?

1
1
As you said, with a virtually addressed cache the cache read comes before the TLB, so the Effective Access Time should be

EAT = (cache hit ratio * cache access time) + (1 - cache hit ratio) [ (TLB hit ratio)(TLB access time + memory) + (TLB miss ratio)(TLB access time + (2+1)*m) ]

EAT = .9*1 + .1 [ .96(1 + 10) + .04(1 + 30) ] = 2.08 ns

How did you get 2.81 ns??
3
3
Didn't get Question 79. Sir, please have a look and explain again.
0
0
3 unique entries means 1 entry for the code page, 1 entry for the data page and 1 entry for the stack page. Code pages and data pages are not counted twice because the 2 code pages and the 2 data pages have the same first 10 bits respectively.
0
0

@Mayank Yes, you are right. I messed up my calculation. Just one correction - the cache time need not be counted on a cache miss; that was a mistake in the answer. So, in your calculation it will be

EAT = .9*1 + .1 [ .96(1 + 10) + .04(30) ] = 2.076 ns

(This would be for a virtually indexed, virtually tagged cache)

0
0
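A quick numeric check of the corrected expression above (a sketch; it follows the comment's numbers for the virtually indexed, virtually tagged case):

```python
c, t, m = 1, 1, 10            # cache, TLB, main memory access times (ns)
p_c, p_t = 0.90, 0.96         # cache and TLB hit rates

# Cache first; on a cache miss: TLB hit -> t + m, TLB miss -> 3 memory accesses
# (2 page tables + the data), with no extra cache time counted on the miss.
eat = p_c * c + (1 - p_c) * (p_t * (t + m) + (1 - p_t) * (3 * m))
print(round(eat, 3))          # 2.076
```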
@Akash In multi-level paging we only load the lower level tables to memory if they are used. Here only 3 second level tables are actually used and that information is given in the question.
0
0
@arjun thanks for the update.
1
1
Answer for the 2nd part should be 12 KB.

Check: the first 10 bits are the same for both code and data pages, therefore 2 first-level entries.

4 KB (first-level page table) + 2 * 4 KB (2 different second-level tables for the 2 first-level entries)

= 12 KB

Can you recheck it? I am not sure.
0
0
Only the first 9 bits are the same for code and data pages - the 10th bit is different.
0
0

@Arjun Sir, for a virtually indexed, virtually tagged cache, isn't the TLB checked first, and only on a TLB miss do we go to the cache?

1
1
TLB is for translating virtual address to physical address. For a virtually indexed virtually tagged cache, what's the need for putting this before cache? All arrangements are always done to maximize the performance. That's why even a guy with a limited knowledge can do well in GATE if he just uses his common sense.
0
0

Kindly tell me why this highlighted 4 KB? They did not say whether the pages are stored via the first or the second level, so why do we add this 4 KB?

= 4 KB + 3 * 4 KB
= 16 KB
0
0
Here, we are talking about page tables only- not pages. Page tables gives address of pages only. The 4 KB is for the first level page table, which will be used for all the pages and the three 4KB page tables are for the second levels of code, data and stack pages respectively.
0
0
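A small sketch of where the 4 KB figures come from (each table has $2^{10}$ entries of 4 bytes, per the address split in the question):

```python
entries_per_table = 2 ** 10          # 10 index bits at each level
pte_size = 4                         # bytes per page table entry
table_size = entries_per_table * pte_size
print(table_size)                    # 4096 bytes = 4 KB per table

# 1 first-level table + 3 second-level tables (code, data, stack)
print(table_size * (1 + 3))          # 16384 bytes = 16 KB
```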

Why not only the three 4 KB page tables for the second levels of the code, data and stack pages respectively? Is it required to also store the 4 KB first-level page table, which will be used for all the pages? And if they had not mentioned a 2-level page table, would we use only the three 4 KB page tables?

0
0
Yes. But a one-level page table would change the whole question. Assuming everything else changes accordingly, the "4 KB" needn't be added.
0
0
@Arjun, one small doubt: "main memory and cache are accessed in parallel" - what does that mean? Because in the calculation we have taken them separately, i.e., main memory only after a cache miss.
0
0
Yes, that's more appropriate. Main memory is accessed only when the cache misses. Only when the question mentions parallel access, or when it is a write-through cache and the access is a write, will we have simultaneous (parallel) access.
0
0
What is the difference between a physically addressed cache and a virtually tagged one?

In your answer, on a TLB miss you take 2 * 10 ns (as 2 page tables are accessed from main memory), and in the other effective memory access time (maybe for the virtually tagged cache) you take 3 * 10 ns. Why?
0
0

I am confused... in 78.

It should be:

EAT = ( (hit ratio of TLB)*(access time of TLB) + (miss ratio of TLB)*(2*10) ) + ( (cache hit ratio)*(access time of cache) + (miss ratio of cache)*(access time from memory) )

Why are the hit ratios of the TLB and cache not included?

0
0
In the "miss part" you have to add the TLB access time and cache access time also (that is the standard formula). After that you can simplify and you should get the same answer given. The simplification done is the following.

$$a \times b + (1-a)  \times (b+c) =  b + (1-a) \times c.$$
1
1
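A numeric check of this identity with the translation-time numbers from this question (here a = TLB hit rate, b = TLB time, c = cost of the two page-table reads; names are mine):

```python
a, b, c = 0.96, 1, 20              # TLB hit rate, TLB time (ns), 2 page-table reads (ns)
lhs = a * b + (1 - a) * (b + c)    # explicit hit/miss form
rhs = b + (1 - a) * c              # simplified form used in the answer
print(lhs, rhs)                    # both 1.8
```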

"two code pages starting from 0x00000000 and two data pages starting from 0x00400000 have same first 10 bits".

Little doubt.These address are in Hexadecimal. According to me 0x000 in binary is 0000 0000 0000 and 0x004 in binary is 0000 0000 0100 so if we compare first 10 digit of both binaries they are different and that's why we require 3 PT from 2nd level and 1 PT from 1st level we already have in RAM. So total four PT are required. 

1
1
yes. you are correct.
0
0
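The bit comparison can be checked directly; a tiny sketch (addresses as in the comment, plus an assumed stack address near the top of the address space, just for illustration):

```python
def top10(addr):
    """Top 10 bits of a 32-bit virtual address (the first-level index)."""
    return addr >> 22

code, data, stack = 0x00000000, 0x00400000, 0x7FFFF000   # stack address is illustrative
print(top10(code), top10(data), top10(stack))            # 0 1 511 -> three distinct first-level entries
```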

Sir, I have a doubt: why can't we use this formula for a TLB with a physically addressed cache?

Tavg = TLB hit (TLB time + cache hit (cache time) + cache miss (cache time + memory time))

+ TLB miss (TLB time + memory time + cache hit (cache time) + cache miss (cache time + memory time))

1
1
That formula looks fine. You got a different answer with it?
0
0

Yes, 3.4 ns. And sir, for a virtually addressed cache, is this the formula: tavg = (cache hit ratio * cache access time) + (1 - cache hit ratio) [ (TLB hit)(TLB access time + memory) + (TLB miss)(TLB access + (n+1)*memory) ], where n is the number of page table levels?

One more question, sir: generally on a TLB hit we access memory once, and on a miss we access memory twice for one level of paging. Why are we not using this concept in the physically addressed formula?

0
0

Tavg = TLB hit (TLB time + cache hit (cache time) + cache miss (cache time + memory time))

+ TLB miss (TLB time + 2*memory time + cache hit (cache time) + cache miss (cache time + memory time))

= 0.96 (1 + 0.9*1 + 0.1*11)
+ 0.04 (1 + 20 + 0.9*1 + 0.1*11)

= 0.96 * 3 + 0.04 * 23
= 2.88 + 0.92
= 3.8 ns

We need 2 memory accesses in case of a TLB miss due to 2-level paging.

 

18
18
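The same expansion as a short sketch (checking that the fully expanded hit/miss form above also gives 3.8 ns):

```python
t, c, m = 1, 1, 10                # TLB, cache, memory access times (ns)
p_t, p_c = 0.96, 0.90             # TLB and cache hit rates

data_access = p_c * c + (1 - p_c) * (c + m)          # 2.0 in either branch
hit_path  = t + data_access                          # 3
miss_path = t + 2 * m + data_access                  # 23 (two extra page-table reads)
print(p_t * hit_path + (1 - p_t) * miss_path)        # 3.8
```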
and sir am i correct about virtually addressed formula??
0
0

For virtually indexed and virtually tagged, it looks fine except that you didn't add "cache time" for cache miss (this is true for write through cache and write access only). But for virtually indexed, physically tagged, this becomes complex as to get data from cache TLB must hit or page table must be accessed. 

0
0

This processor needs 3 second-level page tables: one for the code pages, one for the data pages and a 3rd for the stack pages (each has two contiguous pages), because all three have different first 10 MSBs. Is that what you are saying?

And these first 10 bits index the 1st-level page table, which contains the base addresses of the 2nd-level page tables.

Please explain this scenario a little bit.

What if two of them have the same 10 MSBs?

1
1
What is the significance of the statement "Assuming no page faults occur"? Should we assume all the pages are in the cache?
0
0

Sir,

Tavg = TLB hit (TLB time + cache hit (cache time) + cache miss (cache time + memory time))

+ TLB miss (TLB time + 2*memory time + cache hit (cache time) + cache miss (cache time + memory time))

For 2-level paging, is it 2 memory accesses or (2+1) = 3 memory accesses?

1
1
@Khush Yes, if 2 of them have the same top 10 bits, it means they go to the same second-level page table.

@ryan A page fault comes from the page table - it happens when we don't have the required data in main memory (or there is a protection issue) and it has to be brought in from secondary memory. But this complicates the calculation and hence that assumption is given.

@Sayantan Yes, that looks fine.
0
0
So instead of 2 memory accesses there should be 3, no? As for n-level paging we use (n+1) memory accesses on a TLB miss.
0
0
1
1
That +1 comes from the final memory access. I have added that separately.
0
0
Why do we need 3 second-level page tables? As per my knowledge we need only 2 second-level page tables, as the 10 most significant bits of the data page and the code page are common. So 1 second-level page table is required for the data and code pages, and 1 second-level page table for the stack page, as it has a different 10 most significant bits. So the total number of page tables required is 3 (1 first-level page table + 2 second-level page tables).
0
0

The TLB is accessed once if a hit occurs; on a miss, main memory is accessed twice and the TLB is accessed once.

Why is the TLB accessed once on a miss?

0
0
The TLB is always accessed, as it gives the physical address from the virtual address and our cache is "physically addressed".
0
0

Now, as all the accesses are of similar type, we will consider access to only one of them.

Avg main memory access time=10 ns

Avg Cache access time

= 0.9(1) + 0.1(Cache miss time  + avg main memory access  time)

= 0.9 + 0.1(1 + 10)

= 2 ns

Avg TLB access time

= 0.96(1) + 0.04(TLB miss time + Avg Cache access time)

= 0.96(1) + 0.04(1 + 2)

= 1.08 ns

So, for accessing one of the components as shown in the diagram,

avg access time = 1.08 ns

For accessing 3 components, total access time = 3 * 1.08 = 3.24 ns

So, answer is C.

0
0
You are taking a TLB access for the second-level page table access as well as for the final memory access, which won't be happening. Also, you are assuming the page tables are cached.
0
0
Yes, you are right indeed :)
0
0
Very nicely explained. Never got this kind of explanation in any solved GATE paper. It also cleared many concepts. :)
1
1
@arjun sir, can you please explain the difference between a virtually addressed cache and a physically addressed cache? I am not clear about this. And what difference does it make to the calculation in such questions?

Thank you
1
1
@arjun sir, why is 2 * memory access time used and not 3 * memory access time?

Shouldn't it be

c + (n+1)m ?
0
0

[image]

40
40
I guess the above image supports the selected answer!
3
3
Sir, in the second case, where you have included the page tables being searched in the cache as well, why haven't you taken the cache miss term as 0.1*(10 + 1) rather than the 0.1*(10) you have taken?
1
1
In case the cache is virtually addressed:

Why do we not need to count the cache access time on a cache miss? How, without accessing the cache, can we know whether it is a cache miss?

You have also not used the TLB access time on a TLB miss, but @mayank has considered it. Which one is correct?
0
0
@Debashish, where is it written in the question that the page tables aren't cached? Do we have to assume it?
0
0

@Arjun sir, everything is fine but I have a doubt here.
In the case of a TLB, first we search the TLB; if the page is present then we access it, otherwise we go to the page table.

But here the TLB access time is given, not the TLB search/lookup time. So on a miss we won't access the TLB; then why are we adding it in the miss case?

As here the lookup time is given, so we add it: https://gateoverflow.in/2067/gate2014-3-33

1
1
That is wrong -- you should not decide the usage of a time based on the word access/search/lookup. Only when the TLB is looked up can we know a miss -- the only other alternative is if the TLB and the page tables are looked up simultaneously. But this is not common, as given in standard books.
2
2

@Arjun sir

In the case when the TLB is cached, then

Average address translation time
= 1 ns (TLB is accessed for all accesses)
+ 2*0.04 * [0.9 * 1 + 0.1 * 10]
(2 page tables accessed in case of TLB miss and they go through cache)

I think it should be 1 + 10 = 11.

Why? Because,

if there is a cache hit then we access the cache, i.e., 0.9*1.

In case of a cache miss we access main memory for the page table, and if we follow the hierarchical memory access scheme (as is being followed for page access here) then that page table is transferred to cache memory and thus the cache access time should be added, i.e., 0.1 * (1+10), where 1 ns is the cache access time.

8
8
In the question it's given that both page tables are stored in main memory. So how can we consider that they are in cache memory? Hence, on a TLB miss, main memory is accessed for the page tables.
–1
–1
@VS, you are correct.

@Arjun sir, please add the edit suggested by @VS. It took me 30 mins to get the same answer. I could have seen her comment earlier :(
1
1

Prerna Chauhan, it is in the second line:

"Page tables for both levels are stored in the main memory."

0
0
@Arjun sir, can you verify @VS's answer?

I think it is correct.
0
0
How can the TLB be cached? The TLB is itself a buffer storing a subset of the page table entries.
0
0
Physically addressed cache means the physical frames containing the virtual pages are cached in the cache. Am I correct?
0
0
Can anyone please tell why 2 ns is added to obtain the average memory access time in the answer (3.152 ns)? I mean, where does the 2 ns come from?
0
0

In TLB caching, @Arjun sir, why have you not taken the hierarchical approach?

Average address translation time
= 1 ns (TLB is accessed for all accesses)
+ 2*0.04 * [0.9 * 1 + 0.1 * (10+1)] = 1.16 ns
(2 page tables accessed in case of TLB miss and they go through cache)

So the final answer should be 3.16 ns.

2
2

@Arjun

How is page table caching done? I am unable to understand. For 2-level paging, why is the cache also accessed twice in page table caching?

0
0

@Arjun sir, thanks for the great explanation and also for the concept of a virtually tagged cache. Can you kindly explain the issue in the formula written below (for a virtually tagged cache)? I am interpreting the situation this way: go to the cache; if you find the frame/data, good; otherwise we need to go to the TLB for address translation and then fetch the required frame; on a TLB miss I need to access the 2-level page table.
=> CacheAccessTime + CacheMissRate( TLB access time + TLBMissRate(2 * MM) + MM )

Where I am mostly confused is that, while reading some comments/answers, at times we multiply the TLB/cache access time by its hit rate while in others we do not. For me the safer/easier option is to always add hit ratio * access time (cache/TLB). Is it so? Or am I thinking wrong here?

Kindly guide me here, sir, and really thanks once again.

@Kapil sir, @Bikram sir, @Arjun sir

0
0
Yes. That works for the address translation time on virtually addressed cache. And there are 2 ways to write the same formula -- with and without including the hit rate. If you spend some time you can prove their equivalence.
2
2

@Arjun sir, thanks a lot :)

0
0

Hope this image helps 

 

30
30
If the cache is present in main memory, why are we not adding 10 ns along with the 1 ns to access the cache?
0
0
Nice image, I saw it in the NPTEL IIT Delhi videos.
0
0
40 votes

0.96(1 + (0.9(1) + 0.1(10+1))) + 0.04(1 + 2*(10) + (0.9(1) + 0.1(10+1))) = 3.8 ns, so ~4 ns.

4 Comments

@eyeamgl We multiply by 2 because this is 2-level paging. We have to obtain the physical address by visiting two levels of page tables.
1
1

 

0.96(1 + (0.9(1) + 0.1(10+1))) + 0.04(1 + 2*(10) + (0.9(1) + 0.1(10+1)))

What is the 1 in (10+1) in the above equation? Is it the cache update time?

0
0
The 1 is the cache access on a miss and the 10 is for the MM access.
0
0
20 votes

Effective memory access time = 4 ns (approx.)

Eff. Memory Access Time $=X_{TLB}\left( C_{TLB}+X_{PAC} [C_{PAC}]+(1-X_{PAC} ) [C_{PAC}+M] \right) +(1- X_{TLB})\left( C_{TLB}+2M+X_{PAC} [C_{PAC}]+(1-X_{PAC} ) [C_{PAC}+M] \right)$

$=0.96\left( 1+0.9 [1]+(1-0.9 ) [1+10] \right) +(1- 0.96)\left( 1+2\times 10+0.9 [1]+(1-0.9 ) [1+10] \right)$

$= 3.8$ ns


4 Comments

Let $S_1= \sum_{r=0}^{\log n - 1} nr/2^r$ and $S_2= \sum_{r=0}^{\log n - 1} r2^r$.

What are $S_1$ and $S_2$ after expansion?
0
0
It's the 1 ns given in the question for the TLB access time.
0
0
Perfect explanation :)

Can you please provide such an easy diagram for a virtually indexed cache too?
0
0
10 votes
Average time taken to access a virtual address

= (virtual address to physical address translation) + (fetching the word from the cache or main memory)

= t + (1 - $p_{t}$)(k*m) + C + (1 - $p_{c}$)*m          [t = TLB access time, C = cache access time, k = # of page table levels, m = main memory access time]

= 1 ns + (0.04)*(2*10 ns) + 1 ns + (0.1)*10 ns

= 3.8 ns

4 Comments

It is actually a frame. We get the frame number by accessing the page table entry. A frame is bigger than a word.
0
0
What is the purpose of framing or paging? To fetch the word from main memory... so what's wrong with that statement?
0
0
Thank you for the great explanation.
0
0
Answer: $4$ ns (option 4)
