8 votes
Transferring data in blocks from the main memory to the cache memory enables an interleaved main memory unit to operate at its maximum speed. True/False. Explain.
in CO and Architecture
4.4k views

4 Comments

Note that the way memory is addressed has no effect on the access time for memory locations which are already cached, having an impact only on memory locations which need to be retrieved from DRAM.

Please put the proper tags:

gate1990 co-and-architecture

It was hard to find.

Interleaved main memory (watch just 1 minute and you will get the answer):

https://www.youtube.com/watch?v=kzQdgiOlmMc


Basically, smaller chips are combined to form a single big memory unit so that more words can be accessed in one access, i.e., in a simultaneous or pipelined fashion.

Reference: https://www.youtube.com/watch?v=UtJzG1Hhy5c


1 Answer

7 votes
The statement is true. Main memory transfers data to the cache in blocks, while the cache supplies it to the processor in words. Transferring a whole block lets the interleaved main memory unit operate at its maximum speed, because consecutive words of the block sit in different banks and their accesses can overlap.
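As a rough back-of-the-envelope sketch of why block transfer helps (the timing numbers and block size below are assumed purely for illustration, not taken from the question), compare fetching a block word by word from a single bank with fetching it from an interleaved memory in which bank accesses overlap:

```python
# A rough timing model (assumed numbers) comparing the time to fetch one cache
# block from a non-interleaved memory vs. an m-way interleaved memory.

BANK_ACCESS_TIME = 50    # ns, time one bank needs to produce a word (assumed)
WORD_TRANSFER_TIME = 10  # ns, time to move one word over the bus (assumed)
BLOCK_SIZE = 8           # words per cache block (assumed)

def non_interleaved_block_time():
    # Each word must wait for the previous access to finish completely.
    return BLOCK_SIZE * (BANK_ACCESS_TIME + WORD_TRANSFER_TIME)

def interleaved_block_time():
    # Banks are started one after another, so accesses overlap: after the first
    # access completes, the remaining words arrive one bus transfer apart
    # (assuming at least BLOCK_SIZE banks, so no bank is reused within the block).
    return BANK_ACCESS_TIME + BLOCK_SIZE * WORD_TRANSFER_TIME

print("non-interleaved :", non_interleaved_block_time(), "ns")  # 480 ns
print("8-way interleaved:", interleaved_block_time(), "ns")     # 130 ns
```

With these assumed numbers the interleaved unit delivers the block several times faster, which is what the statement means by operating at maximum speed.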

3 Comments

What is the meaning of interleaved memory? Please explain.
copied from MadeEasy Book

With interleaved memory, memory addresses are allocated to each memory bank in turn. For example, in an interleaved system with two memory banks (assuming word-addressable memory), if logical address 32 belongs to bank 0, then logical address 33 would belong to bank 1, logical address 34 would belong to bank 0, and so on. An interleaved memory is said to be n-way interleaved when there are n banks and memory location i resides in bank i mod n.
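A minimal sketch of this mapping (the bank count and sample addresses are simply the ones from the example above): word address i lives in bank i mod n, at row i div n within that bank.

```python
# n-way interleaved address mapping: bank = address mod n, row = address // n.

def interleaved_location(address: int, num_banks: int = 2):
    return address % num_banks, address // num_banks  # (bank, row within bank)

for address in (32, 33, 34, 35):
    bank, row = interleaved_location(address, num_banks=2)
    print(f"logical address {address} -> bank {bank}, row {row}")
# logical address 32 -> bank 0, row 16
# logical address 33 -> bank 1, row 16
# logical address 34 -> bank 0, row 17
# logical address 35 -> bank 1, row 17
```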

[Figure: Memory interleaving example with 4 banks. Red banks are refreshing and can't be used.]

Interleaved memory results in contiguous reads (which are common both in multimedia and execution of programs) and contiguous writes (which are used frequently when filling storage or communication buffers) actually using each memory bank in turn, instead of using the same one repeatedly. This results in significantly higher memory throughput as each bank has a minimum waiting time between reads and writes.

Main memory (random-access memory, RAM) is usually composed of a collection of DRAM memory chips, where a number of chips can be grouped together to form a memory bank. It is then possible, with a memory controller that supports interleaving, to lay out these memory banks so that the memory banks will be interleaved.

In traditional (flat) layouts, each memory bank is allocated a contiguous block of memory addresses, which is very simple for the memory controller and gives equal performance in completely random access scenarios compared to interleaving. However, in reality memory reads are rarely random, due to locality of reference, and optimizing for close-together accesses gives far better performance with interleaved layouts.
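To make this concrete, here is a toy bank-busy simulation (all parameters are assumed for illustration, not taken from the quoted text) that counts the cycles needed to serve one contiguous read stream under a flat layout versus an interleaved one:

```python
# Toy simulation: in a flat layout consecutive addresses keep hitting the same
# bank and must wait for it to recover; in an interleaved layout they spread
# across all banks, so the stalls largely disappear.

BANK_BUSY_TIME = 4    # cycles a bank is busy after accepting a request (assumed)
NUM_BANKS = 4
NUM_ACCESSES = 16     # one contiguous stream of 16 consecutive word addresses

def total_cycles(bank_of_address):
    bank_free_at = [0] * NUM_BANKS        # cycle at which each bank becomes free
    clock = 0
    for address in range(NUM_ACCESSES):
        bank = bank_of_address(address)
        start = max(clock, bank_free_at[bank])  # stall if the target bank is busy
        bank_free_at[bank] = start + BANK_BUSY_TIME
        clock = start + 1                       # issue at most one request per cycle
    return max(bank_free_at)

words_per_bank = NUM_ACCESSES // NUM_BANKS
flat_layout = total_cycles(lambda a: a // words_per_bank)  # contiguous block per bank
interleaved = total_cycles(lambda a: a % NUM_BANKS)        # address mod n per bank

print("flat layout       :", flat_layout, "cycles")  # 55 cycles with these numbers
print("interleaved layout:", interleaved, "cycles")  # 19 cycles with these numbers
```

With these assumed numbers the flat layout serializes on one bank at a time and takes roughly three times as long for the same contiguous stream, which is the throughput gap the quoted text describes.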

Note that the way memory is addressed has no effect on the access time for memory locations which are already cached, having an impact only on memory locations which need to be retrieved from DRAM.

