in Algorithms
12,007 views
42 votes

Two matrices $M_1$ and $M_2$ are to be stored in arrays $A$ and $B$ respectively. Each array can be stored either in row-major or column-major order in contiguous memory locations. The time complexity of an algorithm to compute $M_1 \times M_2$ will be

  1. best if $A$ is in row-major, and $B$ is in column-major order

  2. best if both are in row-major order

  3. best if both are in column-major order

  4. independent of the storage scheme


4 Answers

62 votes
Best answer

D is correct.

Here time complexity is asked. Each access of an array element takes constant time, so the time complexity does not depend on the storage scheme. If program execution time were asked instead, option A would be true.


4 Comments

Just asking: what is the time complexity of the fastest matrix multiplication algorithm? I googled and it showed $O(n^{2.3})$; is there any faster algorithm?
0

I had been coding Strassen's multiplication, which takes around $O(n^{2.8})$, but another suggested algorithm for multiplying matrices is the Coppersmith-Winograd algorithm, which takes around $O(n^{2.38})$, the best known as of now.

https://stackoverflow.com/questions/8546756/matrix-multiplication-algorithm-time-complexity

3

Why won't the algorithm care about the size here? If the size is very large, the access time for particular elements will vary.

0
23 votes

The running time of an algorithm is always independent of the storage scheme. While computing the running time of an algorithm, we assume that the time taken to access any element is the same. So the answer is D.

But if the question had asked which of the following implementations (not algorithms) gives the best time, then option A would be correct.


4 Comments

@reena While measuring the time complexity of an algorithm we do asymptotic analysis, where we do not bother about the hardware on which the program runs.

But obviously the hardware affects the overall time taken to execute a program.
6

Program execution time is affected when we access in row-major order but the array is stored in column-major order (one simple point: a huge number of cache misses, which cause many miss penalties).

0

Sir, you switched the explanations.

Time Complexity is independent of hardware and storage schemes. It only depends on the code.

 

Running time, however, would depend upon several other factors like hardware, pipelining, miss penalties etc.

1
4 votes

Ans: (A)

Matrix multiplication takes the rows of $M_1$ and the columns of $M_2$ in order. So, if array $A$ is stored in row-major order and array $B$ in column-major order, then when the first element is accessed, the neighbouring elements (which will be needed immediately) are also brought into the cache. So this storage scheme is best for matrix multiplication in terms of execution time.

 

3 Comments

The question asks about "Time Complexity". In any case, $O(n^3)$ would be the TC. Had it asked about "execution time in terms of memory access time", then option A would be appropriate.

Please clarify my doubt.

21

Did you mean to say rows of $M_1$?

And I think you have given an answer about program execution time rather than time complexity.

0

An $O(n^3)$ algorithm will still be an $O(n^3)$ algorithm (assuming you are using the straightforward matrix multiply). However, by taking into account the memory hierarchy of the system, you can significantly speed up that $O(n^3)$ algorithm.

0
3 votes
The question asks about time complexity, not the time taken by the program. For time complexity, it does not matter how we store the array elements or which data structure we use; we always need to access the same number of elements of $M_1$ and $M_2$ to multiply the matrices. Element access always takes constant, i.e. $O(1)$, time, so the constants may differ between storage schemes, but the time complexity does not.
Answer: D
