# zbMATH — the first resource for mathematics

Parallelizing Strassen’s method for matrix multiplication on distributed-memory MIMD architectures. (English) Zbl 0839.68093
Summary: We present a parallel method for matrix multiplication on distributed-memory MIMD architectures based on Strassen’s method. Our timing tests, performed on a 56-node Intel Paragon, demonstrate that the potential of Strassen’s method, with a complexity of $$4.7M^{2.807}$$, can be realized at the system level rather than at the node level on which several earlier works have focused. The parallel efficiency is nearly perfect when the number of processors is a power of 7. The parallelized Strassen’s method, coupled with the BMR method and the Ring method at the system level, appears to be consistently faster than traditional matrix multiplication methods, whose complexity is $$2M^3$$. The speed gain depends on the matrix order $$M$$: about $$20\%$$ for $$M \approx 1000$$ and more than $$100\%$$ for $$M \approx 5000$$.
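The seven-multiplication recursion at the heart of the method can be sketched serially as follows. This is a minimal illustration of Strassen’s 1969 scheme (reference [3]), not the authors’ distributed Paragon implementation; the power-of-two matrix order and the `leaf` cutoff for switching to ordinary $$2M^3$$ multiplication are assumptions of the sketch.

```python
import numpy as np

def strassen(A, B, leaf=64):
    """Multiply square matrices of power-of-two order with Strassen's
    seven-product recursion; below the (hypothetical) cutoff `leaf`,
    fall back to ordinary matrix multiplication."""
    n = A.shape[0]
    if n <= leaf:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven recursive products instead of the eight of the classical method:
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)
    # Reassemble the four quadrants of the product
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

Replacing eight half-size products by seven gives the recurrence $$T(M) = 7\,T(M/2) + O(M^2)$$, whence the $$O(M^{\log_2 7}) = O(M^{2.807})$$ operation count cited in the summary; the parallel method distributes these seven products across processors, which is why efficiency is best when the processor count is a power of 7.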

##### MSC:
 68T20 Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.)
 68M99 Computer system organization
##### Software:
CLASSPACK; PUMMA
##### References:
 [1] Golub, G. H.; Van Loan, C. F., Matrix computations, (1989), Johns Hopkins University Press, Baltimore, MD · Zbl 0733.65016
 [2] Pan, V., How can we speed up matrix multiplication?, SIAM Rev., 26, 393-415, (1984) · Zbl 0563.65028
 [3] Strassen, V., Gaussian elimination is not optimal, Numer. Math., 13, 354-356, (1969) · Zbl 0185.40101
 [4] Coppersmith, D.; Winograd, S., Matrix multiplication via arithmetic progressions, (), 1-6
 [5] Winograd, S., Some remarks on fast multiplication of polynomials, (), 181-196
 [6] Yau, S.-T.; Lu, Y. Y., Reducing the symmetric matrix eigenvalue problem to matrix multiplications, SIAM J. Sci. Comput., 14, 121-144, (1993) · Zbl 0771.65019
 [7] Gallivan, K. A.; Plemmons, R. J.; Sameh, A. H., Parallel algorithms for dense linear algebra computations, SIAM Rev., 32, 54-135, (1990) · Zbl 0697.65010
 [8] Manber, U., Introduction to algorithms, (1989), Addison-Wesley, Reading, MA
 [9] Bailey, D., Extra high speed matrix multiplication on the CRAY-2, SIAM J. Sci. Statist. Comput., 9, 603-607, (1988) · Zbl 0644.65030
 [10] Bjørstad, P.; Manne, F.; Sørevik, T.; Vajteršic, M., Efficient matrix multiplication on SIMD computers, SIAM J. Matrix Anal. Appl., 13, 386-401, (1992) · Zbl 0757.65050
 [11] D. Scott, Private communication, Supercomputer Systems Division of Intel Corporation, Beaverton, OR, (March 1994).
 [12] Choi, J.; Dongarra, J. J.; Walker, D. W., PUMMA: parallel universal matrix multiplication algorithms on distributed memory concurrent computers, () · Zbl 0874.68129
 [13] Fox, G. C.; Hey, A. I.; Otto, S., Matrix algorithms on the hypercube I: matrix multiplication, Parallel Computing, 4, 17, (1987) · Zbl 0632.65043
 [14] Huss-Lederman, S.; Tsao, A.; Jacobson, E. M.; Zhang, G., Matrix multiplication on Intel Touchstone Delta, ()
 [15] Kuck and Associates, CLASSPACK basic math library user’s guide, (December 1992), Kuck & Associates, Champaign, IL
 [16] Fox, G. C.; Johnson, M. A.; Lyzenga, G.; Otto, S. W.; Salmon, J.; Walker, D., ()
 [17] Higham, N. J., Exploiting fast matrix multiplication within the level 3 BLAS, ACM Trans. Math. Software, 16, 112-115, (1990) · Zbl 0900.65118
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.