Amdahl’s and Gustafson-Barsis’s Laws Revisited

Article in English. DOI: 10.14313/PAR_258/35

Andrzej Karbowski, Warsaw University of Technology, Faculty of Electronics and Information Technology, Institute of Control and Computation Engineering, Nowowiejska 15/19, 00-665 Warszawa, Poland

Abstract

Amdahl’s and Gustafson-Barsis’s laws, which are among the most influential concepts in parallel and distributed processing, describe limits on the speedup obtained by parallelizing an application when the sequential fraction of its execution time on, respectively, a sequential or a parallel machine is given. In the literature these two laws are derived separately, or their connection is shown in rather complicated ways. In this paper it is shown that, by simple arithmetic transformations taking only a few lines, Gustafson-Barsis’s law can be obtained from Amdahl’s law and vice versa.

Keywords

distributed computing, parallel computing, performance measures, speedup
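The interconvertibility claimed in the abstract can be checked numerically. The sketch below (not taken from the paper; the function and variable names are illustrative) evaluates Amdahl’s law for a serial fraction f measured on the sequential machine, converts f into the serial fraction s that would be measured on the parallel machine, and confirms that Gustafson-Barsis’s law applied to s yields the same speedup:

```python
def amdahl(f, p):
    """Amdahl's law: speedup on p processors when a fraction f of the
    sequential execution time cannot be parallelized."""
    return 1.0 / (f + (1.0 - f) / p)

def gustafson(s, p):
    """Gustafson-Barsis's law: speedup on p processors when a fraction s
    of the parallel execution time is sequential."""
    return s + p * (1.0 - s)

def serial_fraction_on_parallel(f, p):
    """Convert Amdahl's f into Gustafson's s by normalizing the serial
    time f by the total parallel execution time f + (1 - f)/p."""
    return f / (f + (1.0 - f) / p)

# The two laws agree once the serial fractions are converted consistently.
for f in (0.05, 0.1, 0.5):
    for p in (2, 16, 1024):
        s = serial_fraction_on_parallel(f, p)
        assert abs(amdahl(f, p) - gustafson(s, p)) < 1e-12
```

Algebraically, with sequential time normalized to 1, the parallel time is T = f + (1 − f)/p, so s = f/T and s + p(1 − s) = (f + pT − pf)/T = 1/T, which is exactly Amdahl’s speedup.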

Bibliography

  1. Amdahl G.M., Validity of the single-processor approach to achieving large scale computing capabilities, Proceedings of Spring Joint Computer Conference AFIPS ‘67, Vol. 30, 1967, 483–485, DOI: 10.1145/1465482.1465560.
  2. Barlas G., Multicore and GPU Programming: An Integrated Approach, Morgan Kaufmann, Waltham, MA, 2015.
  3. Basu S.K., Parallel and Distributed Computing: Architectures and Algorithms, PHI Learning, Delhi, 2016.
  4. Bertsekas D.P., Tsitsiklis J.N., Parallel and Distributed Computation: Numerical Methods, Prentice-Hall Inc., Englewood Cliffs, 1989.
  5. Chattopadhyay A., Handbook of Computer Architecture, Springer Nature Singapore Pte Ltd., 2025.
  6. Foster I., Designing and Building Parallel Programs: Concepts and Tools for Parallel Software Engineering, Addison Wesley, Boston, 1995.
  7. Gustafson J.L., Reevaluating Amdahl’s law, “Communications of the ACM”, Vol. 31, No. 5, 1988, 532–533, DOI: 10.1145/42411.42415.
  8. Gustafson J.L., Montry G.R., Benner R.E., Development of parallel methods for a 1024-processor hypercube, “SIAM Journal on Scientific and Statistical Computing”, Vol. 9, No. 4, 1988, 609–638, DOI: 10.1137/0909041.
  9. Hill M.D., Marty M.R., Amdahl’s law in the multicore era, “Computer”, Vol. 41, No. 7, 2008, 33–38, DOI: 10.1109/MC.2008.209.
  10. Huang T., Zhu Y., Qiu M., Yin X., Wang X., Extending Amdahl’s law and Gustafson’s law by evaluating interconnections on multi-core processors, “The Journal of Supercomputing”, Vol. 66, 2013, 305–319, DOI: 10.1007/s11227-013-0908-9.
  11. Hwang K., Cloud Computing for Machine Learning and Cognitive Applications, MIT Press, Cambridge MA, 2017.
  12. Li Z., Duan F., Che H., A unified scaling model in the era of big data analytics, Proceedings of the 3rd International Conference on High Performance Compilation, Computing and Communications, HP3C ‘19, 2019, 67–77, DOI: 10.1145/3318265.3318268.
  13. Orii S., Metrics for evaluation of parallel efficiency toward highly parallel processing, “Parallel Computing”, Vol. 36, No. 1, 2010, 16–25, DOI: 10.1016/j.parco.2009.11.003.
  14. Paul J.M., Meyer B.H., Amdahl’s law revisited for single chip systems, “International Journal of Parallel Programming”, Vol. 35, No. 2, 2007, 101–123, DOI: 10.1007/s10766-006-0028-8.
  15. Quinn M., Parallel Programming in C with MPI and OpenMP, McGraw-Hill Higher Education, New York, 2003.
  16. Rafiev A., Al-Hayanni M.A.N., Xia F., Shafik R., Romanovsky A., Yakovlev A., Speedup and power scaling models for heterogeneous many-core systems, “IEEE Transactions on Multi-Scale Computing Systems”, Vol. 4, No. 3, 2018, 436–449, DOI: 10.1109/TMSCS.2018.2791531.
  17. Rauber T., Rünger G., Parallel Programming: for Multicore and Cluster Systems, Springer Nature Switzerland AG, 2023, DOI: 10.1007/978-3-031-28924-8.
  18. Reilly E.D., Milestones in Computer Science and Information Technology, Greenwood Press, Westport, 2003.
  19. Robertazzi T.G., Shi L., Amdahl’s and Other Laws, [In:] Networking and Computation, Springer Nature Switzerland, Cham, Switzerland, 2nd edition, 2020, 139–149, DOI: 10.1007/978-3-030-36704-6_6.
  20. Schryen G., Speedup and efficiency of computational parallelization: A unifying approach and asymptotic analysis, “Journal of Parallel and Distributed Computing”, Vol. 187, 2024, DOI: 10.1016/j.jpdc.2023.104835.
  21. Shi Y., Reevaluating Amdahl’s Law and Gustafson’s Law, [www.cis.temple.edu/~shi/docs/amdahl/amdahl.html], October 1996, Unpublished manuscript.
  22. Sun X.-H., Chen Y., Reevaluating Amdahl’s law in the multicore era, “Journal of Parallel and Distributed Computing”, Vol. 70, No. 2, 2010, 183–188, DOI: 10.1016/j.jpdc.2009.05.002.
  23. Sun X.-H., Ni L., Scalable problems and memory-bounded speedup, “Journal of Parallel and Distributed Computing”, Vol. 19, No. 1, 1993, 27–37, DOI: 10.1006/jpdc.1993.1087.
  24. Theys M.D., Ali S., Siegel H.J., Chandy M., Hwang K., Kennedy K., Sha L., Shin K.G., Snir M., Snyder L., Sterling T., What are the top ten most influential parallel and distributed processing concepts of the past millenium?, “Journal of Parallel and Distributed Computing”, Vol. 61, No. 12, 2001, 1827–1841, DOI: 10.1006/jpdc.2001.1767.