
The Impact of Synchronization in Parallel Stochastic Gradient Descent

Conference paper in: Distributed Computing and Intelligent Technology (ICDCIT 2022)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 13145)


Abstract

In this paper, we discuss our own and related work in the domain of efficient parallel optimization using Stochastic Gradient Descent, aiming at fast and stable convergence in prominent machine learning applications. We outline the results in the context of the aspects and challenges of synchronization, consistency, staleness, and parallel-aware adaptiveness, focusing on their impact on overall convergence.

This work is supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, the SSF project “FiC” nr. GMT14-0032, and the VR project nr. 2021-05443.
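The paper surveys these aspects rather than presenting code, so the sketch below is purely illustrative: a minimal Python example of lock-free, Hogwild-style asynchronous parallel SGD on a synthetic least-squares problem, intended only to make the notions of unsynchronized updates and stale parameter reads concrete. All identifiers, the problem, and the hyperparameters are assumptions chosen for this example and do not reproduce the authors' algorithms or experiments.

    # Illustrative sketch only: lock-free (Hogwild-style) asynchronous SGD
    # on a synthetic least-squares problem. Not the paper's code.
    import threading
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: y = X @ w_true + noise.
    n_samples, n_features = 2000, 10
    X = rng.standard_normal((n_samples, n_features))
    w_true = rng.standard_normal(n_features)
    y = X @ w_true + 0.01 * rng.standard_normal(n_samples)

    w = np.zeros(n_features)          # shared parameters, updated without locks
    step_size = 0.01
    n_workers, steps_per_worker = 4, 2000

    def worker(seed: int) -> None:
        """Each worker repeatedly samples a data point and applies an SGD step
        to the shared vector w without any synchronization."""
        local_rng = np.random.default_rng(seed)
        for _ in range(steps_per_worker):
            i = local_rng.integers(n_samples)
            w_snapshot = w.copy()                      # possibly stale read of w
            grad = (X[i] @ w_snapshot - y[i]) * X[i]   # gradient of 0.5*(x_i.w - y_i)^2
            w[:] -= step_size * grad                   # lock-free in-place write

    threads = [threading.Thread(target=worker, args=(s,)) for s in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print("distance to w_true:", np.linalg.norm(w - w_true))

Because each worker computes its gradient from a snapshot that other workers may already have overwritten, the applied update is based on stale information; how strongly such staleness and the chosen synchronization and consistency scheme affect convergence is the question the abstract refers to.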



Author information

Correspondence to Karl Bäckström, Marina Papatriantafilou, or Philippas Tsigas.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper


Cite this paper

Bäckström, K., Papatriantafilou, M., Tsigas, P. (2022). The Impact of Synchronization in Parallel Stochastic Gradient Descent. In: Bapi, R., Kulkarni, S., Mohalik, S., Peri, S. (eds.) Distributed Computing and Intelligent Technology. ICDCIT 2022. Lecture Notes in Computer Science, vol. 13145. Springer, Cham. https://doi.org/10.1007/978-3-030-94876-4_4


  • DOI: https://doi.org/10.1007/978-3-030-94876-4_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-94875-7

  • Online ISBN: 978-3-030-94876-4

  • eBook Packages: Computer Science, Computer Science (R0)
