
Federated learning for energy constrained devices: a systematic mapping study

Published in: Cluster Computing

Abstract

Federated machine learning (Fed ML) is a distributed machine learning technique in which clients collaboratively train a global model on their local data without transmitting the datasets themselves. Nodes, the devices participating in training, send only parameter updates (e.g., weight updates in the case of neural networks), which the server fuses to build the global model without compromising raw-data privacy. Fed ML preserves confidentiality by never divulging node data to third-party central servers. Data privacy is a crucial network-security property of Fed ML that enables its use in data-sensitive Internet of Things (IoT) and mobile applications, including smart geo-location and smart-grid infrastructure. However, most IoT and mobile devices are particularly energy constrained, which requires optimizing the Fed ML process for efficient training and reduced power consumption. This paper is, to the best of our knowledge, the first Systematic Mapping Study (SMS) on Fed ML for energy-constrained devices. First, we selected 67 of 800 candidate papers that satisfy our criteria; we then provide a structured overview of the field through a set of carefully chosen research questions. Finally, we analyze state-of-the-art Fed ML techniques and outline recommendations for the research community.
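The round structure the abstract describes — clients train locally on private data, send only parameter updates, and the server fuses them into a global model — can be sketched as follows. This is a minimal, hypothetical illustration (a noiseless linear least-squares model, one gradient step per client per round, and a FedAvg-style data-size-weighted average), not the method of any specific surveyed paper:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step of least-squares training on a client's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def server_aggregate(updates, sizes):
    """Fuse client weights by a data-size-weighted average (FedAvg-style).

    The server only ever sees the weight vectors, never the raw data.
    """
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Two clients, each holding private data that never leaves the device.
clients = []
for n in (50, 100):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

global_w = np.zeros(2)
for _ in range(200):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = server_aggregate(updates, [len(y) for _, y in clients])
```

After enough rounds, `global_w` converges to the weights a centralized trainer would find, even though only updates crossed the network — the communication and energy cost of those repeated rounds is precisely what the surveyed optimizations target.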



Data availability

No additional data were used in this study beyond the list of papers retrieved from public databases, as described in the selection process.

Notes

  1. https://gitlab.com/rachid-el-mokadem/fedmlsysrev.


Funding

No funding was received for conducting this study.

Author information

Authors and Affiliations

Authors

Contributions

All authors have participated in conception and the analysis of the study in addition to writing the paper. REM has also done the paper selection and classification. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Rachid El Mokadem.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical approval

This work does not raise any ethical issues.

Informed consent

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

This appendix lists all papers included in our study, tagged P1 through P67 in chronological order (Table 5).

Table 5 Papers list

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

El Mokadem, R., Ben Maissa, Y. & El Akkaoui, Z. Federated learning for energy constrained devices: a systematic mapping study. Cluster Comput 26, 1685–1708 (2023). https://doi.org/10.1007/s10586-022-03763-4

