DOI: 10.1145/3564625.3564658 — ACSAC Conference Proceedings
Research article · Artifacts Evaluated & Functional (v1.1)

Better Together: Attaining the Triad of Byzantine-robust Federated Learning via Local Update Amplification

Published: 05 December 2022

ABSTRACT

Manipulation of local training data and local updates, i.e., the Byzantine poisoning attack, is the main threat arising from the collaborative nature of the federated learning (FL) paradigm. Many Byzantine-robust aggregation algorithms (AGRs) have been proposed to filter out or moderate suspicious local updates uploaded by Byzantine participants at the central aggregator. However, they largely suffer from model quality degradation due to the over-removal of local updates and/or from inefficiency caused by the expensive analysis of high-dimensional local updates.

In this work, we propose AgrAmplifier that aims to simultaneously attain the triad of robustness, fidelity and efficiency for FL. AgrAmplifier features the amplification of the “morality” of local updates to render their maliciousness and benignness clearly distinguishable. It re-organizes the local updates into patches and extracts the most activated features in the patches. This strategy can effectively enhance the robustness of the aggregator, and it also retains high fidelity as the amplified updates become more resistant to local translations. Furthermore, the significant dimension reduction in the feature space greatly benefits the efficiency of the aggregation.
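The patch-and-extract step can be pictured as a max-pooling-style pass over a flattened local update: the update is split into fixed-size patches and only the most activated entry of each patch is kept. The following minimal sketch illustrates this idea under those assumptions; the names amplify_update and patch_size are hypothetical and are not taken from the released artifact.

import numpy as np

def amplify_update(update, patch_size=64):
    """Split a flattened local update into patches and keep, per patch,
    the entry with the largest magnitude (the most activated feature)."""
    flat = np.ravel(update)
    pad = (-len(flat)) % patch_size               # zero-pad so the update splits evenly
    patches = np.pad(flat, (0, pad)).reshape(-1, patch_size)
    idx = np.abs(patches).argmax(axis=1)          # most activated entry per patch
    return patches[np.arange(len(patches)), idx]  # amplified, lower-dimensional representation

The amplified vector is roughly patch_size times shorter than the original update, which is where the efficiency gain in the aggregation comes from.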

AgrAmplifier is compatible with any existing Byzantine-robust mechanism. In this paper, we integrate it with three mainstream ones, i.e., distance-based, prediction-based, and trust bootstrapping-based mechanisms. Our extensive evaluation against five representative poisoning attacks on five datasets across diverse domains demonstrates a consistent enhancement for all of them in terms of robustness, fidelity, and efficiency. We release the source code of AgrAmplifier and our artifacts to facilitate future research in this area: https://github.com/UQ-Trust-Lab/AgrAmplifier.
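To show how such an amplified representation could feed an existing Byzantine-robust mechanism, the sketch below pairs the hypothetical amplify_update above with a simplified distance-based selection in the spirit of Multi-Krum: pairwise distances are computed on the low-dimensional amplified updates, and the raw updates of the best-scoring clients are averaged. This is a stand-in under stated assumptions, not the exact aggregation rule evaluated in the paper.

import numpy as np

def distance_based_aggregate(updates, n_byzantine):
    """Aggregate raw updates after scoring clients on their amplified updates."""
    amplified = np.stack([amplify_update(u) for u in updates])   # see sketch above
    dists = np.linalg.norm(amplified[:, None, :] - amplified[None, :, :], axis=-1)
    n_keep = len(updates) - n_byzantine
    # Score each client by the summed distance to its closest neighbours
    # (index 0 of each sorted row is the zero distance to itself).
    scores = np.sort(dists, axis=1)[:, 1:n_keep].sum(axis=1)
    selected = np.argsort(scores)[:n_keep]
    return np.mean([updates[i] for i in selected], axis=0)

Prediction-based and trust bootstrapping-based mechanisms could be slotted in the same way, for example by replacing the distance scoring with a prediction check or a cosine-similarity weighting against a server-side reference update.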


Published in: ACSAC '22: Proceedings of the 38th Annual Computer Security Applications Conference, December 2022, 1021 pages. ISBN: 9781450397599. DOI: 10.1145/3564625

      Copyright © 2022 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States



Acceptance rate: overall, 104 of 497 submissions accepted (21%).
