Research Article

Secure and Trustworthy Artificial Intelligence-extended Reality (AI-XR) for Metaverses

Published: 09 April 2024

Abstract

The metaverse is expected to emerge as a new paradigm for the next-generation Internet, providing fully immersive and personalized experiences for socializing, working, and playing in self-sustaining and hyper-spatio-temporal virtual world(s). Advances in technologies such as augmented reality, virtual reality, extended reality (XR), artificial intelligence (AI), and 5G/6G communication will be the key enablers behind the realization of AI-XR metaverse applications. While AI itself has many potential applications in these technologies (e.g., avatar generation, network optimization), ensuring its security in critical applications like AI-XR metaverse applications is crucial to avoid undesirable actions that could undermine users’ privacy and safety and, consequently, put their lives in danger. To this end, we analyze the security, privacy, and trustworthiness aspects associated with the use of various AI techniques in AI-XR metaverse applications. Specifically, we discuss numerous such challenges and present a taxonomy of potential solutions that could be leveraged to develop secure, private, robust, and trustworthy AI-XR applications. To highlight the real implications of AI-associated adversarial threats, we design a metaverse-specific case study and analyze it through an adversarial lens. Finally, we elaborate upon various open issues that require further research attention from the community.

  125. [125] Shafahi Ali, Huang W. Ronny, Najibi Mahyar, Suciu Octavian, Studer Christoph, Dumitras Tudor, and Goldstein Tom. 2018. Poison frogs! Targeted clean-label poisoning attacks on neural networks. Adv. Neural Inf. Process. Syst. 31 (2018).Google ScholarGoogle Scholar
  126. [126] Shahriari Kyarash and Shahriari Mana. 2017. IEEE standard review—Ethically aligned design: A vision for prioritizing human wellbeing with artificial intelligence and autonomous systems. In Proceedings of the IEEE Canada International Humanitarian Technology Conference. IEEE, 197201.Google ScholarGoogle Scholar
  127. [127] Shang Jiacheng, Chen Si, Wu Jie, and Yin Shu. 2020. ARSpy: Breaking location-based multi-player augmented reality application for user location tracking. IEEE Trans. Mob. Comput. 21, 2 (2020).Google ScholarGoogle Scholar
  128. [128] Sharif Mahmood, Bhagavatula Sruti, Bauer Lujo, and Reiter Michael K.. 2016. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security. 15281540.Google ScholarGoogle ScholarDigital LibraryDigital Library
  129. [129] She Changyang, Dong Rui, Gu Zhouyou, Hou Zhanwei, Li Yonghui, Hardjawana Wibowo, Yang Chenyang, Song Lingyang, and Vucetic Branka. 2020. Deep learning for ultra-reliable and low-latency communications in 6G networks. IEEE Netw. 34, 5 (2020), 219225.Google ScholarGoogle ScholarCross RefCross Ref
  130. [130] Shen Meng, Liao Zelin, Zhu Liehuang, Xu Ke, and Du Xiaojiang. 2019. VLA: A practical visible light-based attack on face recognition systems in physical world. Proc. ACM Interact., Mob., Wear. Ubiq. Technol. 3, 3 (2019), 119.Google ScholarGoogle ScholarDigital LibraryDigital Library
  131. [131] Shneiderman Ben. 2020. Human-centered artificial intelligence: Reliable, safe & trustworthy. Int. J. Hum.–comput. Interact. 36, 6 (2020), 495504.Google ScholarGoogle ScholarCross RefCross Ref
  132. [132] Song Yang, Kim Taesup, Nowozin Sebastian, Ermon Stefano, and Kushman Nate. 2018. PixelDefend: Leveraging generative models to understand and defend against adversarial examples. In Proceedings of the International Conference on Learning Representations (ICLR’18).Google ScholarGoogle Scholar
  133. [133] Standard Data Encryption et al. 1999. Data encryption standard. Fed. Inf. Process. Stand. Pub. 112 (1999).Google ScholarGoogle Scholar
  134. [134] Steinhardt Jacob, Koh Pang Wei W., and Liang Percy S.. 2017. Certified defenses for data poisoning attacks. Adv. Neural Inf. Process. Syst. 30 (2017).Google ScholarGoogle Scholar
  135. [135] Suresh Harini and Guttag John V.. 2019. A framework for understanding unintended consequences of machine learning. arXiv (2019).Google ScholarGoogle Scholar
  136. [136] Szegedy Christian, Zaremba Wojciech, Sutskever Ilya, Bruna Joan, Erhan Dumitru, Goodfellow Ian, and Fergus Rob. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013).Google ScholarGoogle Scholar
  137. [137] Tanwar Sudeep, Bhatia Qasim, Patel Pruthvi, Kumari Aparna, Singh Pradeep Kumar, and Hong Wei-Chiang. 2019. Machine learning adoption in blockchain-based smart applications: The challenges, and a way forward. IEEE Access 8 (2019), 474488.Google ScholarGoogle ScholarCross RefCross Ref
  138. [138] Tao Fei, Zhang He, Liu Ang, and Nee Andrew Y. C.. 2018. Digital twin in industry: State-of-the-art. IEEE Trans. Industr. Inform. 15, 4 (2018), 24052415.Google ScholarGoogle ScholarCross RefCross Ref
  139. [139] Tramer Florian and Boneh Dan. 2019. Adversarial training and robustness for multiple perturbations. Adv. Neural Inf. Process. Syst. 32 (2019).Google ScholarGoogle Scholar
  140. [140] Tunze Godwin Brown, Huynh-The Thien, Lee Jae-Min, and Kim Dong-Seong. 2020. Sparsely connected CNN for efficient automatic modulation recognition. IEEE Trans. Vehic. Technol. 69, 12 (2020), 1555715568.Google ScholarGoogle ScholarCross RefCross Ref
  141. [141] Usama Muhammad, Asim Muhammad, Latif Siddique, Qadir Junaid, and Ala-Al-Fuqaha . 2019. Generative adversarial networks for launching and thwarting adversarial attacks on network intrusion detection systems. In Proceedings of the 15th International Wireless Communications & Mobile Computing Conference (IWCMC’19). IEEE, 7883.Google ScholarGoogle ScholarCross RefCross Ref
  142. [142] Usama Muhammad, Ilahi Inaam, Qadir Junaid, Mitra Rupendra Nath, and Marina Mahesh K.. 2021. Examining machine learning for 5G and beyond through an adversarial lens. IEEE Internet Comput. 25, 2 (2021), 2634.Google ScholarGoogle ScholarCross RefCross Ref
  143. [143] Usama Muhammad, Qadir Junaid, and Al-Fuqaha Ala. 2018. Adversarial attacks on cognitive self-organizing networks: The challenge and the way forward. In Proceedings of the IEEE 43rd Conference on Local Computer Networks Workshops (LCN Workshops’18). IEEE, 9097.Google ScholarGoogle ScholarCross RefCross Ref
  144. [144] Usama Muhammad, Qadir Junaid, and Al-Fuqaha Ala. 2019. Black-box adversarial ML attack on modulation classification. arXiv (2019).Google ScholarGoogle Scholar
  145. [145] Farooq Salah-ud-din, Usama Muhammad, Qadir Junaid, and Imran Muhammad Ali. 2019. Adversarial ML attack on self organizing cellular networks. In Proceedings of the UK/China Emerging Technologies (UCET’19). IEEE, 15.Google ScholarGoogle ScholarCross RefCross Ref
  146. [146] Vallor Shannon. 2016. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press.Google ScholarGoogle ScholarCross RefCross Ref
  147. [147] Valluripally Samaikya, Gulhane Aniket, Mitra Reshmi, Hoque Khaza Anuarul, and Calyam Prasad. 2020. Attack trees for security and privacy in social virtual reality learning environments. In Proceedings of the 17th Annual Consumer Communications & Networking Conference. IEEE, 19.Google ScholarGoogle ScholarDigital LibraryDigital Library
  148. [148] Wang Xupeng, Cai Mumuxin, Sohel Ferdous, Sang Nan, and Chang Zhengwei. 2021. Adversarial point cloud perturbations against 3D object detection in autonomous driving systems. Neurocomputing 466 (2021), 2736.Google ScholarGoogle ScholarCross RefCross Ref
  149. [149] Wang Yuntao, Su Zhou, Ni Jianbing, Zhang Ning, and Shen Xuemin. 2021. Blockchain-empowered space-air-ground integrated networks: Opportunities, challenges, and solutions. IEEE Commun. Surv. Tutor. 24, 1 (2021), 160209.Google ScholarGoogle ScholarCross RefCross Ref
  150. [150] Wang Yuntao, Su Zhou, Zhang Ning, Xing Rui, Liu Dongxiao, Luan Tom H., and Shen Xuemin. 2022. A survey on metaverse: Fundamentals, security, and privacy. IEEE Commun. Surv. Tutor. 25, 1 (2022).Google ScholarGoogle Scholar
  151. [151] Wang Yajie, Tan Yu-an, Zhang Wenjiao, Zhao Yuhang, and Kuang Xiaohui. 2020. An adversarial attack on DNN-based black-box object detectors. J. Netw. Comput. Applic. 161 (2020), 102634.Google ScholarGoogle ScholarCross RefCross Ref
  152. [152] Wasserkrug Segev, Gal Avigdor, and Etzion Opher. 2008. Inference of security hazards from event composition based on incomplete or uncertain information. IEEE Trans. Knowl. Data Eng. 20, 8 (2008), 11111114.Google ScholarGoogle ScholarDigital LibraryDigital Library
  153. [153] Wei Jianhao, Li Junyi, Lin Yaping, and Zhang Jin. 2020. LDP-based social content protection for trending topic recommendation. IEEE Internet Things J. 8, 6 (2020), 43534372.Google ScholarGoogle ScholarCross RefCross Ref
  154. [154] Wei Xingxing, Liang Siyuan, Chen Ning, and Cao Xiaochun. 2018. Transferable adversarial attacks for image and video object detection. arXiv preprint arXiv:1811.12641 (2018).Google ScholarGoogle Scholar
  155. [155] Wenger Emily, Passananti Josephine, Bhagoji Arjun Nitin, Yao Yuanshun, Zheng Haitao, and Zhao Ben Y.. 2021. Backdoor attacks against deep learning systems in the physical world. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 62066215.Google ScholarGoogle ScholarCross RefCross Ref
  156. [156] Woodward Julia and Ruiz Jaime. 2022. Analytic review of using augmented reality for situational awareness. IEEE Trans. Visualiz. Comput. Graph. 29, 4 (2022).Google ScholarGoogle Scholar
  157. [157] Wu Tong, Wang Tianhao, Sehwag Vikash, Mahloujifar Saeed, and Mittal Prateek. 2022. Just rotate it: Deploying backdoor attacks via rotation transformation. arXiv preprint arXiv:2207.10825 (2022).Google ScholarGoogle Scholar
  158. [158] Xia Weihao, Yang Yujiu, Xue Jing-Hao, and Wu Baoyuan. 2021. TediGAN: Text-guided diverse face image generation and manipulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 22562265.Google ScholarGoogle ScholarCross RefCross Ref
  159. [159] Xiang Chong, Qi Charles R., and Li Bo. 2019. Generating 3D adversarial point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 91369144.Google ScholarGoogle ScholarCross RefCross Ref
  160. [160] Xie Chulin, Huang Keli, Chen Pin-Yu, and Li Bo. 2019. DBA: Distributed backdoor attacks against federated learning. In Proceedings of the International Conference on Learning Representations.Google ScholarGoogle Scholar
  161. [161] Xie Cihang, Wang Jianyu, Zhang Zhishuai, Zhou Yuyin, Xie Lingxi, and Yuille Alan. 2017. Adversarial examples for semantic segmentation and object detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV’17). 13691378.Google ScholarGoogle ScholarCross RefCross Ref
  162. [162] Xie Yi, Shi Cong, Li Zhuohang, Liu Jian, Chen Yingying, and Yuan Bo. 2020. Real-time, universal, and robust adversarial attacks against speaker recognition systems. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP’20). IEEE, 17381742.Google ScholarGoogle ScholarCross RefCross Ref
  163. [163] Xu Minrui, Ng Wei Chong, Lim Wei Yang Bryan, Kang Jiawen, Xiong Zehui, Niyato Dusit, Yang Qiang, Shen Xuemin Sherman, and Miao Chunyan. 2022. A full dive into realizing the edge-enabled metaverse: Visions, enabling technologies, and challenges. arXiv preprint arXiv:2203.05471 (2022).Google ScholarGoogle Scholar
  164. [164] Xu Weilin, Evans David, and Qi Yanjun. 2017. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155 (2017).Google ScholarGoogle Scholar
  165. [165] Xu Weilin, Qi Yanjun, and Evans David. 2016. Automatically evading classifiers. In Proceedings of the Network and Distributed Systems Symposium.Google ScholarGoogle Scholar
  166. [166] Xue Mingfu, He Can, Wang Jian, and Liu Weiqiang. 2021. Backdoors hidden in facial features: A novel invisible backdoor attack against face recognition systems. Peer-to-Peer Netw. Applic. 14, 3 (2021), 14581474.Google ScholarGoogle ScholarCross RefCross Ref
  167. [167] Yang Qinglin, Zhao Yetong, Huang Huawei, Xiong Zehui, Kang Jiawen, and Zheng Zibin. 2022. Fusing blockchain and AI with metaverse: A survey. IEEE Open J. Comput. Soc. 3 (2022), 122136.Google ScholarGoogle ScholarCross RefCross Ref
  168. [168] Yang Ziqi, Zhang Jiyi, Chang Ee-Chien, and Liang Zhenkai. 2019. Neural network inversion in adversarial setting via background knowledge alignment. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security. 225240.Google ScholarGoogle ScholarDigital LibraryDigital Library
  169. [169] Yao Andrew Chi-Chih. 1986. How to generate and exchange secrets. In Proceedings of the 27th Annual Symposium on Foundations of Computer Science (SFCS’86). IEEE, 162167.Google ScholarGoogle ScholarDigital LibraryDigital Library
  170. [170] Yeung Karen. 2020. Recommendation of the council on artificial intelligence (OECD). Int. Legal Mater. 59, 1 (2020), 2734.Google ScholarGoogle ScholarCross RefCross Ref
  171. [171] Yuan Xiaoyong, He Pan, Zhu Qile, and Li Xiaolin. 2019. Adversarial examples: Attacks and defenses for deep learning. IEEE Trans. Neural Netw. Learn. Syst. 30, 9 (2019).Google ScholarGoogle ScholarCross RefCross Ref
  172. [172] Zellers Rowan, Holtzman Ari, Rashkin Hannah, Bisk Yonatan, Farhadi Ali, Roesner Franziska, and Choi Yejin. 2019. Defending against neural fake news. Adv. Neural Inf. Process. Syst. 32 (2019).Google ScholarGoogle Scholar
  173. [173] Zhang Hantao, Zhou Wengang, and Li Houqiang. 2020. Contextual adversarial attacks for object detection. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME’20). IEEE, 16.Google ScholarGoogle ScholarCross RefCross Ref
  174. [174] Zhang Qiuchen, Ma Jing, Lou Jian, Xiong Li, and Jiang Xiaoqian. 2020. Towards training robust private aggregation of teacher ensembles under noisy labels. In Proceedings of the IEEE International Conference on Big Data (Big Data’20). IEEE, 11031110.Google ScholarGoogle ScholarCross RefCross Ref
  175. [175] Zhang Xinze, Zhang Junzhe, Chen Zhenhua, and He Kun. 2021. Crafting adversarial examples for neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. 19671977.Google ScholarGoogle ScholarCross RefCross Ref
  176. [176] Zhao Ruoyu, Zhang Yushu, Zhu Youwen, Lan Rushi, and Hua Zhongyun. 2022. Metaverse: Security and privacy concerns. arXiv preprint arXiv:2203.03854 (2022).Google ScholarGoogle Scholar
  177. [177] Zhong Yaoyao and Deng Weihong. 2020. Towards transferable adversarial attack against deep face recognition. IEEE Trans. Inf. Forens. Secur. 16 (2020), 14521466.Google ScholarGoogle ScholarCross RefCross Ref

    • Published in

      ACM Computing Surveys, Volume 56, Issue 7
      July 2024
      1006 pages
      ISSN: 0360-0300
      EISSN: 1557-7341
      DOI: 10.1145/3613612
      • Editors:
      • David Atienza,
      • Michela Milano

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 9 April 2024
      • Online AM: 10 August 2023
      • Accepted: 3 August 2023
      • Revised: 12 June 2023
      • Received: 14 October 2022
      Published in csur Volume 56, Issue 7

