Abstract
The metaverse is expected to emerge as a new paradigm for the next-generation Internet, providing fully immersive and personalized experiences for socializing, working, and playing in self-sustaining and hyper-spatio-temporal virtual world(s). Advances in technologies such as augmented reality (AR), virtual reality (VR), extended reality (XR), artificial intelligence (AI), and 5G/6G communication will be the key enablers of AI-XR metaverse applications. While AI itself has many potential applications within these technologies (e.g., avatar generation, network optimization), ensuring the security of AI in critical applications such as AI-XR metaverse applications is crucial: undesirable model behavior can undermine users' privacy and safety, potentially putting their lives in danger. To this end, we analyze the security, privacy, and trustworthiness aspects associated with the use of various AI techniques in AI-XR metaverse applications. Specifically, we discuss numerous such challenges and present a taxonomy of potential solutions for developing secure, private, robust, and trustworthy AI-XR applications. To highlight the real-world implications of AI-related adversarial threats, we design a metaverse-specific case study and analyze it through an adversarial lens. Finally, we elaborate on various open issues that require further attention from the research community.
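To make the adversarial threats discussed above concrete, the sketch below illustrates a fast-gradient-sign-method (FGSM) style evasion attack on a toy logistic-regression classifier. The weights, input, and epsilon are hypothetical values chosen for illustration only; real attacks target deep models and use much smaller, imperceptible perturbations.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon):
    """Shift x by epsilon in the sign of the loss gradient w.r.t. the input."""
    p = sigmoid(np.dot(w, x) + b)        # predicted probability of class 1
    grad_x = (p - y_true) * w            # d(cross-entropy)/dx for logistic regression
    return x + epsilon * np.sign(grad_x)  # adversarial example

# Toy classifier and a correctly classified input (illustrative values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, -0.5])   # decision score = 1.5 -> class 1
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, epsilon=2.0)
print(sigmoid(np.dot(w, x) + b) > 0.5)      # original prediction: True (class 1)
print(sigmoid(np.dot(w, x_adv) + b) > 0.5)  # adversarial prediction: False (flipped)
```

The large epsilon here exaggerates the effect so the class flip is visible in two dimensions; in high-dimensional inputs such as images or point clouds, far smaller perturbations suffice, which is what makes such attacks dangerous for AI-XR applications.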
Index Terms
- Secure and Trustworthy Artificial Intelligence-extended Reality (AI-XR) for Metaverses