Systematic Review

Cardiac Healthcare Digital Twins Supported by Artificial Intelligence-Based Algorithms and Extended Reality—A Systematic Review

1 Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106 Warsaw, Poland
2 Center for Digital Medicine and Robotics, Jagiellonian University Medical College, 7E Street, 31-034 Krakow, Poland
3 Department of Bioinformatics and Telemedicine, Jagiellonian University Medical College, Medyczna 7 Street, 30-688 Krakow, Poland
4 Collegium Prometricum, The Business School for Healthcare, 81-701 Sopot, Poland
5 Royal Society of Arts, 8 John Adam St., London WC2N 6EZ, UK
* Author to whom correspondence should be addressed.
Electronics 2024, 13(5), 866; https://doi.org/10.3390/electronics13050866
Submission received: 25 January 2024 / Revised: 19 February 2024 / Accepted: 21 February 2024 / Published: 23 February 2024
(This article belongs to the Special Issue Metaverse and Digital Twins, 2nd Edition)

Abstract

Recently, significant efforts have been made to create Health Digital Twins (HDTs), i.e., Digital Twins for clinical applications. Heart modeling is one of the fastest-growing fields, which favors the effective application of HDTs. The clinical application of HDTs will be increasingly widespread in the future of healthcare services and has huge potential to form part of mainstream medicine. However, it requires the development of both models and algorithms for the analysis of medical data, and advances in Artificial Intelligence (AI)-based algorithms have already revolutionized image segmentation processes. Precise segmentation of lesions may contribute to an efficient diagnostics process and a more effective selection of targeted therapy. In this systematic review, a brief overview of recent achievements in HDT technologies in the field of cardiology, including interventional cardiology, was conducted. HDTs were studied taking into account the application of Extended Reality (XR) and AI, as well as data security, technical risks, and ethics-related issues. Special emphasis was put on automatic segmentation issues. In this study, 253 literature sources were taken into account. It appears that improvements in data processing will focus on the automatic segmentation of medical images and on three-dimensional (3D) reconstructions of the anatomy of the heart and torso that can be displayed on XR-based devices. This will contribute to the development of effective heart diagnostics. The combination of AI, XR, and an HDT-based solution will help to avoid technical errors and serve as a universal methodology in the development of personalized cardiology. Additionally, we describe potential applications, limitations, and further research directions.

1. Introduction

A Digital Twin (DT) is a digital replica of its corresponding physical object or process. It is a virtual model with special features that combine the physical and digital worlds [1]. Since modern medicine needs to move from being a wait-and-response therapeutic discipline to an interdisciplinary preventive science, interest in the application of DT technology in medicine is rapidly growing. DTs enable human physical characteristics, including changes and disorders in the body, to be transferred to the digital environment. DT technology thus opens up the possibility of delivering personalized medicine by providing an individual patient with their very own diagnosis, optimization path, health forecast, and treatment plan [2]. In this context, Health Digital Twins (HDTs) may represent a specific organ modeled from high-resolution medical imaging and structural and physiological functional data across multiple scales [3]. The technology can be applied to the development of drug delivery processes, selection of targeted therapies, and design of clinical trials. HDTs fit perfectly into the Healthcare 4.0 concept, which assumes the introduction of a publicly available system of effective personalized healthcare [4].
One of the technologies applied in the implementation of DTs is Extended Reality (XR). This enables users to experience the feeling of immersion in the real world on various levels through head-mounted displays (HMDs) [5]. This approach provides a new level of quality in the three-dimensional (3D) visualization of complex structures such as organs and their abnormalities, as well as in touch-free interfaces [6]. XR is increasingly used in preoperative planning, and recently even during surgery [7,8]. Immersive solutions are also starting to play an important role in medical education [9], particularly in the context of distance education [10]. Thus, in the case of virtual environments and virtual models, the key issue is the development of scenes and models that reflect reality as closely as possible. In this field, Artificial Intelligence (AI)-based algorithms, in particular Deep Neural Networks (DNNs), have recently revolutionized image creation [11]. Precise segmentation of lesions may contribute to an efficient diagnostics process and a more effective selection of targeted therapy. For example, an AI-based algorithm for the segmentation of pigmented skin lesions has been developed, which enables diagnosis in the earlier stages of the disease, without invasive medical procedures [12,13]. With flexibility and scalability, AI can also be considered an efficient tool for cancer diagnosis, particularly in the early stages of the disease [14,15]. On the other hand, in this context, the provision of a stable internet connection is extremely important. However, different XR-based solutions have varying requirements for optimal connectivity. Thus, intelligent DTs combined with AI-based algorithms and XR devices have huge potential to revolutionize medicine and public health. A basic model of such connections is presented in Figure 1.
In this paper, we present a brief overview of the recent achievements (2020–2024) in Health Digital Twin technologies in the field of cardiology, taking into account the application of Extended Reality and Artificial Intelligence. Specifically, we aim to answer the following research questions (RQs).
RQ1: Can AI-based algorithms be used for the accurate segmentation of human organs based on medical data, in particular in the case of the heart?
RQ2: How can AI-based algorithms be beneficial in Health Digital Twin technologies?
RQ3: How can Extended Reality be used in Health Digital Twin-based solutions?
RQ4: What ethical threats does a world based on the Metaverse and Artificial Intelligence pose to us?
To address the above research questions, a systematic literature review was conducted.

2. Materials and Methods

In this paper, a systematic review was undertaken based on the PRISMA Statement, which has been published in several journals, and its extensions, including PRISMA-S [16]. The review was formed to accommodate the broad question concerning the application of Digital Twin technology supported by Artificial Intelligence and Extended Reality-based solutions in the field of cardiology. Eligible materials, including publications, reports, protocols, and papers from the peer-reviewed literature, were identified from the Scopus, Web of Science, and PubMed databases. The keywords Artificial Intelligence, Machine Learning, Digital Twin, Digital Twin in medicine, Digital Twin in cardiology, Extended Reality, Mixed Reality, Virtual Reality, Augmented Reality, Metaverse, digital heart, cardiology, signal segmentation, medical image scan segmentation, segmentation algorithms, and classification algorithms, together with their variations, were used. The inclusion criteria set to select the resources were as follows: resource language: English; type of resource: publications in the form of journal papers, books, and proceedings as well as technical reports; publishing time frame: between 2020 and 2024. This choice of publication dates is based on the fact that areas such as Artificial Intelligence, the Metaverse, and Health Digital Twins have developed very dynamically in recent years, and we wanted to focus only on the latest developments in the field of the Metaverse and Digital Twins in a medical context, in particular cardiology. The literature search was conducted as follows.
(1) Duplicate and non-relevant records were removed;
(2) Resources whose titles and abstracts were not relevant to the topic were excluded;
(3) Non-retrieved resources were removed;
(4) Conference papers, reviews, Ph.D. theses, and sources that did not contain information about the Metaverse, AI, and XR in the context of cardiology were excluded.
In addition, resources that were deemed unnecessary during the search were eliminated from consideration. Thus, all documents considered needed to be peer-reviewed and to include answers to research questions. Finally, 253 documents were taken into account.

3. Digital Twins in Cardiology—The Heart Digital Twin

A Digital Twin comprises three parts: the physical organ in the body, its virtual counterpart, and their mutual interactions. Outside medicine, this approach works well in industrial solutions such as the battery industry, for example [17], where it allows a pro-ecological approach to batteries already at the design stage. In turn, in the area of medicine, DTs offer opportunities ranging from research on mechanisms related to various diseases and isolating disease predictors to the optimization of health outcomes [18]. However, the human body and its parts are considered to be more complex than objects in engineering and manufacturing [19,20]. For example, it has also been shown at the individual level that mathematical models can be calibrated based on patient-specific data to predict tumor response dynamics [21,22,23,24]. These models can be applied in the development of a Digital Twin of the patient. In recent years, early attempts to apply DTs in medicine have been made [25]. In cardiology, DTs solve the inverse problem of electrocardiography, relating electrical signals to the anatomy of the heart [26,27]; this is connected with the development of a heart model that involves the parameterization of its elements [28]. This enables non-invasive functional cardiac imaging and modeling methods to be used in a clinical setting. For this to become a clinical reality, 3D thorax information should result in well-defined heart models inside the thorax, for which no algorithms are currently available due to the large variability in body build and underlying heart disease. However, the solution to the inverse problem is subject to technical errors [29]. For further development of this DT, a model of the patient’s heart and torso derived from medical imaging based on Magnetic Resonance Imaging/Computed Tomography (MRI/CT) is required [30]. Medical image processing is time consuming, especially when the image is of poor quality, and human involvement is still needed. Thus, the development of efficient and accurate segmentation algorithms is of high importance [31].
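To make the inverse-problem step described above more concrete, the following minimal sketch (our own illustration, not taken from the cited works) solves a zero-order Tikhonov-regularized version of the body-surface-to-heart-surface mapping, assuming a known lead-field matrix A that relates epicardial potentials x to torso potentials y; the matrix sizes, potentials, and regularization parameter are all hypothetical.

import numpy as np

def inverse_ecg_tikhonov(A, y, lam=1e-3):
    """Estimate epicardial potentials x from torso potentials y = A @ x + noise,
    using zero-order Tikhonov regularization: min ||A x - y||^2 + lam * ||x||^2."""
    n = A.shape[1]
    # Normal equations of the regularized least-squares problem.
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Hypothetical example: 120 torso electrodes, 200 epicardial nodes.
rng = np.random.default_rng(0)
A = rng.normal(size=(120, 200))               # lead-field (forward) matrix
x_true = rng.normal(size=200)                 # "true" epicardial potentials
y = A @ x_true + 0.05 * rng.normal(size=120)  # noisy body-surface potentials

x_est = inverse_ecg_tikhonov(A, y, lam=1.0)
print("relative error:", np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true))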

4. Extended Reality in Cardiology

Amongst other things, XR in medicine provides the ability to overlay computer-developed elements and structures onto real-world data and develop touch-free human–computer interfaces (controlled by voice, eye movements, and hand gestures) that can be applied in a sterile environment [32]. Moreover, Schöne et al. [33] have shown that the current configuration of XR-based devices contributes to the experience of a feeling of reality, although this depends on the quality of the scenes and objects presented. Extended Reality can be divided into Virtual Reality (VR) (a completely virtual experience), Augmented Reality (AR) (combining real-world elements with computer-generated elements), and Mixed Reality (MR) (computer-generated elements that can actively interact with the real world) [3]. All these types of XR-based technologies can be very helpful in the field of cardiology [34,35,36] (also see Table 1). Marvin et al. [37] have shown that this approach is quite popular in European cardiology, although it has not yet become routine practice. For example, VMersive (VR-Learning, Poland) is an automated tool dedicated to the reconstruction of CT and MRI image scenes. It can be applied to procedure planning, as in congenital heart disease (CHD) treatment [38]. Another study [39] concentrates on the evaluation of a VR-based solution for baffle planning in CHD. The proposed approach enables a medical doctor to simulate different baffle configurations and analyze their impact on blood flow, which is impossible under practical conditions. The knowledge gained may be beneficial in operational planning [40]. In turn, Ghosh et al. [41] applied VR to the creation of a 3D view based on cardiac MRI to visualize multiple ventricular septal defects, an approach that uses commercial software to segment heart areas. This procedure made it possible to reveal what would normally be invisible: the ventricular septal defect of the heart. Another case of XR’s practical application in cardiology is cardiac catheterization, a commonly used yet quite risky procedure. Battal et al. [42] proposed VR as a support for this procedure. As shown by Eves et al. [43] and Chahine et al. [44], techniques such as AR may shorten the operation time. When we consider cardiac surgery, even the simplest types, we must take into account the patient’s recovery after such an operation. Very little attention is paid to this issue. This creates a very interesting area of potential XR-based technology application [45]. Additionally, three-dimensional cardiac imaging is also important in veterinary medicine [46].
Another XR application field in cardiology is connected with rehabilitation. Mocan et al. [64] proposed a combination of home cardiac telerehabilitation based on a virtual environment with a monitoring system. The idea presented allows the continuation of rehabilitation at home. AR may also be helpful in home monitoring as in an application that allows the correct placement of ECG electrodes to be checked using photos taken with a mobile phone [65]. Here, it was found that an AR-based solution enables at least eighty percent of the measurements to be obtained correctly.
Recently, attention has also been paid to pain management for surgical procedures, as in cases of advanced heart failure, where VR can also be beneficial [66]. Another approach to VR in cardiology is to use the sensors and cameras found in HMDs to determine heart rate (HR) [67]. HR was estimated based on the central portion of the face by the application of remote photoplethysmography, Eulerian Video Magnification (EVM), and Convolutional Neural Networks (CNNs). According to the research results, HR could be predicted based only on facial regions. An interesting solution in the field of VR-supported surgical procedures was proposed by Sainsbury et al. [68]. A three-dimensional model of the renal system was developed based on the patient’s preoperative CT scans. Then, the surgeon planned the course of the operation. It turned out that the combination of VR and tactile feedback strongly influences decision making during surgery.
A further important area of XR-based technology is the education (broadly understood) of both future medical staff and patients [69,70,71], as in the case of teaching heart anatomy [37]. For example, VR-based simulators provide an effective tool for determining the heart’s mechanical and electrical activities [72]. Additionally, the proposed solution is equipped with a VR catheter module that allows the movement of the catheter to be tracked. In research by Patel et al. [73], traditional learning using 2D imaging and learning using XR and 3D imaging were compared. The objective of the study was to understand congenital heart disease. Although there were no differences in teaching effectiveness from a statistical point of view, participants who used XR in the learning process reported a better understanding of the content. On top of that, Lim et al. [74] found VR to be a very helpful tool for residents participating in pediatric cardiology rotations. O’Sullivan et al. [75] also found that more than eighty percent of participants believed that VR is a good teaching tool for acquiring knowledge about echocardiography, and over sixty percent of them rated VR higher than traditional teaching methods. Then, Choi et al. [76] found that AR glasses increase the level of understanding of left ventricular ejection fraction. Additionally, García Fierros et al. [62] made a comparison of VR and fluoroscopic guidance for transseptal puncture. It turned out that VR may have the potential to shorten training. Gladding et al. [77] and Kieu et al. [58] also found that such an approach is helpful.

5. Artificial Intelligence-Based Support in Cardiology

Computer-assisted medicine in general, and cardiac modeling in particular, is no exception when it comes to the successful application of continuous advancements in bioelectricity and biomagnetism [78]. Along with enhancements in ECG measuring techniques and a constant increase in computational resources, these advances have driven the development of many different heart models that can support an automatic and accurate diagnosis of the heart, beat by beat. Knowledge of the anatomical heart structure is an important part of the evaluation of cardiac functionality. Thus, cardiac imaging is one of the significant techniques applied in the assessment of patient health. At present, the image segmentation procedure is usually performed manually, with an expert sitting in front of a monitor moving a pointer, and not only does this require time and resources to accomplish, but it is also subject to error depending on the experience of the expert. In sum, this procedure is time consuming, inefficient, very often error prone, and highly user dependent [79]. Therefore, the development of an efficient, automatic segmentation procedure is of great importance [80]. However, certain limitations mean that the automatic segmentation of cardiac images is still an open and difficult task. For example, in the case of 2D echocardiographic images, a low signal-to-noise ratio, speckles, and low-quality images form some of the difficulties in determining the contour of the ventricles. Moreover, significant variability in the shape of heart structures makes it difficult to develop universal automated algorithms. Thus, medical image segmentation has become a significant area of AI application in medicine. An image can be segmented in several ways, including semantic segmentation (the assignment of each pixel or voxel of an image to one of the classes) [81], instance segmentation (pixels of an image are assigned to the instances of the object) [82], and panoptic segmentation (the connection of the semantic and instance segmentation) [83]. The main disadvantage of semantic segmentation is the poor definition of the problem (sometimes multiple instances can be abstracted into a single class), which translates into inadequate recognition of image details. As said, in the case of medical images, segmentation is often performed manually, making it a time-consuming and error-prone process. Many algorithms have been proposed to support the automatic segmentation of medical images. It is also worth stressing that imaging methods in cardiology have particular characteristics that can affect their reproducibility and reliability. These include spatial, temporal, and contrast resolution as well as tissue penetration and artifact susceptibility. The ultimate goal is to enable fully automatic segmentation of any clinically acquired CT or MRI. Indeed, MRI offers higher resolution in comparison to ultrasound, and spatial resolution impacts the ability to visualize tiny structures in the heart and blood vessels. In turn, echocardiography can provide higher temporal resolution compared to MRI or CT, which affects the ability to capture dynamic changes in heart function. Thus, different modalities have different capabilities in distinguishing between different tissue types and contrast agents. MRI often excels in contrast resolution compared to other diagnostic methods. Therefore, for medical image segmentation (mostly semantic segmentation), different types of neural networks are applied [84]; see also Table 2.
The basic concept of AI application in cardiology is presented in Figure 2.
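As a minimal illustration of the semantic-segmentation setting described above (our own sketch, not code from any of the cited studies), a network produces per-pixel class scores, the predicted mask is obtained by a per-pixel argmax, and agreement with an expert mask is commonly quantified with the Dice coefficient; the arrays below are synthetic.

import numpy as np

def predicted_mask(logits):
    """Per-pixel semantic segmentation: pick the most probable class for each pixel.
    logits has shape (num_classes, H, W)."""
    return np.argmax(logits, axis=0)

def dice_coefficient(pred, target, label):
    """Dice overlap between the predicted and expert masks for one class label."""
    p = (pred == label)
    t = (target == label)
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

# Synthetic example: 3 classes (background, myocardium, blood pool) on a 4x4 image.
rng = np.random.default_rng(1)
logits = rng.normal(size=(3, 4, 4))
expert = rng.integers(0, 3, size=(4, 4))
pred = predicted_mask(logits)
print("Dice for class 1 (e.g., myocardium):", dice_coefficient(pred, expert, 1))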

5.1. Application of the You-Only-Look-Once (YOLO) Algorithm

The You-Only-Look-Once (YOLO) algorithm is a deep learning-based approach to object detection [149,150]. It is based on the idea that each image passes only once through the neural network, hence the name. This is performed by dividing the input image into a grid and predicting, for each grid cell, bounding boxes and class probabilities. The algorithm predicts different values related to the object, such as the coordinates of the center of the bounding box around the object, the height and width of the bounding box, the class of the object, and the probability, or confidence, of the prediction. This way of working may cause the algorithm to detect the same object multiple times. To avoid duplicate detections of the same object, the algorithm uses non-maximum suppression (NMS), which works by calculating a metric called Intersection over Union (IoU) between the boxes. If the IoU between two boxes is larger than a certain threshold, the box with the higher confidence score is kept and the other box is discarded. There have been many improvements to the YOLO algorithm that provide higher accuracy, faster performance, improved scalability, and greater flexibility for customization.
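A minimal sketch of the IoU computation and non-maximum suppression step described above (our own illustration, not the implementation used in the cited YOLO papers); the boxes are given as (x1, y1, x2, y2) with hypothetical confidence scores.

import numpy as np

def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box and drop overlapping boxes above the IoU threshold."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order):
        best = order[0]
        keep.append(best)
        order = np.array([i for i in order[1:] if iou(boxes[best], boxes[i]) < iou_threshold])
    return keep

# Hypothetical overlapping detections of the same left-ventricle region, plus one distant box.
boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [100, 100, 150, 150]], dtype=float)
scores = np.array([0.9, 0.75, 0.8])
print("kept boxes:", non_max_suppression(boxes, scores))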
In advancing the diagnosis of cardiovascular diseases (CVDs), the YOLOv3 algorithm was applied to the precise segmentation of the left ventricle (LV) in echocardiography. This method leverages YOLOv3’s powerful feature extraction capabilities to accurately locate key areas of the LV, including the apex and bottom, facilitating the acquisition of detailed LV subimages. Employing the Markov random field (MRF) model for initial identification and processing, the method then applies sophisticated techniques including non-linear least-squares curve fitting for exact LV endocardium segmentation. YOLOv3’s role is pivotal in ensuring the accuracy and efficiency of this process, highlighting its significance in the early detection and analysis of CVD [151]. On the other hand, in the realm of cardiac health monitoring and medical image processing, the Lion-Based Butterfly Optimization model with Improved YOLOv4 was introduced, as described by Alamelu and Thilagamani [152]. When applied in the prediction of heart disease based on echocardiography, it was found that a refined version of the segmentation algorithm significantly improves the analysis of echocardiographic images (with an average accuracy of 99%), offering more accurate and thorough insights into cardiac health, thus marking a substantial advancement in cardiac diagnostics technology. Lee et al. [153] applied the YOLOv5 algorithm to cardiac detection. Based on cardiovascular CT images from Soonchunhyang University Hospital in Korea, the critical role of data preprocessing in deep learning, especially when dealing with limited medical datasets, was presented. The study highlights the advanced capabilities of YOLOv5, including its efficiency in processing and analyzing complex medical images, in particular in the field of cardiology. The approach presented significantly contributes to the precision and speed of cardiac disease detection, underscoring the impact of deep learning techniques in improving medical diagnostics. Moreover, by combining the capabilities of YOLOv7 with a U-Net Convolutional Neural Network, the precise segmentation of left heart structures from echocardiographic images was subsequently developed [154]. This approach efficiently delineates complex anatomical structures, including the left atrium, endocardium, and epicardium. Thus, the integration of YOLOv7 with U-Net significantly improves the accuracy and efficiency of the segmentation process, proving to be a valuable asset in cardiac diagnostics and clinical practice.
The YOLO-based approach to image segmentation is fast and makes efficient use of computing resources, which is of key importance considering the huge amounts of cardiological data that need to be processed. However, this speed may come at the cost of accuracy compared to more complex segmentation algorithms, which is crucial in the case of cardiac images. YOLO-based image segmentation may also lead to a reduction in spatial resolution in segmentation masks, especially for small or complex structures in radiological images. Additionally, to ensure a satisfactory level of accuracy, a large amount of high-quality labeled training data is required. Collecting labeled data for radiological images can be difficult and time consuming due to the need for expert annotation. The algorithm is also very sensitive to class imbalance, which often occurs with radiological data. In the field of cardiology, YOLO is used largely on a black-box basis, which can make it difficult to interpret results reliably.

5.2. Genetic Algorithms

The analysis of medical data can also be approached using metaheuristic methods such as Evolutionary Algorithms (EAs), Genetic Algorithms (GAs) in particular, and Artificial Immune Systems (AISs), which search the possible solution space based on mechanisms taken from the theory of evolution and natural immune systems. GAs can also be used to improve diagnosis as well as the selection of targeted therapy in the field of cardiology. Reddy et al. [148] applied GAs to the diagnostics of early-stage heart disease, which has crucial implications in the selection of further therapy methods. For example, GAs allowed for the optimization of classification rules. As a consequence, the level of accuracy increased and the computational cost was reduced (due to the simplification of the selection process). GAs can also be applied to the determination of personalized parameters of the cardiomyocyte electrophysiology model [155]. Here, the Cauchy mutation was applied. In most cases, GAs were used to limit the number of parameters that are then used as input to another AI-based algorithm, such as a Support Vector Machine (SVM) [156,157]. Genetic Algorithms can effectively search for optimal segmentation solutions in the case of heart image segmentation, where anatomical structures may have different shapes. However, GAs may have difficulty handling complex constraints or incorporating domain-specific knowledge in cardiac image segmentation tasks. On the other hand, GAs can also be effective in the optimization of the input parameters to neural networks. They are inherently robust to noise and local optima. This is an important feature taking into account motion artifacts or imaging noise in cardiac image segmentation. A major disadvantage of GAs is the computational cost of searching large or high-dimensional feature spaces, which is crucial, especially for real-time computations or in clinical settings (such as may occur in cardiology applications). Thus, finding the optimal parameters can be difficult and time consuming. However, GAs can be parallelized in a relatively affordable way, which can help eliminate this disadvantage. Another issue is that this approach may converge slowly and therefore require long computation times, which is not desirable in clinical practice.
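The sketch below (our own, using hypothetical data and a toy fitness function) illustrates the typical use of GAs described above: a binary chromosome selects a feature subset, and selection, crossover, and mutation evolve the population toward a subset that maximizes a fitness score standing in for the cross-validated accuracy of a downstream classifier such as an SVM.

import numpy as np

rng = np.random.default_rng(42)
NUM_FEATURES = 20
# Hypothetical "usefulness" of each feature; a real pipeline would instead score a
# chromosome by the cross-validated accuracy of, e.g., an SVM trained on that subset.
feature_value = rng.uniform(0, 1, NUM_FEATURES)

def fitness(chromosome):
    # Reward informative features, penalize large feature subsets.
    return feature_value[chromosome == 1].sum() - 0.3 * chromosome.sum()

def evolve(pop_size=30, generations=50, mutation_rate=0.05):
    pop = rng.integers(0, 2, size=(pop_size, NUM_FEATURES))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Tournament selection: the better of two random individuals becomes a parent.
        parents = np.array([pop[max(rng.integers(0, pop_size, 2), key=lambda i: scores[i])]
                            for _ in range(pop_size)])
        # One-point crossover between consecutive parents.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            cut = rng.integers(1, NUM_FEATURES)
            children[i, cut:], children[i + 1, cut:] = (parents[i + 1, cut:].copy(),
                                                        parents[i, cut:].copy())
        # Bit-flip mutation.
        flips = rng.random(children.shape) < mutation_rate
        children[flips] ^= 1
        pop = children
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()

best, best_score = evolve()
print("selected features:", np.flatnonzero(best), "fitness:", round(float(best_score), 3))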

5.3. Artificial Neural Networks

Artificial Neural Networks (ANNs) are networks whose structure and principle of operation are to some extent modeled on the functioning of fragments of the real nervous system (the brain) [158,159]. These computational models contribute to the development of medical imaging, especially in cardiology, where their brain-inspired design enables them to interpret complex patterns within medical data effectively. ANNs consist of layers composed of several neurons, which apply specific weights and biases to the inputs. These neurons utilize non-linear activation functions that enable the network to detect complex patterns and relationships that linear functions might overlook. The output layer plays a pivotal role in making predictions or classifications based on the analysis, such as identifying signs of heart disease, classifying different cardiac conditions, or determining the severity of a disorder [160,161]. In cardiology, the ability to detect conditions accurately and at an early stage is of paramount importance, and the application of ANNs for the analysis of medical images is an important development in this area. Considering the high global prevalence of cardiovascular diseases, the application of ANNs in cardiac imaging may substantially improve diagnostic techniques [162]. ANNs provide an efficient computational tool to detect structural abnormalities in heart tissues. They also play a vital role in assessing cardiac function, evaluating important metrics such as the ejection fraction, and analyzing blood flow patterns, which are essential for diagnosing heart failure or valvular heart disease. Their ability to analyze historical and current medical images aids in predicting the progression of cardiac diseases. This could positively impact patient outcomes, meeting an essential requirement in contemporary healthcare. Based on ANNs, Salte et al. [163] proposed automating the measurement of global longitudinal strain (GLS), a vital metric for assessing left ventricular function in cardiology. Echocardiographic cine-loops were analyzed, and the approach developed demonstrated superior accuracy and efficiency compared to conventional speckle-tracking software. A further study by Nithyakalyani et al. [164] also shows ANN potential in the CVD diagnostic process.
ANNs can automatically learn hierarchical features from raw image data without the need to manually extract features, which is beneficial for segmenting complex organs such as the heart. However, applying ANNs to medical image processing requires converting two-dimensional images into one-dimensional vectors, which increases the number of parameters and the cost of calculation. Moreover, as in the case of YOLO-based segmentation algorithms, an ANN-based approach requires large, good-quality training datasets to provide high accuracy. Additionally, finding the right combination of hyperparameters can be time consuming and require extensive experimentation involving significant computational resources. ANNs are also prone to overfitting, especially when trained on limited or noisy data; to prevent overfitting, regularization techniques and data augmentation strategies are often used.
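A minimal sketch of the fully connected architecture described above (our own illustration, not any of the cited models): a flattened image passes through weighted layers with non-linear activations, and the output layer produces class probabilities (e.g., normal vs. abnormal); all sizes and data are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical 32x32 grayscale cardiac image, flattened to a 1D vector.
image = rng.uniform(0, 1, (32, 32))
x = image.ravel()                       # 1024 input features

# Two hidden layers and an output layer for 2 classes (e.g., normal vs. abnormal).
W1, b1 = rng.normal(scale=0.05, size=(1024, 64)), np.zeros(64)
W2, b2 = rng.normal(scale=0.05, size=(64, 16)), np.zeros(16)
W3, b3 = rng.normal(scale=0.05, size=(16, 2)), np.zeros(2)

h1 = relu(x @ W1 + b1)                  # weights and biases + non-linear activation
h2 = relu(h1 @ W2 + b2)
probs = softmax(h2 @ W3 + b3)           # output layer: class probabilities
print("predicted class probabilities:", probs)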

5.4. Convolutional Neural Networks

Another neural network that has been applied to medical image processing is the Convolutional Neural Network (CNN). As opposed to traditional neural networks such as ANNs, which typically process data in a straightforward, sequential manner, CNNs can discern spatial relationships within datasets. This is due to the way they are designed and constructed, intended as they are to maintain and interpret the spatial structure of input data, an attribute that is vital for the accurate assessment of medical images. For example, Roy et al. [91] applied CNNs to cardiac image segmentation to diagnose coronary artery disease (CAD). CNNs were used to analyze 2D X-ray images, significantly enhancing image segmentation accuracy and setting new standards in medical image analysis. Similarly, as in Gao et al. [165], Galea et al. [166] proposed combining U-Net and DeepLabV3+ CNN architectures for the segmentation of cardiac images from smaller datasets. Tandon et al. [167] applied CNNs in cardiology with a specific focus on cardiovascular imaging for patients with Repaired Tetralogy of Fallot (RTOF). A CNN originally designed for ventricular contouring was retrained and adapted to the complexities of RTOF. This enabled an increase in algorithm accuracy. In turn, Stough et al. [168] developed a fully automatic method for segmenting heart substructures in 2D echocardiography images using CNNs that was validated against a robust dataset, and Sander and Išgum [169] focused on enhancing the segmentation of cardiac structures in cardiac MRI. This method integrates automatic segmentation with an assessment of segmentation uncertainty to identify potential local failures. The measures of predictive uncertainty were calculated and used to train another CNN to detect local segmentation errors for potential expert correction. This approach, combining automatic segmentation with manual correction of detected errors, could significantly reduce the time required for expert segmentation. Masutani et al. [170] considered the lengthy acquisition times and reduced spatial detail in cardiac MRI. Here, CNNs were applied for deep learning super-resolution. It turned out that CNNs considerably outperformed traditional image upscaling methods, recovering high-frequency spatial details and providing accurate left ventricular volumes. Then, Liu et al. [142] focused on creating interpretable deep-learning models for cardiac MRI segmentation, particularly of the left ventricle, with the use of CNNs. A deep CNN was also applied to classify Coronary Computed Tomography Angiography (CCTA) scans using the Coronary Artery Disease Reporting and Data System (CAD-RADS) categories [86]. Indeed, one of the advantages of this approach is the reduction in the analysis time compared to manual readings, demonstrating its efficiency and accuracy in automating the classification process for coronary artery disease. For example, O’Brien et al. [171] proposed automated detection of ischemic scars in the left ventricle from routine CTA imaging. The CNN exhibited high accuracy in detecting scar slices, performing better than manual readings and showing the potential of this method in enhancing cardiac imaging and diagnostics at minimal additional costs. Similarly, Candemir et al. [94] employed a deep learning algorithm using a three-dimensional CNN to detect and localize coronary artery atherosclerosis in CCTA scans.
In the context of cardiology, the fully connected layers of CNNs are responsible for synthesizing information to perform critical analytical tasks. These include classifying different cardiac conditions, detecting anomalies such as irregularities in heart size or shape, and making predictive assessments based on a comprehensive analysis of cardiac structure and function. CNNs are particularly good at handling complex datasets from various imaging modalities in cardiology, including MRI, CT scans, and ultrasound [172]. The strength of CNNs lies in their ability to handle high-dimensional data and to effectively capture the spatial structures within medical images in cardiology. This leads to more precise and comprehensive analyses of cardiac health. However, in the case of sparse or partial input data, their use is difficult and does not provide high prediction accuracy, while high segmentation accuracy is associated with high computational costs. Moreover, standard CNNs do not explicitly take into account the spatial relations between features in an image, which is important in the case of cardiology. To overcome this limitation, Capsule Networks (CNs) were introduced [173]. Their output is in the form of vectors that enable some spatial relations to be preserved. The disadvantage of this approach is the lack of verification on a large dataset.
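The following minimal sketch (our own, not any cited architecture) shows the convolutional building blocks discussed above arranged as a tiny encoder-decoder that outputs a per-pixel class map, the basic pattern behind U-Net-style cardiac segmentation networks; the channel counts, image size, and data are hypothetical.

import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Minimal encoder-decoder: downsample with convolution + pooling, upsample back,
    and predict a class score for every pixel."""
    def __init__(self, in_channels=1, num_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                               # halve the spatial resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.Conv2d(16, num_classes, kernel_size=1),     # per-pixel class scores
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Hypothetical batch of one 64x64 single-channel cardiac MRI slice.
model = TinySegNet()
image = torch.randn(1, 1, 64, 64)
logits = model(image)                     # shape: (1, num_classes, 64, 64)
mask = logits.argmax(dim=1)               # predicted per-pixel labels
print(logits.shape, mask.shape)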

5.5. Recurrent Neural Networks

Recurrent Neural Networks (RNNs) are known for their ability to model long-term dependencies and are crucial for capturing the intricate details of cardiac structures. Unlike traditional feedforward neural networks that process inputs in a one-directional manner, RNNs are designed to handle sequences of data. This is achieved through their internal memory, which allows them to retain information from previous inputs and use it in the processing of new data [174]. In the case of medical data in the form of echocardiography and cardiac MRI segmentation, RNNs have shown promising performance [88]. They also excel in handling the sequential and temporal aspects of both MRI and CT data, crucial for monitoring dynamic changes in cardiac tissues over time [136]. In turn, Wahlang et al. [89] successfully combined RNNs and their Long Short-Term Memory (LSTM) variants for the segmentation and classification of 2D echo images, 3D Doppler images, and videographic images. Wang and Zhang [116] also considered the segmentation of the left ventricle wall in four-chamber view cardiac sequential images. An RNN was applied to provide detailed information for the initial image, while an LSTM was used to generate the segmentation result; this approach increases accuracy. Another RNN application in the field of cardiology was presented by Muraki et al. [90]. Here, simple RNNs, LSTM, and other RNN variations (such as Gated Recurrent Units (GRU)) were successfully used to detect acute myocardial infarction (AMI) in echocardiography. Another cardiology-connected RNN application field was developed by Fischer et al. [175] to detect coronary artery calcium (CAC) from Coronary Computed Tomography Angiography (CCTA) data. Here, the automatic detection and labeling of heart and coronary artery centerlines based on the RNN-LSTM algorithm was considered. Then, Lyu et al. [176] put forward a recurrent Generative Adversarial Network model for cine cardiac Magnetic Resonance Imaging. This model utilizes bi-directional convolutional LSTM and multi-scale convolutions, adept at managing long-range temporal features and capturing both local and global features, thereby enhancing the network’s performance. The method showcases significant improvements in cine cardiac MRI image quality and an ability to generate missing intermediate frames, thus improving the temporal resolution of cine cardiac MRI sequences. A similar approach can be found in the work of Ammar et al. [177]. Additionally, the method shows strong correlation coefficients and limits of agreement for clinical indices when compared to their ground truth counterparts, highlighting its potential effectiveness and efficiency in cardiac cine MRI analysis.
RNNs have proven to be well suited to managing the sequential and temporal characteristics inherent in MRI and CT data, a capability that is essential for accurately tracking dynamic alterations in cardiac tissues, thanks to their ability to effectively capture long-range non-linear dependencies, as in modeling the risk trajectory of heart failure [178]. However, one limitation of RNNs is the problem of vanishing or exploding gradients.
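A minimal sketch (our own, not any cited model) of the sequential processing described above: an LSTM reads an ECG segment sample by sample, its internal memory carries information forward, and the final hidden state is classified; the layer sizes and input data are hypothetical.

import torch
import torch.nn as nn

class ECGSequenceClassifier(nn.Module):
    """LSTM over a 1D ECG sequence; the last hidden state feeds a small classifier."""
    def __init__(self, hidden_size=32, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):                  # x: (batch, time, 1)
        _, (h_n, _) = self.lstm(x)         # h_n: (1, batch, hidden_size)
        return self.head(h_n[-1])          # class scores per sequence

# Hypothetical batch of four 500-sample ECG segments.
model = ECGSequenceClassifier()
ecg = torch.randn(4, 500, 1)
print(model(ecg).shape)                    # torch.Size([4, 2])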

5.6. Spiking Neural Networks

Calculations related to the analysis of cardiac data are very time consuming and involve a great deal of computing resources. One alternative that can potentially reduce computational cost could be Spiking Neural Networks (SNNs). Currently, SNNs are not yet as accurate as traditional neural networks, but their characteristics are more similar to those of biological neurons [179]. They may also be advantageous in wearable and implantable devices for their energy efficiency and real-time processing capabilities. This makes them ideal for continuous cardiac monitoring, as they require less frequent recharging or battery replacement, a significant benefit for devices like cardiac monitors and pacemakers. For example, Rana and Kim [180] constrained the synaptic weights to be binary, an operation that reduces computational complexity and power consumption. This is crucial, especially in the context of wearable monitors where continuous monitoring is key but the constraints of power and computational resources are limiting factors. Their binarized SNN model may be a highly efficient alternative for ECG classification, setting a new standard in continuous cardiac health monitoring technologies. Shekhawat et al. [181] propose a Binarized Spiking Neural Network (BSNN) optimized with a Momentum Search Algorithm (MSA) for fetal arrhythmia detection. Another study in the field of arrhythmia detection introduces a Memristive Spike-Based Computing in Memory (MSB-CIM) system using a Memristive Spike-Based Computing Engine with Adaptive Neuron (MSPAN). Then, a multi-layer deep integrative Spiking Neural Network (DiSNN) in edge computing environments was developed by Jiang et al. [182]. This system efficiently manages ECG classification tasks, greatly reducing computational complexity without compromising accuracy. Furthermore, Banerjee et al. [96] optimized SNNs for ECG classification in wearable and implantable devices such as smartwatches and pacemakers. Their approach in designing both reservoir-based and feed-forward SNNs, and integrating a new peak-based spike encoder, has led to significant enhancements in network efficiency. Yan et al. [183] proposed training an SNN model on diverse patient data and then applying it to classify ECG patterns from new, previously unseen individuals. This approach addresses the balance between low power consumption and high accuracy effectively, making it a highly suitable choice for continuous, real-time heart monitoring in everyday wearable technology. Similarly, Kovács and Samiee [131] introduced a hybrid neural network architecture that merges the strengths of Variable Projections (VP) with the capabilities of SNNs. An interesting solution in ECG classification has also been presented by Feng et al. [129]. Their approach involves building a structure analogous to a deep ANN, transferring the trained parameters to this new structure, and utilizing leaky integrate-and-fire (LiF) neurons for activation. This method not only matches but, in some cases, exceeds the accuracy of the original ANN model. This may lead to more efficient, accurate, and reliable systems for continuous cardiac health monitoring, potentially revolutionizing the way heart diseases are detected and monitored.
SNNs are, however, more computationally efficient, offering high computational speed and real-time performance. As a consequence, SNNs consume less energy, which translates into better use of hardware resources. However, their learning algorithms require improvement (in terms of accuracy gains) in comparison, for example, to the accuracies achieved by the application of CNNs [184]. In the case of SNNs, the requirement for suitable, sufficiently powerful hardware is also an important issue. SNNs also have a significant limitation in practical applications due to the smaller number of available tools, libraries, and frameworks in comparison to other neural network types, and they currently provide worse results in terms of accuracy compared to traditional approaches. To fully exploit the potential of SNNs, including detecting anomalies in biomedical signals and designing more detailed networks, the SNNs’ learning mechanisms and rules need to be improved. Another issue is connected to scalability, especially for large-scale heart image segmentation tasks.
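To illustrate the spiking behavior referred to above, the sketch below (our own, with hypothetical parameters) simulates a single leaky integrate-and-fire (LIF) neuron: the membrane potential integrates an input current, leaks toward its resting value, and emits a spike with a reset whenever it crosses a threshold.

import numpy as np

def simulate_lif(current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0, r=1.0):
    """Leaky integrate-and-fire neuron driven by an input current (one value per time step).
    Returns the membrane potential trace and the indices of the emitted spikes."""
    v = v_rest
    trace, spikes = [], []
    for t, i_t in enumerate(current):
        # Leak toward the resting potential and integrate the input current.
        v += dt / tau * (-(v - v_rest) + r * i_t)
        if v >= v_thresh:                 # threshold crossing -> spike and reset
            spikes.append(t)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

# Hypothetical input: a constant current standing in for an encoded ECG feature.
current = np.full(200, 1.5)
trace, spikes = simulate_lif(current)
print("number of spikes:", len(spikes), "first spike at step:", spikes[0] if spikes else None)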

5.7. Generative Adversarial Networks

Generative Adversarial Networks (GANs) are network architectures that consist of two core components: the generator and the discriminator. The generator is responsible for creating data that faithfully emulates real data (artificial data indistinguishable from real data) in order to deceive the discriminator. It initiates the process with an input of random noise, meticulously refining it through multiple layers of neural network architecture. Each layer integrated within the generator network fulfills a distinct role, harnessing techniques such as convolutional or fully connected layers. These layers operate cohesively to progressively metamorphose the initial noise input into an output that becomes increasingly indistinguishable from the target data. The discriminator is designed to distinguish artificial data (produced by the generator) from real data based on small nuances. Thus, the core concept of this solution is to train two networks that compete with each other. As a consequence, they are expected to produce more authentic data [185]. GANs seem to be promising computational tools to elevate patient care and improve clinical outcomes, in particular in the field of cardiology. The first and most important GAN application field is CVD diagnosis [135]. Retinal fundus images were used as input to the network. This approach led to the analysis of microstructural alterations within retinal blood vessels to pinpoint pivotal risk factors associated with CVD, such as Hypertensive Retinopathy (HR) and Cholesterol-Embolization Syndrome (CES). Moreover, the incorporation of a retrained ImageNet model for customized image classification further bolstered predictive accuracy. Furthermore, Chen et al. [136] demonstrated the potential of GANs in automating the precise segmentation of the left atrium (LA) and atrial scars in late gadolinium-enhanced cardiac magnetic resonance (LGE CMR) images. The quantification of atrial scars, distinguished by substantial volume disparities, necessitated a departure from traditional two-phase segmentation methods. To surmount this hurdle, JAS-GAN, an intercascade Generative Adversarial Network, was introduced to autonomously and accurately segment unbalanced atrial targets within LGE CMR images. Thus, an adaptive attention cascade and adversarial regularization culminate in simultaneous and precise segmentation of both the LA and atrial scars. This solution provides some insight into clinical applications in the treatment of patients with atrial fibrillation (AF), underscoring the indispensable role of GANs in the realm of medical imaging tasks. The transformative potential of GANs to enhance dynamic CT angiography derived from CT perfusion data has been shown in studies by Wu et al. [186]. Vessel-GAN, characterized as an explainable Generative Adversarial Network, allows for standalone coronary CT angiography. Additionally, automated atherosclerosis screening from coronary CT angiography (CCTA) by harnessing the capabilities of Generative Adversarial Networks (GANs) was developed by Laidi et al. [187]. GANs help to address the conundrum of limited positive images within the test dataset. Zhang et al. [137] concentrated on the precise segmentation of ventricles within MRI scans. Their work recognized the difficulties posed by unclear contrast, blurred boundaries, and noise inherent in these images. Pushing development further, Decourt and Duong [138] addressed the essential task of left ventricle segmentation in pediatric MRI scans.
They introduced DT-GAN, a GAN approach that uses semi-supervised semantic segmentation to reduce the reliance on large annotated datasets. Their innovative GAN loss function and methodology enhanced segmentation accuracy, particularly for boundary pixels, showing promise for automated left ventricle segmentation in cardiac MRI scans. Diller et al. [139] explored the potential application of Progressive Generative Adversarial Networks (PG-GAN) to generate synthetic cardiac MRI images for congenital heart disease research. This approach both addresses data privacy concerns and yields segmentation results comparable to those achieved with direct patient data, showcasing the potential of PG-GANs in generating realistic cardiac MRI images for rare cardiac conditions.
GANs have shown exceptional proficiency in handling complex and varied cardiac datasets. They generate highly realistic images, aiding training and research, particularly where access to real patient data is limited. GANs are instrumental in enlarging existing datasets and creating diverse and extensive data for training more accurate and robust diagnostic models. In addition to image generation, GANs are adept at image-to-image translation tasks, a significant feature in medical imaging [188]. They can transform MRI images into CT scans, offering different perspectives of the same anatomical structure without needing multiple imaging modalities. This is particularly beneficial in scenarios where certain imaging equipment might be unavailable. However, the main disadvantages of GANs are the complex training required, which does not always lead to the hoped-for results, a tendency to overfit, and high computational costs. Moreover, GANs are difficult to interpret, which is of key importance in medicine, especially in cardiology.
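A minimal sketch (our own, with toy one-dimensional "data" rather than cardiac images) of the adversarial training loop described above: the discriminator learns to separate real from generated samples while the generator learns to fool it; the architectures and hyperparameters are hypothetical.

import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, data_dim = 8, 16

generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Toy stand-in for real data (e.g., flattened image patches): Gaussian around 2.0.
    return torch.randn(n, data_dim) + 2.0

for step in range(200):
    # Discriminator update: push real samples toward label 1 and fakes toward label 0.
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim)).detach()
    d_loss = bce(discriminator(real), torch.ones(real.size(0), 1)) + \
             bce(discriminator(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator output 1 for generated samples.
    fake = generator(torch.randn(64, latent_dim))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("final losses:", float(d_loss), float(g_loss))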

5.8. Graph Neural Networks

When data are represented in non-Euclidean space in the form of graphs, they can be understood in terms of vertices (i.e., objects) and the edges connecting them, and the concept of Graph Neural Networks (GNNs) can be applied [189]. In this type of neural network, all relations are expressed as those between the nodes and edges of the graph. These networks are designed to handle graph data, which form a critical aspect in medical fields, especially when the intricate relationships and connections between data points are essential for accurate diagnosis and health condition analysis. This principle of operation is useful in medical imaging, especially in neuroimaging and molecular imaging, where understanding complex relationships is crucial [128,190]. In the field of cardiology, GNNs have been effectively employed in several key areas. They have been used in the classification of polar maps in cardiac perfusion imaging, a critical technique for assessing heart muscle activity and blood flow. Another significant application of GNNs in cardiology is the estimation of the left ventricular ejection fraction in echocardiography. This measurement is vital for evaluating heart health, specifically in assessing the volume of blood the left ventricle pumps out with each contraction [140]. This allows for more accurate analyses through an understanding of the intricate graph structures of the heart’s imagery. GNNs are also being utilized in analyzing CT/MRI scans, where they can be used to interpret the relationships and structures within the scan, providing detailed insights into various conditions and helping in diagnosis and treatment planning [142]. GNNs have also been applied in predicting ventricular arrhythmia and segmenting cardiac fibrosis based on MRI data [191]: a two-stage deep learning network for segmenting the left ventricle myocardium and fibrosis in Late Gadolinium Enhanced (LGE) CMR images achieved high Dice scores and surpassed previous methods. This approach may provide potential improvements in ventricular arrhythmia treatment and SCD risk assessment. Lu et al. [143] proposed Spatio-Temporal Graph Convolutional Networks (ST-GCNs) to diagnose cardiac conditions, namely by understanding and quantifying left ventricular (LV) motion in cardiac MR cine images. Another GNN application field is that of the automated anatomical labeling of coronary arteries, which addresses the variability of human anatomy [144]. This approach was based on a Conditional Partial-Residual Graph Convolutional Network (CPR-GCN), a combination of 3D CNNs and LSTMs. Fan Huang et al. [145] focused on predicting coronary artery disease (CAD) from CT scans using vascular biomarkers derived from fundus photographs through a GNN. This method showed that specific retinal vascular biomarkers, such as arterial width and fractal dimensions, were significantly associated with adverse CAD-RADS scores. Simultaneously, Gao et al. [146] tackled the automation of coronary artery analysis using Coronary Computed Tomography Angiography (CCTA). This crucial analysis assists clinicians in diagnosing and evaluating CAD.
Deep learning models are used for centerline extraction and lumen segmentation of coronary arteries. One of the components, a CNNTracker, traced the coronary artery centerline, while a Vascular Graph Convolutional Network (VGCN) achieved precise lumen segmentation. This method included an iterative refinement process alternating between the CNNTracker and the VGCN. It provided a high level of accuracy in the analysis of patients’ CCTA data, particularly in key arteries such as the right coronary artery (RCA) and the left coronary artery (LCA), as well as in X-ray coronary angiography (XCA) data. In another study, a GNN-based method for comorbidity-aware chest radiograph screening was developed [147]. It allows the screening of cardiac, thoracic, and pulmonary conditions to be enhanced, and in this way significantly improves screening performance over standard ensemble techniques. Another interesting study introduced the Non-linear Regression Convolutional Encoder-Decoder (NRCED), a framework designed to map multivariate inputs to multivariate outputs [192]. This framework was specifically applied to the reconstruction of 12-lead surface ECG from intracardiac electrograms (EGMs) and vice versa. The study analyzed the features learned by the model, utilizing them to create a diagnostic tool for identifying atypical and diseased heartbeats. The resulting Receiver Operating Characteristic (ROC) curve had an area under the curve (AUC) of 0.98, indicating excellent discrimination between the two classes. This approach may have significant potential for improving cardiac patient monitoring and diagnostics, ultimately enhancing healthcare outcomes.
GNNs provide a powerful tool for understanding and interpreting complex data structures, such as those found in medical image processing, where data often form complex networks. One of the key strengths of GNNs is their adaptability to varying input sizes and structures, an essential feature in medical imaging where patient data can differ greatly. This specialized architecture sets GNNs apart in their ability to handle data that is inherently interconnected, such as neurological networks or molecular structures. It is also worth stressing that GNNs were created for tasks that cannot be effectively solved by other types of networks operating on input data in Euclidean space. However, GNNs are difficult to interpret, and their computational cost is also a crucial parameter. Here, QNNs may provide some insight, while GAs can effectively help in the optimization of the input parameters to neural networks.
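A minimal sketch (our own, with a hypothetical five-node graph) of the message-passing idea underlying the GNNs discussed above, using the standard graph-convolution propagation rule H' = act(D^-1/2 (A + I) D^-1/2 H W): each node updates its features by aggregating information from its neighbors.

import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(adj, features, weights):
    """One graph-convolution step: normalize the adjacency matrix (with self-loops),
    aggregate neighbor features, then apply a linear map and a ReLU."""
    a_hat = adj + np.eye(adj.shape[0])              # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ features
    return np.maximum(0.0, propagated @ weights)

# Hypothetical graph of 5 nodes (e.g., coronary-artery segments) with 4 features each.
adjacency = np.array([[0, 1, 0, 0, 1],
                      [1, 0, 1, 0, 0],
                      [0, 1, 0, 1, 0],
                      [0, 0, 1, 0, 1],
                      [1, 0, 0, 1, 0]], dtype=float)
features = rng.normal(size=(5, 4))
weights = rng.normal(size=(4, 8))

hidden = gcn_layer(adjacency, features, weights)
print("updated node features shape:", hidden.shape)   # (5, 8)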

5.9. Transformers

One further type of neural network that has recently come into focus in the field of medicine is the transformer. Transformers learn rules from context by tracking the relations between elements of the data. Originally, they were networks used for natural language processing (NLP). Their effectiveness in these tasks resulted in the development of transformers for vision-related tasks, such as the Detection Transformer (DETR) [193], the Swin Transformer [61], the Vision Transformer (ViT) [194], and the Data-Efficient Image Transformer (DeiT) [194]. The DETR is dedicated to object detection, a task that otherwise involves manual analytical processes, and it uses a CNN to learn 2D representations of the input data (images). In turn, the ViT converts the input into a series of fixed-size, non-overlapping patches and treats each of them as a token. Because the spatial arrangement of the pixels is lost during tokenization, each token is given a positional encoding of the corresponding part of the image to provide spatial information. However, ViTs require large training datasets. On the other hand, DeiTs also provide high accuracy in the case of small training datasets, while Swin Transformers allow the cost of calculations to be reduced. They process an image divided into overlapping areas, representing tokens at multiple scales within a hierarchical structure using a shifted window (local self-attention). The transformer principle of operation is based on the self-attention mechanism. This enables the network to decide on the importance (i.e., weight) of different parts of the input data for the prediction. This may be beneficial for the evaluation of the relationships between different regions in medical images. For example, the majority of AI-based MRI analysis is performed employing CNNs; however, this introduces a limitation, namely the lack of long-range dependency modeling.
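The following minimal sketch (our own, with hypothetical token embeddings) implements the scaled dot-product self-attention at the core of the transformer: each token's query is compared with all keys, the resulting softmax weights decide how much every other token contributes, and the output is the weighted sum of the values.

import numpy as np

rng = np.random.default_rng(0)

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention: softmax(Q K^T / sqrt(d)) V."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                      # token-to-token relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # attention weights per token
    return weights @ v

# Hypothetical sequence of 9 image-patch tokens with 16-dimensional embeddings.
tokens = rng.normal(size=(9, 16))
w_q, w_k, w_v = (rng.normal(scale=0.1, size=(16, 16)) for _ in range(3))
out = self_attention(tokens, w_q, w_k, w_v)
print("attended token representations shape:", out.shape)   # (9, 16)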
The application of transformer networks allows for a deeper understanding of cardiac function, which aids in refining diagnostic methods and improving treatment strategies. For example, Jungiewicz et al. focused on stenosis detection in coronary arteries, comparing different variants of the Inception Network with the ViT [113]. They analyzed small fragments from coronary angiography videos, highlighting the role of dataset configuration in model performance. A key innovation in their approach is the use of Sharpness-Aware Minimization (SAM) alongside Vision Transformers, which enhances the accuracy and reliability of stenosis detection. They also employed Explainable AI techniques to understand the differences in classification performance between the models. Their findings indicate that while CNNs generally outperform transformer-based architectures, the gap narrows significantly when SAM is added to the ViT; in some measures, the SAM-ViT model even surpasses the other models. This demonstrates that the ViT can be effectively applied to coronary angiography diagnostics. Zhang et al. [114] presented a Topological Transformer Network (TTN) for automated coronary artery branch labeling in Cardiac CT Angiography (CCTA). The TTN, inspired by the success of transformers in sequence data analysis, treats vessel branch labeling as a sequence labeling problem. It introduces a unique topological encoding to represent the spatial positions of vessel segments within the arterial tree, enhancing classification accuracy. The network also includes a segment-depth loss function to address the class imbalance between primary and secondary branches. The effectiveness of the TTN is demonstrated on CCTA scans, where it outperforms existing methods in overall branch labeling and side branch identification; it represents the first transformer-based vessel branch labeling method in the field. The integration of this method into computer-aided diagnosis systems can enhance the generation of cardiovascular disease diagnosis reports, thereby improving patient outcomes in cardiac care. Additionally, Liao et al. [195] proposed a novel approach for left ventricle (LV) segmentation in echocardiography using pure transformer models. They developed two models, one combining the Swin Transformer with K-Net and another utilizing Segformer, evaluated on the EchoNet-Dynamic dataset. These models excel in segmenting challenging cardiac regions, such as the valve area, and in separating the left ventricle from the left atrium, particularly in difficult samples. This work fully utilizes the capabilities of the transformer architecture for LV segmentation, moving beyond traditional methods and showcasing the potential of transformers in clinical applications. While the study currently focuses on static frames without automated calculation of the left ventricular ejection fraction (LVEF), the researchers plan to extend these models to echocardiographic videos in future work. This represents a significant advancement in medical imaging, particularly in cardiac echocardiography, demonstrating the powerful applications of TNNs in healthcare technology. Going further, Ahn et al. [196] introduced the Co-Attention Spatial Transformer Network (CA-STN) for unsupervised motion tracking in 3D echocardiography.
This approach significantly enhances the detection and analysis of myocardial ischemia and infarction by tracking wall-motion abnormalities in the left ventricle. The core innovation is the integration of a co-attention mechanism within the Spatial Transformer Network (STN), which improves feature extraction between frames, yielding smoother motion fields and enhanced interpretability in noisy 3D echocardiography images. Additionally, a novel temporal regularization term guides the motion of the left ventricle, producing smooth and realistic cardiac displacement paths. The CA-STN outperforms traditional methods that rely on heavy regularization functions, marking a new standard in cardiac motion tracking. Strain analysis using the co-attention STN aligns with matched SPECT perfusion maps, illustrating the clinical utility of 3D echocardiography for localizing and quantifying myocardial strain following ischemic injury. This study contributes a novel tool for cardiac imaging and opens new possibilities for the early detection of and intervention in myocardial injuries. In another advancement, Fazry et al. [197] developed an approach using hierarchical Vision Transformers to estimate the cardiac ejection fraction from echocardiogram videos. Addressing the variability in ejection fraction assessment among different observers, this method does not require prior segmentation of the left ventricle, making it more efficient. The team's evaluations on the EchoNet-Dynamic dataset show enhanced accuracy and efficiency compared to state-of-the-art methods, demonstrating the potential of TNNs in cardiac function assessment. The public availability of their source code fosters further innovation in the field. Ning et al. [124] proposed Efficient Multi-Scale Vision Transformers (EMVTs) for coronary artery calcium (CAC) segmentation (CAC-EMVT). This approach addresses the segmentation of CAC, which often has fuzzy boundaries and an inconsistent appearance. The CAC-EMVT effectively models both short- and long-range dependencies using a combination of local and global branches. Its three distinct modules, Key Factor Sampling (KFS), Non-Local Sparse Context Fusion (NSCF), and Non-Local Multi-Scale Context Aggregation (NMCA), enhance segmentation accuracy. Tested on CT scans from CVD patients, the CAC-EMVT shows significant improvements over existing methods in accuracy and reliability, representing a significant step forward in the detection and analysis of coronary artery calcium. On the other hand, Han et al. [123] developed a method for detecting coronary artery stenosis in X-ray angiography (XRA) images. Their hybrid architecture, integrating transformer networks with CNNs, captures the spatio-temporal nuances of XRA sequences. Proposal-Shifted Spatio-Temporal Tokenization (PSSTT) within the transformer module tokenizes spatio-temporal elements of the XRA sequences, which are then processed by the transformer-based feature aggregation (TFA) network. Erwan et al. [121] introduced a new method for segmenting cardiac infarction in delayed-enhancement MRI, tackling the challenge of differentiating between healthy and infarcted myocardial tissue. Their approach, aimed at enhancing the quantitative evaluation of myocardial infarction using Late Gadolinium Enhancement cardiac MRI (LGE-MRI), employs a two-stage methodology.
Initially, a dedicated 2D U-Net generates a probability map of the healthy myocardium, which guides the accurate localization of infarcted areas. Then, a U-Net transformer network refines this segmentation by combining the probability map with the original image. An adapted loss function addresses the limitations of the U-Net in segmenting infarcted regions, significantly improving accuracy. Similarly, Ding et al. [118] developed an approach for segmenting and classifying myocardial fibrosis in DE-MRI scans. Addressing the complex process of categorizing fibrotic tissue, their self-supervised myocardial histology segmentation algorithm employs a Siamese system for multi-scale representation. A key feature is the integration of an end-to-end method using a transformer model for detecting myocardial fibrosis tissue. This model combines a Pre-LN Transformer with a Multi-Scale Transformer (MST) backbone and a joint regression cost to accurately determine the distances between predicted blocks and labels. The method significantly improves performance metrics, establishing its effectiveness and reliability in segmenting and classifying myocardial fibrosis. In turn, Upendra et al. [198] proposed a hybrid ViT-based architecture for deformable image registration of 3D cine cardiac MR images. This approach consistently estimates cardiac motion by capturing the optical flow representation between consecutive 3D volumes of a 4D cine cardiac MRI dataset. Experiments on the Heart Disease UCI Dataset, hosted on Kaggle, demonstrate superior results in deformable image registration compared to traditional methods. This advancement showcases the potential of Vision Transformers in enhancing the accuracy and reliability of cardiac function assessment, representing a major stride in cardiac imaging technology.
Thus, transformer-based approaches to cardiological data segmentation offer advantages such as global context modeling, parallel processing, attention mechanisms, transfer learning, and interpretability for cardiac image segmentation. However, tokenizing the input into patches can discard fine-grained spatial detail, which may cause some important information to be missed and make the resulting segmentation inaccurate, especially for tasks requiring the precise localization of anatomical structures in heart images. Like CNNs and the YOLO algorithm, this approach requires a large amount of good-quality data and significant computational resources. Careful hyperparameter tuning and regularization techniques can mitigate these disadvantages, but they potentially increase the complexity of the training process.

5.10. Quantum Neural Networks

Recently, some work has also been devoted to the development of quantum neural networks (QNNs), which are based on the principles of quantum mechanics [199,200]. These may have huge potential to speed up calculations and to reduce the associated computational costs. This approach can be developed in two ways with respect to the segmentation of medical images. The first is the use of quantum circuits to train classical neural networks, and the second is the design and training of fully quantum networks, as proposed by Mathur et al. [160]. Indeed, Shahwar et al. [201] showed the potential of QNNs in Alzheimer's disease detection, and Ullah et al. [97] proposed a quantum version of the Fully Convolutional Neural Network (FCNN) applied to the classification of ischemic heart disease, which achieved a prediction accuracy of over 80 percent. However, the approach based on quantum neural networks requires further improvement. When it comes to interventional practice, QNNs have potential for stenosis detection in X-ray coronary angiography [202], and they can also be applied to selecting medicines for patients with high accuracy [203,204]. Thus, QNNs may also contribute to the reduction in computational cost.

5.11. Evaluation Metrics in Medical Image Segmentation

Artificial Intelligence has the chance to become a high-precision tool in medicine. However, there are certain technical risks (TERs) connected with the application of AI in clinical and educational practice, including algorithm performance, legal regulation, and safety. For example, it is known that small, even imperceptible changes in the training dataset can drastically change the results of predictions, which in medicine can have very serious consequences and can adversely affect the learning process. The key to evaluating AI adaptability is to use an appropriate metric to assess the correctness and accuracy of different kinds of forecasts, including clinical prognoses, and to ensure that this metric is understood by users [205]. For example, overfitting to the training dataset will reduce the accuracy of the algorithm on the testing dataset. Other crucial factors that influence the quality and efficiency of AI-based algorithms include data availability. However, even if developers do not have a sufficient quantity and quality of data, cross-validation can be applied [206]; this procedure helps to limit overfitting by training and validating on different subsets of the data. Thus, the choice of a proper evaluation metric depends on the specific task type. For binary segmentation, the Dice coefficient (also called the Sørensen–Dice index) and the Intersection over Union (IoU, also known as the Jaccard index) are the most commonly used medical image segmentation metrics. However, in the field of cardiology, accuracy is of particular concern (see Table 3).
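As a concrete illustration of the two overlap metrics mentioned above, the short NumPy sketch below computes the Dice coefficient and the IoU for a pair of binary segmentation masks; the toy masks are illustrative assumptions. Both metrics range from 0 (no overlap) to 1 (perfect overlap), and for the same pair of masks the Dice score is always at least as large as the IoU.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|P ∩ T| / (|P| + |T|) for binary masks."""
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """IoU (Jaccard index) = |P ∩ T| / |P ∪ T| for binary masks."""
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Toy 2D masks standing in for a predicted and a reference left-ventricle segmentation.
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
gt = np.zeros((8, 8), dtype=bool);   gt[3:7, 3:7] = True
print(f"Dice = {dice_coefficient(pred, gt):.3f}, IoU = {iou(pred, gt):.3f}")
```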

6. Data and Data Security Issues Connected with the Metaverse and Artificial Intelligence

One of the key issues in algorithm development is data. It is known that the accuracy of AI-supported diagnosis depends largely on the quality and quantity of the input data. Thus, errors in predictions made by AI in the field of medicine may be caused by biased input data. Providing diverse and representative inputs can help mitigate bias by ensuring a more balanced representation of different demographic groups, medical conditions, and health practices. In the medical field, there are many public databases available containing sample medical input data. For example, the databases concerning cardiological data include PhysioNet (such as the PTB Diagnostic ECG Database), the MIMIC Database, the Automated Cardiac Diagnosis Challenge (ACDC) Dataset, the Heart Disease UCI Dataset, the European Society of Cardiology (ESC) Heart Failure Registry, the ISCHEMIA Dataset, the CATCH Database, and the UK Biobank CMR Imaging Dataset. However, since many publicly available medical databases contain errors [224], results based on them may be of low reliability, and new verified medical databases have been prepared [225]. Another data-related issue concerns imbalanced data [226,227]. To mitigate data imbalance, which may be particularly important in cardiology, techniques such as resampling (oversampling or undersampling), cost-sensitive approaches (assigning different weights to classes during classification), transfer learning (applying models pre-trained on large, balanced datasets), ensemble methods (combining multiple models trained on different data or with different weights to improve overall accuracy), and data augmentation have been proposed [226,227]; two of these techniques are sketched below. Also, the choice of AI-based algorithm influences how strongly dataset imbalance affects the results; for example, decision trees and Support Vector Machines (SVMs) are less sensitive to imbalanced data than other algorithms. However, imbalanced data remains a challenge in cardiology as well as in other fields of medicine. In addition to improving the medical data collection process by taking into account the imbalance of certain classes, it is also important to develop more effective techniques tailored specifically to the complexity of imbalanced medical datasets. One interesting solution for the efficient remote collection, description, and verification of data by clinicians is the general cloud annotation system proposed by Pawłowska et al. [225].
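To illustrate two of the imbalance-handling techniques listed above, cost-sensitive weighting and random oversampling, the minimal NumPy sketch below computes inverse-frequency class weights and resamples a toy label vector; the class labels and counts are illustrative assumptions.

```python
import numpy as np

def inverse_frequency_weights(labels: np.ndarray) -> dict:
    """Cost-sensitive weighting: give rarer classes proportionally larger weights."""
    classes, counts = np.unique(labels, return_counts=True)
    weights = counts.sum() / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

def random_oversample(features: np.ndarray, labels: np.ndarray, seed: int = 0):
    """Oversample minority classes with replacement up to the size of the majority class."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    idx = np.concatenate([rng.choice(np.where(labels == c)[0], size=target, replace=True)
                          for c in classes])
    return features[idx], labels[idx]

# Toy ECG-style dataset: 90 "normal" beats vs. 10 "arrhythmia" beats.
y = np.array([0] * 90 + [1] * 10)
X = np.random.rand(len(y), 4)
print(inverse_frequency_weights(y))   # the rare class receives a ~9x larger weight
X_bal, y_bal = random_oversample(X, y)
print(np.bincount(y_bal))             # both classes now have 90 samples
```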
In the era of the rapid development of Artificial Intelligence, the Metaverse, and Digital Twins, a natural question arises concerning data security, which is particularly crucial in medicine. Given this, new approaches such as AI Trust, Risk, and Security Management (AI TRiSM) have been developed [228]. This framework enables an AI-based system to be evaluated according to criteria such as compliance, fairness, reliability, and the preservation of data privacy. Data security in an AI-based system is quite complicated; it includes the security of system design, model testing, applications, regulatory compliance, infrastructure, auditing, and ethics review. Moreover, medical data can become the subject of both passive and active attacks. The way medical systems such as implantable medical devices and Internet-connected wearable devices are implemented makes them more vulnerable to attacks than other systems; as many as half of all attacks may be carried out in this sector [229]. Given that human health or even life is at stake, these systems must be specially protected. Data must be subject to authentication, availability, integrity, non-repudiation, and confidentiality. To minimize the leakage of sensitive patient information, anonymization is applied so that such information cannot be recovered from the patient's medical record, including patient identification. To this end, the most common method involves pseudonymization techniques, such as replacing direct identifiers with pseudonymous codes, as illustrated in the sketch below. When it comes to the security of medical information systems, many solutions involve blockchain technologies combined with robust encryption and authentication methods [230]. Also, the idea of storing and distributing sensitive information among a number of cloud nodes, combined with encryption and based on quantum Deep Neural Networks, has been introduced [231]; this approach has turned out to provide a better detection rate than other commonly applied methods. Moreover, all medical systems that process and store sensitive personal information must be developed and used in compliance with the European General Data Protection Regulation (GDPR) and the US California Consumer Privacy Act (CCPA) [232]. However, differences between regulations in Europe and the USA have effectively hampered the exchange of sensitive patient information without appropriate institutional safeguards [233]. In the case of patient privacy, XR-based systems also provide good solutions [234]. Furthermore, AI can also be applied to tracking attacks and locating their source.
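As a simple illustration of the pseudonymization technique mentioned above (replacing direct identifiers with pseudonymous codes), the Python sketch below derives a keyed HMAC-SHA256 code for each patient identifier. The secret key, identifier format, and truncation length are illustrative assumptions; in a real system, key management and the residual re-identification risk would have to be assessed against the applicable regulations (e.g., the GDPR).

```python
import hmac
import hashlib

def pseudonymize(patient_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed pseudonymous code (HMAC-SHA256).

    Without the secret key, the original identifier cannot be recovered from the code,
    yet the same patient always maps to the same pseudonym, preserving record linkage.
    """
    digest = hmac.new(secret_key, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()
    return digest[:16]  # truncated for readability; keep the full digest in practice

# Hypothetical record whose direct identifier is replaced before further processing.
key = b"replace-with-a-securely-managed-key"
record = {"patient_id": "PL-1985-00042", "diagnosis": "I25.1", "lvef": 38}
record["patient_id"] = pseudonymize(record["patient_id"], key)
print(record)
```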

7. Ethical Issues Connected with the Metaverse and Artificial Intelligence

Within medicine as a whole, the Metaverse is conceived of in two ways: as a general space, where the behavior of medical practitioners in their use of technologies such as XR, VR, AR, and MR could be subject to the same ethical issues and open to the same ethical threats as found in other virtual landscapes such as online gaming [235], and as a specific space, where such technologies are used to develop and deploy treatment practices for specific diseases, such as those covered by cardiology. In the former, ethical issues such as moral equivalence arise. For example, Grinbaum [236] asks whether behavior in the Metaverse should be judged according to real-world values, or whether there are aspects of virtual behavior that are new or overlapping: how should actors who assume different personae be treated when those personae act badly in different ways? Then, Radovanović and Tomić [237] point out that actors entering the Metaverse may come with a set of religious beliefs that may color their judgments and actions, such as how they set up a virtual church. These beliefs will likely also develop as the actor's life in the virtual world unfolds, and very general codes of conduct have been proposed for this case. Indeed, early attempts have been made to establish ethical codes of conduct as practical guidelines for humans operating avatars in the Metaverse. The practical code of ethics proposed by Heider [238] is aimed at humans operating in the Metaverse through their avatar(s). The code has seven points: show respect, tell the truth, do no harm, show concern, work for good, demonstrate tolerance, and respect privacy. However, only the first two of these seven specifically deal with behavior that is qualitatively different from that found in the physical world; the others have general applicability. They highlight two important aspects of difference: a human may create or control more than one avatar with different appearances, and those avatars may assume different roles. In the latter conception of the Metaverse as a specific space, physicians will use particular Metaverse technologies to deal with certain diseases and conditions, where there is a contrast between a tool or suite of tools designed to do a job [239] and a general environment in which virtual agents may live [240,241]. However, in both conceptions of the Metaverse, similar ethical issues are found: human characteristics inform behavior, attitudes, and usage. The application of Digital Twins (DTs) has become central in medical practice overall, with applications ranging from clinical trials to treatment interventions, medical education, and scenario modeling [242]. In all of these cases, the two basic notions of the Metaverse, an environment and a set of tools, are core, along with issues of representation. Indeed, Braun [243] raises the difficult issue of how a person is to be represented by their DT in terms of accuracy (in the various types of representation) and control (who will have the authority to control the DT, and how?). Safeguarding issues also arise with regard to children [243] and, by extension, vulnerable groups of people.
Clinical DTs in cardiology provide the opportunity to generate abundant quantities of data. Armeni [242] points out that DTs comprise virtual models of qualitatively different types of real-world objects and phenomena, ranging from people to devices, environmental features, and institutions (such as clinics), connected by means of data streams and used not only during treatment interventions but also in clinical trial design and medical education. All of these are relevant to cardiology. For example, the data produced during treatment (whether rapid, as in emergency interventions, or slow, as in cardiac monitoring) might be used not only to inform patient choice but also to make financial decisions and even set insurance premiums: heart issues comprise an important part of any medical insurance form. However, the very notion of a clearly defined dataset comes into question when the data journey is considered: data travels from one location to another and may be adapted along the way [244]. Thus, the identification and description of cardiac-related data may prove ethically contentious. Moreover, issues such as security and privacy, data characteristics (selection, collection, categorization, and use), ownership, and rights of use and access all come into play, and all comprise risk points. Unless they are carefully defined, the ability to make insurance or even liability claims (in the case of potential clinical malpractice) may be limited and problematic. Overall, DT models will become progressively more detailed and accurate, such as those of the human heart described by Viola et al. [239] and those reviewed by Coorey et al. [245,246]. However, ethical dimensions are currently lacking in such models. These increasingly important dimensions include not only the ownership and control of data but also the influence of the DT on its human counterpart and the rights of stakeholders. A truly effective DT will need four parts: the physical, the virtual, the data connection, and the ethical. Indeed, neglect of the ethical side could be said to be the biggest threat to the development of DT technology, especially in the vital area of cardiology. In this perspective, there is a clear need for DTs to be explainable through a framework of Explainable AI (XAI) and trustworthy through one of Trustworthy AI (TAI) [246]. If development in these areas is lacking, progress on the technical side will be held up by a series of ethical objections. These factors can be accounted for at some level, whether integrated into specific systems, in local policies, or in national and international regulations [247].

8. Discussion and Conclusions

Extended Reality provides a tool for the 3D representation of the structures of the heart [53]. Although head-mounted displays (HMDs) offer great opportunities in clinical cardiology, they are not without drawbacks. Some users complain of health problems after the long-term use of HMDs, including dizziness, nausea, and even blurred vision (symptoms accompanying motion sickness) [248], although this cybersickness is not experienced by all users [249]. Also, the application of AI-based algorithms to six-degrees-of-freedom motion support in a VR simulator may help alleviate cybersickness [250]. Hardware improvements such as higher frame rates and better headset tracking have made it possible to partially counteract these symptoms, but further development of HMDs is needed. On the other hand, Daling et al. [251] showed that XR-based training is not necessarily better than, but at least as good as, traditional methods. Another significant limitation of HMDs is the size of the field of view, which is much smaller than the natural human field of view. In turn, the implementation of XR-based solutions in clinical practice is also limited by their high cost, in particular in lower–middle-income countries (LMICs) [252]. In Figure 3, the percentage share of the latest published research (2020–2024) in AI, XR, and DTs by territory is presented; it can be seen that LMICs have a low participation level. However, the spread of XR as a tool seems to be inevitable, especially in medical practice.
Since HDTs can generate specific data, they can also predict the outcome of a surgical procedure, the progression of a disease, or the performance of an implanted device. HDTs combined with AI and XR-based technology also unlock the potential for the sustainable development of the healthcare ecosystem. In cardiology, a personalized computational model of the heart is crucial for better understanding patient-specific pathophysiology and supporting clinical decision-making, but the development of a heart DT requires the fitting of various types of parameters, including cardiac electromechanical and cardiovascular hemodynamic parameters [253]. Indeed, the implementation of the heart Digital Twin is a complex process that has not yet been fully accomplished [239]. Thus, further research should concentrate on combining electrical cardiac modeling with Artificial Intelligence-based algorithms to build a Digital Twin of the heart for different clinical applications, ranging from those used by the general practitioner to those used by the highly specialized electrophysiologist.
Furthermore, AI-based algorithms have recently been used successfully in medical imaging, in particular in the field of cardiology. Consequently, a list of AI models used in cardiology, including interventional cardiology, grouped by application field, is presented in Table 2; it shows that CNNs and transformers are the most frequently used solutions in the field of cardiology, while the GA is commonly used to optimize the parameter space. Table 3 provides a summary of the types of neural networks used in cardiology, taking into account their accuracy and application area, and it also includes information on the relationship between each neural network and XR- and DT-based technology. It became evident that only a few studies combine these fields, and then only at the level of certain concepts and perspectives [27].
While there have been, and continue to be, great technical advances in the specific technologies of HDTs, ethical concerns are generally not systematically connected to them. Rather, the ethical discussion tends to take place in parallel with the technical discussion, whereas a more robust model would integrate the two. Ethical issues, ranging from the control and ownership of data to the social values embedded in technical decisions and human behavior in the Metaverse, need to be addressed at every step along the way. If this does not happen, progress may be delayed, and in some cases even blocked, by ethical disputes, thus holding back valuable DT applications in cardiology and other areas of medicine.
The development of HDTs in pre-clinical imaging offers numerous benefits, including improved outcomes, shorter development timelines, and lower costs. The application of HDTs will become increasingly popular in future healthcare services and has huge potential to become central to mainstream medicine. However, it requires the development of both models and algorithms for the analysis of medical data. On the other hand, in cardiology, the interpretation of ECGs currently relies on experts, requires training and clinical expertise, and is thus subject to considerable inter- and intra-clinician variability. Additionally, the diagnostic value of the standard 12-lead ECG is limited by the difficulty of linking the ECG data directly to cardiac anatomy and by the prevalence of technical errors such as incorrect electrode positioning. Therefore, the combination of AI, XR, and HDT technology in cardiology, with its potential to avoid technical errors, can serve as a universal methodology for predicting health status and improving outcome quality.
Moreover, an important element in improving the effectiveness of cardiology data segmentation is the collection of as much reliable, good-quality data as possible while keeping class balance in mind. This process should take into account input data diversity, which helps AI models generalize better to unseen cases and improves their reliability. It is also necessary to provide diverse and representative input data whenever possible, which can help mitigate bias in AI-based algorithms. Another data-related issue is the application of an open data policy following UNESCO guidelines (especially for scientific applications and research) so that more efficient AI algorithms can be developed in the area of cardiology. Moreover, compliance with ethical and bioethical standards in the collection, storage, and use of medical data is essential for the development of reliable AI systems in cardiology. As a consequence, the establishment of standards for the quality, integrity, and interoperability of the cardiological data used in AI applications, as well as the development of protocols for the validation and regulation of AI-based algorithms, is of high importance. It is also necessary to develop guidelines on how to integrate Artificial Intelligence technologies into cardiology workflows, as well as strategies for managing the risks associated with the implementation of AI-based technologies in cardiology. Finally, it should be the responsibility of the cardiology community to ensure the control of results and feedback loops by implementing mechanisms for monitoring the performance of AI algorithms in cardiology and collecting feedback from clinicians and patients.

Author Contributions

Conceptualization, A.P., K.P. and M.P.; methodology, A.P. and Z.R.; formal analysis, A.P., K.P. and Z.R.; investigation, A.P., K.P., M.P. and Z.R.; resources, A.P., K.P. and Z.R.; data curation, A.P. and Z.R.; ethics, M.P.; writing—original draft preparation, A.P., Z.R. and M.P.; writing—review and editing, A.P., K.P., M.P. and Z.R.; visualization, A.P.; supervision, A.P. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported and financed by the National Centre for Research and Development under Grant Lider No. LIDER/17/0064/L-11/19/NCBR/2020. The research was also partially supported by the National Centre for Research and Development (research grant Infostrateg I/0042/2021-00).

Data Availability Statement

Data sharing is not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kamel Boulos, M.N.; Zhang, P. Digital Twins: From Personalised Medicine to Precision Public Health. J. Pers. Med. 2021, 11, 745. [Google Scholar] [CrossRef] [PubMed]
  2. Sabri, A.; Sönmez, U. Digital Twin in Health Care. In Digital Twin Driven Intelligent Systems and Emerging Metaverse; Enis, K., Aydin, Ö., Cali, Ü., Challenger, M., Eds.; Springer Nature: Singapore, 2023; pp. 209–231. [Google Scholar] [CrossRef]
  3. Venkatesh, K.P.; Brito, G.; Kamel Boulos, M.N. Health Digital Twins in Life Science and Health Care Innovation. Annu. Rev. Pharmacol. Toxicol. 2024, 64, 159–170. [Google Scholar] [CrossRef] [PubMed]
  4. Tortorella, G.L.; Fogliatto, F.S.; Mac Cawley Vergara, A.; Vassolo, R.; Sawhney, R. Healthcare 4.0: Trends, Challenges and Research Directions. Prod. Plan. Control 2020, 31, 1245–1260. [Google Scholar] [CrossRef]
  5. Duque, R.; Bravo, C.; Bringas, S.; Postigo, D. Leveraging a Visual Language for the Awareness-Based Design of Interaction Requirements in Digital Twins. Future Gener. Comput. Syst. 2024, 153, 41–51. [Google Scholar] [CrossRef]
  6. Logeswaran, A.; Munsch, C.; Chong, Y.J.; Ralph, N.; McCrossnan, J. The Role of Extended Reality Technology in Healthcare Education: Towards a Learner-Centred Approach. Future Health J. 2021, 8, e79–e84. [Google Scholar] [CrossRef]
  7. Castille, J.; Remy, S.; Vermue, H.; Victor, J. The Use of Virtual Reality to Assess the Bony Landmarks at the Knee Joint—The Role of Imaging Modality and the Assessor’s Experience. Knee 2024, 46, 41–51. [Google Scholar] [CrossRef]
  8. Marrone, S.; Costanzo, R.; Campisi, B.M.; Avallone, C.; Buscemi, F.; Cusimano, L.M.; Bonosi, L.; Brunasso, L.; Scalia, G.; Iacopino, D.G.; et al. The Role of Extended Reality in Eloquent Area Lesions: A Systematic Review. Neurosurg. Focus 2023, 56, E16. [Google Scholar] [CrossRef]
  9. Cai, Y.; Wu, X.; Cao, Q.; Zhang, X.; Pregowska, A.; Osial, M.; Dolega-Dolegowski, D.; Kolecki, R.; Proniewska, K. Information and Communication Technologies Combined with Mixed Reality as Supporting Tools in Medical Education. Electronics 2022, 11, 3778. [Google Scholar] [CrossRef]
  10. Garlinska, M.; Osial, M.; Proniewska, K.; Pregowska, A. The Influence of Emerging Technologies on Distance Education. Electronics 2023, 12, 1550. [Google Scholar] [CrossRef]
  11. Hasan, M.M.; Islam, M.U.; Sadeq, M.J.; Fung, W.-K.; Uddin, J. Review on the Evaluation and Development of Artificial Intelligence for COVID-19 Containment. Sensors 2023, 23, 527. [Google Scholar] [CrossRef] [PubMed]
  12. Hosny, K.M.; Elshoura, D.; Mohamed, E.R.; Vrochidou, E.; Papakostas, G.A. Deep Learning and Optimization-Based Methods for Skin Lesions Segmentation: A Review. IEEE Access 2023, 11, 85467–85488. [Google Scholar] [CrossRef]
  13. Young, M.R.; Abrams, N.; Ghosh, S.; Rinaudo, J.A.S.; Marquez, G.; Srivastava, S. Prediagnostic Image Data, Artificial Intelligence, and Pancreatic Cancer: A Tell-Tale Sign to Early Detection. Pancreas 2020, 49, 882–886. [Google Scholar] [CrossRef]
  14. Khayyam, H.; Madani, A.; Kafieh, R.; Hekmatnia, A.; Hameed, B.S.; Krishnan, U.M. Artificial Intelligence-Driven Diagnosis of Pancreatic Cancer. Cancers 2022, 14, 5382. [Google Scholar] [CrossRef]
  15. Granata, V.; Fusco, R.; Setola, S.V.; Galdiero, R.; Maggialetti, N.; Silvestro, L.; De Bellis, M.; Di Girolamo, E.; Grazzini, G.; Chiti, G.; et al. Risk Assessment and Pancreatic Cancer: Diagnostic Management and Artificial Intelligence. Cancers 2023, 15, 351. [Google Scholar] [CrossRef]
  16. Rethlefsen, M.L.; Kirtley, S.; Waffenschmidt, S.; Ayala, A.P.; Moher, D.; Page, M.J.; Koffel, J.B.; Group, P.-S. PRISMA-S: An Extension to the PRISMA Statement for Reporting Literature Searches in Systematic Reviews. Syst. Rev. 2021, 10, 39. [Google Scholar] [CrossRef] [PubMed]
  17. Garg, A.; Mou, J.; Su, S.; Gao, L. Reconfigurable Battery Systems: Challenges and Safety Solutions Using Intelligent System Framework Based on Digital Twins. IET Collab. Intell. Manuf. 2022, 4, 232–248. [Google Scholar] [CrossRef]
  18. Subasi, A.; Subasi, M.E. Digital Twins in Healthcare and Biomedicine. In Artificial Intelligence, Big Data, Blockchain and 5G for the Digital Transformation of the Healthcare Industry: A Movement Toward More Resilient and Inclusive Societies; Academic Press: London, UK, 2024; pp. 365–401. [Google Scholar] [CrossRef]
  19. Banerjee, S.; Jesubalan, N.G.; Kulkarni, A.; Agarwal, A.; Rathore, A.S. Developing cyber-physical system and digital twin for smart manufacturing: Methodology and case study of continuous clarification. J. Ind. Inf. Integr. 2024, 38, 100577. [Google Scholar] [CrossRef]
  20. Capriulo, M.; Pizzolla, I.; Briganti, G. On the Use of Patient-Reported Measures in Digital Medicine to Increase Healthcare Resilience. In Artificial Intelligence, Big Data, Blockchain and 5G for the Digital Transformation of the Healthcare Industry: A Movement Toward More Resilient and Inclusive Societies; Academic Press: London, UK, 2024; pp. 41–66. [Google Scholar] [CrossRef]
  21. Jarrett, A.M.; Hormuth, D.A.; Wu, C.; Kazerouni, A.S.; Ekrut, D.A.; Virostko, J.; Sorace, A.G.; DiCarlo, J.C.; Kowalski, J.; Patt, D.; et al. Evaluating Patient-Specific Neoadjuvant Regimens for Breast Cancer via a Mathematical Model Constrained by Quantitative Magnetic Resonance Imaging Data. Neoplasia 2020, 22, 820–830. [Google Scholar] [CrossRef] [PubMed]
  22. Scheufele, K.; Subramanian, S.; Biros, G. Fully Automatic Calibration of Tumor-Growth Models Using a Single MpMRI Scan. IEEE Trans. Med. Imaging 2020, 40, 193–204. [Google Scholar] [CrossRef]
  23. Lorenzo, G.; Heiselman, J.S.; Liss, M.A.; Miga, M.I.; Gomez, H.; Yankeelov, T.E.; Reali, A.; Hughes, T.J.R.; Lorenzo, G. Patient-Specific Computational Forecasting of Prostate Cancer Growth during Active Surveillance Using an Imaging-Informed Biomechanistic Model. arXiv 2023, arXiv:2310.00060. [Google Scholar]
  24. Wu, C.; Jarrett, A.M.; Zhou, Z.; Elshafeey, N.; Adrada, B.E.; Candelaria, R.P.; Mohamed, R.M.M.; Boge, M.; Huo, L.; White, J.B.; et al. MRI-Based Digital Models Forecast Patient-Specific Treatment Responses to Neoadjuvant Chemotherapy in Triple-Negative Breast Cancer. Cancer Res. 2022, 82, 3394–3404. [Google Scholar] [CrossRef] [PubMed]
  25. Azzaoui, A.E.; Kim, T.W.; Loia, V.; Park, J.H. Blockchain-Based Secure Digital Twin Framework for Smart Healthy City. In Advanced Multimedia and Ubiquitous Engineering; Park, J.J., Loia, V., Pan, Y., Sung, Y., Eds.; Springer: Singapore, 2021; pp. 107–113. [Google Scholar]
  26. Croatti, A.; Gabellini, M.; Montagna, S.; Ricci, A. On the Integration of Agents and Digital Twins in Healthcare. J. Med. Syst. 2020, 44, 161. [Google Scholar] [CrossRef] [PubMed]
  27. Corral-Acero, J.; Margara, F.; Marciniak, M.; Rodero, C.; Loncaric, F.; Feng, Y.; Gilbert, A.; Fernandes, J.F.; Bukhari, H.A.; Wajdan, A.; et al. The “Digital Twin” to Enable the Vision of Precision Cardiology. Eur. Heart J. 2020, 41, 4556–4564B. [Google Scholar] [CrossRef] [PubMed]
  28. Gerach, T.; Schuler, S.; Fröhlich, J.; Lindner, L.; Kovacheva, E.; Moss, R.; Wülfers, E.M.; Seemann, G.; Wieners, C.; Loewe, A. Mathematics Electro-Mechanical Whole-Heart Digital Twins: A Fully Coupled Multi-Physics Approach. Mathematics 2021, 9, 1247. [Google Scholar] [CrossRef]
  29. Laita, N.; Rosales, R.M.; Wu, M.; Claus, P.; Janssens, S.; Martínez, M.Á.; Doblaré, M.; Peña, E. On Modeling the in Vivo Ventricular Passive Mechanical Behavior from in Vitro Experimental Properties in Porcine Hearts. Comput. Struct. 2024, 292, 107241. [Google Scholar] [CrossRef]
  30. Chen, Y.C.; Zheng, G.; Donner, D.G.; Wright, D.K.; Greenwood, J.P.; Marwick, T.H.; McMullen, J.R. Cardiovascular Magnetic Resonance Imaging for Sequential Assessment of Cardiac Fibrosis in Mice: Technical Advancements and Reverse Translation. Am. J. Physiol. Heart Circ. Physiol. 2024, 326, H1–H24. [Google Scholar] [CrossRef] [PubMed]
  31. Kouzehkonan, V.G.; Paul Finn, J. Artificial Intelligence in Cardiac MRI. In Intelligence-Based Cardiology and Cardiac Surgery: Artificial Intelligence and Human Cognition in Cardiovascular Medicine; Academic Press: London, UK, 2024; pp. 191–199. [Google Scholar] [CrossRef]
  32. Brent Woodland, M.; Ong, J.; Zaman, N.; Hirzallah, M.; Waisberg, E.; Masalkhi, M.; Kamran, S.A.; Lee, A.G.; Tavakkoli, A. Applications of Extended Reality in Spaceflight for Human Health and Performance. Acta Astronaut. 2024, 214, 748–756. [Google Scholar] [CrossRef]
  33. Schöne, B.; Kisker, J.; Lange, L.; Gruber, T.; Sylvester, S.; Osinsky, R. The Reality of Virtual Reality. Front. Psychol. 2023, 14, 1093014. [Google Scholar] [CrossRef]
  34. Chessa, M.; Van De Bruaene, A.; Farooqi, K.; Valverde, I.; Jung, C.; Votta, E.; Sturla, F.; Diller, G.P.; Brida, M.; Sun, Z.; et al. Three-Dimensional Printing, Holograms, Computational Modelling, and Artificial Intelligence for Adult Congenital Heart Disease Care: An Exciting Future. Eur. Heart J. 2022, 43, 2672–2684. [Google Scholar] [CrossRef]
  35. Willaert, W.I.M.; Aggarwal, R.; Van Herzeele, I.; Cheshire, N.J.; Vermassen, F.E. Recent Advancements in Medical Simulation: Patient-Specific Virtual Reality Simulation. World J. Surg. 2012, 36, 1703–1712. [Google Scholar] [CrossRef]
  36. Rad, A.A.; Vardanyan, R.; Lopuszko, A.; Alt, C.; Stoffels, I.; Schmack, B.; Ruhparwar, A.; Zhigalov, K.; Zubarevich, A.; Weymann, A. Virtual and Augmented Reality in Cardiac Surgery. Braz. J. Cardiovasc. Surg. 2022, 37, 123–127. [Google Scholar] [CrossRef]
  37. Iannotta, M.; Angelo d’Aiello, F.; Van De Bruaene, A.; Caruso, R.; Conte, G.; Ferrero, P.; Bassareo, P.P.; Pasqualin, G.; Chiarello, C.; Militaru, C.; et al. Modern Tools in Congenital Heart Disease Imaging and Procedure Planning: A European Survey. J. Cardiovasc. Med. 2024, 25, 76–87. [Google Scholar] [CrossRef]
  38. Gałeczka, M.; Smerdziński, S.; Tyc, F.; Fiszer, R. Virtual Reality for Transcatheter Procedure Planning in Congenital Heart Disease. Kardiol. Pol. 2023, 81, 1026–1027. [Google Scholar] [CrossRef] [PubMed]
  39. Priya, S.; La Russa, D.; Walling, A.; Goetz, S.; Hartig, T.; Khayat, A.; Gupta, P.; Nagpal, P.; Ashwath, R. “From Vision to Reality: Virtual Reality’s Impact on Baffle Planning in Congenital Heart Disease”. Pediatr. Cardiol. 2023, 45, 165–174. [Google Scholar] [CrossRef]
  40. Stepanenko, A.; Perez, L.M.; Ferre, J.C.; Ybarra Falcón, C.; Pérez de la Sota, E.; San Roman, J.A.; Redondo Diéguez, A.; Baladron, C. 3D Virtual Modelling, 3D Printing and Extended Reality for Planning of Implant Procedure of Short-Term and Long-Term Mechanical Circulatory Support Devices and Heart Transplantation. Front. Cardiovasc. Med. 2023, 10, 1191705. [Google Scholar] [CrossRef]
  41. Ghosh, R.M.; Mascio, C.E.; Rome, J.J.; Jolley, M.A.; Whitehead, K.K. Use of Virtual Reality for Hybrid Closure of Multiple Ventricular Septal Defects. JACC Case Rep. 2021, 3, 1579–1583. [Google Scholar] [CrossRef] [PubMed]
  42. Battal, A.; Taşdelen, A. The Use of Virtual Worlds in the Field of Education: A Bibliometric Study. Particip. Educ. Res. 2023, 10, 408–423. [Google Scholar] [CrossRef]
  43. Eves, J.; Sudarsanam, A.; Shalhoub, J.; Amiras, D. Augmented Reality in Vascular and Endovascular Surgery: Scoping Review. JMIR Serious Games 2022, 10, e34501. [Google Scholar] [CrossRef]
  44. Chahine, J.; Mascarenhas, L.; George, S.A.; Bartos, J.; Yannopoulos, D.; Raveendran, G.; Gurevich, S. Effects of a Mixed-Reality Headset on Procedural Outcomes in the Cardiac Catheterization Laboratory. Cardiovasc. Revascularization Med. 2022, 45, 3–8. [Google Scholar] [CrossRef]
  45. Ghlichi Moghaddam, N.; Namazinia, M.; Hajiabadi, F.; Mazlum, S.R. The Efficacy of Phase I Cardiac Rehabilitation Training Based on Augmented Reality on the Self-Efficacy of Patients Undergoing Coronary Artery Bypass Graft Surgery: A Randomized Clinical Trial. BMC Sports Sci. Med. Rehabil. 2023, 15, 156. [Google Scholar] [CrossRef]
  46. Vernemmen, I.; Van Steenkiste, G.; Hauspie, S.; De Lange, L.; Buschmann, E.; Schauvliege, S.; Van den Broeck, W.; Decloedt, A.; Vanderperren, K.; van Loon, G. Development of a Three-Dimensional Computer Model of the Equine Heart Using a Polyurethane Casting Technique and in Vivo Contrast-Enhanced Computed Tomography. J. Vet. Cardiol. 2024, 51, 72–85. [Google Scholar] [CrossRef]
  47. Alonso-Felipe, M.; Aguiar-Pérez, J.M.; Pérez-Juárez, M.Á.; Baladrón, C.; Peral-Oliveira, J.; Amat-Santos, I.J. Application of Mixed Reality to Ultrasound-Guided Femoral Arterial Cannulation During Real-Time Practice in Cardiac Interventions. J. Health Inf. Res. 2023, 7, 527–541. [Google Scholar] [CrossRef]
  48. Bloom, D.; Colombo, J.N.; Miller, N.; Southworth, M.K.; Andrews, C.; Henry, A.; Orr, W.B.; Silva, J.R.; Avari Silva, J.N. Early Preclinical Experience of a Mixed Reality Ultrasound System with Active GUIDance for NEedle-Based Interventions: The GUIDE Study. Cardiovasc. Digit. Health J. 2022, 3, 232–240. [Google Scholar] [CrossRef]
  49. Syahputra, M.F.; Zanury, R.; Andayani, U.; Hardi, S.M. Heart Disease Simulation with Mixed Reality Technology. J. Phys. Conf. Ser. 2021, 1898, 012025. [Google Scholar] [CrossRef]
  50. Proniewska, K.; Khokhar, A.A.; Dudek, D. Advanced Imaging in Interventional Cardiology: Mixed Reality to Optimize Preprocedural Planning and Intraprocedural Monitoring. Kardiol. Pol. 2021, 79, 331–335. [Google Scholar] [CrossRef]
  51. Brun, H.; Bugge, R.A.B.; Suther, L.K.R.; Birkeland, S.; Kumar, R.; Pelanis, E.; Elle, O.J. Mixed Reality Holograms for Heart Surgery Planning: First User Experience in Congenital Heart Disease. Eur. Heart J. Cardiovasc. Imaging 2019, 20, 883–888. [Google Scholar] [CrossRef] [PubMed]
  52. Southworth, M.K.; Silva, J.R.; Silva, J.N.A. Use of Extended Realities in Cardiology. In Trends in Cardiovascular Medicine; Elsevier Inc.: Amsterdam, The Netherlands, 2020; pp. 143–148. [Google Scholar] [CrossRef]
  53. Salavitabar, A.; Zampi, J.D.; Thomas, C.; Zanaboni, D.; Les, A.; Lowery, R.; Yu, S.; Whiteside, W. Augmented Reality Visualization of 3D Rotational Angiography in Congenital Heart Disease: A Comparative Study to Standard Computer Visualization. Pediatr. Cardiol. 2023, 1–8. [Google Scholar] [CrossRef] [PubMed]
  54. Hemanth, J.D.; Kose, U.; Deperlioglu, O.; de Albuquerque, V.H.C. An Augmented Reality-Supported Mobile Application for Diagnosis of Heart Diseases. J. Supercomput. 2020, 76, 1242–1267. [Google Scholar] [CrossRef]
  55. Yhdego, H.; Kidane, N.; Mckenzie, F.; Audette, M. Development of Deep-Learning Models for a Hybrid Simulation of Auscultation Training on Standard Patients Using an ECG-Based Virtual Pathology Stethoscope. Simulation 2023, 99, 903–915. [Google Scholar] [CrossRef]
  56. Tahri, S.M.; Al-Thani, D.; Elshazly, M.B.; Al-Hijji, M. A Blueprint for an AI & AR-Based Eye Tracking System to Train Cardiology Professionals Better Interpret Electrocardiograms. In Persuasive Technology; Baghaei, N., Vassileva, J., Ali, R., Oyibo, K., Eds.; Springer International Publishing: Berlin/Heidelberg, Germany, 2022; pp. 221–229. [Google Scholar]
  57. Bamps, K.; De Buck, S.; Ector, J. Deep Learning Based Tracked X-Ray for Surgery Guidance. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2022, 10, 339–347. [Google Scholar] [CrossRef]
  58. Kieu, V.; Sumski, C.; Cohen, S.; Reinhardt, E.; Axelrod, D.M.; Handler, S.S. The Use of Virtual Reality Learning on Transition Education in Adolescents with Congenital Heart Disease. Pediatr. Cardiol. 2023, 44, 1856–1860. [Google Scholar] [CrossRef]
  59. Pham, J.; Kong, F.; James, D.L.; Marsden, A.L. Virtual shape-editing of patient-specific vascular models using Regularized Kelvinlets. IEEE Trans. Biomed. Eng. 2024, 1–14. [Google Scholar] [CrossRef] [PubMed]
  60. Skalidis, I.; Salihu, A.; Kachrimanidis, I.; Koliastasis, L.; Maurizi, N.; Dayer, N.; Muller, O.; Fournier, S.; Hamilos, M.; Skalidis, E. Meta-CathLab: A Paradigm Shift in Interventional Cardiology Within the Metaverse. Can. J. Cardiol. 2023, 39, 1549–1552. [Google Scholar] [CrossRef]
  61. Huang, J.; Ren, L.; Feng, L.; Yang, F.; Yang, L.; Yan, K. AI Empowered Virtual Reality Integrated Systems for Sleep Stage Classification and Quality Enhancement. IEEE Trans. Neural. Syst. Rehabil. Eng. 2022, 30, 1494–1503. [Google Scholar] [CrossRef]
  62. García Fierros, F.J.; Moreno Escobar, J.J.; Sepúlveda Cervantes, G.; Morales Matamoros, O.; Tejeida Padilla, R. VirtualCPR: Virtual Reality Mobile Application for Training in Cardiopulmonary Resuscitation Techniques. Sensors 2021, 21, 2504. [Google Scholar] [CrossRef] [PubMed]
  63. Fan, M.; Yang, X.; Ding, T.; Cao, Y.; Si, Q.; Bai, J.; Lin, Y.; Zhao, X. Application of Ultrasound Virtual Reality in the Diagnosis and Treatment of Cardiovascular Diseases. J. Health Eng. 2021, 9999654. [Google Scholar] [CrossRef]
  64. Mocan, B.; Mocan, M.; Fulea, M.; Murar, M.; Feier, H. Home-Based Robotic Upper Limbs Cardiac Telerehabilitation System. Int. J. Env. Res. Public Health 2022, 19, 11628. [Google Scholar] [CrossRef]
  65. Serfözö, P.D.; Sandkühler, R.; Blümke, B.; Matthisson, E.; Meier, J.; Odermatt, J.; Badertscher, P.; Sticherling, C.; Strebel, I.; Cattin, P.C.; et al. An Augmented Reality-Based Method to Assess Precordial Electrocardiogram Leads: A Feasibility Trial. Eur. Heart J. Digit. Health 2023, 4, 420–427. [Google Scholar] [CrossRef] [PubMed]
  66. Groninger, H.; Stewart, D.; Fisher, J.M.; Tefera, E.; Cowgill, J.; Mete, M. Virtual Reality for Pain Management in Advanced Heart Failure: A Randomized Controlled Study. Palliat. Med. 2021, 35, 2008–2016. [Google Scholar] [CrossRef]
  67. Pagano, T.P.; dos Santos, L.L.; Santos, V.R.; Sá, P.H.M.; da Bonfim, Y.S.; Paranhos, J.V.D.; Ortega, L.L.; Nascimento, L.F.S.; Santos, A.; Rönnau, M.M.; et al. Remote Heart Rate Prediction in Virtual Reality Head-Mounted Displays Using Machine Learning Techniques. Sensors 2022, 22, 9486. [Google Scholar] [CrossRef]
  68. Perrotta, A.; Alexandra Silva, P.; Martins, P.; Sainsbury, B.; Wilz, O.; Ren, J.; Green, M.; Fergie, M.; Rossa, C. Preoperative Virtual Reality Surgical Rehearsal of Renal Access during Percutaneous Nephrolithotomy: A Pilot Study. Electronics 2022, 11, 1562. [Google Scholar] [CrossRef]
  69. Lau, I.; Gupta, A.; Ihdayhid, A.; Sun, Z. Clinical Applications of Mixed Reality and 3D Printing in Congenital Heart Disease. Biomolecules 2022, 12, 1548. [Google Scholar] [CrossRef] [PubMed]
  70. Lopez-Espada, C.; Linares-Palomino, J. Mixed Reality: A Promising Technology for Therapeutic Patient Education. Vasa 2023, 52, 160–168. [Google Scholar] [CrossRef]
  71. El Ali, A.; Ney, R.; van Berlo, Z.M.C.; Cesar, P. Is That My Heartbeat? Measuring and Understanding Modality-Dependent Cardiac Interoception in Virtual Reality. IEEE Trans. Vis. Comput. Graph. 2023, 29, 4805–4815. [Google Scholar] [CrossRef]
  72. Chiang, P.; Zheng, J.; Yu, Y.; Mak, K.H.; Chui, C.K.; Cai, Y. A VR Simulator for Intracardiac Intervention. IEEE Comput. Graph. Appl. 2013, 33, 44–57. [Google Scholar] [CrossRef] [PubMed]
  73. Patel, N.; Costa, A.; Sanders, S.P.; Ezon, D. Stereoscopic Virtual Reality Does Not Improve Knowledge Acquisition of Congenital Heart Disease. Int. J. Cardiovasc. Imaging 2021, 37, 2283–2290. [Google Scholar] [CrossRef]
  74. Lim, T.R.; Wilson, H.C.; Axelrod, D.M.; Werho, D.K.; Handler, S.S.; Yu, S.; Afton, K.; Lowery, R.; Mullan, P.B.; Cooke, J.; et al. Virtual Reality Curriculum Increases Paediatric Residents’ Knowledge of CHDs. Cardiol. Young 2023, 33, 410–414. [Google Scholar] [CrossRef]
  75. O’Sullivan, D.M.; Foley, R.; Proctor, K.; Gallagher, S.; Deery, A.; Eidem, B.W.; McMahon, C.J. The Use of Virtual Reality Echocardiography in Medical Education. Pediatr. Cardiol. 2021, 42, 723–726. [Google Scholar] [CrossRef]
  76. Choi, S.; Nah, S.; Cho, Y.S.; Moon, I.; Lee, J.W.; Lee, C.A.; Moon, J.E.; Han, S. Accuracy of Visual Estimation of Ejection Fraction in Patients with Heart Failure Using Augmented Reality Glasses. Heart 2023, heartjnl-2023-323067. [Google Scholar] [CrossRef]
  77. Gladding, P.A.; Loader, S.; Smith, K.; Zarate, E.; Green, S.; Villas-Boas, S.; Shepherd, P.; Kakadiya, P.; Hewitt, W.; Thorstensen, E.; et al. Multiomics, Virtual Reality and Artificial Intelligence in Heart Failure. Future Cardiol 2021, 17, 1335–1347. [Google Scholar] [CrossRef]
  78. Boonstra, M.J.; Oostendorp, T.F.; Roudijk, R.W.; Kloosterman, M.; Asselbergs, F.W.; Loh, P.; Van Dam, P.M. Incorporating Structural Abnormalities in Equivalent Dipole Layer Based ECG Simulations. Front. Physiol. 2022, 13, 2690. [Google Scholar] [CrossRef] [PubMed]
  79. He, B.; Hu, W.; Zhang, K.; Yuan, S.; Han, X.; Su, C.; Zhao, J.; Wang, G.; Wang, G.; Zhang, L. Image Segmentation Algorithm of Lung Cancer Based on Neural Network Model. Expert Syst. 2021, 39, e12822. [Google Scholar] [CrossRef]
  80. Evans, L.M.; Sözümert, E.; Keenan, B.E.; Wood, C.E.; du Plessis, A. A Review of Image-Based Simulation Applications in High-Value Manufacturing. Arch. Comput. Methods Eng. 2023, 30, 1495–1552. [Google Scholar] [CrossRef]
  81. Arafin, P.; Billah, A.M.; Issa, A. Deep Learning-Based Concrete Defects Classification and Detection Using Semantic Segmentation. Struct. Health Monit. 2024, 23, 383–409. [Google Scholar] [CrossRef]
  82. Ye-Bin, M.; Choi, D.; Kwon, Y.; Kim, J.; Oh, T.H. ENInst: Enhancing Weakly-Supervised Low-Shot Instance Segmentation. Pattern. Recognit. 2024, 145, 109888. [Google Scholar] [CrossRef]
  83. Hong, F.; Kong, L.; Zhou, H.; Zhu, X.; Li, H.; Liu, Z. Unified 3D and 4D Panoptic Segmentation via Dynamic Shifting Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 1–16. [Google Scholar] [CrossRef] [PubMed]
  84. Rudnicka, Z.; Szczepanski, J.; Pregowska, A. Artificial Intelligence-Based Algorithms in Medical Image Scan Seg-Mentation and Intelligent Visual-Content Generation-a Concise over-View. Electronics 2024, 13, 746. [Google Scholar] [CrossRef]
  85. Sammani, A.; Bagheri, A.; Van Der Heijden, P.G.M.; Te Riele, A.S.J.M.; Baas, A.F.; Oosters, C.A.J.; Oberski, D.; Asselbergs, F.W. Automatic Multilabel Detection of ICD10 Codes in Dutch Cardiology Discharge Letters Using Neural Networks. NPJ Digit. Med. 2021, 4, 37. [Google Scholar] [CrossRef]
  86. Muscogiuri, G.; Van Assen, M.; Tesche, C.; De Cecco, C.N.; Chiesa, M.; Scafuri, S.; Guglielmo, M.; Baggiano, A.; Fusini, L.; Guaricci, A.I.; et al. Review Article Artificial Intelligence in Coronary Computed Tomography Angiography: From Anatomy to Prognosis. BioMed Res. Int. 2020, 2020, 6649410. [Google Scholar] [CrossRef]
  87. Yasmin, F.; Shah, S.M.I.; Naeem, A.; Shujauddin, S.M.; Jabeen, A.; Kazmi, S.; Siddiqui, S.A.; Kumar, P.; Salman, S.; Hassan, S.A.; et al. Artificial Intelligence in the Diagnosis and Detection of Heart Failure: The Past, Present, and Future. Rev. Cardiovasc. Med. 2021, 22, 1095–1113. [Google Scholar] [CrossRef]
  88. Samieiyeganeh, M.E.; Rahmat, R.W.; Khalid, F.B.; Kasmiran, K.B. An overview of deep learning techniques in echocardiography image segmentation. J. Theor. Appl. Inf. Technol. 2020, 98, 3561–3572. [Google Scholar]
  89. Wahlang, I.; Kumar Maji, A.; Saha, G.; Chakrabarti, P.; Jasinski, M.; Leonowicz, Z.; Jasinska, E.; Dimauro, G.; Bevilacqua, V.; Pecchia, L. Electronics Article. Electronics 2021, 10, 495. [Google Scholar] [CrossRef]
  90. Muraki, R.; Teramoto Id, A.; Sugimoto, K.; Sugimoto, K.; Yamada, A.; Watanabe, E. Automated Detection Scheme for Acute Myocardial Infarction Using Convolutional Neural Network and Long Short-Term Memory. PLoS ONE 2022, 17, e0264002. [Google Scholar] [CrossRef] [PubMed]
  91. Roy, S.S.; Hsu, C.H.; Samaran, A.; Goyal, R.; Pande, A.; Balas, V.E. Vessels Segmentation in Angiograms Using Convolutional Neural Network: A Deep Learning Based Approach. CMES Comput. Model. Eng. Sci. 2023, 136, 241–255. [Google Scholar] [CrossRef]
  92. Liu, J.; Yuan, G.; Yang, C.; Song, H.; Luo, L. An Interpretable CNN for the Segmentation of the Left Ventricle in Cardiac MRI by Real-Time Visualization. CMES Comput. Model. Eng. Sci. 2023, 135, 1571–1587. [Google Scholar] [CrossRef]
  93. Tandon, A. Artificial Intelligence in Pediatric and Congenital Cardiac Magnetic Resonance Imaging. In Intelligence-Based Cardiology and Cardiac Surgery: Artificial Intelligence and Human Cognition in Cardiovascular Medicine; Academic Press: London, UK, 2024; pp. 201–209. [Google Scholar] [CrossRef]
  94. Candemir, S.; White, R.D.; Demirer, M.; Gupta, V.; Bigelow, M.T.; Prevedello, L.M.; Erdal, B.S. Automated Coronary Artery Atherosclerosis Detection and Weakly Supervised Localization on Coronary CT Angiography with a Deep 3-Dimensional Convolutional Neural Network. Comput. Med. Imaging Graph. 2020, 83, 101721. [Google Scholar] [CrossRef]
  95. Singh, N.; Gunjan, V.K.; Shaik, F.; Roy, S. Detection of Cardio Vascular Abnormalities Using Gradient Descent Optimization and CNN. Health Technol. 2024, 14, 155–168. [Google Scholar] [CrossRef]
  96. Banerjee, D.; Dey, S.; Pal, A. An SNN Based ECG Classifier for Wearable Edge Devices. In Proceedings of the NeurIPS 2022 Workshop on Learning from Time Series for Health, New Orleans, LA, USA, 2 December 2022. [Google Scholar]
  97. Ullah, U.; García, A.; Jurado, O.; Diez Gonzalez, I.; Garcia-Zapirain, B. A Fully Connected Quantum Convolutional Neural Network for Classifying Ischemic Cardiopathy. IEEE Access 2022, 10, 134592–134605. [Google Scholar] [CrossRef]
  98. Çınar, A.; Tuncer, S.A. Classification of Normal Sinus Rhythm, Abnormal Arrhythmia and Congestive Heart Failure ECG Signals Using LSTM and Hybrid CNN-SVM Deep Neural Networks. Comput. Methods Biomech. Biomed. Engin. 2021, 24, 203–214. [Google Scholar] [CrossRef]
  99. Fradi, M.; Khriji, L.; Machhout, M. Real-Time Arrhythmia Heart Disease Detection System Using CNN Architecture Based Various Optimizers-Networks. Multimed. Tools Appl. 2022, 81, 41711–41732. [Google Scholar] [CrossRef]
  100. Rahul, J.; Sharma, L.D. Automatic Cardiac Arrhythmia Classification Based on Hybrid 1-D CNN and Bi-LSTM Model. Biocybern. Biomed. Eng. 2022, 42, 312–324. [Google Scholar] [CrossRef]
101. Ahmed, A.A.; Ali, W.; Abdullah, T.A.A.; Malebary, S.J. Classifying Cardiac Arrhythmia from ECG Signal Using 1D CNN Deep Learning Model. Mathematics 2023, 11, 562. [Google Scholar] [CrossRef]
  102. Eltrass, A.S.; Tayel, M.B.; Ammar, A.I. A New Automated CNN Deep Learning Approach for Identification of ECG Congestive Heart Failure and Arrhythmia Using Constant-Q Non-Stationary Gabor Transform. Biomed. Signal Process. Control 2021, 65, 102326. [Google Scholar] [CrossRef]
103. Zheng, Z.; Chen, Z.; Hu, F.; Zhu, J.; Tang, Q.; Liang, Y. An Automatic Diagnosis of Arrhythmias Using a Combination of CNN and LSTM Technology. Electronics 2020, 9, 121. [Google Scholar] [CrossRef]
  104. Cheng, J.; Zou, Q.; Zhao, Y. ECG Signal Classification Based on Deep CNN and BiLSTM. BMC Med. Inf. Decis. Mak. 2021, 21, 365. [Google Scholar] [CrossRef] [PubMed]
  105. Rawal, V.; Prajapati, P.; Darji, A. Hardware Implementation of 1D-CNN Architecture for ECG Arrhythmia Classification. Biomed. Signal Process. Control 2023, 85, 104865. [Google Scholar] [CrossRef]
  106. Zhang, Y.; Liu, S.; He, Z.; Zhang, Y.; Wang, C. A CNN Model for Cardiac Arrhythmias Classification Based on Individual ECG Signals. Cardiovasc. Eng. Technol. 2022, 13, 548–557. [Google Scholar] [CrossRef]
  107. Khozeimeh, F.; Sharifrazi, D.; Izadi, N.H.; Hassannataj Joloudari, J.; Shoeibi, A.; Alizadehsani, R.; Tartibi, M.; Hussain, S.; Sani, Z.A.; Khodatars, M.; et al. RF-CNN-F: Random Forest with Convolutional Neural Network Features for Coronary Artery Disease Diagnosis Based on Cardiac Magnetic Resonance. Sci. Rep. 2022, 12, 17. [Google Scholar] [CrossRef]
  108. Aslan, M.F.; Sabanci, K.; Durdu, A. A CNN-Based Novel Solution for Determining the Survival Status of Heart Failure Patients with Clinical Record Data: Numeric to Image. Biomed. Signal Process. Control 2021, 68, 102716. [Google Scholar] [CrossRef]
  109. Yoon, T.; Kang, D. Bimodal CNN for Cardiovascular Disease Classification by Co-Training ECG Grayscale Images and Scalograms. Sci. Rep. 2023, 13, 2937. [Google Scholar] [CrossRef]
  110. Sun, L.; Shang, D.; Wang, Z.; Jiang, J.; Tian, F.; Liang, J.; Shen, Z.; Liu, Y.; Zheng, J.; Wu, H.; et al. MSPAN: A Memristive Spike-Based Computing Engine With Adaptive Neuron for Edge Arrhythmia Detection. Front. Neurosci. 2021, 15, 761127. [Google Scholar] [CrossRef]
  111. Wang, J.; Zang, J.; Yao, S.; Zhang, Z.; Xue, C. Multiclassification for Heart Sound Signals under Multiple Networks and Multi-View Feature. Measurement 2024, 225, 114022. [Google Scholar] [CrossRef]
  112. Wang, L.-H.; Ding, L.-J.; Xie, C.-X.; Jiang, S.-Y.; Kuo, I.-C.; Wang, X.-K.; Gao, J.; Huang, P.-C.; Patricia, A.; Abu, A.R. Automated Classification Model With OTSU and CNN Method for Premature Ventricular Contraction Detection. IEEE Access 2021, 9, 156581–156591. [Google Scholar] [CrossRef]
  113. Jungiewicz, M.; Jastrzębski, P.; Wawryka, P.; Przystalski, K.; Sabatowski, K.; Bartuś, S. Vision Transformer in Stenosis Detection of Coronary Arteries. Expert Syst. Appl. 2023, 228, 120234. [Google Scholar] [CrossRef]
  114. Zhang, Y.; Luo, G.; Wang, W.; Cao, S.; Dong, S.; Yu, D.; Wang, X.; Wang, K. TTN: Topological Transformer Network for Automated Coronary Artery Branch Labeling in Cardiac CT Angiography. IEEE J. Transl. Eng. Health Med. 2023, 12, 129–139. [Google Scholar] [CrossRef] [PubMed]
115. Rao, S.; Li, Y.; Ramakrishnan, R.; Hassaine, A.; Canoy, D.; Cleland, J.; Lukasiewicz, T.; Salimi-Khorshidi, G.; Rahimi, K. An Explainable Transformer-Based Deep Learning Model for the Prediction of Incident Heart Failure. IEEE J. Biomed. Health Inf. 2022, 26, 3362–3372. [Google Scholar] [CrossRef] [PubMed]
  116. Wang, Y.; Zhang, W. A Dense RNN for Sequential Four-Chamber View Left Ventricle Wall Segmentation and Cardiac State Estimation. Front. Bioeng. Biotechnol. 2021, 9, 696227. [Google Scholar] [CrossRef]
  117. Ding, C.; Wang, S.; Jin, X.; Wang, Z.; Wang, J. A Novel Transformer-Based ECG Dimensionality Reduction Stacked Auto-Encoders for Arrhythmia Beat Detection. Med. Phys. 2023, 50, 5897–5912. [Google Scholar] [CrossRef]
  118. Ding, Y.; Xie, W.; Wong, K.K.L.; Liao, Z. DE-MRI Myocardial Fibrosis Segmentation and Classification Model Based on Multi-Scale Self-Supervision and Transformer. Comput. Methods Programs Biomed. 2022, 226, 107049. [Google Scholar] [CrossRef]
  119. Hu, R.; Chen, J.; Zhou, L. A Transformer-Based Deep Neural Network for Arrhythmia Detection Using Continuous ECG Signals. Comput. Biol. Med. 2022, 144, 105325. [Google Scholar] [CrossRef]
  120. Gaudilliere, P.L.; Sigurthorsdottir, H.; Aguet, C.; Van Zaen, J.; Lemay, M.; Delgado-Gonzalo, R. Generative Pre-Trained Transformer for Cardiac Abnormality Detection. Available online: https://physionet.org/content/mitdb/1.0.0/ (accessed on 20 February 2024).
  121. Lecesne, E.; Simon, A.; Garreau, M.; Barone-Rochette, G.; Fouard, C. Segmentation of Cardiac Infarction in Delayed-Enhancement MRI Using Probability Map and Transformers-Based Neural Networks. Comput. Methods Programs Biomed. 2023, 242, 107841. [Google Scholar] [CrossRef]
  122. Ahmadi, N.; Tsang, M.Y.; Gu, A.N.; Tsang, T.S.M.; Abolmaesumi, P. Transformer-Based Spatio-Temporal Analysis for Classification of Aortic Stenosis Severity from Echocardiography Cine Series. IEEE Trans. Med. Imaging 2024, 43, 366–376. [Google Scholar] [CrossRef] [PubMed]
  123. Han, T.; Ai, D.; Li, X.; Fan, J.; Song, H.; Wang, Y.; Yang, J. Coronary Artery Stenosis Detection via Proposal-Shifted Spatial-Temporal Transformer in X-Ray Angiography. Comput. Biol. Med. 2023, 153, 106546. [Google Scholar] [CrossRef] [PubMed]
  124. Ning, Y.; Zhang, S.; Xi, X.; Guo, J.; Liu, P.; Zhang, C. CAC-EMVT: Efficient Coronary Artery Calcium Segmentation with Multi-Scale Vision Transformers. In Proceedings of the 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Houston, TX, USA, 9–12 December 2021; pp. 1462–1467. [Google Scholar] [CrossRef]
  125. Deng, K.; Meng, Y.; Gao, D.; Bridge, J.; Shen, Y.; Lip, G.; Zhao, Y.; Zheng, Y. TransBridge: A Lightweight Transformer for Left Ventricle Segmentation in Echocardiography. In Simplifying Medical Ultrasound; Noble, J.A., Aylward, S., Grimwood, A., Min, Z., Lee, S.-L., Hu, Y., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 63–72. [Google Scholar]
  126. Alkhodari, M.; Kamarul Azman, S.; Hadjileontiadis, L.J.; Khandoker, A.H. Ensemble Transformer-Based Neural Networks Detect Heart Murmur in Phonocardiogram Recordings. In Proceedings of the 2022 Computing in Cardiology (CinC), Tampere, Finland, 4–7 September 2022. [Google Scholar]
  127. Meng, L.; Tan, W.; Ma, J.; Wang, R.; Yin, X.; Zhang, Y. Enhancing Dynamic ECG Heartbeat Classification with Lightweight Transformer Model. Artif. Intell. Med. 2022, 124, 102236. [Google Scholar] [CrossRef]
  128. Chu, M.; Wu, P.; Li, G.; Yang, W.; Gutiérrez-Chico, J.L.; Tu, S. Advances in Diagnosis, Therapy, and Prognosis of Coronary Artery Disease Powered by Deep Learning Algorithms. JACC Asia 2023, 3, 1–14. [Google Scholar] [CrossRef]
  129. Feng, Y.; Geng, S.; Chu, J.; Fu, Z.; Hong, S. Building and Training a Deep Spiking Neural Network for ECG Classification. Biomed. Signal Process. Control 2022, 77, 103749. [Google Scholar] [CrossRef]
  130. Yan, Z.; Zhou, J.; Wong, W.F. Energy Efficient ECG Classification with Spiking Neural Network. Biomed. Signal Process. Control 2021, 63, 102170. [Google Scholar] [CrossRef]
  131. Kovács, P.; Samiee, K. Arrhythmia Detection Using Spiking Variable Projection Neural Networks. In Proceedings of the 2022 Computing in Cardiology (CinC), Tampere, Finland, 4–7 September 2022; Volume 49. [Google Scholar] [CrossRef]
  132. Singhal, S.; Kumar, M. GSMD-SRST: Group Sparse Mode Decomposition and Superlet Transform Based Technique for Multi-Level Classification of Cardiac Arrhythmia. IEEE Sens. J. 2024. [Google Scholar] [CrossRef]
  133. Kiladze, M.R.; Lyakhova, U.A.; Lyakhov, P.A.; Nagornov, N.N.; Vahabi, M. Multimodal Neural Network for Recognition of Cardiac Arrhythmias Based on 12-Load Electrocardiogram Signals. IEEE Access 2023, 11, 133744–133754. [Google Scholar] [CrossRef]
134. Li, Z.; Calvet, L.E. Extraction of ECG Features with Spiking Neurons for Decreased Power Consumption in Embedded Devices. In Proceedings of the 2023 19th International Conference on Synthesis, Modeling, Analysis and Simulation Methods and Applications to Circuit Design (SMACD), Funchal, Portugal, 3–5 July 2023. [Google Scholar] [CrossRef]
  135. Revathi, T.K.; Sathiyabhama, B.; Sankar, S. Diagnosing Cardio Vascular Disease (CVD) Using Generative Adversarial Network (GAN) in Retinal Fundus Images. Ann. Rom. Soc. Cell Biol. 2021, 25, 2563–2572. Available online: http://annalsofrscb.ro (accessed on 20 February 2024).
  136. Chen, J.; Yang, G.; Khan, H.R.; Zhang, H.; Zhang, Y.; Zhao, S.; Mohiaddin, R.H.; Wong, T.; Firmin, D.N.; Keegan, J. JAS-GAN: Generative Adversarial Network Based Joint Atrium and Scar Segmentations on Unbalanced Atrial Targets. IEEE J. Biomed. Health Inf. 2021, 26, 103–114. [Google Scholar] [CrossRef]
  137. Zhang, Y.; Feng, J.; Guo, X.; Ren, Y. Comparative Analysis of U-Net and TLMDB GAN for the Cardiovascular Segmentation of the Ventricles in the Heart. Comput. Methods Programs Biomed. 2022, 215, 106614. [Google Scholar] [CrossRef] [PubMed]
  138. Decourt, C.; Duong, L. Semi-Supervised Generative Adversarial Networks for the Segmentation of the Left Ventricle in Pediatric MRI. Comput. Biol. Med. 2020, 123, 103884. [Google Scholar] [CrossRef] [PubMed]
  139. Diller, G.P.; Vahle, J.; Radke, R.; Vidal, M.L.B.; Fischer, A.J.; Bauer, U.M.M.; Sarikouch, S.; Berger, F.; Beerbaum, P.; Baumgartner, H.; et al. Utility of Deep Learning Networks for the Generation of Artificial Cardiac Magnetic Resonance Images in Congenital Heart Disease. BMC Med. Imaging 2020, 20, 113. [Google Scholar] [CrossRef] [PubMed]
  140. Rizwan, I.; Haque, I.; Neubert, J. Deep Learning Approaches to Biomedical Image Segmentation. Inf. Med. Unlocked 2020, 18, 100297. [Google Scholar] [CrossRef]
  141. Van Lieshout, F.E.; Klein, R.C.; Kolk, M.Z.; Van Geijtenbeek, K.; Vos, R.; Ruiperez-Campillo, S.; Feng, R.; Deb, B.; Ganesan, P.; Knops, R.; et al. Deep Learning for Ventricular Arrhythmia Prediction Using Fibrosis Segmentations on Cardiac MRI Data. In Proceedings of the 2022 Computing in Cardiology (CinC), Tampere, Finland, 4–7 September 2022. [Google Scholar] [CrossRef]
  142. Liu, X.; He, L.; Yan, J.; Huang, Y.; Wang, Y.; Lin, C.; Huang, Y.; Liu, X. A Neural Network for High-Precise and Well-Interpretable Electrocardiogram Classification. bioRxiv 2023. [Google Scholar] [CrossRef]
  143. Lu, P.; Bai, W.; Rueckert, D.; Noble, J.A. Modelling Cardiac Motion via Spatio-Temporal Graph Convolutional Networks to Boost the Diagnosis of Heart Conditions. In Statistical Atlases and Computational Models of the Heart; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  144. Yang, H.; Zhen, X.; Chi, Y.; Zhang, L.; Hua, X.-S. CPR-GCN: Conditional Partial-Residual Graph Convolutional Network in Automated Anatomical Labeling of Coronary Arteries. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
145. Huang, F.; Lian, J.; Ng, K.-S.; Shih, K.; Vardhanabhuti, V. Predicting CT-Based Coronary Artery Disease Using Vascular Biomarkers Derived from Fundus Photographs with a Graph Convolutional Neural Network. Diagnostics 2022, 12, 1390. [Google Scholar] [CrossRef]
  146. Gao, R.; Hou, Z.; Li, J.; Han, H.; Lu, B.; Zhou, S.K. Joint Coronary Centerline Extraction And Lumen Segmentation From Ccta Using Cnntracker And Vascular Graph Convolutional Network. In Proceedings of the 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), Nice, France, 13–16 April 2021; pp. 1897–1901. [Google Scholar]
  147. Chakravarty, A.; Sarkar, T.; Ghosh, N.; Sethuraman, R.; Sheet, D. Learning Decision Ensemble Using a Graph Neural Network for Comorbidity Aware Chest Radiograph Screening. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 1234–1237. [Google Scholar] [CrossRef]
  148. Reddy, G.T.; Praveen, M.; Reddy, K.; Lakshmanna, K.; Rajput, D.S.; Kaluri, R.; Gautam, S. Hybrid Genetic Algorithm and a Fuzzy Logic Classifier for Heart Disease Diagnosis. Evol. Intell. 2020, 13, 185–196. [Google Scholar] [CrossRef]
  149. Priyanka; Baranwal, N.; Singh, K.N.; Singh, A.K. YOLO-Based ROI Selection for Joint Encryption and Compression of Medical Images with Reconstruction through Super-Resolution Network. Future Gener. Comput. Syst. 2024, 150, 1–9. [Google Scholar] [CrossRef]
  150. Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. arXiv 2022, arXiv:2207.02696. [Google Scholar]
  151. Zhuang, Z.; Jin, P.; Joseph Raj, A.N.; Yuan, Y.; Zhuang, S. Automatic Segmentation of Left Ventricle in Echocardiography Based on YOLOv3 Model to Achieve Constraint and Positioning. Comput. Math. Methods Med. 2021, 2021, 3772129. [Google Scholar] [CrossRef] [PubMed]
  152. Alamelu, V.; Thilagamani, S. Lion Based Butterfly Optimization with Improved YOLO-v4 for Heart Disease Prediction Using IoMT. Inf. Technol. Control 2022, 51, 692–703. [Google Scholar] [CrossRef]
  153. Lee, S.; Xibin, J.; Lee, A.; Gil, H.W.; Kim, S.; Hong, M. Cardiac Detection Using YOLO-v5 with Data Preprocessing. In Proceedings of the 2022 International Conference on Computational Science and Computational Intelligence (CSCI 2022), Las Vegas, NV, USA, 14–16 December 2022. [Google Scholar] [CrossRef]
  154. Mortada, M.J.; Tomassini, S.; Anbar, H.; Morettini, M.; Burattini, L.; Sbrollini, A. Segmentation of Anatomical Structures of the Left Heart from Echocardiographic Images Using Deep Learning. Diagnostics 2023, 13, 1683. [Google Scholar] [CrossRef]
  155. Smirnov, D.; Pikunov, A.; Syunyaev, R.; Deviatiiarov, R.; Gusev, O.; Aras, K.; Gams, A.; Koppel, A.; Efimov, I.R. Correction: Genetic Algorithm-Based Personalized Models of Human Cardiac Action Potential. PLoS ONE 2020, 15, e0231695. [Google Scholar] [CrossRef]
  156. Kanwal, S.; Rashid, J.; Nisar, M.W.; Kim, J.; Hussain, A. An Effective Classification Algorithm for Heart Disease Prediction with Genetic Algorithm for Feature Selection. In Proceedings of the 2021 Mohammad Ali Jinnah University International Conference on Computing (MAJICC), Karachi, Pakistan, 15–17 July 2021. [Google Scholar] [CrossRef]
  157. Alizadehsani, R.; Roshanzamir, M.; Abdar, M.; Beykikhoshk, A.; Khosravi, A.; Nahavandi, S.; Plawiak, P.; Tan, R.S.; Acharya, U.R. Hybrid Genetic-Discretized Algorithm to Handle Data Uncertainty in Diagnosing Stenosis of Coronary Arteries. Expert Syst. 2020, 39, e12573. [Google Scholar] [CrossRef]
  158. Badano, L.P.; Keller, D.M.; Muraru, D.; Torlasco, C.; Parati, G. Artificial Intelligence and Cardiovascular Imaging: A Win-Win Combination. Anatol. J. Cardiol. 2020, 24, 214–223. [Google Scholar] [CrossRef] [PubMed]
  159. Souza Filho, E.M.; Fernandes, F.A.; Pereira, N.C.A.; Mesquita, C.T.; Gismondi, R.A. Ethics, Artificial Intelligence and Cardiology. Arq. Bras. De Cardiol. 2020, 115, 579–583. [Google Scholar] [CrossRef]
  160. Mathur, P.; Srivastava, S.; Xu, X.; Mehta, J.L. Artificial Intelligence, Machine Learning, and Cardiovascular Disease. Clin. Med. Insights Cardiol. 2020, 14, 1179546820927404. [Google Scholar] [CrossRef]
  161. Miller, D.D. Machine Intelligence in Cardiovascular Medicine. Cardiol. Rev. 2020, 28, 53–64. [Google Scholar] [CrossRef]
  162. Swathy, M.; Saruladha, K. A Comparative Study of Classification and Prediction of Cardio-Vascular Diseases (CVD) Using Machine Learning and Deep Learning Techniques. ICT Express 2021, 8, 109–116. [Google Scholar] [CrossRef]
  163. Salte, I.M.; Østvik, A.; Smistad, E.; Melichova, D.; Nguyen, T.M.; Karlsen, S.; Brunvand, H.; Haugaa, K.H.; Edvardsen, T.; Lovstakken, L.; et al. Artificial Intelligence for Automatic Measurement of Left Ventricular Strain in Echocardiography. JACC Cardiovasc. Imaging 2021, 14, 1918–1928. [Google Scholar] [CrossRef]
  164. Nithyakalyani, K.; Ramkumar, S.; Rajalakshmi, S.; Saravanan, K.A. Diagnosis of Cardiovascular Disorder by CT Images Using Machine Learning Technique. In Proceedings of the 2022 International Conference on Communication, Computing and Internet of Things (IC3IoT), Chennai, India, 10–11 March 2022; pp. 1–4. [Google Scholar] [CrossRef]
  165. Gao, Z.; Wang, L.; Soroushmehr, R.; Wood, A.; Gryak, J.; Nallamothu, B.; Najarian, K. Vessel Segmentation for X-Ray Coronary Angiography Using Ensemble Methods with Deep Learning and Filter-Based Features. BMC Med. Imaging 2022, 22, 10. [Google Scholar] [CrossRef] [PubMed]
  166. Galea, R.R.; Diosan, L.; Andreica, A.; Popa, L.; Manole, S.; Bálint, Z. Region-of-Interest-Based Cardiac Image Segmentation with Deep Learning. Appl. Sci. 2021, 11, 1965. [Google Scholar] [CrossRef]
  167. Tandon, A.; Mohan, N.; Jensen, C.; Burkhardt, B.E.U.; Gooty, V.; Castellanos, D.A.; McKenzie, P.L.; Zahr, R.A.; Bhattaru, A.; Abdulkarim, M.; et al. Retraining Convolutional Neural Networks for Specialized Cardiovascular Imaging Tasks: Lessons from Tetralogy of Fallot. Pediatr. Cardiol. 2021, 42, 578–589. [Google Scholar] [CrossRef] [PubMed]
  168. Stough, J.V.; Raghunath, S.; Zhang, X.; Pfeifer, J.M.; Fornwalt, B.K.; Haggerty, C.M. Left Ventricular and Atrial Segmentation of 2D Echocardiography with Convolutional Neural Networks. In Proceedings of the Medical Imaging 2020: Image Processing, Houston, TX, USA, 10 March 2020. [Google Scholar] [CrossRef]
  169. Sander, J.; de Vos, B.D.; Išgum, I. Automatic Segmentation with Detection of Local Segmentation Failures in Cardiac MRI. Sci. Rep. 2020, 10, 21769. [Google Scholar] [CrossRef] [PubMed]
  170. Masutani, E.M.; Bahrami, N.; Hsiao, A. Deep Learning Single-Frame and Multiframe Super-Resolution for Cardiac MRI. Radiology 2020, 295, 552–561. [Google Scholar] [CrossRef] [PubMed]
  171. O’Brien, H.; Whitaker, J.; Singh Sidhu, B.; Gould, J.; Kurzendorfer, T.; O’Neill, M.D.; Rajani, R.; Grigoryan, K.; Rinaldi, C.A.; Taylor, J.; et al. Automated Left Ventricle Ischemic Scar Detection in CT Using Deep Neural Networks. Front. Cardiovasc. Med. 2021, 8, 655252. [Google Scholar] [CrossRef]
  172. Lin, A.; Chen, B.; Xu, J.; Zhang, Z.; Lu, G. DS-TransUNet: Dual Swin Transformer U-Net for Medical Image Segmentation. IEEE Trans. Instrum. Meas. 2022, 71, 4005615. [Google Scholar] [CrossRef]
  173. Koresh, H.J.D.; Chacko, S.; Periyanayagi, M. A Modified Capsule Network Algorithm for Oct Corneal Image Segmentation. Pattern Recognit. Lett. 2021, 143, 104–112. [Google Scholar] [CrossRef]
  174. Khan, M.Z.; Gajendran, M.K.; Lee, Y.; Khan, M.A. Deep Neural Architectures for Medical Image Semantic Segmentation: Review. IEEE Access 2021, 9, 83002–83024. [Google Scholar] [CrossRef]
  175. Fischer, A.M.; Eid, M.; De Cecco, C.N.; Gulsun, M.A.; Van Assen, M.; Nance, J.W.; Sahbaee, P.; De Santis, D.; Bauer, M.J.; Jacobs, B.E.; et al. Accuracy of an Artificial Intelligence Deep Learning Algorithm Implementing a Recurrent Neural Network with Long Short-Term Memory for the Automated Detection of Calcified Plaques from Coronary Computed Tomography Angiography. J. Thorac. Imaging 2020, 35, S49–S57. [Google Scholar] [CrossRef]
  176. Lyu, Q.; Shan, H.; Xie, Y.; Kwan, A.C.; Otaki, Y.; Kuronuma, K.; Li, D.; Wang, G. Cine Cardiac MRI Motion Artifact Reduction Using a Recurrent Neural Network. IEEE Trans. Med. Imaging 2021, 40, 2170–2181. [Google Scholar] [CrossRef]
  177. Ammar, A.; Bouattane, O.; Youssfi, M. Automatic Spatio-Temporal Deep Learning-Based Approach for Cardiac Cine MRI Segmentation. In Networking, Intelligent Systems and Security; Mohamed, B.A., Teodorescu, H.-N.L., Mazri, T., Subashini, P., Boudhir, A.A., Eds.; Springer: Singapore, 2022; pp. 59–73. [Google Scholar]
  178. Lu, X.H.; Liu, A.; Fuh, S.C.; Lian, Y.; Guo, L.; Yang, Y.; Marelli, A.; Li, Y. Recurrent Disease Progression Networks for Modelling Risk Trajectory of Heart Failure. PLoS ONE 2021, 16, e0245177. [Google Scholar] [CrossRef] [PubMed]
  179. Fu, Q.; Dong, H. An Ensemble Unsupervised Spiking Neural Network for Objective Recognition. Neurocomputing 2021, 419, 47–58. [Google Scholar] [CrossRef]
  180. Rana, A.; Kim, K.K. A Novel Spiking Neural Network for ECG Signal Classification. J. Sens. Sci. Technol. 2021, 30, 20–24. [Google Scholar] [CrossRef]
  181. Shekhawat, D.; Chaudhary, D.; Kumar, A.; Kalwar, A.; Mishra, N.; Sharma, D. Binarized Spiking Neural Network Optimized with Momentum Search Algorithm for Fetal Arrhythmia Detection and Classification from ECG Signals. Biomed. Signal Process. Control 2024, 89, 105713. [Google Scholar] [CrossRef]
  182. Jiang, P.; Ergu, D.; Liu, F.; Cai, Y.; Ma, B. A Review of Yolo Algorithm Developments. Procedia Comput. Sci. 2022, 199, 1066–1073. [Google Scholar] [CrossRef]
  183. Yan, X.; Yan, Y.; Cao, M.; Xie, W.; O’Connor, S.; Lee, J.J.; Ho, M.H. Effectiveness of Virtual Reality Distraction Interventions to Reduce Dental Anxiety in Paediatric Patients: A Systematic Review and Meta-Analysis. J. Dent. 2023, 132, 104455. [Google Scholar] [CrossRef]
  184. Kim, Y.; Panda, P. Visual Explanations from Spiking Neural Networks Using Inter-Spike Intervals. Sci. Rep. 2021, 11, 19037. [Google Scholar] [CrossRef] [PubMed]
  185. Kazeminia, S.; Baur, C.; Kuijper, A.; van Ginneken, B.; Navab, N.; Albarqouni, S.; Mukhopadhyay, A. GANs for Medical Image Analysis. Artif. Intell. Med. 2020, 109, 101938. [Google Scholar] [CrossRef]
  186. Wu, C.; Zhang, H.; Chen, J.; Gao, Z.; Zhang, P.; Muhammad, K.; Del Ser, J. Vessel-GAN: Angiographic Reconstructions from Myocardial CT Perfusion with Explainable Generative Adversarial Networks. Future Gener. Comput. Syst. 2022, 130, 128–139. [Google Scholar] [CrossRef]
  187. Laidi, A.; Ammar, M.; El Habib Daho, M.; Mahmoudi, S. GAN Data Augmentation for Improved Automated Atherosclerosis Screening from Coronary CT Angiography. EAI Endorsed. Trans. Scalable. Inf. Syst. 2023, 10, e4. [Google Scholar] [CrossRef]
  188. Olender, M.L.; Nezami, F.R.; Athanasiou, L.S.; De La Torre Hernández, J.M.; Edelman, E.R. Translational Challenges for Synthetic Imaging in Cardiology. Eur. Heart J. Digit. Health 2021, 2, 559–560. [Google Scholar] [CrossRef]
  189. Wieneke, H.; Voigt, I. Principles of Artificial Intelligence and Its Application in Cardiovascular Medicine. Clin. Cardiol. 2023, 47, e24148. [Google Scholar] [CrossRef]
  190. Hasan Rafi, T.; Shubair, R.M.; Farhan, F.; Hoque, Z.; Mohd Quayyum, F. Recent Advances in Computer-Aided Medical Diagnosis Using Machine Learning Algorithms with Optimization Techniques. IEEE Access 2021, 9, 137847–137868. [Google Scholar] [CrossRef]
191. Liu, L.; Wolterink, J.M.; Brune, C.; Veldhuis, R.N.J. Anatomy-Aided Deep Learning for Medical Image Segmentation: A Review. Phys. Med. Biol. 2021, 66. [Google Scholar] [CrossRef]
  192. Banta, A.; Cosentino, R.; John, M.M.; Post, A.; Buchan, S.; Razavi, M.; Aazhang, B. Nonlinear Regression with a Convolutional Encoder-Decoder for Remote Monitoring of Surface Electrocardiograms. arXiv 2020, arXiv:2012.06003. [Google Scholar]
  193. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-End Object Detection with Transformers. Available online: https://github.com/facebookresearch/detr (accessed on 20 February 2024).
194. Jannatul Ferdous, G.; Akhter Sathi, K.; Hossain, A.; Moshiul Hoque, M.; Ali Akber Dewan, M. LCDEiT: A Linear Complexity Data-Efficient Image Transformer for MRI Brain Tumor Classification. IEEE Access 2023, 11, 20337–20350. [Google Scholar] [CrossRef]
  195. Henein, M.; Liao, M.; Lian, Y.; Yao, Y.; Chen, L.; Gao, F.; Xu, L.; Huang, X.; Feng, X.; Guo, S. Left Ventricle Segmentation in Echocardiography with Transformer. Diagnostics 2023, 13, 2365. [Google Scholar] [CrossRef]
  196. Ahn, S.S.; Ta, K.; Thorn, S.L.; Onofrey, J.A.; Melvinsdottir, I.H.; Lee, S.; Langdon, J.; Sinusas, A.J.; Duncan, J.S. Co-Attention Spatial Transformer Network for Unsupervised Motion Tracking and Cardiac Strain Analysis in 3D Echocardiography. Med. Image Anal. 2023, 84, 102711. [Google Scholar] [CrossRef]
  197. Fazry, L.; Haryono, A.; Nissa, N.K.; Sunarno; Hirzi, N.M.; Rachmadi, M.F.; Jatmiko, W. Hierarchical Vision Transformers for Cardiac Ejection Fraction Estimation. In Proceedings of the 2022 7th International Workshop on Big Data and Information Security (IWBIS), Depok, Indonesia, 1–3 October 2022; pp. 39–44. [Google Scholar] [CrossRef]
  198. Upendra, R.R.; Simon, R.; Shontz, S.M.; Linte, C.A. Deformable Image Registration Using Vision Transformers for Cardiac Motion Estimation from Cine Cardiac MRI Images. In Functional Imaging and Modeling of the Heart; Olivier, B., Clarysse, P., Duchateau, N., Ohayon, J., Viallon, M., Eds.; Springer Nature: Cham, Switzerland, 2023; pp. 375–383. [Google Scholar]
  199. Beer, K.; Bondarenko, D.; Farrelly, T.; Osborne, T.J.; Salzmann, R.; Scheiermann, D.; Wolf, R. Training Deep Quantum Neural Networks. Nat. Commun. 2020, 11, 808. [Google Scholar] [CrossRef] [PubMed]
  200. Landman, J.; Mathur, N.; Li, Y.Y.; Strahm, M.; Kazdaghli, S.; Prakash, A.; Kerenidis, I. Quantum Methods for Neural Networks and Application to Medical Image Classification. Quantum 2022, 6, 881. [Google Scholar] [CrossRef]
  201. Shahwar, T.; Zafar, J.; Almogren, A.; Zafar, H.; Rehman, A.U.; Shafiq, M.; Hamam, H. Automated Detection of Alzheimer’s via Hybrid Classical Quantum Neural Networks. Electronics 2022, 11, 721. [Google Scholar] [CrossRef]
  202. Ovalle-Magallanes, E.; Avina-Cervantes, J.G.; Cruz-Aceves, I.; Ruiz-Pinales, J. Hybrid Classical–Quantum Convolutional Neural Network for Stenosis Detection in X-Ray Coronary Angiography. Expert Syst. Appl. 2022, 189, 116112. [Google Scholar] [CrossRef]
  203. Kumar, A.; Choudhary, A.; Tiwari, A.; James, C.; Kumar, H.; Kumar Arora, P.; Akhtar Khan, S. An Investigation on Wear Characteristics of Additive Manufacturing Materials. Mater. Today Proc. 2021, 47, 3654–3660. [Google Scholar] [CrossRef]
  204. Belli, C.; Sagingalieva, A.; Kordzanganeh, M.; Kenbayev, N.; Kosichkina, D.; Tomashuk, T.; Melnikov, A. Hybrid Quantum Neural Network For Drug Response Prediction. Cancers 2023, 15, 2705. [Google Scholar] [CrossRef]
  205. Pregowska, A.; Perkins, M. Artificial Intelligence in Medical Education: Technology and Ethical Risk. Available online: https://ssrn.com/abstract=4643763 (accessed on 20 February 2024).
  206. Rastogi, D.; Johri, P.; Tiwari, V.; Elngar, A.A. Multi-Class Classification of Brain Tumour Magnetic Resonance Images Using Multi-Branch Network with Inception Block and Five-Fold Cross Validation Deep Learning Framework. Biomed. Signal Process. Control 2024, 88, 105602. [Google Scholar] [CrossRef]
207. Fotiadou, E.; van Sloun, R.J.; van Laar, J.O.; Vullings, R. A Dilated Inception CNN-LSTM Network for Fetal Heart Rate Estimation. Physiol. Meas. 2021, 42, 045007. [Google Scholar] [CrossRef]
208. Tariq, Z.; Shah, S.K.; Lee, Y. Feature-Based Fusion Using CNN for Lung and Heart Sound Classification. Sensors 2022, 22, 1521. [Google Scholar] [CrossRef] [PubMed]
  209. Sudarsanan, S.; Aravinth, J. Classification of Heart Murmur Using CNN. In Proceedings of the 2020 5th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 10–12 June 2020; pp. 818–822. [Google Scholar] [CrossRef]
  210. Fakhry, M.; Gallardo-Antolín, A. Elastic Net Regularization and Gabor Dictionary for Classification of Heart Sound Signals Using Deep Learning. Eng. Appl. Artif. Intell. 2024, 127, 107406. [Google Scholar] [CrossRef]
  211. Ainiwaer, A.; Hou, W.Q.; Qi, Q.; Kadier, K.; Qin, L.; Rehemuding, R.; Mei, M.; Wang, D.; Ma, X.; Dai, J.G.; et al. Deep Learning of Heart-Sound Signals for Efficient Prediction of Obstructive Coronary Artery Disease. Heliyon 2024, 10, e23354. [Google Scholar] [CrossRef] [PubMed]
  212. Chunduru, A.; Kishore, A.R.; Sasapu, B.K.; Seepana, K. Multi Chronic Disease Prediction System Using CNN and Random Forest. SN Comput. Sci. 2024, 5, 157. [Google Scholar] [CrossRef]
  213. Anggraeni, W.; Kusuma, M.F.; Riksakomara, E.; Wibowo, R.P.; Sumpeno, S. Combination of BERT and Hybrid CNN-LSTM Models for Indonesia Dengue Tweets Classification. Int. J. Intell. Eng. Syst. 2024, 17, 813–826. [Google Scholar] [CrossRef]
  214. Kusuma, S.; Jothi, K.R. ECG Signals-Based Automated Diagnosis of Congestive Heart Failure Using Deep CNN and LSTM Architecture. Biocybern. Biomed. Eng. 2022, 42, 247–257. [Google Scholar] [CrossRef]
  215. Shrivastava, P.K.; Sharma, M.; Sharma, P.; Kumar, A. HCBiLSTM: A Hybrid Model for Predicting Heart Disease Using CNN and BiLSTM Algorithms. Meas. Sens. 2023, 25, 100657. [Google Scholar] [CrossRef]
  216. Shaker, A.M.; Tantawi, M.; Shedeed, H.A.; Tolba, M.F. Generalization of Convolutional Neural Networks for ECG Classification Using Generative Adversarial Networks. IEEE Access 2020, 8, 35592–35605. [Google Scholar] [CrossRef]
  217. Wang, Z.; Stavrakis, S.; Yao, B. Hierarchical Deep Learning with Generative Adversarial Network for Automatic Cardiac Diagnosis from ECG Signals. Comput. Biol. Med. 2023, 155, 106641. [Google Scholar] [CrossRef]
  218. Puspitasari, R.D.I.; Ma’sum, M.A.; Alhamidi, M.R.; Kurnianingsih; Jatmiko, W. Generative Adversarial Networks for Unbalanced Fetal Heart Rate Signal Classification. ICT Express 2022, 8, 239–243. [Google Scholar] [CrossRef]
  219. Rahman, A.U.; Alsenani, Y.; Zafar, A.; Ullah, K.; Rabie, K.; Shongwe, T. Enhancing Heart Disease Prediction Using a Self-Attention-Based Transformer Model. Sci. Rep. 2024, 14, 514. [Google Scholar] [CrossRef]
  220. Wang, Q.; Zhao, C.; Sun, Y.; Xu, R.; Li, C.; Wang, C.; Liu, W.; Gu, J.; Shi, Y.; Yang, L.; et al. Synaptic Transistor with Multiple Biological Functions Based on Metal-Organic Frameworks Combined with the LIF Model of a Spiking Neural Network to Recognize Temporal Information. Microsyst. Nanoeng. 2023, 9, 96. [Google Scholar] [CrossRef]
  221. Ji, W.; Li, J.; Bi, Q.; Liu, T.; Li, W.; Cheng, L. Segment Anything Is Not Always Perfect: An Investigation of SAM on Different Real-World Applications. arXiv 2023, arXiv:2304.05750. Available online: https://segment-anything.com (accessed on 20 February 2024).
  222. El-Ghaish, H.; Eldele, E. ECGTransForm: Empowering Adaptive ECG Arrhythmia Classification Framework with Bidirectional Transformer. Biomed. Signal Process. Control 2024, 89, 105714. [Google Scholar] [CrossRef]
  223. Akan, T.; Alp, S.; Alfrad, M.; Bhuiyan, N. ECGformer: Leveraging Transformer for ECG Heartbeat Arrhythmia Classification. arXiv 2024, arXiv:2401.05434. [Google Scholar]
224. Pawłowska, A.; Karwat, P.; Żołek, N. Letter to the Editor. Re: "Dataset of Breast Ultrasound Images" by W. Al-Dhabyani, M. Gomaa, H. Khaled and A. Fahmy, Data in Brief, 2020, 28, 104863. Data Brief 2023, 48, 109247. [Google Scholar] [CrossRef] [PubMed]
  225. Pawłowska, A.; Ćwierz-Pieńkowska, A.; Domalik, A.; Jaguś, D.; Kasprzak, P.; Matkowski, R.; Fura, Ł.; Nowicki, A.; Żołek, N. Curated Benchmark Dataset for Ultrasound Based Breast Lesion Analysis. Sci. Data 2024, 11, 148. [Google Scholar] [CrossRef] [PubMed]
  226. Johnson, J.M.; Khoshgoftaar, T.M. Survey on Deep Learning with Class Imbalance. J. Big Data. 2019, 6, 27. [Google Scholar] [CrossRef]
  227. Fernando, K.R.M.; Tsokos, C.P. Dynamically Weighted Balanced Loss: Class Imbalanced Learning and Confidence Calibration of Deep Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 2940–2951. [Google Scholar] [CrossRef] [PubMed]
  228. Habbal, A.; Ali, M.K.; Abuzaraida, M.A. Artificial Intelligence Trust, Risk and Security Management (AI TRiSM): Frameworks, Applications, Challenges and Future Research Directions. Expert Syst. Appl. 2024, 240, 122442. [Google Scholar] [CrossRef]
  229. Ghubaish, A.; Salman, T.; Zolanvari, M.; Unal, D.; Al-Ali, A.; Jain, R. Recent Advances in the Internet-of-Medical-Things (IoMT) Systems Security. IEEE Internet Things J. 2021, 8, 8707–8718. [Google Scholar] [CrossRef]
  230. Avinashiappan, A.; Mayilsamy, B. Internet of Medical Things: Security Threats, Security Challenges, and Potential Solutions. In Internet of Medical Things: Remote Healthcare Systems and Applications; Hemanth, D.J., Anitha, J., Tsihrintzis, G.A., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 1–16. [Google Scholar] [CrossRef]
  231. Al-Hawawreh, M.; Hossain, M.S. A Privacy-Aware Framework for Detecting Cyber Attacks on Internet of Medical Things Systems Using Data Fusion and Quantum Deep Learning. Inf. Fusion 2023, 99, 101889. [Google Scholar] [CrossRef]
232. Bradford, L.; Aboy, M.; Liddell, K. International Transfers of Health Data between the EU and USA: A Sector-Specific Approach for the USA to Ensure an "Adequate" Level of Protection. J. Law Biosci. 2020, 7, 1–33. [Google Scholar] [CrossRef]
  233. Scheibner, J.; Raisaro, J.L.; Troncoso-Pastoriza, J.R.; Ienca, M.; Fellay, J.; Vayena, E.; Hubaux, J.-P. Revolutionizing Medical Data Sharing Using Advanced Privacy-Enhancing Technologies: Technical, Legal, and Ethical Synthesis. J. Med. Internet Res. 2021, 23, e25120. [Google Scholar] [CrossRef]
  234. Proniewska, K.; Dołęga-Dołęgowski, D.; Pręgowska, A.; Walecki, P.; Dudek, D. Holography as a Progressive Revolution in Medicine. In Simulations in Medicine: Computer-Aided Diagnostics and Therapy; De Gruyter: Berlin, Germany; Boston, MA, USA, 2020. [Google Scholar] [CrossRef]
235. Bui, T.X. Proceedings of the 56th Hawaii International Conference on System Sciences (HICSS), Hyatt Regency Maui, HI, USA, 3–6 January 2023; University of Hawaii at Manoa: Honolulu, HI, USA, 2023. [Google Scholar]
  236. Grinbaum, A.; Adomaitis, L. Moral Equivalence in the Metaverse. Nanoethics 2022, 16, 257–270. [Google Scholar] [CrossRef]
  237. Todorović, D.; Matić, Z.; Blagojević, M. Religion in Late Modern Society; A Thematic Collection of Papers of International Significance; Yugoslav Society for the Scientific Study of Religion (YSSSR): Niš, Serbia; Committee of Education and Culture of the Diocese of Požarevac and Braničevo: Požarevac, Serbia, 2022.
  238. Available online: https://www.scu.edu/ethics/metaverse/#:~:text=Do%20no%20Harm%E2%80%94Take%20no,and%20concern%20for%20other%20people (accessed on 20 February 2024).
  239. Viola, F.; Del Corso, G.; De Paulis, R.; Verzicco, R. GPU Accelerated Digital Twins of the Human Heart Open New Routes for Cardiovascular Research. Sci. Rep. 2023, 13, 8230. [Google Scholar] [CrossRef] [PubMed]
  240. Anshari, M.; Syafrudin, M.; Fitriyani, N.L.; Razzaq, A. Ethical Responsibility and Sustainability (ERS) Development in a Metaverse Business Model. Sustainability 2022, 14, 15805. [Google Scholar] [CrossRef]
  241. Chen, M. The Philosophy of the Metaverse. Ethics Inf. Technol. 2023, 25, 41. [Google Scholar] [CrossRef]
  242. Armeni, P.; Polat, I.; De Rossi, L.M.; Diaferia, L.; Meregalli, S.; Gatti, A. Digital Twins in Healthcare: Is It the Beginning of a New Era of Evidence-Based Medicine? A Critical Review. J. Pers. Med. 2022, 12, 1255. [Google Scholar] [CrossRef]
  243. Braun, M.; Krutzinna, J. Digital Twins and the Ethics of Health Decision-Making Concerning Children. Patterns 2022, 3, 100469. [Google Scholar] [CrossRef]
  244. Leonelli, S.; Tempini, N. Data Journeys in the Sciences; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  245. Coorey, G.; Figtree, G.A.; Fletcher, D.F.; Snelson, V.J.; Vernon, S.T.; Winlaw, D.; Grieve, S.M.; Mcewan, A.; Yee, J.; Yang, H.; et al. The Health Digital Twin to Tackle Cardiovascular Disease-a Review of an Emerging Interdisciplinary Field. Npj Digit. Med. 2022, 5, 126. [Google Scholar] [CrossRef]
  246. Albahri, A.S.; Duhaim, A.M.; Fadhel, M.A.; Alnoor, A.; Baqer, N.S.; Alzubaidi, L.; Albahri, O.S.; Alamoodi, A.H.; Bai, J.; Salhi, A.; et al. A Systematic Review of Trustworthy and Explainable Artificial Intelligence in Healthcare: Assessment of Quality, Bias Risk, and Data Fusion. Inf. Fusion 2023, 96, 156–191. [Google Scholar] [CrossRef]
247. European Parliamentary Research Service (EPRS). Briefing: EU Legislation in Progress. Available online: https://epthinktank.eu/eu-legislation-in-progress/ (accessed on 20 February 2024).
  248. Jalilvand, I.; Jang, J.; Gopaluni, B.; Milani, A.S. VR/MR Systems Integrated with Heat Transfer Simulation for Training of Thermoforming: A Multicriteria Decision-Making User Study. J. Manuf. Syst. 2024, 72, 338–359. [Google Scholar] [CrossRef]
  249. Jung, C.; Wolff, G.; Wernly, B.; Bruno, R.R.; Franz, M.; Schulze, P.C.; Silva, J.N.A.; Silva, J.R.; Bhatt, D.L.; Kelm, M. Virtual and Augmented Reality in Cardiovascular Care: State-of-the-Art and Future Perspectives. JACC Cardiovasc. Imaging 2022, 15, 519–532. [Google Scholar] [CrossRef] [PubMed]
  250. Arshad, I.; de Mello, P.; Ender, M.; McEwen, J.D.; Ferré, E.R. Reducing Cybersickness in 360-Degree Virtual Reality. Multisens. Res. 2021, 35, 203–219. [Google Scholar] [CrossRef] [PubMed]
  251. Daling, L.M.; Schlittmeier, S.J. Effects of Augmented Reality-, Virtual Reality-, and Mixed Reality–Based Training on Objective Performance Measures and Subjective Evaluations in Manual Assembly Tasks: A Scoping Review. Hum. Factors 2022, 66, 589–626. [Google Scholar] [CrossRef]
  252. Kimmatudawage, S.P.; Srivastava, R.; Kachroo, K.; Badhal, S.; Balivada, S. Toward Global Use of Rehabilitation Robots and Future Considerations. In Rehabilitation Robots for Neurorehabilitation in High-, Low-, and Middle-Income Countries: Current Practice, Barriers, and Future Directions; Academic Press: London, UK, 2024; pp. 499–516. [Google Scholar] [CrossRef]
  253. Salvador, M.; Regazzoni, F.; Dede’, L.; Quarteroni, A. Fast and Robust Parameter Estimation with Uncertainty Quantification for the Cardiac Function. Comput. Methods Programs Biomed. 2023, 231, 107402. [Google Scholar] [CrossRef]
Figure 1. Health Digital Twin supported by AI and XR: a basic workflow.
Figure 2. Conceptual scheme of the application of AI in cardiology.
Figure 3. Distribution of papers in the field of medicine based on origin (a) in the field of DT technologies, (b) in the field of XR-based technologies, and (c) in AI-based algorithms in image segmentation (WoS database).
Table 1. Comparison of recent (2020–2024) developments in the application of XR in cardiology and interventional cardiology.

| XR Technology Type | HMD Type | AI Support | Perception of Real Surroundings | Application Field | References |
|---|---|---|---|---|---|
| MR | HoloLens 2 | No | Yes | Visualization of ultrasound-guided femoral arterial cannulations | [47] |
| MR | HoloLens 2 | No | Yes | USG visualization | [48] |
| MR | HoloLens 2 | No | Yes | Visualization of heart structures | [49] |
| MR | HoloLens 2 | No | Yes | Operation planning | [50] |
| MR | HoloLens | No | Yes | Operation planning | [51] |
| MR | HoloLens 2 | No | Yes | Visualization of heart structures | [52] |
| MR | HoloLens 2 | No | Yes | Visualization of heart structures | [53] |
| AR | Mobile phone | No | Yes | Diagnosis of the heart | [54] |
| AR | None | No | Yes | Virtual pathology stethoscope detection | [55] |
| AR | None | Yes | Yes | Eye-tracking system | [56] |
| AR | None | Yes | Yes | Detection of semi-opaque markers in fluoroscopy | [57] |
| VR | Stanford Virtual Heart simulator | No | No | Visualization of heart structures | [58] |
| VR | Stanford Virtual Heart simulator | No | No | Visualization of heart structures | [59] |
| VR | Meta-CathLab (concept) | No | No | Merging interventional cardiology with the Metaverse | [60] |
| VR | VR glasses | Yes | No | Sleep stage classification (concept) | [61] |
| VR | VirtualCPR mobile application | Yes | No | Training in cardiopulmonary resuscitation techniques | [62] |
| VR | None | Yes | No | Diagnostics of cardiovascular diseases (visualization) | [63] |
| VR | None | No | No | Cardiovascular education | [61] |
Table 2. Top list of AI models used in cardiology, including interventional cardiology.

| AI/ML Model | Application Fields (In General) | Application Fields (In Cardiology) | References |
|---|---|---|---|
| ANNs | Classification, pattern recognition, image recognition, natural language processing (NLP), speech recognition, recommendation systems, prediction, cybersecurity, object manipulation, path planning, sensor fusion | Prediction of atrial fibrillation, acute myocardial infarction, and dilated cardiomyopathy; detection of structural abnormalities in heart tissues | [85]; [86] |
| RNNs | Ordinal or temporal problems (language translation, speech recognition, NLP, image captioning), time series prediction, music generation, video analysis, patient monitoring, disease progression prediction | Segmentation of the heart and subtle structural changes; cardiac MRI segmentation | [87]; [88] |
| LSTMs | Ordinal or temporal problems (language translation, speech recognition, NLP, image captioning), time series prediction, music generation, video analysis, patient monitoring, disease progression prediction | Segmentation and classification of 2D echo images; segmentation and classification of 3D Doppler images; segmentation and classification of videographic images and detection of AMI in echocardiography | [89]; [90] |
| CNNs | Pattern recognition, segmentation/classification, object detection, semantic segmentation, facial recognition, medical imaging, gesture recognition, video analysis | Cardiac image segmentation to diagnose CAD; cardiac image segmentation to diagnose Tetralogy of Fallot; localization of coronary artery atherosclerosis; detection of cardiovascular abnormalities; detection of arrhythmia; detection of coronary artery disease; prediction of the survival status of heart failure patients; prediction of cardiovascular disease; LV dysfunction screening; premature ventricular contraction detection | [91,92]; [93]; [94]; [95]; [96,97,98,99,100,101,102,103,104,105,106,107]; [108]; [109]; [110]; [111,112] |
| Transformers | NLP, speech processing, computer vision, graph-based tasks, electronic health records, building conversational AI systems and chatbots | Coronary artery labeling; prediction of incident heart failure; arrhythmia classification; cardiac abnormality detection; segmentation of MRI in cardiac infarction; classification of aortic stenosis severity; LV segmentation; heart murmur detection; myocardial fibrosis segmentation; ECG classification | [113,114]; [115]; [116,117,118,119]; [120]; [121]; [122,123]; [118,124,125]; [126]; [118]; [127] |
| SNNs | Pattern recognition, cognitive robotics, SNN hardware, brain–machine interfaces, neuromorphic computing | ECG classification; detection of arrhythmia; extraction of ECG features | [128,129,130]; [131,132,133]; [134] |
| GANs | Image-to-image translation, image synthesis and generation, data generation for training, data augmentation, creating realistic scenes | CVD diagnosis; segmentation of the LA and atrial scars in LGE CMR images; segmentation of ventricles based on MRI scans; left ventricle segmentation in pediatric MRI scans; generation of synthetic cardiac MRI images for congenital heart disease research | [135]; [136]; [137]; [138]; [139] |
| GNNs | Graph/node classification, link prediction, graph generation, social/biological network analysis, fraud detection, recommendation systems | Classification of polar maps in cardiac perfusion imaging; analysis of CT/MRI scans; prediction of ventricular arrhythmia; segmentation of cardiac fibrosis; diagnosis of cardiac condition (LV motion in cardiac MR cine images); automated anatomical labeling of coronary arteries; prediction of CAD; automation of coronary artery analysis using CCTA; screening of cardiac, thoracic, and pulmonary conditions in chest radiographs | [140,141]; [142]; [141]; [141]; [143]; [144]; [145]; [146]; [147] |
| QNNs | Optimization of hardware operations, user interfaces | Classification of ischemic heart disease | [97] |
| GA | Optimization techniques, risk prediction, gene therapies, medicine development | Classification of heart disease | [148] |

In each row, the cardiology applications and the reference groups are listed in corresponding order.
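To make the model families in Table 2 more concrete, the sketch below outlines a minimal 1D convolutional classifier of the kind used for ECG-based arrhythmia detection. It is an illustrative example only: the layer sizes, the 187-sample beat length, and the five-class output are assumptions made here and do not reproduce the architecture of any study cited in the table.

```python
# Minimal 1D-CNN sketch for single-beat ECG classification (PyTorch).
# Layer sizes, the 187-sample beat length, and the 5 output classes are
# illustrative assumptions, not the architecture of any cited study.
import torch
import torch.nn as nn

class ECGBeatCNN(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3),   # local waveform morphology filters
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),  # higher-level beat features
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                      # length-independent global pooling
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_samples) single-lead ECG beats
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

if __name__ == "__main__":
    model = ECGBeatCNN()
    beats = torch.randn(8, 1, 187)   # eight dummy beats, 187 samples each
    logits = model(beats)            # shape (8, 5): one score per class
    print(logits.shape)
```

The same pattern generalizes to the imaging entries in the table by replacing the 1D convolutions with 2D or 3D convolutions over echocardiography, CT, or MRI inputs.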
Table 3. Comparison of AI models applied to cardiology, including interventional cardiology.

| Network Type | Evaluation Metrics | Input | Output | XR Connection | DT Connection | Reference |
|---|---|---|---|---|---|---|
| ANN | Accuracy 94.32% | ECG recordings | Binary classification of normal and ventricular ectopic beats | No | No | [131] |
| ANN | ROI 89.00% | Echocardiography | Automatic measurement of left ventricular strain | No | No | [163] |
| ANN | Accuracy 91.00% | Electronic health records | Classification and prediction of cardiovascular diseases | No | No | [162] |
| RNN-LSTM | Accuracy 80.00%; F1 score 84.00% | 3D Doppler images | Heart abnormalities classification | No | No | [89] |
| RNN-LSTM | Accuracy 97.00%; F1 score 97.00% | 2D echo images | Heart abnormalities classification | No | No | [89] |
| RNN-LSTM | (1) Accuracy 85.10%; (2) accuracy 83.20% | Echocardiography images | Automated classification of acute myocardial infarction: (1) left ventricular long-axis view; (2) short-axis view (papillary muscle level) | No | No | [90] |
| RNN-LSTM | Accuracy 93.10% | Coronary computed tomography angiography | Diagnostics of coronary artery calcium | No | No | [175] |
| RNN-LSTM | Accuracy 90.67% | ECG recordings | Prediction of arrhythmia | No | No | [98] |
| RNN | IoU 92.13% | Cardiac MRI images | Estimation of the cardiac state: sequential four-chamber view left ventricle wall segmentation | No | No | [116] |
| CNN | Accuracy 95.92% | ECG recordings | Binary classification of normal and ventricular ectopic beats | No | No | [131] |
| CNN | IoU 61.75% | Cardiac MRI images | Estimation of the cardiac state: sequential four-chamber view left ventricle wall segmentation | No | No | [116] |
| CNN | Accuracy 94.00%; F1 score 95.00% | 2D echo images | Heart abnormalities classification | No | No | [89] |
| CNN | Accuracy 98.00%; F1 score 98.00% | 3D Doppler images | Heart abnormalities classification | No | No | [89] |
| CNN | Accuracy 92.00% | ECG recordings | ECG classification | No | No | [183] |
| CNN | Accuracy 88.00% | Electronic health records | Heart disease prediction | No | No | [135] |
| CNN | Accuracy 98.82% | ECG recordings | Prediction of heart failure and arrhythmia | No | No | [102] |
| CNN | Accuracy 95.13% | Electronic health records | Prediction of the survival status of heart failure patients | No | No | [108] |
| CNN | Accuracy 99.60% | ECG recordings | Estimation of the fetal heart rate | No | No | [207] |
| CNN | Accuracy 99.10% | Heart audio recordings | Heart disease classification | No | No | [208] |
| CNN | Accuracy 97.00% | Heart sound signals | Classification of heart murmur | No | No | [209] |
| CNN | Accuracy 98.95% | Heart sound signals | Classification of heart sound signals | No | No | [210] |
| CNN | ROC AUC 0.834 | Heart sound signals | Prediction of obstructive coronary artery disease | No | No | [211] |
| CNN | Accuracy 85.25% | MRI image scans | Chronic disease prediction | No | No | [212] |
| CNN | Accuracy 99.10% | Heart sound signals | Diagnosis of cardiovascular disease | No | No | [213] |
| CNN-LSTM | Accuracy 99.52%; Dice coef. 0.989; ROC AUC 0.999 | ECG recordings | Prediction of congestive heart failure | No | No | [214] |
| CNN-LSTM | Accuracy 96.66% | Heart Disease Cleveland UCI dataset | Prediction of heart disease | No | No | [215] |
| CNN-LSTM | Accuracy 99.00% | ECG recordings | Prediction of heart failure | No | No | [106] |
| SNN | ROC AUC 0.99 | ECG recordings | ECG classification | No | No | [181] |
| SNN | Accuracy 97.16% | ECG recordings | Binary classification of normal and ventricular ectopic beats | No | No | [131] |
| SNN | Accuracy 93.60% | ECG recordings | ECG classification | No | No | [182] |
| SNN | Accuracy 85.00% | ECG recordings | ECG classification | No | No | [180] |
| SNN | Accuracy 84.41% | ECG recordings | ECG classification | No | No | [129] |
| SNN | Accuracy 91.00% | ECG recordings | ECG classification | No | No | [183] |
| GNN | Dice coef. 0.82 | Cardiac MRI images | Prediction of ventricular arrhythmia | No | No | [141] |
| GNN | ROC AUC 0.739 | CT image scans | Prediction of coronary artery disease | No | No | [145] |
| GNN | AUC 0.821 | Chest radiographs | Screening of cardiac, thoracic, and pulmonary conditions | No | No | [147] |
| GNN | ROC area 0.98 | 12-lead ECG records | Remote monitoring of surface electrocardiograms | No | No | [192] |
| GAN | Accuracy 99.08%; Dice coef. 0.987 | CT image scans | Cardiac fat segmentation | No | No | [52] |
| GAN | Accuracy 98.00% | ECG recordings | ECG classification | No | No | [216] |
| GAN | Accuracy 95.40% | ECG recordings | ECG classification | No | No | [217] |
| GAN | Accuracy 68.07% | CTG signal dataset | Fetal heart rate signal classification | No | No | [218] |
| GAN | Dice coef. 0.880 | MRI image scans | Segmentation of the left ventricle | No | No | [138] |
| Transformers | Accuracy 96.51% | Cleveland dataset | Prediction of cardiovascular diseases | No | No | [219] |
| Transformers | Accuracy 98.70% | Heart sound signals (Mel-spectrogram, bispectral analysis, and phonocardiogram) | Heart sound classification | No | No | [220] |
| Transformers | Dice coef. 0.861 | 12-lead ECG records | Arrhythmia classification | No | No | [221] |
| Transformers | Dice coef. 0.0004 | ECG recordings | Arrhythmia classification | No | No | [222] |
| Transformers | Dice coef. 0.980 | ECG recordings | Arrhythmia classification | No | No | [223] |
| Transformers | Dice coef. 0.911 | ECG recordings | Classification of ECG recordings | No | No | [134] |
| GA | - | Laboratory data, patient medical history, ECG, physical examinations, and echocardiogram (Z-Alizadeh Sani dataset) | Determination of parameters for the prediction of coronary artery disease (an SVM-based classifier was then applied) | No | No | [157] |
| QNN | Accuracy 84.60% | Electronic health records | Classification of ischemic cardiopathy | No | No | [97] |
| QNN | Accuracy 91.80%; Dice coef. 0.918 | X-ray coronary angiography | Stenosis detection | No | No | [202] |
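The evaluation metrics reported in Table 3 (accuracy, F1 score, ROC AUC, Dice coefficient, and IoU) can be reproduced on toy data as in the short sketch below. The arrays are synthetic and purely illustrative, scikit-learn is assumed to be available, and none of the values correspond to results cited in the table.

```python
# Illustrative computation of the evaluation metrics used in Table 3.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# Classification-style metrics (e.g., ECG beat or heart-sound classification).
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])                    # reference labels
y_pred = np.array([0, 1, 0, 0, 1, 0, 1, 1])                    # hard predictions
y_score = np.array([0.1, 0.9, 0.4, 0.2, 0.8, 0.3, 0.7, 0.6])   # predicted probabilities
print("accuracy:", accuracy_score(y_true, y_pred))
print("F1 score:", f1_score(y_true, y_pred))
print("ROC AUC :", roc_auc_score(y_true, y_score))

# Segmentation-style overlap metrics (e.g., left-ventricle masks).
def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def iou(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union)

mask_pred = np.zeros((64, 64), dtype=bool); mask_pred[16:48, 16:48] = True
mask_true = np.zeros((64, 64), dtype=bool); mask_true[20:52, 20:52] = True
print("Dice:", dice_coefficient(mask_pred, mask_true))
print("IoU :", iou(mask_pred, mask_true))
```

Accuracy, F1, and ROC AUC characterize classification outputs, whereas Dice and IoU quantify the spatial overlap between predicted and reference segmentation masks, which is why the segmentation entries in Table 3 report Dice or IoU rather than accuracy.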
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
