
EDITORIAL article

Front. Cardiovasc. Med., 23 October 2023
Sec. Cardiovascular Imaging
Volume 10 - 2023 | https://doi.org/10.3389/fcvm.2023.1307812

Editorial: Generative adversarial networks in cardiovascular research

Qiang Zhang1,2* Tolga Cukur3,4 Hayit Greenspan5,6 Guang Yang7,8,9,10
  • 1Division of Cardiovascular Medicine, Radcliffe Department of Medicine, University of Oxford, Oxford, United Kingdom
  • 2Big Data Institute, University of Oxford, Oxford, United Kingdom
  • 3Department of Electrical and Electronics Engineering, Bilkent University, Ankara, Türkiye
  • 4National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara, Türkiye
  • 5Department of Biomedical Engineering, Tel Aviv University, Tel Aviv, Israel
  • 6Department of Radiology, Icahn School of Medicine, Mount Sinai, New York, NY, United States
  • 7Bioengineering Department and Imperial-X, Imperial College London, London, United Kingdom
  • 8National Heart and Lung Institute, Imperial College London, London, United Kingdom
  • 9Cardiovascular Research Centre, Royal Brompton Hospital, London, United Kingdom
  • 10School of Biomedical Engineering & Imaging Sciences, King’s College London, London, United Kingdom

Editorial on the Research Topic
Generative adversarial networks in cardiovascular research

Deep generative models are a family of neural networks that learn the data distribution from a large set of training samples and can then generate realistic new samples. They are among the most exciting technical breakthroughs in deep learning in recent years. A popular example is generative adversarial networks (GANs) (1), which leverage a game-theoretic interplay between a generator and an adversarial discriminator to implicitly characterize the data distribution. In cardiovascular medicine, GANs are increasingly adopted in a wide range of applications for analysing cardiovascular MRI, echocardiography, electrocardiography and patient characteristics. This Research Topic has collected articles on the application of deep GAN models to left atrial appendage segmentation for surgical occlusion (Zhu et al.), function analysis in coronary artery stenosis (Yong et al.), late gadolinium enhancement scar assessment (Gonzales et al.), and strain analysis (Deng et al.), using echocardiography and cardiac MRI.
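The game-theoretic interplay can be made concrete with the two losses optimized in alternation: the discriminator learns to separate real from generated samples, while the generator is rewarded for fooling it. The following NumPy sketch (a schematic illustration, not code from any of the cited articles) computes both losses from raw discriminator logits:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_loss(d_real_logits, d_fake_logits):
    """Discriminator maximizes log D(x) + log(1 - D(G(z)));
    equivalently, it minimizes the negative of that sum."""
    eps = 1e-12  # guard against log(0)
    p_real = sigmoid(d_real_logits)
    p_fake = sigmoid(d_fake_logits)
    return -np.mean(np.log(p_real + eps) + np.log(1.0 - p_fake + eps))

def generator_loss(d_fake_logits):
    """Non-saturating generator loss: maximize log D(G(z))."""
    eps = 1e-12
    p_fake = sigmoid(d_fake_logits)
    return -np.mean(np.log(p_fake + eps))
```

In practice each loss is backpropagated through its own network in turn; at the equilibrium of this minimax game the generator's samples become indistinguishable from the training distribution.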

In Zhu et al., an adversarial latent-space alignment framework is proposed for left atrial appendage (LAA) segmentation in transesophageal echocardiography (TEE). LAA segmentation and quantification are crucial for guiding the surgical treatment of LAA-associated ischaemic strokes, but remain challenging on TEE due to image artefacts, noise and the highly variable LAA structure. To address this challenge, the authors encoded prior knowledge of LAA shapes in a latent feature space and used generative adversarial learning to align the automated segmentation with this prior in the latent space, thereby constraining the segmentation results. The approach was validated on 1,783 TEE images and achieved superior performance with a Dice Similarity Coefficient (DSC) of 0.83. This work demonstrates the effectiveness of GANs in enhancing deep-learning models for challenging imaging modalities.
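The DSC reported here and throughout this Research Topic measures the overlap between a predicted and a reference binary mask, ranging from 0 (no overlap) to 1 (perfect agreement). A minimal NumPy implementation (illustrative only; the handling of two empty masks as perfect agreement is one common convention):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2 * |P ∩ T| / (|P| + |T|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:  # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * intersection / denom
```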

Yong et al. combined a GAN approach with intravascular ultrasound (IVUS) for non-invasive functional analysis of coronary artery stenosis. The current clinical standard for coronary artery function evaluation is invasive fractional flow reserve (FFR); however, IVUS, a procedure routinely used for morphological assessment, has the potential to provide functional evaluation simultaneously. To assess this approach, 92 patients who received both IVUS and FFR assessments were retrospectively identified. The authors employed SegAN (2) to automatically segment arterial lumen contours from IVUS images, achieving high DSCs of 0.95 and 0.97 for lumen and media–adventitia border delineation, respectively. This allowed accurate calculation of IVUS-derived FFR in good agreement with invasive FFR (r = 0.94), with a diagnostic accuracy of 90.7%. Powered by deep generative models, IVUS demonstrated the feasibility of achieving diagnostic performance comparable to invasive FFR, with significantly lower computation time.

Gonzales et al. validated GAN-based augmentation of training samples to improve segmentation of cardiac MRI late gadolinium enhancement (LGE), the imaging standard for myocardial scar assessment. The study leveraged GAN-generated virtual native enhancement (VNE) (3, 4), a new gadolinium-free modality that resembles LGE, to expand the training set. A dataset of 4,716 LGE images (from 1,363 patients with hypertrophic cardiomyopathy and myocardial infarction) was retrospectively collated, and the LGE data were augmented with a GAN-based generator to produce VNE images. Incorporating the GAN-generated VNE data into training consistently improved segmentation performance: models trained on LGE alone yielded DSCs of 0.835 and 0.838 for LGE and VNE segmentation, whereas models trained on both LGE and VNE yielded higher DSCs of 0.845 and 0.845. Similarly, the performance of the model trained on LGE data alone with extensive data augmentation (5) (DSC 0.846) was surpassed by the same framework when VNE data were added (0.851). This work shows data augmentation with generative models to be an effective approach to improving deep-learning training, especially when training data are limited.

Additionally, in Deng et al., a deep-learning approach was developed for automated strain analysis on echocardiography. Strain analysis using echocardiography has great potential to offer rapid heart function assessment in the routine clinical workflow; however, it requires myocardial segmentation, which is challenging in echocardiography. The authors developed a 3D U-Net and an optical-flow network to segment the left ventricular (LV) myocardium, track its motion, and calculate longitudinal strain. The AI-based echocardiography interpretation demonstrated good agreement (Spearman correlation of 0.9) with traditional semi-automatic speckle tracking echocardiography (STE), with no significant bias (mean bias −1.2 ± 1.5%), while being much faster (15 s vs. 5–10 min). Further development of generative models to learn the prior and distribution of a representative, real in-vivo data bank may help translate this echocardiography technique into clinical practice.
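Once the myocardium is segmented and tracked, longitudinal strain reduces to the fractional change in tracked myocardial length relative to its end-diastolic reference. The sketch below (an illustration of the standard Lagrangian strain formula, not the authors' pipeline; function names are hypothetical) computes per-frame strain and its peak value from a series of tracked contour lengths:

```python
import numpy as np

def longitudinal_strain(contour_lengths, ed_index=0):
    """Lagrangian longitudinal strain (%) per frame:
    strain_t = 100 * (L_t - L0) / L0, with L0 the length at end-diastole.

    contour_lengths: tracked myocardial centerline length at each frame;
    ed_index: frame index of end-diastole (reference length L0).
    """
    lengths = np.asarray(contour_lengths, dtype=float)
    l0 = lengths[ed_index]
    return 100.0 * (lengths - l0) / l0

def peak_strain(contour_lengths, ed_index=0):
    """Peak (most negative) strain over the cycle; myocardial shortening
    during systole makes systolic strain negative."""
    return float(np.min(longitudinal_strain(contour_lengths, ed_index)))
```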

In summary, these articles have highlighted the significant potential of deep generative models in revolutionising cardiac imaging, particularly in addressing intricate tasks and various image modalities. Looking ahead, we foresee a surge in upcoming research focused on fine-tuning and utilising deep generative models for a broader range of applications. These may involve reducing doses, rectifying missing modalities, augmenting data, refining image reconstruction (6), precise segmentation (7), accurate tracking of anatomical features, and dependable classification within the field of cardiovascular medicine.

Author contributions

QZ: Writing – original draft, Project administration. TC: Project administration, Writing – review & editing. HG: Project administration, Writing – review & editing. GY: Project administration, Writing – review & editing.

Funding

The author(s) declare financial support was received for the research, authorship, and/or publication of this article.

QZ acknowledges the Oxford BHF Centre of Research Excellence grant RE/18/3/34214. TC was supported in part by a GEBIP 2015 fellowship by the Turkish Academy of Sciences and by a BAGEP 2017 fellowship by the Science Academy. GY was supported in part by the BHF (TG/18/5/34111, PG/16/78/32402), ERC IMI (101005122), the H2020 (952172), the MRC (MC/PC/21013), the Royal Society (IEC\NSFC\211235), the NVIDIA Academic Hardware Grant Program, the SABER project supported by Boehringer Ingelheim Ltd., Wellcome Leap Dynamic Resilience, and the UKRI Future Leaders Fellowship (MR/V023799/1).

Conflict of interest

QZ has authorship rights for patent WO2021/044153: “Enhancement of Medical Images”.

The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The author(s) declared that they were an editorial board member of Frontiers, at the time of submission. This had no impact on the peer review process and the final decision.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial networks. Commun ACM. (2020) 63(11):139–44. doi: 10.1145/3422622


2. Xue Y, Xu T, Zhang H, Long LR, Huang X. Segan: adversarial network with multi-scale L1 loss for medical image segmentation. Neuroinformatics. (2018) 16:383–92. doi: 10.1007/s12021-018-9377-x


3. Zhang Q, Burrage MK, Shanmuganathan M, Gonzales RA, Lukaschuk E, Thomas KE, et al. Artificial intelligence for contrast-free MRI: scar assessment in myocardial infarction using deep learning-based virtual native enhancement. Circulation. (2022) 146(20):1492–503. doi: 10.1161/CIRCULATIONAHA.122.060137


4. Zhang Q, Burrage MK, Lukaschuk E, Shanmuganathan M, Popescu IA, Nikolaidou C, et al. Toward replacing late gadolinium enhancement with artificial intelligence virtual native enhancement for gadolinium-free cardiovascular magnetic resonance tissue characterization in hypertrophic cardiomyopathy. Circulation. (2021) 144(8):589–99. doi: 10.1161/CIRCULATIONAHA.121.054432


5. Isensee F, Jaeger PF, Kohl SA, Petersen J, Maier-Hein KH. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods. (2021) 18(2):203–11. doi: 10.1038/s41592-020-01008-z


6. Yang G, Yu S, Dong H, Slabaugh G, Dragotti PL, Ye X, et al. DAGAN: deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction. IEEE Trans Med Imaging. (2018) 37(6):1310–21. doi: 10.1109/TMI.2017.2785879


7. Wu Y, Tang Z, Li B, Firmin D, Yang G. Recent advances in fibrosis and scar segmentation from cardiac MRI: a state-of-the-art review and future perspectives. Front Physiol. (2021) 12:709230. doi: 10.3389/fphys.2021.709230


Keywords: deep generative models, generative adversarial networks (GAN), echocardiography (Echo), cardiovascular magnetic resonance, segmentation (Image processing)

Citation: Zhang Q, Cukur T, Greenspan H and Yang G (2023) Editorial: Generative adversarial networks in cardiovascular research. Front. Cardiovasc. Med. 10:1307812. doi: 10.3389/fcvm.2023.1307812

Received: 5 October 2023; Accepted: 13 October 2023;
Published: 23 October 2023.

Edited and Reviewed by: Xiang Li, Harvard Medical School, United States

© 2023 Zhang, Cukur, Greenspan and Yang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Qiang Zhang qiang.zhang@cardiov.ox.ac.uk
