Editorial

Editorial on Special Issue “Artificial Intelligence in Image-Based Screening, Diagnostics, and Clinical Care of Cardiopulmonary Diseases”

by Sivaramakrishnan Rajaraman and Sameer Antani *
Computational Health Research Branch, National Library of Medicine, 8600 Rockville Pike, Bethesda, MD 20894, USA
* Author to whom correspondence should be addressed.
Diagnostics 2022, 12(11), 2615; https://doi.org/10.3390/diagnostics12112615
Submission received: 19 October 2022 / Accepted: 25 October 2022 / Published: 27 October 2022
Cardiopulmonary diseases are a significant cause of mortality and morbidity worldwide. The COVID-19 pandemic placed significant demands on clinicians and care providers, particularly in low-resource or high-burden regions. Simultaneously, advances in artificial intelligence (AI) and machine learning (ML), along with the increased availability of relevant images, sharpened the focus on cardiopulmonary diseases. According to a recent American Lung Association report, more than 228,000 people will be diagnosed with lung cancer in the United States alone this year, with the rate of new cases varying by state [1]. Further, heart disease causes mortality irrespective of ethnic and racial origin. Additionally, infectious diseases such as tuberculosis (TB), often coupled with human immunodeficiency virus (HIV) comorbidity, present drug-resistant strains that greatly impact treatment pathways and survival rates [2]. The screening, diagnosis, and management of such cardiopulmonary diseases have become difficult owing to the limited availability of diagnostic tools and experts, particularly in low- and middle-income regions. Early screening and the accurate diagnosis and staging of cardiopulmonary diseases could play a crucial role in treatment and care and potentially aid in reducing mortality. Radiographic imaging methods such as computed tomography (CT), chest X-rays (CXRs), and echo-ultrasound are widely used in screening and diagnosis [3,4,5,6]. Research on image-based AI and ML, particularly convolutional neural network (CNN)-based deep learning (DL) methods, can help increase access to care, reduce variability in human performance, and improve care efficiency while serving as a surrogate for expert assessment [7]. Significant progress has been made [5,8,9,10] in DL-based medical image modality classification, segmentation, detection, and retrieval techniques, with a positive impact on clinical and biomedical research. We wanted to capture a snapshot of these advances through a Special Issue collection of peer-reviewed, high-quality primary research studies and literature reviews focusing on novel AI/ML/DL methods and their application in image-based screening, diagnosis, and clinical management of cardiopulmonary diseases. The published studies present state-of-the-art AI in cardiopulmonary medicine with an aim toward addressing this global health challenge.
Studying the articles in this collection, the reader will observe that the choice of DL model depends largely on the characteristics of the data under study [11]. A study of the literature reveals that no individual DL model is optimal for a wide range of medical imaging modalities [12]. Although DL models deliver superior performance, their performance is shown to improve further with the availability of meaningful data and computational resources [13]. The quality of medical images and their annotations also plays an important role in the success of DL models. The visual characteristics of medical images, viz., shape, size, color, texture, and orientation, are unique compared to natural stock photographic images [14]. The regions of interest (ROIs) concerning disease manifestations or organs in medical images are relatively small compared to those in natural images. Hence, it is crucial to select the optimal DL model for the medical image modality and problem under study. Unlike natural images, medical images and their associated labels are often scarce. Strategies including transfer learning [13,15] and multicenter collaboration [11] have been proposed to handle data scarcity. Transfer learning-based approaches are prominently used because they leverage the knowledge learned from a large collection of stock photographic images such as ImageNet [16] to improve performance and generalization in medical visual recognition tasks with a sparse collection of medical data and associated labels. In this regard, Gozzi et al. [17] sought to identify the optimal transfer learning strategy for a CXR classification task. First, several ImageNet-pretrained CNN models were retrained on the publicly available CheXpert [18] CXR dataset to learn CXR modality-specific feature representations; the literature [19,20,21] shows that such medical image modality-specific retraining of ImageNet-pretrained models yields significant gains in related classification, segmentation, and detection tasks. Next, the authors evaluated the classification performance achieved through multiple transfer learning methods, such as image feature (embedding) extraction, fine-tuning, stacking, and tree-based classification with random forests (RFs), using a private CXR dataset, and qualitatively evaluated performance using gradient-weighted class activation maps (Grad-CAM) [22]. The authors demonstrated superior performance, with a 0.856 area under the curve (AUC), using the image embeddings extracted from the penultimate layer of the CNN models and an averaging ensemble of the RF predictions, showcasing this as the optimal transfer learning strategy for the task under study. The Grad-CAM maps showed that the CNN models learned task-specific features to improve prediction performance.
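To make the embedding-extraction strategy concrete, the following is a minimal Python sketch of the general idea, not the authors' exact pipeline: a pretrained CNN with its classification head removed serves as a fixed feature extractor, and a random forest is trained on the resulting embeddings (the training tensors here are hypothetical placeholders for a labeled CXR dataset).

```python
import torch
import torchvision.models as models
from sklearn.ensemble import RandomForestClassifier

# Load an ImageNet-pretrained CNN and drop its classification head so that
# the penultimate-layer activations serve as fixed image embeddings.
backbone = models.densenet121(weights="IMAGENET1K_V1")
backbone.classifier = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_embeddings(images: torch.Tensor) -> torch.Tensor:
    """images: (N, 3, 224, 224) batch of preprocessed CXRs."""
    return backbone(images)

# Train a random forest on the frozen embeddings; in an ensemble setting,
# the predictions of one forest per backbone would then be averaged.
# X_train/y_train are hypothetical stand-ins for a labeled CXR dataset.
X_train = extract_embeddings(torch.randn(32, 3, 224, 224)).numpy()
y_train = torch.randint(0, 2, (32,)).numpy()
rf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
```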
In another study, Huang et al. [23] evaluated the gains achieved through transfer learning in a multi-label CXR classification task. They used a private CXR collection containing multiple abnormalities, including aortic sclerosis/calcification, arterial curvature, consolidations, pulmonary fibrosis, enlarged hilar shadows, scoliosis, cardiomegaly, and intercostal pleural thickening. ImageNet-pretrained CNN models were retrained on the CheXpert and NIH CXR-14 [24] datasets to learn CXR modality-specific representations. The learned knowledge was then transferred and fine-tuned for a related CXR classification task. They further evaluated the gains achieved through multiple transfer learning strategies, such as the reuse of pretrained weights, layer transfer in which some of the model weight layers were frozen, and full model retraining, using models trained on differently sized CheXpert and NIH CXR-14 datasets. It was observed that CXR modality-specific fine-tuning of the ImageNet-pretrained models using the NIH CXR-14 dataset demonstrated superior prediction performance, with an accuracy of 0.935, compared to the other models and methods. The authors recommend retraining CNN models using multiple cross-institutional datasets for promising performance and generalization under conditions of sparse medical data and label availability.
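These transfer strategies can be illustrated with a short, hedged sketch; the `build_transfer_model` helper and its strategy names are hypothetical, and the exact layer-freezing choices in the paper may differ.

```python
import torch
import torchvision.models as models

def build_transfer_model(strategy: str, num_labels: int = 8) -> torch.nn.Module:
    """Illustrative transfer strategies: 'reuse' keeps all pretrained weights
    trainable, 'freeze' locks the early convolutional blocks (layer transfer),
    and 'scratch' retrains from random initialization."""
    weights = None if strategy == "scratch" else "IMAGENET1K_V1"
    model = models.resnet50(weights=weights)
    if strategy == "freeze":
        # Freeze everything except the last residual stage and the head.
        for name, param in model.named_parameters():
            if not name.startswith(("layer4", "fc")):
                param.requires_grad = False
    # Multi-label head: one logit per abnormality, trained with BCE.
    model.fc = torch.nn.Linear(model.fc.in_features, num_labels)
    return model

model = build_transfer_model("freeze")
criterion = torch.nn.BCEWithLogitsLoss()  # multi-label objective
```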
DL models have demonstrated poor performance and generalization in cases where the distribution of the data used to train the models (source distribution) differs from that of the unseen real-world data (target distribution). This lack of generalization could be attributed to several factors, including changes in image acquisition protocols, data formatting and labeling, patient heterogeneity based on age, gender, race, and ethnicity, and varying characteristics of the underlying disease manifestations between the source and target distributions [25]. The discrepancy in the characteristics of the source and target data may eventually lead to domain shift, resulting in performance degradation and sub-optimal generalization. Under these circumstances, training and evaluating the models using the source data may not accurately reflect real-world settings. Karki et al. [26] discussed the generalization issues with DL models trained to distinguish drug-resistant TB (DR-TB) manifestations from drug-sensitive TB (DS-TB) manifestations using CXRs. They observed sub-optimal classification performance, with an AUC of 0.65 on an unseen test set, for a CNN model trained on internal data. The authors also observed poor localization in the Grad-CAM activation maps compared to the radiologist-annotated ROIs. Training a multi-task attention model using lesion location information from prior TB infection helped to improve classification performance (AUC = 0.68) on the blinded test set. The authors highlight differences in acquisition protocols and variation in non-pathological and non-anatomical image attributes across the datasets as contributors to sub-optimal performance and generalization.
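One plausible reading of such a multi-task attention model is a joint objective that supervises both the DR-TB/DS-TB label and the model's attention maps with radiologist lesion masks; the sketch below is an assumption about the general form of the loss, not the authors' published architecture.

```python
import torch

def multitask_loss(cls_logits, cls_labels, attn_maps, lesion_masks, alpha=0.5):
    """Hypothetical joint objective: DR-TB vs. DS-TB classification plus an
    auxiliary term pushing the model's attention maps toward radiologist-
    annotated lesion masks. All inputs are float tensors; `alpha` weights
    the auxiliary localization term."""
    cls_loss = torch.nn.functional.binary_cross_entropy_with_logits(
        cls_logits, cls_labels)
    loc_loss = torch.nn.functional.binary_cross_entropy_with_logits(
        attn_maps, lesion_masks)
    return cls_loss + alpha * loc_loss
```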
Mueller et al. [27] assessed the diagnostic performance of dual-energy subtraction radiography (DE) [28] in detecting pulmonary emphysema and compared it to the performance achieved using conventional radiography (CR). Pulmonary emphysema, a chronic obstructive pulmonary disease (COPD), blocks airflow in the lungs and causes breathing disorders. CT imaging is reported to be the most sensitive radiological imaging method for detecting and quantifying pulmonary emphysema [29]. The authors used posteroanterior and lateral radiographic projections acquired from patients using CR, DE, and CT imaging. Expert radiologists identified the presence and degree of manifestations consistent with pulmonary emphysema in the DE and CR images, with CT as the reference standard. The specificity and recall in detecting and localizing the disease and the inter-reader consensus were measured. The authors observed a high consensus between the readers in identifying pulmonary emphysema manifestations using CR images (kappa = 0.611) and a moderate consensus (kappa = 0.433) using the DE images. The authors conclude that the diagnostic performance in terms of detecting, quantifying, and localizing pulmonary emphysema manifestations using CR and DE imaging is comparable.
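Inter-reader consensus of this kind is typically quantified with Cohen's kappa; the snippet below shows the computation on hypothetical per-image emphysema grades from two readers.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-image emphysema grades (0-3) from two readers.
reader_a = [0, 1, 2, 2, 0, 3, 1, 0]
reader_b = [0, 1, 1, 2, 0, 3, 2, 0]

kappa = cohen_kappa_score(reader_a, reader_b)
print(f"Cohen's kappa: {kappa:.3f}")  # ~0.4-0.6 = moderate, >0.6 = substantial
```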
Li et al. [30] performed a systematic review of the literature to analyze the added effect of AI-based methods on the performance of physicians detecting cardiopulmonary pathologies using CXR and CT images. They followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) approach [31] to record the different stages of their literature review process. The authors retrieved relevant literature on AI-based cardiopulmonary screening and diagnosis, published in the last 20 years, using Web of Science, SCOPUS, PubMed, and other literature archives. The authors analyzed human performance in terms of evaluation time, recall, specificity, accuracy, and AUC, in the presence or absence of AI-based assistive tools. It was observed that the average recall increased from 67.8% to 74.6% when human decisions were supplemented by AI assistive tools. Similar improvements were observed in specificity (82.2% to 85.4%), accuracy (75.4% to 81.7%), and AUC (0.75 to 0.80). A significant reduction in evaluation time was also observed with AI assistance.
In our work [32], we evaluated the gains achieved using modality-specific CNN backbones in a RetinaNet model for detecting pneumonia-consistent manifestations in CXRs. We retrained ImageNet-pretrained DL models, viz., VGG-16, VGG-19, DenseNet-121, ResNet-50, EfficientNet-B0, and MobileNet, on the CheXpert and TBX11K datasets to learn CXR modality-specific features. The best-performing architectures, viz., VGG-16 and ResNet-50, were used as the modality-specific classifier backbones in a RetinaNet-based object detection model. We used focal loss and focal Tversky loss functions to train the classifier backbones. The RetinaNet model was fine-tuned on the RSNA CXR [33] collection to detect pneumonia-consistent manifestations. We compared detection performance using various weight-initialization methods, viz., random, ImageNet-pretrained, and CXR modality-specific weights, for the classifier backbones. We observed that the VGG-16 and ResNet-50 classifier backbones initialized with the CXR modality-specific weights delivered superior performance compared to random and ImageNet-pretrained weight initializations. We further constructed a weighted averaging ensemble of the predictions of the top three performing models, viz., ResNet-50 with CXR modality-specific weights trained with focal loss, ResNet-50 with CXR modality-specific weights trained with focal Tversky loss, and ResNet-50 with random weights trained with focal loss, to arrive at the final predictions. The weighted averaging ensemble delivered a mean average precision (mAP) of 0.3272, markedly exceeding the state of the art (mAP: 0.2547). We attribute this improvement to the CXR modality-specific weight initializations and to ensemble learning, which reduced prediction variance compared to the constituent models.
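For reference, the two training objectives named above have standard formulations; the sketch below gives common PyTorch implementations of focal loss and focal Tversky loss (hyperparameter values are illustrative defaults, not necessarily those used in our study).

```python
import torch

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Focal loss down-weights easy examples so training focuses on hard,
    sparse regions: FL = -alpha * (1 - p_t)^gamma * log(p_t)."""
    bce = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, targets, reduction="none")
    p_t = torch.exp(-bce)  # model probability assigned to the true class
    return (alpha * (1 - p_t) ** gamma * bce).mean()

def focal_tversky_loss(probs, targets, a=0.7, b=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss trades off false negatives (weight a) against
    false positives (weight b) and emphasizes hard examples via gamma."""
    tp = (probs * targets).sum()
    fn = ((1 - probs) * targets).sum()
    fp = (probs * (1 - targets)).sum()
    tversky = (tp + eps) / (tp + a * fn + b * fp + eps)
    return (1 - tversky) ** gamma
```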
A study of the literature reveals that COVID-19 viral infection can cause acute respiratory distress syndrome and may lead to rapidly progressive and lethal pneumonia in infected patients [34]. The laboratory-based real-time reverse transcription polymerase chain reaction (rRT-PCR) test has been reported to be the most sensitive test for identifying COVID-19 infection [35]. However, several challenges have been reported in performing this test, including high false negative rates (reduced recall), delayed processing, and variability in test protocols. CT imaging has been reported to be an effective alternative for identifying COVID-19 disease-consistent evolution, manifestation, and progression [36]. AI-based methods applied to CT imaging could supplement clinical decision-making in identifying COVID-19, particularly in resource-constrained settings, to facilitate swift referrals and improve patient care. Suri et al. [37] performed an inter-variability analysis by segmenting the lungs to assess COVID-19 severity using CT images. The authors used two ground-truth (GT) annotations from different experts and trained U-Net [38] models to segment the lung regions of interest. The authors hypothesized that an AI model could be considered unbiased if the test performance of models trained on the two different GT annotations lay within a 5% range, and they validated this hypothesis through empirical observations. The difference in the correlation coefficients obtained using the models trained on the two GT annotations was below the 5% range, demonstrating robust lung segmentation performance.
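The unbiasedness criterion reduces to a simple tolerance check; the helper below is a hypothetical illustration of that 5% rule applied to, e.g., the correlation coefficients of the two GT-trained models.

```python
def within_tolerance(metric_gt1: float, metric_gt2: float, tol: float = 0.05) -> bool:
    """Hypothetical check of the unbiasedness criterion: a model is deemed
    unbiased if test metrics obtained when training on two different
    ground-truth annotations differ by less than `tol` (5% by default)."""
    return abs(metric_gt1 - metric_gt2) / max(metric_gt1, metric_gt2) < tol

# e.g., correlation coefficients of lung-area estimates vs. the reference
print(within_tolerance(0.93, 0.91))  # True: difference lies within the 5% band
```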
In another study, Wang et al. [39] measured the three-dimensional (3D) vascular diameters of the aorta and the pulmonary artery in non-contrast-enhanced chest CT images to detect pulmonary hypertension. The authors proposed a novel two-stage 3D-CNN segmentation pipeline to segment the aorta and pulmonary artery and measure their diameters in 3D space. They reported superior segmentation performance in terms of the Dice similarity coefficient (DSC) metric (0.97 DSC for the aorta and 0.93 DSC for the pulmonary artery). The authors discussed the benefits of such a segmentation approach in providing a non-invasive, pre-operative evaluation of pulmonary hypertension for optimal surgical planning and reduced surgical risk.
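The DSC metric reported here has a direct formulation for binary 3D masks; the following sketch computes it with NumPy on hypothetical voxel grids.

```python
import numpy as np

def dice_3d(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary 3D masks:
    DSC = 2|A intersect B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Hypothetical voxel grids standing in for an aorta segmentation and its GT.
pred = np.zeros((64, 64, 64), dtype=bool); pred[20:40, 20:40, 20:40] = True
truth = np.zeros((64, 64, 64), dtype=bool); truth[22:42, 20:40, 20:40] = True
print(f"DSC: {dice_3d(pred, truth):.3f}")  # 0.900 for this synthetic overlap
```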
Khan et al. [40] proposed a joint segmentation and classification network to detect lung nodules in publicly available lung CT datasets. Unified segmentation and classification not only helps to learn and delineate the semantic regions of interest but also classifies them into their respective categories. The authors used VGG-SegNet [41] for nodule segmentation. The classification model was constructed by appending classification layers to the VGG-SegNet encoder backbone. The features extracted from the penultimate layer of the trained model were concatenated with hand-crafted features extracted using gray-level co-occurrence matrix (GLCM), local binary pattern (LBP), and pyramid histogram of oriented gradients (PHOG) algorithms. A radial basis function kernel-initialized support vector machine (RBF-SVM) classifier learned these concatenated features, improving classification performance to a 97.83% accuracy.
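A simplified version of this hand-crafted feature pipeline can be sketched with scikit-image and scikit-learn; PHOG and the CNN embeddings used in the paper are omitted here, and the patches and labels below are synthetic placeholders.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern
from sklearn.svm import SVC

def handcrafted_features(patch: np.ndarray) -> np.ndarray:
    """GLCM and LBP texture descriptors for a grayscale uint8 nodule patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2], levels=256)
    glcm_feats = [graycoprops(glcm, p).mean()
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([glcm_feats, lbp_hist])

# Synthetic patches/labels; the concatenated features feed an RBF-SVM.
X = np.stack([handcrafted_features(
        np.random.randint(0, 256, (64, 64), dtype=np.uint8)) for _ in range(20)])
y = np.random.randint(0, 2, 20)
clf = SVC(kernel="rbf").fit(X, y)
```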
AlOthman et al. [42] proposed a novel feature extraction technique with minimal computational overhead to detect and assess the severity of coronary artery disease (CAD) using CT images. The authors used enhanced features from accelerated segment test (FAST) to reduce the dimensionality of the features extracted from a CNN model. The authors observed improved performance with this feature extraction method, demonstrating accuracies of 99.2% and 98.73% on two benchmark datasets. These findings highlight the importance of optimal feature selection in improving model performance.
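FAST itself is a standard corner detector; the snippet below shows its OpenCV usage on a stand-in image to illustrate how restricting attention to FAST keypoints can shrink a feature set, though the paper's enhanced-FAST scheme likely differs in its details.

```python
import cv2
import numpy as np

# FAST (Features from Accelerated Segment Test) finds corner keypoints
# cheaply; keeping only these salient locations is one way to reduce the
# dimensionality of downstream features (a loose reading of the paper's idea).
img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in CT slice
fast = cv2.FastFeatureDetector_create(threshold=25, nonmaxSuppression=True)
keypoints = fast.detect(img, None)
print(f"{len(keypoints)} salient points retained out of {img.size} pixels")
```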
Germain et al. [43] analyzed whether CNN models could supersede the performance of experienced clinicians in diagnosing cardiac amyloidosis (CA) using cine cardiovascular magnetic resonance (cine-CMR) images. This disease results in the accumulation of amyloid fibrils in cardiac tissues, which may lead to progressive cardiomyopathy. Cine imaging is a type of magnetic resonance imaging (MRI) sequence that captures motion, and cine-CMR is a sensitive diagnostic modality used to assess cardiac tissue characteristics and dysfunctions such as CA [44]. Preprocessed systolic and diastolic cine-CMR images were used to train a VGG-based CNN model to classify them as manifesting CA or left ventricular hypertrophy (LVH). The model's performance was compared to that of three experienced radiologists. The VGG-based CNN model significantly exceeded (p < 0.05) human performance on frame-based evaluations, demonstrating an accuracy of 0.746 and an AUC of 0.824 compared to the human experts (accuracy = 0.605, AUC = 0.630). A similar improvement was observed in patient-based evaluations. The authors concluded that CNN models have a unique capability to identify CA manifestations in cine-CMR images compared to trained human experts.
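Frame-based predictions must be aggregated into a patient-level diagnosis; the sketch below shows one plausible averaging scheme on hypothetical per-frame probabilities (the paper's exact aggregation rule may differ).

```python
import numpy as np

def patient_prediction(frame_probs: np.ndarray, threshold: float = 0.5) -> int:
    """Aggregate per-frame CA probabilities from a cine sequence into one
    patient-level call by averaging -- one plausible scheme, assumed here."""
    return int(frame_probs.mean() >= threshold)

# Hypothetical model outputs for 25 cine frames of a single patient.
frame_probs = np.random.beta(5, 2, size=25)  # skewed toward CA-positive
print("CA" if patient_prediction(frame_probs) else "LVH")
```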
Electrical conductivity varies considerably among biological tissues and with the movement of gases and fluids within them. Electrical impedance tomography (EIT) is a non-invasive medical imaging modality that uses surface electrodes to measure the electrical permittivity, impedance, and conductivity of biological tissues. However, EIT image reconstruction poses an inverse problem: the non-linear and noisy nature of EIT acquisition results in sub-optimal reconstruction. Recently, artificial neural networks (ANNs) have gained prominence in tackling this inverse problem. Rixen et al. [45] proposed an ANN model to resolve the EIT inverse problem. The authors reused the dense layers in the ANN model multiple times, exploiting the rotational symmetries exhibited by EIT in the circular domain. The authors used an α-blending method to generate synthetic data and augment the training samples. Superior reconstruction performance and robustness to noise were reported with augmented training, in which the ANN model demonstrated higher values for the amplitude response (AR: 0.14) and lower values for the position error (PE: 7.1) compared to conventional methods (AR: 0.1; PE: 11.0).
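Reading the α-blending augmentation as a mixup-style convex combination of existing samples, which is an assumption on our part, it can be sketched in a few lines; the measurement-vector length below is a hypothetical placeholder.

```python
import numpy as np

def alpha_blend(x1: np.ndarray, x2: np.ndarray, rng: np.random.Generator):
    """Synthesize a training sample as a convex combination of two existing
    EIT samples, x = a*x1 + (1-a)*x2 -- a mixup-style reading of the paper's
    alpha-blending augmentation (the exact scheme may differ)."""
    a = rng.uniform(0.0, 1.0)
    return a * x1 + (1.0 - a) * x2

rng = np.random.default_rng(0)
v1, v2 = rng.normal(size=208), rng.normal(size=208)  # hypothetical voltage vectors
synthetic = alpha_blend(v1, v2, rng)
```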
In conclusion, the manuscripts published in this Special Issue discuss novel, state-of-the-art methods for binary, multiclass, and multi-label classification, 2D and 3D image segmentation, object detection and localization, image reconstruction, generalization, recommendation, and inter-reader consensus analysis toward identifying, segmenting, classifying, quantifying, reconstructing, and interpreting cardiopulmonary diseases using several medical imaging modalities, including CT, MRI, CXRs, and EIT. Nevertheless, deploying these approaches in real-world clinical settings remains an open avenue for research. We would like to express our sincere thanks to the authors for their significant contributions. We hope that readers benefit from these research findings and that the work included in this Special Issue inspires novel methods for diagnosis, treatment, and processes that could eventually improve healthcare.

Author Contributions

Conceptualization, S.R. and S.A.; methodology, S.R. and S.A.; software, S.R. and S.A.; validation, S.R. and S.A.; formal analysis, S.R.; investigation, S.A.; resources, S.A.; data curation, S.R.; writing—original draft preparation, S.R.; writing—review and editing, S.R. and S.A.; visualization, S.R.; supervision, S.A.; project administration, S.A.; funding acquisition, S.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Intramural Research Program of the National Library of Medicine (NLM), National Institutes of Health (NIH), USA.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Patel, B.; Priefer, R. Impact of Chronic Obstructive Pulmonary Disease, Lung Infection, and/or Inhaled Corticosteroids Use on Potential Risk of Lung Cancer. Life Sci. 2022, 294, 120374.
2. Pande, T.; Cohen, C.; Pai, M.; Ahmad Khan, F. Computer-Aided Detection of Pulmonary Tuberculosis on Digital Chest Radiographs: A Systematic Review. Int. J. Tuberc. Lung Dis. 2016, 20, 1226–1230.
3. Rajaraman, S.; Folio, L.R.; Dimperio, J.; Alderson, P.O.; Antani, S.K. Improved Semantic Segmentation of Tuberculosis—Consistent Findings in Chest X-Rays Using Augmented Training of Modality-Specific U-Net Models with Weak Localizations. Diagnostics 2021, 11, 616.
4. Zamzmi, G.; Rajaraman, S.; Hsu, L.-Y.; Sachdev, V.; Antani, S. Real-Time Echocardiography Image Analysis and Quantification of Cardiac Indices. Med. Image Anal. 2022, 80, 102438.
5. Freedman, M.T.; Lo, S.C.B.; Seibel, J.C.; Bromley, C.M. Lung Nodules: Improved Detection with Software That Suppresses the Rib and Clavicle on Chest Radiographs. Radiology 2011, 260, 265–273.
6. Hua, K.L.; Hsu, C.H.; Hidayati, S.C.; Cheng, W.H.; Chen, Y.J. Computer-Aided Classification of Lung Nodules on Computed Tomography Images via Deep Learning Technique. Onco Targets Ther. 2015, 8, 2015–2022.
7. Rajaraman, S.; Sornapudi, S.; Alderson, P.O.; Folio, L.R.; Antani, S.K. Analyzing Inter-Reader Variability Affecting Deep Ensemble Learning for COVID-19 Detection in Chest Radiographs. PLoS ONE 2020, 15, e0242301.
8. Rajaraman, S.; Jaeger, S.; Thoma, G.R.; Antani, S.K.; Silamut, K.; Maude, R.J.; Hossain, M.A. Understanding the Learned Behavior of Customized Convolutional Neural Networks toward Malaria Parasite Detection in Thin Blood Smear Images. J. Med. Imaging 2018, 5, 034501.
9. Rajaraman, S.; Antani, S. Visualizing Salient Network Activations in Convolutional Neural Networks for Medical Image Modality Classification; Springer: Singapore, 2019; Volume 1036.
10. Shome, D.; Kar, T.; Mohanty, S.N.; Tiwari, P.; Muhammad, K.; Altameem, A.; Zhang, Y.; Saudagar, A.K.J. Covid-Transformer: Interpretable Covid-19 Detection Using Vision Transformer for Healthcare. Int. J. Environ. Res. Public Health 2021, 18, 11086.
11. Rajaraman, S.; Kim, I.; Antani, S.K. Detection and Visualization of Abnormality in Chest Radiographs Using Modality-Specific Convolutional Neural Network Ensembles. PeerJ 2020, 8, e8693.
12. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of Deep Learning: Concepts, CNN Architectures, Challenges, Applications, Future Directions; Springer International Publishing: New York, NY, USA, 2021; Volume 8.
13. Shin, H.C.; Roth, H.R.; Gao, M.; Lu, L.; Xu, Z.; Nogues, I.; Yao, J.; Mollura, D.; Summers, R.M. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Trans. Med. Imaging 2016, 35, 1285–1298.
14. Suzuki, K. Overview of Deep Learning in Medical Imaging. Radiol. Phys. Technol. 2017, 10, 257–273.
15. Zamzmi, G.; Rajaraman, S.; Antani, S. UMS-Rep: Unified Modality-Specific Representation for Efficient Medical Image Analysis. Informatics Med. Unlocked 2021, 24, 100571.
16. Zeiler, M.D.; Fergus, R. Visualizing and Understanding Convolutional Networks. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2014; Volume 8689, pp. 818–833.
17. Gozzi, N.; Giacomello, E.; Sollini, M.; Kirienko, M.; Ammirabile, A.; Lanzi, P.; Loiacono, D.; Chiti, A. Image Embeddings Extracted from CNNs Outperform Other Transfer Learning Approaches in Classification of Chest Radiographs. Diagnostics 2022, 12, 2084.
18. Irvin, J.; Rajpurkar, P.; Ko, M.; Yu, Y.; Ciurea-Ilcus, S.; Chute, C.; Marklund, H.; Haghgoo, B.; Ball, R.; Shpanskaya, K.; et al. CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI 2019), Honolulu, HI, USA, 27 January–1 February 2019; pp. 590–597.
19. Kim, I.; Rajaraman, S.; Antani, S. Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities. Diagnostics 2019, 9, 38.
20. Rajaraman, S.; Sornapudi, S.; Kohli, M.; Antani, S. Assessment of an Ensemble of Machine Learning Models toward Abnormality Detection in Chest Radiographs. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS), Berlin, Germany, 23–27 July 2019.
21. Rajaraman, S.; Antani, S.K. Modality-Specific Deep Learning Model Ensembles Toward Improving TB Detection in Chest Radiographs. IEEE Access 2020, 8, 27318–27326.
22. Selvaraju, R.R.; Das, A.; Vedantam, R.; Cogswell, M.; Parikh, D.; Batra, D. Grad-CAM: Why Did You Say That? arXiv 2016, arXiv:1611.07450.
23. Huang, G.-H.; Fu, Q.-J.; Gu, M.-Z.; Lu, N.-H.; Liu, K.-Y.; Chen, T.-B. Deep Transfer Learning for the Multilabel Classification of Chest X-Ray Images. Diagnostics 2022, 12, 1457.
24. Wang, X.; Peng, Y.; Lu, L.; Lu, Z.; Bagheri, M.; Summers, R.M. ChestX-Ray8: Hospital-Scale Chest X-Ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1–19.
25. Therrien, R.; Doyle, S. Role of Training Data Variability on Classifier Performance and Generalizability. Digit. Pathol. 2018, 10581, 58–70.
26. Karki, M.; Kantipudi, K.; Yang, F.; Yu, H.; Wang, Y.X.J.; Yaniv, Z.; Jaeger, S. Generalization Challenges in Drug-Resistant Tuberculosis Detection from Chest X-Rays. Diagnostics 2022, 12, 188.
27. Mueller, J.A.; Martini, K.; Eberhard, M.; Mueller, M.A.; De Silvestro, A.A.; Breiding, P.; Frauenfelder, T. Diagnostic Performance of Dual-Energy Subtraction Radiography for the Detection of Pulmonary Emphysema: An Intra-Individual Comparison. Diagnostics 2021, 11, 1849.
28. Rajaraman, S.; Cohen, G.; Spear, L.; Folio, L.; Antani, S. DeBoNet: A Deep Bone Suppression Model Ensemble to Improve Disease Detection in Chest Radiographs. PLoS ONE 2022, 17, e0265691.
29. Arthur, R. Interpretation of the Paediatric Chest X-Ray. Paediatr. Respir. Rev. 2000, 1, 41–50.
30. Li, D.; Pehrson, L.M.; Lauridsen, C.A.; Tøttrup, L.; Fraccaro, M.; Elliott, D.; Zając, H.D.; Darkner, S.; Carlsen, J.F.; Nielsen, M.B. The Added Effect of Artificial Intelligence on Physicians' Performance in Detecting Thoracic Pathologies on CT and Chest X-Ray: A Systematic Review. Diagnostics 2021, 11, 2206.
31. Santosh, K.C.; Allu, S.; Rajaraman, S.; Antani, S. Advances in Deep Learning for Tuberculosis Screening Using Chest X-Rays: The Last 5 Years Review. J. Med. Syst. 2022, 46, 82.
32. Rajaraman, S.; Guo, P.; Xue, Z.; Antani, S.K. A Deep Modality-Specific Ensemble for Improving Pneumonia Detection in Chest X-Rays. Diagnostics 2022, 12, 1442.
33. Shih, G.; Wu, C.C.; Halabi, S.S.; Kohli, M.D.; Prevedello, L.M.; Cook, T.S.; Sharma, A.; Amorosa, J.K.; Arteaga, V.; Galperin-Aizenberg, M.; et al. Augmenting the National Institutes of Health Chest Radiograph Dataset with Expert Annotations of Possible Pneumonia. Radiol. Artif. Intell. 2019, 1, e180041.
34. Wang, B.; Jin, S.; Yan, Q.; Xu, H.; Luo, C.; Wei, L.; Zhao, W.; Hou, X.; Ma, W.; Xu, Z.; et al. AI-Assisted CT Imaging Analysis for COVID-19 Screening: Building and Deploying a Medical AI System. Appl. Soft Comput. 2021, 98, 106897.
35. Liu, C.; Yin, Q. Automatic Diagnosis of COVID-19 Using a Tailored Transformer-like Network. J. Phys. Conf. Ser. 2021, 2010, 012175.
36. Vayá, M.D.L.I.; Saborit, J.M.; Montell, J.A.; Pertusa, A.; Bustos, A.; Cazorla, M.; Galant, J.; Barber, X.; Orozco-Beltrán, D.; García-García, F.; et al. BIMCV COVID-19+: A Large Annotated Dataset of RX and CT Images from COVID-19 Patients. arXiv 2020, arXiv:2006.01174.
37. Suri, J.S.; Agarwal, S.; Elavarthi, P.; Pathak, R.; Ketireddy, V.; Columbu, M.; Saba, L.; Gupta, S.K.; Faa, G.; Singh, I.M.; et al. Inter-Variability Study of COVLIAS 1.0: Hybrid Deep Learning Models for COVID-19 Lung Segmentation in Computed Tomography. Diagnostics 2021, 11, 2025.
38. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015.
39. Wang, H.J.; Chen, L.W.; Lee, H.Y.; Chung, Y.J.; Lin, Y.T.; Lee, Y.C.; Chen, Y.C.; Chen, C.M.; Lin, M.W. Automated 3D Segmentation of the Aorta and Pulmonary Artery on Non-Contrast-Enhanced Chest Computed Tomography Images in Lung Cancer Patients. Diagnostics 2022, 12, 967.
40. Khan, M.A.; Rajinikanth, V.; Satapathy, S.C.; Taniar, D.; Mohanty, J.R.; Tariq, U.; Damaševičius, R. VGG19 Network Assisted Joint Segmentation and Classification of Lung Nodules in CT Images. Diagnostics 2021, 11, 2208.
41. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
42. AlOthman, A.F.; Sait, A.R.W.; Alhussain, T.A. Detecting Coronary Artery Disease from Computed Tomography Images Using a Deep Learning Technique. Diagnostics 2022, 12, 2073.
43. Germain, P.; Vardazaryan, A.; Padoy, N.; Labani, A.; Roy, C.; Schindler, T.H.; El Ghannudi, S. Deep Learning Supplants Visual Analysis by Experienced Operators for the Diagnosis of Cardiac Amyloidosis by Cine-CMR. Diagnostics 2022, 12, 69.
44. Oda, S.; Kidoh, M.; Nagayama, Y.; Takashio, S.; Usuku, H.; Ueda, M.; Yamashita, T.; Ando, Y.; Tsujita, K.; Yamashita, Y. Trends in Diagnostic Imaging of Cardiac Amyloidosis: Emerging Knowledge and Concepts. Radiographics 2020, 40, 961–981.
45. Rixen, J.; Eliasson, B.; Hentze, B.; Muders, T.; Putensen, C.; Leonhardt, S.; Ngo, C. A Rotational Invariant Neural Network for Electrical Impedance Tomography Imaging without Reference Voltage: RF-REIM-NET. Diagnostics 2022, 12, 777.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
