Review

Artificial Intelligence for Image Analysis in Oral Squamous Cell Carcinoma: A Review

by Vanesa Pereira-Prado 1, Felipe Martins-Silveira 1, Estafanía Sicco 1, Jimena Hochmann 1, Mario Alberto Isiordia-Espinoza 2, Rogelio González González 3, Deepak Pandiar 4 and Ronell Bologna-Molina 1,3,*

1 Molecular Pathology Area, School of Dentistry, Universidad de la República, Montevideo 11400, Uruguay
2 Department of Clinics, Los Altos University Center, Institute of Research in Medical Sciences, University of Guadalajara, Guadalajara 44100, Mexico
3 Research Department, School of Dentistry, Universidad Juárez del Estado de Durango, Durango 34000, Mexico
4 Department of Oral Pathology and Microbiology, Saveetha Dental College and Hospitals, Chennai 600077, India
* Author to whom correspondence should be addressed.
Diagnostics 2023, 13(14), 2416; https://doi.org/10.3390/diagnostics13142416
Submission received: 26 June 2023 / Revised: 12 July 2023 / Accepted: 17 July 2023 / Published: 20 July 2023
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

Abstract

Head and neck tumor differential diagnosis and prognosis have always been a challenge for oral pathologists due to their similarities and complexity. Novel artificial intelligence applications can serve as an auxiliary tool for the objective interpretation of histomorphological digital slides. In this review, we present applications of digital histopathological image analysis in oral squamous cell carcinoma. A literature search was performed in PubMed MEDLINE with the following keywords: “artificial intelligence” OR “deep learning” OR “machine learning” AND “oral squamous cell carcinoma”. Artificial intelligence has proven to be a helpful tool in the histopathological image analysis of tumors and other lesions, although further research, mainly on clinical validation, is still needed.

1. Introduction

Oral cancer is the sixth most common cancer worldwide, with a high risk of morbidity and mortality [1]. Its diagnosis relies largely on the correlation of clinical features and histopathological parameters, with more than 90% of cases being morphologically diagnosed as oral squamous cell carcinoma (OSCC) [2]. OSCC is an aggressive type of oral cancer, and its prognosis worsens with diagnostic delay [3]. The diagnosis of OSCC is sometimes difficult to establish, mainly because of the heterogeneity of the clinical lesions, compounded by the subjectivity of the histopathological interpretation of some cases, which depends on the empirical experience of the oral pathologist [4].
Further, given the existing controversy in the differential diagnosis of various head and neck cancers, objective histomorphological characterization based on novel technologies has emerged as an auxiliary tool for the interpretation of digital images. In recent decades, important advances have been made in microscopy as a result of scientific and technological development in four fundamental areas: optics (improved contrast and spatial resolution, reduced aberrations), sensors (compact digital cameras with high-resolution sensors that deliver increasingly better images at lower cost), lighting sources (more stable, precise, and economical light sources with less environmental impact), and computation (both hardware and software, allowing images of greater complexity and quality to be acquired, stored, processed, analyzed, and even reconstructed) [5]. At present, histopathological analysis of tissue biopsies by an oral pathologist is the gold standard for the diagnosis of OSCC. Digital slide scanners have provided new insight into tissue histopathology, with several advantages for the field, such as making possible the application of computerized image analysis and machine learning (ML) techniques. Algorithms are now being developed for research, disease detection, diagnosis, and prognosis prediction, supporting the judgment of the pathologist. This objective characterization and pattern recognition of tissues and structures in digital slides is important not only from a diagnostic point of view but also for understanding the biological mechanisms of the pathological process for research purposes. In this review, we briefly focus on novel applications of digital histopathological image analysis in OSCC.

2. Materials and Methods

A literature search was performed in the PubMed MEDLINE online database to develop the present review. The following keywords were used: “artificial intelligence”, “deep learning”, “machine learning”, and “oral squamous cell carcinoma”. These terms were combined with the Boolean operators “AND” and “OR” in the following order: “artificial intelligence” OR “deep learning” OR “machine learning” AND “oral squamous cell carcinoma”. The literature search covered all records from inception until May 2023, with no date restrictions applied. In addition, a manual search of the references of the selected articles was performed to identify additional studies. Restricted-access articles were retrieved through the institutional access portal of Universidad de la República (“TIMBO”).
The digital search yielded a total of 419 articles. The titles and abstracts of each article were screened, and the articles containing relevant information were selected for full-text evaluation. Finally, a total of 42 articles were selected and included in the present review.

3. General Concepts of Artificial Intelligence

Artificial intelligence (AI) is a branch of computer science dedicated to the development of algorithms whose main objective is to perform functions traditionally associated with human intelligence. For this purpose, the computer system acquires information from input or past data by means of the different subsets of AI, such as ML, neural networks (NN), and deep learning (DL) [6,7]. ML is an arm of AI that explores the construction of computational algorithms to build a computer system that learns from a predefined database. ML methods can be categorized according to training data availability, the algorithm process, and the segmentation model applied; the three most commonly used are supervised learning (where the data are pre-labeled by the operator), unsupervised learning (where the data are unlabeled), and semi-supervised learning (where the data are a mixture of labeled and unlabeled examples). DL, in turn, is a subfield of AI based on NN. These artificial networks are composed of multiple interconnected neuron layers. DL uses complex models that exceed the capabilities of classical ML tools such as logistic regression and support vector machines [8]. It is useful to distinguish the main types of learning tasks (supervised, unsupervised, semi-supervised or hybrid, and reinforcement learning), whose taxonomy is based on how they are used to solve different problems (Figure 1) [9,10,11].
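To make the distinction concrete, the following minimal Python sketch contrasts supervised and unsupervised learning on synthetic data; the features, labels, and model choices are illustrative placeholders and do not come from any study cited in this review.

```python
# Minimal sketch: supervised vs. unsupervised learning on toy feature vectors.
# The synthetic data and model choices are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy "nuclear features" (e.g., area, circularity) for two hypothetical classes.
features = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)

# Supervised learning: the operator provides labels and the model learns a mapping.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, labels)
print("supervised prediction:", clf.predict([[2.8, 3.1]]))

# Unsupervised learning: no labels; the algorithm groups samples by similarity.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
print("unsupervised cluster assignments:", km.labels_[:5])
```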
DL is a subset of AI that uses artificial NN to learn a mapping from input to output data [12]. In this context, in the field of digital pathology, another important method for image-based DL is image segmentation. Image segmentation relies on the separation of multiple parts of an initial whole slide image (WSI) in order to obtain images or objects of interest and cluster them according to their optical properties [13]. The acquisition of WSIs allows the analysis and magnification of high-resolution, high-quality images, facilitating the visualization of stained tissue slides and even the exchange of cases between oral pathologists [14].

4. Digital Image Analysis

The most commonly used image processing and analysis system is ImageJ 1.53t©, open-access software designed for the study of multidimensional images; it is also the basis of FIJI 1.53t©, another freely available open-access package [15]. It is considered highly practical, reproducible, impartial, and efficient, and above all it is enriched by a scientific community that continually develops new tools called plugins. A plugin is a small software component dedicated to a specific task, such as color deconvolution, cell counting, color segmentation, or the watershed transform, among others. These tools are created to assist pathologists and clinical practitioners in decision-making and to reduce diagnostic delay and prognostic errors, with the aim of promoting patients’ health.
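Several of the tasks these plugins address have open Python counterparts as well. As one hedged example, the sketch below uses scikit-image's implementation of Ruifrok–Johnston color deconvolution to separate staining channels; it is an analogue of the ImageJ color deconvolution plugin rather than the plugin itself, and the file name is a hypothetical placeholder.

```python
# Illustrative analogue of the ImageJ "Colour Deconvolution" plugin using scikit-image.
# 'slide_patch.png' is a hypothetical H&E (or H-DAB) image patch.
from skimage import io
from skimage.color import rgb2hed

rgb = io.imread("slide_patch.png")[..., :3]      # drop alpha channel if present
hed = rgb2hed(rgb)                               # separate hematoxylin, eosin, DAB channels

hematoxylin = hed[..., 0]                        # nuclear counterstain channel
dab = hed[..., 2]                                # immunohistochemical (DAB) channel
print("hematoxylin channel range:", hematoxylin.min(), hematoxylin.max())
```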
According to Alabi et al., ML applications in oral cancer cover a large panel of areas, ranging from combinations of clinical, pathological, and genomic data to image and autofluorescence information; they determined that deep NN were among the most widely used approaches for oral cancer analysis [16]. A deep NN is an AI model inspired by the human brain that processes information one layer at a time, extracting and labeling relevant data through the layers in order to classify new data [17]. Despite the precision and objectivity of the models, the authors stated that diagnosis and prognosis remain hard to achieve, with few real contributions to the medical field so far. Algorithms are usually constructed for a specific type of tissue, making it difficult to generalize the procedures to other entities that do not share the same characteristics. Moreover, studies can be limited by the number of images available for processing, making it challenging to obtain adequately sized training and testing datasets.

5. Whole Slide Images

In pathology, the use of WSI offers several benefits. WSI provides the opportunity to transform entire tissues on glass slides into high-resolution digital virtual slides [18]. Figure 2 shows an overview of a specific field of a WSI of OSCC. In this context, WSI appears as an important tool for applying different AI algorithms in both the diagnostic and research fields. It also has several advantages compared with more conventional techniques such as optical microscopy: in addition to the possibility of using AI, as already mentioned, the digitized file preserves the quality of the staining techniques used on the tissue and becomes portable and easy to share with other pathologists worldwide [19].
Several studies have already applied digital images, and more specifically WSI, to AI methods for different approaches in head and neck squamous cell carcinoma (HNSCC). These methodologies have the advantage of automated quantification in WSIs of tissue slides, which enables more robust analyses of results and reduces operator bias. Sung et al. used immunohistochemistry, WSI, and semiautomated tools to propose a scoring system for predicting pathological risk in OSCC [20]. Regardless of the specific results of the study, the authors demonstrated a novel method that, through WSI and image analysis, allows the automated measurement and quantification of tumor area, tumor-infiltrating lymphocytes, and tumor budding. Another study developed an automated score for the quantification of tumor-associated stroma infiltrating lymphocytes in WSIs [21]. Building on the advantages of these modern techniques, the methodology allowed the quantification of lymphocytes in areas adjacent to the tumor-associated stroma using the spatial co-occurrence statistics of both tumor-associated stroma and lymphocytes. This study showed, for the first time, an automated quantitative score of lymphocytic infiltration in the tumor-associated stroma of HNSCC. These examples demonstrate the objectivity and reproducibility advantages of automated quantification through WSIs.
WSIs have also served as a tool to investigate parameters for diagnostic purposes in OSCC; for example, the authors of another study digitized WSIs of 90 OSCC cases to develop an AI training method that can recognize both cellular and structural atypia in this type of tumor [22]. The authors used a convolutional neural network (CNN) model to train on and evaluate 90,059 OSCC image patches of different sizes and showed that the method focused on both cellular and structural atypia, concluding that AI can be trained to evaluate these parameters and may be suitable for the diagnosis of OSCC. In this context, Halicek et al. used 192 digitized WSIs to investigate the ability to detect squamous cell carcinoma [23]. The digitized histological images from patients with HNSCC were used to train, validate, and test a CNN, and the study also showed the potential of WSI to increase both the efficiency and accuracy of pathologists in the detection of squamous cell carcinoma. As described, several advantages are related to advances in digital pathology and, in particular, to the use of WSIs. With digitized files, pathologists around the world can work with digital capture, storage, sharing, and visualization, and apply different advanced techniques for numerous analyses through specific software. Moreover, conservation over time and storage capacity are additional benefits of this kind of technology, which should be considered for short-term implementation across pathology centers.
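Patch-based approaches such as those above start by tiling a WSI into fixed-size images before CNN training. The following sketch, using the OpenSlide library, illustrates one possible tiling loop; the file name, patch size, and pyramid level are assumptions for illustration, and in practice background patches would also be filtered out.

```python
# Minimal sketch: tiling a whole slide image into patches for CNN training.
# 'case_001.svs', the patch size, and the pyramid level are hypothetical choices.
import openslide

slide = openslide.OpenSlide("case_001.svs")
level = 0                                   # highest-resolution pyramid level
patch_size = 512
width, height = slide.level_dimensions[level]

patches = []
for y in range(0, height - patch_size, patch_size):
    for x in range(0, width - patch_size, patch_size):
        # read_region returns an RGBA PIL image at the requested location and level
        patch = slide.read_region((x, y), level, (patch_size, patch_size)).convert("RGB")
        patches.append(patch)

print(f"extracted {len(patches)} patches of {patch_size}x{patch_size} px")
```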

6. Image Segmentation

Automatic segmentation of digitized histological images into regions that represent different types of tissue is highly important for developing digital diagnostic, prognostic, and therapeutic tools. The segmentation technique is a computational procedure that processes digital images by grouping pixels with similar colorimetric properties into regions that probably represent objects of interest (for example, cells, vessels, and other structures in the tissue). These regions can then be characterized geometrically to obtain qualitative and quantitative information about the objects they represent [24,25]. The most commonly used software packages for applying this technique are ImageJ and FIJI, as mentioned before, which are constantly developing new plugins and tools to achieve specific objectives; thresholding, StarDist, the watershed transform, Trainable WEKA Segmentation, and Labkit, among others, are some of the tools used for image segmentation and are explained below.
Pattern recognition techniques are another type of segmentation method in which certain characteristics, such as color, shape, and size, are selected, and the results are then clustered into regions that can correspond to histological classes [24,25]. Most of the methods applied to histopathological images stained with H&E, immunohistochemistry, or histochemical procedures are based on thresholding, starting from a threshold value of color or intensity at which the object of interest is identified [26]. In this manner, thresholding is a tool that clusters pixels based on shared characteristics. The result is a binary image with a value of 1 for the object of interest and 0 for the background. This procedure is part of a more complex process that allows pathologists to determine the presence or absence of certain components in tissue samples, such as in the steps used by Pereira-Prado et al. [27] for comparing odontogenic entities.
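As a minimal illustration of threshold-based segmentation, the sketch below applies Otsu's automatic threshold to produce a binary mask and an estimate of the stained area fraction; the image file is a hypothetical placeholder, and Otsu's method is only one of several possible thresholding strategies.

```python
# Minimal sketch of threshold-based segmentation producing a binary mask (1 = object, 0 = background).
# 'ihc_patch.png' is a hypothetical immunohistochemistry image patch.
import numpy as np
from skimage import io, color, filters

gray = color.rgb2gray(io.imread("ihc_patch.png")[..., :3])
threshold = filters.threshold_otsu(gray)          # automatic threshold on intensity
binary = (gray < threshold).astype(np.uint8)      # stained (darker) pixels become 1

positive_fraction = binary.mean()                 # proportion of pixels classified as object
print(f"stained area fraction: {positive_fraction:.2%}")
```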
Another relatively recent segmentation technique based on DL and NN is StarDist (Germany, 2018). It detects cell nuclei by predicting their morphological profile and is flexible and precise enough to compete with other segmentation methods. StarDist is based on a star-convex polygon shape that approximately represents the round nuclear morphology [28]. This model allows the processing not only of histological H&E images but also of fluorescence images for nuclei detection. Obtaining the nuclear outline also makes it possible to establish several morphological nuclear parameters, as well as to study nuclear density and condensation (Figure 3).
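The StarDist authors distribute pretrained 2D models through a Python package; a minimal sketch of nucleus detection with one of these models (the image path is a placeholder and the normalization settings are illustrative) could look like the following.

```python
# Minimal sketch: nucleus instance segmentation with a pretrained StarDist 2D model.
# 'he_patch.png' is a hypothetical H&E image patch.
from skimage import io
from csbdeep.utils import normalize
from stardist.models import StarDist2D

model = StarDist2D.from_pretrained("2D_versatile_he")     # pretrained model for H&E images
image = io.imread("he_patch.png")[..., :3]

# predict_instances returns a label image (one integer per nucleus) plus polygon details
labels, details = model.predict_instances(normalize(image, 1, 99.8))
print("detected nuclei:", labels.max())
```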
The watershed transform is a morphological segmentation method that splits objects of interest with watershed lines and catchment basins, constructing a three-dimensional topographic map (considering intensity as the altitude of the sample) in which water immersion is simulated. In this manner, watershed lines are defined where different catchment basins come into contact. For histopathological purposes, this technique, using nuclear location and intensity, allows the segmentation of tissues into virtual cells (v-cells) to obtain morphological characteristics of cells and layers [29]. To apply the plugin, images must be of 8-bit binary type (values 0 and 255), with the white background being the part that is segmented.
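Outside ImageJ, the same marker-controlled watershed idea is available in scikit-image; the sketch below assumes a binary nucleus mask already exists and splits touching objects using the distance transform as the topographic map. The file name and parameters are illustrative assumptions.

```python
# Minimal sketch: marker-controlled watershed segmentation to split touching nuclei.
# 'binary_mask.npy' is a hypothetical 0/1 nucleus mask from a prior thresholding step.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

binary_mask = np.load("binary_mask.npy").astype(np.uint8)

# Distance from each foreground pixel to the background acts as the "altitude" map.
distance = ndi.distance_transform_edt(binary_mask)
peaks = peak_local_max(distance, min_distance=10, labels=binary_mask)
markers = np.zeros_like(distance, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

# Flooding the inverted distance map from the markers separates touching objects.
labels = watershed(-distance, markers, mask=binary_mask.astype(bool))
print("segmented objects:", labels.max())
```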
Trainable WEKA Segmentation (WEKA stands for Waikato Environment for Knowledge Analysis; New Zealand, 2009) and Labkit (Labeling and Segmentation Toolkit for Big Image Data; Germany, Figure 4) are two open-source tools that combine FIJI with ML classification algorithms, such as fast random forests, to segment and classify pixels [30,31]. To use these tools, the operator has to know and label the objects of interest to be recognized by the software prior to its training. The labeled objects of interest are used as examples to train the model and classify new images [30,31]. In this manner, histological images can be segmented to identify structural components, from epithelial and connective tissue layers to nuclei, basement membranes, cells, and vessels. Moreover, this kind of technique allows images to be differentiated according to their staining, for example, segmenting objects stained by immunohistochemistry separately from the background stained with Mayer’s hematoxylin.
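Conceptually, these tools train a pixel classifier from sparse operator labels and then apply it to the whole image. The sketch below is a simplified Python analogue of that workflow using a random forest on per-pixel features; it is not the WEKA or Labkit implementation, and the feature stack, file names, and label map are assumptions.

```python
# Simplified analogue of trainable pixel classification (in the spirit of WEKA/Labkit):
# a random forest learns from a few labeled pixels and classifies the rest of the image.
import numpy as np
from skimage import io, color, filters
from sklearn.ensemble import RandomForestClassifier

rgb = io.imread("ihc_patch.png")[..., :3]                  # hypothetical image patch
gray = color.rgb2gray(rgb)

# Per-pixel feature stack: raw channels plus smoothed and edge responses.
features = np.dstack([rgb / 255.0,
                      filters.gaussian(gray, sigma=2)[..., None],
                      filters.sobel(gray)[..., None]]).reshape(-1, 5)

# Sparse operator annotations: 0 = unlabeled, 1 = stained tissue, 2 = background.
annotations = np.load("sparse_labels.npy").reshape(-1)     # hypothetical label map
train = annotations > 0

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features[train], annotations[train])
segmentation = clf.predict(features).reshape(gray.shape)   # full-image pixel classes
```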

7. Comparing Data Results

One of the problems with AI procedures pertains to the comparison between samples. Pathologists should operate under the same protocol in order to compare their results: magnification, resolution, image size, and staining standardization procedures, among others. Bias can also arise from using the same dataset for both model selection and model evaluation. To avoid this, Mahmood et al. recommended dividing the dataset into three groups: model training, optimal model selection, and model validation, as well as adding new data to the last two groups [32]. Moreover, the authors suggested including samples from different pathology centers in order to increase diversity and capture biological variation across demographic locations. Further, the application of supervised training techniques requires human annotation of parameters to train the segmentation model; multiple pathologists should perform this task to minimize subjectivity and reduce inter-pathologist variation. Pre-processing images when standardized sample handling is not possible is also an option: adjusting contrast and reducing noise helps to delineate structures and differentiate tissues, and using filters for this purpose is recommended, facilitating image standardization and comparison.
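A minimal sketch of the three-way split recommended by Mahmood et al. (training, model selection, and held-out validation) is shown below; the arrays, proportions, and stratification choice are illustrative assumptions, not a prescription from the cited study.

```python
# Minimal sketch: splitting a dataset into training, model-selection, and validation sets.
# X (features) and y (labels) are hypothetical; the proportions are illustrative only.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 64)                  # e.g., 1000 image patches, 64 features each
y = np.random.randint(0, 2, size=1000)        # e.g., tumor vs. non-tumor labels

# First hold out a validation set, then split the remainder for training and model selection.
X_rest, X_val, y_rest, y_val = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
X_train, X_sel, y_train, y_sel = train_test_split(
    X_rest, y_rest, test_size=0.25, stratify=y_rest, random_state=0)

print(len(X_train), "training /", len(X_sel), "model selection /", len(X_val), "validation")
```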

8. Advances in Oral Cancer Research

Diagnosis and prognosis determination using AI software must rely on knowledge of cancerous tissues and the identification of specific parameters that distinguish between states. Accordingly, several studies have addressed specific characteristics of these entities: keratinization and keratin pearl areas [33], stage differentiation using a linear layer NN classifier and hyperspectral imaging [34,35,36], cell nuclei segmentation [37], immunohistochemical biomarkers [38,39], and textural, shape, and color features [40], among others. In this section, some of the recent advances in the use of AI specific to OSCC are presented.
A study by Pratama et al. assessed the possibility of classifying OSCC from different sites based on RNA sequencing data using a CNN, resulting in poorer performance when differentiating histopathological features [41]. In another study, Santer et al. explored the classification of cervical lymph nodes in locally advanced OSCC with ML and DL, finding an accuracy of 86% for the training and testing sets [42]. Irrespective of the selected software, this suggests that quantitative AI analysis is a promising diagnostic support tool.
Das et al. differentiated OSCC from normal tissue using 1224 histopathological images of the oral cavity (290 of normal tissue and 934 of cancerous tissue) and applied CNN frameworks. This DL approach showed 82% accuracy when compared with other state-of-the-art models, suggesting its application as an automated tool to identify oral cancer [43]. In another study, Yang et al. demonstrated that a custom-made DL model improved both the accuracy and the speed of OSCC diagnosis [44]. Rahman et al. showed that a transfer learning model using AlexNet as the CNN to predict oral cancer from OSCC biopsy images reached 90.06% accuracy; performance was evaluated with parameters such as classification accuracy, classification miss rate, sensitivity, specificity, F1-score, positive predictive value, negative predictive value, false positive ratio, false negative ratio, positive likelihood ratio, negative likelihood ratio, and the Fowlkes–Mallows index [45]. The authors therefore proposed that the studied model could be further refined through collaborations in the medical field.
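For orientation, many of the performance figures quoted in this section derive from a confusion matrix; the short sketch below shows how accuracy, sensitivity, specificity, and F1-score are computed from invented counts that do not correspond to any cited study.

```python
# Minimal sketch: deriving common performance metrics from a confusion matrix.
# The counts are invented purely for illustration.
tp, fn, fp, tn = 450, 50, 40, 460           # hypothetical test-set counts

accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                # a.k.a. recall / true positive rate
specificity = tn / (tn + fp)
precision   = tp / (tp + fp)                # a.k.a. positive predictive value
f1_score    = 2 * precision * sensitivity / (precision + sensitivity)

print(f"accuracy={accuracy:.3f}  sensitivity={sensitivity:.3f}  "
      f"specificity={specificity:.3f}  F1={f1_score:.3f}")
```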
In terms of predicting the survival of oral cancer patients, Kim et al. compared, in a retrospective study of 225 patients, a DL-based survival prediction method (DeepSurv) with classical statistical methods [46]. By calculating Harrell’s c-index, they demonstrated that DeepSurv had the best performance among the compared methods. Similarly, Tseng et al. used data mining to establish a model for predicting the 5-year disease-free survival rate and the 5-year disease-specific survival rate of oral cancer patients, comparing the traditional statistical method of logistic regression with the decision tree method and an artificial NN model [47]. The results showed the superiority of the decision tree and the artificial NN over the traditional method. Both studies suggest that AI may help predict the prognosis of oral cancer; however, more studies are needed to support this conclusion.
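Harrell's c-index, used in the comparison above, measures how well predicted risks rank the observed survival times (1.0 is a perfect ranking, 0.5 is random). The sketch below computes it with the lifelines library on invented data; the risk scores are negated because the function expects higher scores to indicate longer predicted survival.

```python
# Minimal sketch: Harrell's concordance index (c-index) for survival predictions.
# Times, risks, and event indicators are invented for illustration.
from lifelines.utils import concordance_index

follow_up_months = [12, 30, 24, 60, 48, 9]        # observed survival/censoring times
predicted_risk   = [0.9, 0.3, 0.5, 0.1, 0.2, 0.8] # higher = higher predicted risk of death
event_observed   = [1, 0, 1, 0, 0, 1]             # 1 = death observed, 0 = censored

# concordance_index expects survival-time-like scores, so negate the risks.
cindex = concordance_index(follow_up_months, [-r for r in predicted_risk], event_observed)
print(f"c-index: {cindex:.2f}")                   # 1.0 = perfect ranking, 0.5 = random
```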

9. Role of AI in Immunofluorescence and Immunohistochemistry Oral Cancer Images

The application of immunohistochemistry and immunofluorescence for studying oral cancer behavior is widely known [48]. Biomarker expression analysis through quantitative digital techniques facilitates the interpretation of their possible implications, based on whether expression is present or absent in the pathological digital images. Kawamura et al. studied the expression of VEGF-C, VEGF-D, NRP1, NRP2, CCR7, and SEMA3E in 1854 images from 76 patients with OSCC using a multilayer perceptron NN [48]. The model reached an accuracy of 98.6% in assessing staining levels (high or low) without considering morphological features, and the results were also associated with the presence of cervical lymph node metastasis. The authors suggested that this model can identify cervical lymph node metastasis from primary tongue tumors.
Multiplex immunofluorescence imaging to predict the combined positive score (CPS) of certain markers (such as PD-L1) has been analyzed in head and neck squamous cell carcinoma with DL and ML techniques [49,50]. Manual scoring by a pathologist consists of scoring PD-L1 expression on at least 100 tumor cells, which is laborious, time-consuming, and subjective. The proposed AI approach increases the number of tumors that can be analyzed per period of time, as well as helping to estimate the likelihood of responsiveness to immunotherapies [49,50]. Tsakiroglou et al. also studied PD-L1 immunofluorescence staining, as well as other markers, in oropharyngeal squamous cell carcinomas using DL and a CNN to pre-process images and then segment them with the QuPath software. The authors described a new tool to support diagnosis and therapies targeting the PD-1/PD-L1 pathway of immune escape [51].
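For reference, the combined positive score is conventionally defined as the number of PD-L1-positive cells (tumor cells, lymphocytes, and macrophages) divided by the number of viable tumor cells, multiplied by 100 and capped at 100; the sketch below encodes that arithmetic with invented counts and is independent of any particular study's pipeline.

```python
# Minimal sketch: combined positive score (CPS) computation for PD-L1 scoring.
# The cell counts are invented; in practice they come from manual or automated cell detection.
def combined_positive_score(pdl1_positive_cells: int, viable_tumor_cells: int) -> float:
    """CPS = 100 * PD-L1-positive cells (tumor cells, lymphocytes, macrophages)
    / viable tumor cells, capped at 100."""
    return min(100.0, 100.0 * pdl1_positive_cells / viable_tumor_cells)

print(combined_positive_score(pdl1_positive_cells=37, viable_tumor_cells=180))  # ~20.6
```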
Table 1 shows a compilation of the research work applying AI for the study of oral cancer.

10. Discussion

Leaving aside clinical considerations that contribute to the differential diagnosis of oral cancer, and acknowledging that the current gold standard for tumor identification is the histopathological analysis of a biopsy by a pathologist, it is important to highlight that AI holds great promise in this respect and is gaining weight in medical areas [52,53].
However, there are also concerns regarding the utility of AI in oral cancer. Among the research compiled in this review, some authors state that, owing to the lack of algorithm generalization, variability in sample manipulation, the amount and quality of tissue, and the variable tissue morphology among patients with the same diagnosis, it is hard to obtain a precise diagnosis with AI alone [16,54,55,56].
As already mentioned, the use of AI methods in the oncology field still needs large-scale validation. To regulate the use of AI-driven models in clinical oncology, it is crucial to provide evidence of the efficiency and security of these models through further clinical cancer research, taking into account factors such as the integrity, origin, retention, and distribution of data, the precision and reliability of the selected models, ethical considerations and the inclusion of patients in predictive model usage, as well as the legal ramifications associated with the use of patient data [57]. According to Luchini et al., there is currently documented evidence of 71 AI-associated devices that have already obtained official approval from the Food and Drug Administration (FDA) [58], with cancer diagnostics being the most important field. Indeed, the regulation of AI development in the oncology field should follow a comprehensive and interdisciplinary approach, in light of all the achievements attained in this field.
On the other hand, the possibility of identifying cellular and structural atypia in OSCC has been studied, suggesting that combining WSI with AI algorithms could allow the automatic evaluation of these parameters and may be suitable for its diagnosis [22,23]. The ImageJ and FIJI software and the plugins mentioned previously, such as thresholding, StarDist, the watershed transform, Trainable WEKA Segmentation, and Labkit, have been widely used as free, open tools for digital analysis, introducing new ways of developing AI for specific purposes [24,25,26,27,28,29,30,31].
From a biological standpoint, studying the characteristics and histopathological changes of potentially malignant disorders that can occur prior to OSCC would allow early diagnosis and prevent its development. Several studies have been carried out along this line of research, as well as on the identification of elementary oral lesions, applying different AI models. The authors state that assisting the diagnosis of high-risk oral cancer lesions with AI models could improve patient survival rates [56,57,58,59,60].
Although there is not yet a clinically validated AI technique for the diagnosis of oral cancer, DL and ML methods could assist clinicians in making decisions with more objective information, improving patient management and treatment options and closing the gap for patients in remote areas with less accessible medical assistance.

11. Conclusions

The present review discussed some of the advances in the use of different AI methods in OSCC. Although much progress has been made in image analysis processes that could help health professionals improve prognosis, diagnosis, and treatment selection, this remains a broad area that needs further research, primarily to enhance understanding of the different digital methods. Furthermore, additional research on the topic can validate the different artificial intelligence tools, offering greater security for their future use in clinical practice.

12. Future Directions

For future research, several limitations must be overcome in order to obtain more robust results. Scanning the slides should be the gold standard for obtaining WSIs, which requires the appropriate equipment. Increasing the number of cases and images included in the studies is also important, supporting the evidence with more precise and unbiased results. Given the rarity of some of these lesions, pathology centers should aim to work together, establishing a standardized protocol for obtaining quality digital images in order to validate the results. Moreover, to increase the value of AI in clinical practice, more research is required on the clinical validation of these tools.

Author Contributions

All authors participated in the development of this study. V.P.-P., F.M.-S., E.S., J.H. and R.B.-M. performed the digital search, selection, analysis, extraction of information, and drafted the manuscript. V.P.-P., M.A.I.-E., R.G.G. and D.P. carried out the methodological evaluation strategy. V.P.-P. and F.M.-S. established all figures. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created in the present study.

Conflicts of Interest

There are no conflicts of interest associated with any of the senior authors or co-authors who contributed to this manuscript.

References

  1. U.S. Department of Health & Human Services, National Institutes of Health, NIH Fact Sheets Home, Oral Cancer. Available online: https://report.nih.gov/nihfactsheets/ViewFactSheet.aspx?csid=106 (accessed on 23 May 2023).
  2. Tandon, P.; Dadhich, A.; Saluja, H.; Bawane, S.; Sachdeva, S. The prevalence of squamous cell carcinoma in different sites of oral cavity at our Rural Health Care Centre in Loni, Maharashtra—A retrospective 10-year study. Contemp. Oncol. 2017, 2, 178–183. [Google Scholar] [CrossRef] [PubMed]
  3. McCullough, M.J.; Prasad, G.; Farah, C.S. Oral mucosal malignancy and potentially malignant lesions: An update on the epidemiology, risk factors, diagnosis and management. Aust. Dent. J. 2010, 55 (Suppl. S1), 61–65. [Google Scholar] [CrossRef] [PubMed]
  4. Elmakaty, I.; Elmarasi, M.; Amarah, A.; Abdo, R.; Malki, M.I. Accuracy of Artificial Intelligence-Assisted Detection of Oral Squamous Cell Carcinoma: A Systematic Review and Meta-Analysis. Crit. Rev. Oncol. Hematol. 2022, 178, 103777. [Google Scholar] [CrossRef] [PubMed]
  5. Karhana, S.; Bhat, M.; Ninawe, A.; Dinda, A.K. Advances in microscopy and their applications in biomedical research. In Primers in Bio-Medical Imaging Devices and Systems, Biomedical Imaging Instrumentation; Academic Press: Cambridge, MA, USA, 2022; Chapter 11; pp. 185–212. ISBN 9780323856508. [Google Scholar] [CrossRef]
  6. Gupta, R.; Srivastava, D.; Sahu, M.; Tiwari, S.; Ambasta, R.K.; Kumar, P. Artificial intelligence to deep learning: Machine intelligence approach for drug discovery. Mol. Divers 2021, 25, 1315–1360. [Google Scholar] [CrossRef]
  7. Ossowska, A.; Kusiak, A.; Świetlik, D. Artificial Intelligence in Dentistry—Narrative Review. Int. J. Environ. Res. Public Health 2022, 19, 3449. [Google Scholar] [CrossRef]
  8. Arel, I.; Rose, D.C.; Karnowski, T.P. Deep machine learning-a new frontier in artificial intelligence research. IEEE Comput. Intell. 2010, 5, 13–18. [Google Scholar] [CrossRef]
  9. Xu, J.; Meng, Y.; Qiu, K.; Topatana, W.; Li, S.; Wei, C.; Chen, T.; Chen, M.; Ding, Z.; Niu, G. Applications of Artificial Intelligence Based on Medical Imaging in Glioma: Current State and Future Challenges. Front. Oncol. 2022, 12, 892056. [Google Scholar] [CrossRef]
  10. Sarker, I.H. Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions. SN Comput. Sci. 2021, 2, 420. [Google Scholar] [CrossRef]
  11. Koteluk, O.; Wartecki, A.; Mazurek, S.; Kołodziejczak, I.; Mackiewicz, A. How Do Machines Learn? Artificial Intelligence as a New Era in Medicine. J. Pers. Med. 2021, 11, 32. [Google Scholar] [CrossRef]
  12. Zhang, A.; Lipton, Z.C.; Li, M.; Smola, A.J. Dive into Deep Learning. arXiv 2021, arXiv:2106.11342. [Google Scholar] [CrossRef]
  13. Maier-Hein, L.; Vedula, S.S.; Speidel, S.; Navab, N.; Kikinis, R.; Park, A.; Eisenmann, M.; Feussner, H.; Forestier, G.; Giannarou, S.; et al. Surgical data science for next-generation interventions. Nat. Biomed. Eng. 2017, 1, 691–696. [Google Scholar] [CrossRef] [Green Version]
  14. Pantanowitz, L.; Sinard, J.H.; Henricks, W.H.; Fatheree, L.A.; Carter, A.B.; Contis, L.; Beckwith, B.A.; Evans, A.J.; Lal, A.; Parwani, A.V. Validating Whole Slide Imaging for Diagnostic Purposes in Pathology: Guideline from the College of American Pathologists Pathology and Laboratory Quality Center. Arch. Pathol. Lab. Med. 2013, 137, 1710–1722. [Google Scholar] [CrossRef] [Green Version]
  15. Rasband, W.S. (1997–2015) ImageJ [Homepage of U.S. National Institutes of Health]. Available online: http://rsb.info.nih.gov/ij/ (accessed on 23 May 2023).
  16. Alabi, R.O.; Youssef, O.; Pirinen, M.; Elmusrati, M.; Mäkitie, A.A.; Leivo, I.; Almangush, A. Machine learning in oral squamous cell carcinoma: Current status, clinical concerns and prospects for future—A systematic review. Artif. Intell. Med. 2021, 115, 102060. [Google Scholar] [CrossRef]
  17. Hinton, G.E.; Salakhutdinov, R.R. Reducing the Dimensionality of Data with Neural Networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef] [Green Version]
  18. Kumar, N.; Gupta, R.; Gupta, S. Whole Slide Imaging (WSI) in Pathology: Current Perspectives and Future Directions. J. Digit. Imaging 2020, 33, 1034–1040. [Google Scholar] [CrossRef]
  19. Roy, M.; Wang, F.; Teodoro, G.; Bhattarai, S.; Bhargava, M.; Rekha, T.S.; Aneja, R.; Kong, J. Deep learning based registration of serial whole-slide histopathology images in different stains. J. Pathol. Inform. 2023, 14, 100311. [Google Scholar] [CrossRef]
  20. Sung, Y.E.; Kim, M.; Lee, Y.S. Proposal of a scoring system for predicting pathological risk based on a semiautomated analysis of whole slide images in oral squamous cell carcinoma. Head Neck 2021, 43, 1581–1591. [Google Scholar] [CrossRef]
  21. Shaban, M.; Raza, S.E.A.; Hassan, M.; Jamshed, A.; Mushtaq, S.; Loya, A.; Batis, N.; Brooks, J.; Nankivell, P.; Sharma, N.; et al. A digital score of tumour-associated stroma infiltrating lymphocytes predicts survival in head and neck squamous cell carcinoma. J. Pathol. 2021, 256, 174–185. [Google Scholar] [CrossRef]
  22. Oya, K.; Kokomoto, K.; Nozaki, K.; Toyosawa, S. Oral squamous cell carcinoma diagnosis in digitized histological images using convolutional neural network. J. Dent. Sci. 2023, 18, 322–329. [Google Scholar] [CrossRef]
  23. Halicek, M.; Shahedi, M.; Little, J.V.; Chen, A.Y.; Myers, L.L.; Sumer, B.D.; Fei, B. Detection of squamous cell carcinoma in digitized histological images from the head and neck using convolutional neural networks. Proc. SPIE Int. Soc. Opt. Eng. 2019, 10956, 112–120. [Google Scholar] [CrossRef]
  24. Kistenev, Y.V.; Vrazhnov, D.A.; Nikolaev, V.V.; Sandykova, E.A.; Krivova, N.A. Analysis of Collagen Spatial Structure Using Multiphoton Microscopy and Machine Learning Methods. Biochemistry 2019, 84, 108–123. [Google Scholar] [CrossRef] [PubMed]
  25. Abdulhamit, S. Practical Machine Learning for Data Analysis Using Python. In Chapter 3—Machine Learning Techniques; Academic Press: Cambridge, MA, USA, 2020; pp. 91–202. ISBN 9780128213797. [Google Scholar] [CrossRef]
  26. Khameneh, F.D.; Razavi, S.; Kamasak, M. Automated segmentation of cell membranes to evaluate HER2 status in whole slide images using a modified deep learning network. Comput. Biol. Med. 2019, 110, 164–174. [Google Scholar] [CrossRef] [PubMed]
  27. Prado, V.P.; Landini, G.; Taylor, A.M.; Vargas, P.; Molina, R.B. Spatial distribution of CD34 protein in primordial odontogenic tumour, ameloblastic fibroma and the tooth germ. J. Oral Pathol. Med. 2022, 52, 181–187. [Google Scholar] [CrossRef] [PubMed]
  28. Schmidt, U.; Weigert, M.; Broaddus, C.; Myers, G. Cell detection with star-convex polygons. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Granada, Spain, 16–20 September 2018. [Google Scholar]
  29. Vincent, L.; Soille, P. Watersheds in digital spaces: An efficient algorithm based on immersion simulations. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 583–598. [Google Scholar] [CrossRef] [Green Version]
  30. Arganda-Carreras, I.; Kaynig, V.; Rueden, C.; Eliceiri, K.W.; Schindelin, J.; Cardona, A.; Seung, H.S. Trainable Weka Segmentation: A machine learning tool for microscopy pixel classification. Bioinformatics 2017, 33, 2424–2426. [Google Scholar] [CrossRef] [Green Version]
  31. Arzt, M.; Deschamps, J.; Schmied, C.; Pietzsch, T.; Schmidt, D.; Tomancak, P.; Haase, R.; Jug, F. LABKIT: Labeling and Segmentation Toolkit for Big Image Data. Front. Comput. Sci. 2022, 4, 10. [Google Scholar] [CrossRef]
  32. Mahmood, H.; Shaban, M.; Indave, B.; Santos-Silva, A.; Rajpoot, N.; Khurram, S. Use of artificial intelligence in diagnosis of head and neck precancerous and cancerous lesions: A systematic review. Oral Oncol. 2020, 110, 104885. [Google Scholar] [CrossRef]
  33. Das, D.K.; Chakraborty, C.; Sawaimoon, S.; Maiti, A.K.; Chatterjee, S. Automated identification of keratinization and keratin pearl area from in situ oral histological images. Tissue Cell 2015, 47, 349–358. [Google Scholar] [CrossRef]
  34. Prabhakar, S.K.; Rajaguru, H. Performance analysis of linear layer neural networks for oral cancer classification. In Proceedings of the 2017 6th ICT International Student Project Conference (ICT-ISPC), Johor, Malaysia, 23–24 May 2017; pp. 1–4. [Google Scholar] [CrossRef]
  35. Jeyaraj, P.R.; Nadar, E.R.S. Computer-assisted medical image classification for early diagnosis of oral cancer employing deep learning algorithm. J. Cancer Res. Clin. Oncol. 2019, 145, 829–837. [Google Scholar] [CrossRef]
  36. Halicek, M.; Lu, G.; Little, J.V.; Wang, X.; Patel, M.; Griffith, C.C.; El-Deiry, M.W.; Chen, A.Y.; Fei, B. Deep convolutional neural networks for classifying head and neck cancer using hyperspectral imaging. J. Biomed. Opt. 2017, 22, 060503. [Google Scholar] [CrossRef]
  37. Mookiah, M.R.; Chakraborty, C.; Paul, R.R.; Ray, A.K. Hybrid segmentation, characterization and classification of basal cell nuclei from histopathological images of normal oral mucosa and oral submucous fibrosis. Expert Syst. Appl. 2012, 39, 1062–1077. [Google Scholar]
  38. Hu, F.; Vishwanath, K.; Beumer, H.W.; Puscas, L.; Afshari, H.R.; Esclamado, R.M.; Scher, R.; Fisher, S.; Lo, J.; Mulvey, C.; et al. Assessment of the sensitivity and specificity of tissue-specific-based and anatomical-based optical biomarkers for rapid detection of human head and neck squamous cell carcinoma. Oral Oncol. 2014, 50, 848–856. [Google Scholar] [CrossRef] [Green Version]
  39. Hameed, K.A.S.; Banumathi, A.; Ulaganathan, G. Cell Nuclei Classification and Im-munohistochemical Scoring of Oral Cancer Tissue Images: Machine-learning Approach. Asian J. Res. Soc. Sci. Humanit. 2016, 6, 732–747. [Google Scholar]
  40. Rahman, T.Y.; Mahanta, L.B.; Das, A.K.; Sarma, J.D. Automated oral squamous cell carcinoma identification using shape, texture and color features of whole image strips. Tissue Cell 2019, 63, 101322. [Google Scholar] [CrossRef]
  41. Pratama, R.; Hwang, J.J.; Lee, J.H.; Song, G.; Park, H.R. Authentication of differential gene expression in oral squamous cell carcinoma using machine learning applications. BMC Oral Health 2021, 21, 281. [Google Scholar] [CrossRef]
  42. Santer, M.; Kloppenburg, M.; Gottfried, T.M.; Runge, A.; Schmutzhard, J.; Vorbach, S.M.; Mangesius, J.; Riedl, D.; Mangesius, S.; Widmann, G.; et al. Current Applications of Artificial Intelligence to Classify Cervical Lymph Nodes in Patients with Head and Neck Squamous Cell Carcinoma—A Systematic Review. Cancers 2022, 14, 5397. [Google Scholar] [CrossRef]
  43. Das, M.; Dash, R.; Mishra, S.K. Automatic Detection of Oral Squamous Cell Carcinoma from Histopathological Images of Oral Mucosa Using Deep Convolutional Neural Network. Int. J. Environ. Res. Public Health 2023, 20, 2131. [Google Scholar] [CrossRef]
  44. Yang, S.; Li, S.; Liu, J.; Sun, X.; Cen, Y.; Ren, R.; Ying, S.; Chen, Y.; Zhao, Z.; Liao, W. Histopathology-Based Diagnosis of Oral Squamous Cell Carcinoma Using Deep Learning. J. Dent. Res. 2022, 101, 1321–1327. [Google Scholar] [CrossRef]
  45. Rahman, A.-U.; Alqahtani, A.; Aldhafferi, N.; Nasir, M.U.; Khan, M.F.; Khan, M.A.; Mosavi, A. Histopathologic Oral Cancer Prediction Using Oral Squamous Cell Carcinoma Biopsy Empowered with Transfer Learning. Sensors 2022, 22, 3833. [Google Scholar] [CrossRef]
  46. Kim, D.W.; Lee, S.; Kwon, S.; Nam, W.; Cha, I.-H.; Kim, H.J. Deep learning-based survival prediction of oral cancer patients. Sci. Rep. 2019, 9, 6994. [Google Scholar] [CrossRef] [Green Version]
  47. Tseng, W.-T.; Chiang, W.-F.; Liu, S.-Y.; Roan, J.; Lin, C.-N. The Application of Data Mining Techniques to Oral Cancer Prognosis. J. Med. Syst. 2015, 39, 59. [Google Scholar] [CrossRef]
  48. Kawamura, K.; Lee, C.; Yoshikawa, T.; Hani, A.; Usami, Y.; Toyosawa, S.; Tanaka, S.; Hiraoka, S. Prediction of cervical lymph node metastasis from immunostained specimens of tongue cancer using a multilayer perceptron neural network. Cancer Med. 2022, 12, 5312–5322. [Google Scholar] [CrossRef] [PubMed]
  49. Vahadane, A.; Sharma, S.; Mandal, D.; Dabbeeru, M.; Jakthong, J.; Garcia-Guzman, M.; Majumdar, S.; Lee, C.-W. Development of an automated combined positive score prediction pipeline using artificial intelligence on multiplexed immunofluorescence images. Comput. Biol. Med. 2023, 152, 106337. [Google Scholar] [CrossRef] [PubMed]
  50. Puladi, B.; Ooms, M.; Kintsler, S.; Houschyar, K.S.; Steib, F.; Modabber, A.; Hölzle, F.; Knüchel-Clarke, R.; Braunschweig, T. Automated PD-L1 Scoring Using Artificial Intelligence in Head and Neck Squamous Cell Carcinoma. Cancers 2021, 13, 4409. [Google Scholar] [CrossRef] [PubMed]
  51. Tsakiroglou, A.M.; Fergie, M.; Oguejiofor, K.; Linton, K.; Thomson, D.; Stern, P.L.; Astley, S.; Byers, R.; West, C.M.L. Spatial proximity between T and PD-L1 expressing cells as a prognostic biomarker for oropharyngeal squamous cell carcinoma. Br. J. Cancer 2019, 122, 539–544. [Google Scholar] [CrossRef] [PubMed]
  52. Alwakid, G.; Gouda, W.; Humayun, M.; Jhanjhi, N.Z. Diagnosing Melanomas in Dermoscopy Images Using Deep Learning. Diagnostics 2023, 13, 1815. [Google Scholar] [CrossRef]
  53. Ewals, L.J.S.; van der Wulp, K.; Borne, B.E.E.M.v.D.; Pluyter, J.R.; Jacobs, I.; Mavroeidis, D.; van der Sommen, F.; Nederend, J. The Effects of Artificial Intelligence Assistance on the Radiologists’ Assessment of Lung Nodules on CT Scans: A Systematic Review. J. Clin. Med. 2023, 12, 3536. [Google Scholar] [CrossRef]
  54. Mahmood, H.; Shaban, M.; Rajpoot, N.; Khurram, S.A. Artificial Intelligence-based methods in head and neck cancer diagnosis: An overview. Br. J. Cancer 2021, 124, 1934–1940. [Google Scholar] [CrossRef]
  55. Walsh, T.; Macey, R.; Kerr, A.R.; Lingen, M.W.; Ogden, G.R.; Warnakulasuriya, S. Diagnostic tests for oral cancer and potentially malignant disorders in patients presenting with clinically evident lesions. Cochrane Database Syst. Rev. 2021, 20, CD010276. [Google Scholar]
  56. Chen, W.; Zeng, R.; Jin, Y.; Sun, X.; Zhou, Z.; Zhu, C. Artificial Neural Network Assisted Cancer Risk Prediction of Oral Precancerous Lesions. BioMed Res. Int. 2022, 2022, 7352489. [Google Scholar] [CrossRef]
  57. Kumar, K.S.; Miskovic, V.; Blasiak, A.; Sundar, R.; Pedrocchi, A.L.G.; Pearson, A.T.; Prelaj, A.; Ho, D. Artificial Intelligence in Clinical Oncology: From Data to Digital Pathology and Treatment. Am. Soc. Clin. Oncol. Educ. Book 2023, 43, e390084. [Google Scholar] [CrossRef]
  58. Luchini, C.; Pea, A.; Scarpa, A. Artificial intelligence in oncology: Current applications and future perspectives. Br. J. Cancer 2021, 126, 4–9. [Google Scholar] [CrossRef]
  59. Zhang, X.; Gleber-Netto, F.O.; Wang, S.; Martins-Chaves, R.R.; Gomez, R.S.; Vigneswaran, N.; Sarkar, A.; William, W.N.; Papadimitrakopoulou, V.; Williams, M.; et al. Deep learning-based pathology image analysis predicts cancer progression risk in patients with oral leukoplakia. Cancer Med. 2023, 12, 7508–7518. [Google Scholar] [CrossRef]
  60. Gomes, R.F.T.; Schmith, J.; de Figueiredo, R.M.; Freitas, S.A.; Machado, G.N.; Romanini, J.; Carrard, V.C. Use of Artificial Intelligence in the Classification of Elementary Oral Lesions from Clinical Images. Int. J. Environ. Res. Public Health 2023, 20, 3894. [Google Scholar] [CrossRef]
Figure 1. Basic artificial intelligence subtypes based on the computer system learning process [9,10,11].
Figure 2. Overview of the scan of a histological slide and the result as a whole slide image (Motic EasyScan One Digital Slide Scanner, Motic Asia, Hong Kong, China). Material obtained with the equipment of the Molecular Pathology Area, School of Dentistry, Universidad de la República, Montevideo, Uruguay.
Figure 3. StarDist 2D segmentation in an OSCC image stained with immunohistochemistry. Note the precise nuclei identification and the superposition of both images.
Figure 4. Labkit segmentation display with an OSCC image stained with immunohistochemistry. Note that the tool allows for selecting the number of classes to classify and segment the image. The segmentation results identify several tissue components that can be measured and compared.
Table 1. Summary of the AI applications mentioned in the present study.
| Authors | Objective | Methods | Samples | Accuracy |
| --- | --- | --- | --- | --- |
| Sung et al., 2021 [20] | Establish a scoring system for predicting pathological risk in OSCC | Immunohistochemistry, WSI, and semiautomated tools (in ImageJ) for quantification of tumor area, tumor-infiltrating lymphocytes, and tumor budding | 256 patients | Not determined |
| Shaban et al., 2021 [21] | Establish an automated score for the quantification of tumor-associated stroma infiltrating lymphocytes | Automated DL segmentation algorithm for the quantification of tumor-associated stroma infiltrating lymphocytes in WSIs | 342 SCC cases from different sites of the head and neck, 100 OSCC, and 95 OPSCC | 85% |
| Oya et al., 2022 [22] | Establish diagnostic parameters in OSCC | WSI; CNN using EfficientNet-B0 | 90 cases of OSCC | 99.65% |
| Halicek et al., 2019 [23] | Investigate the ability of AI to detect SCC | WSI, Aperio ImageScope, CNN | 192 tissue specimens from 84 HNSCC patients | 83.7% |
| Das et al., 2015 [33] | Identification of keratinization and keratin pearl areas in OSCC | Chan–Vese segmentation model | 10 OSCC patients | 95.08% |
| Prabhakar et al., 2017 [34] | Determine stage differentiation in oral cancer | Linear layer NN classifier | 75 oral cancer patients | 100% in T1, 85.19% in T2, 84.21% in T3, and 94.12% in T4 |
| Jeyaraj et al., 2019 [35] | Determine a classification of oral cancer | DL CNN in hyperspectral imaging | Information from 1140 tumor samples | 94.5% |
| Halicek et al., 2017 [36] | Determine a classification of oral cancer | DL CNN using TensorFlow in hyperspectral imaging | DL CNN in hyperspectral imaging | 80% |
| Mookiah et al., 2012 [37] | Determine a classification of oral submucous fibrosis | Supervised and unsupervised cell nuclei segmentation | 12 oral submucous fibrosis patients and 10 normal tissue patients | Linear kernel-based support vector machine 99.66%, Bayesian classifier 96.56%, Gaussian mixture model 90.37% |
| Hu et al., 2014 [38] | Establish a rapid detection method for HNSCC | Linear-discriminant models using two or more measured optical biomarkers | 57 patients | 50–95% |
| Hameed et al., 2016 [39] | Determine an immunohistochemical scoring of oral cancer tissue images | ML classifiers such as support vector machine, k-nearest neighbor, linear discriminant analysis, and naive Bayes | 12 OSCC samples | 96.09% |
| Rahman et al., 2020 [40] | Evaluate malignancies in OSCC using digital imaging | WSI; MATLAB; classification methods: decision tree, support vector machine (SVM), logistic regression, linear discriminant, and k-nearest neighbor | 42 samples | 99.4% decision tree classifier; 100% SVM and logistic regression; 100% SVM, logistic regression, and linear discriminant |
| Pratama et al., 2021 [41] | To aid diagnosis of OSCC | RNA sequencing data using ML and CNN | 337 OSCC and other tissue samples | 83% |
| Das et al., 2023 [43] | Aid the early detection of OSCC | DL, CNN | 290 normal tissue and 934 cancerous tissue images | 82% |
| Yang et al., 2022 [44] | Assist pathologists in detecting OSCC from histopathology images | DL | 2025 images | 92% |
| Rahman et al., 2022 [45] | Predict oral cancer from biopsy images using multiple performance parameters | Transfer learning model using AlexNet, CNN | 2511 OSCC images, 2435 healthy tissue images | 90.06% |
| Kim et al., 2019 [46] | Prediction of survival of oral cancer patients | DL-based survival prediction method (DeepSurv) | 225 patients | 81% |
| Tseng et al., 2015 [47] | Prediction of 5-year disease-free survival rate and 5-year disease-specific survival rate of oral cancer patients | Data mining: logistic regression compared with the decision tree method and the artificial NN model; WEKA | 673 cancer patients | 63.3% |
| Kawamura et al., 2023 [48] | Predict lymph node metastasis in cancer by classifying the level of immunohistochemical markers | Multilayer perceptron NN | 76 patients with OSCC | 98.6% |
| Vahadane et al., 2023 [49]; Puladi et al., 2021 [50] | Obtain reproducible and reliable scores of immunofluorescence imaging of PD-L1 | WSI; DL and ML; QuPath; MATLAB | 54 HNSCC | 97.2% |
| Tsakiroglou et al., 2020 [51] | Quantify the frequencies of cell–cell spatial interactions occurring in the PD-1/PD-L1 pathway | WSI; DL and CNN; QuPath | 72 OPSCC | 88.3% |