Review

Advancements in Oncology with Artificial Intelligence—A Review Article

1 Department of Internal Medicine, Medstar Washington Hospital Center, Washington, DC 20010, USA
2 Department of Medicine, P.S.G Institute of Medical Sciences and Research, Coimbatore 641004, Tamil Nadu, India
3 School of Computer Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
4 Department of Pediatric Cardiology, University of Minnesota, Minneapolis, MN 55454, USA
5 Department of Pulmonary and Critical Care, Texas A&M University, College Station, TX 77843, USA
* Author to whom correspondence should be addressed.
Cancers 2022, 14(5), 1349; https://doi.org/10.3390/cancers14051349
Submission received: 21 February 2022 / Revised: 26 February 2022 / Accepted: 28 February 2022 / Published: 6 March 2022
(This article belongs to the Topic Artificial Intelligence in Cancer Diagnosis and Therapy)

Simple Summary

With the advancement of artificial intelligence, including machine learning, the field of oncology has seen promising results in cancer detection and classification, epigenetics, drug discovery, and prognostication. In this review, we describe what artificial intelligence is and its function, as well as comprehensively summarize its evolution and role in breast, colorectal, and central nervous system cancers. Understanding the origin and current accomplishments might be essential to improve the quality, accuracy, generalizability, cost-effectiveness, and reliability of artificial intelligence models that can be used in worldwide clinical practice. Students and researchers in the medical field will benefit from a deeper understanding of how to use integrative AI in oncology for innovation and research.

Abstract

Well-trained machine learning (ML) and artificial intelligence (AI) systems can provide clinicians with therapeutic assistance, potentially increasing efficiency and improving efficacy. ML has demonstrated high accuracy in oncology-related diagnostic imaging, including screening mammography interpretation, colon polyp detection, and glioma classification and grading. By utilizing ML techniques, the manual steps of detecting and segmenting lesions are greatly reduced. ML-based tumor imaging analysis is independent of the experience level of the evaluating physician, and the results are expected to be more standardized and accurate. One of the biggest challenges is generalizability worldwide. The current detection and screening methods for colon polyps and breast cancer generate vast amounts of data, so they are ideal areas for studying the global standardization of artificial intelligence. Central nervous system cancers are rare and have poor prognoses under current management standards. ML offers the prospect of unraveling undiscovered features from routinely acquired neuroimaging to improve treatment planning, prognostication, monitoring, and response assessment of CNS tumors such as gliomas. By studying AI in such rare cancer types, standard management may be improved by augmenting personalized/precision medicine. This review aims to provide clinicians and medical researchers with a basic understanding of how ML works and its role in oncology, especially in breast cancer, colorectal cancer, and primary and metastatic brain cancer. Understanding AI basics, current achievements, and future challenges is crucial to advancing the use of AI in oncology.

1. Introduction

Artificial intelligence (AI) is a field in which computers are programmed to mimic human intelligence. The abundance of data in the field of medicine makes it a good candidate for problem solving using machine learning (ML). In oncology, ML can be used to diagnose and classify tumors, detect early-stage tumors, gather genetic and histopathological data, assist in pre- and post-operative planning, and predict overall survival outcomes [1]. Deep Learning (DL), a type of ML, has proven to be effective in automating time-consuming steps such as detection and segmentation of lesions [2,3,4].
AI-based models have demonstrated excellent accuracy in cancer detection on screening mammography and in breast cancer (BC) prediction based on genetic and hormonal factors [5,6,7]. AI plays a crucial role in early detection, classification, histopathological assessment, and the detection of genetic and molecular markers in colorectal cancer (CRC) [8,9,10]. Given the extensive data generated by present-day screening and the improvements in life expectancy achieved through early detection of breast and colon cancer, we review the potential of AI-based diagnostics and therapeutics in these cancers. Because mammograms and colonoscopies are widely used in the general population worldwide, AI can be applied extensively in future studies on cancer screening to build generalizable AI systems [11]. AI has also made its way into other cancer types, which we do not review here; for instance, lung cancer screening with low-dose chest computed tomography (CT), approved by the United States Preventive Services Task Force (USPSTF) in 2013, is reserved for smokers, and prostate cancer screening has not yet been universally adopted [11,12]. CNS cancers are relatively rare and have a poor prognosis. Studying AI in such rare tumors can demonstrate how precisely AI integration can improve current standard management. In the area of central nervous system (CNS) tumors, AI and radiomics have notably enhanced detection rates and reduced several time-consuming steps in glioma grading, pre- and intraoperative planning, and postoperative follow-up [13,14,15].
This review article outlines how AI works in simple terminology that medical professionals can understand, how it has improved breast cancer screening, colon polyp detection, and colorectal cancer screening, and the implications it has for the management of CNS tumors. A literature search was conducted on PubMed, Google Scholar, arXiv, and Scopus. This is not a systematic review but a narrative review of the literature. We conclude with existing obstacles and future prospects for standardizing AI screening in oncology, as well as proposals for integrating AI basics into medical school curricula.

2. How Does Artificial Intelligence Work?

AI is a broad concept that aims to simulate human cognitive ability. ML, an approach to AI, is the study of how computer systems can learn to perform a task or predict an outcome without being explicitly programmed [16]. Mitchell (1997) succinctly defines this learning process as follows: a computer program is said to learn from experience (E) with respect to some class of tasks (T) and performance measure (P) if its performance at tasks in T, as measured by P, improves with experience E [17]. A simple example of such a task is the classification of a suspicious abnormality on a screening mammogram as probably malignant or benign. To learn to perform this task, a computer program would experience a dataset containing examples of correctly classified benign and malignant breast lesions and come up with a model that can generalize beyond these data. Its ability to correctly classify previously unseen examples of breast lesions would then be evaluated through quantitative measures of its performance, such as accuracy, sensitivity, and specificity.
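As an illustration of this T/P/E framing, the following minimal Python sketch (using scikit-learn on synthetic data; the features are hypothetical stand-ins, not real mammographic measurements) trains a simple classifier on labeled examples and then reports accuracy, sensitivity, and specificity on previously unseen cases:

```python
# Minimal sketch of Mitchell's task (T), performance (P), experience (E) framing.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# E: a dataset of labeled examples (synthetic numeric features per lesion)
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# T: learn to classify benign (0) vs. malignant (1); the model is fit on the training split only
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# P: evaluate on previously unseen examples
tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print(f"accuracy    {(tp + tn) / (tp + tn + fp + fn):.3f}")
print(f"sensitivity {tp / (tp + fn):.3f}")
print(f"specificity {tn / (tn + fp):.3f}")
```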

2.1. Subtypes of Machine Learning

Algorithms for ML are typically categorized into supervised, unsupervised, or reinforcement learning. Supervised learning algorithms experience a dataset that contains a label (or correct answer) for each data point. Examples of supervised learning algorithms include support vector machines (SVMs) [18,19], linear regression, logistic regression, and k-nearest neighbors [20,21]. In contrast, unsupervised algorithms such as k-means clustering [22,23], affinity propagation [24], and Gaussian mixture models [25] study a dataset that does not contain labels and learn to derive structure from the given data. A reinforcement learning system trains an agent to behave in an environment by rewarding desired behaviors and penalizing undesired ones. The overall objective of an ML algorithm can be interpreted as learning an approximate function of the data; this function takes as input a set of features that describe the data and outputs a prediction corresponding to the learning task. Classical ML algorithms are generally good at approximating linear or simple non-linear functions [13,26].
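The contrast between supervised and unsupervised learning can be made concrete with a short sketch: an SVM is fit with labels, while k-means receives only the unlabeled features (the data are synthetic and purely illustrative):

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=300, centers=2, random_state=0)

# Supervised: the experience includes the correct label for every data point
svm = SVC(kernel="linear").fit(X, y)
print("SVM training accuracy:", svm.score(X, y))

# Unsupervised: only the features are given; structure (clusters) is inferred from the data
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("k-means cluster assignments (first 10):", kmeans.labels_[:10])
```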

2.2. Deep Learning

DL is a type of ML that enables the learning of complex non-linear functions of the data. Most modern DL methods use neural networks as their learning model, which are loosely inspired by neuroscience [27]. The fundamental computational unit of a neural network is called a neuron. It computes a weighted sum of its inputs and then applies a non-linear operation (often called the activation function) to the sum to compute the output (see Figure 1a). Common activation functions include the sigmoid, tanh, and rectified linear unit (ReLU) functions. A neural network comprises one or more layers of neurons, with each layer feeding on the outputs of the previous layer. Information flows forward through the network from the input, through a series of intermediate layers (called hidden layers), and finally to the output (see Figure 1b). As the number of layers and of units within a layer increases, a neural network can represent functions of increasing complexity. This architecture gives neural networks the ability to learn their own complex features instead of being constrained to the hand-picked features provided as input to the model.
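A minimal numerical sketch of a single neuron and a small hidden layer, using arbitrary illustrative weights and inputs, may help make the weighted-sum-plus-activation idea concrete:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # inputs to the neuron (illustrative values)
w = np.array([0.8, 0.1, -0.4])   # learned weights
b = 0.2                          # bias term

z = np.dot(w, x) + b             # weighted sum of the inputs
print("ReLU output:   ", relu(z))
print("sigmoid output:", sigmoid(z))

# A layer is many neurons applied in parallel (a matrix-vector product);
# stacking such layers gives the feed-forward network of Figure 1b.
W_hidden = np.random.randn(4, 3)  # 4 hidden neurons, each with 3 input weights
hidden = relu(W_hidden @ x)
print("hidden layer output:", hidden)
```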
During training, the parameters of the neural network are learned in order to fit the dataset for a given task. This corresponds to minimizing a cost function, which measures the model’s error on the task. After each forward pass through the network, the cost function is used to compute the error between the predicted and expected output. An algorithm called backpropagation allows this cost information to flow backward through the neural network while adjusting the network parameters. Backpropagation computes the gradients of the cost function with respect to the network parameters, which determine the adjustment to be made to the parameters in each iteration [28]. These gradients are then used to update the network parameters using an optimization algorithm such as stochastic gradient descent (SGD) [29,30].
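The forward pass, cost computation, backpropagation, and SGD update described above can be sketched in a few lines of PyTorch; the data here are random placeholders standing in for a real labeled dataset:

```python
import torch
import torch.nn as nn

X = torch.randn(256, 10)                  # placeholder features
y = torch.randint(0, 2, (256,)).float()   # placeholder binary labels

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
cost_fn = nn.BCEWithLogitsLoss()          # cost function measuring the prediction error
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(20):
    logits = model(X).squeeze(1)          # forward pass through the network
    cost = cost_fn(logits, y)             # error between predicted and expected output
    optimizer.zero_grad()
    cost.backward()                       # backpropagation: gradients of the cost w.r.t. parameters
    optimizer.step()                      # SGD update of the network parameters
print("final training cost:", cost.item())
```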
Apart from the simple feed-forward model discussed above, there are other specialized neural network architectures suited to specific tasks. For instance, convolutional neural networks (CNNs) are designed for data with a grid-like topology and are well suited to processing two- or three-dimensional inputs such as images [31]. CNNs capture spatial context and learn correlations between local features, which is why they yield superior performance on image tasks, such as the classification of breast lesions in a screening mammogram as probably malignant or benign (see Figure 1c). CNN-based architectures have also been applied to biomedical segmentation [32]. However, CNNs face computational and memory efficiency limitations in three-dimensional (3D) segmentation tasks. More efficient methods have been proposed for the segmentation of 3D data, such as magnetic resonance imaging (MRI) volumes [33]. A recent architecture, occupancy networks for semantic segmentation (OSS-Net) [34], is built upon occupancy networks (O-Net) and contains efficient representations of 3D geometry, which allows for more accurate and faster 3D segmentation [35].
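The following is a minimal, hypothetical CNN of the kind illustrated in Figure 1c, in which convolutional layers extract local image features and a final fully connected layer outputs a benign/malignant score; the input size and layer widths are illustrative and are not taken from any published mammography model:

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Two convolution + pooling stages learn local image features
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Final fully connected layer maps features to two classes: benign vs. malignant
        self.classifier = nn.Linear(16 * 16 * 16, 2)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

batch = torch.randn(4, 1, 64, 64)   # four placeholder grayscale image patches
print(SmallCNN()(batch).shape)      # -> torch.Size([4, 2])
```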
Another family of neural networks, called recurrent neural networks (RNNs), is designed to operate on sequential data. RNNs are well equipped to process sequential inputs of variable length for tasks such as machine translation and language modeling. Long short-term memory networks (LSTMs) are a special kind of RNN capable of learning long-term dependencies between inputs [36]. Another technique, called attention, allows a model to selectively focus on parts of the input data as needed by enhancing specific parts of the input and diminishing others [37]. Recently, a network architecture called the Transformer has achieved state-of-the-art performance in a number of machine learning tasks [38]. Transformers discard recurrence and convolutions entirely, relying exclusively on attention mechanisms. Attention-based Transformers have demonstrated state-of-the-art segmentation performance and may prove relevant to the field of oncology [39].
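The scaled dot-product attention at the heart of the Transformer can be sketched directly: each output position is a weighted sum of value vectors, with weights derived from query–key similarity (tensor sizes here are arbitrary illustrative values):

```python
import torch
import torch.nn.functional as F

seq_len, d_model = 6, 8             # 6 input positions, 8-dimensional embeddings
Q = torch.randn(seq_len, d_model)   # queries
K = torch.randn(seq_len, d_model)   # keys
V = torch.randn(seq_len, d_model)   # values

scores = Q @ K.T / d_model ** 0.5   # similarity of every query to every key
weights = F.softmax(scores, dim=-1) # attention weights (each row sums to 1)
output = weights @ V                # each output selectively focuses on parts of the input
print(weights.shape, output.shape)  # torch.Size([6, 6]) torch.Size([6, 8])
```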

3. Breast Cancer

BC is the most prevalent cancer according to National Cancer Institute statistics for 2020 [40]. BC is a major cause of cancer-related mortality, second only to lung cancer [41]. BC death rates decreased annually from 1989 to 2017, a decline attributed to advancements in screening and therapies [41]. AI has shown enormous benefits in screening mammography, formulation of BC predictive tools, and drug development [5,6,42,43,44].

3.1. Screening Mammogram

Screening mammography is one of the most widely performed screening tests, but it is limited by high false positive and false negative rates [14,42]. AI models have reduced the workload, lowered false positive rates by 69%, and achieved higher sensitivity in screening mammography [2,42]. AI in BC screening has shown good accuracy, although methodological issues and evidence gaps remain [14,45].
In the context of mammography, DL algorithms such as CNNs are principally used; the mechanism of the algorithm is illustrated in Figure 1c. The performance of AI is measured by sensitivity, specificity, the area under the curve (AUC), and computation time [46]. Different DL models have been studied with various classification systems to identify abnormalities in mammograms, with overall sensitivity rates ranging from 88% to 96% [47,48,49]. Detection performance is further supported by AUCs above 0.96 against biopsy confirmation [50]. A new AI model (Transpara 1.4.0, ScreenPoint Medical BV, Nijmegen, The Netherlands) expedites interpretation and reduces workload by 20–50% by excluding mammograms with a low likelihood of cancer, allowing radiologists to concentrate on challenging cases [2,51]. The detection performance of radiologists using AI-aided systems has been compared with that of radiologists using conventional systems: radiologists with AI-aided systems achieved higher AUCs, sensitivity, and classification performance [52,53].
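For readers unfamiliar with these performance measures, the following sketch shows how AUC, sensitivity, and specificity are computed from a model’s predicted malignancy scores; the scores and labels are synthetic placeholders, not outputs of any real mammography system:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])   # 1 = biopsy-proven malignant (placeholder)
y_score = np.array([0.1, 0.3, 0.8, 0.6, 0.2, 0.9, 0.4, 0.7, 0.55, 0.15])  # predicted scores

print("AUC:", roc_auc_score(y_true, y_score))

# Sensitivity and specificity depend on the operating threshold chosen on the ROC curve
fpr, tpr, thresholds = roc_curve(y_true, y_score)
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold {th:.2f}: sensitivity {t:.2f}, specificity {1 - f:.2f}")
```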
Conventional computer-aided detection (CADe) in mammography is hampered by high false positive and false negative rates. AI-based CAD systems have been shown to reduce false positive rates by 69% and to achieve sensitivities ranging from 84% to 91% [42,54]. The concept of double reading (a mammogram read by two radiologists independently or together) is used in Europe to reduce false positives and false negatives. In a simulation study, the use of AI in place of the second reader maintained non-inferior performance and reduced the workload by 88% [55]. In another study, combining a single radiologist’s assessment with an AI algorithm achieved higher interpretive accuracy, with a specificity of 92% versus 85.9% for a single radiologist’s interpretation alone; however, no single AI algorithm outperformed radiologists’ accuracy [14]. Double reading is not standard practice in the United States, but cost-effective integration of AI with radiologists could increase overall sensitivity, although the acceptable miss rate threshold should be carefully considered. Another study used the breast imaging reporting and data system (BI-RADS) to incorporate radiologists’ subjective thresholds while using evidence-based data to train AI; it showed a 47.3% reduction in false positives at the cost of a 26.7% increase in false negatives [56]. AI also has the advantage of not increasing interpretation time: AI CADe takes 20% less time than traditional CADe and the same amount of time as radiologists [57]. Although further studies are required to assess the exact costs of AI mammography, the overall reduction in false positives could make it cost-effective [57]. DL models are being incorporated into digital breast tomosynthesis and contrast-enhanced digital mammography datasets for three-dimensional volumetric assessment of the breast, to further increase detection accuracy and reduce workload by 70% [7,58,59]. Radiomics is an approach for extracting relevant quantitative properties, also known as features, from clinical, histopathological, and radiological data; it has been applied to breast imaging to further improve accuracy [60]. Radiomics is described in more detail in Section 5.2.

3.2. Genetics and Hormonal Aspects in Breast Cancer Prediction

Artificial neural networks (ANNs) achieved remarkable accuracy, with AUCs of 0.909, 0.886, and 0.883 for predicting 5-, 10-, and 15-year BC-related survival, respectively, based on factors such as age, tumor size, axillary nodal status, histological type, mitotic count, and nuclear pleomorphism [61]. Hybrid DL models incorporating genetic, histopathological, and radiological data outperform traditional models such as the Gail model (which estimates BC risk over the next five years from medical and reproductive history but does not account for BRCA gene status) and the Tyrer–Cuzick model (which estimates the likelihood of carrying BRCA1 or BRCA2 mutations from personal and family history) [5,6].

4. Colonic Polyps and Colorectal Cancer

CRC is the third most common cancer in the United States, with approximately 147,950 new cases in 2020. AI has shown great success in the screening, diagnosis, and treatment of CRC. AI is bringing about a new era for CRC screening and detection, with computer-assisted techniques for adenoma detection and characterization, computer-aided drug delivery techniques, and robotic surgery. Other benefits of AI include the use of ANNs to screen effectively using personal health data [62].

4.1. Colorectal Cancer Screening

By detecting adenomas and preventing progression to carcinoma, screening has significantly reduced the incidence of CRC over the past decade. This has resulted in recommendations for routine screening starting at age 45 [63]. The current screening methods for CRC include invasive procedures (colonoscopy (the gold standard) and flexible sigmoidoscopy), minimally invasive procedures (capsule endoscopy), and non-invasive procedures (CT colonography or virtual colonoscopy, stool occult blood testing, the fecal immunochemical test, and multitarget stool DNA testing).
A few AI models have been tested to predict the risk of CRC and high-risk colonic polyps (CPs) from historical data and complete blood counts (CBCs). One such software tool, ColonFlag, predicts polyps and CRC from age, sex, CBC, and demographic information. Scores were compared with gold-standard colonoscopy, converted to percentiles, and then grouped into categories such as CRC, high-risk polyps, and benign polyps [64]. Another retrospective study (MeScore, Calgary, Alberta, Canada) compared CBC results obtained 3–6 months before colonoscopy with colonoscopy findings in two unrelated cohorts (Israeli and UK). The AUC for CRC diagnosis was 0.82 ± 0.01, and the specificity at 50% detection was 87 ± 2% a year before diagnosis and 85 ± 2% for localized cancers [65]. These results point to the possibility of early, noninvasive preliminary screening that can be integrated into electronic medical records to flag high-risk patients, who can then be screened more aggressively to balance the risks and benefits of colonoscopy in young people. Another ANN model designed to screen a large population based only on personal health information from big data also achieved optimal results [62]. However, these models are not yet used in practice and require further validation of their generalizability.

4.2. Colonic Polyps Detection

Colonoscopy is the gold-standard invasive test for the detection of colonic adenomas and CRC. An adenoma is the most common precancerous lesion. The adenoma detection rate (ADR) measures a gastroenterologist’s ability to detect an adenoma. ADR is inversely related to the adenoma miss rate (AMR) and the risk of post-colonoscopy CRC. ADRs range from 7% to 53%, while AMRs vary from 6% to 27% across healthcare facilities. Several factors have been postulated to explain these differences, including the quality of preprocedural bowel preparation, withdrawal time, operator experience and training, procedural sedation, cecal intubation rate, visualization of flexures (blind spots), use of image-enhanced endoscopy, and the presence of flat, diminutive (less than 5 mm), or small (<10 mm but >5 mm) polyps. Studies show that endoscopists with a higher ADR during screening colonoscopy are more effective in reducing subsequent CRC risk for their patients [66,67].
In recent years, CADe and computer-aided diagnosis (CADx) systems have been developed to automate polyp detection during colonoscopy and to further characterize detected polyps. Because of its ability to detect diminutive polyps, real-time AI-aided colonoscopy achieves a greater ADR than conventional colonoscopy (OR 1.53, 95% CI 1.32–1.77; p < 0.001), as derived from meta-analysis data [4,68,69]. One AI system, GI Genius, highlights suspicious lesions with green boxes overlaid on the live endoscopy video and generates a sound alert for each marker. Several meta-analyses demonstrate excellent performance for polyp detection using AI-assisted algorithms, with an AUC of 0.90, sensitivity of 95%, and specificity of 88% [8].

4.3. Colon Polyps Classification

AI-based classification of CPs into cancerous vs. non-cancerous lesions on CT colonography and capsule endoscopy is a promising development. On CT colonography, texture analysis based on gradient and curvature of high-order images combined with random forest models significantly improved the accuracy of CP classification [70,71]. An AI-assisted CAD model revealed an inverse correlation between CP sphericity and adenoma detection sensitivity and a direct correlation with adenoma detection accuracy; this model can effectively detect flat colonic lesions and CRCs on CT colonography [72]. Capsule endoscopy is another noninvasive diagnostic tool for gastrointestinal tract inspection, but processing its large amount of image data is time-consuming. Stacked sparse autoencoding with an image manifold constraint, a DL-based method, correctly identifies polyps in capsule endoscopy images with 98% accuracy in a time-effective manner [73]. An ANN model with logistic regression predicted the risk of distant metastasis in CRC patients based on several clinical factors, such as pathologic stage grouping, first treatment, sex, age at diagnosis, ethnicity, marital status, and high-risk behavior variables [74]. With DL models, tumors can be segmented and delineated more accurately, and faster region-based CNNs trained to read MRI images enable faster and more accurate diagnosis of CRC metastasis [75,76].

4.4. Histopathological Aspects, Genetics, and Molecular Marker Detection

Histopathological characterization is the gold standard for the classification of polyps [77]. However, one of the biggest challenges is significant intra- and interobserver variability. The use of DL and CNN models to automate image analysis can allow pathologists to classify CPs with an overall accuracy of 95% or more [10]. These DL models analyze whole-slide, hematoxylin- and eosin-stained images to identify four different stages: normal mucosa, early preneoplastic lesions, adenomas, and cancer [9,10,78].
AI-based models have been used to identify gene expression, gene profiles, and non-coding micro-ribonucleic acids (miRNAs) for diagnosis, prognosis, and targeted therapy planning [79,80,81]. The use of near-infrared (NIR) spectroscopy and counter-propagation artificial neural networks (CP-ANNs) to distinguish mutant from wild-type B-rapidly accelerated fibrosarcoma (BRAF) genes was shown to be highly accurate, specific, and sensitive [79]. Mutant BRAF is associated with a poor prognosis, and this AI model can assist in prognosticating and managing these patients aggressively. Backpropagation and learning vector quantization (LVQ) neural networks have demonstrated a remarkable role in assessing the genetic profiling database from The Cancer Genome Atlas (TCGA) to improve CRC diagnosis [81]. Several neural networks, including S-Kohonen, backpropagation, and SVM, were compared for predicting the risk of relapse after surgery; the S-Kohonen neural network was found to be the most accurate [82]. Non-coding miRNAs play an important role in tumorigenesis and cancer progression by interfering with various cell signaling pathways, including WNT/beta-catenin, phosphoinositide-3-kinase (PI3K)/protein kinase B (Akt), epidermal growth factor receptor (EGFR), NOTCH1, mechanistic target of rapamycin (mTOR), and TP53. The identification of miRNAs through AI models aids in the diagnosis, prognosis, and targeted treatment of CRCs [80,83,84,85,86].
In the early detection of CRC, ML-based AI can help isolate circulating tumor cells in peripheral blood smears and analyze serum-specific biomarkers, such as leucine-rich alpha-2-glycoprotein 1 (LRG1), EGFR, inter-alpha-trypsin inhibitor heavy chain family member 4 (ITIH4), hemopexin (HPX), and superoxide dismutase 3 (SOD3) [87,88].

5. Central Nervous System Cancers

In the United States, primary brain tumors have an annual incidence of 14.8 per 100,000 people, with a male predominance. Despite significant advances in imaging modalities, surgical techniques, chemotherapy, radiotherapy, and radiosurgery, primary brain tumors such as glioblastoma multiforme (GBM) remain challenging to manage [89]. GBM is one of the most common primary intracranial neoplasms and accounts for nearly 60% of all primary brain tumors worldwide. Primary and metastatic CNS cancers are challenging to manage because of their rapid proliferation, prominent neovascularization, invasion of distant sites, and poor response to chemotherapy due to the blood–brain barrier. Clinical management includes initial observation, grading, assessment of the depth of infiltration, segmentation and localization of the tumor, histopathological evaluation, and identification of molecular markers. As a result, clinicians have to manually compile all of these data for validation in order to formulate a treatment plan. In this regard, AI has proven useful in the diagnosis and management of CNS malignancies [26].

5.1. Central Nervous System Neoplasm Detection

AI has made significant advances in the diagnosis and classification of brain tumors in recent years. MRI is currently the gold-standard tool for tumor detection and characterization [90]. Conventional MRI methods such as T1- and T2-weighted imaging and fluid-attenuated inversion recovery (FLAIR) sequences have the disadvantage of nonspecific contrast enhancement and a high likelihood of missing foci of tumor infiltration. To improve detection, perfusion MRI with dynamic susceptibility-weighted contrast material enhancement, dynamic contrast enhancement, and arterial spin labeling are also used to evaluate the neoangiogenic properties of brain tumors such as GBM. In addition to identifying tissue microstructure, diffusion-weighted imaging shows neoplastic infiltration in areas of the brain that appear normal on conventional magnetic resonance (MR) images. MR spectroscopy can also be used to identify chemical metabolites such as choline, creatine, and N-acetyl aspartate, which are useful for glioma grading and for identifying tumor-infiltrated regions [91]. By automating these steps, AI has enhanced the detection rates and efficiency of radiologists, which, in turn, has reduced the time traditionally spent in diagnosing disease. CNN-based DL can also detect millimeter-sized brain tumors and can distinguish GBMs from metastatic brain lesions [3,92]. MRI technologies provide structured anatomical information on tumors, but tumor differentiation is always based on histopathological evaluation, which is invasive, time-consuming, and expensive. It remains challenging to distinguish low-grade from high-grade gliomas on imaging, even with AI systems. Attention-based Transformers are currently being investigated for the first time in glioma classification, and their use may offer a breakthrough [39,93].

5.2. Radiomics

A comprehensive analysis of clinical, histopathological, and radiological data combined with ML/DL image processing has paved the way for a new translational field in neuro-oncology called radiomics [60,94,95]. AI-based radiomics provides enhanced noninvasive tumor characterization by enabling histopathologic classification/grading within minutes, even at the time of surgery, as well as prognostication, monitoring, and treatment response evaluation [96,97]. AI algorithms are able to analyze images at the pixel level, so they can provide information not visible to the human eye and allow for more accurate grading [3]. Radiomics involves a complex multi-step process with manual, automatic, or semi-automatic segmentation. Two main types of radiomics are described, feature-based and DL-based, and both provide more accurate and reliable results than human readers. Feature-based radiomics algorithms evaluate subsets of specific features from segmented regions and volumes of interest (VOIs) as mathematical representations. This multi-step process includes image pre-processing (noise reduction, spatial resampling, and intensity modification), precise tumor segmentation (manual vs. DL-based techniques), feature extraction (histogram-based, textural, and higher-order statistical features), feature selection (filter methods, wrapper approaches, and embedded techniques), and model generation and evaluation (neural networks, SVMs, decision trees/random forests, linear regression, and logistic regression models) [95,98]. DL-based radiomics uses CNNs, in which the model learns in a cascading fashion without any prior description of features and requires a large amount of data in the learning process. The cascading technique processes data to obtain useful information, removes redundancies, and prevents overfitting [27,31,98].
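A highly simplified sketch of the feature-based radiomics workflow is given below: first-order (histogram-based) features are extracted from a segmented volume of interest and fed to a random forest classifier. The volumes, masks, and labels are random placeholders; real pipelines rely on dedicated toolkits (e.g., PyRadiomics) and many more feature classes:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def first_order_features(volume, mask):
    """Histogram-based features computed only from voxels inside the segmented VOI."""
    voxels = volume[mask > 0]
    skewness = ((voxels - voxels.mean()) ** 3).mean() / voxels.std() ** 3
    return np.array([voxels.mean(), voxels.std(), np.percentile(voxels, 90), skewness])

rng = np.random.default_rng(0)
features, labels = [], []
for _ in range(40):                                  # 40 hypothetical segmented tumors
    vol = rng.normal(size=(16, 16, 16))              # placeholder image volume
    seg = rng.integers(0, 2, size=(16, 16, 16))      # placeholder segmentation mask
    features.append(first_order_features(vol, seg))
    labels.append(rng.integers(0, 2))                # placeholder grade label

# Model generation and evaluation step of the pipeline
clf = RandomForestClassifier(random_state=0).fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```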

5.3. Histopathological Aspects, Genetics, and Molecular Marker Detection

Traditional histopathological evaluation of cranial tumors identifies microscopic features such as areas of neovascularization, central necrosis, endothelial hyperplasia, and regions of infiltration. These features sometimes overlap and can lead to false-positive results [99]. To overcome this complexity, digital slide scanners are now used to convert microscopic slides into image files interpreted by AI-based algorithms such as SVMs and decision trees, with SVMs showing higher precision rates [98]. These AI-based algorithms analyze pathological specimens of gliomas and predict outcomes based on genetic and molecular markers, including isocitrate dehydrogenase (IDH) mutation status, 1p/19q codeletion status, O-6-methylguanine-DNA methyltransferase (MGMT) methylation status, epidermal growth factor receptor splice variant III (EGFRvIII), Ki-67 marker expression, prediction of p53 status in gliomas, and prediction of mutations in BRAF and catenin β-1 in craniopharyngiomas [96,98,100,101,102,103]. IDH mutation leads to the accumulation of an oncometabolite called D-2-hydroxyglutarate and is an important prognosticator in GBM; CNN-based AI has detected this biomarker from conventional MRI modalities [100]. MGMT promoter hypermethylation (MGMT encodes a DNA repair protein), which is exhibited in about 33–57% of diffuse gliomas, is a favorable prognostic factor owing to increased sensitivity to alkylating agents such as temozolomide [98,101,104]. AI approaches such as supervised machine learning combined with texture features have been found to detect this methylation status. Performing principal component analysis on the final layer of a CNN indicated that features such as nodular and heterogeneous enhancement and “mass-like FLAIR edema” predicted MGMT methylation status with up to 83% accuracy [105]. The EGFRvIII mutation is found in about 40% of GBMs. Tumors with this mutation exhibit deep peritumoral infiltration, consistent with a more aggressive phenotype; EGFR mutation is also associated with increased neovascularization and cell density [106]. 1p/19q codeletion status has been shown to have a protective effect on prognosis. This codeletion is observed in oligodendrogliomas [102], and CNN-based AI can be employed to detect it. Ki-67 marker expression indicates tumor cell proliferation. Traditionally, this marker is detected via immunohistochemical studies on the extracted tumor sample, a method that is invasive and time-consuming, yet identifying it is essential for making a differential diagnosis and treatment plan. AI-based radiomics has been developed to detect this marker from fluorodeoxyglucose positron emission tomography (FDG-PET) and MRI images [107].

5.4. AI in Pre- and Intra-Operative Planning, Postoperative Follow-Up, and Metastasis

5.4.1. Preoperative Assessment

Segmentation, volumetric assessment, differentiation of the tumor from healthy brain tissue and peripheral edema, and quantitative measures such as risk stratification, treatment response, and outcome prognosis are essential elements in the treatment planning of CNS tumors [108,109]. In traditional radiographic imaging, contrast-enhanced images are used to estimate tumor volume or burden; however, single-dimension imaging may not be accurate for the volumetric assessment of nonuniform tumors, such as high-grade tumors including GBM. Another challenge is differentiating tumor borders from surrounding edema [110]. AI algorithms such as random forests, CNNs, and SVMs have been applied to tumor segmentation to overcome these challenges and have been shown to provide precise and accurate localization of the tumor; a two-step protocol with CNN and transfer learning models led to precise and accurate localization of gliomas [111]. A 3D U-Net CNN applied to 18F-fluoroethyl-tyrosine PET for automated segmentation of gliomas showed 88% sensitivity, 78% positive predictive value, 99% negative predictive value, and 99% specificity [32,112].

5.4.2. Intraoperative Modalities

High-grade tumors such as GBM proliferate rapidly and invade surrounding regions beyond the enhancing areas seen on radiological images, so these areas can be missed during excision [26,113]. AI-based DL algorithms have been developed to help surgeons remove as much tumor as possible while sparing normal healthy brain tissue. Three-dimensional CNNs have shown promising results in aiding stereotactic radiation therapy planning. It is often difficult to differentiate among primary brain tumors, primary CNS lymphoma, and brain metastases; however, AI-based algorithms such as decision trees and multivariate logistic regression models have been developed to differentiate among these entities by using diffusion tensor imaging and dynamic susceptibility-weighted contrast-enhanced MRI [114,115,116].

5.4.3. Postoperative Surveillance

MRI with gadolinium contrast is the standard for determining postoperative tumor growth and tumor response [117]. CNN-based techniques determine tumor size more accurately than linear measurement methods. The ability of CNN models to differentiate true progression from pseudo-progression, and of ML algorithms to differentiate radiation necrosis from tumor recurrence, is revolutionary [109,110,118]. Additionally, CNNs and SVMs create superior models for predicting treatment response and survival outcomes from clinical, imaging, genetic, and molecular marker data [26].

6. Precision and Personalized Medicine

AI has moved oncology towards an era of personalized treatment, with remarkable contributions to oncologic drug development, clinical decision support systems, chemotherapy, immunotherapy, and radiation therapy [43]. AI algorithms have been developed to assess factors such as oncogenic mutation profiles and predicted drug sensitivity, indicating the expected prognosis, efficacy, and adverse effects of a particular treatment option for a patient with a particular cancer [43,119]. In one study, an ML algorithm was designed to predict the effects of chemotherapy drugs, including gemcitabine and taxanes, in relation to patients’ genetic signatures [120]. In another study, an AI-based screening system was developed to detect cancer cells with homologous recombination (HR) defects, which can help narrow down which BC patients would benefit from poly (ADP-ribose) polymerase (PARP) inhibitors [44]. A DL algorithm was used to identify anticancer drugs that inhibit PI3K alpha and tankyrase, promising targets for CRC treatment [121]. ML-based analysis of drug specificity, by examining protein–protein interactions between anticancer drugs and S100A9, a calcium-binding protein, suggests a potential therapeutic target for CRC [122]. These ML-driven avenues for discovering new targeted anticancer therapies are a promising step towards more effective therapeutic options. ML models can also be trained to interpret screening data to predict responses to new drugs or combination therapies [123]. The ability to synthesize and assess large amounts of chemical data also plays a role in cancer drug development by narrowing predictions towards a specific formula, beyond traditional experimental methods; DL systems are currently being explored for this purpose [124,125]. AI trained on clinical big data from cancer patients can generate personalized treatment options based on DL-assessed factors, including a patient’s clinical profile, genetics, cancer type, and cancer stage [126]. Moreover, the application of AI in radiotherapy is quite distinct: AI can help radiologists plan radiation treatment regimens with automation software that is as effective as conventional treatment layouts in a robust, time-effective manner [127,128]. With the growing role of immunotherapy in managing various cancers, ML-based platforms are being trained to predict the therapeutic response to immunotherapy in programmed cell death protein 1 (PD-1)-sensitive advanced solid tumors [129,130]. AI can thus support, and even surpass, human capability in anticancer drug development and aid in creating personalized treatment plans in a time-effective manner.

7. Generalizing Artificial Intelligence, Barriers, and Future Directions

A number of factors challenge the generalizability of AI systems, including possible bias, the need for external validation of AI performance, and the requirement for heterogeneous data and standardized techniques [46].

7.1. AI Performance Interpretation

For AI to perform in clinical practice, it must be both internally and externally validated. In internal validation, the accuracy of AI is compared with expected results when algorithms are tested on data from the same source used during development [131]. Internal validation relies on performance measures such as sensitivity, specificity, and AUC. The problem with interpreting AUC is that it does not consider the clinical context; for instance, different combinations of sensitivity and specificity can yield similar AUCs. To measure AI performance, studies should report AUC along with sensitivities and specificities at clinically relevant thresholds; this is referred to as “net benefit” [132]. For example, high false-positive and false-negative rates continue to be a challenge in DL screening mammography, for which balancing the net benefit would be important [42]. Thus, prior to concluding that an AI system can outperform a human reader, it is important to carefully interpret its diagnostic performance. Furthermore, the sensitivity and specificity of diagnostic tests are independent of real-life disease prevalence. As a result, robust verification of the clinical diagnostic and predictive performance of AI requires external validation, for which a representative patient population and prospectively collected data are necessary [131]. Moreover, internal validation carries the risk of overestimating AI performance when the model becomes too closely fit to its training data, a phenomenon known as overfitting [131]. By setting aside datasets unused during training, including newly recruited patients, and comparing results with those of independent investigators at different sites, it is possible to improve generalizability and minimize overfitting [131]. In a recent study, curated large mammogram screening datasets from the UK and the US revealed a promising path to generalizing AI performance [55].
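The distinction between internal and external validation can be sketched as follows: a model is cross-validated on its development data and then scored, without further tuning, on a separate dataset it never saw; both datasets here are synthetic placeholders for multi-site clinical data:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# "Internal" development data (e.g., one institution)
X_int, y_int = make_classification(n_samples=800, n_features=15, random_state=1)
# "External" data from a different source, never used for training or model selection
X_ext, y_ext = make_classification(n_samples=400, n_features=15, random_state=2)

model = LogisticRegression(max_iter=1000)

# Internal validation: cross-validated AUC on the development data
internal_auc = cross_val_score(model, X_int, y_int, cv=5, scoring="roc_auc").mean()

# External validation: train once on all internal data, then score the external cohort
model.fit(X_int, y_int)
external_auc = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
print(f"internal AUC {internal_auc:.2f} vs external AUC {external_auc:.2f}")
```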

7.2. Standardization of Techniques

A universally applicable AI model must be trained on a large amount of heterogeneous clinical data in order to become generalizable [3,54,107]. AI-based infrastructure and data storage systems are not available at all institutes, which is one of the biggest barriers [133]. There is also a lack of standardization in staining reagents, protocols, and the section thicknesses of radiologic images, which can further hinder the generalizability of AI in clinical practice worldwide [1,54]. A number of automated CNN-based tools, such as HistoQC, Deep Focus, and GAN-based image generators, are being developed by societies such as the American College of Radiology Data Science Institute to standardize image sections [1,91]. In the field of radiomics, another challenge involves compliance with appropriate quality controls, ranging from image processing and feature extraction to the algorithms used for making predictions [134]. There are several emerging initiatives using DL and CNNs to normalize or standardize images, including the “image biomarker standardization” technique [134,135]. ML algorithms are often treated as a “black box” because of a lack of understanding of their inner workings, which can pose a challenge when dealing with regulated healthcare data. This necessitates transparent AI algorithms and careful interpretation of AI-based results to ensure that no mistakes are made [26,136]. A few recently developed methods, such as saliency maps and principal component analysis, are helping to interpret the workings of these algorithms [105,137].

7.3. Bias in Artificial Intelligence

The quality and quantity of data are key factors that determine the performance and objectivity of an ML system. AI can be biased in a number of ways, from the assumptions made by the engineers who develop it to bias in the data used to train it. When training data are derived from a homogeneous population, the resulting models may generalize poorly, which can, for example, potentially exacerbate racial/ethnic disparities [138]. Thus, when training AI, it is important to include diverse ethnic, age, and sex groups, as well as examples of benign and malignant tumors. Similarly, to integrate precision medicine and AI in real-world clinical settings, it is necessary to consider environmental factors, limitations of care in resource-poor locations, and co-morbidities [139]. Bias can also be introduced when radiologists’ opinions are regarded as the “gold standard” rather than the actual ground truth or the absolute outcome of the case, benign or malignant [46]. For example, several AI models in screening mammography are compared with radiologists instead of the gold-standard biopsy results, introducing bias [46]. To overcome this problem, including interval cancers in testing sets and relying on reports from experienced radiologists might be helpful.

7.4. Ethical and Legal Perspectives

Creating future models that address the ethical issues and challenges of incorporating AI into preexisting systems requires an awareness of these issues. A few bodies, such as the UK Department of Health and Social Care, the US Food and Drug Administration, and other global partnerships, oversee and regulate the use of AI in medicine [46,140]. The National Health Service (NHS) Trusts in the United Kingdom regulate the use of patient care data for AI, in an anonymized format, for research purposes [46]. For AI in oncology to achieve global standardization, more international organizations must be formed that can oversee future AI studies within ethical and legal boundaries to protect patient privacy.

8. Integrative Training of Computer Science and Medical Professionals

For AI to be effectively integrated into healthcare in general, and into oncology in particular, formal training of medical professionals and researchers will be critical. Numerous societies and reviews have recommended formal training, but current medical education and health informatics standards do not include mandatory AI education, and competency standards have yet to be established [141,142]. There have been efforts in the radiology community to determine students’ opinions about AI applications in radiology in order to develop formal training tools. Among these are frameworks for teaching, principles for regulating the use of AI tools, special training for evaluating AI technology, and integrating computer science, health informatics, and statistics curricula into medical school [143,144,145]. A few institutes in the United States have proposed initiatives for AI in medical education, originally put forward by the American Medical Association. Among these initiatives are medical students working with data specialists, radiology residents working with technology companies to develop computer-aided detection in mammography, summer courses offered by scientists or engineers on new technologies, and involving medical students in engineering labs to create innovative ideas in health care [136]. Another framework would provide AI training for students in various fields, including medical students, health informatics students, and computer science students [142]. To improve patient care, medical students should become proficient in interpreting AI technologies, comparing efficiency in patient care, and discussing ethical issues related to using AI tools [142]. Furthermore, medical professionals should understand the limitations and barriers of AI in clinical applications, as well as how to distinguish correct from incorrect information [146,147]. In health informatics, students should be taught how to apply appropriate ML algorithms to analyze complicated medical data, integrate data analytics, and formulate questions to visualize large data sets. Students studying computer science should be trained in Python, R, and SQL programming in order to solve complex medical problems [142]. Educational tools that integrate medical professionals, health informatics students, and computer science students can pave the way for further developments in the fields of medicine and oncology.

9. Conclusions

Through AI, computer systems are capable of learning tasks and predicting outcomes without being explicitly programmed. DL, a subset of ML, utilizes neural networks and enables the learning of complex, non-linear functions from data. CNNs are well suited to processing two- or three-dimensional inputs such as images, while RNNs can handle sequential inputs of variable length such as text. Recently developed attention-based DL systems are capable of selectively focusing on parts of the input data, resulting in better cancer detection accuracy. AI has shown promising results in several areas of oncology, including detection and classification, molecular characterization of tumors, cancer genetics, drug discovery, prediction of treatment outcomes and survival, and the move towards personalized medicine. In screening mammography, various DL models have demonstrated non-inferior cancer detection performance, with overall sensitivity rates of 88–96%. Radiologists using AI-assisted systems have achieved higher AUCs and reduced workloads. Real-time CADe and CADx AI systems have demonstrated higher ADRs by automating polyp detection and detecting diminutive polyps during colonoscopy. The use of such systems to improve early cancer detection on screening mammograms and colonoscopies has the potential to be tested across the globe for more efficient patient care. Several AI-based cancer detection methods have also been developed for other cancer types, including lung, prostate, and cervical cancer; implementing AI worldwide across all cancer types is a feasible future objective.
CNS tumors such as GBM continue to have a poor prognosis. AI-based radiomics enables noninvasive tumor characterization, allowing classification and grading of tumors within minutes, and is widely used in CNS tumor identification and grading. State-of-the-art attention-based Transformers are currently being studied to improve glioma classification. Analysis of histopathological, genetic, and molecular markers can also be made easier with AI. With the advancement of AI, oncology has moved into a more personalized era: AI has transformed drug development, clinical decision support systems, chemotherapy, immunotherapy, and radiotherapy.
A better understanding of AI performance interpretation, standardization of techniques, identification and correction of bias, and the ethical implications of AI use is required for more reliable, accurate, and generalizable AI models. Global organizations must be formed to provide guidance and regulation of AI in oncology. Formal integrated training for medical, health informatics, and computer science students could drive further advances in AI in medicine and oncology.

Author Contributions

N.V., V.R. and U.S. were involved in the literature search and writing the manuscript. K.G. and K.R. contributed to the literature search. S.R.S. conceptualized the idea, was involved in writing, review, and revision of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research study received no external funding.

Conflicts of Interest

All authors declare that they have no conflict of interest.

References

  1. Shimizu, H.; Nakayama, K.I. Artificial intelligence in oncology. Cancer Sci. 2020, 111, 1452–1460.
  2. Rodriguez-Ruiz, A.; Lång, K.; Gubern-Merida, A.; Teuwen, J.; Broeders, M.; Gennaro, G.; Clauser, P.; Helbich, T.H.; Chevalier, M.; Mertelmeier, T.; et al. Can we reduce the workload of mammographic screening by automatic identification of normal exams with artificial intelligence? A feasibility study. Eur. Radiol. 2019, 29, 4825–4832.
  3. Aneja, S.; Chang, E.; Omuro, A. Applications of artificial intelligence in neuro-oncology. Curr. Opin. Neurol. 2019, 32, 850–856.
  4. Wang, P.; Berzin, T.M.; Glissen Brown, J.R.; Bharadwaj, S.; Becq, A.; Xiao, X.; Liu, P.; Li, L.; Song, Y.; Zhang, D.; et al. Real-time automatic detection system increases colonoscopic polyp and adenoma detection rates: A prospective randomised controlled study. Gut 2019, 68, 1813–1819.
  5. Yala, A.; Lehman, C.; Schuster, T.; Portnoi, T.; Barzilay, R. A deep learning mammography-based model for improved breast cancer risk prediction. Radiology 2019, 292, 60–66.
  6. Akselrod-Ballin, A.; Chorev, M.; Shoshan, Y.; Spiro, A.; Hazan, A.; Melamed, R.; Barkan, E.; Herzel, E.; Naor, S.; Karavani, E.; et al. Predicting breast cancer by applying deep learning to linked health records and mammograms. Radiology 2019, 292, 331–342.
  7. Raya-Povedano, J.L.; Romero-Martín, S.; Elías-Cabot, E.; Gubern-Mérida, A.; Rodríguez-Ruiz, A.; Álvarez-Benito, M. AI-based Strategies to Reduce Workload in Breast Cancer Screening with Mammography and Tomosynthesis: A Retrospective Evaluation. Radiology 2021, 300, 57–65.
  8. Lui, T.K.L.; Guo, C.G.; Leung, W.K. Accuracy of artificial intelligence on histology prediction and detection of colorectal polyps: A systematic review and meta-analysis. Gastrointest. Endosc. 2020, 92, 11–22.e6.
  9. Korbar, B.; Olofson, A.; Miraflor, A.; Nicka, C.; Suriawinata, M.; Torresani, L.; Suriawinata, A.; Hassanpour, S. Deep learning for classification of colorectal polyps on whole-slide images. J. Pathol. Inform. 2017, 8, 30.
  10. Sena, P.; Fioresi, R.; Faglioni, F.; Losi, L.; Faglioni, G.; Roncucci, L. Deep learning techniques for detecting preneoplastic and neoplastic lesions in human colorectal histological images. Oncol. Lett. 2019, 18, 6101–6107.
  11. Gates, T.J. Screening for cancer: Evaluating the evidence. Am. Fam. Physician 2001, 63, 513–522.
  12. Pinsky, P.F. Lung cancer screening with low-dose CT: A world-wide view. Transl. Lung Cancer Res. 2018, 7, 234–242.
  13. Rajkomar, A.; Dean, J.; Kohane, I. Machine Learning in Medicine. N. Engl. J. Med. 2019, 380, 1347–1358.
  14. Schaffter, T.; Buist, D.S.M.; Lee, C.I.; Nikulin, Y.; Ribli, D.; Guan, Y.; Lotter, W.; Jie, Z.; Beng, H.D.; Wang, S.; et al. Evaluation of Combined Artificial Intelligence and Radiologist Assessment to Interpret Screening Mammograms. JAMA Netw. Open 2020, 3, e200265.
  15. Rasouli, P.; Moghadam, A.D.; Eslami, P.; Pasha, M.A.; Aghdaei, H.A.; Mehrvar, A.; Nezami-Asl, A.; Iravani, S.; Sadeghi, A.; Zali, M.R. The role of artificial intelligence in colon polyps detection. Gastroenterol. Hepatol. Bed Bench 2020, 13, 191–199.
  16. Jordan, M.I.; Mitchell, T.M. Machine learning: Trends, perspectives, and prospects. Science 2015, 349, 255–260.
  17. Mitchell, T. Machine Learning; McGraw Hill: New York, NY, USA, 1997; pp. 870–877.
  18. Boser, B.E.; Guyon, I.M.; Vapnik, V.N. Training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, Pittsburgh, PA, USA, 27–29 July 1992; pp. 144–152.
  19. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
  20. Fix, E.; Hodges, J.L. Discriminatory Analysis. Nonparametric Discrimination: Consistency Properties. Int. Stat. Rev./Rev. Int. Stat. 1989, 57, 238.
  21. Cover, T.M.; Hart, P.E. Nearest Neighbor Pattern Classification. IEEE Trans. Inf. Theory 1967, 13, 21–27.
  22. Hartigan, J.A.; Wong, M.A. Algorithm AS 136: A K-Means Clustering Algorithm. J. R. Stat. Soc. Ser. C Appl. Stat. 1979, 28, 100–108.
  23. Macqueen, J. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 21 June–18 July 1965 and 27 December 1965–7 January 1966; University of California Press: Berkeley, CA, USA, 1967; Volume 1, pp. 281–297.
  24. Frey, B.J.; Dueck, D. Clustering by passing messages between data points. Science 2007, 315, 972–976.
  25. Clarke, M.R.B.; Duda, R.O.; Hart, P.E. Pattern Classification and Scene Analysis. J. R. Stat. Soc. Ser. A 1974, 137, 442.
  26. Daisy, P.S.; Anitha, T.S. Can artificial intelligence overtake human intelligence on the bumpy road towards glioma therapy? Med. Oncol. 2021, 38, 53.
  27. Lecun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
  28. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536.
  29. Robbins, H.; Monro, S. A Stochastic Approximation Method. Ann. Math. Stat. 1951, 22, 400–407.
  30. Kiefer, J.; Wolfowitz, J. Stochastic Estimation of the Maximum of a Regression Function. Ann. Math. Stat. 1952, 23, 462–466.
  31. Vaz, J.M.; Balaji, S. Convolutional neural networks (CNNs): Concepts and applications in pharmacogenomics. Mol. Divers. 2021, 25, 1569–1584.
  32. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Proceedings of the 18th International Conference, Munich, Germany, 5–9 October 2015; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015; Volume 9351, pp. 234–241.
  33. Milletari, F.; Navab, N.; Ahmadi, S.A. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 4th International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571.
  34. Reich, C.; Prangemeier, T.; Cetin, Ö.; Koeppl, H. OSS-Net: Memory Efficient High Resolution Semantic Segmentation of 3D Medical Data. arXiv 2021, arXiv:2110.10640.
  35. Mescheder, L.; Oechsle, M.; Niemeyer, M.; Nowozin, S.; Geiger, A. Occupancy networks: Learning 3D reconstruction in function space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 4455–4465.
  36. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780.
  37. Luong, M.T.; Pham, H.; Manning, C.D. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17–21 September 2015; pp. 1412–1421.
  38. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems 30, Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; Curran Associates: Red Hook, NY, USA, 2017; pp. 5999–6009.
  39. Prangemeier, T.; Reich, C.; Koeppl, H. Attention-Based Transformers for Instance Segmentation of Cells in Microstructures. In Proceedings of the 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Seoul, Korea, 16–19 December 2020; pp. 700–707.
  40. National Cancer Institute. Cancer Statistics. Available online: https://www.cancer.gov/about-cancer/understanding/statistics (accessed on 28 January 2022).
  41. DeSantis, C.E.; Ma, J.; Gaudet, M.M.; Newman, L.A.; Miller, K.D.; Goding Sauer, A.; Jemal, A.; Siegel, R.L. Breast cancer statistics, 2019. CA Cancer J. Clin. 2019, 69, 438–451.
  42. Batchu, S.; Liu, F.; Amireh, A.; Waller, J.; Umair, M. A Review of Applications of Machine Learning in Mammography and Future Challenges. Oncology 2021, 99, 483–490.
  43. Liang, G.; Fan, W.; Luo, H.; Zhu, X. The emerging roles of artificial intelligence in cancer drug development and precision therapy. Biomed. Pharmacother. 2020, 128, 110255.
  44. Gulhan, D.C.; Lee, J.J.K.; Melloni, G.E.M.; Cortés-Ciriano, I.; Park, P.J. Detecting the mutational signature of homologous recombination deficiency in clinical samples. Nat. Genet. 2019, 51, 912–919.
  45. Houssami, N.; Kirkpatrick-Jones, G.; Noguchi, N.; Lee, C.I. Artificial Intelligence (AI) for the early detection of breast cancer: A scoping review to assess AI’s potential in breast screening practice. Expert Rev. Med. Devices 2019, 16, 351–362. [Google Scholar] [CrossRef]
  46. Hickman, S.E.; Baxter, G.C.; Gilbert, F.J. Adoption of artificial intelligence in breast imaging: Evaluation, ethical constraints and limitations. Br. J. Cancer 2021, 125, 15–22. [Google Scholar] [CrossRef]
  47. Agnes, S.A.; Anitha, J.; Pandian, S.I.A.; Peter, J.D. Classification of Mammogram Images Using Multiscale all Convolutional Neural Network (MA-CNN). J. Med. Syst. 2020, 44, 30. [Google Scholar] [CrossRef]
  48. Rodriguez-Ruiz, A.; Lång, K.; Gubern-Merida, A.; Broeders, M.; Gennaro, G.; Clauser, P.; Helbich, T.H.; Chevalier, M.; Tan, T.; Mertelmeier, T.; et al. Stand-Alone Artificial Intelligence for Breast Cancer Detection in Mammography: Comparison With 101 Radiologists. J. Natl. Cancer Inst. 2019, 111, 916–922. [Google Scholar] [CrossRef]
  49. Al-antari, M.A.; Al-masni, M.A.; Kim, T.S. Deep Learning Computer-Aided Diagnosis for Breast Lesion in Digital Mammogram. Adv. Exp. Med. Biol. 2020, 1213, 59–72. [Google Scholar]
  50. Aboutalib, S.S.; Mohamed, A.A.; Berg, W.A.; Zuley, M.L.; Sumkin, J.H.; Wu, S. Deep learning to distinguish recalled but benign mammography images in breast cancer screening. Clin. Cancer Res. 2018, 24, 5902–5909. [Google Scholar] [CrossRef] [Green Version]
  51. Yala, A.; Schuster, T.; Miles, R.; Barzilay, R.; Lehman, C. A deep learning model to triage screening mammograms: A simulation study. Radiology 2019, 293, 38–46. [Google Scholar] [CrossRef] [PubMed]
  52. Watanabe, A.T.; Lim, V.; Vu, H.X.; Chim, R.; Weise, E.; Liu, J.; Bradley, W.G.; Comstock, C.E. Improved Cancer Detection Using Artificial Intelligence: A Retrospective Evaluation of Missed Cancers on Mammography. J. Digit. Imaging 2019, 32, 625–637. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  53. Rodríguez-Ruiz, A.; Krupinski, E.; Mordang, J.J.; Schilling, K.; Heywang-Köbrunner, S.H.; Sechopoulos, I.; Mann, R.M. Detection of breast cancer with mammography: Effect of an artificial intelligence support system. Radiology 2019, 290, 305–314. [Google Scholar] [CrossRef] [PubMed]
  54. Chan, H.P.; Samala, R.K.; Hadjiiski, L.M. CAD and AI for breast cancer—Recent development and challenges. Br. J. Radiol. 2020, 93, 20190580. [Google Scholar] [CrossRef]
  55. McKinney, S.M.; Sieniek, M.; Godbole, V.; Godwin, J.; Antropova, N.; Ashrafian, H.; Back, T.; Chesus, M.; Corrado, G.C.; Darzi, A.; et al. International evaluation of an AI system for breast cancer screening. Nature 2020, 577, 89–94. [Google Scholar] [CrossRef]
  56. Zeng, J.; Gimenez, F.; Burnside, E.S.; Rubin, D.L.; Shachter, R. A Probabilistic Model to Support Radiologists’ Classification Decisions in Mammography Practice. Med. Decis. Mak. 2019, 39, 208–216. [Google Scholar] [CrossRef]
  57. Mayo, R.C.; Leung, J.W.T. Impact of artificial intelligence on women’s imaging: Cost-benefit analysis. Am. J. Roentgenol. 2019, 212, 1172–1173. [Google Scholar] [CrossRef]
  58. Zhang, X.; Zhang, Y.; Han, E.Y.; Jacobs, N.; Han, Q.; Wang, X.; Liu, J. Classification of whole mammogram and tomosynthesis images using deep convolutional neural networks. IEEE Trans. Nanobiosci. 2018, 17, 237–242. [Google Scholar] [CrossRef]
  59. Gao, F.; Wu, T.; Li, J.; Zheng, B.; Ruan, L.; Shang, D.; Patel, B. SD-CNN: A shallow-deep CNN for improved breast cancer diagnosis. Comput. Med. Imaging Graph. 2018, 70, 53–62. [Google Scholar]
  60. Tagliafico, A.S.; Piana, M.; Schenone, D.; Lai, R.; Massone, A.M.; Houssami, N. Overview of radiomics in breast cancer diagnosis and prognostication. Breast 2020, 49, 74–80. [Google Scholar] [CrossRef] [Green Version]
  61. Lundin, M.; Lundin, J.; Burke, H.B.; Toikkanen, S.; Pylkkänen, L.; Joensuu, H. Artificial neural networks applied to survival prediction in breast cancer. Oncology 1999, 57, 281–286. [Google Scholar] [CrossRef] [PubMed]
  62. Nartowt, B.J.; Hart, G.R.; Muhammad, W.; Liang, Y.; Stark, G.F.; Deng, J. Robust Machine Learning for Colorectal Cancer Risk Prediction and Stratification. Front. Big Data 2020, 3, 6. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  63. Shaukat, A.; Kahi, C.J.; Burke, C.A.; Rabeneck, L.; Sauer, B.G.; Rex, D.K. ACG Clinical Guidelines: Colorectal Cancer Screening 2021. Am. J. Gastroenterol. 2021, 116, 458–479. [Google Scholar] [CrossRef] [PubMed]
  64. Hilsden, R.J.; Heitman, S.J.; Mizrahi, B.; Narod, S.A.; Goshen, R. Prediction of findings at screening colonoscopy using a machine learning algorithm based on complete blood counts (ColonFlag). PLoS ONE 2018, 13, e0207848. [Google Scholar] [CrossRef]
  65. Kinar, Y.; Kalkstein, N.; Akiva, P.; Levin, B.; Half, E.E.; Goldshtein, I.; Chodick, G.; Shalev, V. Development and validation of a predictive model for detection of colorectal cancer in primary care by analysis of complete blood counts: A binational retrospective study. J. Am. Med. Inform. Assoc. 2016, 23, 879–890. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  66. Corley, D.A.; Jensen, C.D.; Marks, A.R.; Zhao, W.K.; Lee, J.K.; Doubeni, C.A.; Zauber, A.G.; de Boer, J.; Fireman, B.H.; Schottinger, J.E.; et al. Adenoma Detection Rate and Risk of Colorectal Cancer and Death. N. Engl. J. Med. 2014, 370, 1298–1306. [Google Scholar] [CrossRef] [Green Version]
  67. Coe, S.G.; Wallace, M.B. Assessment of adenoma detection rate benchmarks in women versus men. Gastrointest. Endosc. 2013, 77, 631–635. [Google Scholar] [CrossRef]
  68. Mori, Y.; Kudo, S.E.; Berzin, T.M.; Misawa, M.; Takeda, K. Computer-aided diagnosis for colonoscopy. Endoscopy 2017, 49, 813–819. [Google Scholar] [CrossRef] [Green Version]
  69. Nazarian, S.; Glover, B.; Ashrafian, H.; Darzi, A.; Teare, J. Diagnostic accuracy of artificial intelligence and computer-aided diagnosis for the detection and characterization of colorectal polyps: Systematic review and meta-analysis. J. Med. Internet Res. 2021, 23, e27370. [Google Scholar] [CrossRef]
  70. Song, B.; Zhang, G.; Lu, H.; Wang, H.; Zhu, W.; Pickhardt, P.J.; Liang, Z. Volumetric texture features from higher-order images for diagnosis of colon lesions via CT colonography. Int. J. Comput. Assist. Radiol. Surg. 2014, 9, 1021–1031. [Google Scholar] [CrossRef]
  71. Grosu, S.; Wesp, P.; Graser, A.; Maurus, S.; Schulz, C.; Knösel, T.; Cyran, C.C.; Ricke, J.; Ingrisch, M.; Kazmierczak, P.M. Machine learning-based differentiation of benign and premalignant colorectal polyps detected with CT colonography in an asymptomatic screening population: A proof-of-concept study. Radiology 2021, 299, 326–335. [Google Scholar] [CrossRef] [PubMed]
  72. Taylor, S.A.; Iinuma, G.; Saito, Y.; Zhang, J.; Halligan, S. CT colonography: Computer-aided detection of morphologically flat T1 colonic carcinoma. Eur. Radiol. 2008, 18, 1666–1673. [Google Scholar] [CrossRef] [PubMed]
  73. Yuan, Y.; Meng, M.Q.H. Deep learning for polyp recognition in wireless capsule endoscopy images. Med. Phys. 2017, 44, 1379–1389. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  74. Biglarian, A.; Bakhshi, E.; Gohari, M.R.; Khodabakhshi, R. Artificial neural network for prediction of distant metastasis in colorectal cancer. Asian Pac. J. Cancer Prev. 2012, 13, 927–930. [Google Scholar] [CrossRef] [Green Version]
  75. Lu, Y.; Yu, Q.; Gao, Y.; Zhou, Y.; Liu, G.; Dong, Q.; Ma, J.; Ding, L.; Yao, H.; Zhang, Z.; et al. Identification of metastatic lymph nodes in MR imaging with faster region-based convolutional neural networks. Cancer Res. 2018, 78, 5135–5143. [Google Scholar] [CrossRef] [Green Version]
  76. Trebeschi, S.; Van Griethuysen, J.J.M.; Lambregts, D.M.J.; Lahaye, M.J.; Parmer, C.; Bakers, F.C.H.; Peters, N.H.G.M.; Beets-Tan, R.G.H.; Aerts, H.J.W.L. Deep Learning for Fully-Automated Localization and Segmentation of Rectal Cancer on Multiparametric, M.R. Sci. Rep. 2017, 7, 5301. [Google Scholar] [CrossRef]
  77. Lieberman, D.A.; Rex, D.K.; Winawer, S.J.; Giardiello, F.M.; Johnson, D.A.; Levin, T.R. Guidelines for colonoscopy surveillance after screening and polypectomy: A consensus update by the us multi-society task force on colorectal cancer. Gastroenterology 2012, 143, 844–857. [Google Scholar] [CrossRef] [Green Version]
  78. Yoon, H.; Lee, J.; Oh, J.E.; Kim, H.R.; Lee, S.; Chang, H.J.; Sohn, D.K. Tumor Identification in Colorectal Histology Images Using a Convolutional Neural Network. J. Digit. Imaging 2019, 32, 131–140. [Google Scholar] [CrossRef]
  79. Zhang, X.; Yang, Y.; Wang, Y.; Fan, Q. Detection of the BRAF V600E mutation in colorectal cancer by NIR spectroscopy in conjunction with counter propagation artificial neural network. Molecules 2019, 24, 2238. [Google Scholar] [CrossRef] [Green Version]
  80. Galamb, O.; Barták, B.K.; Kalmár, A.; Nagy, Z.B.; Szigeti, K.A.; Tulassay, Z.; Igaz, P.; Molnár, B. Diagnostic and prognostic potential of tissue and circulating long non-coding RNAs in colorectal tumors. World J. Gastroenterol. 2019, 25, 5026–5048. [Google Scholar] [CrossRef]
  81. Wang, Q.; Wei, J.; Chen, Z.; Zhang, T.; Zhong, J.; Zhong, B.; Yang, P.; Li, W.; Cao, J. Establishment of multiple diagnosis models for colorectal cancer with artificial neural networks. Oncol. Lett. 2019, 17, 3314–3322. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  82. Hu, H.P.; Niu, Z.J.; Bai, Y.P.; Tan, X.H. Cancer classification based on gene expression using neural networks. Genet. Mol. Res. 2015, 14, 17605–17611. [Google Scholar] [CrossRef] [PubMed]
  83. Chang, K.H.; Miller, N.; Kheirelseid, E.A.H.; Lemetre, C.; Ball, G.R.; Smith, M.J.; Regan, M.; McAnena, O.J.; Kerin, M.J. MicroRNA signature analysis in colorectal cancer: Identification of expression profiles in stage II tumors associated with aggressive disease. Int. J. Colorectal Dis. 2011, 26, 1415–1422. [Google Scholar] [CrossRef] [PubMed]
  84. Amirkhah, R.; Farazmand, A.; Gupta, S.K.; Ahmadi, H.; Wolkenhauer, O.; Schmitz, U. Naïve Bayes classifier predicts functional microRNA target interactions in colorectal cancer. Mol. Biosyst. 2015, 11, 2126–2134. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  85. Herreros-Villanueva, M.; Duran-Sanchon, S.; Martín, A.C.; Pérez-Palacios, R.; Vila-Navarro, E.; Marcuello, M.; Diaz-Centeno, M.; Cubiella, J.; Diez, M.S.; Bujanda, L.; et al. Plasma MicroRNA Signature Validation for Early Detection of Colorectal Cancer. Clin. Transl. Gastroenterol. 2019, 10, e00003. [Google Scholar] [CrossRef]
  86. Xuan, P.; Dong, Y.; Guo, Y.; Zhang, T.; Liu, Y. Dual convolutional neural network based method for predicting disease-related miRNAs. Int. J. Mol. Sci. 2018, 19, 3732. [Google Scholar] [CrossRef] [Green Version]
  87. Gupta, P.; Gulzar, Z.; Hsieh, B.; Lim, A.; Watson, D.; Mei, R. Analytical validation of the CellMax platform for early detection of cancer by enumeration of rare circulating tumor cells. J. Circ. Biomark. 2019, 8, 1849454419899214. [Google Scholar] [CrossRef]
  88. Ivancic, M.M.; Megna, B.W.; Sverchkov, Y.; Craven, M.; Reichelderfer, M.; Pickhardt, P.J.; Sussman, M.R.; Kennedy, G.D. Noninvasive Detection of Colorectal Carcinomas Using Serum Protein Biomarkers. J. Surg. Res. 2020, 246, 160–169. [Google Scholar] [CrossRef]
  89. Hanif, F.; Muzaffar, K.; Perveen, K.; Malhi, S.M.; Simjee, S.U. Glioblastoma multiforme: A review of its epidemiology and pathogenesis through clinical presentation and treatment. Asian Pac. J. Cancer Prev. 2017, 18, 3–9. [Google Scholar]
  90. Brindle, K.M.; Izquierdo-García, J.L.; Lewis, D.Y.; Mair, R.J.; Wright, A.J. Brain tumor imaging. J. Clin. Oncol. 2017, 35, 2432–2438. [Google Scholar] [CrossRef] [Green Version]
  91. Rudie, J.D.; Rauschecker, A.M.; Bryan, R.N.; Davatzikos, C.; Mohan, S. Emerging Applications of Artificial Intelligence in Neuro-Oncology. Radiology 2019, 290, 607–618. [Google Scholar] [CrossRef] [PubMed]
  92. Artzi, M.; Bressler, I.; Ben Bashat, D. Differentiation between glioblastoma, brain metastasis and subtypes using radiomics analysis. J. Magn. Reson. Imaging 2019, 50, 519–528. [Google Scholar] [CrossRef] [PubMed]
  93. Wang, W.; Chen, C.; Ding, M.; Yu, H.; Zha, S.; Li, J. TransBTS: Multimodal Brain Tumor Segmentation Using Transformer. In Medical Image Computing and Computer Assisted Intervention–Miccai 2021, Proceedings of the 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Lecture Notes in Computer Science Series; Springer: Cham, Switzerland, 2021; Volume 12901, pp. 109–119. [Google Scholar]
  94. Aerts, H.J.W.L. The potential of radiomic-based phenotyping in precision medicine: A review. JAMA Oncol. 2016, 2, 1636–1642. [Google Scholar] [CrossRef] [PubMed]
  95. Rizzo, S.; Botta, F.; Raimondi, S.; Origgi, D.; Fanciullo, C.; Morganti, A.G.; Bellomi, M. Radiomics: The facts and the challenges of image analysis. Eur. Radiol. Exp. 2018, 2, 36. [Google Scholar] [CrossRef]
  96. Forghani, R. Precision Digital Oncology: Emerging Role of Radiomics-based Biomarkers and Artificial Intelligence for Advanced Imaging and Characterization of Brain Tumors. Radiol. Imaging Cancer 2020, 2, e190047. [Google Scholar] [CrossRef] [PubMed]
  97. National Cancer Institute. Artificial Intelligence Expedites Brain Tumor Diagnosis. Available online: https://mednar.com/mednar/desktop/en/service/link/track?redirectUrl=https%3A%2F%2Fwww.cancer.gov%2Fnews-events%2Fcancer-currents-blog%2F2020%2Fartificial-intelligence-brain-tumor-diagnosis-surgery&collectionCode=MEDNAR-NCI&searchId=5ee02aa9-a656-481b-bbb7 (accessed on 28 January 2022).
  98. Abdel Razek, A.A.K.; Alksas, A.; Shehata, M.; AbdelKhalek, A.; Abdel Baky, K.; El-Baz, A.; Helmy, E. Clinical applications of artificial intelligence and radiomics in neuro-oncology imaging. Insights Imaging 2021, 12, 152. [Google Scholar] [CrossRef] [PubMed]
  99. Bera, K.; Schalper, K.A.; Rimm, D.L.; Velcheti, V.; Madabhushi, A. Artificial intelligence in digital pathology—New tools for diagnosis and precision oncology. Nat. Rev. Clin. Oncol. 2019, 16, 703–715. [Google Scholar] [CrossRef]
  100. Wu, S.; Meng, J.; Yu, Q.; Li, P.; Fu, S. Radiomics-based machine learning methods for isocitrate dehydrogenase genotype prediction of diffuse gliomas. J. Cancer Res. Clin. Oncol. 2019, 145, 543–550. [Google Scholar] [CrossRef] [Green Version]
  101. Korfiatis, P.; Kline, T.L.; Coufalova, L.; Lachance, D.H.; Parney, I.F.; Carter, R.E.; Buckner, J.C.; Erickson, B.J. MRI texture features as biomarkers to predict MGMT methylation status in glioblastomas. Med. Phys. 2016, 43, 2835–2844. [Google Scholar] [CrossRef]
  102. Li, Y.; Liu, X.; Xu, K.; Qian, Z.; Wang, K.; Fan, X.; Li, S.; Wang, Y.; Jiang, T. MRI features can predict EGFR expression in lower grade gliomas: A voxel-based radiomic analysis. Eur. Radiol. 2018, 28, 356–362. [Google Scholar] [CrossRef]
  103. Chen, X.; Wang, Y.; Yu, J.; Tong, Y.; Shi, Z.; Chen, L.; Chen, H.; Yang, Z. Noninvasive molecular diagnosis of craniopharyngioma with MRI-based radiomics approach. BMC Neurol. 2019, 19, 6. [Google Scholar] [CrossRef] [PubMed]
  104. Houy, N.; Le Grand, F. Personalized oncology with artificial intelligence: The case of temozolomide. Artif. Intell. Med. 2019, 99, 101693. [Google Scholar] [CrossRef] [PubMed]
  105. Chang, P.; Grinband, J.G.; Weinberg, B.D.; Bardis, M.; Bardis, M.; Cadena, G.; Su, M.Y.; Cha, S.; Filippi, C.G.; Bota, D.; et al. Deep-learning convolutional neural networks accurately classify genetic mutations in gliomas. Am. J. Neuroradiol. 2018, 39, 1201–1207. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  106. Tykocinski, E.S.; Grant, R.A.; Kapoor, G.S.; Krejza, J.; Bohman, L.E.; Gocke, T.A.; Chawla, S.; Halpern, C.H.; Lopinto, J.; Melhem, E.R.; et al. Use of magnetic perfusion-weighted imaging to determine epidermal growth factor receptor variant III expression in glioblastoma. Neuro-Oncology 2012, 14, 613–623. [Google Scholar] [CrossRef] [Green Version]
  107. Lohmann, P.; Galldiks, N.; Kocher, M.; Heinzel, A.; Filss, C.P.; Stegmayr, C.; Mottaghy, F.M.; Fink, G.R.; Jon Shah, N.; Langen, K.J. Radiomics in neuro-oncology: Basics, workflow, and applications. Methods 2021, 188, 112–121. [Google Scholar] [CrossRef]
  108. Reardon, D.A.; Galanis, E.; DeGroot, J.F.; Cloughesy, T.F.; Wefel, J.S.; Lamborn, K.R.; Lassman, A.B.; Gilbert, M.R.; Sampson, J.H.; Wick, W.; et al. Clinical trial end points for high-grade glioma: The evolving landscape. Neuro Oncol. 2011, 13, 353–361. [Google Scholar] [CrossRef] [Green Version]
  109. Peng, L.; Parekh, V.; Huang, P.; Lin, D.D.; Sheikh, K.; Baker, B.; Kirschbaum, T.; Silvestri, F.; Son, J.; Robinson, A.; et al. Distinguishing True Progression From Radionecrosis After Stereotactic Radiation Therapy for Brain Metastases with Machine Learning and Radiomics. Int. J. Radiat. Oncol. Biol. Phys. 2018, 102, 1236–1243. [Google Scholar] [CrossRef]
  110. Shaver, M.M.; Kohanteb, P.A.; Chiou, C.; Bardis, M.D.; Chantaduly, C.; Bota, D.; Filippi, C.G.; Weinberg, B.; Grinband, J.; Chow, D.S.; et al. Optimizing neuro-oncology imaging: A review of deep learning approaches for glioma imaging. Cancers 2019, 11, 829. [Google Scholar] [CrossRef] [Green Version]
  111. Cui, S.; Mao, L.; Jiang, J.; Liu, C.; Xiong, S. Automatic semantic segmentation of brain gliomas from MRI images using a deep cascaded neural network. J. Healthc. Eng. 2018, 2018, 4940593. [Google Scholar] [CrossRef]
  112. Blanc-Durand, P.; Van Der Gucht, A.; Schaefer, N.; Itti, E.; Prior, J.O. Automatic lesion detection and segmentation of18F-FET PET in gliomas: A full 3D U-Net convolutional neural network study. PLoS ONE 2018, 13, e0195798. [Google Scholar] [CrossRef]
  113. Hambardzumyan, D.; Bergers, G. Glioblastoma: Defining Tumor Niches. Trends Cancer 2015, 1, 252–265. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  114. Charron, O.; Lallement, A.; Jarnet, D.; Noblet, V.; Clavier, J.B.; Meyer, P. Automatic detection and segmentation of brain metastases on multimodal MR images with a deep convolutional neural network. Comput. Biol. Med. 2018, 95, 43–54. [Google Scholar] [CrossRef] [PubMed]
  115. Liu, Y.; Stojadinovic, S.; Hrycushko, B.; Wardak, Z.; Lau, S.; Lu, W.; Yan, Y.; Jiang, S.B.; Zhen, X.; Timmerman, R.; et al. A deep convolutional neural network-based automatic delineation strategy for multiple brain metastases stereotactic radiosurgery. PLoS ONE 2017, 12, e0185844. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  116. Wang, S.; Kim, S.; Chawla, S.; Wolf, R.L.; Knipp, D.E.; Vossough, A.; O’Rourke, D.M.; Judy, K.D.; Poptani, H.; Melhem, E.R. Differentiation between glioblastomas, solitary brain metastases, and primary cerebral lymphomas using diffusion tensor and dynamic susceptibility contrast-enhanced MR imaging. Am. J. Neuroradiol. 2011, 32, 507–514. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  117. Liu, Y.; Xu, X.; Yin, L.; Zhang, X.; Li, L.; Lu, H. Relationship between glioblastoma heterogeneity and survival time: An MR imaging texture analysis. Am. J. Neuroradiol. 2017, 38, 1695–1701. [Google Scholar] [CrossRef] [Green Version]
  118. Zhang, Z.; Yang, J.; Ho, A.; Jiang, W.; Logan, J.; Wang, X.; Brown, P.D.; McGovern, S.L.; Guha-Thakurta, N.; Ferguson, S.D.; et al. A predictive model for distinguishing radiation necrosis from tumour progression after gamma knife radiosurgery based on radiomic features from MR images. Eur. Radiol. 2018, 28, 2255–2263. [Google Scholar] [CrossRef]
  119. Lind, A.P.; Anderson, P.C. Predicting drug activity against cancer cells by random forest models based on minimal genomic information and chemical properties. PLoS ONE 2019, 14, e0219774. [Google Scholar] [CrossRef] [Green Version]
  120. Dorman, S.N.; Baranova, K.; Knoll, J.H.M.; Urquhart, B.L.; Mariani, G.; Carcangiu, M.L.; Rogan, P.K. Genomic signatures for paclitaxel and gemcitabine resistance in breast cancer derived by machine learning. Mol. Oncol. 2016, 10, 85–100. [Google Scholar] [CrossRef]
  121. Berishvili, V.P.; Voronkov, A.E.; Radchenko, E.V.; Palyulin, V.A. Machine Learning Classification Models to Improve the Docking-based Screening: A Case of PI3K-Tankyrase Inhibitors. Mol. Inform. 2018, 37, 1800030. [Google Scholar] [CrossRef]
  122. Lee, J.; Kumar, S.; Lee, S.Y.; Park, S.J.; Kim, M. Development of predictive models for identifying potential S100A9 inhibitors based on machine learning methods. Front. Chem. 2019, 7, 779. [Google Scholar] [CrossRef]
  123. Sharma, A.; Rani, R. Ensembled machine learning framework for drug sensitivity prediction. IET Syst. Biol. 2020, 14, 39–46. [Google Scholar] [CrossRef] [PubMed]
  124. Vamathevan, J.; Clark, D.; Czodrowski, P.; Dunham, I.; Ferran, E.; Lee, G.; Li, B.; Madabhushi, A.; Shah, P.; Spitzer, M.; et al. Applications of machine learning in drug discovery and development. Nat. Rev. Drug Discov. 2019, 18, 463–477. [Google Scholar] [CrossRef] [PubMed]
  125. Baskin, I.I. The power of deep learning to ligand-based novel drug discovery. Expert Opin. Drug Discov. 2020, 15, 755–764. [Google Scholar] [CrossRef] [PubMed]
  126. Printz, C. Artificial intelligence platform for oncology could assist in treatment decisions. Cancer 2017, 123, 905. [Google Scholar] [CrossRef] [Green Version]
  127. Lou, B.; Doken, S.; Zhuang, T.; Wingerter, D.; Gidwani, M.; Mistry, N.; Ladic, L.; Kamen, A.; Abazeed, M.E. An image-based deep learning framework for individualising radiotherapy dose: A retrospective analysis of outcome prediction. Lancet Digit. Health 2019, 1, e136–e147. [Google Scholar] [CrossRef] [Green Version]
  128. Meyer, P.; Noblet, V.; Mazzara, C.; Lallement, A. Survey on deep learning for radiotherapy. Comput. Biol. Med. 2018, 98, 126–146. [Google Scholar] [CrossRef]
  129. Sun, R.; Limkin, E.J.; Vakalopoulou, M.; Dercle, L.; Champiat, S.; Han, S.R.; Verlingue, L.; Brandao, D.; Lancia, A.; Ammari, S.; et al. A radiomics approach to assess tumour-infiltrating CD8 cells and response to anti-PD-1 or anti-PD-L1 immunotherapy: An imaging biomarker, retrospective multicohort study. Lancet Oncol. 2018, 19, 1180–1191. [Google Scholar] [CrossRef]
  130. Bulik-Sullivan, B.; Busby, J.; Palmer, C.D.; Davis, M.J.; Murphy, T.; Clark, A.; Busby, M.; Duke, F.; Yang, A.; Young, L.; et al. Deep learning using tumor HLA peptide mass spectrometry datasets improves neoantigen identification. Nat. Biotechnol. 2019, 37, 55–71. [Google Scholar] [CrossRef]
  131. Park, S.H.; Han, K. Methodologic guide for evaluating clinical performance and effect of artificial intelligence technology for medical diagnosis and prediction. Radiology 2018, 286, 800–809. [Google Scholar] [CrossRef]
  132. Halligan, S.; Altman, D.G.; Mallett, S. Disadvantages of using the area under the receiver operating characteristic curve to assess imaging tests: A discussion and proposal for an alternative approach. Eur. Radiol. 2015, 25, 932–939. [Google Scholar] [CrossRef] [Green Version]
  133. Halling-Brown, M.D.; Warren, L.M.; Ward, D.; Lewis, E.; Mackenzie, A.; Wallis, M.G.; Wilkinson, L.S.; Given-Wilson, R.M.; McAvinchey, R.; Young, K.C. OPTIMAM mammography image database: A large-scale resource of mammography images and clinical data. Radiol. Artif. Intell. 2021, 3, e200103. [Google Scholar] [CrossRef] [PubMed]
  134. Zwanenburg, A.; Leger, S.; Vallières, M.; Löck, S. Image biomarker standardisation initiative. Radiology 2020, 295, 328–338. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  135. Drozdzal, M.; Chartrand, G.; Vorontsov, E.; Shakeri, M.; Di Jorio, L.; Tang, A.; Romero, A.; Bengio, Y.; Pal, C.; Kadoury, S. Learning normalized inputs for iterative estimation in medical image segmentation. Med. Image Anal. 2018, 44, 1–13. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  136. Paranjape, K.; Schinkel, M.; Panday, R.N.; Car, J.; Nanayakkara, P. Introducing artificial intelligence training in medical education. JMIR Med. Educ. 2019, 5, e16048. [Google Scholar] [CrossRef] [PubMed]
  137. Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Computer Vision—ECCV 2014, Proceedings of the 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Springer: Cham, Switzerland, 2014; Volume 8689, pp. 818–833. [Google Scholar]
  138. Noseworthy, P.A.; Attia, Z.I.; Brewer, L.P.C.; Hayes, S.N.; Yao, X.; Kapa, S.; Friedman, P.A.; Lopez-Jimenez, F. Assessing and Mitigating Bias in Medical Artificial Intelligence: The Effects of Race and Ethnicity on a Deep Learning Model for ECG Analysis. Circ. Arrhythmia Electrophysiol. 2020, 13, e007988. [Google Scholar] [CrossRef] [PubMed]
  139. Johnson, K.B.; Wei, W.Q.; Weeraratne, D.; Frisse, M.E.; Misulis, K.; Rhee, K.; Zhao, J.; Snowdon, J.L. Precision Medicine, AI, and the Future of Personalized Health Care. Clin. Transl. Sci. 2021, 14, 86–93. [Google Scholar] [CrossRef] [PubMed]
  140. Van Ginneken, B.; Schaefer-Prokop, C.M.; Prokop, M. Computer-aided diagnosis: How to move from the laboratory to the clinic. Radiology 2011, 261, 719–732. [Google Scholar] [CrossRef]
  141. Mantas, J.; Ammenwerth, E.; Demiris, G.; Hasman, A.; Haux, R.; Hersh, W.; Hovenga, E.; Lun, K.C.; Marin, H.; Martin-Sanchez, F.; et al. Recommendations of the international medical informatics association (IMIA) on education in biomedical and health informatics. Methods Inf. Med. 2010, 49, 105–120. [Google Scholar]
  142. Hasan Sapci, A.; Aylin Sapci, H. Artificial intelligence education and tools for medical and health informatics students: Systematic review. JMIR Med. Educ. 2020, 6, e19285. [Google Scholar] [CrossRef]
  143. The Royal College of Radiologists. Clinical Radiology Webinars. Available online: https://www.rcr.ac.uk/clinical-radiology/events/webinars (accessed on 28 January 2022).
  144. SFR-IA Group; CERF; French Radiology Community. Artificial intelligence and medical imaging 2018: French Radiology Community white paper. Diagn. Interv. Imaging 2018, 99, 727–742. [Google Scholar] [CrossRef]
  145. Tang, A.; Tam, R.; Cadrin-Chênevert, A.; Guest, W.; Chong, J.; Barfett, J.; Chepelev, L.; Cairns, R.; Mitchell, J.R.; Cicero, M.D.; et al. Canadian Association of Radiologists White Paper on Artificial Intelligence in Radiology. Can. Assoc. Radiol. J. 2018, 69, 120–135. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  146. Park, S.H.; Do, K.H.; Kim, S.; Park, J.H.; Lim, Y.S. What should medical students know about artificial intelligence in medicine? J. Educ. Eval. Health Prof. 2019, 16, 1149130. [Google Scholar] [CrossRef] [PubMed]
  147. Hasan Sapci, A.; Aylin Sapci, H. Teaching hands-on informatics skills to future health informaticians: A competency framework proposal and analysis of health care informatics curricula. JMIR Med. Inform. 2020, 8, e15748. [Google Scholar] [CrossRef] [PubMed]
Figure 1. (a): A neuron, the fundamental computational unit of a neural network, computes the weighted sum of its inputs (X1, X2, X3) and applies a non-linear operation to produce the output (Y). (b): An example of a feedforward neural network with two hidden layers containing five and four neurons, respectively. (c): An example of a convolutional neural network (CNN) applied to classifying a screening mammogram as probably malignant or benign.
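To make the forward pass described in panels (a) and (b) concrete, the following minimal NumPy sketch is illustrative only: the random weight values, sigmoid activation, and single output neuron are assumptions chosen for demonstration, not the implementation of any model discussed in this review. It computes a weighted sum followed by a non-linearity for one neuron and then chains two hidden layers of five and four neurons, mirroring the figure.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Non-linear operation applied after the weighted sum
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    # Figure 1a: weighted sum of inputs X1, X2, X3 plus bias, then non-linearity -> Y
    return sigmoid(np.dot(w, x) + b)

def dense_layer(x, W, b):
    # A layer is a set of neurons sharing the same input vector
    return sigmoid(W @ x + b)

# Figure 1b: 3 inputs -> hidden layer of 5 neurons -> hidden layer of 4 neurons -> 1 output
x = np.array([0.2, 0.5, 0.1])                   # inputs X1, X2, X3 (arbitrary example values)
W1, b1 = rng.normal(size=(5, 3)), np.zeros(5)   # first hidden layer (five neurons)
W2, b2 = rng.normal(size=(4, 5)), np.zeros(4)   # second hidden layer (four neurons)
W3, b3 = rng.normal(size=(1, 4)), np.zeros(1)   # output neuron Y

h1 = dense_layer(x, W1, b1)
h2 = dense_layer(h1, W2, b2)
y = dense_layer(h2, W3, b3)
print(y)  # untrained output; training would fit the weights to labeled data
```

In practice, the weight matrices would be learned from labeled images (for example with gradient-based optimization in a deep learning framework) rather than drawn at random, and a CNN as in panel (c) would replace the fully connected layers with learned convolutional filters.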
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
