Introduction

Meningiomas are the most common primary intracranial tumor in adults and are more frequent in middle-aged women [1]. The average age-adjusted yearly incidence rate is 7.86 cases per 100,000 individuals, a figure that has increased over the past 30 years owing to improvements in diagnostic imaging [2]. Magnetic resonance imaging (MRI) is the modality of choice for their radiological diagnosis and follow-up, whereas computed tomography (CT) is used when patients cannot undergo MRI. The 2016 World Health Organization (WHO) classification of central nervous system tumors grades meningiomas into three groups: grade I (slowly growing tumors), grade II (atypical meningioma), and grade III (anaplastic or malignant meningioma) [3]. Among these, grade II and III meningiomas are associated with high rates of recurrence and premature mortality [4]. Although conventional imaging is usually reliable for meningioma evaluation, it still presents some limitations, particularly in determining pathological grade from preoperative scans [5].

The term radiomics includes different quantitative radiological image analysis techniques, ranging from first-order statistics to texture analysis [6]. These produce large amounts of data that can be challenging to process with classical statistical methods but may contribute novel imaging biomarkers. Machine learning (ML), a subfield of artificial intelligence, has seen growing interest in medicine and especially in radiology for numerous applications [7,8,9,10]. In particular, supervised learning, based on labeling of data by an expert, is mainly employed for classification and regression tasks. Among the promises of ML for clinical practice are the automatic detection and characterization of lesions and the possibility of predicting response to therapy and risk of recurrence [11,12,13]. In neuroradiology, ML has shown good results in different applications, especially in the field of neuro-oncology [14,15,16]. In recent years, the number of published investigations based on these techniques has grown to the point where data pooling is possible, potentially achieving higher levels of evidence through systematic reviews and/or meta-analyses.

The aim of this systematic review is to analyze the methodological quality of prospective and retrospective studies published on radiomics analyses of intracranial meningiomas. Furthermore, a meta-analysis of the studies employing ML algorithms for the preoperative MRI assessment of meningioma grading was performed.

Materials and methods

Literature search

The PRISMA-DTA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Diagnostic Test Accuracy) statement was used for this systematic review [17]. Primary publications in English using radiomics and/or ML in MRI exams of meningioma patients, published between 01/01/2000 and 30/06/2020, were searched for on multiple electronic databases (PubMed, Scopus, and Web of Science). The search terms consisted of machine learning OR artificial intelligence OR radiomics OR texture AND meningioma; the detailed search string is presented in the supplementary materials.

Two researchers determined the eligibility of the articles through title and abstract evaluation. Case reports, non-original investigations (e.g., editorials, letters, reviews), and studies not focused on the topic of interest were excluded. The full texts of articles in which radiomics was employed on CT or MRI images of intracranial meningiomas were obtained for further evaluation. The reference lists of included studies were also screened for potentially eligible articles, and those evaluating the grading of meningioma through ML were selected for the meta-analysis.

Data collection and study evaluation

The radiomics quality score (RQS) was used to evaluate the methodological quality of the studies included in the systematic review, whereas the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool was used to assess the risk of bias of the studies included in the meta-analysis [18, 19]. For studies included in the meta-analysis, predictive accuracy was quantified using the area under the curve (AUC) from receiver operating characteristic (ROC) analysis [20]. The numbers of low-grade (grade I) and high-grade (grade II–III) lesions used to test the model, the source of the dataset, the MRI sequences employed to extract the features, the ML algorithm, and the type of validation were also recorded.

The RQS is a tool developed to assess the methodological quality of studies using radiomics. It evaluates image acquisition, radiomics feature extraction, data modeling, model validation, and data sharing. Each of its 16 items is rated, and the summed total score ranges from −8 to 36; this total is converted to a percentage score where any total from −8 to 0 is defined as 0% and 36 as 100% [18] (Table 1). Three readers with previous experience in radiomics independently assigned an RQS score to each article included in the systematic review.
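As a worked illustration (not part of the original RQS publication), the raw-total-to-percentage conversion described above can be written as:

```python
def rqs_percentage(total_score: float) -> float:
    """Convert a raw RQS total (range -8 to 36) to the percentage scale,
    where any total of 0 or below maps to 0% and 36 maps to 100%."""
    if not -8 <= total_score <= 36:
        raise ValueError("RQS total must lie between -8 and 36")
    return max(total_score, 0.0) / 36 * 100

# The mean total score reported in this review (6.96) corresponds to ~19%
print(round(rqs_percentage(6.96)))
```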

Table 1 Overview of radiomics quality score items and mode of the respective scores in the reviewed studies

The QUADAS-2 evaluates the risk of bias in different domains (“patient selection,” “index test,” “reference standard,” and “flow and timing”) and can be personalized according to the specific research question [21]. It was assessed in consensus by two readers for each of the studies selected for the meta-analysis.

Statistical analysis

Continuous variables are presented as mean ± standard deviation. Following previous experiences both with the RQS and with other scoring systems [22, 23], inter-reader reproducibility was evaluated by calculating the intraclass correlation coefficient (ICC) for the total RQS score obtained by each study. In accordance with recent guidelines, a two-way, random-effects, single-rater, absolute agreement ICC model was used [24]. For the remaining descriptive statistics, the RQS score assigned by the most expert reader is reported.
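A minimal sketch of the ICC model used here, two-way random effects, single rater, absolute agreement (McGraw and Wong's ICC(A,1)), computed from the ANOVA mean squares; the toy ratings in the usage line are hypothetical, not the study's data:

```python
def icc2_1(ratings):
    """ICC(2,1): two-way random-effects, single-rater, absolute-agreement
    intraclass correlation. `ratings` has one row per subject (here,
    article) and one column per rater (here, reader)."""
    n = len(ratings)          # subjects
    k = len(ratings[0])       # raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_rows = ss_rows / (n - 1)                       # between-subjects
    ms_cols = ss_cols / (k - 1)                       # between-raters
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical scores from 4 articles rated by 3 readers
print(round(icc2_1([[9, 10, 8], [5, 6, 5], [14, 13, 15], [2, 3, 2]]), 2))
```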

Regarding the meta-analysis, the AUC standard error was calculated from the total numbers of positive and negative meningioma patients. Statistical heterogeneity was assessed with the I2 statistic, which describes the percentage of variation across studies that is due to heterogeneity rather than chance [25]; I2 values of 0–25%, 25–50%, 50–75%, and >75% represent very low, low, medium, and high heterogeneity, respectively. I2 was calculated as I2 = 100% × (Q − df)/Q, where Q is Cochran's Q statistic and df its degrees of freedom. The weight of each study was calculated with the inverse variance method [26]. The results from all included studies were pooled, and an overall estimate of effect size was obtained using a random-effects model, which accounts for between-study heterogeneity. Publication bias was examined using the effective sample size funnel plot described by Egger et al. [27]. Two-sided p values ≤ 0.05 were considered statistically significant.
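The pipeline described above can be sketched as follows: the AUC standard error from case counts (Hanley and McNeil's formula), Cochran's Q, I2, and inverse-variance random-effects pooling. The DerSimonian-Laird between-study variance estimator is one common choice and is assumed here; the numbers in the usage lines are hypothetical:

```python
import math

def hanley_mcneil_se(auc, n_pos, n_neg):
    """Standard error of an AUC from the numbers of positive and
    negative cases (Hanley & McNeil formula)."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc * auc / (1 + auc)
    var = (auc * (1 - auc) + (n_pos - 1) * (q1 - auc ** 2)
           + (n_neg - 1) * (q2 - auc ** 2)) / (n_pos * n_neg)
    return math.sqrt(var)

def random_effects_pool(estimates, ses):
    """Inverse-variance pooling under a DerSimonian-Laird random-effects
    model; returns (pooled estimate, pooled SE, Q, I2 in %)."""
    w = [1 / se ** 2 for se in ses]
    fixed = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, 100 * (q - df) / q) if q > 0 else 0.0
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0   # between-study variance
    w_star = [1 / (se ** 2 + tau2) for se in ses]
    pooled = sum(wi * e for wi, e in zip(w_star, estimates)) / sum(w_star)
    return pooled, math.sqrt(1 / sum(w_star)), q, i2

# Hypothetical per-study AUCs with SEs derived from case counts
se_example = hanley_mcneil_se(0.88, 40, 120)
pooled, se, q, i2 = random_effects_pool([0.85, 0.90, 0.92], [0.03, 0.02, 0.04])
```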

The described statistical analyses were performed using R (v3.6.2, “irr” and “auctestr” packages) and MedCalc Statistical Software (version 16.4.3, Ostend, Belgium; https://www.medcalc.org) [28].

Results

Literature search

In total, 256 articles were obtained from the initial search, of which 93 were duplicates. Of the remaining 163, 140 were rejected based on the selection criteria. Finally, 23 articles were included in the systematic review, 8 of which were eligible for the meta-analysis. The described flowchart is represented in Fig. 1, whereas Table 2 contains details on study aim, ML method, and performance.

Fig. 1

Study selection process flowchart

Table 2 Overview of study aim, ML method, and performance for the included studies

Study evaluation

The RQS total and percentage scores were 6.96 ± 4.86 and 19 ± 13%, respectively (Figs. 2, 3). A detailed report of the RQS item scores assigned by the most expert reader is shown in Table 3. Inter-reader reproducibility was moderate to good, with an ICC of 0.75 (95% confidence interval, CI = 0.54–0.88). The RQS scores assigned by the other readers are presented in the supplementary materials.

Fig. 2

Histogram (bars, bin number = 10) and kernel density estimation (line) plot of RQS percentage score distribution

Fig. 3

RQS percentage score line plot in relation to publication year. Bars represent 95% confidence intervals, calculated with bootstrapping (1000 iterations)

Table 3 Radiomics quality scores for all included articles

Regarding the evaluation of risk of bias through the QUADAS-2, across the four domains (patient selection, index test, reference standard, and flow and timing) the number of studies with high, unclear, and low risk of bias was 0, 7, and 2, respectively (Fig. 4). In particular, 4 studies were scored as having an unclear risk of bias in the patient selection domain because the authors did not clearly report the patient selection process [31, 37, 40, 47]. One study was scored as having an unclear risk of bias in the index test domain because radiomics feature extraction was performed from both diffusion-weighted images (DWI) and apparent diffusion coefficient (ADC) maps [37]. Finally, the time elapsed between MRI and neurosurgery was not reported in 6 studies, which were therefore scored as having an unclear risk of bias in the flow and timing domain [30,31,32,45,49]. All the studies included in the meta-analysis raised low concerns regarding applicability for the three domains (patient selection, index test, and reference standard).

Fig. 4

Methodological quality of the studies included in the meta-analysis according to the QUADAS 2 tool for risk of bias and applicability concerns. Green, yellow, and red circles represent low, unclear, and high risk of bias, respectively

Meta-analysis

The articles included in the meta-analysis are reported in Table 4. The ML models for meningioma characterization showed an overall pooled AUC of 0.88 (95% CI = 0.84–0.93) with a standard error of 0.02 (Figs. 5 and 6). Study heterogeneity was high (I2 = 82.5%, p < 0.001).

Table 4 Characteristics of the studies included in the meta-analysis
Fig. 5

Funnel plot asymmetry test for publication bias in the literature evaluation for high-grade meningioma characterization

Fig. 6

Forest plot of single studies for the pooled area under the curve (AUC) and 95% CI of high-grade meningioma characterization

Subgroup analysis was performed to compare studies evaluating the performance of ML for meningioma characterization using patients from a single institution (n = 4) and from multiple centers (n = 4). The pooled AUC was 0.88 (95% CI = 0.84–0.92), standard error 0.02, and heterogeneity 42.17% (p < 0.001) in the single institution group and the pooled AUC was 0.88 (95% CI = 0.81–0.95), standard error 0.03, and heterogeneity 88.60% (p < 0.001) in the multi-center group.

Of the included studies, 5 used only post-contrast T1-weighted imaging. Their pooled AUC was 0.87 (95% CI = 0.82–0.92), standard error 0.02, and heterogeneity 56.34% (p = 0.05). On the other hand, 3 studies also used conventional MR sequences, including T1-weighted and T2-weighted imaging, in addition to contrast-enhanced T1-weighted imaging. Their pooled AUC was 0.91 (95% CI = 0.85–0.97), standard error 0.03, and heterogeneity 85.94% (p < 0.001).

In a subgroup analysis based on the use of image pre-processing, the pooled AUC of the 6 studies that included this step was 0.89 (95% CI = 0.85–0.94), standard error 0.02, and heterogeneity 83.01% (p < 0.001). The remaining studies reported AUC values of 0.93 and 0.78, respectively.

Four studies applied exclusively k-fold cross-validation for training and testing of the model. Their pooled AUC was 0.92 (95% CI = 0.88–0.97), standard error 0.02, and heterogeneity 76.52% (p = 0.005). The remaining studies (n = 4) employed a test set, in 2 cases paired with k-fold cross-validation. Their pooled AUC was 0.84 (95% CI = 0.78–0.90), standard error 0.03, and heterogeneity 62.09% (p < 0.005). The corresponding plots of subgroup analyses are presented in the supplementary materials.

Discussion

Radiomics has numerous potential applications in neuroradiology and could help obtain additional quantitative information from routine medical images. Even though there are ongoing efforts to standardize radiomic feature extraction, its use is not yet justified outside of the research field [50]. The RQS is a recently introduced score that aims to evaluate the methodological quality of radiomics-based investigations. It could help identify high-quality results among the large number of publications in this field, as well as the issues limiting their value and applicability. The average RQS of the articles included in our systematic review was low (6.96, 19%), reflecting poor overall methodological quality. This finding is in line with previous systematic reviews performing a quality assessment with the RQS tool in other fields of radiology. In detail, Ursprung et al reported a total RQS score of 3.41 ± 4.43 (9.4% average) in a review of renal cell carcinoma radiomics CT and MRI studies, Stanzione et al 7.93 ± 5.13 (23 ± 13%) for prostate MRI, and Granzier et al 20.9% for breast MRI [22, 51, 52]. Therefore, the problems affecting radiomics studies and limiting the RQS score seem to be general and not restricted to a specific application. The current situation can be at least in part explained by the exponential growth in interest and in the number of papers submitted using radiomics, a dynamic also experienced in the wider field of ML [7]. On the other hand, the RQS scoring system is relatively new and has been used on a limited number of occasions [18, 22, 39, 51, 53]. Therefore, further revisions and improvements after initial feedback may produce a different weighting of each item and/or modifications of the items themselves. In our review, we wish to highlight that all studies collected 0 points on items 3, 4, 10, 11, and 15.
In detail, feature robustness to scanner or temporal variability was never tested, partly due to the retrospective nature of all the investigations. Similarly, prospective validation of the radiomics signature in appropriate trials was missing, as was a cost-effectiveness analysis.

Regarding the studies included in the meta-analysis, the QUADAS-2 assessment revealed an overall low risk of bias but also highlighted some critical issues. In particular, in one paper, DWI was used for feature extraction together with ADC maps [37]. As ADC maps are derived from DWI, it would be more appropriate to use only one of the two for feature extraction, and ADC maps are probably preferable due to their quantitative nature. Furthermore, only two studies reported the time elapsed between the MRI exam and surgery, a possible source of bias that should always be specified [11, 48]. None of the selected articles scored a high risk of bias in relation to the reference standard, as histopathological grading was always employed. Overall, radiomics features analyzed with a ML approach proved promising for meningioma grading, with an AUC of 0.88. All the included studies used handcrafted radiomics except for Zhu et al., who employed deep learning [48]. This is understandable, given that deep learning requires a large amount of data to be advantageous over other ML algorithms, and such data are often not available in this setting. Almost all studies (n = 7) performed a 3D segmentation of the lesion, though it is still not clear whether this approach outperforms 2D segmentation [48]. Only Morin et al. trained a model using radiomics features together with demographic data [40]. Despite this, its AUC value is among the lowest (0.78), suggesting that these data may not be essential in the preoperative definition of meningioma grading. It is also interesting to note that most (n = 5) of the studies used linear ML models [11, 31, 37, 48, 49], while only one included a data augmentation technique [33].

In the subgroup analyses, the AUC was higher (0.91 vs 0.87) for the studies (n = 3) that paired contrast-enhanced T1-weighted sequences with other sequences [11, 31, 40]. This finding supports the use of multiple imaging sequences rather than relying exclusively on contrast-enhanced T1-weighted images in future investigations. Similarly, the good accuracy (AUC = 0.89) obtained by the studies (n = 6) that included image pre-processing in their pipeline suggests the usefulness of this step [11, 33, 37, 45, 48, 49]. While the AUC for single-institution (n = 4) and multicenter studies was equally high (AUC = 0.88), external testing of ML models is always preferable to demonstrate their ability to generalize. Similarly, while k-fold cross-validation helps extract more information and more reliable results from small datasets, its exclusive use presents an issue: there is no final model whose performance can be tested on unseen data. In all, 4 studies employed only cross-validation, with better results than the remaining ones (AUC = 0.92 vs 0.84) [11, 40, 48, 49]. Ideally, cross-validation would be used for model tuning and initial testing, followed by further assessment on new data, as done in 2 cases (AUC = 0.82 and 0.83); this approach combines the advantages of both testing strategies [48, 49].
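The validation strategy favored here, cross-validation for tuning followed by assessment on held-out data, can be sketched in a library-agnostic way; this is a minimal illustration handling only the index bookkeeping, with model fitting left as a placeholder:

```python
import random

def split_train_test(n_cases, test_frac=0.2, seed=42):
    """Set aside a final test set before any model tuning takes place."""
    idx = list(range(n_cases))
    random.Random(seed).shuffle(idx)
    cut = int(n_cases * (1 - test_frac))
    return idx[:cut], idx[cut:]

def k_fold_indices(indices, k=5):
    """Yield (train, validation) index lists for k-fold cross-validation."""
    folds = [indices[i::k] for i in range(k)]
    for i in range(k):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, folds[i]

# Tune with cross-validation on the training portion only...
train_idx, test_idx = split_train_test(100)
for train, val in k_fold_indices(train_idx):
    pass  # fit candidate models on `train`, score on `val`
# ...then evaluate the final model once on `test_idx` (unseen data)
```

Keeping the test indices out of every tuning step is what allows the final performance estimate to reflect generalization rather than overfitting.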

As previously reported, the presentation of accuracy metrics in radiomics and ML studies is often inconsistent and incomplete [21]. For this reason, our meta-analysis could only employ AUC values, as these were the most commonly reported; a sensitivity and specificity analysis, had it been feasible, would have provided additional insights.

Indeed, the ROC AUC treats sensitivity and specificity as equally important overall when averaged across all thresholds. Yet poor sensitivity can mean a missed diagnosis and delayed treatment or even death, whereas poor specificity means unnecessary testing. The ROC AUC ignores such clinical differences in “misclassification cost” and therefore risks labeling a new test worthless when patients and physicians would consider otherwise. Moreover, the ROC AUC weighs changes in sensitivity and specificity equally only where the curve slope equals one. Other points assign different weights, determined by the shape of the curve and without considering clinically meaningful information; e.g., a 5% improvement in sensitivity contributes less to the AUC at high specificity than at low specificity. Thus, the AUC can rank a test that increases sensitivity at low specificity above one that increases sensitivity at high specificity [54].

Greater care should be given to this issue in future research; ideally, confusion matrices should always be reported whenever possible.
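A confusion matrix makes those misclassification costs explicit, since every common threshold-level metric can be recomputed from its four cells. A minimal sketch with hypothetical counts:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Recover threshold-level performance metrics from the four cells
    of a confusion matrix (tp/fn: high-grade cases; fp/tn: low-grade)."""
    return {
        "sensitivity": tp / (tp + fn),   # high-grade lesions correctly flagged
        "specificity": tn / (tn + fp),   # low-grade lesions correctly cleared
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fn + fp + tn),
    }

# Hypothetical example: 50 high-grade and 50 low-grade meningiomas
m = diagnostic_metrics(tp=40, fn=10, fp=5, tn=45)
print(m["sensitivity"], m["specificity"])  # 0.8 0.9
```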

The ability to distinguish low-grade from high-grade meningiomas based on preoperative MR images could influence personalized treatment decisions. In particular, in patients with meningiomas at locations where biopsy is difficult to obtain due to a high risk of mortality and morbidity (e.g., petroclival meningiomas), tailored radiation treatment may be recommended for the high-grade forms [55]. Furthermore, in asymptomatic patients with small meningiomas, radiotherapy may be avoided for benign lesions, while high-grade meningiomas could undergo radiation treatment before resection [56]. Therefore, noninvasive MRI prediction of meningioma grading could in the future guide the treatment strategy for small meningiomas, even without histological confirmation. However, radiomics is not currently ready for clinical implementation due to the issues highlighted by the RQS.

Our study has some limitations that should be acknowledged. The RQS is a relatively recent, purely methodological scoring system and does not consider differences in study aim. Regarding the meta-analysis, a relatively low number of papers met the selection criteria. While the QUADAS-2 analysis presented some unclear elements, no high-risk sources of bias were identified. Study heterogeneity was high, but this is in line with other machine learning meta-analyses and with diagnostic meta-analyses in general [21, 57, 58]. Finally, not all articles specified whether the WHO 2016 classification of central nervous system tumors was used. However, meningioma grading did not change substantially compared to the previous version, except for the introduction of brain invasion as a criterion for the diagnosis of grade II lesions [3].

In conclusion, radiomics studies show promising results for improving the management of intracranial meningiomas, though they require greater methodological rigor. The prediction of meningioma grading from preoperative brain MRI also demonstrated good results in our meta-analysis. Well-designed prospective trials are necessary to demonstrate validity, and the reporting of methods and results must be standardized before these tools enter daily clinical practice.