Review

Performance Assessment of Supervised Classifiers for Designing Intrusion Detection Systems: A Comprehensive Review and Recommendations for Future Research

by Ranjit Panigrahi 1,†, Samarjeet Borah 1, Akash Kumar Bhoi 2, Muhammad Fazal Ijaz 3,†, Moumita Pramanik 1, Rutvij H. Jhaveri 4 and Chiranji Lal Chowdhary 5,*

1 Department of Computer Applications, Sikkim Manipal Institute of Technology, Sikkim Manipal University, Majitar 737136, Sikkim, India
2 Department of Electrical and Electronics Engineering, Sikkim Manipal Institute of Technology, Sikkim Manipal University, Majitar 737136, Sikkim, India
3 Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Korea
4 Department of Computer Science and Engineering, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India
5 School of Information Technology & Engineering, Vellore Institute of Technology, Vellore 632014, India
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work and are first co-authors.
Mathematics 2021, 9(6), 690; https://doi.org/10.3390/math9060690
Submission received: 17 February 2021 / Revised: 15 March 2021 / Accepted: 20 March 2021 / Published: 23 March 2021
(This article belongs to the Section Mathematics and Computer Science)

Abstract

Supervised learning and pattern recognition is a crucial area of research in information retrieval, knowledge engineering, image processing, medical imaging, and intrusion detection. Numerous algorithms have been designed to address such complex application domains. Despite an enormous array of supervised classifiers, researchers have yet to recognize a robust classification mechanism that accurately and quickly classifies the target dataset, especially in the field of intrusion detection systems (IDSs). Most of the existing literature considers only the accuracy and false-positive rate when assessing the performance of classification algorithms. The absence of other performance measures, such as model build time, misclassification rate, and precision, is a major limitation of existing classifier evaluations. This paper's main contribution is to analyze the current state of the literature in the field of network intrusion detection, highlighting the number of classifiers used, dataset sizes, performance outputs, inferences, and research gaps. Fifty-four state-of-the-art classifiers from various groups, i.e., Bayes, functions, lazy, rule-based, and decision tree, have therefore been analyzed and explored in detail, considering the sixteen most popular performance measures. This research work aims to recognize a robust classifier that is suitable as the base learner when designing a host-based or network-based intrusion detection system. The NSLKDD, ISCXIDS2012, and CICIDS2017 datasets have been used for training and testing purposes. Furthermore, a widespread decision-making algorithm, referred to as Techniques for Order Preference by Similarity to the Ideal Solution (TOPSIS), allocated ranks to the classifiers based on the observed performance readings on the concerned datasets. J48Consolidated provided the highest accuracy of 99.868%, a misclassification rate of 0.1319%, and a Kappa value of 0.998. Therefore, this classifier has been proposed as the ideal classifier for designing IDSs.

1. Introduction

The footprint of artificial intelligence-enabled Internet of Things (IoT) devices [1] in our day-to-day life attracts hackers and potential intrusions. In 2017, the WannaCry ransomware, a self-propagating malware, devastatingly impacted computing resources by infecting more than 50,000 NHS systems [2]. Network threats such as WannaCry have become a nightmare for security managers and remain an open research area. Many intrusion detection schemes have been proposed to counter malicious activities in a computer network [3,4,5,6]. All network anomaly counter-mechanisms are either unsupervised, supervised, or a combination of both. Supervised algorithms are used rigorously to design state-of-the-art intrusion detectors, because the ability to learn from examples makes supervised classifiers robust and powerful. In data science, an array of supervised classifiers exists, each of which claims to be the best among its peers. However, in real-world classification processes, the scenario is somewhat different. Supervised classifiers are susceptible to misclassification if they overfit or underfit during the training process [7]. Another aspect is the class-imbalance issue [8] in the underlying dataset of a classification model. A supervised classifier always favors the majority class if it is trained on a highly class-imbalanced dataset [9,10]. Apart from the class-imbalance issue, data purity also determines the performance of supervised classifiers. Data are stored and made available in numerous formats and include outliers such as missing class information, NULL, and NaN values. Raw data with such outliers drastically limit the performance of classifiers, which then produce unrealistic results [11,12]. This motivates the development of robust and versatile classifiers for impure data. In this regard, numerous researchers are concerned with pattern recognition and data extraction [13,14], which is the main objective of data mining and perhaps one of the motivations for exploring supervised machine learning algorithms [15,16,17]. Numerous classification mechanisms are available in the literature to handle impure data, especially for designing robust network intrusion detection systems (IDSs). However, the central question for researchers remains the selection of the optimal classifier to serve as the base learner of an IDS.
Furthermore, there is no standard guideline for selecting the most suitable classifier for a given dataset. Multiple studies have been conducted on the aforementioned problem. However, most of the studies available in the literature evaluate classifiers using standard performance measures such as classification accuracy and false-positive rate [18,19,20,21,22]. It is worth mentioning that the quality of a classifier does not depend only on classification accuracy. Other performance measures, such as misclassification rate, precision, recall, and F-score, empirically define the classifier's quality. Therefore, a comprehensive review is needed that can serve as a guideline for analyzing classifiers using various performance measures on various datasets. The main objective of this paper is thus to examine several research papers in the field of host-based and network-based intrusion detection from multiple aspects. This study analyzes the type of classification used, the datasets and sample sizes considered, the performance measures used to evaluate classifiers, the inferences drawn, and the research gaps encountered.
Moreover, the proposed study provides a guideline for designing a host-based or network-based intrusion detection system. This study's main contribution is an in-depth analysis of fifty-four widely used classifiers from six different classifier groups across thirteen performance measures. These classifiers are comprehensively analyzed on three well-recognized binary and multiclass datasets: NSLKDD, ISCXIDS2012, and CICIDS2017. The decision-making algorithm referred to as Techniques for Order Preference by Similarity to the Ideal Solution (TOPSIS) [23,24] is used to allocate weights to these classifiers. These weights are subsequently used to rank the performance of the classifiers. Consequently, the best classifier for each dataset and the best classifier of each group are proposed. Moreover, the best classifier across all the datasets is suggested as the most generic classifier for designing an IDS.
The rest of this paper is structured as follows. Section 2 reviews the most recent studies of supervised classifiers; Section 3 describes the materials and methods; Section 4 discusses the results of the analysis; Section 5 is dedicated to the J48Consolidated classifier; and Section 6 concludes the paper.

2. Related Works

Supervised classifiers are extensively used in the field of network security. Among the most promising applications of machine learning techniques are risk assessment after the deployment of various security apparatus [25], identification of the risks associated with various network attacks, and prediction of the extent of damage a network threat can cause. Apart from these, supervised classification techniques have been explored and analyzed by numerous researchers in a variety of application areas. Most of these studies focused either on a detailed exploration to validate a theory or on a performance evaluation to identify a versatile classifier [26,27,28]. The performance of supervised classifiers has been explored in intrusion detection [29], robotics [18], the semantic web [19], human posture recognition [30], face recognition [20], biomedical data classification [31], handwritten character recognition [22], and land cover classification [21]. Furthermore, an innovative semi-supervised heterogeneous ensemble classifier called Multi-train [32] was proposed, with a justifiable comparison against other supervised classifiers such as k-Nearest Neighbour (kNN), J48, Naïve Bayes, and random tree. Multi-train improved the prediction accuracy on unlabeled data and can therefore reduce the risk of incorrectly labeling such data. A study on this topic that deals exclusively with classifier accuracy measures across multiple standard datasets was presented by Labatut et al. [33]. An empirical analysis of supervised classifiers was carried out by Caruana et al. [34] using eleven datasets and eight performance measures, in which calibrated boosted trees emerged as the best learning algorithm. In addition, a systematic analysis of supervised classifiers under varying classifier settings was carried out by Amancio et al. [35].
The focus of this paper is to analyze the performance of various supervised classifiers using IDS datasets. Therefore, the authors have decided to review related articles in the literature that examined different classifiers using IDS datasets. The classifier analysis is expected to provide a platform for the researchers to devise state-of-the-art IDSs and quantitative risk assessment schemes for various cyber defense systems. Numerous studies and their detailed analytical findings related to supervised classifiers have been outlined in Table 1.
Table 1 summarizes the taxonomy of the analyzed articles. In the last column, an attempt has been made to outline the inferences, limitations, or research gaps encountered. The summarization of these analyses provides scope for a meta-analysis of supervised classifiers, which ultimately gives direction and justification for further investigation in the field of supervised classification using intrusion detection datasets. From Table 1, it can be observed that decision tree and function-based approaches are the most frequently explored. The usage statistics of supervised classifiers are presented in Figure 1.
According to Figure 1, J48 (C4.5) and Random Forest among the decision trees, and SVM and the Multilayer Perceptron (neural network) among the function-based approaches, have been analyzed considerably by numerous researchers. In this work, the authors have tried to understand the reasons behind the popularity of decision trees and function-based approaches. Therefore, the authors have summarized the performance results reported for those classifiers in the analyzed papers. Most researchers focused on accuracy scores; therefore, the authors used the accuracy score as the base measure to understand the preference for decision trees and function-based classifiers.
Therefore, in this study, the authors have calculated the minimum, maximum, and average accuracy of Bayes, Decision trees, Functions, Lazy, and Rules group of classifiers concerning the literature outlined in Table 1. The calculated detection accuracy of the research papers surveyed is presented in Figure 2. In Figure 2, almost all groups of classifiers show a maximum accuracy rate of more than 99%.
Similarly, considering average accuracy, the Lazy classifiers are far ahead of the other groups of classifiers. Despite this impressive accuracy, the Lazy group was analyzed in depth by only a handful of researchers [48,49,50]. On the other hand, decision trees and function-based classifiers were the focal point of many research papers. Consequently, in this paper, the authors have decided to explore multiple classifiers from all the classifier groups. In this work, fifty-four state-of-the-art classifiers from six different classifier groups were analyzed. The classifier groups were created based on their functionality and the guidelines presented by Frank et al. [59]. The classifiers under evaluation and their groups are presented in Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7 under six different classifier groups.

3. Materials and Methods

The authors used the Weka 3.8.1 [59] software on a CentOS platform on the Param Shavak supercomputing facility provided by the Centre for Development of Advanced Computing (CDAC), India. The system consists of 64 GB of RAM and two multicore CPUs of 12 cores each, with a performance of 2.3 teraflops. To evaluate all the classifiers of Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7, the authors considered samples of the NSLKDD [118,119,120], ISCXIDS2012 [121], and CICIDS2017 [122] datasets. The training and testing sample size for each dataset is outlined in Table 8. The training and testing samples were generated with a 66%/34% split of the total sample size.
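As a minimal sketch of this split, the following Python snippet draws a 66%/34% train/test partition from a pre-extracted dataset sample using scikit-learn rather than Weka; the file name and label column are placeholders, and the stratification option is an assumption, not the authors' documented configuration.

```python
# Hypothetical 66%/34% train/test split of a pre-drawn dataset sample.
# "CICIDS2017_sample.csv" and the "Label" column are placeholder names.
import pandas as pd
from sklearn.model_selection import train_test_split

sample = pd.read_csv("CICIDS2017_sample.csv")
X, y = sample.drop(columns=["Label"]), sample["Label"]

# test_size=0.34 reproduces the 66%/34% split; stratify=y keeps the class-imbalance
# ratio identical in both partitions (an assumption, not necessarily the paper's setting).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.34, stratify=y, random_state=42
)
```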
All three datasets, NSLKDD, CICIDS2017, and ISCXIDS2012, have high class imbalance. Additionally, NSLKDD and CICIDS2017 are multi-class, whereas the ISCXIDS2012 dataset contains binary class information. The performance of a classifier cannot be explored only through its accuracy and detection rate. Therefore, the authors have considered a variety of performance measures, such as training time, testing time, model accuracy, misclassification rate, kappa, mean absolute error, root mean squared error, relative absolute error, root relative squared error, true positive rate, false-positive rate, precision, and the receiver operating curve (ROC). The ROC value reveals the real performance on class-imbalanced datasets such as CICIDS2017 and NSL-KDD. Similarly, the Matthews correlation coefficient (MCC) and the precision-recall curve (PRC) are useful for evaluating binary classification on the ISCXIDS2012 dataset.
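Several of these measures can be derived directly from a classifier's predictions. The sketch below assumes scikit-learn rather than the Weka implementations actually used in this study, and the helper name ids_metrics is illustrative.

```python
# Deriving a subset of the listed measures from predictions (illustrative sketch).
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score, confusion_matrix,
                             matthews_corrcoef, precision_score, roc_auc_score)

def ids_metrics(y_test, y_pred, y_score=None):
    """y_score: probability estimates; shape (n_samples, n_classes) for multiclass."""
    acc = accuracy_score(y_test, y_pred)
    m = {
        "ACC": acc,
        "Misclassification rate": 1.0 - acc,
        "KV": cohen_kappa_score(y_test, y_pred),
        "MCC": matthews_corrcoef(y_test, y_pred),
        "PRE": precision_score(y_test, y_pred, average="weighted", zero_division=0),
    }
    # TPR/FPR from the confusion matrix (binary case, e.g., ISCXIDS2012)
    if len(np.unique(y_test)) == 2:
        tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
        m["TPR"] = tp / (tp + fn)
        m["FPR"] = fp / (fp + tn)
    if y_score is not None:
        m["ROC"] = roc_auc_score(y_test, y_score, multi_class="ovr")
    return m
```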
The experiment for evaluating classifiers covers five essential steps [123], such as dataset selection, classification, weight calculation using multi-criteria decision making, weight to rank transformation, and finally, global rank generation. Figure 3 shows the methodology used by the authors.
The authors conducted all five steps iteratively for all the datasets and classifiers under evaluation. In the initial step, a dataset is selected from the pool of datasets. The dataset initially contains several tuples with variable class densities. From each dataset, the requisite number of random samples was generated; the output of this step is presented in Table 8. This procedure was conducted deliberately to ensure that no classifier was biased toward a specific dataset. The second step classifies each dataset with each classifier in the classifier pool. The performance of each classifier was tabulated for future reference, and the process was repeated for each dataset.
The third and fourth steps jointly work to achieve the research objectives. In this process, the average performance score of each group of classifiers was analyzed. Additionally, each group's ranking was calculated to retrieve the best classifier group specific to the dataset. The classifiers of the best-performing group were then considered to evaluate the consistency of their performance across the three datasets. Furthermore, based on the performances of the best-performing group's classifiers, the authors calculated the weight and rank of each classifier of that group, specific to each dataset. The aim was to provide a reliable evaluation of the best classifier for each dataset.
The final step involved global weight and rank calculation. At this stage, the global weight of each classifier of the best-performing group was calculated based on the ranking received for each dataset. The average performance of the classifiers in the best-performing group across the three datasets was computed from the individual scores of each classifier. The scores were then sorted to provide a clear presentation of the best-performing classifier. A sketch of the evaluation loop behind the first two steps is given below.
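The following Python sketch illustrates how the per-dataset, per-classifier performance table of steps one and two could be collected before TOPSIS weighting. It assumes scikit-learn stand-ins rather than the fifty-four Weka classifiers evaluated in this paper, and the names evaluate_pool and classifier_pool are illustrative.

```python
# Illustrative driver for steps 1-2: every classifier in the pool is trained and
# scored on every dataset sample. The pool below uses scikit-learn stand-ins,
# not the 54 Weka implementations analyzed in the paper.
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

classifier_pool = {
    "DecisionTree": DecisionTreeClassifier(random_state=0),  # stand-in for the tree group
    "NaiveBayes": GaussianNB(),                              # stand-in for the Bayes group
    "kNN": KNeighborsClassifier(),                           # stand-in for the lazy group
}

def evaluate_pool(datasets, pool):
    """datasets: {name: (X_train, X_test, y_train, y_test)} -> nested metric dict."""
    results = {}
    for ds_name, (X_tr, X_te, y_tr, y_te) in datasets.items():
        results[ds_name] = {}
        for clf_name, clf in pool.items():
            clf.fit(X_tr, y_tr)
            # ids_metrics() is the metric helper sketched earlier in this section
            results[ds_name][clf_name] = ids_metrics(y_te, clf.predict(X_te))
    return results
```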
The five-step methodology embodies a two-stage procedure: first, the best classifier group is selected, and second, the best classifier within that group is proposed. Both the best classifier and the best classifier group were determined using the extensively used conventional multiple-criteria decision-making (MCDM) method named TOPSIS. Before applying TOPSIS, the performance outcome of each classifier and each classifier group was calculated; in total, the authors calculated 13 performance metrics for the classifiers.
Furthermore, the authors considered only eight performance measures, including testing time per instance, accuracy, kappa value, mean absolute error, false-positive rate, precision, and the receiver operating curve value, for weighting and ranking purposes. These eight measures are in line with the aim of this research, and the remaining performance metrics can be derived from one of the measures considered in this study. Consequently, omitting them did not affect the weighting and ranking process. The algorithmic method for weighting each classifier and classifier group based on TOPSIS is demonstrated in Table 9.
It should be noted that in algoWeighting, C1, C2, C3, …, Cn are the classifier or classifier group labels, and P1, P2, P3, …, Pk are the corresponding performance or average performance scores.
The algorithm begins by constructing a decision matrix Md, whose entry for the nth classifier (or classifier group) and the kth performance measure is the observed performance outcome. The decision matrix is the basis for evaluating the best classifier; it enables the decision-making module (TOPSIS) to calculate the weight for each feature.
At the second stage, a weighted normalized decision matrix is calculated, in which each column is scaled by the weight of the jth performance measure.
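In the standard TOPSIS formulation [23,24] (the notation here may differ slightly from that of Table 9), these two steps can be written as follows, with $x_{ij}$ the outcome of classifier $C_i$ on performance measure $P_j$ and $w_j$ the weight of the $j$th measure:

$$ r_{ij} = \frac{x_{ij}}{\sqrt{\sum_{i=1}^{n} x_{ij}^{2}}}, \qquad v_{ij} = w_j\, r_{ij}, \qquad i = 1, \dots, n, \; j = 1, \dots, k, $$

where $r_{ij}$ is the vector-normalized outcome and $v_{ij}$ is the corresponding entry of the weighted normalized decision matrix.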
The idea behind allocating appropriate weights to the performance measures lies in their ability to rank classifiers with respect to the domain area and learning environment. For instance, in high class-imbalance learning, the Matthews correlation coefficient (MCC), Kappa, and receiver operating curve (ROC) value should be given more weight than other performance metrics. The datasets used here are class-imbalanced in nature; therefore, more emphasis has been given to performance metrics suitable for a class-imbalanced environment. In this regard, eight performance metrics have been shortlisted, and corresponding weights have been allocated for TOPSIS processing. The weights for the eight performance measures are presented in Table 10. Another reason for not considering all the performance metrics is that the other performance measures can themselves be derived from the measures presented in Table 10. For instance, detection accuracy can be calculated from True Positives (TP) and True Negatives (TN). Therefore, the True Positive Rate (TPR) and True Negative Rate (TNR) have been dropped from the weight calculation for the classifiers. In this way, out of the 13 performance measures, only eight have been selected.
The algorithm then computes a positive and a negative ideal solution to calculate the separation measures of each classifier or classifier group, which support the calculation of each classifier's or group's score. The scores are used to rank the classifiers. The procedure followed for calculating the ranks of the classifiers is presented in Table 11.
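As an illustration of the weighting and ranking steps described in Tables 9 and 11, the following NumPy sketch implements the standard TOPSIS procedure (vector normalization, weighting, ideal solutions, separation measures, closeness scores, and ranks). The weights, cost-criterion indices, and decision-matrix values in the usage example are purely illustrative and are not taken from Table 10 or from the paper's results.

```python
# Minimal TOPSIS sketch (standard formulation; details of Tables 9 and 11 may differ).
import numpy as np

def topsis(M, weights, cost_criteria=()):
    """M: (n_classifiers, k_measures) decision matrix; returns (scores, ranks)."""
    M = np.asarray(M, dtype=float)
    w = np.asarray(weights, dtype=float)
    R = M / np.sqrt((M ** 2).sum(axis=0))            # vector normalization
    V = R * w                                        # weighted normalized decision matrix
    best, worst = V.max(axis=0), V.min(axis=0)
    for j in cost_criteria:                          # lower-is-better columns (e.g., FPR, time)
        best[j], worst[j] = V[:, j].min(), V[:, j].max()
    s_pos = np.sqrt(((V - best) ** 2).sum(axis=1))   # separation from positive ideal
    s_neg = np.sqrt(((V - worst) ** 2).sum(axis=1))  # separation from negative ideal
    scores = s_neg / (s_pos + s_neg)                 # closeness coefficient ("weight")
    ranks = (-scores).argsort().argsort() + 1        # rank 1 = best
    return scores, ranks

# Illustrative use: 3 classifiers x 4 measures (ACC, ROC, FPR, testing time per instance)
M = [[0.97, 0.99, 0.010, 0.004],
     [0.95, 0.98, 0.020, 0.001],
     [0.90, 0.96, 0.050, 0.0005]]
scores, ranks = topsis(M, weights=[0.3, 0.3, 0.2, 0.2], cost_criteria=(2, 3))
```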

4. Results and Discussion

The analysis to reach the best classifier was conducted through a top-to-bottom approach. Firstly, the best classifier group was identified through intergroup analysis. Secondly, the best-performing classifier of that group was identified through intragroup analysis.

4.1. Intergroup Performance Analysis

Under intergroup performance analysis, the authors calculated the performance of each classifier group as a whole. The group performances for the NSLKDD, ISCXIDS2012, and CICIDS2017 datasets are listed in Table 12, Table 13 and Table 14, respectively.
According to Table 12, the decision tree classifiers present reliable results across all performance metrics except training and testing time, consuming average training and testing times of 4.18 s and 0.03 s, respectively. The Bayes group of classifiers, in contrast, responds quickly in training and testing but presents low-quality performance metrics. The ROC and MCC values are suitable for evaluating the performance of classifier groups under class-imbalance learning. By observing the average ROC and MCC of the classifier groups on the NSL-KDD dataset, the authors found that the decision trees behave far better than the other classifier groups. A similar observation was made for the ISCXIDS2012 dataset: Table 13 shows the group performance of the supervised classifiers for ISCXIDS2012. The decision tree classifiers showed the highest average accuracy of 97.3519%, while their average testing time per instance remained low and on par with the Bayes and Miscellaneous classifiers. Moreover, the decision tree classifiers were far ahead of their peer groups, with a higher average ROC value of 0.985. The authors also conducted the intergroup performance analysis on CICIDS2017; the average, maximum, and minimum performance readings are outlined in Table 14. The decision tree classifiers reveal impressive accuracy and ROC values of 99.635% and 0.999, respectively.
Furthermore, the decision tree classifiers present consistent performance metrics for all three intrusion detection datasets, NSLKDD, ISCXIDS2012, and CICIDS2017. However, before concluding that decision trees are best for these datasets on the basis of a limited number of parameters, the authors decided to determine the actual weight and rank of these classifier groups through TOPSIS. The classifier group with the highest weight and rank is identified as the best group for these IDS datasets, which provides the basis for finding the best classifier within the winning classifier group.
Figure 4 presents the weights and ranks of the classifier groups for all three IDS datasets. The decision tree group presents the highest performance. Moreover, the decision trees present a consistent performance across all the IDS datasets. Therefore, the decision tree can be considered the best approach for the development of reliable IDSs.

4.2. Intragroup Performance Analysis

From the intergroup analysis, the authors conclude that decision tree classifiers reveal the best performance on imbalanced IDS datasets. Therefore, the authors conducted an intragroup analysis of the decision trees for the NSLKDD, ISCXIDS2012, and CICIDS2017 datasets. The intragroup study aims to identify the best decision tree within the decision tree group of classifiers for the concerned datasets. Several performance outcomes of the decision tree classifiers for the NSLKDD, ISCXIDS2012, and CICIDS2017 datasets are analyzed in Figure 5, Figure 6 and Figure 7.
The J48Consolidated classifier shows better accuracy for the NSL-KDD dataset. The NSLKDD sample used here is imbalanced in nature; therefore, these measures play a significant role in finding the best classifier. Considering the ROC value, ForestPA performs better than J48Consolidated. Additionally, both ForestPA and J48Consolidated show similar performance in terms of the MCC value. Consequently, the authors did not find sufficient grounds for declaring an ideal decision tree classifier for the NSLKDD dataset.
Furthermore, the performance of the decision tree classifiers on a sample of the ISCXIDS2012 dataset is presented in Figure 6. The Functional Trees (FT), J48Consolidated, NBTree, and SysFor classifiers consumed a significant amount of computational time, whereas the rest of the decision trees consumed 0.001 s of testing time per instance. The J48Consolidated algorithm was limited by taking the longest time to detect an anomalous instance. However, this computational cost is offset by the fact that J48Consolidated provides the highest accuracy of 98.5546%, which corresponds to the lowest misclassification rate of 1.4454%. Moreover, J48Consolidated leads the decision tree group with the best Kappa value (0.9711).
The test results of the decision trees on the CICIDS2017 dataset are presented in Figure 7. The J48Consolidated algorithm provides high-quality results on the class-imbalanced instances of the CICIDS2017 dataset, scoring the highest accuracy with a low misclassification rate. However, considering the ROC and MCC values, J48 performs better than J48Consolidated. Therefore, it is not immediately clear which classifier should be considered as the base learner for a future IDS.
In the case of ISCXIDS2012, J48Consolidated also presents consistent results across all performance measures. However, in the case of NSL-KDD and CICIDS2017, it was not possible to single out the best classifier directly. Therefore, the authors also used TOPSIS to allocate individual weights and ranks to the decision tree classifiers. The average weight and rank of the decision tree classifiers over all the datasets were also calculated to find the best classifier overall. The average weight and rank across all the datasets are not decisive for identifying a classifier for a particular deployment, because an IDS is designed for a specific dataset or environment; however, they play a relevant role in this study's conclusion concerning the most versatile classifier. The average ranks and weights of all the classifiers for the three IDS datasets are represented in Figure 8.
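A minimal sketch of this aggregation step is shown below, assuming the per-dataset TOPSIS closeness scores have already been computed (for example, with the topsis function sketched in Section 3); the exact averaging used to produce Figure 8 may differ.

```python
# Averaging per-dataset TOPSIS scores into a global weight and rank (illustrative).
# `dataset_scores` maps dataset name -> {classifier name: TOPSIS closeness score}.
import numpy as np

def global_ranking(dataset_scores):
    classifiers = sorted(next(iter(dataset_scores.values())))
    avg_weight = {c: float(np.mean([s[c] for s in dataset_scores.values()]))
                  for c in classifiers}
    ordered = sorted(avg_weight, key=avg_weight.get, reverse=True)  # rank 1 = best
    return {c: (avg_weight[c], rank) for rank, c in enumerate(ordered, start=1)}
```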
The J48Consolidated classifier has the highest rank across all the datasets. Moreover, J48Consolidated presents the highest weight of 0.964 for the ISCXIDS2012 dataset. The J48Consolidated decision tree classifier is thus the best choice for the highly class-imbalanced NSLKDD, CICIDS2017, and ISCXIDS2012 datasets. Therefore, J48Consolidated will be a suitable classifier for designing IDS base learners using any of the NSLKDD, ISCXIDS2012, or CICIDS2017 datasets.

4.3. Detailed Performance Reading of All the Classifiers

Table 15, Table 16 and Table 17 provide detailed insight into all the supervised classifiers in the six distinct groups; these tables outline thirteen performance metrics. The authors have identified the best classifier group (decision tree) and the best classifier (J48Consolidated). Nevertheless, other classifiers may perform differently on other datasets. Therefore, while designing IDSs, the authors suggest further evaluation of the supervised classifiers on the specific computing and network environment at hand.

5. J48Consolidated—A Classifier Based on C4.5

J48Consolidated has been presented as the best classifier of the decision tree group. Therefore, this section provides an in-depth analysis of J48Consolidated.

5.1. Detection Capabilities of J48Consolidated

In this section, the J48Consolidated classifier is analyzed with respect to its attack detection process. The classification threshold and the percentage of detection have been taken into consideration while analyzing the attack classes. The attack-wise classification output for the NSLKDD, ISCXIDS, and CICIDS2017 datasets is presented in Figure 9, Figure 10 and Figure 11, respectively.
The detection output for the NSLKDD dataset remains consistently good for DoS, Probe, R2L, U2R, and Normal classes with the increase in detection threshold. The U2R attack class shows low false positives, whereas few regular instances are misclassified during the classification process. Overall, the J48Consolidated classifier exhibited satisfactory performance for the NSLKDD dataset.
ISCXIDS2012 is a binary-class dataset; here, J48Consolidated generates some false alarms. However, their number is low compared to the number of correctly classified instances (true positives and true negatives).
Finally, the individual evaluation of J48Consolidated shows effective classification of the six attack groups of the CICIDS2017 dataset. The classifier also differentiates regular instances from attack instances during the classification process.

5.2. Classification Output of J48Consolidated

Each of the three IDS datasets was created for a specific environment. The correlation of attributes, attacks, and benign instances varies from dataset to dataset. Therefore, J48Consolidated shows a different classification performance on different IDS datasets. The classification output of J48Consolidated for the NSLKDD, ISCXIDS2012, and CICIDS2017 datasets is outlined in Figure 12, Figure 13 and Figure 14, respectively.
Figure 12 shows that the J48Consolidated classifier produces a reliable classification on the NSLKDD dataset. Nevertheless, J48Consolidated also produced false alarms for positive and negative instances. Therefore, the authors recommend incorporating filter components, such as data standardization and effective feature selection, while designing IDSs using J48Consolidated. A filter component not only smooths the underlying data but also improves classification performance. A sketch of such a preprocessing pipeline is given below.
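The following scikit-learn pipeline illustrates the recommended filter components placed ahead of the tree learner. DecisionTreeClassifier is only a stand-in for Weka's J48Consolidated, which has no scikit-learn equivalent, and the choice of mutual_info_classif with k=20 selected features is an arbitrary illustrative assumption.

```python
# Recommended filter components (standardization + feature selection) ahead of the tree.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier

ids_pipeline = Pipeline([
    ("standardize", StandardScaler()),                              # data standardization
    ("select", SelectKBest(score_func=mutual_info_classif, k=20)),  # feature selection filter
    ("tree", DecisionTreeClassifier(random_state=0)),               # stand-in for J48Consolidated
])
# Usage: ids_pipeline.fit(X_train, y_train); y_pred = ids_pipeline.predict(X_test)
```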
For the ISCXIDS2012 dataset, J48Consolidated showed a dramatic improvement in classification, producing few false alarms and successfully detecting almost all the instances of this binary dataset. Therefore, the classifier achieved the highest TOPSIS score of 0.964 (Figure 8), contributing to its highest average rank.
Finally, for the CICIDS2017 dataset, the J48Consolidated classifier presented a low number of false alarms. The six attack groups of the CICIDS2017 dataset were classified consistently, with a detection accuracy of 99.868% (Table 17) and a low false-positive rate of 0.000011.
A reliable IDS benchmark dataset must fulfill 11 criteria [122], namely complete network configuration, attack diversity, overall traffic, thorough interaction, labeled dataset, full capture, existing protocols, heterogeneity, feature set, anonymity, and metadata. The CICIDS2017 [123] dataset fulfills these criteria. Furthermore, CICIDS2017 is recent and focuses on the latest attack scenarios. The J48Consolidated classifier presented the best results for the CICIDS2017 dataset, with an accuracy of 99.868%. Consequently, the J48Consolidated classifier can be considered an effective base learner for an IDS built on the CICIDS2017 dataset. Nevertheless, the authors recommend incorporating feature selection procedures at the preprocessing stage to extract the most relevant features of the dataset and improve system performance.

6. Conclusions

This paper analyzed fifty-four widely used classifiers spanning six different groups. These classifiers were evaluated on the three most popular intrusion detection datasets, i.e., NSLKDD, ISCXIDS2012 and CICIDS2017. The authors extracted a sufficient number of random samples from these datasets, retaining the class-imbalance properties of the original data. Multi-criteria decision making was then used to allocate weights to these classifiers for the different datasets, and the ranks of the classifiers were finalized using those weights. First, an intergroup analysis was conducted to find the best classifier group. Secondly, an intragroup analysis of the best classifier group was carried out to find the best classifiers for the intrusion detection datasets. The authors analyzed thirteen performance metrics; therefore, the best classifier was selected impartially. The intergroup analysis identified the decision tree group as the best classifier group, followed by the rule-based classifiers, whereas the intragroup study identified J48Consolidated as the best classifier for the highly class-imbalanced NSLKDD, CICIDS2017 and ISCXIDS2012 datasets. The J48Consolidated classifier provided the highest accuracy of 99.868%, a misclassification rate of 0.1319%, and a Kappa value of 0.998.
This study presented an in-depth analysis that provides numerous outcomes for IDS designers. Comparing fifty-four classifiers on intrusion detection datasets through thirteen performance metrics and ranking them is the main contribution of this article. Nevertheless, the present study has limitations. Further investigation is required considering other datasets and other specific application domains. Moreover, the number of classes, class-wise performance, and classifier performance under varying sample sizes should be studied to understand the detailed behavior of the classifiers. The scalability and robustness of the classifiers were not tested. As future work, many other IDS datasets can be used to ascertain the performance of the classifiers, and recent ranking algorithms can be used as a voting principle to obtain exact ranks for the classifiers. Other recent rule-based and decision forest classifiers beyond those covered in this article can also be analyzed to understand the real performance of the classifiers and classifier groups. Finally, J48Consolidated, which emerged as the ideal classifier from this analysis, can be used along with a suitable feature selection technique to design robust intrusion detection systems.

Author Contributions

The individual contributions for this research are specified below: Conceptualization, R.P., S.B. and M.F.I.; Data curation, R.P. and M.F.I.; Formal analysis, A.K.B., M.P., C.L.C. and R.H.J.; Funding acquisition, R.H.J., C.L.C. and M.F.I.; Investigation, R.P., S.B., M.F.I. and A.K.B.; Methodology, R.P., S.B., C.L.C., M.F.I. and M.P.; Project administration, S.B., R.H.J., C.L.C. and A.K.B.; Resources, S.B., A.K.B., C.L.C. and M.P.; Software, R.P., C.L.C., M.F.I. and M.P.; Supervision, S.B., A.K.B. and R.H.J.; Validation, R.P., M.F.I., C.L.C. and M.P.; Visualization, R.P., S.B., M.F.I., R.H.J. and A.K.B.; Writing—Review and editing, R.P., M.F.I., S.B., C.L.C., M.P., R.H.J. and A.K.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Sejong University research fund.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. These data can be found here: NSL-KDD—https://www.unb.ca/cic/datasets/nsl.html (accessed on 1 February 2021), ISCXIDS2012—https://www.unb.ca/cic/datasets/ids.html (accessed on 1 February 2021), CICIDS2017—https://www.unb.ca/cic/datasets/ids-2017.html (accessed on 1 February 2021).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

TT: Testing Time
ACC: Accuracy
KV: Kappa Value
MAE: Mean Absolute Error
RMSE: Root Mean Squared Error
RAE: Relative Absolute Error
RRSE: Root Relative Squared Error
FPR: False Positive Rate
PRE: Precision
ROC: Receiver Operating Curve
MCC: Matthews Correlation Coefficient
PRC: Precision Recall Curve
TOPSIS: Techniques for Order Preference by Similarity to the Ideal Solution
IDS: Intrusion Detection System
IoT: Internet of Things
LWL: Locally Weighted Learning
RLKNN: Rseslib K-Nearest Neighbor
CR: Conjunctive Rule
DTBL: Decision Table
DTNB: Decision Table Naïve Bayes
FURIA: Fuzzy Rule Induction
NNGE: Nearest Neighbor with Generalization
OLM: Ordinal Learning Method
RIDOR: RIpple-DOwn Rule learner
BFT: Best-First Decision Tree
CDT: Criteria Based Decision Tree
LADT: Logit Boost based Alternating Decision Tree
LMT: Logistic Model Trees
NBT: Naïve Bayes based Decision Tree
REPT: Reduced Error Pruning Tree
RF: Random Forest
RT: Random Tree
SC: Simple Cart
CHIRP: Composite Hypercubes on Iterated Random Projections
FLR: Fuzzy Lattice Reasoning
HP: Hyper Pipes
VFI: Voting Feature Intervals
TP: True Positives
TN: True Negatives
TPR: True Positive Rate
TNR: True Negative Rate
FT: Functional Trees

References

  1. Chavhan, S.; Gupta, D.; Chandana, B.N.; Khanna, A.; Rodrigues, J.J.P.C. IoT-based Context-Aware Intelligent Public Transport System in a metropolitan area. IEEE Internet Things J. 2019, 7, 6023–6034. [Google Scholar] [CrossRef]
  2. Chen, Q.; Bridges, R.A. Automated behavioral analysis of malware: A case study of wannacry ransomware. In Proceedings of the 16th IEEE International Conference on Machine Learning and Applications, ICMLA 2017, Cancun, Mexico, 18–21 December 2017; Institute of Electrical and Electronics Engineers Inc.: New York, NY, USA, December 2017; Volume 2017, pp. 454–460. [Google Scholar]
  3. Liang, W.; Li, K.C.; Long, J.; Kui, X.; Zomaya, A.Y. An Industrial Network Intrusion Detection Algorithm Based on Multifeature Data Clustering Optimization Model. IEEE Trans. Ind. Inform. 2020, 16, 2063–2071. [Google Scholar] [CrossRef]
  4. Jiang, K.; Wang, W.; Wang, A.; Wu, H. Network Intrusion Detection Combined Hybrid Sampling with Deep Hierarchical Network. IEEE Access 2020, 8, 32464–32476. [Google Scholar] [CrossRef]
  5. Zhang, Y.; Li, P.; Wang, X. Intrusion Detection for IoT Based on Improved Genetic Algorithm and Deep Belief Network. IEEE Access 2019, 7, 31711–31722. [Google Scholar] [CrossRef]
  6. Yang, H.; Wang, F. Wireless Network Intrusion Detection Based on Improved Convolutional Neural Network. IEEE Access 2019, 7, 64366–64374. [Google Scholar] [CrossRef]
  7. Lever, J.; Krzywinski, M.; Altman, N. Model selection and overfitting. Nat. Methods 2016, 13, 703–704. [Google Scholar] [CrossRef]
  8. Krawczyk, B. Learning from imbalanced data: Open challenges and future directions. Prog. Artif. Intell. 2016, 5, 221–232. [Google Scholar] [CrossRef] [Green Version]
  9. Pes, B. Learning from high-dimensional biomedical datasets: The issue of class imbalance. IEEE Access 2020, 8, 13527–13540. [Google Scholar] [CrossRef]
  10. Wang, S.; Yao, X. Multiclass imbalance problems: Analysis and potential solutions. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2012, 42, 1119–1130. [Google Scholar] [CrossRef] [PubMed]
  11. Ho, T.K.; Basu, M. Complexity measures of supervised classification problems. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 289–300. [Google Scholar]
  12. Kelly, M.G.; Hand, D.J.; Adams, N.M. Supervised classification problems: How to be both judge and jury. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin, Germany, 1999; Volume 1642, pp. 235–244. [Google Scholar]
  13. Kuncheva, L.I. Combining Pattern Classifiers: Methods and Algorithms: Second Edition; Wiley: Hoboken, NJ, USA, 2014; Volume 9781118315, ISBN 9781118914564. [Google Scholar]
  14. Jain, A.K.; Duin, R.P.W.; Mao, J. Statistical pattern recognition: A review. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 4–37. [Google Scholar] [CrossRef] [Green Version]
  15. Lashkari, A.H.; Gil, G.D.; Mamun, M.S.I.; Ghorbani, A.A. Characterization of tor traffic using time based features. In Proceedings of the ICISSP 2017 3rd International Conference on Information Systems Security and Privacy, Porto, Portugal, 19–21 February 2017; SciTePress: Setúbal, Portugal, 2017; Volume 2017-Janua, pp. 253–262. [Google Scholar]
  16. Robert, C. Machine Learning, a Probabilistic Perspective. CHANCE 2014, 27, 62–63. [Google Scholar] [CrossRef]
  17. Maindonald, J. Pattern Recognition and Machine Learning; Journal of Statistical Software: Los Angeles, CA, USA, 2007; Volume 17. [Google Scholar]
  18. Frasca, T.M.; Sestito, A.G.; Versek, C.; Dow, D.E.; Husowitz, B.C.; Derbinsky, N. A Comparison of Supervised Learning Algorithms for Telerobotic Control Using Electromyography Signals. In Proceedings of the 30th AAAI Conference on Artificial Intelligence, AAAI 2016, Phoenix, AZ, USA, 12–17 February 2016; pp. 4208–4209. Available online: www.aaai.org (accessed on 12 May 2020).
  19. Soru, T.; Ngomo, A.C.N. A comparison of supervised learning classifiers for link discovery. ACM Int. Conf. Proceeding Ser. 2014, 41–44. [Google Scholar] [CrossRef]
  20. Arriaga-Gómez, M.F.; De Mendizábal-Vázquez, I.; Ros-Gómez, R.; Sánchez-Ávila, C. A comparative survey on supervised classifiers for face recognition. In Proceedings of the International Carnahan Conference on Security Technology, Hatfield, UK, 13–16 October 2014; Volume 2014, pp. 1–6. [Google Scholar]
  21. Shiraishi, T.; Motohka, T.; Thapa, R.B.; Watanabe, M.; Shimada, M. Comparative assessment of supervised classifiers for land use-land cover classification in a tropical region using time-series PALSAR mosaic data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1186–1199. [Google Scholar] [CrossRef]
  22. Micó, L.; Oncina, J. Comparison of fast nearest neighbour classifiers for handwritten character recognition. Pattern Recognit. Lett. 1998, 19, 351–356. [Google Scholar] [CrossRef]
  23. Sianaki, O.A. Intelligent Decision Support System for Energy Management in Demand Response Programs and Residential and Industrial Sectors of the Smart Grid. Ph.D. Thesis, Curtin University, Bentley, WA, Australia, 2015. [Google Scholar]
  24. Hwang, C.; Masud, A. Multiple Objective Decision Making—Methods and Applications: A State-Of-The-Art Survey; Springer: New York, NY, USA, 2012. [Google Scholar]
  25. Radanliev, P.; De Roure, D.; Page, K.; Van Kleek, M.; Santos, O.; Maddox, L.; Burnap, P.; Anthi, E.; Maple, C. Design of a dynamic and self-adapting system, supported with artificial intelligence, machine learning and real-time intelligence for predictive cyber risk analytics in extreme environments—Cyber risk in the colonisation of Mars. Saf. Extrem. Environ. 2021, 1–12. [Google Scholar] [CrossRef]
  26. Wu, X.; Kumar, V.; Ross, Q.J.; Ghosh, J.; Yang, Q.; Motoda, H.; McLachlan, G.J.; Ng, A.; Liu, B.; Yu, P.S.; et al. Top 10 algorithms in data mining. Knowl. Inf. Syst. 2008, 14, 1–37. [Google Scholar] [CrossRef] [Green Version]
  27. Kotsiantis, S.B.; Zaharakis, I.; Pintelas, P. Supervised machine learning: A review of classification techniques. Emerg. Artif. Intell. Appl. in Comput. Eng. 2007, 160, 3–24. [Google Scholar]
  28. Demusar, J. Statistical Comparisons of Classifiers over Multiple Data Sets. J. Mach. Learn. Res. 2006, 7, 1–30. [Google Scholar]
  29. Chand, N.; Mishra, P.; Krishna, C.R.; Pilli, E.S.; Govil, M.C. A comparative analysis of SVM and its stacking with other classification algorithm for intrusion detection. In Proceedings of the 2016 International Conference on Advances in Computing, Communication and Automation, ICACCA, Dehradun, India, 8–9 April 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–6. [Google Scholar]
  30. Htike, K.K.; Khalifa, O.O. Comparison of supervised and unsupervised learning classifiers for human posture recognition. In Proceedings of the International Conference on Computer and Communication Engineering (ICCCE 2010), Kuala Lumpur, Malaysia, 11–13 May 2010. [Google Scholar] [CrossRef]
  31. Tuysuzoglu, G.; Yaslan, Y. Gözetimli Siniflandiricilar ve Topluluk Temelli Sözlükler ile Biyomedikal Veri Siniflandirilmasi. In Proceedings of the 25th Signal Processing and Communications Applications Conference, SIU 2017, Antalya, Turkey, 15–18 May 2017; pp. 1–4. [Google Scholar] [CrossRef]
  32. Gu, S.; Jin, Y. Multi-train: A semi-supervised heterogeneous ensemble classifier. Neurocomputing 2017, 249, 202–211. [Google Scholar] [CrossRef] [Green Version]
  33. Labatut, V.; Cherifi, H. Accuracy Measures for the Comparison of Classifiers. arXiv 2012, arXiv:abs/1207.3790. [Google Scholar]
  34. Caruana, R.; Niculescu-Mizil, A. An empirical comparison of supervised learning algorithms. In Proceedings of the 23rd International Conference on Machine Learning, Hong Kong, China, 18–22 December 2006; Volume 148, pp. 161–168. [Google Scholar]
  35. Amancio, D.R.; Comin, C.H.; Casanova, D.; Travieso, G.; Bruno, O.M.; Rodrigues, F.A.; Da Fontoura Costa, L. A systematic comparison of supervised classifiers. PLoS ONE 2014, 9, e94137. [Google Scholar] [CrossRef]
  36. Araar, A.; Bouslama, R. A comparative study of classification models for detection in ip networks intrusions. J. Theor. Appl. Inf. Technol. 2014, 64, 107–114. [Google Scholar]
  37. Gharibian, F.; Ghorbani, A.A. Comparative study of supervised machine learning techniques for intrusion detection. In Proceedings of the Fifth Annual Conference on Communication Networks and Services Research (CNSR 2007), Fredericton, NB, Canada, 14–17 May 2007; pp. 350–355. [Google Scholar]
  38. Panda, M.; Patra, M.R. A comparative study of data mining algorithms for network intrusion detection. In Proceedings of the 1st International Conference on Emerging Trends in Engineering and Technology, ICETET 2008, Maharashtra, India, 16–18 July 2008; pp. 504–507. [Google Scholar]
  39. Srinivasulu, P.; Nagaraju, D.; Kumar, P.R.; Rao, K.N. Classifying the network intrusion attacks using data mining classification methods and their performance comparison. Int. J. Comput. Sci. Netw. Secur. 2009, 9, 11–18. [Google Scholar]
  40. Wu, S.Y.; Yen, E. Data mining-based intrusion detectors. Expert Syst. Appl. 2009, 36, 5605–5612. [Google Scholar] [CrossRef]
  41. Jalil, K.A.; Kamarudin, M.H.; Masrek, M.N. Comparison of machine learning algorithms performance in detecting network intrusion. In Proceedings of the 2010 International Conference on Networking and Information Technology, Manila, Philippines, 11–12 June 2010; pp. 221–226. [Google Scholar] [CrossRef]
  42. Amudha, P.; Rauf, H.A. Performance analysis of data mining approaches in intrusion detection. In Proceedings of the 2011 International Conference on Process Automation, Control and Computing, Coimbatore, India, 20–22 July 2011. [Google Scholar]
  43. China Appala Naidu, R.; Avadhani, P.S. A comparison of data mining techniques for intrusion detection. In Proceedings of the IEEE International Conference on Advanced Communication Control and Computing Technologies (ICACCCT), Ramanathapuram, India, 23–25 August 2012; pp. 41–44. [Google Scholar]
  44. Kalyani, G. Performance Assessment of Different Classification Techniques for Intrusion Detection. IOSR J. Comput. Eng. 2012, 7, 25–29. [Google Scholar] [CrossRef]
  45. Thaseen, S.; Kumar, C.A. An analysis of supervised tree based classifiers for intrusion detection system. In Proceedings of the 2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering, Salem, India, 21–22 February 2013; pp. 294–299. [Google Scholar]
  46. Revathi, S.; Malathi, A. A Detailed Analysis on NSL-KDD Dataset Using Various Machine Learning Techniques for Intrusion Detection. Int. J. Eng. Res. Technol. 2013, 2, 1848–1853. [Google Scholar]
  47. Robinson, R.R.R.; Thomas, C. Ranking of machine learning algorithms based on the performance in classifying DDoS attacks. In Proceedings of the 2015 IEEE Recent Advances in Intelligent Computational Systems, RAICS 2015, Trivandrum, Kerala, 10–12 December 2015; pp. 185–190. [Google Scholar]
  48. Choudhury, S.; Bhowal, A. Comparative analysis of machine learning algorithms along with classifiers for network intrusion detection. In Proceedings of the 2015 International Conference on Smart Technologies and Management for Computing, Communication, Controls, Energy and Materials (ICSTM), Avadi, India, 6–8 May 2015; pp. 89–95. [Google Scholar]
  49. Jain, A.; Rana, J.L. Classifier Selection Models for Intrusion Detection System (Ids). Inform. Eng. Int. J. 2016, 4, 1–11. [Google Scholar]
  50. Bostani, H.; Sheikhan, M. Modification of supervised OPF-based intrusion detection systems using unsupervised learning and social network concept. Pattern Recognit. 2017, 62, 56–72. [Google Scholar] [CrossRef]
  51. Belavagi, M.C.; Muniyal, B. Performance Evaluation of Supervised Machine Learning Algorithms for Intrusion Detection. Procedia Comput. Sci. 2016, 89, 117–123. [Google Scholar] [CrossRef] [Green Version]
  52. Almseidin, M.; Alzubi, M.; Kovacs, S.; Alkasassbeh, M. Evaluation of Machine Learning Algorithms for Intrusion Detection System. In Proceedings of the 2017 IEEE 15th International Symposium on Intelligent Systems and Informatics (SISY), Subotica, Serbia, 14–16 September 2017. [Google Scholar]
  53. Amira, A.S.; Hanafi, S.E.O.; Hassanien, A.E. Comparison of classification techniques applied for network intrusion detection and classification. J. Appl. Log. 2017, 24, 109–118. [Google Scholar]
  54. Aksu, D.; Üstebay, S.; Aydin, M.A.; Atmaca, T. Intrusion detection with comparative analysis of supervised learning techniques and fisher score feature selection algorithm. In Communications in Computer and Information Science; Springer: Berlin, Germany, 2018; Volume 935, pp. 141–149. [Google Scholar]
  55. Nehra, D.; Kumar, K.; Mangat, V. Pragmatic Analysis of Machine Learning Techniques in Network Based IDS. In Proceedings of the International Conference on Advanced Informatics for Computing Research; Springer: Berlin/Heidelberg, Germany, 2019; pp. 422–430. [Google Scholar]
  56. Mahfouz, A.M.; Venugopal, D.; Shiva, S.G. Comparative Analysis of ML Classifiers for Network Intrusion Detection; Springer: Berlin/Heidelberg, Germany, 2020; pp. 193–207. [Google Scholar]
  57. Rajagopal, S.; Siddaramappa Hareesha, K.; Panduranga Kundapur, P. Performance analysis of binary and multiclass models using azure machine learning. Int. J. Electr. Comput. Eng. 2020, 10, 978. [Google Scholar] [CrossRef] [Green Version]
  58. Ahmim, A.; Ferrag, M.A.; Maglaras, L.; Derdour, M.; Janicke, H. A Detailed Analysis of Using Supervised Machine Learning for Intrusion Detection; Springer: Berlin/Heidelberg, Germany, 2020; pp. 629–639. [Google Scholar]
  59. Frank, E.; Hall, M.A.; Witten, I.H. The WEKA Workbench. Online Appendix for “Data Mining: Practical Machine Learning Tools and Techniques.”; Morgan Kaufmann: Burlington, VT, USA, 2016; p. 128. [Google Scholar]
  60. Su, J.; Zhang, H.; Ling, C.X.; Matwin, S. Discriminative parameter learning for Bayesian networks. In Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 5–9 July 2008; pp. 1016–1023. [Google Scholar]
  61. Yu, S.-Z. Hidden semi-Markov models. Artif. Intell. 2010, 174, 215–243. [Google Scholar] [CrossRef] [Green Version]
  62. Ghahramani, Z. An introduction to Hidden Markov Models and Bayesian Networks. In Hidden Markov Models; World Scientific: Singapore, 2001; Volume 15. [Google Scholar]
  63. Zhang, H. Exploring conditions for the optimality of naïve bayes. Proc. Int. J. Pattern Recognit. Artif. Intell. 2005, 19, 183–198. [Google Scholar] [CrossRef]
  64. John, G.H.; Langley, P. Estimating Continuous Distributions in Bayesian Classifiers George. Proc. Elev. Conf. Uncertain. Artif. Intell. 1995, 42, 338–345. [Google Scholar]
  65. Puurula, A. Scalable Text Classification with Sparse Generative Modeling. In Lecture Notes in Computer Science; Springer: New York, NY, USA, 2012; pp. 458–469. [Google Scholar]
  66. Balakrishnama, S.; Ganapathiraju, A. Linear Discriminant Analysis—A Brief Tutorial; Institute for Signal and information Processing: Philadelphia, PA, USA, 1998; Volume 18, pp. 1–8. [Google Scholar]
  67. Fan, R.E.; Chang, K.W.; Hsieh, C.J.; Wang, X.R.; Lin, C.J. LIBLINEAR: A library for large Linear Classification. J. Mach. Learn. Res. 2008, 9, 1871–1874. [Google Scholar]
  68. Chang, C.C.; Lin, C.J. LIBSVM: A Library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 1–27. [Google Scholar] [CrossRef]
  69. Kleinbaum, D.G.; Klein, M. Introduction to Logistic Regression; Springer: New York, NY, USA, 2010; pp. 1–39. [Google Scholar]
  70. Windeatt, T. Accuracy/diversity and ensemble MLP classifier design. IEEE Trans. Neural Netw. 2006, 17, 1194–1211. [Google Scholar] [CrossRef]
  71. Hertz, J.; Krogh, A.; Palmer, R.G. Introduction to the Theory of Neural Computation; Elsevier Science Publishers: Amsterdam, The Netherlands, 2018; ISBN 9780429968211. [Google Scholar]
  72. Yang, Q.; Cheng, G. Quadratic Discriminant Analysis under Moderate Dimension. Stat. Theory. 2018. Available online: http://arxiv.org/abs/1808.10065 (accessed on 12 May 2020).
  73. Frank, E. Fully Supervised Training of Gaussian Radial Basis Function Networks in WEKA; University of Waikato: Hamilton, New Zealand, 2014; Volume 04. [Google Scholar]
  74. Schwenker, F.; Kestler, H.A.; Palm, G. Unsupervised and Supervised Learning in Radial-Basis-Function Networks. In Self-Organizing Neural Networks; Physica Verlag: Heidelberg, Germany, 2002; pp. 217–243. [Google Scholar]
  75. Kyburg, H.E. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference by Judea Pearl. J. Philos. 1991, 88, 434–437. [Google Scholar] [CrossRef]
  76. Kecman, V. Support Vector Machines—An Introduction; Springer: Berlin/Heidelberg, Germany, 2005; pp. 1–47. [Google Scholar]
  77. Keerthi, S.S.; Shevade, S.K.; Bhattacharyya, C.; Murthy, K.R.K. Improvements to Platt’s SMO algorithm for SVM classifier design. Neural Comput. 2001, 13, 637–649. [Google Scholar] [CrossRef]
  78. Aha, D.W.; Kibler, D.; Albert, M.K. Instance-Based Learning Algorithms. In Machine Learning; Springer: Berlin, Germany, 1991; Volume 6, pp. 37–66. [Google Scholar]
  79. Cleary, J.G.; Trigg, L.E. K*: An Instance-based Learner Using an Entropic Distance Measure. In Machine Learning Proceedings 1995; Morgan Kaufmann: New York, NY, USA, 1995. [Google Scholar]
  80. Wojna, A.; Latkowski, R. Rseslib 3: Library of rough set and machine learning methods with extensible architecture. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2019; Volume 10810 LNCS, pp. 301–323. [Google Scholar]
  81. Frank, E.; Hall, M.; Pfahringer, B. Locally Weighted Naive Bayes; University of Waikato: Hamilton, New Zealand, 2012. [Google Scholar]
  82. Atkeson, C.G.; Moore, A.W.; Schaal, S. Locally Weighted Learning. Artif. Intell. Rev. 1997, 11, 11–73. [Google Scholar] [CrossRef]
  83. Zimek EM (Documentation for extended WEKA including Ensembles of Hierarchically Nested Dichotomies). Available online: http://www.dbs.ifi.lmu.de/~zimek/diplomathesis/implementations/EHNDs/doc/weka/clusterers/FarthestFirst.html (accessed on 12 May 2020).
  84. Kohavi, R. The power of decision tables. In Proceedings of the Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 1995; Volume 912, pp. 174–189. [Google Scholar]
  85. Hall, M.A.; Frank, E. Combining naive bayes and decision tables. In Proceedings of the FLAIRS Conference, Coconut Grove, FL, USA, 15–17 May 2008; Volume 2118, pp. 318–319. [Google Scholar]
  86. Hühn, J.; Hüllermeier, E. FURIA: An algorithm for unordered fuzzy rule induction. Data Min. Knowl. Discov. 2009, 19, 293–319. [Google Scholar] [CrossRef] [Green Version]
  87. Cohen, W.W. Fast Effective Rule Induction. In Proceedings of the Twelfth International Conference on Machine Learning, Tahoe City, CA, USA, 9–12 July 1995. [Google Scholar]
  88. Stefanowski, J. Rough set based rule induction techniques for classification problems. In Rough Set Based Rule Induction Techniques for Classification Problems; Intelligent Techniques & Soft Computing: Aachen, Germany, 1998; Volume 1, pp. 109–113. [Google Scholar]
  89. Sylvain, R. Nearest Neighbor with Generalization; University of Canterbury: Christchurch, New Zealand, 2002. [Google Scholar]
  90. Martin, B. Instance-Based Learning: Nearest Neighbor with Generalization; University of Waikato: Hamilton, New Zealand, 1995. [Google Scholar]
  91. Ben-David, A. Automatic Generation of Symbolic Multiattribute Ordinal Knowledge-Based DSSs: Methodology and Applications. Decis. Sci. 1992, 23, 1357–1372. [Google Scholar] [CrossRef]
  92. Holte, R.C. Very Simple Classification Rules Perform Well on Most Commonly Used Datasets. Mach. Learn. 1993, 11, 63–90. [Google Scholar] [CrossRef]
  93. Frank, E.; Wang, Y.; Inglis, S.; Holmes, G.; Witten, I.H. Using model trees for classification. Mach. Learn. 1998, 32, 63–76. [Google Scholar] [CrossRef]
  94. Thangaraj, M.; Vijayalakshmi. Performance Study on Rule-based Classification Techniques across Multiple Database Relations. Int. J. Appl. Inf. Syst. 2013, 5, 1–7. [Google Scholar]
  95. Pawlak, Z. Rough set theory and its applications to data analysis. Cybern. Syst. 1998, 29, 661–688. [Google Scholar] [CrossRef]
  96. Frank, E. ZeroR. Weka 3.8 Documentation. 2019. Available online: https://weka.sourceforge.io/doc.stable-3-8/weka/classifiers/rules/ZeroR.html (accessed on 12 May 2020).
  97. Suthaharan, S. Decision Tree Learning. In Machine Learning Models and Algorithms for Big Data Classification. Integrated Series in Information Systems; Springer: Berlin/Heidelberg, Germany, 2016; pp. 237–269. [Google Scholar]
  98. Abellán, J.; Moral, S. Building Classification Trees Using the Total Uncertainty Criterion. Int. J. Intell. Syst. 2003, 18, 1215–1225. [Google Scholar] [CrossRef] [Green Version]
  99. Adnan, M.N.; Islam, M.Z. Forest PA: Constructing a decision forest by penalizing attributes used in previous trees. Expert Syst. Appl. 2017, 89, 389–403. [Google Scholar] [CrossRef]
  100. Gama, J. Functional trees. Mach. Learn. 2004, 55, 219–250. [Google Scholar] [CrossRef]
  101. Salzberg, S.L. C4.5: Programs for Machine Learning by J. Ross Quinlan. In Machine Learning; Morgan Kaufmann Publishers, Inc.: New York, NY, USA, 1993; Volume 16, pp. 235–240. [Google Scholar]
  102. Ibarguren, I.; Pérez, J.M.; Muguerza, J.; Gurrutxaga, I.; Arbelaitz, O. Coverage-based resampling: Building robust consolidated decision trees. Knowledge-Based Syst. 2015, 79, 51–67. [Google Scholar] [CrossRef]
  103. Hayashi, Y.; Tanaka, Y.; Takagi, T.; Saito, T.; Iiduka, H.; Kikuchi, H.; Bologna, G.; Mitra, S. Recursive-rule extraction algorithm with J48graft and applications to generating credit scores. J. Artif. Intell. Soft Comput. Res. 2016, 6, 35–44. [Google Scholar] [CrossRef] [Green Version]
  104. Holmes, G.; Pfahringer, B.; Kirkby, R.; Frank, E.; Hall, M. Multiclass alternating decision trees. In Proceedings of the Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2002; Volume 2430, pp. 161–172. [Google Scholar]
  105. Landwehr, N.; Hall, M.; Frank, E. Logistic model trees. Mach. Learn. 2005, 59, 161–205. [Google Scholar] [CrossRef] [Green Version]
  106. Sumner, M.; Frank, E.; Hall, M. Speeding up Logistic Model Tree induction. In Proceedings of the Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Samos, Greece, 18–20 July 2005; Volume 3721 LNAI, pp. 675–683. [Google Scholar]
  107. Jiang, L.; Li, C. Scaling up the accuracy of decision-tree classifiers: A naive-bayes combination. J. Comput. 2011, 6, 1325–1331. [Google Scholar] [CrossRef]
  108. Kalmegh, S. Analysis of WEKA Data Mining Algorithm REPTree, Simple Cart and RandomTree for Classification of Indian News. Int. J. Innov. Sci. Eng. Technol. 2015, 2, 438–446. [Google Scholar]
  109. Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar] [CrossRef] [Green Version]
  110. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  111. Witten, I.H.; Frank, E.; Hall, M.A.; Pal, C.J. Data Mining: Practical Machine Learning Tools and Techniques; Elsevier: Amsterdam, The Netherlands, 2016; ISBN 9780128042915. [Google Scholar]
  112. Islam, Z.; Giggins, H. Knowledge Discovery through SysFor: A Systematically Developed Forest of Multiple Decision Trees. 2011. Available online: https://www.researchgate.net/publication/236894348 (accessed on 11 May 2020).
  113. Wilkinson, L.; Anand, A.; Tuan, D.N. CHIRP: A new classifier based on composite hypercubes on iterated random projections. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Diego, CA, USA, 21–24 August 2011; pp. 6–14. [Google Scholar]
  114. Athanasiadis, I.N.; Kaburlasos, V.G.; Mitkas, P.A.; Petridis, V. Applying Machine Learning Techniques on Air Quality Data for Real-Time Decision Support. First Int. Symp. Inf. Technol. Environ. Eng. 2003, 2–7. Available online: http://www.academia.edu/download/53083886/Applying_machine_learning_techniques_on_20170511-3627-1jgoy73.pdf (accessed on 11 May 2020).
  115. Deeb, Z.A.; Devine, T. Randomized Decimation HyperPipes; Penn State University: University Park, PA, USA, 2010. [Google Scholar]
  116. Demiröz, G.; Altay Güvenir, H. Classification by voting feature intervals. In Proceedings of the Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 1997; Volume 1224, pp. 85–92. [Google Scholar]
  117. Ingre, B.; Yadav, A. Performance analysis of NSL-KDD dataset using ANN. In Proceedings of the 2015 International Conference on Signal Processing and Communication Engineering Systems, Guntur, India, 2–3 January 2015; pp. 92–96. [Google Scholar] [CrossRef] [Green Version]
  118. Ibrahim, L.M.; Taha, D.B.; Mahmod, M.S. A comparison study for intrusion database (KDD99, NSL-KDD) based on self organization map (SOM) artificial neural network. J. Eng. Sci. Technol. 2013, 8, 107–119. [Google Scholar]
  119. Tavallaee, M.; Bagheri, E.; Lu, W.; Ghorbani, A.A. A detailed analysis of the KDD CUP 99 data set. In Proceedings of the 2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications, Ottawa, ON, Canada, 8–10 July 2009; pp. 1–6. [Google Scholar]
  120. Shiravi, A.; Shiravi, H.; Tavallaee, M.; Ghorbani, A.A. Toward developing a systematic approach to generate benchmark datasets for intrusion detection. Comput. Secur. 2012, 31, 357–374. [Google Scholar] [CrossRef]
  121. Gharib, A.; Sharafaldin, I.; Lashkari, A.H.; Ghorbani, A.A. An Evaluation Framework for Intrusion Detection Dataset. In Proceedings of the ICISS 2016—2016 International Conference on Information Science and Security, Jaipur, India, 19–22 December 2016; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2017. [Google Scholar]
  122. Panigrahi, R.; Borah, S. Design and Development of a Host Based Intrusion Detection System with Classification of Alerts; Sikkim Manipal University: Sikkim, India, 2020. [Google Scholar]
  123. Sharafaldin, I.; Lashkari, A.H.; Ghorbani, A.A. Toward Generating a New Intrusion Detection Dataset and Intrusion Traffic Characterization. In Proceedings of the ICISSP 2018, Madeira, Portugal, 22–24 January 2018; pp. 108–116. [Google Scholar]
Figure 1. Usage statistics of supervised classifiers.
Figure 2. Comparison of classification accuracy in various classifier groups found in the literature.
Figure 3. The methodology of classification to rank allocations of supervised classifiers.
Figure 4. Weights and ranks of supervised classifier groups.
Figure 5. Performance of decision tree classifiers for the NSLKDD dataset.
Figure 6. Performance of decision tree classifiers for the ISCXIDS2012 dataset.
Figure 7. Performance of decision tree classifiers for the CICIDS2017 dataset.
Figure 8. Techniques for Order Preference by Similarity to the Ideal Solution (TOPSIS) weights and ranks of decision tree classifiers for the NSLKDD, ISCXIDS2012, and CICIDS2017 datasets.
Figure 9. Detection (%) of attacks and normal class labels of the NSL-KDD multi-class dataset.
Figure 10. Detection (%) of attacks and normal class labels of the ISCXIDS2012 binary-class dataset.
Figure 11. Detection (%) of attacks and normal class labels of the CICIDS2017 multi-class dataset.
Figure 12. Classification of J48Consolidated on the NSL-KDD dataset.
Figure 13. Classification of J48Consolidated on the ISCXIDS2012 dataset.
Figure 14. Classification of J48Consolidated on the CICIDS2017 dataset.
Table 1. Detailed findings and analysis of supervised classifiers.
Inferences/Observations/Limitations/Research Gaps:
With 20 features, BayesNet shows the highest accuracy of 99.3% for classifying DDoS attacks, and PART shows 98.9% for classifying Probe attacks. No class imbalance issue was found. Tested on an older dataset, which is now obsolete. U2R and R2L attacks were completely ignored; hence, classifier performance may vary with the inclusion of U2R and R2L instances.
The Gaussian classifier seems to be effective for R2L and Probe attacks, with the highest detection rates of 0.136 and 0.874, respectively. Naïve Bayes proved suitable for U2R attacks with the highest detection rate of 0.843; Decision Tree and Random Forest classified DoS attacks with the highest detection rate of 0.972. Considering only the highest detection rate across three training sets is not convincing; the average detection rate would have highlighted better classifiers for the given scenario.
A decent number of performance measures were used to analyze the classifiers, but other state-of-the-art classifiers are missing from the comparison. The dataset sample size and the number of features considered are not precise. Although Naïve Bayes proved to be the better classifier in terms of FP Rate, ID3 performs far ahead of Naïve Bayes. Class imbalance issues are not considered during evaluation.
The accuracy of the induction tree is promising, with an overall rate of 99.839%. Although it is appreciable that the induction tree performs well on the class-imbalanced KDD’99 dataset, the size of the training set and the class-wise breakup of training instances are not precise. The reason for considering different training instances for three different classifiers is not clear. Considering the ROC area, it is evident that the induction tree correctly classified Neptune, Smurf, pod, teardrop, port sweep, and back attack instances.
C4.5 scores the highest average accuracy of 64.94% as compared to 62.7% for SVM. Considering attack accuracy, C4.5 seems to be suitable for detecting Probe, DoS, and U2R attacks, whereas SVM classifies R2L threats better. The class imbalance issue is not addressed.
J48 (C4.5) proved to be an accurate classifier for classifying test instances. The data extraction and preprocessing procedure is not clearly defined. The training set has a high class imbalance, so evaluating the classifiers in terms of accuracy and detection rate alone is not sufficient.
Performance measures used:
Accuracy
Kappa
Mean Absolute Error
Root Mean Squared Error
Detection Rate
Accuracy, Kappa, RMSE, Precision
Recall, FP Rate
Precision, Recall
FN Rate,
F-Measure
Accuracy, MA Error, RMS Error
RA Error, RRS Error, TP Rate
FP Rate, Precision, Recall, F-Measure, ROC Area
Accuracy
Detection Rate
FP Rate
Accuracy
Detection Rate
FP Rate
Dataset, Features and Sample Size:
Dataset: KDD’99
FS procedure: Information Gain
Number of Features Selected: 20
Training instances: 492,842
Testing Instances: N/A
Dataset: KDD’99
Features Selected: All features
Training instances: 270,000
Testing Instances: 311,029
Dataset: KDD’99
Features Selected: All features
Dataset: KDD’99
Features Selected: All features
Training instances: N/A
Testing Instances: 19,870
Dataset: KDD’99
Features Selected: All features
Training instances: N/A
Dataset: KDD’99
Features Selected: All features
Training instances: 311,029
Testing Instances: 494,014
Classification Type:
Multi Class
Normal
DoS
Probe
Multi Class
Normal
DoS
Probe
U2R
R2L
Multi-Class
Normal
DoS
Probe
U2R
R2L
Multi-Class
Normal
Neptune
Smurf
guess_passwd
Pod
Multi-Class
Normal
DoS
Probe
U2R
R2L
Multi-Class
Normal
DoS
Probe
U2R
R2L
Classifiers Evaluated:
J48 (C4.5),
BayesNet,
Naïve Bayes,
Part,
Multilayer Perceptron,
SVM
Gaussian,
Naïve Bayes,
Decision Tree (C4.5),
Random Forest
Naïve Bayes
J48(C4.5)
ID3
Induction Tree
Naïve Bayes
ANN
C4.5
SVM
SVM
J48 (C4.5)
Multilayer Perceptron
Author/Year/Reference:
Araar et al. (2005) [36]; Gharibian et al. (2007) [37]; Panda et al. (2008) [38]; Srinivasulu et al. (2009) [39]; Wu et al. (2009) [40]; Jalil et al. (2010) [41]
Inferences/Observations/Limitations/Research Gaps:
Random Forest appears to be effective for detecting DoS and Probe attacks.
NB Tree is useful for detecting R2L and U2R attacks
The classifiers’ performances are measured in a binary environment.
Performance may vary in a multiclass environment with a very high class imbalance rate.
The C5.0 decision tree shows the highest detection rate of 98.75% for the KDD dataset’s testing samples. DoS and Probe attacks are detected with detection rates of 99.56% and 97.25%, respectively. The sample size and the basis for selecting it are not defined in the research.
J48 evolved as the best classifier with 99.13% accuracy.
OneR is very fast in classifying instances.
The basis of sampling, training, and testing size is not mentioned.
How the classifiers will behave in a class imbalance situation is not defined.
Brilliantly evaluated.
It can be extended to other groups of classifiers.
NBTree achieves the highest accuracy of 97.76%.
Random Forest proves to provide a high accuracy rate for classifying threats. Considering 15 features, Random Forest shows an accuracy rate of 99.8% for Normal, 99.1% for DoS, 98.9% for Probe, 98.7% for U2R, and 97.9% for R2L. The average accuracy of Random Forest reaches 98.88% for 15 features of the NSL-KDD dataset.
kNN proved to be the best classifier in terms of accuracy.
No benchmark datasets were used for the evaluation of classifiers.
Class imbalance issue has not been explored.
Performance measures used:
Accuracy
Detection Rate
FP Rate
Testing time
Detection Rate
Testing time, Accuracy, TP Rate
FP Rate, MA Error, RMS Error, RA Error, RRS Error
Training time, Accuracy, MAE, RMSE, Kappa, Recall, Precision, F-Measure, FP Rate
Accuracy
Accuracy
Dataset, Features and Sample Size:
Dataset: KDD’99
Feature Selection Technique: CFS
Features: 7
Training instances: N/A
Testing Instances: N/A
Dataset: KDD’99
Feature Selection Technique: N/A
Training instances: N/A
Testing Instances: N/A
Dataset: NSLKDD
Training instances: N/A
Testing instances: 2747
Dataset: NSL-KDD
Feature Selection Techniques:
CONS: 12 features, CFS: 3 features
Training instances: 25,192
Testing instances: 11,850
Dataset: NSL-KDD
Feature Selection Techniques: CFS
Features: 15
Training instances: 125,937
Testing instances: 22,544
Dataset: Artificial Dataset
Feature Selection Scheme: CFS
Features: 2 to 10
Training instances: N/A
Testing instances: N/A
Classification Type:
Binary Class
Normal
Instances of any one other class.
Multi-Class
Normal
DoS
Probe
N/A
Multi-Class
Normal
DoS
Probe
U2R
R2L
Multi-Class
Normal
DoS
Probe
U2R
R2L
N/A
Classifiers Evaluated:
J48,
Naïve Bayes,
NB Tree,
Random Forest
SVM,
Ripper Rule,
C5.0 decision tree
Naive Bayes,
J48,
OneR,
PART,
RBF Network
ADTree, C4.5, LADTree, NBTree,
Random Tree, Random Forest, REP Tree
Random Forest
J48
SVM
CART
Naïve Bayes
Naïve Bayes, Bayes Net, C4.5, Random Forest, CART, kNN,
Logistic Regression, MLP, SVM
Author/Year/Reference:
Amudha et al. (2011) [42]; Naidu et al. (2012) [43]; Kalyani et al. (2012) [44]; Thaseen et al. (2013) [45]; Revathi et al. (2013) [46]; Amancio et al. (2014) [35]
Inferences/Observations/Limitations/Research Gaps:
Random Forest shows the highest accuracy of 97.75% and 100% for the LLsDDoS and CAIDA Conficker datasets, respectively. J48 and Random Forest both show an equal highest accuracy of 99.26% for the CAIDA DDoS 2007 dataset. The class imbalance issue has not been addressed. The type of classification, whether binary or multiclass, is not clear.
Random Forest shows the highest accuracy of 91.52%.
Considering False Positive Rate, BayesNet seems to be better.
The test could have been conducted with varying sample sizes or with the maximum sample size possible to confirm the suitable classifier.
Proposed two IDS models for classifying the different type of attack instances.
Random Forest and Fuzzy Logic seem to be ideal classifiers for classifying various attacks. The training time of a classifier does not provide a clear picture of designing an IDS. Hence, testing time per instance would provide a precise result.
PART shows the highest accuracy of 99.97%
Many other prominent classifiers are missed from the evaluation.
Tested on an obsolete dataset. Declaring the best classifier just based on accuracy may not reveal the real capabilities of the classifier.
Other measures, such as ROC and PRC values, should be considered for judging the classifiers’ performance in class imbalance learning.
Random Forest proved to be the best classifier, among others.
A class imbalance issue was found, as NSL-KDD is a class-imbalanced dataset. A similar test on other state-of-the-art classifiers is required.
Random Forest shows the highest accuracy of 93.77%
Class imbalance issues found with Normal-U2R and Normal-R2L instances. Tested on an obsolete dataset
Performance measures used:
Accuracy
FN Rate
FP Rate
Precision
Recall
Training Time, Sensitivity, Specificity,
Accuracy, FP Rate, Kappa, F-Measure, Precision, ROC,
TP Rate
FP Rate
Training Time
Accuracy, Recall, Precision, F-Measure, TP Rate, TN Rate
ROC Area
Kappa
Accuracy
F-Measure
ROC Value
Precision
Recall
Accuracy, FP Rate, FN Rate, TP Rate,
Precision, ROC value, RMS Error
Dataset, Features and Sample Size:
Datasets: LLsDDoS, CAIDA DDoS 2007, CAIDA Conficker
Feature Selection Procedure: Manual, Features Selected: 7
Training and Testing Instances: N/A
Dataset: NSL-KDD
Features Selected: All features
Training instances: 1166
Testing instances: 7456
Dataset: KDD’99
Feature Selection Technique: Information Gain
Features: 20
Training and Testing instances: N/A
Dataset: KDD’99
No. of Features: All features
Training and Testing instances: N/A
Dataset: NSL-KDD
No. of Features selected: All
Training and Testing instances: N/A
Dataset: KDD’99
No. of features: All
Training instances: 148,753
Testing instances: 60,000
Classification Type:
N/A
Multi-Class, Normal, DoS
Probe
U2R
R2L
Multi-Class
Normal, DoS
Probe
U2R
R2L
Multi-Class
Normal
DoS
Probe
U2R
R2L
Multi-Class
Normal
DoS
Probe
U2R
R2L
Multi-Class
Normal
DoS
Probe
U2R
R2L
Classifiers Evaluated:
Naïve Bayes,
RBF Network,
Multilayer Perceptron,
BayesNet,
IBK,
J48 (C4.5),
Random Forest
BayesNet, Logistic, IBk, JRip, PART, J48,
Random Forest, Random Tree, REPTree
Bayes Net, Naïve Bayes, C4.5, ID3, NBTree, Fuzzy Logic
SVM, Decision Table,
JRip, OneR, MLP, IBk
Decision Table, JRip, ZeroR, OneR, PART, BayesNet, Naïve Bayes, MLP, SMO, Simple Logistic, IBk
Kstar, LWL
Logistic Regression
Gaussian Naïve Bayes, SVM,
Random Forest
J48 (C4.5), Random Forest, Random Tree, Decision Table, Multilayer Perceptron, Naïve Bayes, BayesNet
Author/Year/Reference:
Robinson et al. (2015) [47]; Choudhury et al. (2015) [48]; Jain et al. (2016) [49]; Bostani et al. (2017) [50]; Belavagi et al. (2016) [51]; Almseidin et al. (2017) [52]
Inferences/Observations/Limitations/Research Gaps:
The best classifiers for attacks of the NSL-KDD dataset in an anomalous traffic condition are: DoS attacks, Multilayer Perceptron; Probe attacks, BFTree; U2R attacks, J48; R2L attacks, Naïve Bayes. Overall, all the classifiers except Naïve Bayes worked well with the NSL-KDD dataset. No performance measures were used to validate the classifiers in this class imbalance situation; therefore, the classifier seems to be ideal, but it may not be consistent in this scenario.
Decision Tree shows the highest accuracy of 99%. No class imbalance issue is present. The class-wise samples contradict the total training data size.
Random Forest proved to be the best classifier among those evaluated. A class imbalance issue was found, as NSL-KDD is a class-imbalanced dataset. Similarly, the U2R and R2L attacks were not perfectly detected due to the inherent class imbalance. A similar test on other state-of-the-art classifiers is required.
With all the features of the NSL-KDD dataset, the J48 classifier outperforms all other classifiers. With a reduced feature set obtained through information gain feature selection, IBk seems to be the better classifier. Under-sampling of the highly dominant classes and over-sampling of the poorly represented classes improve the detection accuracy of R2L and U2R attacks.
The two-class decision forest model evolved as the best detection scheme, with a detection accuracy of 99.2%. The generic, exploits, shellcode, and worms attacks were also detected with 99%, 94.49%, 91.79%, and 90.9% accuracy, respectively. The evaluation was carried out with the cutting-edge Microsoft Azure Machine Learning Studio to handle the huge number of instances in the UNSW NB-15 dataset.
Random Forest emerged as the best classifier for multi-attack scenarios. On the other hand, in a binary attack scenario, C4.5 was found to be the best classifier for detection.
Performance measures used:
Accuracy
FP Rate
TP Rate
FN Rate
Precision
Recall
F-Score
Accuracy
Recall
Precision
F-Measure
Accuracy
F-Measure
Precision
Recall
Accuracy,
True Positive Rate,
False Positive Rate,
Precision,
Recall
F-Measure,
ROC Area
Accuracy,
Precision,
Recall,
F1-Score,
AUC,
False Alarm Rate, Training Time, Testing Time
Detection Rate,
True Negative Rate,
False Alarm Rate, Accuracy, Training Time, Testing Time
Dataset, Features and Sample Size:
Dataset: NSL-KDD
Feature Selection Technique: Sequential Floating Forward Selection (SFFS), No of Features: 26
Training instances:
125,973
Testing instances: 22,544
Dataset: CICIDS 2017
Feature Selection Techniques: Fisher Score, No of Features: 30,
Training instances: 203,171
Testing instances: 22,575
Dataset: KDD’ 99,
NSL-KDD,
No. of features: All
Testing instances:
KDD’ 99
Sample Size: 494,021
NSL-KDD
Sample Size: 125,973
Dataset: NSL-KDD
Separately evaluated on Information Gain Feature Selection and All Features, 10-fold cross validation on instances of the dataset
Dataset: UNSW NB-15
Feature Selection Scheme: Mutual information
Training samples: 175,341
Testing samples: 82,332
Dataset: CICIDS2017,
Feature Selection Techniques: Manual feature selection. Features having unique values for each instance of the dataset has been considered.
Training instances: 40,000
Testing instances: 40,000
Classification Type:
Multi-Class
Normal
DoS
Probe
U2R
R2L
Binary
Benign
DoS
Multi-Class
Normal
DoS
Probe
U2R
R2L
Multi-Class
Normal
DoS
Probe
U2R
R2L
Multi-Class
Normal,
Analysis,
Backdoor,
Reconnaissance,
Shellcode,
Worms,
DOS,
Fuzzers,
Generic,
Exploits
Multi-Class
Benign, DoS, PortScan, Bot, Brute Force, Web Attacks, Infiltration
Classifiers Evaluated:
Naïve Bayes
BF Tree
J48
Multilayer Perceptron
NB Tree
RFT
SVM
IBk(k-NN)
Decision Tree
Random Forest
J48 (C4.5)
BayesNet
Naïve Bayes
SVM
Naïve Bayes,
Logistic Regression,
MLP,
SVM,
IBk,
J48 (C4.5)
Average Perceptron,
Bayes point machine,
Boosted Decision Tree,
Decision Forest,
Decision Jungle,
Locally deep SVM, Logistic Regression
J48 (C4.5),
ForestPA,
Random Forest,
REP Tree,
Jrip, FURIA,
RIdor, MLP, RBF, LIBSVM, SVM, Naïve Bayes
Author/Year/Reference:
Aziz et al. (2017) [53]; Aksu et al. (2018) [54]; Nehra et al. (2019) [55]; Mahfouz et al. (2020) [56]; Rajagopal et al. (2020) [57]; Ahmim et al. (2020) [58]
Table 2. Bayes classifiers for evaluation.
Sl. No. | Name of Classifiers | Short Name
1 | Discriminative Multinomial Naive Bayes [60] | DMNB
2 | Hidden Markov Models [61,62] | HMM
3 | Naive Bayes [63,64] | NB
4 | Sparse Generative Model [65] | SGM
Table 3. Functions classifiers.
Sl. No. | Name of Classifiers | Short Name
1 | Linear Discriminant Analysis [66] | LDA
2 | LibLINEAR [67] | LLNR
3 | LibSVM [68] | LSVM
4 | Logistic Regression [69] | LR
5 | Multilayer Perceptron—With one hidden layer [70] | MLPH
6 | Multilayer Perceptron—Back Propagation Neural Network [71] | MLPB
7 | Quadratic Discriminant Analysis [72] | QDA
8 | Radial Basis Function [73] | RBF
9 | Radial Basis Function Network [74] | RBFN
10 | Simple Logistic Regression [75] | SLR
11 | Sequential Minimal Optimization [76,77] | SMO
Table 4. Lazy group classifiers.
Sl. No. | Name of Classifiers | Short Name
1 | IB1 (Nearest Neighbor approach) [78] | IB1
2 | IBk (k-nearest neighbor approach) [78] | IBK
3 | IBkLG (k-nearest neighbor with Log and Gaussian kernel) [78] | IBKLG
4 | KStar [79] | KSTAR
5 | Local Knn [80] | LKNN
6 | Locally Weighted Learning [81,82] | LWL
7 | Rseslib Knn [80] | RLKNN
Table 5. Rule-based classifiers.
Sl. No. | Name of Classifiers | Short Name
1 | Conjunctive Rule [83] | CR
2 | Decision Table [84] | DTBL
3 | Decision Table Naïve Bayes hybrid classifier [85] | DTNB
4 | Fuzzy Rule Induction [86] | FURIA
5 | JRip [87] | JRIP
6 | MODLEM [88] | MODLEM
7 | Nearest Neighbor with Generalization [89,90] | NNGE
8 | Ordinal Learning Method [91] | OLM
9 | OneR [92] | ONER
10 | PART [93] | PART
11 | RIpple-DOwn Rule learner [94] | RIDOR
12 | Rough Set [95] | ROUGHS
13 | ZeroR [96] | ZEROR
Table 6. Decision tree classifiers.
Sl. No. | Name of Classifiers | Short Name
1 | Best-First Decision Tree [97] | BFT
2 | Criteria Based Decision Tree [98] | CDT
3 | ForestPA [99] | FPA
4 | Functional Tree [100] | FT
5 | J48 [101] | J48
6 | J48Consolidated [101,102,103] | J48C
7 | J48Graft [104] | J48G
8 | Logit Boost-based Alternating Decision Tree [105] | LADT
9 | Logistic Model Trees [106,107] | LMT
10 | Naïve Bayes based Decision Tree [108] | NBT
11 | Reduced Error Pruning Tree [109] | REPT
12 | Random Forest [110,111] | RF
13 | Random Tree [111] | RT
14 | Simple Cart [112] | SC
15 | SysFor [113] | SF
Table 7. Miscellaneous classifiers.
Sl. No. | Name of Classifiers | Short Name
1 | Composite Hypercubes on Iterated Random Projections [114] | CHIRP
2 | Fuzzy Lattice Reasoning [115] | FLR
3 | Hyper Pipes [116] | HP
4 | Voting Feature Intervals [117] | VFI
Table 8. Datasets and sample sizes used for training and testing.
Datasets | Sample Size | Training Instances | Testing Instances
NSLKDD | 7781 | 5135 | 2646
ISCXIDS2012 | 5494 | 3626 | 1868
CICIDS2017 | 8917 | 5885 | 3032
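The training and testing counts in Table 8 correspond to roughly a two-thirds/one-third partition of each sampled dataset. Purely as an illustration (not the authors' exact extraction procedure), a stratified split of that shape could be produced with scikit-learn; the file name and label column below are hypothetical placeholders.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical input: the 7781 NSLKDD records sampled in Table 8, with a "label" column.
df = pd.read_csv("nslkdd_sample.csv")
X, y = df.drop(columns=["label"]), df["label"]

# 5135 training / 2646 testing instances, stratified on the class label (Table 8).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=5135, test_size=2646, stratify=y, random_state=42)
```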
Table 9. The algorithm algoWeighting.
Input:
C := {C_1, C_2, C_3, ..., C_n} // classifiers or classifier groups
P := {P_1, P_2, P_3, ..., P_k} // performance measures
Output:
Classifier groups with weights W_i.
begin
Step 1. Decision matrix construction
M_d := [C_i P_j], i = 1..n, j = 1..k // n = classifiers and k = performance outcomes
Step 2. Decision matrix normalization
for i := 1 to n
  for j := 1 to k
    r_ij := (x_ij - min(x_j)) / (max(x_j) - min(x_j))
  end
end
Step 3. Formation of the weighted normalized matrix
V_ij := W_j * r_ij // W_j = weight allocated to performance metric j
Step 4. Estimation of the positive (A+) and negative (A-) ideal solutions
A+ := {V_1+, ..., V_n+}, A- := {V_1-, ..., V_n-}
Step 5. Estimation of the separation of each classifier/classifier group from the ideal solutions
S_i+ := sqrt( sum_{j=1..k} (V_ij - V_j+)^2 ) // separation from the positive ideal solution
S_i- := sqrt( sum_{j=1..k} (V_ij - V_j-)^2 ) // separation from the negative ideal solution
Step 6. Weight estimation of classifiers
W_i := S_i- / (S_i- + S_i+)
end
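As a complement to the pseudocode in Table 9, the following is a minimal NumPy sketch of the same TOPSIS weighting, not the authors' implementation. The benefit/cost orientation of each measure is an assumption passed through the benefit argument, since Table 9 does not state which measures are maximized and which are minimized.

```python
import numpy as np

def topsis_weights(scores, weights, benefit):
    """TOPSIS closeness scores following the steps of Table 9 (algoWeighting).

    scores  : (n, k) array, one row per classifier group, one column per measure
    weights : (k,) importance weights, e.g., the Table 10 values
    benefit : (k,) booleans, True where larger is better (accuracy, precision),
              False where smaller is better (testing time, MAE, FPR); assumed,
              as Table 9 does not specify the orientation of each measure.
    """
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    benefit = np.asarray(benefit, dtype=bool)

    # Step 2: min-max normalisation of the decision matrix
    lo, hi = scores.min(axis=0), scores.max(axis=0)
    r = (scores - lo) / np.where(hi > lo, hi - lo, 1.0)

    # Step 3: weighted normalised matrix
    v = weights * r

    # Step 4: positive and negative ideal solutions per measure
    a_pos = np.where(benefit, v.max(axis=0), v.min(axis=0))
    a_neg = np.where(benefit, v.min(axis=0), v.max(axis=0))

    # Step 5: Euclidean separation from the two ideal solutions
    s_pos = np.sqrt(((v - a_pos) ** 2).sum(axis=1))
    s_neg = np.sqrt(((v - a_neg) ** 2).sum(axis=1))

    # Step 6: closeness coefficient, used here as the classifier weight
    return s_neg / (s_pos + s_neg)
```

Called with one row per classifier group and the weighted measures of Table 10, it returns one closeness score per group; a higher score means the group lies nearer to the positive ideal solution.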
Table 10. Weights allocated to various performance measures.
Performance Measures | Weight Allocated
Testing Time | 1
Accuracy | 8
Kappa | 4
Mean Absolute Error (MAE) | 2
False Positive Rate (FPR) | 5
Precision | 7
Receiver Operating Curve (ROC) Value | 6
Matthews Correlation Coefficient (MCC) | 3
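For context, the measures weighted in Table 10 can be gathered per classifier with standard tooling. The sketch below uses scikit-learn for a binary task; the function name is illustrative, the MAE line only approximates WEKA's mean absolute error from predicted probabilities, and multi-class runs would need averaged variants of precision, FPR, and ROC AUC.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score, confusion_matrix,
                             matthews_corrcoef, precision_score, roc_auc_score)

def collect_measures(y_true, y_pred, y_score, testing_time):
    """Return the eight Table 10 measures for one classifier on a binary task.

    y_true, y_pred : 0/1 ground-truth and predicted labels
    y_score        : predicted probability of the positive class
    testing_time   : wall-clock seconds measured around the predict() call
    """
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "Testing Time": testing_time,
        "Accuracy": accuracy_score(y_true, y_pred),
        "Kappa": cohen_kappa_score(y_true, y_pred),
        "MAE": float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_score)))),
        "FPR": fp / (fp + tn),
        "Precision": precision_score(y_true, y_pred),
        "ROC": roc_auc_score(y_true, y_score),
        "MCC": matthews_corrcoef(y_true, y_pred),
    }
```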
Table 11. The algorithm rankClassifiers.
Input:
C := {C_1, C_2, C_3, ..., C_p} // classifiers or classifier groups
W_d := {W_1, W_2, W_3, ..., W_q} // classifiers' weights for dataset d
Output:
Classifier/classifier group labels with rank R
begin
Step 1. Import the list of classifiers
C := {C_1, C_2, C_3, ..., C_n}
Step 2. Import the classifiers' weights
W := {W_1, W_2, W_3, ..., W_n}
Step 3. Calculate the average weight of each classifier over the datasets
W_cd := ( sum_{i=1..|d|} W_{q_i} ) / |d| // average weight of classifier C_p across the |d| datasets
Step 4. Rank the classifiers in descending order of their average weight
R_c := Rank_desc(W_cd)
end
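A compact sketch of rankClassifiers (Table 11), again assumed rather than taken from the authors' code: it averages the per-dataset TOPSIS weights produced by a routine such as topsis_weights() above and ranks the classifiers in descending order of that average.

```python
import numpy as np

def rank_classifiers(names, per_dataset_weights):
    """Rank classifiers by their average weight across datasets (Table 11).

    names               : list of classifier (group) labels
    per_dataset_weights : (n_datasets, n_classifiers) array of TOPSIS weights,
                          one row per dataset (e.g., NSLKDD, ISCXIDS2012, CICIDS2017)
    Returns a list of (name, average weight, rank) tuples, best rank first.
    """
    avg = np.asarray(per_dataset_weights, dtype=float).mean(axis=0)  # Step 3: average weight
    order = np.argsort(-avg)                                         # Step 4: descending order
    ranks = np.empty(len(names), dtype=int)
    ranks[order] = np.arange(1, len(names) + 1)
    return sorted(zip(names, avg, ranks), key=lambda item: item[2])
```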
Table 12. Overall performance of classifier groups for NSLKDD dataset.
MiscellaneousAvg1.4100.02049.69850.3020.3720.2450.39380.759100.7620.9550.04595.46495.46495.4640.7450.4370.552
Max5.5500.04084.39275.2460.7930.3170.549104.430140.7010.9930.07899.28899.28899.2880.8960.8010.751
Min0.0100.01024.75415.6090.0710.0620.25020.57164.0820.9220.00792.16792.16792.1670.5380.1800.285
Decision TreeAvg4.1800.03095.4604.5400.9400.0270.1218.82631.0350.9960.00499.56199.56199.5610.9880.9430.963
Max39.9700.13097.61913.5680.9690.0940.20030.90351.3340.9990.02099.93899.93899.9380.9980.9700.993
Min0.0200.00186.4322.3810.8230.0130.0904.16023.0060.9800.00198.00298.00298.0020.9710.8410.886
RulesAvg1.5900.04082.12117.8590.7630.0810.20026.84651.3140.9880.01298.75498.75498.7540.8980.7670.799
Max7.6200.20097.24174.3390.9640.3040.433100.000111.1280.9990.10099.93599.93599.9350.9930.9650.976
Min0.0010.00125.6612.7590.0000.0120.1023.80726.0880.9000.00189.99189.99189.9910.5000.0000.240
LazyAvg48.96015.56090.7309.2700.8760.0500.16516.47442.3550.9960.00499.60199.60199.6010.9690.8760.919
Max333.50067.29095.72934.4670.9440.2090.31368.82180.2390.9990.01299.88099.88099.8800.9910.9450.971
Min0.0010.14065.5334.2710.5340.0200.1226.72431.1670.9880.00198.77598.77598.7750.9270.5250.825
FunctionsAvg2.9900.12072.06127.9390.6290.1550.29250.92874.8680.9910.00999.13099.13099.1300.8870.6390.737
Max9.3701.22092.02638.8130.8950.2620.37186.34295.2270.9970.01699.67599.67599.6750.9460.8970.866
Min0.0200.00161.1877.9740.5010.0320.17910.50945.8040.9840.00398.40098.40098.4000.7700.5200.510
BayesAvg0.0400.01041.04358.9570.2660.2580.36984.97994.6960.9660.03496.59696.59696.5960.6940.2820.479
Max0.0800.02070.82481.5190.6100.3220.405106.223103.9200.9870.06498.66498.66498.6640.8890.6280.745
Min0.0100.00118.48129.1760.0000.1760.30957.83579.1400.9360.01393.59193.59193.5910.5000.0000.240
Performance MeasuresTraining Time (s)Testing Time (s)Model Accuracy (%)M.C.R. (%)Kappa StatisticsM.A.E. rR.M.S.E.R.A.E. (%)R.R.S.E. (%)True Positive RateFalse Positive RatePrecision (%)Sensitivity (%)F-MeasureROC ValueMCC ValuePRC Area
Table 13. Overall performance of classifier groups for ISCXIDS2012 dataset.
MiscellaneousAvg0.7400.01057.54842.4520.1450.4280.54385.557108.5760.9950.00599.54099.54099.5400.5900.1760.573
Max2.9400.03077.35649.0900.5450.4990.70199.813140.0980.9970.00699.69999.69999.6990.7710.5700.717
Min0.0010.00150.91022.6450.0000.2260.47645.28195.1520.9940.00399.40699.40699.4060.5000.0000.500
Decision TreeAvg5.1700.01097.3522.6480.9470.0360.1527.19730.3381.0000.00099.97399.97399.9730.9850.9470.980
Max60.4800.04098.5555.3000.9710.0810.21316.13542.6491.0000.00099.98799.98799.9870.9980.9710.998
Min0.0200.00194.7001.4450.8940.0210.1074.17521.3841.0000.00099.95199.95199.9510.9680.8950.954
RulesAvg0.6100.02089.96010.0310.8000.1140.24322.75848.5640.9990.00199.90799.90799.9070.9050.8080.890
Max3.4300.16097.91250.9100.9580.5000.529100.000105.7021.0000.00499.98299.98299.9820.9920.9590.991
Min0.0010.00149.0902.0880.0000.0230.1394.67027.8630.9960.00099.60599.60599.6050.5000.0000.500
LazyAvg14.0709.22092.5517.4490.8510.0890.25217.74750.2930.9990.00199.92399.92399.9230.9400.8550.920
Max92.18029.72097.32317.8270.9460.2730.36754.61473.2821.0000.00299.97299.97299.9720.9900.9460.987
Min0.0010.01082.1732.6770.6410.0300.1535.99530.5600.9980.00099.82599.82599.8250.8840.6740.866
FunctionsAvg2.3400.17070.87329.1270.4130.3430.47168.68694.1240.9970.00399.73099.73099.7300.7390.4510.731
Max18.7201.86090.36449.0900.8070.4910.70198.163140.0980.9990.00599.90699.90699.9060.9290.8070.924
Min0.0100.00150.9109.6360.0000.1700.30233.98660.3960.9950.00199.49899.49899.4980.5000.0000.500
BayesAvg0.0200.01050.66949.3310.0040.4980.55299.558110.3310.9950.00599.48699.48699.4860.5760.0040.563
Max0.0500.02050.91049.7860.0210.5000.70299.983140.2810.9960.00699.61099.61099.6100.7910.0580.746
Min0.0010.00150.21449.090−0.0050.4930.50098.60399.9690.9940.00499.37399.37399.3730.500−0.0430.500
Performance MeasuresTraining Time (s)Testing Time (s)Model Accuracy (%)M.C.R. (%)Kappa StatisticsM.A.E.R.M.S.E.R.A.E. (%)R.R.S.E. (%)True Positive RateFalse Positive RatePrecision (%)Sensitivity (%)F-MeasureROC ValueMCC ValuePRC Area
Table 14. Overall performance of classifier groups for CICIDS2017 dataset.
MiscellaneousAvg0.7500.02098.9611.0390.9870.0790.14133.50241.1131.0000.00099.98999.98999.9890.9960.9880.987
Max2.9000.03099.8351.8470.9980.2250.32395.10893.9571.0000.00099.99899.99899.9981.0000.9980.999
Min0.0100.01098.1530.1650.9780.0010.0220.2006.3151.0000.00099.97999.97999.9790.9890.9780.968
Decision TreeAvg19.1500.04099.6350.3650.9960.0020.0300.8568.8471.0000.00099.99699.99699.9960.9990.9960.997
Max258.8300.18099.8680.6930.9980.0050.0441.88812.8891.0000.00099.99999.99999.9991.0000.9981.000
Min0.0300.00099.3070.1320.9920.0000.0190.1605.6481.0000.00099.99399.99399.9930.9970.9920.990
RulesAvg1.4900.02086.52813.4720.8350.0400.09717.10928.1230.9990.00199.87499.87499.8740.9310.8360.857
Max8.7900.05099.86881.3000.9980.2360.344100.000100.0001.0000.00799.99999.99999.9991.0000.9981.000
Min0.0000.00018.7010.1320.0000.0010.0200.2005.8610.9930.00099.25899.25899.2580.5000.0000.173
LazyAvg24.60022.38094.9735.0270.9380.0220.0649.51318.5471.0000.00099.95399.95399.9530.9980.9380.993
Max158.19074.39099.80231.8600.9980.1480.25462.51673.7491.0000.00399.99899.99899.9981.0000.9980.999
Min0.0000.03068.1400.1980.6090.0010.0240.2396.9180.9970.00099.70399.70399.7030.9910.6060.982
FunctionsAvg18.4200.43086.70213.2980.8370.0650.16627.68048.1780.9990.00199.87699.87699.8760.9330.8430.871
Max115.9504.47099.37373.4500.9920.2100.45888.857133.2701.0000.00799.99599.99599.9950.9990.9920.998
Min0.0300.01026.5500.6270.0970.0020.0410.75812.0180.9930.00099.29599.29599.2950.5480.2320.241
BayesAvg0.0300.02043.04156.9590.3470.1720.26572.82277.1120.9940.00699.44099.44099.4400.7110.3450.472
Max0.0700.07098.31889.1160.9800.2460.353104.155102.6271.0000.01099.98599.98599.9850.9990.9790.996
Min0.0100.00110.8841.6820.0010.0050.0632.00418.2510.9900.00099.02499.02499.0240.5000.0000.173
Performance MeasuresTraining Time (s)Testing Time (s)Model Accuracy (%)M.C.R. (%)Kappa StatisticsM.A.E. R.M.S.E.R.A.E. (%)R.R.S.E. (%)True Positive RateFalse Positive RatePrecision (%)Sensitivity (%)F-MeasureROC ValueMCC ValuePRC Area
Table 15. Performance outcome of supervised classifiers on NSL-KDD dataset.
PRC 0.690.240.750.24 0.750.510.870.840.690.840.730.710.760.840.58 0.910.940.970.970.90.830.91 0.430.970.980.97
MCC 0.6300.50 0.540.520.90.720.570.750.530.570.650.710.56 0.930.940.940.950.930.530.93 0.280.930.930.95
ROC 0.890.50.890.5 0.890.770.950.940.870.930.880.880.910.940.83 0.970.980.990.990.960.930.97 0.780.990.990.99
PRE 95.823188993.590848398.663624998.307638 98.39992298.556025399.674849499.197247798.893337799.370241898.977722999.201775599.357045999.527748899.2714257 99.631066999.6624399.682821999.791889299.783918498.774868799.8802779 89.991050199.018381999.28280299.6497204
FPR 0.041768110.064091520.013363750.01692362 0.016000780.014439750.003251510.008027520.011066620.006297580.010222770.007982250.006429540.004722510.00728574 0.003689330.00337570.003171780.002081110.002160820.012251310.00119722 0.10008950.009816180.007171980.0035028
RRSE 79.1399102.58693.1367103.92 81.884295.226945.80466.150175.590262.056491.819176.976672.404766.206189.427 36.637836.054636.065931.167238.748180.238637.5756 87.536839.218436.934128.0692
RAE 70.4239105.43357.8345106.223 61.327545.424710.509447.225259.844429.67856.374962.352653.754847.379286.3418 6.72417.17127.127910.87887.52168.82127.0727 76.42222.980719.50855.2622
RMSE 0.3090.40.3630.405 0.3190.3710.1790.2580.2950.2420.3580.30.2820.2580.349 0.1430.1410.1410.1220.1510.3130.147 0.3410.1530.1440.109
MAE 0.2140.320.1760.322 0.1860.1380.0320.1430.1820.090.1710.1890.1630.1440.262 0.020.0220.0220.0330.0230.2090.022 0.2320.070.0590.016
KV 0.6100.4520 0.5020.5360.8950.7080.5710.7430.5010.5640.6490.7060.543 0.9330.9340.9340.9440.9250.5340.929 0.3160.9220.9270.946
MCR 29.17681.51943.61381.519 37.18834.4677.974321.95831.85919.53938.81332.35126.49322.07134.618 5.1024.98874.98874.27065.706734.4675.3666 50.7185.93355.51784.1194
ACC 70.82418.48156.38718.481 62.81265.53392.02678.04268.14180.46161.18767.64973.50777.92965.382 94.89895.01195.01195.72994.29365.53394.633 49.28294.06794.48295.881
TT 0.0010.010.020.01 0.020.011.220.010.010.010.010.010.020.0010.01 0.90.560.5327.2667.2912.270.14 0.0010.010.010.02
Name of ClassifiersBayes GroupDMNBHMMNBSGMFunction-basedLDALLNRLSVMLRMLPMLPQDARBFRBFNSLRSMOLazy GroupsIB1IBKIBKLGKSTARLKNNLWLRLKNNRule-basedCRDTBLDTNBFURIA
PRC0.960.910.910.480.690.970.930.950.24 0.970.970.990.940.960.990.970.890.960.980.970.970.950.970.97 0.750.290.440.73
MCC0.960.940.930.460.740.970.950.960 0.950.940.950.930.970.970.960.840.960.960.950.950.960.920.94 0.80.180.180.59
ROC0.990.970.970.710.860.990.970.980.5 0.990.9910.980.9910.990.970.990.990.990.990.980.990.99 0.90.540.650.9
PRE99.827989499.830721599.841036798.674733599.4144399.924904999.904851799.934504298.5119678 98.001958998.470388399.63180399.472452299.931208499.938284399.816455699.393683599.861870399.899615499.899074699.921560299.628122799.640226199.9069906 95.942228392.166968394.45974999.2879644
FPR0.001720110.001692790.001589630.013252670.00585570.000750950.000951480.000654960.01488032 0.019980410.015296120.003681970.005275480.000687920.000617160.001835440.006063170.00138130.001003850.001009260.00078440.003718770.003597740.00093009 0.040577720.078330320.055402510.00712036
RRSE29.153636.228437.9705111.12874.4226.087632.769827.5715100 29.348831.034826.796535.125526.72623.006127.399351.334129.689628.12531.085228.990629.077137.218830.567 64.0822140.701101.66496.6
RAE7.11756.57467.222161.861327.74295.12615.37923.8067100 6.18568.60647.669.43884.165.35585.699830.90315.98759.79658.53336.46234.83812.30216.4675 20.570699.1673104.4398.8697
RMSE0.1140.1410.1480.4330.290.1020.1280.1070.39 0.1140.1210.1050.1370.1040.090.1070.20.1160.110.1210.1130.1130.1450.119 0.250.5490.3960.377
MAE0.0220.020.0220.1880.0840.0160.0160.0120.304 0.0190.0260.0230.0290.0130.0160.0170.0940.0180.030.0260.020.0150.0370.02 0.0620.3010.3170.3
KV0.9560.9340.9280.4010.720.9640.9460.9620 0.9490.9420.9520.9250.9640.9690.960.8230.9540.9560.9440.9530.9580.9170.941 0.7930.0710.0710.553
MCR3.36364.98875.4846.93921.0512.75894.08162.872374.339 3.85494.42183.62815.70672.72112.3813.061213.5683.51473.36364.23283.59033.21246.34924.4974 15.60975.24675.24635.11
ACC96.63695.01194.5253.06178.94997.24195.91896.86325.661 96.14595.57896.37294.29397.27997.61996.93986.43296.48596.63695.76796.4196.78893.65195.503 84.39224.75424.75464.89
TT0.010.070.190.010.0010.010.0010.20.001 0.0010.0010.010.130.0010.080.0010.010.0010.050.0010.0010.0010.010.1 0.040.010.010.01
Name of ClassifiersJRIPMODLEMNNGEOLMONERPARTRIDORROUGHSZERORDecision TreesBFTCDTFPAFTJ48J48CJ48GLADTLMTNBTREPTRFRTSCSFMiscellaneousCHIRPFLRHPVFI
Table 16. Performance outcome of supervised classifiers on ISCXIDS2012 dataset.
PRC 0.510.50.750.5 0.880.50.540.890.920.580.880.680.760.890.52 0.890.890.930.990.960.870.92 0.780.960.990.98
MCC 0.060−0.040 0.6900.250.710.810.180.70.310.540.710.08 0.850.850.850.950.940.670.89 0.690.90.940.96
ROC 0.510.50.790.5 0.90.50.560.90.930.540.890.690.780.90.53 0.920.920.940.990.970.880.94 0.830.970.990.98
PRE 99.372867499.429953399.531530399.6097789 99.824348299.497781999.55759299.843015899.906394899.561899.851631499.703460799.79189199.874805399.6205372 99.912974799.913878899.914764399.971820899.969961699.825436999.9532906 99.777601299.935844999.965154999.9754771
FPR 0.006271330.005700470.00468470.00390221 0.001756520.005022180.004424080.001569840.000936050.0043820.001483690.002965390.002081090.001251950.00379463 0.000870250.000861210.000852360.000281790.000300380.001745630.00046709 0.002223990.000641550.000348450.00024523
RRSE 99.969299.9786140.281101.097 71.0846140.098131.83469.000260.396196.349282.184597.095180.955468.5047137.865 55.324455.309155.322630.560134.621273.282447.6323 72.001541.365631.88827.8625
RAE 99.97199.98398.60399.675 50.17198.16386.92351.02634.65892.16233.98695.64964.47253.27695.059 15.30815.35515.3136.30025.994754.61411.347 52.16718.04111.9664.6704
RMSE 0.50.50.7020.506 0.3560.7010.6590.3450.3020.4820.4110.4860.4050.3430.69 0.2770.2770.2770.1530.1730.3670.238 0.360.2070.160.139
MAE 0.50.50.4930.499 0.2510.4910.4350.2550.1730.4610.170.4780.3220.2660.475 0.0770.0770.0770.0320.030.2730.057 0.2610.090.060.023
KV 0.0210−0.0050 0.65800.1160.680.8070.0660.6590.3060.5060.6870.06 0.8470.8470.8470.9460.940.6410.887 0.6560.8990.9440.958
MCR 49.78649.0949.35849.09 17.02449.0943.46915.8999.63645.93216.9734.74324.51815.57847.538 7.65527.65527.65522.67672.997917.8275.6745 17.0775.03212.78372.0878
ACC 50.21450.9150.64250.91 82.97650.9156.53184.10190.36454.06983.0365.25775.48284.42252.463 92.34592.34592.34597.32397.00282.17394.326 82.92394.96897.21697.912
TT 0.0010.0010.020.01 0.010.0011.860.0010.010.0010.0010.010.010.0010.001 0.430.420.4325.9629.727.60.01 0.0010.0010.010.001
Name of ClassifiersBayes GroupDMNBHMMNBSGMFunction-basedLDALLNRLSVMLRMLPMLPQDARBFRBFNSLRSMOLazy GroupsIB1IBKIBKLGKSTARLKNNLWLRLKNNRule-basedCRDTBLDTNBFURIA
PRC0.980.960.950.670.880.980.960.960.5 0.970.9810.970.9710.990.970.990.980.980.990.950.980.97 0.720.50.510.56
MCC0.950.940.940.50.840.960.950.940 0.950.940.960.960.960.970.960.910.950.90.950.960.940.950.95 0.57000.13
ROC0.990.970.970.720.920.990.970.970.5 0.980.9910.980.9810.990.980.990.980.990.990.970.990.98 0.770.50.520.57
PRE99.97126899.973193899.971232399.748858699.926385299.981508899.976713699.97645799.6049811 99.966417299.962535299.974256399.975191899.974180499.98745499.979956399.954254999.97476199.951308899.977876899.975609299.972817299.98048799.9818513 99.699174399.406191999.43869499.6167652
FPR0.000287320.000268060.000287680.002511410.000736150.000184910.000232860.000235430.00395019 0.000335830.000374650.000257440.000248080.00025820.000125460.000200440.000457450.000252390.000486910.000221230.000243910.000271830.000195130.00018149 0.003008260.005938080.005613060.00383235
RRSE30.865133.361835.5365105.70257.412828.263633.361833.7169100 28.8432.392624.85228.272228.892221.383626.202137.852229.020142.649131.125628.671835.536530.048629.3342 95.1522140.09899.820699.2345
RAE7.82925.56656.315855.87916.4855.67775.56655.6857100 6.06368.78877.15354.91064.17495.64365.21316.1356.50110.9267.80996.35656.31587.07334.8823 45.28198.16399.81398.97
RMSE0.1540.1670.1780.5290.2870.1410.1670.1690.5 0.1440.1620.1240.1410.1450.1070.1310.1890.1450.2130.1560.1430.1780.150.147 0.4760.7010.4990.496
MAE0.0390.0280.0320.2790.0820.0280.0280.0280.5 0.030.0440.0360.0250.0210.0280.0260.0810.0330.0550.0390.0320.0320.0350.024 0.2260.4910.4990.495
KV0.9460.9440.9370.4360.8350.9580.9440.9430 0.9510.9440.9570.9580.9580.9710.960.9120.9540.8940.9480.9550.9370.9520.954 0.545000.037
MCR2.67672.78373.158527.9448.24412.08782.78372.837350.91 2.46252.78372.14132.08782.08781.44541.98074.38972.30195.29982.62312.24843.15852.4092.3019 22.64549.0949.0948.983
ACC97.32397.21696.84272.05691.75697.91297.21697.05649.09 97.53897.21697.85997.91297.91298.55598.01995.6197.69894.797.37797.75296.84297.59197.698 77.35650.9150.9151.017
TT0.0010.010.040.0010.0010.0010.0010.160.01 0.0010.0010.0010.020.0010.040.0010.0010.0010.010.0010.0010.0010.0010.01 0.030.0010.0010.001
Name of ClassifiersJRIPMODLEMNNGEOLMONERPARTRIDORROUGHSZERORDecision TreesBFTCDTFPAFTJ48J48CJ48GLADTLMTNBTREPTRFRTSCSFMiscellaneousCHIRPFLRHPVFI
Table 17. Performance outcome of supervised classifiers on CICIDS2017 dataset.
PRC 0.550.1710.17 0.990.790.2410.7110.910.9810.98 1110.9910.981 0.35111
MCC 0.400.980 0.930.840.230.990.620.990.990.750.960.990.98 1110.9710.611 0.19111
ROC 0.840.510.5 10.910.5510.85110.97111 111110.991 0.78111
PRE 99.435179899.024263499.9848299.314295 99.942001799.862710299.294824799.993491499.752232199.991290499.994699499.848986699.974323999.994588399.9882371 99.997528699.997554799.997580399.977768299.997809999.703371699.9984293 99.258044899.996066599.998454699.9967141
FPR 0.00564820.009757370.00015180.00685705 0.000579980.00137290.007051750.00006510.002477680.00008710.0000530.001510130.000256760.00005410.00011763 0.00002470.00002450.00002420.000222320.00002190.002966280.0000157 0.007419550.00003930.00001550.0000329
RRSE 85.7682101.80118.2509102.627 34.589158.5609133.2712.017874.851513.941912.309865.86924.775612.235187.5349 7.47187.46837.470819.27687.471873.74876.9175 89.053810.27755.86127.8759
RAE 81.4361103.6942.0038104.155 9.482317.15788.85731.59346.57991.93360.758143.26996.46971.868586.5127 0.27930.40210.29682.58040.279362.51640.2394 79.26974.28540.37210.3516
RMSE 0.2950.350.0630.353 0.1190.2010.4580.0410.2570.0480.0420.2260.0850.0420.301 0.0260.0260.0260.0660.0260.2540.024 0.3060.0350.020.027
MAE 0.1920.2450.0050.246 0.0220.0410.210.0040.110.0050.0020.1020.0150.0040.204 7E-049E-047E-040.0067E-040.1486E-04 0.1870.019E-048E-04
KV 0.40900.980 0.9280.8280.0970.9920.6690.9880.9920.7770.9620.9920.982 0.9970.9970.9970.9730.9970.6090.998 0.2360.9960.9980.996
MCR 47.92289.1161.682189.116 5.936714.18273.450.692626.7810.95650.626618.3053.13320.69261.5172 0.23090.23090.23092.20980.230931.860.1979 61.6430.32980.13190.2968
ACC 52.07810.88498.31810.884 94.06385.81826.5599.30773.21999.04499.37381.69596.86799.30798.483 99.76999.76999.76997.7999.76968.1499.802 38.35899.6799.86899.703
TT 0.0010.0010.070.01 0.030.014.470.030.010.010.020.010.070.010.02 1.530.750.6374.3950.5628.760.03 0.0010.010.030.01
Name of ClassifiersBayes GroupDMNBHMMNBSGMFunction-basedLDALLNRLSVMLRMLPMLPQDARBFRBFNSLRSMOLazy GroupsIB1IBKIBKLGKSTARLKNNLWLRLKNNRule-basedCRDTBLDTNBFURIA
PRC10.9910.690.9710.9910.17 0.99111111110.9911111 10.970.981
MCC1110.730.981110 0.991111110.990.990.9911110.99 10.980.981
ROC1110.840.991110.5 111111111111111 10.9911
PRE99.99631399.99636399.998557799.758362299.984085799.998027499.997340199.998152999.3843856 99.992889599.995932599.996639999.996677499.998289599.998902699.99854599.995444799.993367399.993889899.997318599.997786899.996705599.997922299.9954358 99.99799199.979235399.980018699.9974836
FPR0.00003690.00003640.00001440.002416380.000159140.00001970.00002660.00001850.00615614 0.00007110.00004070.00003360.00003320.00001710.0000110.00001450.00004560.00006630.00006110.00002680.00002210.00003290.00002080.0000456 0.00002010.000207650.000199810.0000252
RRSE9.33089.78286.314882.092221.13337.47378.93057.4718100 12.88898.88727.0559.0756.30635.64817.41429.421212.235111.29628.88726.08717.97437.974311.5493 6.314820.943893.957443.2378
RAE0.5380.47880.199533.71552.23440.29640.3990.2793100 1.47920.56010.88760.59570.29920.15961.8881.59771.86850.63840.56010.51990.46420.46420.8583 0.19952.194595.107936.5055
RMSE0.0320.0340.0220.2820.0730.0260.0310.0260.344 0.0440.0310.0240.0310.0220.0190.0260.0320.0420.0390.0310.0210.0270.0270.04 0.0220.0720.3230.149
MAE0.0010.0015E-040.080.0057E-049E-047E-040.236 0.0040.0010.0020.0017E-044E-040.0050.0040.0040.0020.0010.0010.0010.0010.002 5E-040.0050.2250.086
KV0.9960.9950.9980.6710.9780.9970.9960.9970 0.9920.9960.9960.9960.9980.9980.9980.9940.9920.9940.9960.9970.9970.9970.993 0.9980.9780.9780.996
MCR0.36280.39580.164927.8691.8470.23090.32980.230981.3 0.69260.32980.29680.29680.16490.13190.16490.46170.69260.52770.32980.26390.26390.26390.5937 0.16491.8141.8470.3298
ACC99.63799.60499.83572.13198.15399.76999.6799.76918.701 99.30799.6799.70399.70399.83599.86899.83599.53899.30799.47299.6799.73699.73699.73699.406 99.83598.18698.15399.67
TT0.0010.020.050.040.0010.010.0010.020.001 0.010.0010.010.180.010.0010.120.010.010.0010.0010.090.0010.0010.1 0.020.010.010.03
Name of ClassifiersJRIPMODLEMNNGEOLMONERPARTRIDORROUGHSZERORDecision TreesBFTCDTFPAFTJ48J48CJ48GLADTLMTNBTREPTRFRTSCSFMiscellaneousCHIRPFLRHPVFI
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
