Abstract

Ensemble learning, as a method of improving the generalization ability of classifiers, is often used to improve model performance in the field of deep learning. However, most existing ensemble learning methods adopt voting fusion as the combination strategy. This strategy has difficulty mining effective information from the classifiers and cannot effectively reflect the relationships between them. Ensemble learning based on the evidential reasoning rule (ER rule) can effectively excavate the internal relationships among different classifiers and has a certain interpretability. However, the ER rule depends on the weight distribution of the different combination strategies, and the setting of the evidence weight affects the accuracy and stability of the model. Therefore, this paper proposes a new ensemble learning method based on multiple-fusion weighted evidential reasoning rules and constructs an ensemble learning framework for data fusion and decision mapping. The framework takes the evidence weight, belief degree, and feature data of each classifier as input and produces the integrated result as output. The weight of evidence is determined by the multiple fusion of the entropy weight method and the order relation method. Finally, the ensemble learning process is carried out by the ER algorithm. The proposed method is verified on multiple datasets. Experimental results show that the constructed model performs well and that the instability of single weighting is greatly alleviated while the ensemble effect is improved.

1. Introduction

Ensemble learning is an important branch of machine learning. It can reduce generalization error and improve classification accuracy by constructing multiple weak learners and combining them into one strong learner. Ensemble learning is widely used in image processing, intrusion detection, software product quality inspection, and many other fields, and with the rapid development of machine learning, it plays an increasingly important role.

At present, many scholars are studying ensemble learning. Existing combination methods include the voting method [1], averaging method [2], learning method [3], and D-S theory [4–6]. Breiman et al. [7] proposed the bagging algorithm, also called the bootstrap method. Bagging is a sampling method used to obtain the distribution and belief interval of statistics; simple majority voting is used as the combination strategy to integrate multiple models and obtain the final result. Yang et al. [8] proposed an ensemble learning method based on a support vector machine (SVM). This method combines random undersampling and synthetic minority oversampling to process the dataset and form multiple data subsets; then, the boosting algorithm is used to train a stronger SVM classifier, and the final result is obtained by voting. This method has good detection performance. Coscrato et al. [9] proposed the NN-stacking (NNS) method, an improved stacking approach that generalizes Breiman's approach by allowing the linear parameters to vary with the input features, enabling NNS to use different base models in different regions of the feature space. Daho et al. [10] proposed a random subspace algorithm based on random forest. The random subspace trains each classifier on a random subset of features instead of all features to reduce the correlation between classifiers; the final result is then obtained by majority voting or by combining prior probabilities.

In most ensemble learning methods that combine by voting, such as the averaging and fusion combination strategies, it is difficult to mine effective information from the classifiers, and neither the connections between classifiers nor the internal characteristics of each classifier can be reflected in the process of combining multiple classifiers, making it difficult to achieve an ideal result. This is like a person without a driver's license driving a Formula 1 car and failing to exploit the car's full performance. Therefore, it is necessary to establish a decision-making mechanism for multiple classifiers that can bring each classifier to its full potential.

In 2013, Yang and Xu [11] established the ER rule, which considers both the weight and the reliability of evidence. The ER rule is an extension of D-S theory; the study clearly distinguishes between the importance and reliability of evidence and shows that the ER rule can be used effectively for data fusion. However, when the ER rule is used for ensemble learning, the setting of the evidence weight has a great impact on the accuracy of the model. This paper demonstrates the problems in setting the weight of evidence and proposes a solution to them. Owing to its strong data fusion ability under uncertainty and its explicit treatment of evidence weight and evidence reliability, the ER rule has been widely used in many fields, such as safety assessment [12], life assessment [13], comprehensive decision-making, and fault diagnosis [14]. Evidential reasoning rules are a new direction in ensemble learning. The average voting method cannot reflect the characteristics of each classifier, and if the differences between base models are too large, it also cannot obtain a good result. Evidence theory is an important part of the field of intelligent decision-making; fusion based on the Dempster rule supports uncertainty reasoning and is highly flexible. However, the Dempster rule cannot resolve highly conflicting evidence and suffers from problems such as exponential explosion [15].

An ensemble learning model with an excellent decision-making mechanism can be built for multiple classifiers by using evidential reasoning rules [11, 16], which connect all classifiers internally and give full play to the effectiveness of each classifier. However, a model based on the ER rule places high requirements on the reliability of the evidence weights. Repeated experiments show that the evidence weight has a great influence on the final effect of ER-based ensemble learning; differences in the evidence weights can lead to a gap of more than 3% in the accuracy of the integrated results. The weights used in the literature [17–27] are specified by experts who actually evaluate industrial process indicators. However, this kind of artificial subjective weighting is strongly controlled by the assessor and easily biases the results. In addition, some researchers have recently improved the setting of evidence weights. Wu et al. [19] proposed popular calibration weighting methods, such as generalized regression and generalized exponential tilting, and verified their effectiveness. Combining the maximum deviation method, the coefficient of variation method, and the entropy weight method, an index-optimal adaptive weighting model was established [20]. The literature [21] proposes a new weighted majority voting ensemble (WMVE) method to assign different weights to classifiers [22]. Previous studies [23, 24] have demonstrated the validity and reliability of combined weighting. In [25], a combinatorial weighting method is adopted to improve the accuracy of an ensemble learning method based on evidential reasoning rules, but the stability of the combinatorial weighting method is not explained, and no perturbation-based fusion of the weights is performed. It can be said that ensuring the stability and reliability of the evidence weights is the primary task of ensemble learning methods based on evidential reasoning rules.
Considering the influence of the weight distribution on the ER rule, this paper optimizes the final weight from two aspects of the model data: volatility and importance. Three different methods are used to combine the entropy weight method and the order relation method, which allows the target weights to balance volatility and importance.

In summary, a new ensemble learning model is constructed based on the ER rule. The contributions of this paper are as follows. (1) In the ER rule, the weight of evidence is determined by the multiple fusion combination weighting method. This can uncover more effective information in the process of evidence combination, consider both the fluctuation factor and the importance factor of the evidence, and set a more reasonable and reliable evidence weight for the ER rule. (2) The multiple fusion combination weighting method is used to determine the evidence weight, improving its stability and reliability. The concept of weight difference is proposed, and an effective experimental analysis of weight difference is carried out. (3) Three different fusion and weight assignment strategies are proposed and experimentally analyzed.

The rest of this paper is organized as follows. Section 2 states the problems to be solved. Section 3 establishes three combination strategies to determine the weight and introduces two weighting methods: the entropy weight method and the order relation method. In Section 4, an ensemble learning model based on ER rules is constructed. Section 5 presents a case study to verify the effectiveness and stability of the multiple fusion weighting method. Section 6 gives the conclusions of this paper.

On the premise of improving the final effect of ensemble learning, this study focuses on ensemble learning with evidential reasoning rules and, in particular, on the instability of the results observed in a large number of single-weighting experiments, which leads to poor final ensemble results.

2. Problem Definition

Problem 1. Integrated learning strategy formulation.
A comprehensive study was conducted. Based on the result-processing of multiple deep learning intelligent cognitive models, the weight and belief degree of the evidence are obtained as inputs to the evidential reasoning model. The evidential reasoning model is used as the integration model to fuse multiple sets of evidence with different weights and belief degrees, and the final integration result is obtained. Assuming that a total of X classifiers are integrated, the process of the ER rule-based ensemble learning model is as follows: y_x = f(D, θ_x), x = 1, 2, …, X, where y_x represents the classification result of classifier x, f represents the classification process, D is the dataset, and θ_x is the parameter set in the classification process. After the classification results are obtained, the model integration result can be described as Y = g(y_1, y_2, …, y_X, η), where g is the integration process of the model, η is the parameter set in the integration process, and Y stands for the final ensemble learning result. To solve the above problems, this paper proposes an ensemble learning model based on multiple-fusion weighted ER rules, as shown in Figure 1. It consists of two parts, combinatorial weights and ER rules, which are detailed in Sections 3 and 4, respectively.

Problem 2. Parameter settings in the ensemble learning process.
The main body consists of two parts: a set of evidence weights w and belief degrees β.
Let T represent the number of datasets; each dataset needs to generate a weight in the ER-based ensemble rule, and w_t represents the weight of the current dataset. The calculation of the weight can depend on the data of the dataset itself or on the accuracy achieved on the dataset. To solve the problems of excessive dependence and instability of single weighting, the weights can be obtained as follows: w_obj = EW(DATA), w_sub = G1(ACC), w = C(w_obj, w_sub), where w_obj and w_sub represent the objective weight given by the entropy weight method and the subjective weight given by the order relation method, respectively. DATA represents the original neural network data, and ACC represents the accuracy of each original neural network model. w represents the final fusion weight, EW and G1 represent the entropy weight method and the order relation method, respectively, and C stands for combining the subjective and objective weights. In the ER-based ensemble learning process, the weight has a huge influence on the ensemble learning result, whereas the belief degree has a relatively small influence.

Problem 3. Evidence weight combination strategy formulation and how to improve the accuracy and stability.
A single weighting separates the subjective and objective weights, and in many cases there is a large gap between the weights obtained by the subjective and objective weighting methods. In this case, both subjective and objective roles are needed to make the overall weight more ideal. In Section 5, the concept of weight difference (the difference between the subjective weight and the objective weight) is used together with precision to reflect the relationship between the subjective-objective gap and the results of multiple-fusion ensemble learning. The multiple fusion strategy based on the entropy weight-order relation method is as follows.
The final fusion weights are obtained by perturbation fusion, matrix estimation fusion, and deviation maximization fusion, respectively. The effect and influence of these three combination strategies are analyzed and studied. They are described as follows: the first objectively modifies the prior subjective impression; the second and third are both subjective-objective fusions, with the former biased toward the subjective assessment and the latter a fusion operation on the two groups of weights. These three fusion strategies are introduced in detail in Section 3.

3. Multiple Integration Empowerment

The processing of ensemble data is a very important part of ensemble learning. The weight of each ensemble classifier is determined according to the characteristics of the data. The order relation method and the entropy weight method are used jointly to determine the weight of each integrator, which is more scientific, universal, and robust than a single weighting method.

The entropy weight method is an objective weighting method [26]. Entropy is a physical concept from thermodynamics and a measure of the degree of disorder of a system. The larger the entropy is, the more chaotic the system (the less information it carries); the smaller the entropy is, the more orderly the system (the more information it carries). In the entropy weight method, the larger the information entropy of an indicator is, the smaller its dispersion, the smaller its role in the evaluation, and the smaller its weight. Through the entropy weight method, we can determine the volatility of the integrated data: the higher the volatility of the integrated data is, the greater the weight. The entropy weight method deeply reflects the discriminative ability of an indicator, and determining weights with it is an objective approach whose credibility and accuracy are higher than those of subjective weights. However, it cannot account for interactions between indicators; for example, without the guidance of business experience regarding correlation and hierarchy, the weight may be distorted. It is also highly dependent on the samples, and the weight fluctuates to some extent as the modeling samples change.

The order relation method is selected to determine the subjective weight, and the integrated data problem is solved by fusing the subjective and objective weights. The order relation method determines the weight by an importance criterion. Compared with the objective weighting scheme, the subjective weighting scheme is more direct and intuitive but lacks the objective method's judgment of the internal mechanism of the data. Therefore, the purpose of the multiple fusion weights adopted in this paper is to combine the objectivity and the importance of the data to achieve a more balanced data weight.

Sections 3.1 and 3.2 introduce two basic weighting schemes. All fusion schemes are developed by these two basic weighting schemes.

3.1. The Use of the Entropy Weight Method to Calculate the Objective Weight

Step 1. The proportion p_ij of the value of evaluation index j for model i in the generated data is calculated: p_ij = x_ij / Σ_{i=1}^{n} x_ij.

Step 2. The entropy value e_j of indicator j is calculated: e_j = −(1/ln n) Σ_{i=1}^{n} p_ij ln p_ij (with p_ij ln p_ij taken as 0 when p_ij = 0).

Step 3. The entropy weight w_j of indicator j is calculated: w_j = (1 − e_j) / Σ_{j=1}^{m} (1 − e_j).
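The three steps above can be sketched as follows (a minimal Python illustration; the function name and data layout are ours, not from the paper):

```python
import math

def entropy_weights(data):
    """Entropy weight method: data[i][j] is the (positive) value of
    indicator j for sample i. Returns one weight per indicator."""
    n, m = len(data), len(data[0])
    # Step 1: proportion of each indicator value within its column
    col_sums = [sum(row[j] for row in data) for j in range(m)]
    p = [[row[j] / col_sums[j] for j in range(m)] for row in data]
    # Step 2: entropy of each indicator (0 * ln 0 treated as 0)
    k = 1.0 / math.log(n)
    e = []
    for j in range(m):
        s = sum(p[i][j] * math.log(p[i][j]) for i in range(n) if p[i][j] > 0)
        e.append(-k * s)
    # Step 3: normalize (1 - e_j) into weights
    d = [1.0 - ej for ej in e]
    total = sum(d)
    return [dj / total for dj in d]
```

Note that an indicator whose values are identical across all samples has entropy 1 and therefore receives (essentially) zero weight, matching the intuition that it carries no discriminative information.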

3.2. The Use of the Order Relation Method to Calculate the Subjective Weight
Step 1: the order relationship is determined. The influence of each criterion on the optimal goal is compared, and the order of importance between the indices at each level is determined: the more important the index is, the earlier it appears in the order relationship. The proportion in the optimal target is determined, that is, the order relationship between the criterion layer and the target layer, yielding a unique order relation x_1 ≻ x_2 ≻ … ≻ x_m.
Step 2: the ratio judgment of the relative importance between adjacent indicators is given. The ratio r_k is used to measure the importance of adjacent indicators in the order relation; the experts' rational judgment of the ratio of the importance of indicator x_{k−1} to x_k is r_k = w_{k−1}/w_k, k = m, m−1, …, 2.
Step 3: according to the order relation and the r_k values, the weights are calculated as follows: w_m = (1 + Σ_{k=2}^{m} Π_{i=k}^{m} r_i)^{−1} and w_{k−1} = r_k w_k, k = m, m−1, …, 2.
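Under the assumption that the indicators are already sorted from most to least important, the G1 calculation in Steps 2 and 3 can be sketched as (function and variable names are ours):

```python
def g1_weights(ratios):
    """Order relation (G1) method. ratios[k-2] is r_k = w_{k-1} / w_k for
    indicators sorted from most to least important (1-indexed).
    Returns the m weights in that sorted order."""
    m = len(ratios) + 1
    # w_m = 1 / (1 + sum over k of prod_{i=k..m} r_i)
    total, prod = 0.0, 1.0
    for r in reversed(ratios):      # accumulate products r_m, r_m*r_{m-1}, ...
        prod *= r
        total += prod
    w = [0.0] * m
    w[-1] = 1.0 / (1.0 + total)
    for k in range(m - 1, 0, -1):   # back-substitute: w_{k-1} = r_k * w_k
        w[k - 1] = ratios[k - 1] * w[k]
    return w
```

For example, with ratios [1.2, 1.0] over three indicators, the least important weight is 1/3.2 = 0.3125 and the most important is 1.2 × 0.3125 = 0.375.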

Through the subjective and objective multiple fusion weights, the weight with the dual characteristics of volatility and importance is obtained, and then the integration results with higher accuracy are obtained through the integration of the evidential reasoning rules.

3.3. Realization of Multiple Fusion and Empowerment
3.3.1. Research on the Fusion of Disturbance

This method first obtains the weight value by the entropy weight method. This value is then used to perturb the order relation method, objectively adjusting the subjectively assigned importance of the different models. Specifically, it is described as w = G1(pert(r, w_EW)), where w is the final weight of the order relation method, w_EW is the objective weight of the entropy weight method, the function pert perturbs the importance ratios of the order relation method using the weights obtained by the entropy weight method, and the function G1 computes the final weight of the order relation method from the perturbed ratios. For example, suppose the weights determined by the entropy weight method for three models are 0.4, 0.3, and 0.3, and the accuracies of the three models are all 80%. Without considering the objective entropy weights, the importance ratio of the order relation method would be 1.1; after perturbation, the ratios change to 1.2 and 1.
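One possible realization of this perturbation is sketched below. The additive adjustment of each importance ratio by the difference of the adjacent entropy weights is our assumption (the paper does not give the exact perturbation formula), and the G1 helper is inlined to keep the sketch self-contained:

```python
def g1_weights(ratios):
    """Order relation method: ratios[k-2] = w_{k-1} / w_k, indicators
    sorted from most to least important."""
    total, prod = 0.0, 1.0
    for r in reversed(ratios):
        prod *= r
        total += prod
    w = [0.0] * (len(ratios) + 1)
    w[-1] = 1.0 / (1.0 + total)
    for k in range(len(w) - 1, 0, -1):
        w[k - 1] = ratios[k - 1] * w[k]
    return w

def perturbed_g1_weights(base_ratios, ew, lam=1.0):
    """Perturbation fusion (hypothetical realization): nudge each subjective
    importance ratio by the difference of the adjacent entropy weights,
    clip below at 1.0 to keep the order relation valid, then rerun G1."""
    ratios = [max(r + lam * (ew[k] - ew[k + 1]), 1.0)
              for k, r in enumerate(base_ratios)]
    return g1_weights(ratios)
```

With entropy weights [0.4, 0.3, 0.3] and base ratios [1.1, 1.1] (the paper's example values), the first ratio is raised to 1.2 while the second is left at 1.1, so the most fluctuating model receives a visibly larger final weight.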

3.3.2. Research on the Matrix Estimation Method

The weights of the order relation method and the entropy weight method can be fused in direct proportion. According to the order relation analysis method, the subjective weight set of the indicators can be obtained, and the objective weight set can be determined simultaneously. Because of the difference between the subjective weight and the objective weight, a single weight is not necessarily reasonable. To obtain more reasonable weights and integrate subjectivity and objectivity, the difference between the combined weight and both the subjective and objective weights should be as small as possible. At the same time, for different evaluation indicators, the relative coefficients of the subjective and objective weights change with their relative importance. Combined with the basic idea of matrix estimation, the optimization model of the combined weight is as follows:

The importance coefficients of the subjective and objective weights of the indicators are as follows:

For each indicator, the smaller the deviation is, the better; the final optimization model can then be transformed into the following equation:

Multiple fusion weights that take both subjective and objective information into account can be obtained by solving equation (13).
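As a hedged illustration of the matrix estimation idea, the sketch below minimizes the importance-weighted squared distance to both weight sets; this particular closed form is our assumption and is not necessarily identical to the paper's equation (13):

```python
def matrix_estimation_fusion(w_sub, w_obj, alpha=0.5, beta=0.5):
    """Combine subjective and objective weights by minimizing
    sum_j [alpha*(w_j - w_sub_j)^2 + beta*(w_j - w_obj_j)^2].
    The coordinate-wise minimizer is the importance-weighted average,
    which already sums to 1 when both inputs do."""
    s = alpha + beta
    return [(alpha * ws + beta * wo) / s for ws, wo in zip(w_sub, w_obj)]
```

Because the objective is separable and quadratic, each combined weight lands between its subjective and objective counterparts, which is exactly the "as small a weight difference as possible" behavior described above.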

Based on this approach, the weight used in the evidential reasoning rules reflects both volatility and importance. After integration, the accuracy can be further improved, with strong fault tolerance.

3.3.3. Research on the Level Difference Maximization

The idea of determining weights by maximum deviation is as follows. Under a given attribute, if the attribute values of all decision schemes differ little, then the attribute plays a small role in ranking the schemes; in contrast, if the attribute values of the schemes differ greatly, the attribute plays an important role in ranking them. Therefore, the greater the deviation of an attribute's values across schemes, the greater the weight that should be assigned to it; the smaller the deviation, the smaller the weight. If there is no difference in the attribute values of all decision schemes under an attribute, then the attribute has no effect on the scheme ordering, and its weight is zero. The weights obtained by the order relation method and the entropy weight method are recalculated by maximizing the deviation to obtain the new weights.

Suppose the set of schemes is A = {a_1, …, a_n} and the set of attributes is G = {g_1, …, g_m}. Then x_ij is the attribute value of scheme a_i under attribute g_j, R = (r_ij) is the normalized result of the decision matrix X = (x_ij), and the weight vector of the attributes is w = (w_1, …, w_m) with Σ_{j=1}^{m} w_j = 1 and w_j ≥ 0. Thus, the comprehensive attribute value of scheme a_i can be calculated: z_i(w) = Σ_{j=1}^{m} r_ij w_j.

For attribute g_j, D_j(w) represents the total deviation between each decision scheme and all the other schemes. The weight vector w should be selected to maximize the total deviation over all attributes for all decision schemes, giving the deviation function D(w) = Σ_{j=1}^{m} Σ_{i=1}^{n} Σ_{k=1}^{n} |r_ij − r_kj| w_j.

The weight vector of the attributes can be calculated by solving the following single-objective optimization problem:

In this paper, the level difference maximization is described as

After the above steps, the subjective weight and objective weight of each classifier are used to obtain the combined weight by level difference maximization. Finally, it is checked whether the combined weight falls within the valid range; if it meets the condition, it is retained; otherwise, it is normalized.
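The standard maximum deviation weighting, which the level difference maximization builds on, can be sketched as follows (our formulation; an attribute that is constant across all schemes receives weight 0):

```python
def max_deviation_weights(matrix):
    """Maximum deviation method: matrix[i][j] is the normalized value of
    attribute j for scheme i. Attributes whose values differ more across
    schemes receive larger weights."""
    n, m = len(matrix), len(matrix[0])
    dev = []
    for j in range(m):
        col = [matrix[i][j] for i in range(n)]
        # total pairwise deviation of attribute j across all schemes
        dev.append(sum(abs(a - b) for a in col for b in col))
    total = sum(dev)
    if total == 0:
        return [1.0 / m] * m  # no attribute discriminates; fall back to uniform
    return [d / total for d in dev]
```

In this paper's setting, the "schemes" would be the subjective and objective weight vectors to be fused, so attributes (classifiers) on which the two weightings disagree most are emphasized.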

3.3.4. The Combination Strategies of the Three Different Weights Are as Follows

The weight of evidence is an important part of the ER rule, and the weighting method is the focus of this paper. Some representative weighting methods are selected and defined in this section; the subjective weight defined here is ultimately used in Section 4.2.3. The subjective weighting method is based on the decision-maker's subjective information; it can reflect the decision-maker's emphasis on different attributes and flexibly capture the importance of each decision attribute, but its flexibility and variability can make it overly subjective. The subjective weighting method chosen in this paper is the order relation method (G1). The objective weighting method obtains the weights from the original data through a series of operations and is supported by a strong mathematical basis.

4. Multi-Integrated Learning Based on Evidential Reasoning Rules

The ER rule is an intelligent decision mechanism with good performance. Through this decision mechanism, multiple groups of classifiers are combined to improve the classification effect. The specific process is shown in Figure 2.

A cognitive model comprising multiple intelligent models based on observations of the objective world is studied. One part performs intelligent identification on the subjective and objective data results, using the entropy weight method and order relation method to determine the weight of the evidence. The other part is generated from the statistical belief levels of the evidence; coupled with the ER rule, these produce an intelligent decision model [27] that makes smart decisions.

4.1. Generation of Multiple Intelligent Cognitive Models

The evolution of artificial intelligence is a process in which the range and type of simulation of human intelligence or cognitive activities constantly expand and the simulation ability constantly improves. This process is closely related to the philosophical views of cognition derived from or expressed by artificial intelligence of different paradigms. This is because when artificial intelligence seeks to "think like a human" or react in a way similar to a human, it must address how people think and react, that is, how cognition is conducted.

The model captures information from the external world, converts it into images, texts, numbers, and other effective data, and organizes them into datasets. It also produces an excellent and reliable intelligent cognitive model by repeatedly adjusting the parameters through iterative deep learning training and learning rate tuning.

4.2. Construction of the Intelligent Decision Model

In the digital age, due to the explosion of information and the multiplication of organizational complexity, the difficulty and uncertainty of decision-making have correspondingly multiplied. We need an intelligent decision model that can handle a large amount of data. The intelligent decision model adopted in this paper can not only fuse large-scale data but also achieve good decision performance.

The data generated by the intelligent cognition results are used as evidence input and are then effectively fused by the ER rule. The entropy weight method-order relation method combination provides the evidence weights, and the accuracy of each intelligent cognitive model provides the reliability, completing the intelligent decision.

4.2.1. Evidence Weight

Combining subjective and objective weighting reduces the reliance on expert knowledge and the instability caused by a single weighting method, making the intelligent decision model more reliable and stable and improving the accuracy to a certain degree. The entropy weight method uses information entropy to measure the volatility of the input evidence: the higher the volatility, the higher the reliability. It is generally believed that the more distinct or discrete an intelligent cognitive model's outputs on the data are, the better the reliability of the intelligent learning model. The order relation method distinguishes importance based on subjective cognition; simply speaking, the importance of an intelligent cognitive model can be determined by its accuracy. By combining the two weights, a more convincing evidence weight is obtained. The process is shown in Figure 3.

4.2.2. Reliability of Evidence

The reliability of evidence is a characteristic of evidence that reflects its ability to provide a correct assessment of or solution to a hypothesis. Based on this definition, when the ER rule is used for ensemble learning, each classifier in the ensemble is regarded as a separate piece of evidence, and the classification accuracy of each classifier on the dataset measures its ability to evaluate samples correctly. Therefore, through mathematical statistics, the probability that a classifier correctly classifies the dataset is calculated as its reliability.
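Since reliability here is simply the classifier's accuracy on the dataset, it can be computed directly (a trivial sketch; names are ours):

```python
def classifier_reliability(y_true, y_pred):
    """Reliability of a classifier treated as evidence: the fraction of
    samples it classifies correctly (its accuracy, per Section 4.2.2)."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)
```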

4.2.3. Evidence Fusion

Assuming that the data obtained from each classifier in the integration process are taken as completely independent pieces of evidence, there are C pieces of evidence in total. Each category is considered an assessment grade, and the probability with which a classifier assigns a sample to a category is taken as the belief degree corresponding to that assessment grade. The belief distribution of each piece of evidence can be expressed as e_i = {(H_n, β_{n,i}), n = 1, …, N; (Θ, β_{Θ,i})}, where H_n is the n-th evaluation grade, β_{n,i} is the belief degree with which evidence e_i supports grade H_n, Θ is the identification framework including all evaluation grades, and β_{Θ,i} is the belief degree assigned to the whole identification framework, that is, the global ignorance, where β_{n,i} ≥ 0 and Σ_{n=1}^{N} β_{n,i} + β_{Θ,i} = 1.

The weight and reliability are explained in Sections 4.2.1 and 4.2.2, respectively. The weighted belief distribution with reliability of evidence e_i is m_i = {(H_n, m̃_{n,i}), n = 1, …, N; (P(Θ), m̃_{P(Θ),i})}, where P(Θ) is the power set of Θ and m̃_{n,i} is the hybrid probability mass of evidence e_i at evaluation grade H_n, satisfying the following conditions: m̃_{n,i} = c_{rw,i} w_i β_{n,i} and m̃_{P(Θ),i} = c_{rw,i}(1 − r_i), where c_{rw,i} = 1/(1 + w_i − r_i) is the normalization coefficient, w_i and r_i are the weight and reliability of evidence e_i, and the probability mass of the empty set is zero.

Since each classifier is treated as a completely independent piece of evidence, the combined belief degree is obtained by recursively applying the ER combination formula to the C pieces of evidence, where β_n denotes the degree of support for hypothesis H_n given by the fusion result of the C completely independent pieces of evidence, with β_n ≥ 0 and Σ_{n=1}^{N} β_n + β_Θ = 1.

Assuming that the utility of evaluation grade H_n is u(H_n), the expected utility U of the final model is U = Σ_{n=1}^{N} β_n u(H_n).

The expected utility U is compared with the utilities of the evaluation grades; the grade H_n whose utility u(H_n) is closest to U is taken as the result category of the integration using evidential reasoning rules.
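The fusion steps of Section 4.2.3 can be sketched as below. This is a simplified ER-rule implementation restricted to singleton hypotheses (each classifier outputs class probabilities with no local ignorance), folding the evidence in recursively; the function and variable names are ours:

```python
def er_rule_fuse(beliefs, weights, reliabilities):
    """Fuse C pieces of evidence with the ER rule (singleton hypotheses only).
    beliefs[i][n]: belief of classifier i in class n (each row sums to 1);
    weights[i], reliabilities[i]: evidence weight and reliability in (0, 1].
    Note: fully conflicting certain evidence (disjoint supports with
    w = r = 1) is degenerate, as in Dempster's rule."""
    C, N = len(beliefs), len(beliefs[0])
    # first piece of evidence: weighted belief distribution with reliability
    w, r = weights[0], reliabilities[0]
    crw = 1.0 / (1.0 + w - r)              # normalization factor c_{rw,1}
    m = [crw * w * b for b in beliefs[0]]  # masses on the singleton grades
    m_pow = crw * (1.0 - r)                # residual mass on the power set P(Theta)
    for i in range(1, C):                  # fold in the remaining evidence
        w, r = weights[i], reliabilities[i]
        mi = [w * b for b in beliefs[i]]
        m_hat = [(1.0 - r) * m[n] + m_pow * mi[n] + m[n] * mi[n]
                 for n in range(N)]
        m_pow_hat = (1.0 - r) * m_pow
        total = sum(m_hat) + m_pow_hat     # renormalize after each combination
        m = [x / total for x in m_hat]
        m_pow = m_pow_hat / total
    s = sum(m)
    return [x / s for x in m]              # combined belief over the classes
```

A decision is then made by taking the class with the highest combined belief. When w = r = 1 for every piece of evidence, the rule reduces to Dempster's rule for Bayesian (singleton) evidence.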

5. Case Study

The data for this experiment came from five different public datasets from GitHub, Kaggle, and Aistidio, covering five classification tasks: weather, apple, large fish, flower, and porcelain. The ratio of the training set to the verification set was 9 : 1. Five different deep learning models were used to predict and classify each image. The specific experimental steps are as follows.

The dataset was imported into the different deep learning models, the intelligent cognitive models were optimized through many iterations, the cognitive results were obtained, and the probability of each picture belonging to each category was generated.

The results of intelligent cognition were standardized to form the original index matrix. The multiple fusion weight of the data was calculated by the entropy weight method and the order relation method and taken as the evidence weight of the evidential reasoning rules; the coefficient of variation method was added as a comparison group.

The accuracy of each model's intelligent cognitive results was used as the reliability in the evidential reasoning rules.

With the intelligent cognitive results as evidence, the ER rule is used to fuse the results of the intelligent cognitive models to form a decision model, and the computed accuracy is the intelligent decision result.

The deep learning models used in this experiment all adopt the ReLU activation function in the convolution layers and the softmax function in the fully connected output layer.

5.1. Comparison of ER Classifier and Vote Classifier

The ensemble learning method based on evidential reasoning rules analyzes each single classifier with evidence theory and then uses the evidence combination rules to synthesize the results of the classifiers into the final result. The voting rule simply votes on the results of multiple single classifiers and selects the result with the most votes. The ER-based method must conduct an evidence-theoretic analysis of each classifier's results and determine the combination rules when the classifiers are integrated, which generates some additional computational and decision burden but makes the integration process more reliable. The voting method needs no evidence-theoretic analysis and simply votes on the classifiers' results, which is relatively simple. The ER-based ensemble learning method is suitable for classification problems, especially under uncertainty and contradiction, and can make full use of the different characteristics of different classifiers to improve classification accuracy. The ER ensemble result is the average of the results of the five classifiers under the three different fusion weighting strategies, and the voting classifier result is the simple vote of the five classifiers. The results are shown in Table 1.
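For comparison, the voting baseline reduces to a one-line majority count (sketch):

```python
from collections import Counter

def majority_vote(predictions):
    """Simple voting fusion: return the label predicted by the most
    classifiers for one sample (ties broken by first occurrence)."""
    return Counter(predictions).most_common(1)[0][0]
```

Unlike the ER fusion, this discards each classifier's confidence, weight, and reliability, which is exactly the information the ER-based method exploits.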

As Table 1 shows, the ER-based ensemble learning method improves accuracy over the voting method because it performs evidence-theoretic analysis and models the internal data associations among the classifiers’ outputs during integration. The improvement is especially significant when the gap between classifiers is large.

5.2. Classification Experiment Based on Large Fish Recognition
5.2.1. Group 1

This group reports the results on the large fish dataset: EW denotes the entropy weight method, G1 the order relation method, and C.V the coefficient of variation method; & denotes perturbation fusion weighting, @ denotes fusion weighting by matrix estimation, and ∗ denotes fusion weighting by range maximization. The abbreviations D, M, I, E, and R refer to the five classifiers DenseNet121, MobileNetV2, InceptionV3, EfficientNet, and ResNet152V2, respectively. Five classifier combinations, DIE, DIR, IER, MIE, and MIR, were used as research objects. The results for the large fish dataset are shown below.
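Since the three fusion operators (&, @, ∗) are only named here, the sketch below uses two generic, hypothetical schemes for fusing a subjective (G1) weight vector with an objective (EW) weight vector, purely to illustrate the idea of combination weighting; it does not reproduce the paper’s perturbation, matrix estimation, or range maximization formulas, and the example weight vectors are invented.

```python
import numpy as np

def linear_fusion(w_subj, w_obj, alpha=0.5):
    """Convex combination of subjective and objective weight vectors;
    alpha sets the trust placed in the subjective side."""
    w = alpha * np.asarray(w_subj, dtype=float) \
        + (1 - alpha) * np.asarray(w_obj, dtype=float)
    return w / w.sum()

def multiplicative_fusion(w_subj, w_obj):
    """Normalized elementwise product, emphasizing criteria that both
    weighting methods agree are important."""
    w = np.asarray(w_subj, dtype=float) * np.asarray(w_obj, dtype=float)
    return w / w.sum()

w_g1 = [0.41, 0.34, 0.25]   # subjective (order relation) weights, illustrative
w_ew = [0.30, 0.45, 0.25]   # objective (entropy weight) weights, illustrative
print(linear_fusion(w_g1, w_ew))
print(multiplicative_fusion(w_g1, w_ew))
```

Either scheme keeps the fused vector normalized while pulling the evidence weights between the subjective and objective extremes, which is the effect the combination strategies in this paper aim for.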

As shown in Figure 4 and Table 2, for the fish dataset, the results can be summarized as follows. (1) Compared with the single weighting methods, the multiple fusion weighting method improves the precision of ensemble learning to a certain extent. (2) Compared with the better of the entropy weight method and the order relation method, the average increase is 0.125. (3) In most cases, each of the fusion weighting methods outperforms the single weighting methods. The entropy weight method and the order relation method measure the objective and subjective weights well, respectively, and there is a certain gap between the two; the multiple fusion weight distribution based on them therefore exploits both the fluctuation characteristics captured by the entropy weight method and the importance information captured by the order relation method, improving integration accuracy. (4) The experimental results ultimately strike a balance between importance and objectivity and produce good results.

The experimental results are best when the subjective and objective differences are moderate and there is no obvious overfitting or underfitting.

5.3. Classification Experiment Based on Weather Recognition
5.3.1. Group 2

This is a weather data set with the same labelling as the large fish data set.

As shown in Figure 5 and Table 3, it can be summarized as follows. (1) For the integration of the fusion of empowerment in general, the fusion method of empowerment gives the highest increase of 0.01, which shows almost no effect. (2) By observing the experimental data, it was found that when the weight of the entropy weight method and sequence data relations act, the weights obtained are almost unanimous, which indicates the importance of timeliness and objectivity intensity. (3) When the accuracy of the base learner is high, the objective weight and subjective weight are increasingly consistent, so the weight result obtained by multiple fusion weighting will not be better. (4) When the base learner obtains a high accuracy, it will affect the improvement of ensemble learning. Of course, this does not mean that more different properties of our fusion strategy will produce better results.

When the difference between the subjective and objective weights is small, fusion weighting naturally yields no improvement.

5.4. Classification Experiment Based on Apples
5.4.1. Group 3

This group is the apple data set.

As shown in Figure 6 and Table 4, the results can be summarized as follows. (1) The entropy weight method performs very poorly here: there is a huge deviation on MIR, whereas the order relation method stays within 1.7% accuracy of the single weighting and entropy weight results. The combination of the three fusion strategies nevertheless keeps the whole method at a high level of accuracy; although in some cases the increase is small or even slightly negative, the overall accuracy remains healthy. (2) The average integration accuracies of the three fusion weightings over the five model combinations were 0.8802, 0.8099, 0.8576, 0.8623, and 0.8207, representing changes of −0.26%, 0.17%, −0.38%, 0.42%, and 0.31% relative to the highest single-weighting value, and improvements of 1.44%, 0.17%, 1.39%, 1.45%, and 0.31% over the entropy weight method alone. (3) The weights generated by multiple fusion weighting are therefore very stable overall: even when one weighting scheme deviates badly, a high integration level is maintained. The large deviation likely arises from partial overfitting or underfitting, which is difficult to detect in a neural network and would severely damage the ensemble result under a single-property weighting method; in such cases, the combination weighting strategy greatly reduces the loss. From the three groups above, it is not difficult to see that the multiple fusion weighting strategy has strong stability and robustness when a single weighting property is distorted.

When the subjective and objective weights differ greatly and partial overfitting may be present, fusion weighting gives the model higher stability.

5.5. Classification Experiment Based on Flower Class
5.5.1. Group 4

This group uses the flower dataset.

As shown in Figure 7 and Table 5, for this dataset the volatility (objective) and importance (subjective) measures produce quite different weights; we call the gap between the two the weight difference. The larger the weight difference, the better the fusion effect and the greater the benefit of multiple fusion; the smaller the weight difference, the smaller the gain.

The difference between subjective weight and objective weight is moderate, and the experimental results are good after weight fusion.

5.6. Experimental Validity Analysis

To demonstrate the effectiveness of the combination weighting method in this paper, comparative studies are conducted in this subsection.

The fusion weighting method incorporates both the subjective intention of decision makers and the mathematical theory of objective weighting. To illustrate its validity, it is compared with the order relation analysis (G1) method and the entropy weight (EW) method. The accuracy of the three different fusion methods was compared with that of the control group, and the results of the comparative study are shown in Table 6.
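For reference, minimal sketches of the two control methods are given below, following their standard textbook formulations (the exact variants used in this paper may differ). The entropy weight method derives objective weights from the dispersion of the data, and the G1 order relation method derives subjective weights from expert-supplied importance ratios.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight (EW) method: criteria whose values are more
    dispersed across samples receive larger objective weights."""
    X = np.asarray(X, dtype=float)
    m, _ = X.shape
    P = X / X.sum(axis=0)                       # column-wise proportions
    safe = np.where(P > 0, P, 1.0)              # convention: 0 * ln 0 = 0
    e = -(P * np.log(safe)).sum(axis=0) / np.log(m)  # entropy per criterion
    d = 1.0 - e                                 # divergence degree
    return d / d.sum()

def g1_weights(ratios):
    """Order relation (G1) method: for criteria already sorted by importance,
    ratios[k] = w_k / w_{k+1} as judged by the expert; recover weights."""
    ratios = np.asarray(ratios, dtype=float)
    cumulative = [np.prod(ratios[k:]) for k in range(len(ratios))]
    w_last = 1.0 / (1.0 + sum(cumulative))      # least important criterion
    w = [w_last]
    for r in ratios[::-1]:
        w.append(w[-1] * r)                     # w_{k} = r_k * w_{k+1}
    return np.array(w[::-1])

print(entropy_weights([[0.9, 0.1], [0.8, 0.3], [0.1, 0.9]]))
print(g1_weights([1.2, 1.4]))   # three criteria, w1 > w2 > w3
```

Both functions return normalized weight vectors, so their outputs can be fed directly into any of the fusion strategies compared in Table 6.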

As shown in Table 6, blue indicates that the fusion method is higher than one of the control methods, green indicates that it is higher than both control methods, and red marks the best result for that classifier combination. Of the 60 fusion weights, only one in four was not higher than either control result, and the green and red fusion weightings together account for more than 70 percent. The overall result is improved by 0.1–0.2 percentage points, and the fusion weighting method can improve by 0.3–0.4 percentage points over the single weighting methods. Although the improvement falls short of the ideal, the integration result is limited by the overall accuracy of the models and the differences among the datasets; if learning models with larger differences were fused, the improvement would be more significant. In addition, the fusion weighting method is more scientific and reliable than single weighting, since it accounts for the multiple properties of the dataset so that both subjective and objective weights are reflected. The purpose of this paper is to show that multiple-fusion ensemble learning can optimize the integration results to a certain extent on different datasets and keep the final integration results at a high level. The results show that the fusion weighting method improves both the stability and the accuracy of the ensemble results to a certain extent.

The stability of the multiple fusion weighting method is shown in Figure 8, where the Y-axis is the accuracy ranking and the X-axis indexes the 20 groups of comparison experiments. The area enclosed by the three multifusion weighting methods is smaller than that of the single weighting methods, so the fusion weighting method is clearly more stable than single weighting.

5.7. Weight Difference Analysis

This subsection analyzes the effectiveness and stability of the three combination strategies, whose integration effect differs across settings. As shown in Figure 5, multiple fusion weighting benefits significantly when, first, neither the subjective nor the objective weights are distorted and, second, the difference between them is appropriate. As shown in Figure 6, when the subjective weights are close to the objective weights, fusion weighting has almost no effect; moreover, the higher the accuracy of the base classifiers, the less significant the ensemble gain. As shown in Figure 7, once a base classifier overfits or underfits, the difference between the subjective and objective weights becomes very large; in this case, the gain from multiple fusion weighting is limited, but its stability remains stronger than that of single weighting.

As shown in Figure 9, the X-axis represents the accuracy difference and the Y-axis the weight difference; EW&G1-low denotes the accuracy of the EW&G1 combination strategy minus the lower of the G1 and EW accuracies, and EW&G1-high denotes its accuracy minus the higher of the two. The larger the weight difference, the more significant the benefit of combination weighting, which is also needed to stabilize the weights and obtain a better weight index. Overall, the three combination strategies are more accurate in most cases than the lower of G1 and EW: about 70% of the combined weightings exceed either EW or G1, and they are also more stable than single weighting. The differences among the three combination strategies themselves are not obvious. Locally, three regimes appear. When the weight difference is 0–0.13, combination weighting loses its efficacy; this stage can be called basic subjective-objective consistency. When the weight difference is 0.13–0.3, the positive utility of combination weighting is best; this stage can be called subjective-objective complementarity. When the weight difference is 0.3–0.45, large fluctuations arise from the large subjective-objective divergence; multiple fusion weighting reduces the loss to a certain extent, but its effect varies markedly, being better than the worst single weighting in many cases yet worse than the best, so that overall it improves stability and avoids considerable damage.
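The three regimes can be expressed as a small helper. The thresholds (0.13, 0.3, 0.45) come from the text above, but the specific weight-difference metric, taken here as the maximum absolute componentwise difference, is our assumption, since the text does not define it precisely.

```python
import numpy as np

def weight_regime(w_subj, w_obj):
    """Classify the subjective/objective weight difference into the three
    regimes observed in Figure 9 (thresholds from the text; the max-abs
    difference metric is an assumption)."""
    diff = float(np.max(np.abs(np.asarray(w_subj) - np.asarray(w_obj))))
    if diff < 0.13:
        return diff, "basic consistency (fusion gains little)"
    if diff < 0.30:
        return diff, "complementary (fusion works best)"
    return diff, "large divergence (fusion mainly stabilizes)"

d, regime = weight_regime([0.41, 0.34, 0.25], [0.20, 0.55, 0.25])
print(round(d, 2), regime)  # 0.21 falls in the complementary regime
```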

5.8. The Experimental Conclusion

In the field of ensemble learning based on ER rules, the problems in ensemble learning include the following. On the one hand, the number of classifiers is small, so it is difficult to judge the quality of classifiers according to expert experience. On the other hand, the objective weighting method relies too much on the samples, and it is difficult to set reasonable evidence weights to improve the accuracy of ensemble learning. To solve these problems, a multiple fusion weighting method is proposed. This method considers the subjective judgment of expert experience and the objective information of sample data, overcomes the subjectivity or objectivity of the evidence weight to a certain extent, and makes the weight of evidence more reasonable. The multiple fusion weight strategy is more stable and accurate than the single weight strategy in most environments. On this basis, the weight of evidence is regularized to further improve the effect of the ER rule-based ensemble learning model.

6. Conclusion

There are three innovations in this paper. First, a new ensemble learning model based on evidential reasoning rules is proposed, which provides more intelligent decision-making than existing ensemble learning models. Second, three fusion strategies combining the order relation method and the entropy weight method are proposed; the different fusion strategies provide more selectivity and greatly improve the stability and reliability of the evidence weights, confirming the effectiveness of the approach. Finally, a large number of experiments reveal some of the reasons why evidence-rule-based integration loses stability, laying a foundation for further work on this problem. This paper does not study different types of base classifiers, nor does it discuss subclassification for integrating classifiers with very large differentiation. In addition, many strategies for combining fusion weights merit further research, such as game theory. Future work is as follows: (1) simplify the integration weighting scheme and improve its efficiency; (2) make fusion weighting more effective under various integration conditions and demonstrate its adaptability to most ensemble learning settings; (3) fuse weights of different properties.

Data Availability

The data used to support the study come from the image classification datasets of the Kaggle platform and Baidu AI Studio platform. For the source URL of the datasets, please visit: (1) Large scale Fish dataset: https://www.kaggle.com/crowww/a-large-scale-fish-dataset. (2) Weather dataset: https://aistudio.baidu.com/aistudio/datasetdetail/13165. (3) Flowers recognition dataset: https://www.kaggle.com/alxmamaev/flowers-recognition. (4) Fruits 360 dataset: https://www.kaggle.com/moltean/fruits.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

YiZhe Zhang and YunYi Zhang contributed equally to this work. Z.W. and K.T. conceptualized the study; Q.M. and G.Z. proposed the methodology; K.L. processed the software; YZ.Z. and YY.Z. validated the study; Z.W. and K.T. carried out the formal analysis; W.H. investigated the study; YZ.Z. curated the data; YZ.Z. wrote the original draft; YY.Z. and YZ.Z. reviewed and edited the manuscript; W.H. visualized the study; and K.T. and G.Z. supervised the study. All authors have read and agreed to the published version of the manuscript.

Acknowledgments

This work was supported in part by the Postdoctoral Science Foundation of China under Grant no. 2020M683736, in part by the Teaching Reform Project of Higher Education in Heilongjiang Province under Grant no. SJGY20210456, in part by the Natural Science Foundation of Heilongjiang Province of China under Grant no. LH2021F038, in part by the Haiyan Foundation of Harbin Medical University Cancer Hospital under Grant no. JJMS2021-28, and in part by the Graduate Academic Innovation Project of Harbin Normal University under Grant nos. HSDSSCX2022-17, HSDSSCX2022-18, and HSDSSCX2022-19.