Abstract

The numerous existing adaptive variants of differential evolution (DE) have improved the search ability of classic DE to a certain extent. Nevertheless, these variants do not achieve promising performance in solving black box problems with unknown features, mainly because their adaptive rules are designed according to their designers' cognition of the problem features. To enhance the ability of DE to optimize black box problems with unknown features, a differential evolution with autonomous selection of mutation strategies and control parameters (ASDE) is proposed in this paper, inspired by the autonomous decision-making mechanism of reinforcement learning. In ASDE, a historical experience archive with population features is utilized to preserve the accumulated historical experience of the combinations of mutation strategies and control parameters. Furthermore, the accumulated historical experience can be autonomously mapped into a rules repository, and the individuals can choose combinations of mutation strategies and control parameters according to those rules. Additionally, an updating and utilization mechanism of the historical experience is designed to ensure that the historical experience is accumulated effectively and utilized efficiently. Compared with several state-of-the-art intelligent algorithms on 15 functions of CEC2015, 28 functions of CEC2017, and parameter extraction problems of photovoltaic models, ASDE shows advantages in solution accuracy, convergence speed, and robustness when solving black box problems with unknown features.

1. Introduction

The differential evolution (DE) algorithm is an effective and efficient global search engine for complex optimization problems, first proposed by Storn and Price [1] to solve the Chebyshev polynomial fitting problem. Since DE is a simple and robust optimizer, it has been a hot topic in the field of intelligent optimization algorithms. In DE, a new candidate solution is generated by using the scaled difference vector between two distinct solutions randomly selected from the candidate solution set. Meanwhile, a one-to-one selection strategy is utilized to choose the better of the parent and offspring individuals to propagate to the next generation. In addition, the implementation of DE needs only a few lines of code in any standard programming language, which makes it easy to realize for engineers in different optimization fields. Over the past two decades, DE and other intelligent algorithms have achieved promising performance in solving numerous practical engineering problems, such as chemical engineering [2–5], electrical engineering [6–9], scheduling optimization [10, 11], image processing [12–15], and structural optimization of neural networks [16, 17]. The convergence of DE and its variants is proved in the literature [18].

The performance of DE is significantly influenced by its mutation strategies and control parameters, and numerous studies have been devoted to finding proper mutation strategies and/or control parameters.

In terms of adjusting control parameters, in the literature [19], F lies between 0.4 and 0.95, and F is initialized to 0.9. Moreover, CR should be between 0 and 0.2 when the objective problem is separable, and in [0.9, 1] otherwise. To improve the performance of DE, Mohamed A. K. and Mohamed A. W. [20] proposed an enhanced AGDE algorithm, named EAGDE for short. In EAGDE, the population size gradually decreases according to a nonlinear population size reduction scheme.

In terms of designing mutation strategies, Cui et al. [21] proposed an adaptive multiple-elites-guided composite DE with a shift mechanism, in which the better of two trial vectors, generated by two elites-guided trial vector generation strategies, respectively, is adopted to participate in the selection. In addition, a shift mechanism is used to avoid falling into a local trap. Mohamed A. W. and Mohamed A. K. [22] constructed a novel mutation strategy, which uses two randomly selected vectors from the top and bottom 100p% individuals of the population, while the third vector is chosen randomly from the middle individuals.

However, the suitable mutation strategies and control parameters are problem specific. To assign appropriate mutation strategies and/or control parameters to the population, numerous adaptive variants have been proposed to further improve the search ability of DE. These adaptive variants can be divided into adaptive control parameters, adaptive mutation strategies, and adaptive topological neighborhoods.

In terms of adaptive control parameters, Zhao et al. [23] proposed a DE with self-adaptive control parameters and strategy for unconstrained optimization problems (SLADE), in which the Cauchy distribution and normal distribution are utilized to adaptively update the mutation factor F and the crossover rate CR. Ghosh et al. [24] designed a very simple and flexible technique for adjusting F and CR online, in which the adaptation is based on the objective function values of different individuals in the population. A DE with a novel parameter adaptive learning mechanism was proposed by Meng et al. [25] to address the inconvenient selection of control parameters. Brest et al. [26] proposed a DE with auto-adaptive control parameters to alleviate stagnation in local optima. A self-adapting control parameter scheme for DE was proposed by Brest et al. [27], in which the control parameters are adjusted by means of evolution. A variant of DE (JADE) was proposed by Zhang and Sanderson [28], in which the values of F and CR are sampled from the Cauchy distribution and Gaussian distribution, respectively. In JADE, the control parameters of the probability distributions are updated by an adaptive strategy.

Tanabe and Fukunaga [29] proposed a success-history-based DE (SHADE), in which a historical memory archive is utilized to adaptively update the control parameters of the probability distributions, instead of the gradual update mechanism of JADE. Zhou et al. [10] proposed a new DE algorithm in which control parameter values are adaptively determined according to their historical performance.

Regarding adaptive mutation strategies, in the above literature [23], the mutation strategy assigned to each individual is adaptively chosen from a candidate strategy pool to match different stages of the evolutionary search according to its previous successful experience. Yu et al. [30] proposed a novel mutation DE for global optimization, in which an adaptive mutation is carried out for the current individual when individuals cluster around a local optimal solution. Mallipeddi and Suganthan [31] proposed a DE algorithm with an ensemble of parameters and mutation and crossover strategies (EPSDE). In EPSDE, a set of different mutation strategies and values of each control parameter coexist throughout the evolutionary search and compete to produce offspring solutions. In the literature [32], a new DE algorithm was proposed, which divides the population into three subpopulations according to the fitness value and applies three mutation strategies for exploration or exploitation. Qin et al. [33] proposed an adaptive DE algorithm (SaDE), in which several mutation strategies with different features are put into a candidate pool. After a certain interval, a mutation strategy is adaptively selected according to its success rate. Wu et al. [34] proposed a DE with a multipopulation-based ensemble of mutation strategies (MPEDE). In MPEDE, three mutation strategies are allocated to three subpopulations for the evolutionary search. After a period, the current best mutation strategy is adaptively determined according to the improvement of the fitness value and is assigned to the largest subpopulation in the next search period.

Wu et al. [35] designed a novel DE variant using an ensemble of multiple DE variants, named EDEV for short. In EDEV, the population is partitioned into four subpopulations, including three indicator subpopulations with smaller sizes and one reward subpopulation with a much larger size. Each constituent DE variant of EDEV owns an indicator subpopulation, and the reward subpopulation is assigned to the DE variant with the best performance in previous generations.

Regarding adaptive topological neighborhoods, Das et al. [36] proposed a DE using a neighborhood-based mutation operator (DEGL). In DEGL, the concept of a small neighborhood is defined on the index graph of individuals, and a weight factor is used to dynamically adjust the mutation operator based on local and global models, which effectively balances the exploration and exploitation abilities. Cai and Wang [37] proposed a DE with neighborhood and direction information (NGDE), which utilizes a neighborhood selection mechanism based on the population position order to guide individuals to search in a good direction. Epitropakis et al. [38] proposed enhancing DE with proximity-based mutation operators (ProDE), in which each individual constructs a topological neighborhood according to a proximity index and adaptively selects the search direction generated by the individual closest to it. Wang et al. [39, 40] proposed two DE algorithms based on an eigenvector crossover operation, which use the fitness value to construct the topological neighborhood of the population. Each individual of the population adaptively searches the feasible solution space by learning the data distribution features of the population topological neighborhood.

Additionally, to alleviate the time-consuming fitness evaluation of the algorithm, Zhan et al. [41] proposed a double-layered heterogeneous DE algorithm, in which different populations with various parameters and/or mutation strategies run concurrently and adaptively migrate to deliver robust solutions by making the best use of the performance differences among multiple populations; a set of cloud virtual machines runs in parallel to evaluate the fitness of the corresponding populations, reducing the computational costs by exploiting the cloud. In the literature [42], an adaptive distributed differential evolution, named ADDE for short, was proposed to decrease the sensitivity to strategies and parameters. In ADDE, three populations, called the exploration population, the exploitation population, and the balance population, are co-evolved concurrently using a master-slave multipopulation distributed framework. Different populations adaptively choose suitable mutation strategies according to their previous performance.

Although these adaptive methods have improved the performance of DE in solving nondifferentiable, nonlinear, and nonseparable problems to a certain extent, they do not address black box problems with unknown features, because their adaptive rules are designed according to the designers' cognition of the problem features. For example, in the literature [30], the current best individual is adjusted adaptively when the population falls into a local optimal trap. To determine whether the population converges prematurely, a parameter d is designed to reflect the convergence degree of the population. The population is considered to have fallen into the neighborhood of a local optimal solution when this parameter meets a predefined threshold that is set according to the designer's cognition of the problem. When the population falls into the local optimal trap, a mutation strategy with good exploration ability is assigned to the population to help it jump out of the trap. Thus, the predefined threshold determines whether the population is deemed to have fallen into a local optimal trap, which in turn determines the next search behavior of the population. In the literature [43], the whole population search process is divided into five search stages in chronological order, and the mutation strategy is assigned to the population according to the search stage features determined by the designer.

Therefore, to optimize black box problems with unknown features, a DE with autonomous selection of mutation strategies and control parameters (ASDE) is proposed in this paper. In ASDE, a historical experience archive with population features is utilized to store the accumulated historical experience of the combinations of mutation strategies and control parameters, and the accumulated historical experience can be autonomously mapped into a rules repository so that the population can choose appropriate combinations of mutation strategies and control parameters for the evolutionary search according to those rules. Furthermore, each individual of the population chooses a suitable combination of mutation strategies and control parameters to explore the feasible solution space according to the historical experience whose population features are most similar to those of the current population, and the improvement of the fitness values of the individuals is used to update the corresponding historical experience.

The main contributions of this paper can be summarized as follows:
(1) The historical experience archive with population features is proposed for storing accumulated historical experience so that individuals can autonomously select appropriate combinations of mutation strategies and control parameters according to the rules mapped from the accumulated historical experience.
(2) The updating and utilization mechanism of the historical experience ensures that the historical experience of mutation strategies and control parameters is accumulated effectively and utilized efficiently.

The rest of this paper is organized as follows. A brief review of canonical DE is given in Section 2. In Section 3, the DE with autonomous selection of mutation strategies and control parameters is described in detail. Section 4 presents the experimental results on the CEC2015 and CEC2017 benchmark functions. The results of parameter extraction of photovoltaic models are shown in Section 5, followed by the conclusion in Section 6.

2. A Brief Review of DE

The implementation of canonical DE, proposed by Storn and Price [1], is composed of initialization, mutation, crossover, and selection, which are briefly described as follows.

2.1. Initialization

The initial population of DE consists of a group of feasible solutions randomly sampled from the objective solution space, termed $X_0 = \{x_{i,0} = (x_{i,1,0}, x_{i,2,0}, \ldots, x_{i,D,0}) \mid i = 1, 2, \ldots, NP\}$, where NP denotes the population size and D represents the dimension of the problem. $x_{i,0}$ is the $i$th individual of the population, which is generated by

$$x_{i,j,0} = x_j^{L} + \text{rand}(0, 1)\cdot\left(x_j^{U} - x_j^{L}\right), \quad j = 1, 2, \ldots, D, \qquad (1)$$

where $x_j^{L}$ and $x_j^{U}$ are the lower and upper bounds of the feasible solution space, respectively, and rand(0, 1) is a random number sampled uniformly from the range [0, 1].

2.2. Mutation

The mutation operator simulates gene mutation in nature and is used to create a mutation vector $v_{i,t}$ for each target vector $x_{i,t}$ at the $t$th generation. $v_{i,t}$ is calculated by

$$v_{i,t} = x_{r_1,t} + F\cdot\left(x_{r_2,t} - x_{r_3,t}\right), \qquad (2)$$

where F is the scaling factor, whose value lies in the interval [0, 1], and $r_1, r_2, r_3 \in \{1, 2, \ldots, NP\}\setminus\{i\}$ are distinct indices.

2.3. Crossover

The mutation vector $v_{i,t}$ is used to generate the trial vector $u_{i,t}$ by performing the crossover described as

$$u_{i,j,t} = \begin{cases} v_{i,j,t}, & \text{if } \text{rand}(0,1) \le CR \text{ or } j = j_{\text{rand}}, \\ x_{i,j,t}, & \text{otherwise,} \end{cases} \qquad (3)$$

where $u_{i,j,t}$ denotes the $j$th dimension of the $i$th individual and $j_{\text{rand}}$ is an index randomly chosen from the set $\{1, 2, \ldots, D\}$. In addition, CR represents the crossover rate and lies in the interval [0, 1].

2.4. Selection

To make the better individuals among the trial vectors and target vectors survive to the next generation, the selection is performed as formula (4); minimization problems are considered in this paper:

$$x_{i,t+1} = \begin{cases} u_{i,t}, & \text{if } f(u_{i,t}) \le f(x_{i,t}), \\ x_{i,t}, & \text{otherwise,} \end{cases} \qquad (4)$$

where $f(\cdot)$ denotes the fitness value of a vector.

Mutation, crossover, and selection will be repeated until the predefined termination criterion is met.
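For readers who prefer code to formulas, the following minimal NumPy sketch implements the four canonical DE operators described above (initialization, DE/rand/1 mutation, binomial crossover, and one-to-one selection). The function name, bounds, and parameter values are illustrative choices, not part of the original paper.

```python
import numpy as np

def canonical_de(fitness, lower, upper, NP=50, F=0.5, CR=0.9, max_gen=1000, seed=0):
    """Minimal DE/rand/1/bin loop for a minimization problem."""
    rng = np.random.default_rng(seed)
    D = len(lower)
    # Initialization: formula (1), uniform sampling inside the bounds.
    pop = lower + rng.random((NP, D)) * (upper - lower)
    fit = np.array([fitness(x) for x in pop])
    for _ in range(max_gen):
        for i in range(NP):
            # Mutation: formula (2), three distinct indices different from i.
            r1, r2, r3 = rng.choice([k for k in range(NP) if k != i], 3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])
            # Crossover: formula (3), binomial with a guaranteed dimension j_rand.
            j_rand = rng.integers(D)
            mask = rng.random(D) <= CR
            mask[j_rand] = True
            u = np.clip(np.where(mask, v, pop[i]), lower, upper)
            # Selection: formula (4), one-to-one greedy replacement.
            fu = fitness(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = np.argmin(fit)
    return pop[best], fit[best]

# Example usage on the sphere function.
if __name__ == "__main__":
    D = 10
    x_best, f_best = canonical_de(lambda x: float(np.sum(x ** 2)),
                                  lower=np.full(D, -5.0), upper=np.full(D, 5.0))
    print(f_best)
```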

3. The Proposed Algorithm

In this section, ASDE is described in detail in terms of the historical experience archive, the updating and utilization of the historical experience, the mutation strategies and control parameters used, and the architecture of the proposed algorithm.

3.1. The Historical Experience Archive

As shown in Figure 1, the historical experience archive with H entries consists of population features (σ_f, σ_x, and g_s) and the historical experience E, where σ_f denotes the standard deviation of the fitness values of the population and σ_x is the sum of the standard deviations of each dimension of the individuals. σ_f and σ_x are calculated by formulas (5) and (6), respectively.

Meanwhile, g_s stands for the number of consecutive generations in which the population has stagnated, updated according to formula (7), where stagnation means that the optimum solution is not improved during the population evolution:

$$\sigma_f = \sqrt{\frac{1}{NP}\sum_{i=1}^{NP}\left(f(x_i) - \bar{f}\right)^2}, \qquad (5)$$

$$\sigma_x = \sum_{j=1}^{D}\sqrt{\frac{1}{NP}\sum_{i=1}^{NP}\left(x_{i,j} - \bar{x}_j\right)^2}, \qquad (6)$$

$$g_{s,t} = \begin{cases} g_{s,t-1} + 1, & \text{if the optimum solution is not improved at generation } t, \\ 0, & \text{otherwise,} \end{cases} \qquad (7)$$

where $\bar{f}$ is the mean fitness value of the population and $\bar{x}_j$ denotes the mean value of the $j$th dimension over all individuals.

σ_f indicates the differences among the fitness values of all individuals in the population, σ_x reflects the diversity of the population, and g_s indicates whether the population has fallen into a local optimal trap. Thus, the combination of σ_f, σ_x, and g_s can describe the state of the current population well.
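A minimal sketch of how the three population features could be computed, following the verbal definitions above (standard deviation of the fitness values, summed per-dimension standard deviation, and a stagnation counter); the function and variable names are ours, not from the paper.

```python
import numpy as np

def population_features(pop, fit, best_so_far, stagnation):
    """Return the feature vector (sigma_f, sigma_x, g_s) and the updated counter.

    pop: (NP, D) array of individuals, fit: (NP,) array of fitness values,
    best_so_far: best fitness seen before this generation,
    stagnation: consecutive generations without improvement so far.
    """
    sigma_f = float(np.std(fit))                  # spread of fitness values, cf. formula (5)
    sigma_x = float(np.sum(np.std(pop, axis=0)))  # summed per-dimension spread, cf. formula (6)
    # Formula (7): reset the counter when the best solution improves, otherwise increment it.
    stagnation = 0 if np.min(fit) < best_so_far else stagnation + 1
    return np.array([sigma_f, sigma_x, float(stagnation)]), stagnation
```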

In addition, the historical experience E is the vector shown in Figure 2, composed of the accumulated rewards of the combinations of mutation strategies and control parameters, where the reward is the average nonnegative improvement of the fitness value of the individuals that used the corresponding combination in the evolutionary search.

3.2. The Updating and Utilization Mechanism of the Historical Experience

To assign the most appropriate historical experience to the current population, the updating and utilization mechanism of the historical experience is employed, so that each individual of the current population can use a proper combination of mutation strategies and control parameters according to the rules mapped from the assigned historical experience. The core operator of this mechanism is the population feature similarity calculation operator, which calculates the population feature similarity between the population features PF_t = (σ_f,t, σ_x,t, g_s,t) of the current population and the population features (σ_f,k, σ_x,k, g_s,k) of the records of the historical experience archive. The population feature similarity calculation operator is described as formula (8), in which a combination of the Mahalanobis distance and the Euclidean distance serves as the population feature calculator. The Mahalanobis distance is an effective method for calculating the similarity of two unknown sample sets, because it is not affected by the unit of each dimension, so that each dimension of the records of the historical experience archive can be treated equally. However, the Mahalanobis distance has a necessary condition: the covariance matrix of the population feature sets of the historical experience archive must exist. Thus, a combination of the Mahalanobis distance and the Euclidean distance is used to calculate the population feature similarity, where MD(·, ·) denotes the Mahalanobis function that calculates the Mahalanobis distance between PF_t and the archived population features.
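The exact form of formula (8) is not reproduced here, but the description above suggests a measure that uses the Mahalanobis distance when the covariance matrix of the archived feature vectors is invertible and falls back to the Euclidean distance otherwise. The sketch below implements that reading with NumPy; it is an interpretation, not the paper's verbatim formula, and infinite entries from freshly initialized archive records should be handled before calling it.

```python
import numpy as np

def feature_similarity(pf_current, archive_features):
    """Distance between the current population features (3,) and each archived
    feature vector (H, 3); smaller values mean higher similarity."""
    diffs = archive_features - pf_current
    cov = np.cov(archive_features, rowvar=False)
    try:
        inv_cov = np.linalg.inv(cov)          # Mahalanobis distance if the covariance exists
        return np.sqrt(np.einsum("ij,jk,ik->i", diffs, inv_cov, diffs))
    except np.linalg.LinAlgError:
        return np.linalg.norm(diffs, axis=1)  # Euclidean fallback otherwise
```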

After the population feature similarity calculation operator is executed, the historical experience E_I with the highest population feature similarity (the smaller the distance value, the higher the population feature similarity) is assigned to the current population. The ε-greedy strategy is then utilized to assign a combination of mutation strategies and control parameters to each individual of the current population according to the assigned historical experience: the combination corresponding to the maximum value of the assigned historical experience is selected with probability (1 − ε), and a combination is selected at random with probability ε.
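A small sketch of the ε-greedy rule described above, with illustrative names of our own: with probability 1 − ε the individual takes the combination index with the largest accumulated reward in the assigned experience vector, and with probability ε it takes a random combination.

```python
import numpy as np

def epsilon_greedy(experience, epsilon=0.4, rng=None):
    """Pick one of the (strategy, F-interval, CR-interval) combinations.

    experience: 1-D vector of accumulated rewards, one entry per combination.
    Returns a 1-based combination index, as used in Algorithm 1.
    """
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() < epsilon:
        return int(rng.integers(len(experience))) + 1  # explore: random combination
    return int(np.argmax(experience)) + 1              # exploit: best combination so far
```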

After the population performs one evolutionary search, the historical experience archive is updated, in which the oldest record of the historical experience archive is replaced by the new historical experience calculated by formula (9), where λ is the cumulative degree factor, a constant in the interval [0, 1], and mean(·) denotes the arithmetic mean operation. R_t is a vector with the same dimension as E, consisting of the average nonnegative improvements of the fitness value of the individuals that used the corresponding combinations in the evolutionary search.
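Since formula (9) is not reproduced in the text, the sketch below shows one plausible form of the archive update under the stated assumption that the assigned experience is weighted by λ and the mean nonnegative improvements are added to it; the blending rule, function name, and data layout are our assumptions, not the paper's verbatim formula.

```python
import numpy as np

def update_archive(archive_features, archive_experience, pos,
                   pf_current, assigned_experience, improvements, lam=0.3):
    """Replace the oldest archive record (index pos) with the new experience.

    improvements: list of lists; improvements[j] collects the nonnegative fitness
    gains of individuals that used combination j (0-based here) this generation.
    NOTE: the blending below (lam * old + mean of new rewards) is an assumption;
    the paper's exact formula (9) is not reproduced in the text.
    """
    rewards = np.array([np.mean(g) if len(g) > 0 else 0.0 for g in improvements])
    archive_features[pos] = pf_current
    archive_experience[pos] = lam * assigned_experience + rewards
    return (pos + 1) % len(archive_features)  # advance the replacement pointer cyclically
```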

3.3. The Used Mutation Strategies and Control Parameters

Three well-known mutation strategies, described as formulas (2), (10), and (11), are utilized, which are also used for the evolutionary search in EPSDE. DE/rand/1 (formula (2)) is slower but more robust than strategies that rely on the best-so-far vector. DE/best/2 is better than DE/best/1 due to its ability to improve diversity by producing more trial vectors. DE/current-to-rand/1, being a rotation-invariant strategy without crossover, can solve rotated problems better than the other strategies. Thus, these three mutation strategies are used in ASDE.

DE/best/2 [44]:

$$v_{i,t} = x_{\text{best},t} + F\cdot\left(x_{r_1,t} - x_{r_2,t}\right) + F\cdot\left(x_{r_3,t} - x_{r_4,t}\right). \qquad (10)$$

DE/current-to-rand/1 [45]:

$$u_{i,t} = x_{i,t} + K\cdot\left(x_{r_1,t} - x_{i,t}\right) + F\cdot\left(x_{r_2,t} - x_{r_3,t}\right), \qquad (11)$$

where $x_{\text{best},t}$ is the best solution of the current population and K is a random number chosen from the interval [0, 1]. Additionally, in the DE/current-to-rand/1 strategy, the crossover operator is not performed.

The feasible parameter space of F and CR is equally divided into 5 intervals each, i.e., [0, 0.2], [0.2, 0.4], [0.4, 0.6], [0.6, 0.8], and [0.8, 1]. When a specific interval is selected to set a parameter's value, the exact value of the parameter is chosen uniformly at random from within that interval.
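Putting the three strategies and the 5 × 5 parameter intervals together gives 75 combinations. The decoding below mirrors the index arithmetic used later in lines (15)–(17) of Algorithm 1; the function name and the ordering of strategies 1–3 are our own assumptions.

```python
import numpy as np

def decode_combination(a, rng=None):
    """Map a 1-based combination index a in {1, ..., 75} to (strategy, F, CR)."""
    rng = np.random.default_rng() if rng is None else rng
    # Assumed ordering: 1 = DE/rand/1, 2 = DE/best/2, 3 = DE/current-to-rand/1.
    strategy = (a - 1) // 25 + 1
    F = ((a - 1) % 25) // 5 * 0.2 + rng.random() * 0.2   # F drawn inside its 0.2-wide interval
    CR = (a - 1) % 5 * 0.2 + rng.random() * 0.2          # CR drawn inside its 0.2-wide interval
    return strategy, F, CR
```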

3.4. Algorithm Architecture

As shown in Figure 3, the algorithm starts with the initialized population and historical experience archive, in which each individual of the initialized population is sampled uniformly at random from the feasible solution space, and the population features and the historical experience of each record of the initialized historical experience archive are initialized to infinity and to zero vectors, respectively.

Before the population evolution search, the historical experience with highest population feature similarity is assigned to the current population by calculating the population feature similarity between population features of current population and population features of the historical experience archive according to formula (8).

Then, each individual of the population utilizes the ε-greedy strategy to select a combination of mutation strategies and control parameters according to the assigned historical experience and uses the assigned combination to perform mutation, crossover, and selection.

After all individuals of the population have performed the evolutionary search, the average nonnegative improvements of the fitness values of the individuals are utilized to update the assigned historical experience, and the updated historical experience and the population features are used to update the historical experience archive.

The above process will be repeated until the termination condition is met. The pseudocode of ASDE is shown in Algorithm 1.

(1)Begin
(2) All σ_f, σ_x, and g_s values of the historical experience archive are set to infinity, and all historical
(3) experience vectors (E) of the historical experience archive are set to 0;
(4) pos = 1;
(5) Initialize a uniformly random population and calculate the fitness values of the population;
(6) Set t = 0;
(7)While the termination condition is not met
(8)  t = t + 1;
(9)  Calculate the population features PF_t by using formulas (5)–(7);
(10)  Calculate the population feature similarity sim by using formula (8);
(11)  [∼, I] = min(sim);
(12)  E_t = E_I, where E_I is the Ith record of the historical experience archive;
(13)  for i = 1 to NP
(14)   a_i is selected with the ε-greedy strategy according to E_t, a_i ∈ {1, 2, …, 75};
(15)   st_i = floor((a_i − 1)/25) + 1, where st_i denotes the mutation strategy used by the ith individual;
(16)   F_i = floor(mod(a_i − 1, 25)/5) × 0.2 + rand(0, 1) × 0.2;
(17)   CR_i = mod(a_i − 1, 5) × 0.2 + rand(0, 1) × 0.2;
(18)   Mutation, crossover, and generation of the trial vector u_i using st_i, F_i, and CR_i;
(19)  end for
(20)  for i = 1 to NP
(21)   if f(u_i) < f(x_i)
(22)    R_t(a_i) = [R_t(a_i), f(x_i) − f(u_i)];
(23)    x_i = u_i;
(24)   end if
(25)  end for
(26)  Update E_t by using formula (9);
(27)  Update the posth record of the historical experience archive by using PF_t and E_t;
(28)  pos = pos + 1;
(29)  if pos > H
(30)   pos = 1;
(31)  end if
(32)end while
(33)End

4. The Experiment Results of 15 Benchmark Functions of CEC2015

In this section, firstly, ASDE is used to optimize the 15 test functions of CEC2015 [46] that are composed of 2 unimodal functions (F1 and F2), 3 multimodal functions (F3–F5), 3 hybrid functions (F6–F8), and 7 composition functions (F9–F15), compared with some state-of-the-art algorithms (DE [1], EPSDE [31], PSO [47], SLPSO [48], MFO [49], AMFO [50], and RLDE [51]).

Furthermore, 28 benchmark functions of CEC2017 [52] are used to verify the performance of ASDE, which consist of 1 unimodal function (F1), 7 simple multimodal functions (F3–F9), 10 hybrid functions (F10–F19), and 10 composition functions (F20–F29). It is noteworthy that F2 of CEC2017 is not used because it is an unstable problem. Several state-of-the-art algorithms participate in the comparison, such as DE [1], EPSDE [31], PSO [47], SLPSO [48], MFO [49], AMFO [50], RLDE [51], jSO [53], LSHADE-SPACMA [54], and EB-LSHADE [55].

The common parameters of all algorithms are set as follows. The population size is set to 100, and the maximum number of function evaluations is 10000·D, where D denotes the dimension of the solved problem. In CEC2015, the dimension D is set to 30. The dimensions D are 10, 30, 50, and 100 for CEC2017. Meanwhile, in ASDE, the size of the historical experience archive is the same as the population size, and ε is equal to 0.4. The cumulative degree factor λ is set to 0.3 according to its sensitivity analysis in Section 4.3. In addition, the other control parameters of the compared algorithms follow their original papers.

In this experiment, the average value and standard deviation of the function error value f(x_best) − f(x*), obtained by each algorithm over 51 independent runs, are recorded to verify the performance of the algorithms, where x_best is the best solution obtained by the algorithm in a run and x* denotes the theoretical global optimum of the benchmark function. The maximum number of function evaluations (FES) is set to 10000·D for both CEC2015 and CEC2017. Wilcoxon's rank-sum test at a 5% significance level is utilized to generate statistically reliable results for CEC2015, and the Friedman test is used for CEC2017. According to the Wilcoxon signed-rank test, R+ is the sum of ranks for the functions on which ASDE outperforms the compared algorithm, and R− is the sum of ranks for the opposite. Larger ranks indicate a larger performance discrepancy.
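A sketch of how the reported statistics could be reproduced, assuming the per-run function error values are available as arrays; SciPy's rank-sum test is used here as a stand-in for the Wilcoxon tests mentioned above, and the function and argument names are illustrative.

```python
import numpy as np
from scipy.stats import ranksums

def compare_runs(errors_asde, errors_other, alpha=0.05):
    """errors_*: per-run values of f(x_best) - f(x*) for one benchmark function."""
    stat, p_value = ranksums(errors_asde, errors_other)
    significant = p_value < alpha  # significant difference at the 5% level
    return (np.mean(errors_asde), np.std(errors_asde),
            np.mean(errors_other), np.std(errors_other),
            p_value, significant)
```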

4.1. Comparisons on Solution Accuracy

In each row of Table 1, the mean values over 25 independent runs are shown in the first line, and the standard deviations are presented in the second line. The p value and H value of the nonparametric statistical test (Wilcoxon's rank-sum test) with a significance level α = 0.05 are given in the third and fourth lines. The symbol 'ǂ' is appended to the mean value produced by an algorithm that is significantly worse than ASDE. If ASDE is worse than another algorithm, a 'ξ' is appended to the mean value of the corresponding algorithm. The symbol '∼' indicates that there is no significant difference between ASDE and the compared algorithm. The last row of the table summarizes the total numbers of 'ǂ', 'ξ', and '∼'. Additionally, the best solution is highlighted in bold.

It is clearly seen from Table 1 that ASDE achieves better performance than EPSDE on 7 functions, composed of 1 multimodal function (F5), 3 hybrid functions (F6–F8), and 3 composition functions (F11–F13), which indicates that the proposed historical experience archive with population features and the updating and utilization mechanism of the historical experience can provide better combinations of mutation strategies and control parameters than EPSDE. ASDE is worse than EPSDE on one function (F14), which may be because the ε-greedy strategy (ε = 0.4) slightly weakens the exploitation ability of the population. In addition, ASDE is similar to EPSDE on 7 functions (F1–F4, F9, F10, and F15).

ASDE obtains the best performance on 5 functions (F3, F7, F11, F13, and F15), which shows that ASDE is promising in solving complex problems. EPSDE is the best on 3 functions (F1–F3), and DE achieves the best performance on 5 functions (F6, F8, F10, F14, and F15). Meanwhile, PSO is the best only on function F3, and SLPSO outperforms the other compared algorithms on 5 functions (F4, F5, F9, F12, and F15). Additionally, MFO gains the best performance on function F15.

Table 2 shows the average ranks of the compared algorithms on CEC2017 according to the Friedman test. The best ranks are shown in bold, and the second-best ranks are underlined. From Table 2, we can see that the p values generated by the Friedman test for all dimensions are less than 0.05. Therefore, it can be concluded that there is a significant difference between the performances of the algorithms.

It can be clearly seen from Table 2 that ASDE is ranked first for 10 dimensions and ranked second for 30 and 100 dimensions. Regarding mean ranking, LSHADE-SPACMA gains the best ranking, and ASDE obtained the second ranking.

According to Wilcoxon's test shown in Table 3, ASDE is significantly better than AMFO, DE, EPSDE, MFO, PSO, and RLDE for all dimensions. On the other hand, there is no significant difference between ASDE and LSHADE-SPACMA for any dimension. The performance of ASDE is similar to that of jSO for D = 30, 50, and 100. There is no significant difference between ASDE and either EB-LSHADE or SLPSO for D = 50 and 100.

4.2. The Comparison Results of Convergence Speed

In Figure 4, the vertical axis is the natural logarithm of the mean value over 25 independent runs, and the horizontal axis is the sampling point, where 30 sampling points are taken at FES = 1000 and whenever mod(FES, 10000) = 0.

Figure 4 clearly shows that ASDE obtains the best performance on 5 functions (F7, F9, F11, F13, and F15), which demonstrates that ASDE is effective at optimizing some complex problems. Although ASDE shows poor convergence speed on function F1, it has strong global exploration ability and achieves the best convergence accuracy. This may be because the proposed mechanism can autonomously assign combinations of mutation strategies and control parameters to the population so that the population can use more exploratory mutation strategies and control parameters to search.

Additionally, DE is the best on 5 functions (F6, F8, F10, F14, and F15), and EPSDE gains the best convergence speed on 3 functions (F1, F2, and F15). MFO obtains the best convergence speed on 2 functions (F3 and F15), and SLPSO has the best convergence speed on 5 functions (F4, F5, F9, F12, and F15). PSO and RLDE have poor convergence speed on all benchmark functions.

4.3. Sensitivity Analysis of the Parameter λ

The parameter λ determines the impact level of the historical experience on the current population; that is, in the historical experience, the larger the value of λ, the greater the proportion of the reward of each combination used in the current population. To find a good choice of λ, ASDE with different values λ = {0.1, 0.2, …, 1} is utilized to optimize the 15 benchmark problems of CEC2015. The other parameter settings of the algorithm are the same as those described above. Table 4 summarizes the average errors over 25 independent runs. For each problem, the best result of ASDE among the different values of λ is displayed in bold. The last row of the table presents the average ranking of ASDE with each value of λ.

Table 4 clearly shows that the value of λ affects the performance of the algorithm, and the same value yields different performance on different functions. ASDE with λ = 0.3 obtains the best performance on 6 functions (F3, F8, F9, F12, F14, and F15), and its average ranking is the best. In addition, although ASDE with λ = 1 obtains the best performance on 9 functions (F1–F4, F6, F9, F12, F13, and F15), it makes the historical experience more volatile, because the evolutionary search is stochastic. Thus, to make the algorithm more robust and the historical experience more stable, λ = 0.3 is recommended for ASDE.

4.4. Comparison Results of Time Complexity

In this section, the time complexity of ASDE is evaluated as follows:
(1) The complexity of population initialization is O(NP·D).
(2) Calculating the fitness values costs O(NP) function evaluations.
(3) Calculating the diversity of the population costs O(NP·D).
(4) Computing the feature similarity between the current population and the records of the historical experience archive needs O(H).
(5) Assigning control parameters and mutation strategies to each individual costs O(NP).
(6) Generating the trial vectors costs O(NP·D).
(7) Calculating the fitness values of the offspring population costs O(NP) function evaluations.
(8) The update of the historical experience archive costs O(1).

Since the archive size H equals the population size NP, the overall time complexity of ASDE per generation is O(NP·D), and over G_max generations it is O(G_max·NP·D).

The comparisons of the average CPU time per iteration of the 11 algorithms, averaged over the 28 functions of CEC2017 for D = 100, are displayed in Figure 5 in the form of a bar plot. Figure 5 clearly shows that the mean CPU time of ASDE is slightly worse than that of DE, jSO, PSO, and SLPSO, and slightly better than that of MFO and RLDE. In addition, the mean CPU time of ASDE is significantly better than that of AMFO, EB-LSHADE, EPSDE, and LSHADE-SPACMA.

5. The Application about Parameter Extraction of Photovoltaic Models

Solar energy is one of the most important renewable energy sources [56], and its main practical application is photovoltaic (PV) power generation [57], because solar energy can be directly converted into electricity through a PV system. However, the conversion efficiency of a PV model is greatly affected by its model parameters. To deal with this problem, ASDE is used to extract the parameters of PV models.

5.1. PV Models and Fitness Functions

The single-diode and double-diode models [58] are the most commonly used PV models, which can explain the current-voltage characteristics of PV systems. The single-diode model, the double-diode model, the PV module, and the fitness function are modeled as follows.

5.1.1. Single-Diode Model

The relationship between the output current and voltage of the single-diode model is described as [59]

$$I_L = I_{ph} - I_{sd}\left[\exp\!\left(\frac{V_L + I_L R_s}{a\,V_t}\right) - 1\right] - \frac{V_L + I_L R_s}{R_{sh}},$$

where $R_s$ and $R_{sh}$ are the series resistance and the shunt resistance, respectively, and $a$ stands for the diode ideality factor. $I_L$ and $V_L$ stand for the output current and the output voltage of the single-diode model, respectively. $I_{ph}$ and $I_{sd}$ are the photogenerated current and the diode saturation current, respectively. $V_t$ represents the junction thermal voltage, generated by

$$V_t = \frac{k\,T}{q},$$

where $k$ denotes the Boltzmann constant ($1.380649 \times 10^{-23}$ J/K), $T$ stands for the junction temperature in Kelvin, and $q$ represents the electron charge ($1.602176634 \times 10^{-19}$ C).

The five unknown parameters ($I_{ph}$, $I_{sd}$, $R_s$, $R_{sh}$, and $a$) of the single-diode model will be extracted.
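The following sketch evaluates the single-diode equation for a given parameter vector at one measured (V, I) point, using the physical constants quoted above; the function and variable names are ours, and the block is only a sketch of how the model error can be computed.

```python
import numpy as np

K_BOLTZMANN = 1.380649e-23    # Boltzmann constant, J/K
Q_ELECTRON = 1.602176634e-19  # electron charge, C

def single_diode_residual(params, V, I, T_kelvin):
    """Return I_model - I_measured for the single-diode model.

    params = (I_ph, I_sd, R_s, R_sh, a), with currents in A and resistances in ohm.
    """
    I_ph, I_sd, R_s, R_sh, a = params
    V_t = K_BOLTZMANN * T_kelvin / Q_ELECTRON  # junction thermal voltage
    I_model = (I_ph
               - I_sd * (np.exp((V + I * R_s) / (a * V_t)) - 1.0)
               - (V + I * R_s) / R_sh)
    return I_model - I
```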

5.1.2. Double-Diode Model

The relationship between the output current and voltage of the double-diode model is described as

$$I_L = I_{ph} - I_{sd1}\left[\exp\!\left(\frac{V_L + I_L R_s}{a_1 V_t}\right) - 1\right] - I_{sd2}\left[\exp\!\left(\frac{V_L + I_L R_s}{a_2 V_t}\right) - 1\right] - \frac{V_L + I_L R_s}{R_{sh}},$$

where $I_{sdi}$ denotes the saturation current of the $i$th diode and $a_i$ represents the ideality factor of the $i$th diode.

There are seven unknown parameters ($I_{ph}$, $I_{sd1}$, $I_{sd2}$, $R_s$, $R_{sh}$, $a_1$, and $a_2$) of the double-diode model to be extracted.

5.1.3. PV Module

The relationship between the output current and voltage of the PV module is described as [60]

$$I_L = N_p I_{ph} - N_p I_{sd}\left[\exp\!\left(\frac{V_L/N_s + I_L R_s/N_p}{a\,V_t}\right) - 1\right] - \frac{V_L/N_s + I_L R_s/N_p}{R_{sh}/N_p},$$

where $N_s$ is the number of solar cells connected in series and $N_p$ denotes the number of solar cells connected in parallel. In the experiment, $N_p$ is 1, because the PV modules used are all connected in series. Thus, the output current and voltage of the used PV module are related by

$$I_L = I_{ph} - I_{sd}\left[\exp\!\left(\frac{V_L/N_s + I_L R_s}{a\,V_t}\right) - 1\right] - \frac{V_L/N_s + I_L R_s}{R_{sh}}.$$

The five parameters ($I_{ph}$, $I_{sd}$, $R_s$, $R_{sh}$, and $a$) of the PV module need to be extracted.

5.1.4. Fitness Functions

Minimizing the error between simulated and measured current data is the target of parameter extraction of PV models. The absolute error current (AEC) of the individuals is calculated as follows [61].

The AEC of the single-diode model is calculated by

$$f_{SD}(V_L, I_L, x) = \left|\,I_{ph} - I_{sd}\left[\exp\!\left(\frac{V_L + I_L R_s}{a\,V_t}\right) - 1\right] - \frac{V_L + I_L R_s}{R_{sh}} - I_L\,\right|.$$

The AEC of the double-diode model is generated by

$$f_{DD}(V_L, I_L, x) = \left|\,I_{ph} - I_{sd1}\left[\exp\!\left(\frac{V_L + I_L R_s}{a_1 V_t}\right) - 1\right] - I_{sd2}\left[\exp\!\left(\frac{V_L + I_L R_s}{a_2 V_t}\right) - 1\right] - \frac{V_L + I_L R_s}{R_{sh}} - I_L\,\right|.$$

The AEC of the PV module is created as

$$f_{M}(V_L, I_L, x) = \left|\,I_{ph} - I_{sd}\left[\exp\!\left(\frac{V_L/N_s + I_L R_s}{a\,V_t}\right) - 1\right] - \frac{V_L/N_s + I_L R_s}{R_{sh}} - I_L\,\right|.$$

To quantify the overall error between the simulated and measured currents, the root mean square error is used as the fitness function, addressed as

$$\text{RMSE}(x) = \sqrt{\frac{1}{N}\sum_{k=1}^{N} f_k^{2}(V_L, I_L, x)},$$

where $x$ represents the feasible solution consisting of the unknown parameters and $N$ is the number of measured current data points. Clearly, the smaller the fitness value, the more accurate the extracted parameters.
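The RMSE objective over a set of N measured points could then be assembled as in the sketch below, shown for the single-diode model; the residual is re-derived inline so the block is self-contained, and all names, defaults, and the temperature value (33°C for the R.T.C. France cell) are our illustrative assumptions.

```python
import numpy as np

def rmse_fitness(params, V_data, I_data, T_kelvin=306.15):
    """Root mean square error of the single-diode model over measured (V, I) pairs.

    params = (I_ph, I_sd, R_s, R_sh, a); V_data and I_data are 1-D arrays.
    """
    I_ph, I_sd, R_s, R_sh, a = params
    V_t = 1.380649e-23 * T_kelvin / 1.602176634e-19  # junction thermal voltage
    residuals = (I_ph
                 - I_sd * (np.exp((V_data + I_data * R_s) / (a * V_t)) - 1.0)
                 - (V_data + I_data * R_s) / R_sh
                 - I_data)
    return float(np.sqrt(np.mean(residuals ** 2)))
```

This fitness function is the objective that ASDE (or any of the compared optimizers) would minimize over the parameter ranges listed in Table 5.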

5.2. Parameters Setting

The current-voltage data of the single-diode (PV-F1) and double-diode (PV-F2) models are obtained from [62], measured on a 57 mm diameter commercial silicon R.T.C. France solar cell under 1000 W/m² at 33°C. Three different PV modules are used to test the performance of ASDE, i.e., the polycrystalline Photowatt-PWP201 (PV-F3), the monocrystalline STM6-40/36 (PV-F4), and the polycrystalline STP6-120/36 (PV-F5). The polycrystalline Photowatt-PWP201 is measured under 1000 W/m² at 45°C [62]. The monocrystalline STM6-40/36 and the polycrystalline STP6-120/36 are measured at 51°C and 55°C, respectively, and their current-voltage data are obtained from [63]. The feasible ranges of the parameters to be extracted are displayed in Table 5. In addition, the parameter settings of the simulation are the same as those in Section 4.

5.3. Comparison on Solution Accuracy

The basic settings of Table 6 are the same as those of Table 1. It can be seen clearly from Table 6 that ASDE obtains the best performance on 2 photovoltaic models (PV-F1 and PV-F2) and the second-best performance on the other models. Meanwhile, ASDE is better than DE, PSO, AMFO, and RLDE on all five photovoltaic models. Compared with SLPSO and MFO, ASDE is better on four models and similar on one model. In addition, ASDE obtains better performance than EPSDE on three models. This indicates that ASDE is effective in solving the parameter extraction problem of photovoltaic models. Finally, ASDE is worse than EPSDE on 2 models (PV-F4 and PV-F5), which may be because the ε-greedy strategy (ε = 0.4) slightly weakens the exploitation ability of the population.

It is noteworthy that the root mean square error between the current obtained by ASDE and the measured current is less than 0.05. Thus, we believe that ASDE is effective in optimizing the parameter extraction problem of photovoltaic models.

5.4. The Comparison Results of Convergence Speed

In Figure 6, the vertical axis is the natural logarithm of the mean value over 25 independent runs, and the horizontal axis is the sampling point, where sampling points are taken at FES = 1000 and whenever mod(FES, 10000) = 0.

Figure 6 shows that ASDE gains the best convergence speed on 3 models (PV-F1, PV-F2, and PV-F3), which suggests that ASDE can achieve promising convergence performance in optimizing the parameter extraction problems of photovoltaic models. On model PV-F4, in the early stage of the search, the convergence speed of ASDE is better than that of EPSDE. However, in the later stage of the search, ASDE converges more slowly than EPSDE. This could be because ASDE with a fixed ε value has slightly weaker convergence ability, which is the price paid for avoiding the population falling into the trap of a local optimal solution. Although ASDE shows worse convergence speed than MFO on model PV-F5, it achieves the best convergence accuracy, which indicates that the exploration ability of ASDE is noteworthy.

6. Conclusions

In reality, most optimization problems are black box optimization problems with unknown problem features, which prevents traditional adaptive DE algorithms from achieving satisfactory performance on them. To improve this situation, a DE with autonomous selection of mutation strategies and control parameters, named ASDE for short, is proposed in this paper. In ASDE, a historical experience archive with population features is utilized to preserve the historical experience of mutation strategies and control parameters, so that the historical experience can be accumulated and mapped into a rules repository, and the individuals can choose combinations of mutation strategies and control parameters according to those rules. In addition, to ensure that the historical experience is accumulated effectively and utilized efficiently, an updating and utilization mechanism of the historical experience is proposed. Finally, the benchmark functions of CEC2015 and CEC2017 and the parameter extraction problems of photovoltaic models are utilized to verify the performance of ASDE, and the simulation results demonstrate that ASDE outperforms the compared algorithms.

In our future work, ASDE will be applied to solve other real-world optimization problems to further test its performance.

Data Availability

The data used to support the findings of this study cannot be shared due to privacy.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors’ Contributions

Zhenyu Wang and Zijian Cao were responsible for conceptualization and methodology. Investigation was conducted by Chen Liu and Haowen Jia. Writing-original draft preparation was carried out by Zhenyu Wang. Zijian Cao and Feng Tian were responsible for writing-review and editing. Binhui Han and Fuxi Liu were responsible for funding acquisition. All authors have read and agreed to the published version of the manuscript.

Acknowledgments

This research was partially funded by the Shaanxi Natural Science Basic Research Project (grant no. 2020JM-565).