Abstract

The slime mould algorithm (SMA) is a population-based metaheuristic inspired by the oscillation behaviour of slime mould. Although SMA is competitive with other algorithms, it still suffers from an imbalance between exploitation and exploration and tends to fall into local optima. To address these shortcomings, an improved variant of SMA named MSMA is proposed in this paper. Firstly, a chaotic opposition-based learning strategy is used to enhance population diversity. Secondly, two adaptive parameter control strategies are proposed to balance exploitation and exploration. Finally, a spiral search strategy is used to help SMA escape local optima. The superiority of MSMA is verified on 13 multidimensional test functions and 10 fixed-dimension test functions. In addition, two engineering optimization problems are used to verify the potential of MSMA for solving real-world optimization problems. The simulation results show that the proposed MSMA outperforms the comparison algorithms in terms of convergence accuracy, convergence speed, and stability.

1. Introduction

Solving an optimization problem means finding the values of the given variables that maximize or minimize the objective without violating the constraints. With the continuous development of artificial intelligence technology, real-world optimization problems are becoming more and more complex. Traditional mathematical methods find it difficult to solve nondifferentiable and noncontinuous problems effectively and are easily trapped in local optima [1, 2]. Metaheuristic optimization algorithms are able to obtain optimal or near-optimal solutions within a reasonable amount of time [3]. Thus they are widely used for solving optimization problems such as mission planning [4–7], image segmentation [8–10], feature selection [11–13], and parameter optimization [14–18]. Metaheuristic algorithms find optimal solutions by modeling physical phenomena or biological activities in nature. These algorithms can be divided into three categories: evolutionary algorithms, physics-based algorithms, and swarm-based algorithms. Evolutionary algorithms, as the name implies, simulate the laws of evolution in nature. The genetic algorithm [19], based on Darwin's principle of survival of the fittest, is one representative. Others include differential evolution, which mimics the crossover and mutation mechanisms of genetics [20], biogeography-based optimization, inspired by natural biogeography [21], evolutionary programming [22], and evolution strategies [23]. Physics-based algorithms search for the optimum by simulating physical laws or phenomena in the universe. Simulated annealing (SA), inspired by the phenomenon of metallurgical annealing, is the best-known physics-based algorithm.
Apart from SA, other physics-based algorithms have been proposed, such as the gravitational search algorithm [24], the sine cosine algorithm [25], the black hole algorithm [26], the nuclear reaction optimizer [27], and Henry gas solubility optimization [28]. Swarm-based algorithms are inspired by the social group behavior of animals or humans. Particle swarm optimization [29] and ant colony optimization [30], which simulate the foraging behavior of birds and ants, are two of the most common swarm-based algorithms. In addition, researchers have proposed newer swarm-based algorithms. The grey wolf optimizer [31] simulates the collaborative foraging of grey wolves. The salp swarm algorithm [32] is inspired by the foraging and following behavior of salps. Monarch butterfly optimization [33] is inspired by the migratory activities of monarch butterfly populations. The naked mole-rat algorithm [34] mimics the mating patterns of naked mole-rats. However, the no free lunch (NFL) theorem points out that no single algorithm can solve all optimization problems well [35]. This motivates us to continuously propose new algorithms and improve existing ones. Recently, inspired by the phenomenon of slime mould oscillation, Li et al. proposed a new population-based algorithm called the slime mould algorithm (SMA) [36]. Although SMA is competitive with other algorithms, it has some shortcomings. Because of diminishing population diversity, SMA easily falls into local optima [37]. Its selection of update strategies weakens the exploration ability [38, 39]. As the problem grows more complex, SMA converges more slowly in late iterations and has difficulty maintaining a balance between exploitation and exploration [40, 41]. To further enhance the performance of SMA, and considering that the NFL theorem encourages us to continuously improve existing algorithms, a modified variant of SMA called MSMA is proposed in this paper.
A chaotic opposition-based learning strategy is first used to improve population diversity: applying a chaotic operator to generate reverse solutions expands the search scope. Second, two adaptive parameter control strategies are proposed to better balance exploitation and exploration. Finally, a spiral search strategy is introduced to enhance the global exploration ability of the algorithm and avoid falling into local optima. To verify the superiority of MSMA, 13 functions with variable dimensions and 10 functions with fixed dimensions were used for testing. The differences between the algorithms were also analyzed using the Wilcoxon rank-sum test and the Friedman test. Moreover, two engineering optimization problems were used to further verify the performance of MSMA.

The remainder of this paper is organized as follows. A review of the basic SMA is provided in Section 2. Section 3 provides a detailed description of the proposed MSMA. In Section 4, the effectiveness of the proposed improvement strategies and the superiority of the improved algorithm are verified using classical test functions; on this basis, MSMA is also applied to solve two engineering design problems in Section 4. The main reasons for the success of MSMA are discussed in Section 5. Finally, conclusions and future work are given in Section 6.

2. Slime Mould Algorithm

In this section, the basic procedure of SMA is described. SMA works by simulating the behavioral and morphological changes of slime mould during the foraging process. The mathematical model of the slime mould is as follows:

$$X(t+1)=\begin{cases}X_b(t)+vb\cdot\left(W\cdot X_A(t)-X_B(t)\right), & r<p\\ vc\cdot X(t), & r\geq p\end{cases}$$

where $t$ denotes the current iteration number, $X_b$ denotes the optimal individual, $X_A$ and $X_B$ are two individuals randomly selected from the population at iteration $t$, $vb$ is a parameter in the range $[-a,a]$, $vc$ is a variable decreasing from 1 to 0, $W$ denotes the weight of the slime mould, and $r$ is a random value in $[0,1]$. $p$ is a variable that is calculated by the following formula:

$$p=\tanh\left|S(i)-DF\right|,\quad i=1,2,\ldots,N,$$

where $S(i)$ is the fitness of $X_i$, $N$ is the population size, and $DF$ is the best fitness obtained so far.

The formula for $vb$ is as follows:

$$vb=[-a,a],\qquad a=\operatorname{arctanh}\left(-\frac{t}{t_{\max}}+1\right),$$

where $t_{\max}$ is the maximum number of iterations.

The formula of $W$ is calculated as follows:

$$W(\mathrm{SmellIndex}(i))=\begin{cases}1+r\cdot\log\left(\dfrac{bF-S(i)}{bF-wF}+1\right), & \text{condition}\\[4pt] 1-r\cdot\log\left(\dfrac{bF-S(i)}{bF-wF}+1\right), & \text{others}\end{cases}$$

where condition denotes the individuals ranking in the top half of fitness, $r$ is a random value in $[0,1]$, and $bF$ and $wF$ denote the best fitness and worst fitness in the current population, respectively.

The mathematical formula for updating the position of the slime mould is as follows:

$$X(t+1)=\begin{cases}\mathrm{rand}\cdot(UB-LB)+LB, & \mathrm{rand}<z\\ X_b(t)+vb\cdot\left(W\cdot X_A(t)-X_B(t)\right), & r<p\\ vc\cdot X(t), & r\geq p\end{cases}$$

where $UB$ and $LB$ are the upper and lower bounds of the search space, respectively, $\mathrm{rand}$ is a random value in $[0,1]$, and $z$ is a small probability constant (0.03 in the original SMA).
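As a concrete illustration, the two control parameters above can be computed directly. The following Python sketch (an illustration, not the authors' code) evaluates $p$ and the bound $a$ of $vb$:

```python
import numpy as np

def smell_p(fitness_i, best_fitness):
    # p = tanh|S(i) - DF|: approaches 1 for individuals far from the best,
    # approaches 0 for individuals close to the best
    return np.tanh(abs(fitness_i - best_fitness))

def vb_bound(t, t_max):
    # a = arctanh(-(t/t_max) + 1); vb is then drawn uniformly from [-a, a]
    # (the small floor avoids arctanh(0) underflow at the final iteration)
    return np.arctanh(max(1.0 - t / t_max, 1e-12))

# a shrinks toward 0 as iterations proceed, moving SMA from exploration
# toward exploitation
assert vb_bound(1, 500) > vb_bound(400, 500)
```

Because $p$ is near 1 for poor individuals, they mostly follow the best individual, while near-best individuals take the contracting $vc\cdot X(t)$ branch.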

3. Proposed MSMA

To overcome the shortcomings of the basic SMA, this paper proposes three improvement strategies to enhance its performance. A chaotic opposition-based learning strategy is used to enhance population diversity, a self-adaptive strategy is used to balance the exploitation and exploration abilities of the algorithm, and a spiral search strategy is used to prevent the algorithm from falling into local optima. The three improvement strategies are described in detail in the following.

3.1. Chaotic Opposition-Based Learning Strategy

Opposition-based learning (OBL) is a computational intelligence technique proposed by Tizhoosh [42]. It has been shown that the probability of the reverse solution being closer to the global optimum is nearly 50% higher than that of the current original solution. OBL enhances population diversity mainly by generating the reverse position of each individual and then evaluating the original and reverse individuals so that the dominant ones are retained in the next generation. The OBL formula is as follows:

$$\tilde{x}=lb+ub-x,$$

where $\tilde{x}$ is the reverse solution corresponding to $x$, and $lb$ and $ub$ are the lower and upper bounds of the search space.

The reverse solution generated by basic OBL is not necessarily better than the current solution. Chaotic mapping, with its randomness and ergodicity, can help generate new solutions and enhance population diversity. Therefore, to further enhance population diversity and overcome this deficiency, this paper combines chaotic mapping with OBL and proposes a chaotic opposition-based learning strategy. The specific mathematical model is described as follows:

$$\tilde{X}_i=C_i\cdot(lb+ub)-X_i,$$

where $\tilde{X}_i$ denotes the inverse solution corresponding to the $i$th individual in the population and $C_i$ is the corresponding chaotic mapping value.
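To make the strategy concrete, the following Python sketch generates chaotic opposition points. The Sine-map step and the combined form $\tilde{X} = C\cdot(lb+ub) - X$ are common chaotic-OBL constructions and are assumptions here, not necessarily the paper's exact equation (8):

```python
import numpy as np

def sine_map(x):
    # one step of the Sine chaotic map on (0, 1); the exact map variant
    # used in the paper is an assumption
    return np.abs(np.sin(np.pi * x))

def chaotic_opposition(X, lb, ub, c):
    # opposition point scaled by a chaotic value c, clipped to the bounds
    return np.clip(c * (lb + ub) - X, lb, ub)

rng = np.random.default_rng(1)
X = rng.uniform(-5.0, 5.0, size=(4, 3))   # a small illustrative population
c = sine_map(rng.random())                # chaotic value for this generation
Xo = chaotic_opposition(X, -5.0, 5.0, c)
# in MSMA, whichever of X / Xo has the better fitness per individual
# would be kept for the next generation
```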

3.2. Self-Adaptive Strategy
3.2.1. New Nonlinear Decreasing Strategy

During the iterative optimization of SMA, the variation of the parameter $a$ has an important impact on the balance of exploitation and exploration. In SMA, $a$ decreases rapidly in the early iterations and slows down in the later iterations. A small $a$ in the early stage is not conducive to global exploration. Therefore, in order to further balance exploitation and exploration and to enhance both the global exploration capability and the convergence capability of local exploitation, a new nonlinear decreasing strategy is proposed in this paper. The new definition of the parameter $a$ is shown as follows:

To visually illustrate the effect of the new strategy, we compare it with the parameter change strategy of SMA, as shown in Figure 1. The new strategy proposed in this paper decreases slowly in the early stages, which increases the time available for global exploration. In the late iterations, it also decreases faster than the original strategy, which helps SMA accelerate exploitation.
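The contrast between the two schedules can be sketched numerically. The nonlinear variant below is purely illustrative; the paper's exact formula and exponent are not reproduced here, so this particular form (and the exponent k) is an assumption:

```python
import numpy as np

def a_original(t, t_max):
    # original SMA schedule: a = arctanh(1 - t/t_max)
    return np.arctanh(max(1.0 - t / t_max, 1e-12))

def a_nonlinear(t, t_max, k=2.0):
    # illustrative nonlinear schedule: stays larger early (more global
    # exploration time) and drops more steeply near the end; k is assumed
    return np.arctanh(max(1.0 - (t / t_max) ** k, 1e-12))

# early iterations: the nonlinear variant keeps a larger
assert a_nonlinear(50, 500) > a_original(50, 500)
```

The late-stage drop is steeper for the nonlinear schedule even though its value stays above the original, which matches the qualitative behaviour described for Figure 1.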

3.2.2. Linear Decreasing Selection Range

In equation (1), the original SMA randomly selects two individuals from the whole population. This is not conducive to the later convergence of the algorithm. In order to enhance the convergence of SMA, the selection range in equation (1) is reduced as the number of iterations increases. The selection range parameter $SR$ is described as follows:

$$SR=SR_{\max}-\left(SR_{\max}-SR_{\min}\right)\cdot\frac{t}{t_{\max}},$$

where $SR_{\max}$ and $SR_{\min}$ are the maximum and minimum selection ranges, respectively.
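A minimal sketch of this shrinking selection range, assuming the linear form implied by the section title (the values of SR_max and SR_min are illustrative assumptions):

```python
def selection_range(t, t_max, sr_max, sr_min):
    # SR decreases linearly from sr_max at t = 0 to sr_min at t = t_max;
    # the random partners X_A and X_B are then drawn only from the best
    # SR individuals of the fitness-sorted population
    return int(round(sr_max - (sr_max - sr_min) * t / t_max))

assert selection_range(0, 500, 30, 2) == 30     # full population early
assert selection_range(500, 500, 30, 2) == 2    # only the elite late
```

Restricting partner selection to progressively better individuals biases late-stage moves toward promising regions, which is the convergence effect described above.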

3.3. Spiral Search Strategy

In order to better balance the exploitation and exploration of SMA, this paper introduces a spiral search strategy. The spiral search diagram is shown in Figure 2.

As can be seen from Figure 2, the spiral search strategy can expand the search scope and better improve the global exploration performance. The mathematical formula of the spiral search strategy is shown as follows:

$$X(t+1)=\left|X_b(t)-X(t)\right|\cdot e^{bl}\cdot\cos(2\pi l)+X_b(t),$$

where $b$ is a constant defining the shape of the logarithmic spiral and $l$ is a random number in $[-1,1]$.
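A hedged sketch of one spiral update step, assuming the standard WOA-style logarithmic spiral form (the paper's exact equation may differ):

```python
import numpy as np

def spiral_step(x, x_best, b=1.0, rng=None):
    # move x along a logarithmic spiral around the best individual x_best;
    # l in [-1, 1] controls both the radius scaling and the rotation
    if rng is None:
        rng = np.random.default_rng()
    l = rng.uniform(-1.0, 1.0)
    return np.abs(x_best - x) * np.exp(b * l) * np.cos(2.0 * np.pi * l) + x_best
```

Because the per-dimension distance is rescaled by a factor that can exceed 1, the spiral step can probe beyond the current neighbourhood, which is the exploration-expanding effect attributed to Figure 2.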

The spiral search strategy and the original strategy are chosen randomly, according to a probability, to update the population positions; the modified position-updating formula is given in equation (12). The pseudocode and flowchart of MSMA are shown in Algorithm 1 and Figure 3.

Initialization {
 Initialize z, NP, t_max, SR_max, SR_min
 Initialize the positions X_i of the slime mould }
 Main loop
 While (t < t_max)
  Calculate the fitness of X_i
  Generate the inverse positions X̃_i of the slime mould by equation (8)
  Calculate the fitness of X̃_i
  Select the best of (X_i, X̃_i)
  Calculate the weight W by equation (4)
  Update SR by equation (10)
  Update a by equation (9)
  For each search solution
   Update p, vb, vc, l
   Update the position by equation (12)
  End For
  t = t + 1
 End While
Return the best fitness and X_b
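Putting the pieces together, the following Python sketch mirrors Algorithm 1 for a minimization problem. It is an illustrative reimplementation under stated assumptions: the Sine-map chaotic opposition form, the original arctanh schedule for a, a linearly shrinking selection range, a WOA-style spiral, and an equal 0.5 probability between the spiral and the original SMA update are all assumptions, not the authors' exact formulas:

```python
import numpy as np

def msma(obj, lb, ub, dim, n_pop=30, t_max=500, z=0.03, seed=0):
    """Illustrative MSMA sketch (minimization); not the authors' code."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((n_pop, dim)) * (ub - lb)
    fit = np.apply_along_axis(obj, 1, X)
    best = fit.argmin()
    xb, fb = X[best].copy(), fit[best]
    for t in range(1, t_max + 1):
        # chaotic opposition-based learning (Sine map; assumed form)
        c = np.abs(np.sin(np.pi * rng.random((n_pop, 1))))
        Xo = np.clip(c * (lb + ub) - X, lb, ub)
        fo = np.apply_along_axis(obj, 1, Xo)
        better = fo < fit
        X[better], fit[better] = Xo[better], fo[better]
        order = fit.argsort()                 # sort population by fitness
        X, fit = X[order], fit[order]
        if fit[0] < fb:
            xb, fb = X[0].copy(), fit[0]
        bF, wF = fit[0], fit[-1]
        # weights W as in the basic SMA: top half boosted, bottom half damped
        r = rng.random((n_pop, dim))
        frac = np.log10((bF - fit) / (bF - wF + 1e-300) + 1.0)[:, None]
        W = np.where(np.arange(n_pop)[:, None] < n_pop // 2,
                     1.0 + r * frac, 1.0 - r * frac)
        a = np.arctanh(max(1.0 - t / t_max, 1e-12))   # original schedule (assumed)
        sr = max(2, int(n_pop * (1.0 - t / t_max)))   # shrinking selection range
        for i in range(n_pop):
            if rng.random() < z:              # occasional random restart
                X[i] = lb + rng.random(dim) * (ub - lb)
                continue
            p = np.tanh(abs(fit[i] - fb))
            if rng.random() < 0.5:            # original SMA update
                if rng.random() < p:
                    A, B = rng.integers(0, sr, 2)   # partners from best sr
                    vb = rng.uniform(-a, a, dim)
                    X[i] = xb + vb * (W[i] * X[A] - X[B])
                else:
                    bound = 1.0 - t / t_max
                    X[i] = rng.uniform(-bound, bound, dim) * X[i]
            else:                             # spiral search (assumed form)
                l = rng.uniform(-1.0, 1.0)
                X[i] = np.abs(xb - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + xb
            X[i] = np.clip(X[i], lb, ub)
        fit = np.apply_along_axis(obj, 1, X)
        if fit.min() < fb:
            j = fit.argmin()
            xb, fb = X[j].copy(), fit[j]
    return xb, fb

# quick demo on the sphere function (F1-style)
xb, fb = msma(lambda v: float(np.sum(v * v)), -100.0, 100.0,
              dim=10, n_pop=20, t_max=100, seed=3)
print("best fitness:", fb)
```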
3.4. Computational Complexity Analysis

MSMA is mainly composed of the following components: initialization, fitness evaluation, reverse-population fitness evaluation, ranking, weight update, and position update, where $N$ denotes the number of slime moulds, $D$ denotes the function dimension, and $T$ denotes the maximum number of iterations. The computational complexity of initialization is $O(N\times D)$; that of fitness evaluation and reverse-population fitness evaluation is $O(2\times N\times T)$; that of ranking is $O(N\log N\times T)$; that of the weight update is $O(N\times D\times T)$; and that of the position update is $O(N\times D\times T)$. Therefore, the total complexity of MSMA is $O(N\times D+T\times N\times(2+\log N+2\times D))$, i.e., $O(T\times N\times(\log N+D))$.

4. Numerical Experiment and Analysis

In this section, various experiments are performed to verify the performance of MSMA. The experiments mainly include twenty-three classical test functions and two engineering design optimization problems.

4.1. Benchmark Test Functions and Parameter Settings

The 23 benchmark test functions include 7 unimodal, 6 multimodal, and 10 fixed-dimension multimodal functions. The unimodal functions have only one global optimum and are usually used to verify the exploitation capability of an algorithm. The multimodal functions have multiple local optima and are therefore often used to examine an algorithm's exploration ability and its ability to escape from local optima. These benchmark functions are listed in Table 1.

The experimental results of MSMA are compared with those of eight other algorithms: MPA [43], MFO [44], SSA [32], EO [45], MRFO [46], HHO [47], GSA, and GWO. To ensure fairness, all algorithms were run on a Windows 10 platform with an AMD R7 4800U processor and 16 GB of RAM, and the code was implemented in MATLAB R2016b. In the experiments, the population size NP is 30 and the maximum number of iterations is 500. The results of 50 independent runs are recorded. The parameters of the comparison algorithms were set according to the original literature, as shown in Table 2.

4.2. Chaotic Mapping Selection Test

The chaotic opposition-based learning strategy proposed in this paper combines chaotic mapping with the opposition-based learning mechanism. To determine which chaotic mapping works best, 10 chaotic mappings are each combined with the opposition-based learning mechanism. The SMA using the chaotic mapping with ID 1 is named SMA-C1, and the remaining variants are named similarly. The details of the chaotic mappings are shown in Table 3. Table 4 lists the results of each algorithm on the benchmark test functions.

As shown in Tables 4 and 5, SMA-C1 through SMA-C10 all show better results than SMA. This indicates that all 10 chaotic opposition-based learning strategies can improve SMA performance. SMA-C6 achieved the best results on the unimodal functions F1–F7, which indicates that the Piecewise map can better enhance the exploitation ability of SMA. When solving the multimodal functions F8–F13, SMA-C5 achieved satisfactory results, which shows that the Logistic map can enhance the exploration ability of SMA. SMA-C4 achieves satisfactory solutions on the fixed-dimension functions F14–F23, which indicates that the Iterative map can enhance the local-optimum avoidance ability of SMA. Although SMA-C7 with the Sine map is not the best performer in any of the three categories, it ranks first overall. This indicates that the Sine map is most effective at improving the comprehensive performance of SMA. In summary, the Sine map, ranked first overall, is chosen in this paper to generate the chaotic mapping values for the chaotic opposition-based learning strategy.

4.3. Improvement Strategy Effectiveness Test

As seen in Section 3, three strategies are used in this paper to improve SMA performance. To evaluate the impact of each strategy on SMA, three SMA-derived algorithms (MSMA-1, MSMA-2, and MSMA-3) are constructed according to Table 6, where COBL denotes the chaotic opposition-based learning strategy, SA the self-adaptive strategy, and SS the spiral search strategy. Tables 7 and 8 list the results of each algorithm on the benchmark test functions.

As shown in Tables 7 and 8, MSMA, with all improvement strategies, performs best overall. The three SMA-derived algorithms also rank higher than SMA, in the following order from highest to lowest: MSMA-1, MSMA-2, and MSMA-3. This shows that the three strategies affect MSMA performance, from largest to smallest, in the order COBL > SA > SS. Further analysis shows that MSMA-1 performs best on the unimodal functions F1–F7, which indicates that COBL can significantly improve the local search ability of SMA. MSMA-3 achieves satisfactory results on the multimodal functions F8–F13 and the fixed-dimension functions F14–F23, which shows that SS can improve the global exploration capability of SMA and allows the algorithm to escape local optima. MSMA-2 performs well on all three types of functions, which indicates that the self-adaptive strategy balances the exploitation and exploration capabilities of SMA. It is worth noting that MSMA-3 performs worse than SMA on the unimodal functions. This is because the spiral search strategy expands the search of the space around each individual, which weakens the exploitation capability. However, the combination of the three strategies significantly improves the comprehensive performance of MSMA, further illustrating how important balanced exploitation and exploration are to the performance of an algorithm. Finally, to show the performance of each algorithm more visually, a radar plot is drawn based on the rankings. As shown in Figure 4, the smaller the area enclosed by a curve, the better the performance. Clearly, MSMA has the smallest enclosed area and thus the best performance, while SMA has the largest.

4.4. Comparison and Analysis of Optimization Results

Tables 9–12 list the optimization results of each algorithm on F1–F13 for Dim = 30, 100, 500, and 1000. Table 13 then shows the results of the ten algorithms on the fixed-dimension functions F14–F23. From the optimization results, MSMA achieves better results on most of the test functions.

Specifically, for the unimodal functions F1–F7, MSMA achieved satisfactory results in both low and high dimensions. MSMA can stably obtain the theoretical optimal solutions of F1 and F3 in different dimensions. In comparison, SMA failed to reach the theoretical optimum on any of these test functions and performed worse than MSMA. Comparing the results across dimensions, we found that MSMA's performance does not drop much as the dimension increases, which indicates that MSMA has excellent local exploitation capability. For the multimodal functions F8–F13, MSMA stably achieves the theoretical optimal values on F9–F11 for Dim = 30, 100, 500, and 1000. In low dimensions (Dim = 30, 100), MSMA performs best on F8; as the dimension increases, MSMA ranks second, behind only SSA. MSMA has the best comprehensive performance on the multimodal functions, indicating that the improved strategies greatly enhance the global exploration capability of SMA.

Fixed-dimension functions are often used to test the ability of an algorithm to keep a balance between exploitation and exploration. Analyzing the mean and standard deviation, MSMA performs best on six of the ten functions (F14, F16, F17, and F21–F23). In addition, MSMA provides a better solution than SMA on all fixed-dimension functions. Therefore, we can conclude that the MSMA proposed in this paper balances the exploitation and exploration capabilities well and has strong local-optimum avoidance.

4.5. Convergence and Stability Analysis

In order to analyze the convergence performance of MSMA, convergence curves are plotted for the different dimensions, as shown in Figure 5. The convergence speed and accuracy of MSMA are better than those of SMA in all dimensional cases. In addition, the convergence speed and accuracy of MSMA do not decrease much as the dimensionality increases. Therefore, the improvement strategies proposed in this paper can effectively improve the convergence speed of SMA and achieve better optimization results.

To analyze the distribution properties of MSMA on the fixed-dimension functions, box plots were drawn. From Figure 6, it can be seen that the maximum, minimum, and median values of MSMA are almost the same on most of the test functions. In particular, for F14 and F17 there are no outliers in MSMA. This shows that MSMA is superior to the comparison algorithms in terms of stability.

4.6. Statistical Test

To statistically validate the differences between MSMA and the comparison algorithms, Wilcoxon’s rank-sum test [48] and Friedman test [49] were used for testing.
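In Python, both tests are available in SciPy (assumed installed); the sketch below applies them to hypothetical run results, not the paper's data:

```python
import numpy as np
from scipy.stats import ranksums, friedmanchisquare

rng = np.random.default_rng(0)
# hypothetical best-fitness samples from 50 independent runs of three
# algorithms on one test function (illustrative data only)
msma_runs = rng.normal(0.0, 0.1, 50)
sma_runs = rng.normal(0.5, 0.2, 50)
gwo_runs = rng.normal(0.4, 0.2, 50)

# pairwise Wilcoxon rank-sum test: p < 0.05 marks a significant difference
stat, p_wilcoxon = ranksums(msma_runs, sma_runs)
print("Wilcoxon rank-sum p =", p_wilcoxon)

# Friedman test: ranks all algorithms jointly across the paired runs
stat, p_friedman = friedmanchisquare(msma_runs, sma_runs, gwo_runs)
print("Friedman p =", p_friedman)
```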

Table 14 presents the statistical results at a significance level of 0.05. The symbols "+/=/−" indicate that MSMA is better than, similar to, or worse than the comparison algorithm, respectively. As shown in Table 14, MSMA achieves results of 91/23/3, 96/18/3, 94/18/5, 93/19/5, and 66/15/9 across the different cases, confirming the significant superiority of MSMA over the other algorithms in most cases.

Table 15 shows the statistics of F1–F13 in different dimensions and the fixed-dimensional functions F14–F23. The statistics show that MSMA ranks first in all cases. Therefore, it can be considered that MSMA has the best performance compared to other algorithms.

4.7. Engineering Design Problems

Engineering design optimization problems are often solved using metaheuristic algorithms. In this section, MSMA is used to solve two engineering design problems: the welded beam design problem and the tension/compression spring design problem. The results provided by MSMA are compared with those of other algorithms.

4.7.1. Welded Beam Design Problem

The welded beam design problem is a classical structural optimization problem proposed by Coello [50]. As shown in Figure 7, the objective of this design problem is to minimize the manufacturing cost of the welded beam. The optimization variables include the weld thickness $h$ ($x_1$), the joint beam length $l$ ($x_2$), the beam height $t$ ($x_3$), and the beam thickness $b$ ($x_4$). The mathematical model of the welded beam design problem is as follows:

$$\min\; f(\vec{x})=1.10471x_1^2x_2+0.04811x_3x_4\left(14.0+x_2\right)$$

It is subject to

$$\begin{aligned}
&g_1(\vec{x})=\tau(\vec{x})-\tau_{\max}\le 0, \qquad g_2(\vec{x})=\sigma(\vec{x})-\sigma_{\max}\le 0,\\
&g_3(\vec{x})=x_1-x_4\le 0, \qquad g_4(\vec{x})=0.10471x_1^2+0.04811x_3x_4\left(14.0+x_2\right)-5.0\le 0,\\
&g_5(\vec{x})=0.125-x_1\le 0, \qquad g_6(\vec{x})=\delta(\vec{x})-\delta_{\max}\le 0, \qquad g_7(\vec{x})=P-P_c(\vec{x})\le 0,
\end{aligned}$$

where

$$\begin{aligned}
&\tau(\vec{x})=\sqrt{(\tau')^2+2\tau'\tau''\frac{x_2}{2R}+(\tau'')^2},\qquad \tau'=\frac{P}{\sqrt{2}x_1x_2},\qquad \tau''=\frac{MR}{J},\\
&M=P\left(L+\frac{x_2}{2}\right),\qquad R=\sqrt{\frac{x_2^2}{4}+\left(\frac{x_1+x_3}{2}\right)^2},\qquad J=2\left\{\sqrt{2}x_1x_2\left[\frac{x_2^2}{12}+\left(\frac{x_1+x_3}{2}\right)^2\right]\right\},\\
&\sigma(\vec{x})=\frac{6PL}{x_4x_3^2},\qquad \delta(\vec{x})=\frac{6PL^3}{Ex_3^3x_4},\qquad P_c(\vec{x})=\frac{4.013E\sqrt{x_3^2x_4^6/36}}{L^2}\left(1-\frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right),\\
&P=6000\ \mathrm{lb},\quad L=14\ \mathrm{in},\quad E=30\times 10^6\ \mathrm{psi},\quad G=12\times 10^6\ \mathrm{psi},\\
&\tau_{\max}=13600\ \mathrm{psi},\quad \sigma_{\max}=30000\ \mathrm{psi},\quad \delta_{\max}=0.25\ \mathrm{in}.
\end{aligned}$$

The results of MSMA solving this problem are compared with those of other algorithms, as shown in Table 16. The results show that MSMA is the optimal algorithm for solving this problem, and the optimal solutions for each parameter are [0.205729, 3.470488, 9.036623, 0.205729], with the corresponding minimum cost of 1.724852.
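As a sanity check, the reported optimum can be substituted into the cost function of the standard Coello formulation (assumed here to match the paper's model):

```python
def welded_beam_cost(x):
    # fabrication cost: f(x) = 1.10471*h^2*l + 0.04811*t*b*(14 + l)
    h, l, t, b = x
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

x_best = [0.205729, 3.470488, 9.036623, 0.205729]  # solution reported above
cost = welded_beam_cost(x_best)
assert abs(cost - 1.724852) < 1e-3  # reproduces the reported minimum cost
```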

4.7.2. Tension/Compression Spring Design Problem

The tension/compression spring design problem is a mechanical engineering design optimization problem. As shown in Figure 8, the objective of this problem is to minimize the weight of the spring. It involves three optimization variables: the wire diameter $d$ ($x_1$), the mean coil diameter $D$ ($x_2$), and the number of active coils $N$ ($x_3$). The comparison results are shown in Table 17. The mathematical model of this problem is described below:

$$\min\; f(\vec{x})=\left(x_3+2\right)x_2x_1^2$$

subject to

$$\begin{aligned}
&g_1(\vec{x})=1-\frac{x_2^3x_3}{71785x_1^4}\le 0,\\
&g_2(\vec{x})=\frac{4x_2^2-x_1x_2}{12566\left(x_2x_1^3-x_1^4\right)}+\frac{1}{5108x_1^2}-1\le 0,\\
&g_3(\vec{x})=1-\frac{140.45x_1}{x_2^2x_3}\le 0,\\
&g_4(\vec{x})=\frac{x_1+x_2}{1.5}-1\le 0.
\end{aligned}$$

The results show that MSMA achieved the lowest cost of 0.012665 compared with GA3, CPSO, CDE, DDSCA, GSA, hHHO-SCA, AEO, and MVO. The corresponding values of the variables are [0.051747, 0.358090, 11.122192].
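Similarly, the reported variables can be substituted into the weight objective of the standard formulation (assumed to match the paper's model); the value agrees with the reported cost to within rounding:

```python
def spring_weight(x):
    # spring weight: f(x) = (N + 2) * D * d^2
    d, D, N = x
    return (N + 2.0) * D * d**2

x_best = [0.051747, 0.358090, 11.122192]  # solution reported above
weight = spring_weight(x_best)
assert abs(weight - 0.012665) < 1e-3  # close to the reported cost
```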

5. Discussion

In this section, the reasons for the superior performance of MSMA are discussed. The results in Table 5 demonstrate that the chaotic opposition-based learning strategy can enhance the performance of SMA; the differing results of the chaotic mappings are caused by the different sequences each mapping generates. The results reported in Table 8 demonstrate that all three improvement strategies proposed in this paper can improve the performance of the algorithm. MSMA-1 is competitive on the unimodal functions, mainly because it utilizes chaotic mapping to enhance exploitation. MSMA-3 uses the spiral search strategy to improve performance on the multimodal functions, because the strategy expands each individual's search of the space around itself and thereby improves population diversity. MSMA-2 maintains a balance of exploitation and exploration through the adaptive strategies and thus ranks in the middle on both the multimodal and unimodal functions. The best overall performance of MSMA indicates that these three strategies complement each other and maintain a good balance between exploitation and exploration. This is also evidenced by the results of the Friedman test in Table 15.

6. Conclusions

In this paper, three improvement strategies are proposed to improve the performance of SMA. Firstly, a chaotic opposition-based learning strategy is used to enhance population diversity. Secondly, two adaptive parameter control strategies are proposed to effectively balance the exploitation and exploration of SMA. Finally, a spiral search strategy is used to expand the search around individuals and avoid falling into local optima. To evaluate the performance of the proposed MSMA, 23 classical test functions are used, including 13 multidimensional functions (Dim = 30, 100, 500, 1000) and 10 fixed-dimension functions.

From the experimental results and the discussion just mentioned, the following conclusions can be drawn.

The Sine map works best in combination with the opposition-based learning mechanism. Using the chaotic opposition-based learning strategy can enhance the exploitation capability of MSMA.

Using a spiral search strategy can significantly enhance MSMA's exploration capability and avoid getting trapped in local optima.

The two self-adaptive strategies maintain a good balance between exploitation and exploration.

Compared with the eight advanced algorithms, MSMA has better convergence accuracy, faster convergence speed, and more stable performance. MSMA has the potential to solve real-world optimization problems.

In future work, we will use MSMA to solve the multi-UAV path planning problem and the task assignment problem. Moreover, MSMA can be extended as a multiobjective optimization algorithm.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

The authors acknowledge funding received from the following science foundations: the National Natural Science Foundation of China (no. 62101590) and the Science Foundation of Shanxi Province, China (2020JQ-481, 2021JM-224, and 2021JM-223).