LARO: Opposition-Based Learning Boosted Artificial Rabbits-Inspired Optimization Algorithm with Lévy Flight

1 Electronic Information and Electrical Engineering College, Shangluo University, Shangluo 726000, China
2 College of Mathematics and Computer Application, Shangluo University, Shangluo 726000, China
3 Department of Applied Mathematics, Xi’an University of Technology, Xi’an 710054, China
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(11), 2282; https://doi.org/10.3390/sym14112282
Submission received: 8 October 2022 / Revised: 24 October 2022 / Accepted: 26 October 2022 / Published: 31 October 2022

Abstract:
The artificial rabbits optimization (ARO) algorithm is a recently developed metaheuristic (MH) method motivated by the survival strategies of rabbits with bilateral symmetry in nature. Although the ARO algorithm shows competitive performance compared with popular MH algorithms, it still suffers from poor convergence accuracy and a tendency to get stuck in local solutions. To eliminate these deficiencies, this paper develops an enhanced variant of ARO, called the Lévy flight and selective opposition artificial rabbits optimization (LARO), by combining the Lévy flight and selective opposition strategies. First, a Lévy flight strategy is introduced in the random hiding phase to improve the diversity and dynamics of the population. The more diverse population deepens the global exploration process and thus improves the convergence accuracy of the algorithm. Then, ARO is further improved by introducing the selective opposition strategy, which enhances tracking efficiency and prevents ARO from stagnating in local solutions. LARO is compared with various algorithms on 23 classical functions and the IEEE CEC2017 and IEEE CEC2019 test sets. On these three test sets, LARO performed best on 15 (65%), 11 (39%), and 6 (38%) of the functions, respectively. The practicality of LARO is also demonstrated on six mechanical optimization problems. The experimental results show that LARO is a competitive MH algorithm for complicated optimization problems across different performance metrics.

1. Introduction

Most practical applications, by their very nature, reduce to finding an appropriate solution of an optimization problem [1]. Optimization has therefore received attention from the beginning, and the search for efficient methods for complicated optimization problems (COPs) has captured the interest of scholars in many fields. Traditional mathematical optimization methods require the associated objective function to satisfy convexity and separability; these properties guarantee that the method theoretically approximates the optimal solution. However, traditional mathematical strategies become unwieldy when dealing with highly complex and demanding optimization problems [2]. Newton’s method and the branch-and-bound method are typical deterministic algorithms. Although such algorithms are superior to nature-inspired metaheuristic algorithms on some single-parameter test functions, they tend to fall into local optimal solutions when faced with more demanding objective and constraint functions. Deterministic methods may not be effective for multimodal, discrete, non-differentiable, or non-convex problems, or for problems with a vast search space. In addition, deterministic algorithms sometimes require derivatives. Therefore, deterministic algorithms often fail in solving engineering problems [3].
MH techniques have recently attracted more scholarly attention because they provide suitable candidates for handling various complex and realistic optimization problems. In general, MH methods have several advantages over traditional mathematical optimization methods: they are efficient, low-complexity global optimization methods, and different solutions are searched in each iteration, which makes the resulting candidate solutions highly competitive in approaching the optimum [4].
Depending on the source of inspiration, experts tend to classify MH algorithms into four categories: evolution-based algorithms, group-intelligence-based algorithms, physical- or chemical-based algorithms, and human-behavior-based algorithms (HBAs) [5]. Evolution-based algorithms imitate the natural evolutionary laws of the biological world; examples of this category include genetic algorithms [6], evolutionary strategies [7], differential evolution [8], a classic algorithm based on mutation, crossover, and selection, and evolutionary programming (EP) [9].
Group-intelligence-based (GIB) algorithms are often motivated by the cooperative conduct of various plants and animals in natural environments that live in groups and work together to find food/prey. This GIB category includes aphid–ant mutualism (AAM) [10], bottlenose dolphin optimizer (BDO) [11], beluga whale optimization (BWO) [12], capuchin search algorithm (CapSA) [13], sand cat swarm optimization (SCSO) [14], manta ray foraging optimization algorithm (MRFO) [15], black widow optimization algorithm (BWOA) [16], and chimp optimization algorithm (CHOA) [17].
A physical- or chemical-based algorithm simulates the physical laws and chemical phenomena of nature, usually following a generic set of rules to model the interactions between candidate solutions. This type includes the gravitational search algorithm [18], a popular method motivated by Newton’s law of gravity, in which agents attract each other according to the law of gravitation. Other examples include atom search optimization [19], the ion motion algorithm [20], the equilibrium optimizer [21], and the water cycle algorithm (WCA) [22].
Finally, HBAs exploit various characteristics associated with human behavior. The main representatives of this category include human mental search [23], the poor and rich optimization algorithm [24], and teaching-learning-based optimization [25].
Dealing with COPs usually consists of two steps: exploration and exploitation, two opposing strategies. In the exploration step, the algorithm searches for better solutions across the global domain. In the exploitation step, the algorithm tends to refine the best solution found so far by exploring the vicinity of the candidate solutions. The trade-off between exploration and exploitation is considered one of the most common problems in current metaheuristic algorithms [26]: the search process tends to utilize one mechanism at the expense of the other. In this context, many scholars have proposed algorithms to balance these mechanisms. For example, Zamani proposed a quantum-based avian navigation optimizer algorithm inspired by the navigation behavior of migratory birds [27]. Nadimi-Shahraki introduced a multi-trial vector approach and an archiving mechanism into the differential evolution algorithm, thus proposing a diversity-maintained differential evolution algorithm [28]. ARO was recently suggested as a metaheuristic technique whose steps are inspired by the survival habits of rabbits in the natural world [29].
According to the no-free-lunch theorem [30], no nature-inspired method can optimally handle every realistic COP [31]. This implies that an optimization method suited to specific COPs may not be valid for others, and experimental results tend to reveal that artificial rabbits optimization has poor convergence accuracy and tends to get stuck in local solutions when handling complicated or high-dimensional problems. Therefore, motivated by these two observations, this paper suggests a hybrid artificial rabbits optimization with Lévy flight and a selective opposition strategy, called the enhanced ARO algorithm (LARO). LARO is a variant of the ARO algorithm. First, to enhance the global search capability of ARO, the Lévy flight strategy is fully utilized [32]; it helps LARO avoid local solutions and perform global exploration. Second, local exploitation of LARO is improved by a selective opposition strategy with better convergence accuracy [33]. The innovative points and major contributions of this paper are given below:
(i)
The Lévy flight strategy is introduced in the random hiding phase to improve the diversity and dynamics of the population, which further improves the convergence accuracy of ARO.
(ii)
The introduced selective opposition strategy extends the basic opposition strategy and adaptively re-updates the population to improve the ability to jump out of local optima.
(iii)
Numerical experiments are conducted on 23 standard test functions, the CEC2017 test set, and the CEC2019 test set.
(iv)
LARO is implemented and tested on six engineering design cases.
The remainder of this study is organized as given below. Section 2 describes the ARO mathematical model. The Lévy flight strategy, selective opposition strategy, and LARO algorithm are introduced in Section 3. Section 4 presents the numerical results and discussion of the proposed algorithm, mainly applied to 23 benchmark functions and the CEC2019 test set. An application of LARO to six real engineering problems is described in Section 5. Section 6 concludes this research work and discusses future prospects.

2. Artificial Rabbits Optimization (ARO)

The ARO algorithm is modeled mainly on two survival habits of rabbits in the natural world: detour foraging and random hiding [29]. Detour foraging is an exploration strategy in which rabbits eat grass far from their own nests to prevent detection by natural predators. Random hiding is a strategy in which rabbits move to other burrows to hide further. The beginning of any search algorithm relies on the initialization process. Suppose the design variable has dimension d, the size of the artificial rabbit colony is N, and the upper and lower limits are ub and lb. The initialization is then done as follows.
z_{i,k} = r · (ub_k − lb_k) + lb_k,  k = 1, 2, …, d   (1)
where z_{i,k} denotes the position of the kth dimension of the ith rabbit and r is a random number in (0, 1).
The metaheuristic algorithm mainly considers the two processes of exploration and exploitation, and detour foraging models the exploration phase. In detour foraging, each rabbit tends to move around a food source, randomly choosing another rabbit’s location in the group to explore in order to obtain enough food. The update formula for detour foraging is given below.
v_i(t + 1) = z_j(t) + R · (z_i(t) − z_j(t)) + round(0.5 · (0.05 + r1)) · n1,   (2)
R = l · C   (3)
l = (e − e^(((t − 1)/Tmax)²)) · sin(2π·r2)   (4)
C(k) = 1 if k == G(l), 0 else;  k = 1, …, d and l = 1, …, ⌈r3 · d⌉   (5)
G = randp(d)   (6)
n1 ~ N(0, 1)   (7)
where v_i(t + 1) denotes the new position of the ith artificial rabbit, i, j = 1, …, N, z_i denotes the position of the ith artificial rabbit, and z_j represents another, randomly selected rabbit. Tmax is the maximum number of iterations. ⌈·⌉ symbolizes the ceiling function, which rounds up to the nearest integer, and randp(d) returns a random permutation of the integers from 1 to d. r1, r2, and r3 are random numbers in (0, 1). l represents the running length, i.e., the movement speed during detour foraging. n1 obeys the standard normal distribution and provides a perturbation. The perturbation in the last term of Equation (2) helps ARO avoid local extrema and perform a global search.
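As an illustration, the detour-foraging update of Equations (2)–(7) can be sketched in NumPy. The function and array names are ours, and details such as how the random rabbit j is drawn are illustrative assumptions rather than the reference implementation:

```python
import numpy as np

def detour_foraging(z, i, t, t_max, rng):
    """One detour-foraging move for rabbit i; a sketch of Equations (2)-(7)."""
    n, d = z.shape
    j = rng.choice([k for k in range(n) if k != i])    # another randomly chosen rabbit
    r1, r2, r3 = rng.random(3)
    # Running length l (Eq. (4)) and mapping vector C (Eqs. (5)-(6))
    l = (np.e - np.exp(((t - 1) / t_max) ** 2)) * np.sin(2 * np.pi * r2)
    c = np.zeros(d)
    c[rng.permutation(d)[:int(np.ceil(r3 * d))]] = 1.0  # randomly chosen dimensions
    R = l * c                                           # Eq. (3)
    n1 = rng.standard_normal()                          # Eq. (7)
    # Eq. (2): move around rabbit j, plus a normally distributed perturbation
    return z[j] + R * (z[i] - z[j]) + round(0.5 * (0.05 + r1)) * n1
```

Only the dimensions selected in C are perturbed, which is what lets a rabbit explore around another rabbit's position without changing every coordinate at once.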
Random hiding models the exploitation stage of the algorithm: rabbits usually dig several burrows around their nests and randomly choose one to hide in to reduce the probability of being preyed upon. We first define the process by which rabbits randomly generate burrows. The ith rabbit produces the jth burrow by:
b_{i,j}(t) = z_i(t) + H · g · z_i(t),   (8)
H = ((Tmax − t + 1)/Tmax) · n2   (9)
n2 ~ N(0, 1)   (10)
g(k) = 1 if k == j, 0 else;  k = 1, …, d   (11)
where i = 1, …, N and j = 1, …, d, and n2 follows the standard normal distribution. H denotes the hiding parameter, which decreases linearly from 1 to 1/Tmax with stochastic perturbations. Figure 1 shows the change in the value of H over the course of 1000 iterations. In the figure, the value of H generally decreases, thus maintaining a balanced transition from exploration to exploitation throughout the iterations.
The update formula for the random hiding method is shown below.
v_i(t + 1) = z_i(t) + R · (r4 · b_{i,r}(t) − z_i(t)),   (12)
g_r(k) = 1 if k == ⌈r5 · d⌉, 0 else;  k = 1, …, d   (13)
b_{i,r}(t) = z_i(t) + H · g_r · z_i(t)   (14)
where v_i(t + 1) is the new position of the artificial rabbit, b_{i,r}(t) represents a burrow randomly selected for hiding among the d burrows generated by the rabbit, and r4 and r5 are random numbers in (0, 1). R is given by Equations (3)–(6).
After the two update strategies are implemented, we renew the position of the ith artificial rabbit by Equation (15).
z_i(t + 1) = { z_i(t),      if f(z_i(t)) ≤ f(v_i(t + 1))
            { v_i(t + 1),  if f(z_i(t)) > f(v_i(t + 1))   (15)
This equation represents an adaptive update. The rabbit automatically chooses whether to stay in its current position or move to a new one based on the adaptation value.
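The random-hiding move and the greedy selection of Equation (15) can be sketched similarly. Here R is assumed to be precomputed via Equations (3)–(6), and all names are illustrative:

```python
import numpy as np

def random_hiding(z, i, R, t, t_max, rng):
    """Random-hiding move for rabbit i; a sketch of Equations (12)-(14)."""
    d = z.shape[1]
    r4, r5 = rng.random(2)
    H = (t_max - t + 1) / t_max * rng.standard_normal()  # Eq. (9)
    g_r = np.zeros(d)
    g_r[max(int(np.ceil(r5 * d)), 1) - 1] = 1.0          # Eq. (13): one burrow dimension
    b = z[i] + H * g_r * z[i]                            # Eq. (14): the chosen burrow
    return z[i] + R * (r4 * b - z[i])                    # Eq. (12)

def greedy_update(z_i, v_i, f):
    """Eq. (15): keep whichever of the old and new positions is fitter."""
    return z_i if f(z_i) <= f(v_i) else v_i
```

The greedy rule guarantees that a rabbit's fitness never worsens, which is exactly the adaptive behavior described above.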
For an optimization algorithm, populations prefer to explore in the early stages and exploit in the middle and late stages. ARO relies on the energy of the rabbits to design this switch: the rabbits’ energy decreases over time, thus simulating the transition from exploration to exploitation. The energy factor in the artificial rabbits algorithm is defined as:
A(t) = 4 · (1 − t/Tmax) · ln(1/r)   (16)
where r is a random number in (0, 1). Figure 2 shows the change in the value of A over the course of 1000 iterations. The figure shows that the value of A decreases overall, thus maintaining a balanced transition from exploration to exploitation throughout the iterations. Algorithm 1 gives the pseudo-code of the fundamental artificial rabbits optimization. Figure 3 provides the flow chart of ARO.
Algorithm 1: The framework of artificial rabbits optimization
 1: Set the parameters of artificial rabbits optimization, including the size of the rabbit population N and TMax.
 2: Randomly initialize a set of rabbits zi and calculate fi.
 3: Find the best rabbit.
 4: While t ≤ TMax do
 5:  For i = 1 to N do
 6:   Calculate the energy factor A by Equation (16).
 7:   If A > 1 then
 8:    Randomly choose a rabbit from all individuals.
 9:    Compute R using Equations (3)–(6).
 10:    Perform the detour foraging strategy by Equation (2).
 11:    Calculate the fitness value of the rabbit’s position fi.
 12:    Update the position of the rabbit by Equation (15).
 13:   Else
 14:    Generate d burrows and select one randomly according to Equation (14).
 15:    Perform the random hiding strategy by Equation (12).
 16:    Calculate the fitness value of the rabbit’s position fi.
 17:    Update the position of the rabbit by Equation (15).
 18:   End if
 19:  End for
 20:  Search for the best artificial rabbit.
 21:  t = t + 1.
 22: End while
 23: Output the best artificial rabbit.
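The energy factor of Equation (16) is cheap to sample. A one-line sketch follows; drawing r in (0, 1] to keep the logarithm finite is our own implementation detail:

```python
import numpy as np

def energy_factor(t, t_max, rng):
    """Eq. (16): A(t) = 4 * (1 - t/T_max) * ln(1/r)."""
    r = 1.0 - rng.random()             # uniform in (0, 1], avoids log(0)
    return 4.0 * (1.0 - t / t_max) * np.log(1.0 / r)
```

Since E[ln(1/r)] = 1, the mean of A(t) decays linearly as 4·(1 − t/Tmax), which matches the overall decreasing trend shown in Figure 2 while individual draws still fluctuate.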

3. Hybrid Artificial Rabbits Optimization

Hybrid optimization algorithms are widely used in practical engineering because targeted improvements to the original algorithm enhance different aspects of its performance. For example, Liu proposed a new hybrid algorithm combining particle swarm optimization and a single-layer neural network to achieve the complementary advantages of both and successfully applied it to wavefront shaping [34]. Islam effectively solved the clustered vehicle routing problem by combining particle swarm optimization (PSO) and variable neighborhood search (VNS), fusing the diversity of solutions in PSO with the local refinement of VNS [35]. Devarapalli proposed a hybrid modified grey wolf optimization–sine cosine algorithm that effectively tunes power system stabilizer parameters in a multimachine power system [36]. To mitigate the poor accuracy and the tendency to fall into local solutions of the original ARO algorithm, we propose a hybrid, improved LARO algorithm by introducing a Lévy flight strategy and selective opposition into the ARO algorithm, and we apply the proposed algorithm to engineering optimization problems. The Lévy flight is employed to boost the algorithm’s accuracy, while the selective opposition strategy helps the algorithm jump out of local solutions.

3.1. Lévy Flight Method

The Lévy flight method is often introduced into improved and newly proposed algorithms, mainly to add dynamism to the algorithm’s updates. The Lévy flight operator generates random numbers that are small in most cases and occasionally very large. This generation law helps various update strategies stay dynamic and jump out of local solutions. The Lévy distribution is defined by the following equation, and Figure 4 shows a Lévy flight path in two-dimensional space [32].
Levy(t) ~ u = t^(−1−γ),  0 < γ ≤ 2,   (17)
where t is the step length, which can be calculated by Equation (18). The formulas for solving the step size of the Lévy flight are given in Equations (18)–(21).
t = u/|v|^(1/γ),   (18)
u ~ N(0, σu²),  v ~ N(0, σv²)   (19)
σu = (Γ(1 + β) · sin(πβ/2) / (Γ((1 + β)/2) · β · 2^((β − 1)/2)))^(1/β),   (20)
σv = 1,   (21)
where σu and σv are defined in Equations (20) and (21). Both u and v obey Gaussian distributions with mean 0 and variances σu² and σv², as shown in Equation (19). Γ denotes the standard Gamma function, while β is the stability parameter, usually set to 1.5.
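Equations (18)–(21) are Mantegna's algorithm for generating Lévy-stable step lengths; a sketch (the function name is ours):

```python
import math
import numpy as np

def levy_step(beta=1.5, size=1, rng=None):
    """Lévy step lengths via Mantegna's algorithm (Eqs. (18)-(21))."""
    rng = rng if rng is not None else np.random.default_rng()
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)   # u ~ N(0, sigma_u^2), Eq. (19)
    v = rng.normal(0.0, 1.0, size)       # v ~ N(0, sigma_v^2) with sigma_v = 1
    return u / np.abs(v) ** (1 / beta)   # Eq. (18): mostly small, occasionally huge
```

Most draws are of order one, but the heavy tail occasionally produces very large jumps, which is exactly the mix of small local moves and rare long jumps described above.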
In the random hiding phase, we replace the random number r4 with a random number generated by the Lévy flight strategy. Since random hiding is an exploitation stage, introducing the Lévy flight here prevents ARO from falling into local candidate solutions during exploitation and improves the convergence accuracy and flexibility of the random hiding stage. The following equation gives the Lévy-flight-based random hiding phase, where α is a parameter fixed to 0.1.
v_i(t + 1) = z_i(t) + R · (α · levy(β) · b_{i,r}(t) − z_i(t)),  i = 1, …, N   (22)

3.2. Selective Opposition (SO) Strategy

SO is a modified idea of opposition-based learning (OBL) [33]. The idea of SO is to modify the positions of rabbits far from the optimal solution using opposition-based learning so that they move closer to the best rabbit. In addition, the selective opposition strategy is governed by a linearly decreasing threshold. When the rabbits deploy SO, selective opposition helps them reach a better situation in the exploitation phase by changing which dimensions of each rabbit are treated as close or far [37]. The updates are as follows.
First, we define a threshold value, which decreases as the iterations proceed. As shown in the following equation, SO computes, for each candidate rabbit, the distance in every dimension between that rabbit and the best rabbit position.
dd_j = |z_{best,j} − z_{i,j}|   (23)
where dd_j is the difference distance in the jth dimension between the ith rabbit and the best rabbit. When dd_j is compared against the threshold (TS) we define, the far and close dimensions of the rabbit are determined. Then, the difference distances over all dimensions are accumulated.
src = 1 − (6 · Σ_{j=1}^{d} (dd_j)²)/(d · (d² − 1))   (24)
The src mainly measures the correlation between the current rabbit and the optimal rabbit position. If src ≤ 0 and the number of far dimensions (df) is larger than the number of close dimensions (dc), the rabbit’s position is updated by Equation (25).
Z′_{df} = lb_{df} + ub_{df} − Z_{df}   (25)
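Putting Equations (23)–(25) together, one SO sweep might look as follows. The far/close labelling follows Algorithm 2 as printed, and the Spearman-type form of src is our reading of Equation (24); both are illustrative assumptions:

```python
import numpy as np

def selective_opposition(z, z_best, lb, ub, t, t_max):
    """One SO sweep over the population; a sketch of Eqs. (23)-(25), d > 1."""
    n, d = z.shape
    ts = 2.0 - t * (2.0 / t_max)                 # linearly decreasing threshold
    for i in range(n):
        if np.array_equal(z[i], z_best):
            continue                             # never oppose the best rabbit
        dd = np.abs(z_best - z[i])               # Eq. (23)
        far = dd < ts                            # far/close split, as in Algorithm 2
        src = 1.0 - 6.0 * np.sum(dd ** 2) / (d * (d ** 2 - 1))  # Eq. (24)
        if src <= 0 and far.sum() > d - far.sum():
            z[i, far] = lb[far] + ub[far] - z[i, far]            # Eq. (25)
    return z
```

Only the flagged dimensions are reflected through the midpoint of the bounds, so a rabbit keeps its good coordinates while the poorly correlated ones are opposed.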
Algorithm 2 gives the pseudo-code for selective opposition (SO).
Algorithm 2: Selective Opposition (SO)
 1: Set the parameters of selective opposition, including: current generation (t), population size (N), maximum generation (TMax), dimension (d), dc = [], and df = [].
 2: TS = 2 − [t·(2/TMax)].
 3: For i = 1 to N do
 4:  If Zi ≠ Zbest then
 5:   For j = 1 to d do
 6:    ddj = |zbest,j − zi,j| {ddj = the discrepancy distance of the jth dimension}
 7:    If ddj < TS then
 8:     Determine the far dimensions (df).
 9:     Count the far dimensions (df).
 10:    Else
 11:     Determine the close dimensions (dc).
 12:     Count the close dimensions (dc).
 13:    End if
 14:   End for
 15:   Sum over all ddj.
 16:   src = 1 − 6·Σ(ddj)²/(d·(d² − 1)).
 17:   If src ≤ 0 and size(df) > size(dc) then
 18:    Perform Z′df = LBdf + UBdf − Zdf.
 19:   End if
 20:  End if
 21: End for

3.3. Detailed Implementation of LARO

Two modifications, namely Lévy flight and selective opposition, are included in ARO. These modifications help the ARO algorithm improve convergence and population diversity while obtaining better candidate solutions. The detailed procedure of LARO is as follows.
Step1: Supply suitable parameters for LARO: the size of the artificial rabbit population N, the dimensionality of the variables d, the upper and lower bounds ub and lb of the problem variables, and the maximum number of iterations TMax;
Step2: Randomly generate a set of rabbit positions and calculate their fitness values. Find the rabbit with the best position;
Step3: Calculate the value of the energy factor A by Equation (16). If A > 1, select an arbitrary rabbit from the population;
Step4: Calculate the value of R using Equations (3)–(6). Perform the detour foraging strategy by Equation (2). Then calculate the fitness value of the updated rabbit position and update the rabbit position by Equation (15);
Step5: If A ≤ 1, generate d burrows and randomly select one according to Equation (14). Update the rabbit’s position by the random hiding strategy based on the improved Lévy flight strategy of Equation (22). Calculate the corresponding fitness and then update the rabbit’s position by Equation (15);
Step6: Calculate, for each candidate rabbit, the per-dimension distance between the current rabbit and the best rabbit position by Equation (23);
Step7: If ddj < TS, determine the far dimensions df and count them; otherwise, determine the close dimensions dc and count them. Then calculate src from the computed ddj by Equation (24);
Step8: If src ≤ 0 and size(df) > size(dc), execute Equation (25) and re-update the rabbit’s position;
Step9: If the iterations exceed the maximum number, output the optimal result.
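For concreteness, the nine steps can be assembled into a compact, self-contained sketch. The bound clipping, the greedy acceptance inside SO, and the Spearman-type src are our illustrative choices under stated assumptions, not necessarily the authors' exact implementation:

```python
import math
import numpy as np

def laro(f, lb, ub, n=30, t_max=200, alpha=0.1, beta=1.5, seed=0):
    """Minimal LARO sketch following Steps 1-9 (illustrative, not reference code)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = len(lb)
    z = rng.random((n, d)) * (ub - lb) + lb                      # Step 2, Eq. (1)
    fit = np.array([f(x) for x in z])
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    for t in range(1, t_max + 1):
        for i in range(n):
            r1, r2, r3, r5 = rng.random(4)
            l = (math.e - math.exp(((t - 1) / t_max) ** 2)) * math.sin(2 * math.pi * r2)
            c = np.zeros(d)
            c[rng.permutation(d)[:int(math.ceil(r3 * d))]] = 1.0
            R = l * c                                            # Eqs. (3)-(6)
            A = 4 * (1 - t / t_max) * math.log(1 / (1 - rng.random()))  # Eq. (16)
            if A > 1:                                            # Steps 3-4: detour foraging
                j = int(rng.integers(n))
                v = z[j] + R * (z[i] - z[j]) + round(0.5 * (0.05 + r1)) * rng.standard_normal()
            else:                                                # Step 5: Levy random hiding
                H = (t_max - t + 1) / t_max * rng.standard_normal()
                g = np.zeros(d)
                g[max(int(math.ceil(r5 * d)), 1) - 1] = 1.0
                b = z[i] + H * g * z[i]                          # Eq. (14)
                step = rng.normal(0, sigma_u) / abs(rng.standard_normal()) ** (1 / beta)
                v = z[i] + R * (alpha * step * b - z[i])         # Eq. (22)
            v = np.clip(v, lb, ub)                               # illustrative bound handling
            fv = f(v)
            if fv < fit[i]:                                      # Eq. (15)
                z[i], fit[i] = v, fv
        ts = 2 - t * (2 / t_max)                                 # Steps 6-8: selective opposition
        best = z[fit.argmin()]
        for i in range(n):
            dd = np.abs(best - z[i])
            far = dd < ts
            src = 1 - 6 * np.sum(dd ** 2) / (d * (d ** 2 - 1)) if d > 1 else 1.0
            if src <= 0 and far.sum() > d - far.sum():
                zo = z[i].copy()
                zo[far] = lb[far] + ub[far] - z[i, far]          # Eq. (25)
                fo = f(zo)
                if fo < fit[i]:                                  # greedy acceptance (our choice)
                    z[i], fit[i] = zo, fo
    ib = int(fit.argmin())
    return z[ib], float(fit[ib])                                 # Step 9
```

On a 5-dimensional sphere function, for example, this sketch reliably improves on the random initial population within a few hundred iterations.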
To better introduce the proposed LARO algorithm, its pseudo-code is offered in Algorithm 3, where line 15 is the random hiding strategy improved with Lévy flight and lines 20–40 are the selective opposition strategy. Figure 5 illustrates the flowchart of the LARO algorithm.
Algorithm 3: The framework of LARO
 1: Set the parameters of LARO: the size of the rabbit population N, TMax, the sensitive parameters α and β, dc = [], and df = [].
 2: Randomly initialize a set of rabbits zi and calculate fi.
 3: Find the best rabbit.
 4: While t ≤ TMax do
 5:  For i = 1 to N do
 6:   Compute the energy factor A using Equation (16).
 7:   If A > 1 then
 8:    Randomly choose a rabbit from all individuals.
 9:    Compute R using Equations (3)–(6).
 10:    Perform the detour foraging strategy by Equation (2).
 11:    Calculate the fitness value of the rabbit’s position fi.
 12:    Update the position of the rabbit by Equation (15).
 13:   Else
 14:    Generate d burrows and select one randomly according to Equation (14).
 15:    Perform the Lévy-flight random hiding strategy by Equation (22).
 16:    Calculate the fitness value of the rabbit’s position fi.
 17:    Update the position of the rabbit by Equation (15).
 18:   End if
 19:  End for
 20:  TS = 2 − [t·(2/TMax)].
 21:  For i = 1 to N do
 22:   If Zi ≠ Zbest then
 23:    For j = 1 to d do
 24:     ddj = |zbest,j − zi,j| {ddj = the discrepancy distance of the jth dimension}
 25:     If ddj < TS then
 26:      Determine the far dimensions (df).
 27:      Count the far dimensions (df).
 28:     Else
 29:      Determine the close dimensions (dc).
 30:      Count the close dimensions (dc).
 31:     End if
 32:    End for
 33:    Sum over all ddj.
 34:    src = 1 − 6·Σ(ddj)²/(d·(d² − 1)).
 35:    If src ≤ 0 and size(df) > size(dc) then
 36:     Perform Z′df = LBdf + UBdf − Zdf.
 37:    End if
 38:   End if
 39:  End for
 40:  Update the position of the rabbit by Equation (15).
 41:  Search for the best rabbit zbest.
 42:  t = t + 1.
 43: End while
 44: Output the best artificial rabbit.

3.4. In-Depth Discussion of LARO Complexity

The complexity of LARO is estimated by adding the selective opposition part to the base ARO algorithm; the Lévy strategy only changes how ARO updates positions and does not increase the complexity. Estimating the complexity is an effective way to assess the cost of solving real problems. The complexity depends on the size of the artificial rabbit population N, the dimension d, and TMax. The total complexity of the artificial rabbits algorithm is as follows [29].
O(ARO) = O(1 + N + TMax·N + 0.5·TMax·N·d + 0.5·TMax·N·d) = O(TMax·N·d + TMax·N + N)
The selective opposition strategy sweeps over all dimensions of all rabbit positions. Therefore, the complexity of the LARO algorithm is:
O(LARO) = O(2·TMax·N·d + TMax·N + N)

4. Numerical Experiments

To validate the capabilities of the LARO algorithm numerically, two basic suites were selected: 23 benchmark test functions [26] and ten benchmark functions from the standard CEC2019 test suite [26]. We selected several metaheuristic algorithms to compare with the proposed LARO, including the arithmetic optimization algorithm (AOA) [38], grey wolf optimization (GWO) [39], coot optimization algorithm (COOT) [40], golden jackal optimization (GJO) [41], weighted mean of vectors (INFO) [42], moth–flame optimization (MFO) [43], multi-verse optimization (MVO) [44], sine cosine optimization algorithm (SCA) [45], salp swarm optimization algorithm (SSA) [46], and whale optimization algorithm (WOA) [47]. LARO was compared with all these search algorithms using the Wilcoxon rank-sum and Friedman mean-rank tests. Each algorithm was run 20 times independently. In addition, to better present the experiments, we recorded the best, worst, mean, and standard deviation (STD) values. The main parameters of the other algorithms are provided in Table 1.

4.1. Experimental Analysis of Exploration and Exploitation

Differences between candidate solutions across dimensions and the overall search direction tend to determine whether the population diverges or aggregates. When the population diverges, the differences among candidate individuals in all dimensions come to the fore, meaning that the individuals explore the domain broadly; this allows the optimization method to analyze the candidate solution space more extensively. Conversely, when the population aggregates, the candidate solutions search the space in a coordinated manner, reducing the variability among individuals and exploiting the current region in detail. Maintaining the right balance between this divergent exploration pattern and the aggregated exploitation pattern is necessary to ensure optimization capability.
For the experimental part, we draw on the dimensional diversity metric suggested by Hussain et al. in [48] and calculate corresponding exploration and exploitation ratios. We selected the CEC2019 test set and provided the exploration and exploitation analysis graphs for some of the CEC2019 test functions in Figure 6.
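As we understand the dimension-wise diversity measure of Hussain et al. [48], it computes a median-based diversity per iteration and normalizes by its maximum over the run; a sketch under that assumption:

```python
import numpy as np

def xpl_xpt(history):
    """Exploration/exploitation percentages from a population history.

    history: array of shape (T, N, d) - the population at each iteration.
    Assumed form of the Hussain et al. dimension-wise diversity measure.
    """
    # Mean absolute deviation from the dimension-wise median, per iteration
    div = np.array([np.mean(np.abs(np.median(pop, axis=0) - pop)) for pop in history])
    div_max = div.max()
    xpl = 100.0 * div / div_max                    # exploration %
    xpt = 100.0 * np.abs(div - div_max) / div_max  # exploitation %
    return xpl, xpt
```

By construction the two percentages sum to 100 at every iteration, so the curves in Figure 6 mirror each other around the 50% line.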
From the figure, we can see that LARO starts with exploration on all test functions and then gradually transitions to exploitation. On cec03 and cec09, LARO still maintains a high exploration rate in the middle and late iterations, while on cec04, cec07, and cec08, LARO shifts quickly to a high exploitation rate in the mid-term and ends the iterations in an exploitation-dominated state. This behavior shows that the high exploration rate early in LARO guarantees a thorough global search that prevents premature convergence to local solutions, while the high exploitation rate in the later period guarantees that the promising regions found are refined with higher accuracy.

4.2. Comparative Analysis of Populations and Maximum Iterations

The population size and the maximum iterations affect the performance of the population-based metaheuristic algorithm. Therefore, in this section, we perform a sensitivity analysis of LARO involving the size of the initial population as well as the maximum iterations. This study considers the two most commonly used combinations of population and maximum iterations: (1) the size of artificial rabbit colonies is 50, and Tmax = 1000, (2) the size of artificial rabbit colonies is 100, and Tmax = 500, and LARO for the experiments conducted in case (1) is defined as LARO1, and LARO in case (2) is LARO2. The performance and running time of LARO1 and LARO2 are compared in the experiments with 23 test functions.
Table 2 provides a comparison of LARO with the two different parameter settings on the 23 benchmark functions. From the numerical results, both settings used almost identical running times. However, the convergence accuracy of LARO1 is better than that of LARO2, which indicates that LARO with the case (1) parameters provides better convergence accuracy at the same running cost. Therefore, the performance of LARO is affected by the population size and the number of iterations: the best performance was returned when the population size was set to 50 and the maximum number of iterations to 1000.

4.3. Analysis of the Lévy Flight Jump Parameter α

According to the mechanism of the Lévy flight strategy, we replace the random number r4 with a random number generated by the Lévy flight, thus preventing ARO from falling into local candidate solutions in the exploitation phase. The jump parameter α affects the magnitude of the updated position. In general, a larger jump parameter α increases the step size of the Lévy flight and helps the algorithm jump out of local solutions, but it may also prevent information about the optimal solution from being preserved. If the value is too small, it reduces the sensitivity of the Lévy flight strategy and thus the accuracy of the algorithm. Therefore, the jump parameter α has a great impact on the performance of LARO.
This section investigates the impact of the jump parameter α on the performance of the algorithm using the 10 test functions of CEC2019. The jump parameter takes four fixed values (0.01, 0.05, 0.1, and 0.5) and three value intervals ([0.01, 0.05], [0.05, 0.1], and [0.1, 0.5]); for the intervals, α is drawn at random from within the given range. The mean values of the solutions obtained by LARO on the CEC2019 test functions over 20 independent trials are provided in Table 3. For a clearer view of the effect of the jump parameter α on LARO's performance, Figure 7 provides the convergence curves for the ten test functions.
Analyzing Table 3, the smallest average rank (3.2) is obtained for the jump parameter α = 0.1, with the best average values on five test functions (cec01, cec05, cec07, cec08, cec10). This intermediate value of α balances retaining information about the current optimum against jumping out of local solutions. Figure 7 plots the convergence curves for the seven jump parameter settings; it can be seen that LARO converges faster and reaches higher accuracy with α = 0.1. Therefore, LARO shows the best performance when the jump parameter α is set to 0.1.

4.4. Experiments on the 23 Classical Functions

To evaluate the strength of LARO in traversing the solution space, finding the optimal candidate solution, and escaping local solutions, we used 23 benchmark functions. The unimodal benchmarks (F1–F7) examine the exploitation accuracy of LARO, the multimodal benchmarks (F8–F13) test the spatial exploration capability of LARO, and the fixed-dimension multimodal benchmarks (F14–F23) verify LARO's ability to handle low-dimensional search. The dimensions of F1–F13 are 30, while the dimensions of the fixed-dimension multimodal functions F14–F23 are 2, 4, 2, 2, 2, 3, 6, 4, 4, and 4, respectively.
Table 4 shows the experimental and statistical results of LARO, the original ARO, and ten other search algorithms. Five evaluation metrics (best, worst, average, standard deviation, and rank) were recorded for this experiment, and Friedman ranks were computed for all algorithms based on the mean values. In addition, the Wilcoxon test statistics for LARO versus the other selected MH algorithms are shown in Table 4, with the significance level set to 0.05. Here, "+" denotes that a given MH algorithm converges better than LARO, "−" denotes the opposite, and "=" indicates that convergence on a given test problem is statistically equivalent to that of LARO.
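The "+"/"−"/"=" bookkeeping can be sketched with a rank-sum comparison of two algorithms' independent runs. The snippet below is a self-contained illustration using the normal approximation without tie correction; it is not the exact statistical routine used in the paper, and the function names are ours.

```python
import math
import statistics

def _ranks(values):
    """Ranks (1-based) with ties assigned their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def ranksum_pvalue(a, b):
    """Two-sided Wilcoxon rank-sum p-value (normal approximation, no tie correction)."""
    n1, n2 = len(a), len(b)
    ranks = _ranks(list(a) + list(b))
    w = sum(ranks[:n1])                      # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))

def compare(laro_runs, other_runs, alpha=0.05):
    """'+' if the other algorithm is significantly better (lower mean, minimization),
    '-' if LARO is significantly better, '=' if no significant difference."""
    if ranksum_pvalue(laro_runs, other_runs) >= alpha:
        return '='
    return '+' if statistics.mean(other_runs) < statistics.mean(laro_runs) else '-'
```

Running `compare` once per (algorithm, function) pair and tallying the symbols yields the x/y/z summaries reported in Tables 5 and 8.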
Analysis of the table shows that the proposed LARO has a Friedman rank of 1.6957 and takes first place, followed by ARO, ranked second with 2.3478. Figure 8 provides the average ranks of the 12 compared algorithms. LARO provides the best result among all algorithms on 16 of the tested functions. In more detail, LARO ranked first on two unimodal functions (F1 and F3) and obtained the best results on four multimodal functions (F9, F10, F11, F13). Additionally, it obtained the best results on eight fixed-dimension functions (F14, F16, F17, F18, F19, F20, F21, and F23) and shows strong competitiveness on several others (F4, F8, F15, F22). Moreover, LARO ties with some other algorithms on certain functions; for example, ARO, GJO, INFO, and LARO all obtain the best average solution on F9 and F11. ARO itself successfully solves three unimodal and seven multimodal problems. It can thus be seen that LARO mainly improves the original algorithm's ability to handle unimodal problems while also somewhat enhancing its ability on multimodal and fixed-dimension problems.
Analyzing the experimental results and tests, LARO improves the convergence ability of the algorithm by introducing the Lévy flight strategy in the random hiding phase, which yields good convergence behavior and accuracy on unimodal problems without multiple solutions. In addition, owing to the selective opposition strategy introduced into ARO, LARO can effectively filter the optimal solution from among multiple local solutions when dealing with multimodal and fixed-dimension problems, because the selective opposition strategy helps the algorithm jump out of local solutions adaptively. The experimental results also demonstrate that LARO converges better than ARO and the other algorithms on multimodal and fixed-dimension problems. LARO can therefore be considered a reliable optimization method in terms of performance. However, we also found that LARO occasionally upsets the balance between exploration and exploitation during the overall iterative process, which affects the algorithm's performance.
Table 5 gives the p-values of the Wilcoxon test between LARO and the 11 other MH algorithms, used to check whether LARO outperforms them. The Wilcoxon test results for the ARO, AOA, GWO, COOT, GJO, and INFO algorithms are 2/16/5, 1/1/21, 0/1/22, 0/11/12, 3/2/18, and 3/12/8, and those for MFO, MVO, SCA, SSA, and WOA are 1/5/17, 0/1/22, 0/1/22, 1/7/15, and 2/3/18, respectively.
Figure 9 offers the convergence plots of the twelve methods on the 23 benchmark functions, where the X-axis represents the iterations and the Y-axis the fitness value (F1–F7 and F10–F15 are plotted on a log10 scale). The results demonstrate that LARO has a high convergence rate and accuracy on several of the unimodal functions F1–F7 (F1, F2, F3, F4, F5), and LARO continues to improve accuracy near the optimal solution late in the iterations, showing reliable performance in escaping local optima. For the F8–F23 functions, LARO transitions rapidly between the early search and the late exploitation phases and converges near the optimal position early in the iterations; it then progressively refines the best position found and updates the solution to confirm the earlier search results. Figure 10 illustrates box plots of the 12 MH algorithms, showing the distribution of the means on the various problems. On most of the tested problems, the distribution of LARO is more concentrated and lower than those of the other algorithms, illustrating the consistency and stability of LARO. Overall, LARO handles the 23 basic test functions very well.

4.5. Experiments on the CEC2017 Classical Functions

In this section, the proposed LARO is evaluated on 29 test functions of CEC2017 (cec01 and cec03–cec30), each with problem dimension 10. LARO and the other comparison methods are each executed 20 times, with the same parameters as in Section 4.4. The numerical results include the outputs of ARO [29], BWO [12], CapSA [13], GA [49], PSO [50], RSA [51], WSO [52], GJO [41], E-WOA [53], WMFO [54], and CSOAOA [26]. Table 6 compares the proposed LARO algorithm against these methods on all 29 tested functions, and the Friedman statistical test results, ranked by mean value, are given in the last part of the table.
As shown in Table 6, the average Friedman rank of LARO is 1.8621, while those of WSO and ARO are 2.6897 and 2.7241, respectively, so LARO ranks first overall. The results show that LARO provides a good output profile on the 29 tested functions: it achieves the best results on 12 functions (cec05, cec07, cec09, cec11, cec15, cec17, cec18, cec20, cec22, cec23, cec28, cec30) and obtains competitive optimization results and average values on ten further functions (cec01, cec03, cec06, cec08, cec10, cec14, cec19, cec21, cec27, cec29). The numerical results show that LARO performs excellently on the unimodal problems, again indicating fast convergence, while its performance on the multimodal functions illustrates that the introduced selective opposition effectively helps the algorithm jump out of local solutions. On the composition and hybrid functions, LARO demonstrates excellent optimization ability, indicating the effectiveness of the Lévy flight strategy in improving the accuracy of the algorithm. By comparison, WSO and ARO successfully solve six functions (cec01, cec08, cec10, cec12, cec13, cec29) and four functions (cec06, cec14, cec19, cec26), respectively.

4.6. Experiments on CEC2019 Test Functions

In this section, the proposed LARO is tested on the ten functions of CEC2019 [26]. The LARO algorithm is executed 20 times, with the same parameters as in the numerical experiments of Section 4.3. The dimensionality of cec01–cec03 differs from the others (9, 16, and 18, respectively), while the problem dimensionality of cec04–cec10 is 10 [55]. The numerical results are compared with those of AOA [38], GWO [39], COOT [40], GJO [41], INFO [42], MFO [43], MVO [44], SCA [45], SSA [46], and WOA [47]. Table 7 compares the proposed LARO algorithm with these methods on all ten tested functions using four relevant evaluation metrics, and the Wilcoxon and Friedman statistical test results are given in the last part of the table. The experimental results show that LARO is superior in handling these challenging optimization functions: it ranks first with an average rank of 1.3636 and performs best on seven of the ten CEC2019 functions (cec01, cec04, cec05, cec06, cec07, cec08, cec10), while ARO shows the best results on the other three (cec02, cec03, cec09). The numerical results demonstrate that LARO can accurately approach the optimal solution and is highly competitive with the other MH methods across various types of problems. Moreover, they demonstrate that the addition of the Lévy flight and selective opposition strategies enhances the diversity of the population and the accuracy of the solutions, effectively avoiding local optima.
The convergence plots of the MH methods in Figure 11 show the high quality and accuracy of the LARO solutions and its notable convergence speed on, for example, cec01, cec02, cec03, cec04, cec05, cec06, cec07, cec08, and cec10. Box plots and radar plots of the CEC2019 runs are provided in Figure 12 and Figure 13, respectively; the box plots have very small widths, indicating the stability and superiority of LARO, and the radar plot shows that LARO has the smallest rank across all the tested functions. Table 8 gives the p-values of the Wilcoxon test between LARO and the 11 other MH methods, used to check whether LARO outperforms them. The Wilcoxon test results for the ARO, AOA, GWO, COOT, GJO, and INFO algorithms are 2/7/1, 0/0/10, 0/2/8, 0/1/9, 0/0/…, and those for MFO, MVO, SCA, SSA, and WOA are 0/0/10, 0/1/9, 0/0/10, 0/1/9, and 0/0/10, respectively.

4.7. Impact Analysis of Each Improvement

The preceding sections presented numerical experiments comparing LARO with other algorithms on three standard test sets (the 23 benchmark functions, CEC2017, and CEC2019). This section summarizes the impact of each improvement strategy on the algorithm's performance.
The Lévy flight strategy is introduced into ARO mainly to address the low convergence accuracy of the original algorithm. Unimodal functions (e.g., F1–F7) are often used to test the convergence accuracy of an algorithm because they have no multiple solutions and are easy to explore to the vicinity of the optimum. In the numerical experiments on the 23 benchmark functions, LARO ranks 1, 3, 1, 2, 3, 5, and 3 on the unimodal functions F1–F7, respectively; except for F5–F6, the convergence accuracy of LARO is higher than that of the original ARO. In addition, in the CEC2017 experiments, LARO ranks better than the original ARO on both cec01 and cec03. Therefore, the Lévy flight strategy successfully helps ARO improve its convergence accuracy.
The selective opposition strategy is introduced mainly to help ARO jump out of local solutions in time. Multimodal functions (e.g., F8–F13, cec04–cec10 of CEC2017, and cec01–cec10 of CEC2019) have multiple solutions and are prone to trap search agents near local optima during the search, which degrades convergence; an algorithm's behavior on these functions therefore reflects its ability to escape local solutions. In the numerical experiments on the 23 benchmark functions, LARO ranks 2, 1, 1, 1, 1, 3, and 1 on F8–F13, respectively; except for F12, LARO's average result is better than that of the original ARO. In addition, LARO ranks 3, 1, 2, 1, 2, 1, 1, and 2 on cec04–cec10 of CEC2017; except for cec06, the average result of LARO is better than that of the original ARO. On the CEC2019 test set, the average rank of LARO is 1.3636, better than ARO's 1.9091. Therefore, LARO converges better than the original ARO on multimodal functions, indicating that the selective opposition strategy helps LARO escape local solutions more effectively.
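For reference, classic opposition-based learning reflects a solution through the center of the search box; the paper's selective variant applies this reflection only to a chosen subset of dimensions. The sketch below uses a random dimension mask purely as a placeholder for that selection rule, which is not reproduced here; the function names are ours.

```python
import random

def opposite(x, lb, ub):
    """Opposition-based learning: reflect selected dimensions through lb + ub - x.

    A random mask stands in for the paper's (unreproduced) selection rule,
    which chooses which dimensions to oppose.
    """
    mask = [random.random() < 0.5 for _ in x]
    return [lb[i] + ub[i] - x[i] if mask[i] else x[i] for i in range(len(x))]

def keep_better(x, x_opp, f):
    """Greedy selection: keep whichever of x and its opposite has lower fitness."""
    return x_opp if f(x_opp) < f(x) else x
```

Because the opposite point lies in the mirror-image region of the box, evaluating both and keeping the better one gives the population a cheap chance to leave a poor basin of attraction.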

5. Application of LARO in Semi-Real Mechanical Engineering

This section applies LARO to six practical mechanical engineering problems. Many constraint-handling techniques exist for optimization problems, such as penalty functions, co-evolutionary methods, adaptive penalties, and annealing penalties [56]. Among them, the penalty function is the most widely used strategy because it is simple to construct and easy to operate. This paper therefore uses the penalty function strategy to handle the constraints of these six mechanical engineering optimization models. A constrained minimization problem is defined as follows:
Minimize:
$f(\bar{x}), \quad \bar{x} = [x_1, x_2, \ldots, x_n]$
Subject to:
$\begin{cases} g_i(\bar{x}) \le 0, & i = 1, 2, \ldots, m \\ h_j(\bar{x}) = 0, & j = 1, 2, \ldots, k \end{cases}$
where m is the number of inequality constraints and k is the number of equality constraints, and x̄ is the design variable vector of the engineering problem with dimension n. When boundary constraints exist, every dimension must satisfy:
$lb_i \le x_i \le ub_i, \quad i = 1, 2, \ldots, n$
where lbi and ubi are the lower and upper bounds of the i-th design variable and n is the number of decision variables.
Therefore, the mathematical description of the engineering optimization problem after constraint weighting is
$F(\bar{x}) = f(\bar{x}) + \alpha \sum_{i=1}^{m} \max\{g_i(\bar{x}), 0\} + \beta \sum_{j=1}^{k} |h_j(\bar{x})|$
where α is the weight of the inequality constraints and β is the weight of the equality constraints. To ensure that the optimization process satisfies both kinds of constraints, α and β must be large; this paper sets both to 1 × 10^5 [38]. The objective function is therefore severely penalized (its value increases sharply) whenever a candidate solution violates any constraint, which lets the algorithm discard infeasible solutions encountered during the iterative process.
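The penalty formulation above can be wrapped around any objective as follows. This is a minimal sketch with our own function names; the absolute value is applied to the equality terms, as is common with static penalties.

```python
def penalized(f, g_list, h_list, alpha=1e5, beta=1e5):
    """Static-penalty wrapper for the formulation above:
    F(x) = f(x) + alpha * sum(max(g_i(x), 0)) + beta * sum(|h_j(x)|).
    """
    def F(x):
        pen_g = sum(max(g(x), 0.0) for g in g_list)  # inequality violations
        pen_h = sum(abs(h(x)) for h in h_list)       # equality violations
        return f(x) + alpha * pen_g + beta * pen_h
    return F
```

A feasible point is evaluated at its true objective value, while any violation adds a term five orders of magnitude larger than typical objective values, so the optimizer steers away from infeasible regions.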
LARO and all the comparison algorithms were executed 30 times. The relevant parameters were a maximum iteration of 1000 and a population size of 50. In addition, for the solution of the practical engineering applications, we used the same comparison algorithms as in the numerical experiments, including AOA [38], GWO [39], COOT [40], GJO [41], INFO [42], MFO [43], MVO [44], SCA [45], SSA [46], WOA [47].

5.1. Welded Beam Design Problem (WBD)

The WBD problem requires minimizing the fabrication cost of the welded beam under various constraints. The schematic structural diagram of the WBD is provided in Figure 14. The WBD has four independent design variables: the weld thickness (h), the attached length of the bar (l), the bar height (t), and the bar thickness (b) [38]. The variables are required to satisfy seven constraints. The model of the WBD is given below.
$z = [z_1, z_2, z_3, z_4] = [h, l, t, b]$
Minimize:
$f(z) = 1.10471 z_1^2 z_2 + 0.04811 z_3 z_4 (14.0 + z_2)$
Variable range:
$0.1 \le z_1 \le 2, \quad 0.1 \le z_2 \le 10, \quad 0.1 \le z_3 \le 10, \quad 0.1 \le z_4 \le 2$
Subject to:
$g_1(z) = \tau(z) - \tau_{\max} \le 0$
$g_2(z) = \sigma(z) - \sigma_{\max} \le 0$
$g_3(z) = \delta(z) - \delta_{\max} \le 0$
$g_4(z) = z_1 - z_4 \le 0$
$g_5(z) = P - P_c(z) \le 0$
$g_6(z) = 0.125 - z_1 \le 0$
$g_7(z) = 1.10471 z_1^2 + 0.04811 z_3 z_4 (14.0 + z_2) - 5.0 \le 0$
where
$\tau(z) = \sqrt{(\tau')^2 + 2 \tau' \tau'' \frac{z_2}{2R} + (\tau'')^2}$
$\tau' = \frac{P}{\sqrt{2} z_1 z_2}, \quad \tau'' = \frac{MR}{J}$
$M = P \left( L + \frac{z_2}{2} \right)$
$R = \sqrt{\frac{z_2^2}{4} + \left( \frac{z_1 + z_3}{2} \right)^2}$
$J = 2 \left\{ \sqrt{2} z_1 z_2 \left[ \frac{z_2^2}{4} + \left( \frac{z_1 + z_3}{2} \right)^2 \right] \right\}$
$\sigma(z) = \frac{6 P L}{z_4 z_3^2}, \quad \delta(z) = \frac{6 P L^3}{E z_3^2 z_4}$
$P_c(z) = \frac{4.013 E \sqrt{z_3^2 z_4^6 / 36}}{L^2} \left( 1 - \frac{z_3}{2L} \sqrt{\frac{E}{4G}} \right)$
$P = 6000~\mathrm{lb}, \quad L = 14~\mathrm{in}, \quad \delta_{\max} = 0.25~\mathrm{in}, \quad E = 30 \times 10^6~\mathrm{psi}$
$G = 12 \times 10^6~\mathrm{psi}, \quad \tau_{\max} = 13600~\mathrm{psi}, \quad \sigma_{\max} = 30000~\mathrm{psi}$
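Transcribing the model above into code (a sketch that follows the printed equations term by term; the constants are those listed above, and the function names are ours), the objective and the seven constraints become:

```python
import math

P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIG_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def wbd_cost(z):
    """Fabrication cost f(z) of the welded beam."""
    h, l, t, b = z
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

def wbd_constraints(z):
    """Seven g_i(z) <= 0 values, transcribed from the printed model."""
    h, l, t, b = z
    tau_p = P / (math.sqrt(2) * h * l)
    M = P * (L + l / 2)
    R = math.sqrt(l ** 2 / 4 + ((h + t) / 2) ** 2)
    J = 2 * (math.sqrt(2) * h * l * (l ** 2 / 4 + ((h + t) / 2) ** 2))
    tau_pp = M * R / J
    tau = math.sqrt(tau_p ** 2 + 2 * tau_p * tau_pp * l / (2 * R) + tau_pp ** 2)
    sigma = 6 * P * L / (b * t ** 2)
    delta = 6 * P * L ** 3 / (E * t ** 2 * b)
    Pc = (4.013 * E * math.sqrt(t ** 2 * b ** 6 / 36) / L ** 2
          * (1 - t / (2 * L) * math.sqrt(E / (4 * G))))
    return [tau - TAU_MAX, sigma - SIG_MAX, delta - DELTA_MAX,
            h - b, P - Pc, 0.125 - h,
            1.10471 * h ** 2 + 0.04811 * t * b * (14.0 + l) - 5.0]
```

Evaluating the cost at a design often reported as near-optimal in the literature, z ≈ (0.2057, 3.4705, 9.0366, 0.2057), gives roughly 1.725.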
Table 9 provides the output results and best solutions of the search methods, and Table 10 documents their statistical outputs. The combined evaluation of the two tables indicates that, under the same parameter settings, LARO obtains the best results: it achieves the best optimal and average values, performs best on the average and STD metrics, and performs well on the worst-score metric compared with the other methods. The output results suggest that the LARO algorithm has good applicability for solving the WBD problem. Figure 15 provides the convergence curves of LARO and the compared algorithms on the WBD problem. The figure shows that the proposed LARO has the best convergence and reaches the vicinity of the optimal solution in the early iterations, while AOA has the worst convergence behavior and accuracy.

5.2. Pressure Vessel Design Problem (PVD)

The structure of the PVD is illustrated in Figure 16. The aim of the PVD problem is to minimize the total cost of the cylindrical vessel, which is capped at both ends by hemispherical heads. The PVD has four design variables: the thickness of the shell (Ts), the thickness of the head (Th), the inner radius (R), and the length of the cylindrical section (L) [38]. The mathematical model of the PVD, with four constraints, is presented as follows.
$z = [z_1, z_2, z_3, z_4] = [T_s, T_h, R, L]$
Minimize:
$f(z) = 0.6224 z_1 z_3 z_4 + 1.7781 z_2 z_3^2 + 3.1661 z_1^2 z_4 + 19.84 z_1^2 z_3$
Variable range:
$0 \le z_1, z_2 \le 99, \quad 10 \le z_3, z_4 \le 200$
Subject to:
$g_1(z) = -z_1 + 0.0193 z_3 \le 0$
$g_2(z) = -z_2 + 0.00954 z_3 \le 0$
$g_3(z) = -\pi z_3^2 z_4 - \frac{4}{3} \pi z_3^3 + 1296000 \le 0$
$g_4(z) = z_4 - 240 \le 0$
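The PVD model can likewise be transcribed directly; the sketch below follows the printed objective and constraints, and the function names are ours. A design commonly reported in the literature, z ≈ (0.8125, 0.4375, 42.0984, 176.6366), evaluates to a cost of roughly 6060.

```python
import math

def pvd_cost(z):
    """Total vessel cost f(z) from the printed model."""
    Ts, Th, R, Lc = z
    return (0.6224 * Ts * R * Lc + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * Lc + 19.84 * Ts ** 2 * R)

def pvd_constraints(z):
    """Four g_i(z) <= 0 values from the printed model."""
    Ts, Th, R, Lc = z
    return [-Ts + 0.0193 * R,
            -Th + 0.00954 * R,
            -math.pi * R ** 2 * Lc - (4.0 / 3.0) * math.pi * R ** 3 + 1296000.0,
            Lc - 240.0]
```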
Table 11 provides the output results of the different search methods and the best average solutions for the PVD problem, and Table 12 documents their statistical outputs. Analyzing the two tables, we find that, under the same parameter settings, LARO obtains the best results: it achieves the best optimal and average values and the best average and STD metrics among the compared methods. The experimental output suggests that the LARO algorithm performs well on the PVD problem. Figure 17 provides the convergence curves of LARO and the comparison algorithms on the PVD problem. The results show that LARO converges to the optimal solution and has the fastest convergence rate among the compared algorithms; SCA and SSA converge poorly in the early stages, while AOA converges poorly throughout. LARO therefore has an advantage over the other algorithms in solving the PVD problem.

5.3. Tension/Compression String Design (TCS)

The objective of the TCS problem is to minimize the mass of the spring. The TCS includes three design variables: the wire diameter (d), the number of active coils (N), and the mean coil diameter (D) [38]. A schematic representation of the TCS problem is displayed in Figure 18. The design model of the TCS is given below.
$z = [z_1, z_2, z_3] = [d, D, N]$
Minimize:
$f(z) = (z_3 + 2) z_2 z_1^2$
Variable range:
$0.05 \le z_1 \le 2, \quad 0.25 \le z_2 \le 1.3, \quad 2 \le z_3 \le 15$
Subject to:
$g_1(z) = 1 - \frac{z_2^3 z_3}{71785 z_1^4} \le 0$
$g_2(z) = \frac{4 z_2^2 - z_1 z_2}{12566 (z_2 z_1^3 - z_1^4)} + \frac{1}{5108 z_1^2} - 1 \le 0$
$g_3(z) = 1 - \frac{140.45 z_1}{z_2^2 z_3} \le 0$
$g_4(z) = \frac{z_1 + z_2}{1.5} - 1 \le 0$
Table 13 provides the experimental results of all search methods for the TCS problem, including the best decision variables, the best average objective values, and the four constraint values for all algorithms; Table 14 gives the corresponding statistical results. Analyzing both tables, LARO obtains better experimental results than the other algorithms, with the best optimal, average, worst, and STD values. The numerical results suggest that LARO is a superior method for the TCS problem. Figure 19 provides the convergence curves of LARO and the comparison algorithms on the TCS problem, where the vertical coordinates are the logarithms of the fitness values. The results show that LARO converges to the optimal solution faster than the other algorithms; SSA converges poorly in the early stage, while MVO converges poorly throughout. LARO therefore has an advantage over the other algorithms in solving the TCS problem.

5.4. Gear Train Design (GTD)

The GTD problem seeks the gear set whose gear ratio best matches a required value when assembling a compound gear train. Figure 20 illustrates a schematic diagram of the GTD problem. The GTD has four integer variables representing the numbers of teeth on the four gears, denoted Ta, Tb, Tc, and Td [26]. The mathematical model of the GTD is given below.
$z = [z_1, z_2, z_3, z_4] = [T_a, T_b, T_c, T_d]$
Minimize:
$f(z) = \left( \frac{1}{6.931} - \frac{z_1 z_2}{z_3 z_4} \right)^2$
Variable range:
$12 \le z_1, z_2, z_3, z_4 \le 60$
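Because the GTD variables are teeth counts, a continuous optimizer such as LARO typically rounds candidate values before evaluation. The sketch below assumes the pairing z1·z2 / (z3·z4) as read from the printed objective (the target ratio is symmetric within each pair, so swapping gears inside a pair does not change the value); the function name is ours.

```python
def gtd_error(z):
    """Squared gear-ratio error from the printed model.

    Teeth counts are integers, so continuous candidates are rounded first.
    The pairing z1*z2 / (z3*z4) is assumed from the printed objective.
    """
    Ta, Tb, Tc, Td = (round(v) for v in z)
    return (1.0 / 6.931 - (Ta * Tb) / (Tc * Td)) ** 2
```

With the well-known teeth multiset {16, 19, 43, 49} arranged so that 16 and 19 multiply in the numerator, the error drops to the order of 1e-12.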
Table 15 provides the experimental results of all search algorithms and the best average solution for the GTD problem, and Table 16 presents the corresponding statistical outputs. The analysis shows that LARO gives better experimental results than the other search algorithms, with the best optimal, average, worst, and STD values, indicating that LARO attains good accuracy on the GTD problem. Figure 21 provides the convergence curves of LARO and the comparison algorithms on the GTD problem, where the vertical coordinates are the logarithms of the fitness values. The results show that LARO converges to the optimal solution with reasonable speed and accuracy; SSA, GWO, INFO, and LARO all converge well, while AOA converges poorly throughout. LARO therefore has an advantage over the other algorithms in solving the GTD problem.

5.5. Speed Reducer Design (SRD)

The aim of the SRD problem is to minimize the weight of the speed reducer while satisfying 11 constraints. The schematic design diagram of the SRD is shown in Figure 22. The SRD has seven design variables, and the constraints cover the bending stress of the gear teeth, the surface stress, the transverse deflections of the shafts, and the stresses in the shafts [38]. Here, z1 is the face width, z2 is the module of the teeth, and z3 is a discrete variable representing the number of teeth in the pinion. Similarly, z4 and z5 are the lengths of the first and second shafts between bearings, and the sixth and seventh design variables (z6 and z7) are the diameters of the first and second shafts, respectively. The design model of the SRD, with 11 constraints and the objective function, is given below.
$z = [z_1, z_2, z_3, z_4, z_5, z_6, z_7] = [b, m, p, l_1, l_2, d_1, d_2]$
Minimize:
$f(z) = 0.7854 z_1 z_2^2 (3.3333 z_3^2 + 14.9334 z_3 - 43.0934) - 1.508 z_1 (z_6^2 + z_7^2) + 7.4777 (z_6^3 + z_7^3)$
Variable range:
$2.6 \le z_1 \le 3.6, \quad 0.7 \le z_2 \le 0.8, \quad 17 \le z_3 \le 28, \quad 7.3 \le z_4 \le 8.3$
$7.8 \le z_5 \le 8.3, \quad 2.9 \le z_6 \le 3.9, \quad 5 \le z_7 \le 5.5$
Subject to:
$g_1(z) = \frac{27}{z_1 z_2^2 z_3} - 1 \le 0$
$g_2(z) = \frac{397.5}{z_1 z_2^2 z_3^2} - 1 \le 0$
$g_3(z) = \frac{1.93 z_4^3}{z_2 z_3 z_6^4} - 1 \le 0$
$g_4(z) = \frac{1.93 z_5^3}{z_2 z_3 z_7^4} - 1 \le 0$
$g_5(z) = \frac{\sqrt{(745 z_4 / (z_2 z_3))^2 + 16.9 \times 10^6}}{110.0 z_6^3} - 1 \le 0$
$g_6(z) = \frac{\sqrt{(745 z_5 / (z_2 z_3))^2 + 157.5 \times 10^6}}{85.0 z_7^3} - 1 \le 0$
$g_7(z) = \frac{z_2 z_3}{40} - 1 \le 0$
$g_8(z) = \frac{5 z_2}{z_1} - 1 \le 0$
$g_9(z) = \frac{z_1}{12 z_2} - 1 \le 0$
$g_{10}(z) = \frac{1.5 z_6 + 1.9}{z_4} - 1 \le 0$
$g_{11}(z) = \frac{1.1 z_7 + 1.9}{z_5} - 1 \le 0$
Table 17 shows the best outputs of LARO and the selected comparison methods on the SRD problem, and Table 18 gives the statistics of all search algorithms. LARO outperforms the other search algorithms in terms of optimal performance, with the best optimal, worst, average, and STD values for the same maximum iterations; the small STD also indicates good robustness. Therefore, LARO is effective at optimizing the SRD. Figure 23 provides the convergence curves of LARO and the comparison algorithms on the SRD problem, where the vertical coordinates are the logarithms of the fitness values. The results show that LARO converges to the optimal solution with good speed and accuracy, and all algorithms converge near the optimal solution in the early iterations. The results show that LARO is an excellent algorithm for solving the SRD problem.

5.6. Tubular Column Design (TCD)

The TCD problem is to minimize the cost of designing a uniform column of tubular cross-section that carries a given compressive load while satisfying six constraints [4]. The schematic design diagram of the TCD is illustrated in Figure 24. Two material properties are specified for the TCD problem: the yield stress σy = 500 kgf/cm² and the modulus of elasticity E = 0.85 × 10⁶ kgf/cm². The mathematical model of the TCD problem is given below.
$z = [z_1, z_2] = [d, t]$
Minimize:
$f(z) = 9.8 z_1 z_2 + 2 z_1$
Variable range:
$2 \le z_1 \le 14, \quad 0.2 \le z_2 \le 0.8$
Subject to:
$g_1(z) = \frac{P}{\pi z_1 z_2 \sigma_y} - 1 \le 0$
$g_2(z) = \frac{8 P L^2}{\pi^3 E z_1 z_2 (z_1^2 + z_2^2)} - 1 \le 0$
$g_3(z) = \frac{2.0}{z_1} - 1 \le 0$
$g_4(z) = \frac{z_1}{14} - 1 \le 0$
$g_5(z) = \frac{0.2}{z_2} - 1 \le 0$
$g_6(z) = \frac{z_2}{0.8} - 1 \le 0$
Table 19 presents the best outputs obtained by LARO and the other comparison algorithms for the TCD problem, and Table 20 gives the statistics of all search algorithms on the TCD. LARO outperforms the other search methods in terms of optimal performance, with the best optimal, worst, average, and STD values for the same maximum iterations; the small STD also indicates good robustness. Therefore, LARO is effective at optimizing the TCD. Figure 25 provides the convergence curves of LARO and the comparison algorithms on the TCD problem. The results show that LARO converges to the optimal solution faster than the other algorithms; MVO converges poorly in the early stages, while AOA and WOA converge poorly throughout. LARO therefore has an advantage over the other algorithms in solving the TCD problem.

6. Conclusions

In this study, an effective metaheuristic method called LARO, an enhanced variant of the ARO algorithm, is proposed. To boost the global search ability of ARO, the Lévy flight strategy is exploited to strengthen global exploration and the avoidance of local solutions, while the selective opposition strategy improves the local exploitation of LARO. The most remarkable feature of LARO is its straightforward structure and high computational accuracy, requiring only the basic parameters (i.e., population size and termination condition) for solving optimization problems. We tested the performance of LARO on 23 classical test functions, the CEC2017 and CEC2019 test suites, and six mechanical engineering design problems. The experimental results show that LARO obtains the best average solution on 16 of the 23 classical test functions and the smallest average rank (1.6957). Additionally, LARO obtains the best solutions on 12 and seven functions in CEC2017 and CEC2019, respectively. The conclusions show that the strategies used to improve ARO are very effective in improving optimization performance, although there is still room to further improve the exploration ability of LARO on the CEC2017 test functions. In the mechanical optimization experiments, all six practical problems are complex, with multiple nonlinear constraints and multiple local solutions, and the output results show that LARO obtains the best decision variables and objective function values. Because of its excellent convergence, exceptional exploration ability, and freedom from fine-tuning of initial parameters, LARO has excellent potential for handling optimization problems with various characteristics.
In future work, we will extend ARO to further versions, including an ARO with opposition-based learning initialization, a multi-objective ARO, a binary ARO, and a discrete ARO [57,58,59,60,61,62]. In addition, we will focus on applying LARO to various complex real-world engineering optimization problems, such as hyperparameter optimization of machine learning algorithms, urban travel recommendation in smart cities, job-shop scheduling, image segmentation, developable surface modeling [63], and smooth path planning for mobile robots.

Author Contributions

Conceptualization, Y.W. and G.H.; Data curation, Y.W., L.H. and G.H.; Formal analysis, L.H. and J.Z.; Funding acquisition, G.H.; Investigation, Y.W., L.H. and J.Z.; Methodology, L.H., J.Z. and G.H.; Project administration, Y.W., J.Z. and G.H.; Resources, Y.W. and G.H.; Software, Y.W., L.H. and J.Z.; Supervision, G.H.; Validation, J.Z. and G.H.; Visualization, G.H.; Writing–original draft, Y.W., L.H., J.Z. and G.H.; Writing—review & editing, Y.W., L.H., J.Z. and G.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Research Fund of Department of Science and Department of Education of Shaanxi, China (Grant No. 21JK0615).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data generated or analyzed during this study were included in this published article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zhao, W.; Wang, L.; Mirjalili, S. Artificial hummingbird algorithm: A new bio-inspired optimizer with its engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 388, 114194.
2. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. Starling murmuration optimizer: A novel bio-inspired algorithm for global and engineering optimization. Comput. Methods Appl. Mech. Eng. 2022, 392, 114616.
3. Knypiński, Ł. Performance analysis of selected metaheuristic optimization algorithms applied in the solution of an unconstrained task. COMPEL—Int. J. Comput. Math. Electr. Electron. Eng. 2021, 41, 1271–1284.
4. Agushaka, J.O.; Ezugwu, A.E.; Abualigah, L. Dwarf Mongoose Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2022, 391, 114570.
5. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408.
6. Ozcalici, M.; Bumin, M. Optimizing filter rule parameters with genetic algorithm and stock selection with artificial neural networks for an improved trading: The case of Borsa Istanbul. Expert Syst. Appl. 2022, 208, 118120.
7. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
8. Han, Z.; Chen, M.; Shao, S.; Wu, Q. Improved artificial bee colony algorithm-based path planning of unmanned autonomous helicopter using multi-strategy evolutionary learning. Aerosp. Sci. Technol. 2022, 122, 107374.
9. David, B.F. Artificial Intelligence through Simulated Evolution. In Evolutionary Computation: The Fossil Record; Wiley-IEEE Press: New York, NY, USA, 1998; pp. 227–296.
10. Eslami, N.; Yazdani, S.; Mirzaei, M.; Hadavandi, E. Aphid–Ant Mutualism: A novel nature-inspired metaheuristic algorithm for solving optimization problems. Math. Comput. Simul. 2022, 201, 362–395.
11. Srivastava, A.; Das, D.K. A bottlenose dolphin optimizer: An application to solve dynamic emission economic dispatch problem in the microgrid. Knowl.-Based Syst. 2022, 243, 108455.
12. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215.
13. Braik, M.; Sheta, A.; Al-Hiary, H. A novel meta-heuristic search algorithm for solving optimization problems: Capuchin search algorithm. Neural Comput. Appl. 2021, 33, 2515–2547.
14. Seyyedabbasi, A.; Kiani, F. Sand Cat swarm optimization: A nature-inspired algorithm to solve global optimization problems. Eng. Comput. 2022, 1–25.
15. Hu, G.; Li, M.; Wang, X.; Wei, G.; Chang, C.-T. An enhanced manta ray foraging optimization algorithm for shape optimization of complex CCG-Ball curves. Knowl.-Based Syst. 2022, 240, 108071.
16. Hu, G.; Du, B.; Wang, X.; Wei, G. An enhanced black widow optimization algorithm for feature selection. Knowl.-Based Syst. 2022, 235, 107638.
17. Hu, G.; Dou, W.; Wang, X.; Abbas, M. An enhanced chimp optimization algorithm for optimal degree reduction of Said–Ball curves. Math. Comput. Simul. 2022, 197, 207–252.
18. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248.
19. Zhao, W.; Wang, L.; Zhang, Z. Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. Knowl.-Based Syst. 2019, 163, 283–304.
20. Javidy, B.; Hatamlou, A.; Mirjalili, S. Ions motion algorithm for solving optimization problems. Appl. Soft Comput. 2015, 32, 72–79.
21. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190.
22. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm—A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110–111, 151–166.
23. Mousavirad, S.J.; Ebrahimpour-Komleh, H. Human mental search: A new population-based metaheuristic optimization algorithm. Appl. Intell. 2017, 47, 850–887.
24. Samareh Moosavi, S.H.; Bardsiri, V.K. Poor and rich optimization algorithm: A new human-based and multi populations algorithm. Eng. Appl. Artif. Intell. 2019, 86, 165–181.
25. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–Learning-Based Optimization: An optimization method for continuous non-linear large scale problems. Inf. Sci. 2012, 183, 1–15.
26. Hu, G.; Zhong, J.; Du, B.; Wei, G. An enhanced hybrid arithmetic optimization algorithm for engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 394, 114901.
27. Zamani, H.; Nadimi-Shahraki, M.H.; Gandomi, A.H. QANA: Quantum-based avian navigation optimizer algorithm. Eng. Appl. Artif. Intell. 2021, 104, 104314.
28. Nadimi-Shahraki, M.H.; Zamani, H. DMDE: Diversity-maintained multi-trial vector differential evolution algorithm for non-decomposition large-scale global optimization. Expert Syst. Appl. 2022, 198, 116895.
29. Wang, L.; Cao, Q.; Zhang, Z.; Mirjalili, S.; Zhao, W. Artificial rabbits optimization: A new bio-inspired meta-heuristic algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2022, 114, 105082.
30. Griffiths, E.J.; Orponen, P. Optimization, block designs and No Free Lunch theorems. Inf. Process. Lett. 2005, 94, 55–61.
31. Service, T.C. A No Free Lunch theorem for multi-objective optimization. Inf. Process. Lett. 2010, 110, 917–923.
32. Iacca, G.; dos Santos Junior, V.C.; Veloso de Melo, V. An improved Jaya optimization algorithm with Lévy flight. Expert Syst. Appl. 2021, 165, 113902.
33. Dhargupta, S.; Ghosh, M.; Mirjalili, S.; Sarkar, R. Selective Opposition based Grey Wolf Optimization. Expert Syst. Appl. 2020, 151, 113389.
34. Liu, K.; Zhang, H.; Zhang, B.; Liu, Q. Hybrid optimization algorithm based on neural networks and its application in wavefront shaping. Opt. Express 2021, 29, 15517–15527.
35. Islam, M.A.; Gajpal, Y.; ElMekkawy, T.Y. Hybrid particle swarm optimization algorithm for solving the clustered vehicle routing problem. Appl. Soft Comput. 2021, 110, 107655.
36. Devarapalli, R.; Bhattacharyya, B. A hybrid modified grey wolf optimization-sine cosine algorithm-based power system stabilizer parameter tuning in a multimachine power system. Optim. Control. Appl. Methods 2020, 41, 1143–1159.
37. Arini, F.Y.; Chiewchanwattana, S.; Soomlek, C.; Sunat, K. Joint Opposite Selection (JOS): A premiere joint of selective leading opposition and dynamic opposite enhanced Harris' hawks optimization for solving single-objective problems. Expert Syst. Appl. 2022, 188, 116001.
38. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The Arithmetic Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609.
39. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
40. Naruei, I.; Keynia, F. A new optimization method based on COOT bird natural life model. Expert Syst. Appl. 2021, 183, 115352.
41. Chopra, N.; Mohsin Ansari, M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924.
42. Ahmadianfar, I.; Heidari, A.A.; Noshadian, S.; Chen, H.; Gandomi, A.H. INFO: An efficient optimization algorithm based on weighted mean of vectors. Expert Syst. Appl. 2022, 195, 116516.
43. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249.
44. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513.
45. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
46. Devarapalli, R.; Sinha, N.; Rao, B.; Knypiński, Ł.; Lakshmi, N.; García Márquez, F.P. Allocation of real power generation based on computing over all generation cost: An approach of Salp Swarm Algorithm. Arch. Electr. Eng. 2021, 70, 337–349.
47. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
48. Hussain, K.; Salleh, M.N.M.; Cheng, S.; Shi, Y. On the exploration and exploitation in popular swarm-based metaheuristic algorithms. Neural Comput. Appl. 2019, 31, 7665–7683.
49. Squires, M.; Tao, X.; Elangovan, S.; Gururajan, R.; Zhou, X.; Acharya, U.R. A novel genetic algorithm based system for the scheduling of medical treatments. Expert Syst. Appl. 2022, 195, 116464.
50. Peng, J.; Li, Y.; Kang, H.; Shen, Y.; Sun, X.; Chen, Q. Impact of population topology on particle swarm optimization and its variants: An information propagation perspective. Swarm Evol. Comput. 2022, 69, 100990.
51. Abualigah, L.; Elaziz, M.A.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158.
52. Braik, M.; Hammouri, A.; Atwan, J.; Al-Betar, M.A.; Awadallah, M.A. White Shark Optimizer: A novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl.-Based Syst. 2022, 243, 108457.
53. Nadimi-Shahraki, M.H.; Zamani, H.; Mirjalili, S. Enhanced whale optimization algorithm for medical feature selection: A COVID-19 case study. Comput. Biol. Med. 2022, 148, 105858.
54. Nadimi-Shahraki, M.H.; Fatahi, A.; Zamani, H.; Mirjalili, S.; Oliva, D. Hybridizing of Whale and Moth-Flame Optimization Algorithms to Solve Diverse Scales of Optimal Power Flow Problem. Electronics 2022, 11, 831.
55. Brest, J.; Maučec, M.S.; Bošković, B. The 100-Digit Challenge: Algorithm jDE100. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 19–26.
56. Coello Coello, C.A. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art. Comput. Methods Appl. Mech. Eng. 2002, 191, 1245–1287.
57. Hu, G.; Yang, R.; Qin, X.Q.; Wei, G. MCSA: Multi-strategy boosted chameleon-inspired optimization algorithm for engineering applications. Comput. Methods Appl. Mech. Eng. 2023, 403, 115676.
58. Zheng, J.; Hu, G.; Ji, X.; Qin, X. Quintic generalized Hermite interpolation curves: Construction and shape optimization using an improved GWO algorithm. Comput. Appl. Math. 2022, 41, 115.
59. Huang, L.; Wang, Y.; Guo, Y.; Hu, G. An Improved Reptile Search Algorithm Based on Lévy Flight and Interactive Crossover Strategy to Engineering Application. Mathematics 2022, 10, 2329.
60. Li, Y.; Zhu, X.; Liu, J. An Improved Moth-Flame Optimization Algorithm for Engineering Problems. Symmetry 2020, 12, 1234.
61. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Ewees, A.A.; Abualigah, L.; Abd Elaziz, M. MTV-MFO: Multi-Trial Vector-Based Moth-Flame Optimization Algorithm. Symmetry 2021, 13, 2388.
62. Chen, Y.; Wang, L.; Liu, G.; Xia, B. Automatic Parking Path Optimization Based on Immune Moth Flame Algorithm for Intelligent Vehicles. Symmetry 2022, 14, 1923.
63. Hu, G.; Zhu, X.N.; Wang, X.; Wei, G. Multi-strategy boosted marine predators algorithm for optimizing approximate developable surface. Knowl.-Based Syst. 2022, 254, 109615.
Figure 1. The change of H over the course of 1000 iterations.
Figure 2. The change of A over the course of 1000 iterations.
Figure 3. Flow chart of ARO.
Figure 4. Lévy flight path of 500 movements in a two-dimensional space.
Figure 5. Flowchart of the LARO algorithm.
Figure 6. The exploration and exploitation diagrams of LARO.
Figure 7. Iteration plot of seven parameters in CEC2019.
Figure 8. The average rank of the twelve algorithms.
Figure 9. Convergence plots of LARO and different MH methods on the 23 test functions.
Figure 10. Box plot of LARO and ARO, AOA, GWO, COOT, GJO, INFO, MFO, MVO, SCA, SSA, and WOA on the 23 test functions.
Figure 11. Convergence plots of LARO and other search methods on the CEC2019 test functions.
Figure 12. Box plot of LARO and ARO, AOA, GWO, COOT, GJO, INFO, MFO, MVO, SCA, SSA, and WOA on the CEC2019 test functions.
Figure 13. Radar chart of ARO, AOA, GWO, COOT, GJO, INFO, MFO, MVO, SCA, SSA, WOA, and LARO on CEC2019.
Figure 14. WBD structure.
Figure 15. Convergence iteration plot of LARO and comparison algorithms in WBD problem.
Figure 16. PVD structure.
Figure 17. Convergence iteration plot of LARO and comparison algorithms in PVD problem.
Figure 18. TCS structure.
Figure 19. Convergence iteration plot of LARO and comparison algorithms in TCS problem.
Figure 20. GTD structure.
Figure 21. Convergence iteration plot of LARO and comparison algorithms in GTD problem.
Figure 22. SRD structure.
Figure 23. Convergence iteration plot of LARO and comparison algorithms in SRD problem.
Figure 24. TCD structure.
Figure 25. Convergence iteration plot of LARO and comparison algorithms in TCD problem.
Table 1. Suitable parameters for different algorithms.

Methods   | Parameters                         | Values
AOA [38]  | µ                                  | 0.499
AOA [38]  | a                                  | 5
GWO [39]  | Convergence parameter (a)          | Linear decrease from 2 to 0
WOA [47]  | A                                  | Drop from 2 to 0
WOA [47]  | b                                  | 2
SSA [46]  | Leader position update probability | c3 = 0.5
INFO [42] | c                                  | 2
INFO [42] | d                                  | 4
MVO [44]  | Wormhole existence probability     | WEPMax = 1, WEPMin = 0.2
SCA [45]  | A                                  | 2
Table 2. Comparison of LARO with two different parameters in 23 benchmark functions.

Functions | Algorithms | Mean      | STD       | Time
F01       | LARO1      | 2.17E-181 | 0         | 7.22299
F01       | LARO2      | 4.78E-91  | 1.39E-90  | 7.38622
F02       | LARO1      | 1.45E-96  | 6.15E-96  | 6.97114
F02       | LARO2      | 1.90E-49  | 3.98E-49  | 6.97564
F03       | LARO1      | 5.18E-146 | 1.97E-145 | 16.08589
F03       | LARO2      | 5.48E-73  | 2.11E-72  | 14.75086
F04       | LARO1      | 8.57E-75  | 3.61E-74  | 7.35237
F04       | LARO2      | 1.10E-37  | 2.49E-37  | 6.88073
F05       | LARO1      | 0.00470   | 0.00436   | 7.95983
F05       | LARO2      | 0.03234   | 0.03371   | 7.86360
F06       | LARO1      | 5.06E-06  | 6.23E-06  | 6.85314
F06       | LARO2      | 0.00014   | 0.00012   | 6.81894
F07       | LARO1      | 0.00021   | 0.00014   | 10.94454
F07       | LARO2      | 0.00032   | 0.00021   | 10.75831
F08       | LARO1      | −1.15E+04 | 334.44374 | 8.30804
F08       | LARO2      | −1.15E+04 | 299.32023 | 8.43271
F09       | LARO1      | 0         | 0         | 7.25003
F09       | LARO2      | 0         | 0         | 7.61304
F10       | LARO1      | 8.88E-16  | 0         | 8.25929
F10       | LARO2      | 8.88E-16  | 0         | 7.80431
F11       | LARO1      | 0         | 0         | 8.49955
F11       | LARO2      | 0         | 0         | 8.82418
F12       | LARO1      | 2.43E-07  | 2.89E-07  | 20.03895
F12       | LARO2      | 6.01E-06  | 2.89E-06  | 20.26232
F13       | LARO1      | 0.00110   | 0.00338   | 22.00742
F13       | LARO2      | 0.00059   | 0.00247   | 19.16593
F14       | LARO1      | 0.99800   | 0         | 31.32199
F14       | LARO2      | 0.99800   | 0         | 29.23997
F15       | LARO1      | 0.00031   | 2.97E-16  | 5.84512
F15       | LARO2      | 0.00031   | 2.37E-08  | 6.53547
F16       | LARO1      | −1.03163  | 2.16E-16  | 5.51881
F16       | LARO2      | −1.03163  | 2.10E-16  | 6.22799
F17       | LARO1      | 0.39789   | 0         | 5.33003
F17       | LARO2      | 0.39789   | 0         | 6.17085
F18       | LARO1      | 3         | 5.94E-16  | 5.36706
F18       | LARO2      | 3         | 6.28E-16  | 5.99102
F19       | LARO1      | −3.86278  | 2.28E-15  | 6.59347
F19       | LARO2      | −3.86278  | 2.28E-15  | 6.07269
F20       | LARO1      | −3.29227  | 0.05282   | 6.68436
F20       | LARO2      | −3.32200  | 4.44E-16  | 7.05183
F21       | LARO1      | −10.15320 | 3.36E-15  | 11.62110
F21       | LARO2      | −10.15320 | 2.79E-15  | 7.59196
F22       | LARO1      | −10.06901 | 1.49339   | 9.18435
F22       | LARO2      | −10.40294 | 3.58E-15  | 8.20396
F23       | LARO1      | −10.53636 | 0.00024   | 8.69545
F23       | LARO2      | −10.53641 | 1.58E-15  | 9.09290
Table 3. Performance analysis of jump parameter α in CEC2019.

Function | Index | α = 0.1  | α = 0.01 | α = 0.05 | α = 0.5  | α = [0.01, 0.05] | α = [0.05, 0.1] | α = [0.1, 0.5]
cec01    | Mean  | 1        | 1        | 1        | 1        | 1        | 1        | 1
cec01    | Rank  | 1        | 1        | 1        | 1        | 1        | 1        | 1
cec02    | Mean  | 4.2462   | 4.2195   | 4.2553   | 4.2226   | 4.1176   | 4.2233   | 4.2304
cec02    | Rank  | 6        | 2        | 7        | 3        | 1        | 4        | 5
cec03    | Mean  | 1.7488   | 1.5887   | 1.5539   | 1.5272   | 1.6122   | 1.7864   | 1.6667
cec03    | Rank  | 6        | 3        | 2        | 1        | 4        | 7        | 5
cec04    | Mean  | 12.8513  | 11.0993  | 12.6871  | 16.1851  | 13.7460  | 13.8487  | 13.8736
cec04    | Rank  | 3        | 1        | 2        | 7        | 4        | 5        | 6
cec05    | Mean  | 1.0747   | 1.1089   | 1.1029   | 1.0880   | 1.0876   | 1.0790   | 1.0776
cec05    | Rank  | 1        | 7        | 6        | 5        | 4        | 3        | 2
cec06    | Mean  | 1.5055   | 1.5995   | 1.6011   | 1.4881   | 1.4857   | 1.4156   | 1.4059
cec06    | Rank  | 5        | 6        | 7        | 4        | 3        | 2        | 1
cec07    | Mean  | 386.6686 | 437.5734 | 464.3752 | 450.1384 | 467.1603 | 489.9973 | 465.0378
cec07    | Rank  | 1        | 2        | 4        | 3        | 6        | 7        | 5
cec08    | Mean  | 3.0415   | 3.3443   | 3.0979   | 3.2720   | 3.2620   | 3.4022   | 3.5178
cec08    | Rank  | 1        | 5        | 2        | 4        | 3        | 6        | 7
cec09    | Mean  | 1.1386   | 1.1156   | 1.1255   | 1.1151   | 1.1143   | 1.1186   | 1.1130
cec09    | Rank  | 7        | 4        | 6        | 3        | 2        | 5        | 1
cec10    | Mean  | 18.0553  | 20.0059  | 19.9964  | 20.0029  | 20.0859  | 21.0005  | 20.0014
cec10    | Rank  | 1        | 5        | 2        | 4        | 6        | 7        | 3
Mean rank        | 3.2      | 3.6      | 3.9      | 3.5      | 3.4      | 4.7      | 3.6
Final ranking    | 1        | 4        | 6        | 3        | 2        | 7        | 4
Table 4. Statistical outcomes of the different MH methods on the 23 test functions.
FunctionIndexAlgorithms
AROAOAGWOCOOTGJOINFOMFOMVOSCASSAWOALARO
F1Best1.21E-1389.28E-105.04E-731.68E-937.76E-1329.34E-561.60E-060.00092.20E-075.37E-091.09E-1866.69E-199
Worst2.30E-1279.90E-072.39E-692.23E-189.04E-1274.43E-551.00E+040.00490.00731.10E-086.38E-1732.12E-177
Mean1.74E-1284.73E-073.70E-701.12E-199.55E-1283.11E-551.00E+030.00230.00108.60E-094.09E-1741.08E-178
STD5.61E-1282.34E-077.22E-704.99E-192.29E-1279.13E-563.08E+030.00090.00191.60E-0900
Rank395746121110821
F2Best1.06E-742.49E-134.85E-421.57E-471.47E-1437.02E-292.94E-200.00542.91E-244.52E-067.23E-1184.88E-105
Worst2.80E-670.00061.55E-407.37E-188.09E-1372.40E-281.43E-180.02163.94E-208.69E-064.13E-1051.84E-95
Mean2.19E-680.00013.81E-413.68E-196.36E-1381.58E-282.79E-190.01303.20E-216.09E-062.10E-1069.20E-97
STD6.66E-680.00024.24E-411.65E-181.82E-1373.66E-293.93E-190.00399.22E-211.28E-069.22E-1064.11E-96
Rank411591681271023
F3Best2.19E-1157.27E-089.95E-241.69E-1085.66E-1501.41E-557.16E-120.00242.53E-204.86E-101.02E+039.25E-160
Worst4.64E-980.00051.70E-183.75E-173.30E-1363.43E-547.83E-080.02744.50E-092.26E-092.52E+048.97E-145
Mean2.99E-990.00011.35E-191.87E-182.21E-1377.44E-556.93E-090.01522.35E-101.31E-099.94E+036.05E-146
STD1.06E-980.00014.11E-198.38E-187.66E-1377.17E-551.99E-080.00851.00E-095.18E-107.06E+032.09E-145
Rank310562491178121
F4Best2.49E-590.00128.10E-191.00E-503.15E-1023.80E-292.24E-100.01498.01E-149.04E-062.09E-056.58E-83
Worst1.51E-510.03868.67E-171.10E-182.48E-949.71E-294.81E-060.05583.26E-091.97E-0583.89751.08E-74
Mean8.18E-530.00829.98E-185.69E-201.34E-957.06E-293.41E-070.02895.72E-101.49E-0526.75875.69E-76
STD3.36E-520.00951.86E-172.46E-195.54E-951.69E-291.09E-060.01091.01E-092.60E-0627.75442.40E-75
Rank310651481179122
F5Best0.000626.341925.172912.73425.96351.00E-150.53370.34666.39121.042325.96050.0002
Worst0.009027.862727.9110164.45798.70063.82E-089.00E+04420.50668.0566326.632126.99120.0259
Mean0.002926.927626.652233.40686.87594.47E-094.68E+0343.95987.033048.025726.54650.0057
STD0.00230.38750.690331.01890.71591.08E-082.01E+04100.26460.391981.86410.31240.0069
Rank287941121051163
F6Best2.65E-070.33798.71E-063.83E-051.56E-06000.00110.11842.99E-100.00178.78E-07
Worst2.57E-060.84420.99510.00090.49974.93E-324.50E-310.00390.59959.36E-100.00731.50E-05
Mean1.09E-060.59390.28720.00030.13766.16E-336.39E-320.00230.25536.36E-100.00414.77E-06
STD6.58E-070.13780.28390.00020.15121.16E-321.13E-310.00080.12611.80E-100.00154.17E-06
Rank412116912710385
F7Best2.92E-053.42E-080.00010.00016.24E-067.13E-050.00050.00020.00010.00052.56E-051.02E-05
Worst0.00066.65E-050.00130.00900.00020.00130.00660.00350.00240.01180.00250.0004
Mean0.00022.38E-050.00050.00216.19E-050.00030.00260.00120.00060.00450.00060.0002
STD0.00022.11E-050.00030.00235.14E-050.00030.00140.00080.00060.00310.00060.0001
Rank416102511981273
F8Best−1.18E+04−6.15E+03−7.74E+03−8.96E+03−2.77E+03−4.19E+03−4.19E+03−3.83E+03−2.64E+03−3.30E+03−1.26E+04−1.21E+04
Worst−1.02E+04−4.95E+03−4.93E+03−7.04E+03−1.84E+03−3.36E+03−2.52E+03−2.43E+03−2.05E+03−2.23E+03−8.37E+03−1.08E+04
Mean−1.09E+04−5.49E+03−6.35E+03−7.90E+03−2.32E+03−3.63E+03−3.38E+03−3.16E+03−2.30E+03−2.80E+03−1.20E+04−1.16E+04
STD4.52E+023.84E+027.00E+025.59E+022.58E+022.08E+023.78E+023.27E+021.55E+022.95E+021.18E+033.94E+02
Rank365411789121012
F9Best0000003.97983.980406.964700
Worst04.39E-0711.57261.71E-130044.844022.88530.325741.78825.68E-140
Mean01.10E-071.09758.53E-150017.912812.04010.016315.02392.84E-150
STD01.55E-072.94743.81E-140010.33955.80960.07287.95901.27E-140
Rank179611121081151
F10Best8.88E-167.48E-087.99E-158.88E-168.88E-168.88E-164.44E-150.00888.88E-165.88E-068.88E-168.88E-16
Worst8.88E-160.00031.87E-141.54E-114.44E-158.88E-167.99E-150.02287.99E-152.01337.99E-158.88E-16
Mean8.88E-160.00011.40E-148.04E-133.91E-158.88E-164.62E-150.01784.80E-150.54894.09E-158.88E-16
STD08.13E-052.33E-153.43E-121.30E-1507.94E-160.00351.59E-150.86642.28E-150
Rank110894161171251
F11Best01.15E-0600000.02710.111000.083600
Worst00.01230.04042.22E-16000.31730.59090.74480.70180.03720
Mean00.00060.00271.11E-17000.13470.30300.03810.26500.00340
STD00.00280.00944.97E-17000.08180.12090.16640.14950.01040
Rank167511101291181
F12Best1.11E-080.40170.00547.28E-078.14E-074.71E-324.71E-322.93E-050.03003.35E-120.00013.24E-08
Worst2.49E-070.52900.05810.10370.05934.93E-320.93290.31220.10820.37120.02011.16E-06
Mean6.05E-080.45410.02530.00520.02984.79E-320.10880.03130.06550.06880.00163.27E-07
STD5.65E-080.03310.01580.02320.02188.02E-340.25270.09600.02050.13360.00443.40E-07
Rank212657111891043
F13Best7.63E-082.82521.91E-057.71E-053.30E-061.35E-321.35E-320.00010.07981.39E-110.00271.95E-07
Worst0.04392.96610.40640.03640.39920.04390.01100.01180.32570.01100.19437.23E-06
Mean0.00392.95280.24620.00910.10810.00330.00160.00150.22400.00160.05221.81E-06
STD0.01020.04080.11800.00920.10230.01010.00400.00340.06480.00400.06051.90E-06
Rank612117953210481
F14Best0.99800.99800.99800.99800.99800.99800.99800.99800.99800.99800.99800.9980
Worst0.998012.670512.67050.998012.67052.98211.99200.99802.98210.998010.76320.9980
Mean0.99808.41575.29330.99804.91581.14691.09740.99801.49410.99802.61170.9980
STD04.49655.03542.88E-164.23950.48570.30605.24E-120.88141.02E-163.54610
Rank112111107658191
F15Best0.00030.00030.00030.00030.00030.00030.00060.00030.00040.00030.00030.0003
Worst0.00030.02070.02040.00120.00120.00120.00230.02040.00150.00120.00140.0003
Mean0.00030.00310.00240.00040.00040.00040.00100.00660.00090.00070.00060.0003
STD2.47E-190.00510.00620.00020.00030.00020.00040.00930.00040.00030.00031.29E-16
Rank111104539128762
F16Best−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316
Worst−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316
Mean−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316
STD2.22E-167.42E-123.08E-094.04E-141.70E-082.28E-162.28E-166.22E-081.79E-056.02E-151.97E-112.28E-16
Rank189510111112571
F17Best0.39790.49600.39790.39790.39790.39790.39790.39790.39790.39790.39790.3979
Worst0.39792.59580.39790.39790.39810.39790.39790.39790.39940.39790.39790.3979
Mean0.39791.31080.39790.39790.39790.39790.39790.39790.39840.39790.39790.3979
STD00.65574.30E-077.30E-083.92E-05007.31E-080.00044.68E-158.28E-080
Rank112961011811571
F18Best333.000033.0000333.00003.000033.00003
Worst3303.000033.0000333.00003.000033.00003
Mean311.13.000033.0000333.00003.000033.00003
STD6.28E-1612.69444.15E-062.78E-144.45E-072.88E-161.45E-153.93E-079.62E-064.37E-144.46E-069.56E-16
Rank112105711811691
F19Best−3.8628−3.8628−3.8628−3.8628−3.8628−3.8628−3.8628−3.8628−3.8623−3.8628−3.8628−3.8628
Worst−3.8628−3.8628−3.8549−3.8628−3.8549−3.8628−3.8628−3.8628−3.8535−3.8628−3.8549−3.8628
Mean−3.8628−3.8628−3.8615−3.8628−3.8592−3.8628−3.8628−3.8628−3.8569−3.8628−3.8614−3.8628
STD2.28E-156.30E-070.00282.03E-150.00402.28E-152.28E-151.34E-070.00341.51E-140.00282.28E-15
Rank189111117126101
F20Best−3.3220−3.3220−3.3220−3.3220−3.3220−3.3220−3.3220−3.3220−3.1559−3.3220−3.3220−3.3220
Worst−3.2031−3.2031−3.1981−3.2031−2.8404−3.2031−3.1376−3.2024−2.6213−3.1989−3.0867−3.2031
Mean−3.2982−3.2804−3.2739−3.2982−3.1169−3.2625−3.2322−3.2565−3.0157−3.2261−3.2356−3.3101
STD0.04880.05820.06050.04880.12340.06100.06340.06080.12980.04920.07680.0366
Rank245311697121081
F21Best−10.1532−10.1532−10.1531−10.1532−10.1524−10.1532−10.1532−10.1532−6.1684−10.1532−10.1532−10.1532
Worst−2.6305−5.1007−5.0552−10.1532−5.0551−2.6305−2.6305−5.1007−0.8798−2.6305−2.6303−10.1532
Mean−9.7771−7.8795−9.3905−10.1532−9.3888−9.7771−7.3843−8.8900−3.9983−8.7666−8.6480−10.1532
STD1.68212.57881.86213.40E-131.86131.68213.24572.24461.71872.51573.08703.51E-15
Rank310526311712891
F22Best−10.4029−10.4029−10.4029−10.4029−10.4025−10.4029−10.4029−10.4029−7.6292−10.4029−10.4029−10.4029
Worst−3.7243−3.7243−10.4021−10.4029−4.4596−2.7659−2.7659−2.7659−0.9097−2.7519−5.0877−10.4017
Mean−10.0690−9.2076−10.4026−10.4029−10.1033−8.9235−7.5987−9.3755−4.2828−10.0204−9.8710−10.4029
STD1.49342.47370.00023.97E-131.32843.04183.26622.54802.00991.71081.63590.0003
Rank593141011812672
F23Best−10.5364−10.5364−10.5363−10.5364−10.5363−10.5364−10.5364−10.5364−10.2588−10.5364−10.5364−10.5364
Worst−10.5364−2.4217−10.5355−10.5364−10.5304−2.4217−2.4217−5.1756−0.9487−2.8711−2.4217−10.5364
Mean−10.5364−8.7521−10.5360−10.5364−10.5337−8.6684−8.9409−9.4642−5.0736−9.8851−8.1094−10.5364
STD1.82E-153.21040.00022.30E-130.00143.33612.90572.20001.65432.03923.42343.43E-15
Rank194351087126111
Mean rank2.34789.08707.21745.17395.86963.73917.39138.82619.52177.78267.08701.6957
Final ranking211745381012961
+/=/−2/16/51/1/210/1/220/11/123/2/183/12/81/5/170/1/220/1/221/7/152/3/18−/−/−
Table 5. Statistical output and associated p-values on 23 test functions.
p-ValueAROAOAGWOCOOTGJOINFOMFOMVOSCASSAWOA
F16.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/−1.25E-05/−
F26.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/+6.79E-08/−6.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/+
F36.80E-08/−6.79E-08/−6.79E-08/−6.80E-08/−2.06E-06/−6.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/−
F46.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/+6.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/−6.79E-08/−6.80E-08/−
F50.3507/=6.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/+6.79E-08/−6.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/−
F60.0001/+6.80E-08/−4.87E-07/−6.80E-08/−0.0020/−4.85E-08/+6.68E-08/+6.80E-08/−6.80E-08/−6.80E-08/+6.80E-08/−
F70.9676/=4.54E-07/+0.0001/−6.67E-06/−1.92E-05/+0.2733/=6.80E-08/−6.01E-07/−0.0013/−6.80E-08/−0.0256/−
F80.0003/−6.80E-08/−6.80E-08/−6.80E-08/−6.80E-08/−6.74E-08/−6.72E-08/−6.80E-08/−6.80E-08/−6.79E-08/−0.0002/+
F9NaN/=9.42E-06/−0.0198/−0.3421/=NaN/=NaN/=7.98E-09/−8.01E-09/−0.3421/=7.95E-09/−0.3421/+
F10NaN/=8.01E-09/−3.84E-09/−9.90E-08/−8.64E-08/−NaN/=7.43E-10/−8.01E-09/−8.30E-09/−7.98E-09/−2.17E-06/−
F11NaN/=8.01E-09/−0.1626/=0.3421/=NaN/=NaN/=8.01E-09/−8.01E-09/−0.0009/−8.01E-09/−0.1626/=
F121.81E-05/+6.80E-08/−6.80E-08/−1.06E-07/−1.23E-07/−6.13E-08/+0.0012/−6.80E-08/−6.80E-08/−0.0071/−6.80E-08/−
F130.8604/=6.80E-08/−6.80E-08/−6.80E-08/−2.56E-07/−0.0001/−0.0002/−6.80E-08/−6.80E-08/−0.0002/−6.80E-08/−
F14NaN/=9.32E-08/−6.41E-05/−NaN/=2.71E-06/−0.1626/=0.1624/=NaN/=7.72E-09/−NaN/=0.0045/−
F15NaN/=8.01E-09/−7.79E-09/−0.0004/−7.98E-09/−0.3421/=6.80E-09/−7.95E-09/−8.01E-09/−2.97E-08/−8.01E-09/−
F16NaN/=NaN/=2.53E-05/−NaN/=7.93E-09/−NaN/=NaN/=2.99E-08/−8.01E-09/−NaN/=NaN/=
F17NaN/=8.01E-09/−8.01E-09/−0.1626/=8.01E-09/−NaN/=NaN/=7.99E-09/−8.01E-09/−NaN/=0.0002/−
F18NaN/=0.0093/−8.01E-09/−NaN/=8.01E-09/−NaN/=NaN/=8.01E-09/−7.99E-09/−NaN/=8.01E-09/−
F19NaN/=8.01E-09/−8.01E-09/−NaN/=8.01E-09/−NaN/=NaN/=8.01E-09/−8.01E-09/−NaN/=8.01E-09/−
F200.3939/=8.54E-07/−6.38E-07/−0.3939/=2.91E-08/−0.0068/−0.0001/−2.61E-07/−1.51E-08/−4.09E-06/−1.93E-07/−
F210.3421/=8.01E-09/−8.01E-09/−NaN/=8.01E-09/−0.3421/=0.0009/−8.01E-09/−8.01E-09/−0.0196/−8.01E-09/−
F221/=1.50E-07/−2.78E-07/−0.3421/=1.86E-08/−0.1379/=0.0028/−1.75E-07/−1.13E-08/−1/=1.75E-07/−
F23NaN/=8.01E-09/−8.01E-09/−NaN/=8.01E-09/−0.0198/−0.0198/−8.01E-09/−8.01E-09/−0.1626/=8.01E-09/−
+/=/−2/16/51/1/210/1/220/11/123/2/183/12/81/5/170/1/220/1/221/7/152/3/18
Table 6. Statistical outcomes of the different search methods on the CEC2017 test functions.
FunctionAlgorithms
IndexLAROAROBWOCapSARSAWSOGJOPSOGAE-WOAWMFOCSOAOA
cec01AVE822.77311155.65899.20E+071987.72791.03E+10343.42832.48E+081.39E+084.41E+075.21E+092.43E+104.78E+05
STD868.54121306.05453.87E+071757.91783.66E+09893.99343.17E+083.43E+085.58E+074.34E+097.66E+091.05E+06
Rank237411198610125
cec03AVE300.0046300.01771290.6426300.00009725.5951300.01032256.8681300.07861.82E+051.67E+046.20E+06397.3366
STD0.01340.0492325.11981.24E-073043.94700.02802505.18080.19755.92E+056.62E+031.17E+07151.6787
Rank247193851110126
cec04AVE402.5610404.3365410.8631400.00001059.8248401.5091441.9227447.2097477.4278694.09313366.0150404.4857
STD1.87291.48572.48541.10E-05506.38641.546825.777889.988950.4126131.89411568.44371.3019
Rank346111278910125
cec05AVE507.3202510.4546525.6527518.5186585.3797509.0703531.3297528.7659543.3536577.9511663.3490515.2876
STD2.62975.35254.20077.872013.27985.453711.019010.371015.826321.965236.94357.4489
Rank136511287910124
cec06AVE600.0014600.0012604.9375601.1525645.8434600.3361605.6297605.8022635.5544647.6467705.6071600.2195
STD0.00210.00271.22902.82097.85910.84004.57834.863711.393813.816115.76710.2002
Rank216510478911123
cec07AVE724.8801725.0783737.1078731.7659806.3894729.3690745.4072731.0701776.8818815.25821203.7555741.0450
STD5.61625.67106.25518.510910.98197.811213.24418.889331.998623.8207109.30329.4407
Rank126510384911127
cec08AVE808.7147813.8299818.0646820.1136856.2384804.3845828.7147823.3905837.1595854.5212950.9072819.7897
STD2.47444.72124.30317.47127.43572.13847.765612.362011.124915.428821.50274.3114
Rank234611187910125
cec09AVE900.1293900.6541915.7054911.25441477.5334900.3249962.9553924.07091091.06981712.33495670.2241917.3586
STD0.24191.89397.510117.7217167.38210.613463.127754.7140191.0585381.57321788.947332.5949
Rank135410287911126
cec10AVE1333.28411425.84871517.74741582.21252508.42681222.89871920.43282035.35731976.59922401.34523808.63571415.7327
STD153.5287150.0427132.9630218.1276295.4488211.1734357.3134404.6299281.2993383.0208367.5857195.5798
Rank245611179810123
cec11AVE1103.92411106.29441128.95911132.86562919.69801108.48461397.40741250.74834878.98514082.69254.12E+041113.7413
STD2.05684.01168.465628.07841220.32715.0667977.4104206.64294671.94945143.40457.72E+045.8604
Rank125693871110124
cec12AVE7362.30778907.60286.51E+055967.17692.84E+081637.41748.96E+052.50E+074.22E+064.15E+073.39E+091.59E+05
STD5635.16447205.59334.36E+055347.18602.75E+08216.83429.71E+059.45E+075.90E+064.88E+071.81E+092.90E+05
Rank346211179810125
cec13AVE1495.11861318.44921.49E+041433.97091.56E+071315.11061.23E+041.01E+046.08E+041.72E+043.90E+082546.0110
STD733.971415.51387.51E+03182.35061.98E+079.13547.88E+031.60E+041.38E+051.19E+043.71E+082019.8441
Rank428311176109125
cec14AVE1407.26271406.03051722.57131455.10414655.98591417.40272591.50712864.17411.02E+042.81E+035.97E+061410.5210
STD4.31403.7572311.972124.11561706.56569.64361687.23286123.75981.03E+041.55E+035.96E+064.9376
Rank216510479118123
cec15AVE1502.84351503.88472474.59221536.50287655.44461511.77913297.20311985.86658896.43951.38E+046.44E+071584.7036
STD1.97193.07241015.281345.94434511.80199.70821864.26841260.16087789.92507.26E+031.04E+08286.2738
Rank127493861011125
cec16AVE1681.03831700.55401645.59411791.13642065.91431653.23141829.68001752.68601853.52421990.38772912.32841815.4437
STD89.855473.887336.3768155.7101148.588264.2038138.0245138.4040109.7768123.0833325.0862132.8314
Rank341611285910127
cec17AVE1710.42041715.68681740.56641754.91791830.73021739.64891762.48901798.30871783.96621874.91302392.90671736.2611
STD8.586611.37206.709848.118929.929210.397615.925149.167154.1561105.8992249.242411.0306
Rank125610479811123
cec18AVE1802.01531803.53852.09E+041886.80737.14E+071819.73033.87E+042.07E+041.44E+041.58E+041.11E+093106.6477
STD1.71393.30111.58E+04114.14502.16E+0814.97071.09E+041.94E+049.04E+031.21E+048.18E+082504.4269
Rank129411310867125
cec19AVE1900.99291900.66003234.23611918.00088.96E+051902.72982.19E+047.53E+039507.90767.25E+051.23E+081930.8991
STD0.84750.55291852.037228.24984.92E+052.40255.32E+041.84E+047146.55481.73E+062.25E+0883.0316
Rank216411397810125
cec20AVE2007.99232009.47152035.48372045.37062273.11322022.38972105.15022087.82622104.47952226.81652561.86712016.8935
STD8.63327.87293.734827.137055.31558.734057.626150.903558.307168.3003179.741727.6321
Rank125611497810123
cec21AVE2252.40612256.77772217.00892294.88332319.91132274.56552328.42652309.49872343.70292365.41022464.83772283.5269
STD58.252057.894137.403456.761755.985649.70057.842156.654760.018535.943227.505751.9989
Rank231684971011125
cec22AVE2297.00352297.76192315.35922306.10433002.01192300.99662339.01182311.50272378.57132842.35134529.49032319.1097
STD19.183416.17372.06102.8618250.72530.742433.743332.353641.4632516.2846584.26417.3529
Rank126411385910127
cec23AVE2611.90412616.19712622.26342622.98952694.34602617.03442635.13562662.02412678.97892692.22802907.23042616.0528
STD5.19237.20043.87939.200419.19619.310915.265851.983819.989640.782497.36165.2518
Rank135611478910122
cec24AVE2669.85952694.33312700.07462747.98032858.84282644.11292761.87262791.81942782.87642811.56303037.41022500.5324
STD114.172999.9286100.621960.763752.1016120.973817.257532.308876.719060.6137104.33610.4202
Rank345611279810121
cec25AVE2919.75722917.14732933.78382932.04743337.89212923.83882938.70482927.62473008.34353250.46734727.97932915.9381
STD23.911423.457918.807022.7472120.546322.922047.408383.644339.4223181.1080880.776722.8398
Rank327611485910121
cec26AVE2912.33502902.96712934.08312980.52614079.56762915.76213061.97703223.62483503.10094221.52815222.15852907.7708
STD39.439313.269178.6913126.0442281.259146.8308208.2813407.6659346.3298393.2302426.7847120.3242
Rank315610478911122
cec27AVE3094.44283096.30483095.15743101.53963177.97993097.39613100.07833133.98453209.30033226.39753482.95443093.0179
STD2.24914.11142.324116.991050.44513.985514.088030.755738.732368.9513208.34662.6834
Rank243795681011121
cec28AVE3126.93663161.71433298.24743271.55033788.64013150.70653398.10363437.71793603.98453526.92424076.73513254.7122
STD67.5801114.0390129.1247145.5009136.6237106.474287.7514154.4405162.5968165.7635407.5146139.0270
Rank136511278109124
cec29AVE3156.87563164.73443177.35933225.55863422.42523152.04103193.72973264.35183307.78293447.67264124.67533182.2187
STD13.919515.697911.367559.3761147.892213.174038.327982.445770.9418143.8086317.016827.7701
Rank234710168911125
cec30AVE8279.39161.48E+051.79E+051.26E+051.14E+076.55E+047.48E+051.72E+064.54E+065.39E+062.16E+089.22E+04
STD5677.85043.47E+052.77E+052.99E+053.73E+072.77E+059.13E+052.16E+063.31E+066.60E+061.51E+082.00E+05
Rank156411278910123
Mean rank | 1.8621 | 2.7241 | 5.4483 | 4.8276 | 10.3793 | 2.6897 | 7.6552 | 7.2414 | 8.9655 | 10.0690 | 12.0000 | 4.1379
Final ranking | 1 | 3 | 6 | 5 | 11 | 2 | 8 | 7 | 9 | 10 | 12 | 4
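The Mean rank and Final ranking rows are obtained by ranking all algorithms on each benchmark function by average error and then averaging the per-function ranks. A minimal sketch of this aggregation (the numbers below are illustrative, not the paper's data):

```python
import numpy as np

# Illustrative average errors (rows: benchmark functions, columns: algorithms).
# These values are made up for demonstration; they are not the paper's results.
avg_error = np.array([
    [822.8, 1155.7, 9.2e7, 1987.7],   # function 1
    [300.0, 300.02, 1290.6, 300.1],   # function 2
    [402.6, 404.3, 410.9, 447.2],     # function 3
])

# Rank algorithms per function (1 = best, i.e. lowest average error).
per_function_rank = np.argsort(np.argsort(avg_error, axis=1), axis=1) + 1

mean_rank = per_function_rank.mean(axis=0)             # "Mean rank" row
final_ranking = np.argsort(np.argsort(mean_rank)) + 1  # "Final ranking" row

print(mean_rank)
print(final_ranking)
```

The double `argsort` yields competition ranks when there are no ties; a tie-aware ranking (e.g. average ranks) would be needed if two algorithms shared an average error.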
Table 7. Statistical outcomes of the different search methods on the CEC2019 test functions.
Function / Index | ARO | AOA | GWO | COOT | GJO | INFO | MFO | MVO | SCA | SSA | WOA | LARO
F1Best15.12E+0711111.10E+058.03E+041.15975.62E+041.71E+031
Worst12.53E+081.97E+053.48E+041.22E+0312.02E+071.36E+068.30E+062.99E+062.27E+071
Mean11.50E+081.45E+044.77E+0373.893017.36E+065.78E+051.20E+067.12E+055.09E+061
STD04.89E+074.41E+049.90E+03270.328707.51E+063.72E+052.43E+066.81E+057.25E+060
Rank112654111798101
F2Best3.69567.07E+0331.27204.42664.11764.1009220.3923170.7931470.2927181.58793610.09454.0581
Worst4.32892.28E+04557.61105.1682511.48384.45738.76E+03651.50375186.14952492.17429352.60344.3569
Mean4.20121.34E+04282.42434.895475.40324.28811.95E+03426.74483063.8568763.76836258.69624.2462
STD0.13544.23E+03161.74190.2301155.62770.07352.69E+03122.92751529.9521707.50831510.30980.0656
Rank112645397108112
F3Best1.00021.00001.40951.00011.47381.40911.40911.00046.01941.00002.40981.4096
Worst2.29637.48607.71123.08037.90027.71209.712011.711610.76677.68566.30922.7905
Mean1.50275.98942.90101.76764.23632.00466.05287.08268.52443.14564.74401.7488
STD0.29361.46242.10500.57002.11001.78952.44672.41921.27492.00161.04590.4476
Rank195374101112682
F4Best5.974817.97834.36859.957112.44129.954611.88428.961032.167610.949623.30105.9748
Worst25.873998.504729.187225.876238.205355.722467.016734.831863.465260.6973109.527425.8739
Mean13.194851.723213.660417.172823.622523.924828.623417.933744.895530.649752.868112.8513
STD5.362722.43785.77264.53727.951011.591114.04327.27827.963314.032221.91665.6119
Rank211346785109121
F5Best1.029563.47091.10141.01971.24171.03931.03941.15844.27341.12551.63581.0172
Worst1.1796162.46143.74021.246212.29161.31241.57101.677610.27431.46983.20601.1451
Mean1.083499.53021.66011.12153.36351.15681.18401.30156.87681.24161.91461.0747
STD0.044425.33200.56390.06312.47200.08600.13870.13151.63550.09800.35900.0295
Rank212831045711691
F6Best1.00029.94221.19431.21481.54551.22301.33291.14464.69531.13245.15491.0000
Worst3.622613.66286.76846.04706.98936.10778.39034.80638.11018.157811.36013.7205
Mean1.563111.57792.64872.75644.06162.94464.01082.49976.72213.68087.89181.5055
STD0.85951.08231.60381.34751.26161.40911.86791.06111.09221.73901.80310.7416
Rank212459683107111
F7Best126.4556783.708555.0048500.4912515.5437365.0528355.7205298.10401205.5680527.5808281.048316.3069
Worst825.41101551.29951300.94181427.83281744.29351461.90151587.92331192.05921696.16761690.01351936.0646916.3734
Mean449.81921130.5116693.0267869.9624995.5096871.74171076.1407753.13811386.47411002.09741116.1865386.6686
STD186.7004240.2304310.3256229.9777315.9565283.0647305.6328212.0728133.1321351.7500402.5621231.7633
Rank211357694128101
F8Best2.42274.14482.62173.15593.24443.15573.62852.80283.94343.74873.70712.3788
Worst3.71155.46394.02294.58974.53524.50204.81704.99114.67405.01885.06333.7337
Mean3.20394.97343.32523.93393.90053.78044.37423.95544.38464.38604.66113.0415
STD0.35180.33630.37170.36250.34880.35500.36580.54420.20450.38090.30240.3829
Rank212365487910111
F9Best1.01961.41591.09781.12801.09711.06921.10231.08141.35741.03771.15421.0504
Worst1.12244.45161.28751.56461.45551.28571.62531.25681.72371.87221.97291.2603
Mean1.08183.25421.16931.26531.23271.15561.34851.16041.50821.31411.37011.1386
STD0.02960.69020.06380.11730.08080.06580.14140.05270.11610.18800.19720.0474
Rank112576394118102
F10Best20.980820.94507.42561.000111.725521.000021.000021.006121.208720.998521.02351.0000
Worst21.005420.999521.509221.650921.579321.107321.271221.311721.523221.000021.400921.0076
Mean20.998520.982820.707518.440620.534821.047121.092921.033821.381620.999921.148918.0553
STD0.00560.01023.12696.83242.65250.04120.08980.06690.08180.00030.09697.1881
Rank654239108127111
Mean rank | 1.9091 | 10.9091 | 5.0000 | 4.2727 | 6.5455 | 4.6364 | 8.3636 | 6.3636 | 10.6364 | 7.5455 | 10.1818 | 1.3636
Final ranking | 2 | 12 | 5 | 3 | 7 | 4 | 9 | 6 | 11 | 8 | 10 | 1
+/=/− | 2/7/1 | 0/0/10 | 0/2/8 | 0/1/9 | 0/0/10 | 0/3/7 | 0/0/10 | 0/1/9 | 0/0/10 | 0/1/9 | 0/0/10 | −/−/−
Table 8. Statistical output and associated p-values on the CEC2019 test functions.
p-Value | ARO | AOA | GWO | COOT | GJO | INFO | MFO | MVO | SCA | SSA | WOA
F1 | NaN/= | 8.01E-09/− | 2.99E-08/− | 2.57E-05/− | 0.0002/− | NaN/= | 7.99E-09/− | 8.01E-09/− | 8.01E-09/− | 8.01E-09/− | 8.01E-09/−
F2 | 0.0531/= | 6.80E-08/− | 6.80E-08/− | 6.80E-08/− | 5.87E-06/− | 0.0810/= | 6.80E-08/− | 6.80E-08/− | 6.80E-08/− | 6.80E-08/− | 6.80E-08/−
F3 | 0.0167/+ | 1.20E-06/− | 0.0499/− | 0.4903/= | 2.04E-05/− | 3.73E-05/− | 2.92E-05/− | 1.20E-06/− | 6.80E-08/− | 0.0908/= | 9.17E-08/−
F4 | 0.8285/= | 1.22E-07/− | 0.4092/= | 0.0036/− | 2.58E-05/− | 0.0002/− | 2.58E-05/− | 0.0077/− | 6.73E-08/− | 1.40E-05/− | 7.82E-08/−
F5 | 0.8181/= | 6.80E-08/− | 1.66E-07/− | 0.0114/− | 6.80E-08/− | 0.0006/− | 0.0011/− | 6.80E-08/− | 6.80E-08/− | 1.43E-07/− | 6.80E-08/−
F6 | 0.6949/= | 6.80E-08/− | 0.0007/− | 0.0002/− | 6.92E-07/− | 0.0001/− | 3.99E-06/− | 0.0005/− | 6.80E-08/− | 1.81E-05/− | 6.80E-08/−
F7 | 0.0208/− | 1.92E-07/− | 0.0016/− | 2.36E-06/− | 1.20E-06/− | 1.25E-05/− | 1.20E-06/− | 4.68E-05/− | 6.80E-08/− | 1.20E-06/− | 2.06E-06/−
F8 | 0.1806/= | 6.80E-08/− | 0.0385/− | 1.05E-06/− | 6.92E-07/− | 7.58E-06/− | 1.06E-07/− | 1.10E-05/− | 6.80E-08/− | 6.80E-08/− | 7.90E-08/−
F9 | 7.41E-05/+ | 6.80E-08/− | 0.2977/= | 6.61E-05/− | 7.41E-05/− | 0.5428/= | 7.58E-06/− | 0.1988/= | 6.80E-08/− | 0.0003/− | 7.95E-07/−
F10 | 0.3648/= | 0.0015/− | 7.95E-07/− | 7.41E-05/− | 7.58E-06/− | 1.20E-06/− | 7.95E-07/− | 7.90E-08/− | 6.80E-08/− | 0.0009/− | 6.80E-08/−
+/=/− | 2/7/1 | 0/0/10 | 0/2/8 | 0/1/9 | 0/0/10 | 0/3/7 | 0/0/10 | 0/1/9 | 0/0/10 | 0/1/9 | 0/0/10
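The entries above are p-values from pairwise Wilcoxon rank-sum tests between LARO's runs and each competitor's runs, with +, =, and − marking competitors that are significantly better, statistically equivalent, or significantly worse at the 0.05 level. A minimal sketch of one such comparison using SciPy (the run data is synthetic and the marker convention is an assumption, not quoted from the paper):

```python
from scipy.stats import ranksums

def compare(laro_runs, other_runs, alpha=0.05):
    """Wilcoxon rank-sum comparison; returns (p_value, marker).

    Assumed marker convention: '-' when LARO's errors are significantly
    lower than the competitor's, '+' when significantly higher, '=' when
    the difference is not significant at level alpha.
    """
    stat, p = ranksums(laro_runs, other_runs)
    if p >= alpha:
        return p, "="
    laro_mean = sum(laro_runs) / len(laro_runs)
    other_mean = sum(other_runs) / len(other_runs)
    return p, "-" if laro_mean < other_mean else "+"

# Synthetic best-error samples over repeated independent runs.
laro = [1.2, 1.1, 1.3, 1.0, 1.2, 1.1, 1.25, 1.15]
other = [2.2, 2.4, 2.1, 2.5, 2.3, 2.6, 2.2, 2.35]
p, marker = compare(laro, other)
print(p, marker)
```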
Table 9. The output results of search methods and the best average solution for solving the WBD problem.
Methods | z1 | z2 | z3 | z4 | Average Value
AOA | 0.458604565 | 5.343890233 | 7.07733473 | 0.582699182 | 4.580028691
WOA | 0.213580688 | 3.755905076 | 8.582703553 | 0.275485632 | 2.030078497
SCA | 0.195946114 | 3.347664206 | 9.347897424 | 0.210332519 | 1.779589725
SSA | 0.163373335 | 4.29728193 | 9.060682709 | 0.206031604 | 1.762193145
MVO | 0.193848845 | 3.212754588 | 9.065064873 | 0.205658775 | 1.676674633
MFO | 0.206309461 | 3.005386467 | 8.998268989 | 0.207726916 | 1.66898036
GJO | 0.200505501 | 3.091330062 | 9.041439917 | 0.205824498 | 1.667357263
GWO | 0.204598361 | 3.017498369 | 9.038280088 | 0.205769906 | 1.662180603
COOT | 0.204717093 | 3.013102973 | 9.039712907 | 0.205733798 | 1.661704081
INFO | 0.205729646 | 2.996844583 | 9.036623765 | 0.205729646 | 1.660343027
LARO | 0.20572964 | 2.996844651 | 9.03662391 | 0.20572964 | 1.660343003
Table 10. The statistical output results of the search methods in solving the WBD problem.
Methods | Best | Worst | Average | STD
AOA | 2.794423948 | 6.845549744 | 4.580028691 | 0.91504286
WOA | 1.679641671 | 3.086559568 | 2.030078497 | 0.434995141
SCA | 1.697528285 | 1.854267461 | 1.779589725 | 0.037930287
SSA | 1.662402066 | 2.101924119 | 1.762193145 | 0.118760299
MVO | 1.663463164 | 1.704297699 | 1.676674633 | 0.011127428
MFO | 1.660343003 | 1.803084286 | 1.66898036 | 0.032089776
GJO | 1.661661498 | 1.681240824 | 1.667357263 | 0.005566875
GWO | 1.660983594 | 1.664569661 | 1.662180603 | 0.000980854
COOT | 1.660411733 | 1.667097175 | 1.661704081 | 0.001993273
INFO | 1.660343003 | 1.660343483 | 1.660343027 | 1.08E-07
LARO | 1.660343003 | 1.660343003 | 1.660343003 | 1.61E-12
Table 11. The output results of the different search methods and suitable average for solving the PVD problem.
Methods | z1 | z2 | z3 | z4 | Average Value
AOA | 23.36588368 | 22.25433926 | 61.53559125 | 107.6810327 | 19175.76962
MVO | 15.24826219 | 7.671749955 | 49.48975553 | 107.3562022 | 6308.314027
INFO | 14.57872411 | 7.36326842 | 47.58766135 | 126.8355535 | 6220.150067
SSA | 14.8454006 | 7.375997301 | 48.36652284 | 118.851193 | 6207.171299
WOA | 15.46867282 | 7.297109316 | 49.96403036 | 112.8166863 | 6183.161843
COOT | 14.53850536 | 7.378281368 | 46.86702321 | 129.8317982 | 6174.664451
SCA | 12.80061756 | 6.934700092 | 41.84828537 | 187.0166496 | 6066.462829
GJO | 12.93626745 | 6.44471116 | 42.86942058 | 179.8874482 | 5841.359167
MFO | 12.66209378 | 6.561732656 | 42.01439281 | 180.7348524 | 5836.539712
GWO | 12.0299589 | 5.957082715 | 40.32001823 | 200 | 5654.433575
LARO | 11.79284485 | 5.927148417 | 40.31961872 | 200 | 5654.370337
Table 12. The statistical output of the different search methods in completing the PVD problem.
Methods | Best | Worst | Average | STD
AOA | 4187.053145 | 39246.47427 | 19175.76962 | 11415.14805
MVO | 5902.478907 | 6989.797802 | 6308.314027 | 276.6784548
INFO | 5654.370337 | 7332.841508 | 6220.150067 | 372.4182866
SSA | 5593.159652 | 6820.410118 | 6207.171299 | 323.7892186
WOA | 3239.204029 | 7896.968613 | 6183.161843 | 1062.960652
COOT | 5654.370337 | 6410.086761 | 6174.664451 | 201.1646918
SCA | 5400.311322 | 6480.192013 | 6066.462829 | 325.5644391
GJO | 5654.371874 | 7348.583394 | 5841.359167 | 516.7116594
MFO | 5654.370337 | 6406.492768 | 5836.539712 | 241.0245654
GWO | 5654.37137 | 5654.727705 | 5654.433575 | 0.078883037
LARO | 5654.370337 | 5654.370337 | 5654.370337 | 0
Table 13. The output results of the different search methods and suitable average for solving the TCS problem.
Methods | z1 | z2 | z3 | g1 | g2 | g3 | g4 | Average Value
AOA | 0.104108762 | 0.895747796 | 9.974698398 | −0.727734476 | −0.483278426 | −2.066153786 | −0.333428961 | 0.159056177
MVO | 0.068310124 | 0.905983841 | 2.190034779 | −0.009015503 | −0.000527276 | −4.493836611 | −0.35047069 | 0.017530118
WOA | 0.058645235 | 0.561075504 | 5.965217771 | −6.03E-06 | −3.83E-09 | −4.298337604 | −0.58685284 | 0.013873948
MFO | 0.053398387 | 0.406129633 | 10.28537326 | −7.77E-17 | −0.001737508 | −4.100612643 | −0.693647987 | 0.013015725
SSA | 0.051633058 | 0.360068897 | 12.6417731 | −5.15E-07 | −0.004482237 | −4.005031941 | −0.72553203 | 0.013008419
SCA | 0.05080346 | 0.334825245 | 13.19661988 | −0.009149089 | −0.004520311 | −3.933453135 | −0.742914197 | 0.012934593
COOT | 0.053201979 | 0.397112193 | 9.864478444 | −2.33E-05 | −1.59E-05 | −4.113557864 | −0.699790552 | 0.012806575
GJO | 0.050609914 | 0.331631961 | 13.10680319 | −0.000992211 | −0.000352761 | −3.991490705 | −0.745172084 | 0.012725811
INFO | 0.05262987 | 0.380756851 | 10.2445935 | −2.64E-07 | −1.50E-06 | −4.093957217 | −0.711075519 | 0.012716498
GWO | 0.050541414 | 0.329996477 | 13.17162459 | −0.000408023 | −0.000169331 | −3.992540736 | −0.746308073 | 0.01271224
LARO | 0.051804915 | 0.359516418 | 11.12900459 | −2.02E-05 | −1.50E-05 | −4.063715705 | −0.724095628 | 0.012665939
Table 14. The statistical output of the different search methods in completing the TCS problem.
Methods | Best | Worst | Average | STD
AOA | 0.013150163 | 0.622991013 | 0.159056177 | 0.173444481
MVO | 0.013867314 | 0.018384424 | 0.017530118 | 0.00099031
WOA | 0.012666252 | 0.017773302 | 0.013873948 | 0.001345825
MFO | 0.012665268 | 0.015266478 | 0.013015725 | 0.000614519
SSA | 0.012704122 | 0.01460674 | 0.013008419 | 0.000441887
SCA | 0.012802077 | 0.013207983 | 0.012934593 | 0.00012683
COOT | 0.012665665 | 0.013373361 | 0.012806575 | 0.000194775
GJO | 0.012683357 | 0.012740967 | 0.012725811 | 1.61E-05
INFO | 0.012665233 | 0.012945697 | 0.012716498 | 6.46E-05
GWO | 0.012681434 | 0.012731782 | 0.01271224 | 1.55E-05
LARO | 0.012665275 | 0.012669109 | 0.012665939 | 9.49E-07
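The TCS values above are consistent with the standard tension/compression spring benchmark, which minimizes the spring weight (N + 2)·D·d² subject to four g ≤ 0 constraints. A minimal sketch that re-evaluates LARO's reported solution under this standard formulation (small deviations from the tabulated constraint values are expected, since the tabulated variables are rounded averages):

```python
def tcs_objective(d, D, N):
    """Spring weight: (N + 2) * D * d^2 (wire diameter d, coil diameter D, turns N)."""
    return (N + 2) * D * d**2

def tcs_constraints(d, D, N):
    """The four standard g <= 0 constraints of the spring benchmark."""
    g1 = 1 - (D**3 * N) / (71785 * d**4)                       # deflection
    g2 = (4*D**2 - d*D) / (12566 * (D*d**3 - d**4)) \
         + 1 / (5108 * d**2) - 1                               # shear stress
    g3 = 1 - 140.45 * d / (D**2 * N)                           # surge frequency
    g4 = (d + D) / 1.5 - 1                                     # outer diameter
    return g1, g2, g3, g4

# LARO's reported solution from Table 13.
x = (0.051804915, 0.359516418, 11.12900459)
print(tcs_objective(*x))     # close to the tabulated 0.012666
print(tcs_constraints(*x))   # all at most marginally above zero
```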
Table 15. The output results of the different search methods and suitable average for solving the GTD problem.
Methods | z1 | z2 | z3 | z4 | Average Value
AOA | 27 | 19 | 49 | 47 | 0.00587623
MFO | 19 | 21 | 47 | 52 | 5.37E-09
SCA | 22 | 20 | 52 | 50 | 1.17E-09
WOA | 17 | 18 | 45 | 45 | 8.45E-10
MVO | 22 | 16 | 49 | 47 | 6.45E-10
SSA | 18 | 17 | 42 | 50 | 6.15E-10
COOT | 18 | 19 | 47 | 48 | 2.98E-10
INFO | 18 | 22 | 49 | 50 | 2.95E-10
GJO | 20 | 20 | 50 | 50 | 1.76E-10
GWO | 18 | 19 | 49 | 47 | 1.66E-10
LARO | 19 | 19 | 47 | 49 | 1.19E-11
Table 16. The statistical output of the different search methods in completing the GTD problem.
Methods | Best | Worst | Average | STD
AOA | 1.09E-07 | 0.030969704 | 0.00587623 | 0.007816411
WOA | 2.31E-11 | 2.18E-08 | 5.37E-09 | 5.89E-09
MFO | 2.31E-11 | 2.36E-09 | 1.17E-09 | 6.59E-10
SCA | 2.70E-12 | 2.36E-09 | 8.45E-10 | 8.19E-10
MVO | 2.70E-12 | 1.36E-09 | 6.45E-10 | 4.78E-10
GJO | 2.70E-12 | 2.36E-09 | 6.15E-10 | 6.58E-10
INFO | 2.70E-12 | 2.36E-09 | 2.98E-10 | 5.83E-10
SSA | 2.31E-11 | 1.36E-09 | 2.95E-10 | 4.25E-10
GWO | 2.70E-12 | 1.36E-09 | 1.76E-10 | 3.42E-10
COOT | 2.70E-12 | 9.92E-10 | 1.66E-10 | 3.43E-10
LARO | 2.70E-12 | 2.31E-11 | 1.19E-11 | 1.04E-11
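The GTD benchmark minimizes the squared deviation of a four-gear train's transmission ratio from the target 1/6.931; the best values of 2.70E-12 reported above are on the order of the classic best-known integer tooth counts. A minimal sketch of the standard objective (the variable-to-gear assignment here is the conventional one, assumed rather than quoted from the paper):

```python
def gtd_objective(ta, tb, td, tf):
    """Gear train design: squared error between the realized
    transmission ratio tb*td/(ta*tf) and the target 1/6.931."""
    return (1 / 6.931 - (tb * td) / (ta * tf)) ** 2

# One permutation of the classic best-known tooth counts for this benchmark.
print(gtd_objective(43, 16, 19, 49))   # on the order of 2.7e-12
```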
Table 17. The output results of the different search methods and suitable average for solving the SRD problem.
Methods | z1 | z2 | z3 | z4 | z5 | z6 | z7 | Average Value
AOA | 3.467971409 | 0.723513271 | 21.74751447 | 7.880516747 | 8.061914543 | 3.576430883 | 5.404132188 | 4264.527578
WOA | 3.523843197 | 0.7 | 17.1345532 | 7.702300289 | 7.976748724 | 3.443370435 | 5.319866042 | 3085.450381
SCA | 3.593050846 | 0.700150831 | 17.0002346 | 7.633189151 | 8.058133822 | 3.428687329 | 5.319457954 | 3084.05852
MVO | 3.519000807 | 0.7 | 17 | 7.496881857 | 7.969722319 | 3.428863386 | 5.287122464 | 3030.335706
SSA | 3.515761593 | 0.700000002 | 17 | 7.788164831 | 8.039769946 | 3.413625503 | 5.286767009 | 3029.139433
GJO | 3.505489383 | 0.700118821 | 17.00158842 | 7.6603505 | 7.906273576 | 3.364100739 | 5.288853105 | 3009.67407
GWO | 3.502240237 | 0.700011154 | 17.00068413 | 7.593587258 | 7.889626 | 3.354667829 | 5.287834784 | 3003.700073
MFO | 3.505 | 0.7 | 17 | 7.35 | 7.825 | 3.350640526 | 5.286692041 | 2999.286604
COOT | 3.500000039 | 0.700000001 | 17 | 7.300000174 | 7.8 | 3.350541026 | 5.28668327 | 2996.301629
INFO | 3.5 | 0.7 | 17 | 7.3 | 7.8 | 3.350540949 | 5.28668323 | 2996.301563
LARO | 3.5 | 0.7 | 17.00002328 | 7.300000374 | 7.8 | 3.350540931 | 5.286683226 | 2996.301563
Table 18. The statistical output of the different search methods in completing the SRD problem.
Methods | Best | Worst | Average | STD
AOA | 3227.920049 | 6010.091558 | 4264.527578 | 742.9424323
WOA | 3012.692509 | 3392.408954 | 3085.450381 | 90.75866975
SCA | 3047.198643 | 3133.520817 | 3084.05852 | 23.93469942
MVO | 3003.777974 | 3072.80323 | 3030.335706 | 18.55650452
SSA | 3002.149409 | 3094.789692 | 3029.139433 | 23.381985
GJO | 3000.131855 | 3029.622228 | 3009.67407 | 6.744875293
GWO | 2999.778235 | 3009.290864 | 3003.700073 | 2.904547932
MFO | 2996.301563 | 3035.578647 | 2999.286604 | 9.103443233
COOT | 2996.301564 | 2996.301848 | 2996.301629 | 9.34E-05
INFO | 2996.301563 | 2996.301563 | 2996.301563 | 4.00E-08
LARO | 2996.301563 | 2996.301563 | 2996.301563 | 9.54E-10
Table 19. The output results of the different search methods and suitable average for solving the TCD problem.
Methods | z1 | z2 | Average Value
AOA | 6.012282217 | 0.315448278 | 30.15798064
WOA | 5.489385602 | 0.292558846 | 26.70565649
SCA | 5.470272666 | 0.292197171 | 26.60424231
GJO | 5.453519738 | 0.291625884 | 26.49283541
GWO | 5.452340993 | 0.291660602 | 26.4889654
MVO | 5.452234536 | 0.29165614 | 26.4882102
COOT | 5.452180789 | 0.291626468 | 26.48636379
INFO | 5.452181458 | 0.291626391 | 26.48636292
SSA | 5.45218082 | 0.29162643 | 26.48636194
MFO | 5.452180736 | 0.291626429 | 26.48636147
LARO | 5.452180736 | 0.291626429 | 26.48636147
Table 20. The statistical output of the different search methods in completing the TCD problem.
Methods | Best | Worst | Average | STD
AOA | 26.81127923 | 34.46559721 | 30.15798064 | 2.258475608
WOA | 26.49962106 | 27.5573071 | 26.70565649 | 0.23330885
SCA | 26.52613096 | 26.69821662 | 26.60424231 | 0.050054727
GJO | 26.48802612 | 26.49798174 | 26.49283541 | 0.002808313
GWO | 26.48685573 | 26.49234446 | 26.4889654 | 0.001724975
MVO | 26.48681494 | 26.49219586 | 26.4882102 | 0.001225866
COOT | 26.48636148 | 26.4863693 | 26.48636379 | 2.65E-06
INFO | 26.48636147 | 26.48639027 | 26.48636292 | 6.44E-06
SSA | 26.48636153 | 26.48636257 | 26.48636194 | 3.10E-07
MFO | 26.48636147 | 26.48636147 | 26.48636147 | 3.09E-10
LARO | 26.48636147 | 26.48636147 | 26.48636147 | 3.65E-15
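The TCD averages above can be reproduced from the listed variables under the common tubular column cost function, which sums a material term 9.8·d·t and a construction term 2·d (this formulation is the standard benchmark one, assumed here rather than quoted from the paper):

```python
def tcd_cost(d, t):
    """Tubular column cost: material term 9.8*d*t plus construction term 2*d
    (mean column diameter d, tube thickness t)."""
    return 9.8 * d * t + 2 * d

# LARO's reported solution from Table 19 reproduces its average value.
print(tcd_cost(5.452180736, 0.291626429))   # close to 26.486361
```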
Wang, Y.; Huang, L.; Zhong, J.; Hu, G. LARO: Opposition-Based Learning Boosted Artificial Rabbits-Inspired Optimization Algorithm with Lévy Flight. Symmetry 2022, 14, 2282. https://doi.org/10.3390/sym14112282