Introduction

In recent years, many nature-inspired optimization algorithms have been proposed. Among them, a number of swarm-inspired algorithms have been widely appreciated, such as the Particle Swarm Optimization algorithm (PSO)1, Firefly Algorithm (FA)2, Dragonfly Algorithm (DA)3, Whale Optimization Algorithm (WOA)4, Grey Wolf Optimizer (GWO)5, Monarch Butterfly Optimization (MBO)6, Earthworm Optimization Algorithm (EWA)7, Elephant Herding Optimization (EHO)8, Moth Search (MS) algorithm9, Slime Mould Algorithm (SMA)10, Colony Predation Algorithm (CPA)10 and Harris Hawks Optimization (HHO)11. Besides, quite a number of physics-inspired algorithms simulate physical laws of the universe or nature, such as Curved Space Optimization (CSO)12 and Water Wave Optimization (WWO)13. Moreover, some algorithms built on mathematical foundations are also creative approaches, e.g. the Runge Kutta optimizer (RUN)14.

On the other hand, some algorithms simulate human behavior, such as Teaching–Learning-Based Optimization (TLBO)15 and Human Behavior-Based Optimization (HBBO)16. Meanwhile, the Genetic Algorithm (GA)17 is inspired by evolution and has achieved considerable success in solving optimization problems in many fields. With the growing popularity of GA, many evolution-based algorithms have been proposed in the literature, including Evolutionary Programming (EP)18 and Evolutionary Strategies (ES)19.

Nowadays, metaheuristic algorithms have become an essential tool for solving complex optimization problems in various fields. Many researchers have applied such algorithms to tackle difficult issues in biology20, economics21, engineering22,23, etc. Therefore, constructing new algorithms to meet such complex requirements has significant merit.

In this study, a robust algorithm is constructed for solving local and global optimization problems. The idea comes from the natural motion of planets in our solar system and the interplanetary interactions throughout their lifecycle. Newton's law of gravity describes the gravitational interaction between the Sun and the orbiting planets, which the algorithm exploits to find an optimized position through the characteristics of individual planets, namely their masses and their distances from one another.

In this paper, we propose an optimization algorithm that uses Newton's law of universal gravitation as the basis for its development. A number of pre-eminent features, such as local search and global search, are built into the simulation of planetary movement to increase the algorithm's ability to find exact solutions.

This research paper is structured as follows. In the next section, the construction of the meta-heuristic algorithm is presented; the structure of POA is simulated based on Newton's law of universal gravitation and astronomical phenomena. Then, a wide range of benchmark problems is used to demonstrate how effective POA is. At the same time, we present applications of POA to real engineering problems. Finally, based on the results presented, the last section reports the conclusions.

The Planet Optimization Algorithm (POA)

Physics is a fundamental science whose laws govern everything from the tiniest objects, such as electrons, neutrons, or protons, to extremely massive stars or galaxies (about a hundred thousand light-years across). The laws of physics are widely applied in everyday life, from transportation to medicine and from agriculture to industry. In science, physics is also the foundation for many other disciplines such as chemistry, biology, and even mathematics. In the field of artificial intelligence (AI), the laws of physics have inspired many optimization algorithms. In this study, we also present an algorithm based on such a physical law.

Inspiration

Inspired by the laws of motion in the universe, an algorithm is proposed from the mutual gravitational interaction between planets. Specifically, this optimization algorithm simulates Newton's law of universal gravitation. The core of this algorithm is given as follows:

$$\left| {\overrightarrow {F} } \right| = G \times m_{1} \times m_{2} /R^{2}$$
(1)

where \(\overrightarrow {F}\) is the gravitational force acting between two planets, \(G\) is the gravitational constant, \(R\) is the distance between the two planets, and \(m_{1} ,m_{2}\) are the masses of the two planets.

The gravitational attraction between two planets (as shown in Fig. 1) is given by Eq. (1). However, in this study, we find that using the force \(\overrightarrow {F}\) gives less effective results than using the moment \((M)\) as a parameter in the search process of the algorithm.

$$\left| {\overrightarrow {M} } \right| = \left| {\overrightarrow {F} } \right| \times R$$
(2)
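Equations (1) and (2) can be illustrated with a short sketch; the function names and sample values below are ours, and G is set to unity as in the algorithm:

```python
def gravity_force(m1, m2, R, G=1.0):
    """Magnitude of the gravitational force between two planets, Eq. (1)."""
    return G * m1 * m2 / R ** 2

def gravity_moment(m1, m2, R, G=1.0):
    """Moment |M| = |F| x R used as the search parameter, Eq. (2)."""
    return gravity_force(m1, m2, R, G) * R
```

Note that the moment decays as 1/R rather than 1/R^2, so distant planets retain a weaker but non-negligible interaction, which motivates its use in the search process.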
Figure 1
figure 1

The force F acting between two planets.

The planet optimization algorithm

The universe is unimaginably large and has no boundary; it is a giant space filled with galaxies, stars, planets, and many other interesting astrophysical objects. For simplicity and ease of visualization, we use the solar system to represent this algorithm's simulation.

First of all, a system consisting of the Sun, the Earth and the Moon (as shown in Fig. 2) is considered. It is well known that the Sun's gravitation keeps the Earth moving around it; indeed, the mass of the Sun is about 330,000 times that of the Earth. However, the Earth also creates a gravitational force large enough to keep the Moon in orbit around it. This demonstrates that two factors influence the motion of a planet: not only the mass but also the distance between the two planets. An algorithm simulating the law of universal gravitation is, therefore, presented as follows:

  • The Sun acts as the best solution. In the search space, it has the greatest mass, which means it exerts a greater gravitational moment on the planets around and near it.

  • Between the Sun and the other planets, there is a mutual gravitational attraction moment. However, this moment depends on the masses as well as the distance between the two objects. This means that, although the Sun has the largest mass of all the planets, its moment on very distant planets is negligible. This helps the algorithm avoid local optima, as illustrated in Fig. 3.

Figure 2
figure 2

The gravitational force acting between planets.

Figure 3
figure 3

Local and global optimization: (a) 3D view; (b) plane.

In the tth iteration, the red planet (see Fig. 3) has the biggest mass, so it represents the Sun. As the pink planets are close to the Sun, they move to the location of the Sun because of the gravitational attraction moment \((M_{p}^{t} )\) between the Sun and these planets.

Nevertheless, the red planet (the Sun) in the tth iteration does not occupy the desired position that we are looking for, i.e. the global minimum. In other words, if all planets moved to the red planet, the algorithm would get stuck in a local region. In contrast, the blue planet occupies a potential location far from the Sun. The interaction of the Sun with the blue planet \((M_{b}^{t} )\) is small, because the blue planet is far from the Sun in the tth iteration. Thus, the blue planet is quite free to search for a better location in the next iterations.

The main core of the algorithm is based on the above two principles. Besides, the Sun is the true target of the search, and of course we do not know its exact location. Therefore, the planet with the highest mass in the tth iteration acts as the Sun for that iteration.

The implementation of the algorithm is as follows:

Stage 1: the best start

Ideally, a good algorithm is one in which the final best solution is independent of the initial positions. Nevertheless, the reality is exactly the opposite for almost all stochastic algorithms. If the objective landscape is hilly and the global optimum is located in an isolated small region, the initial population plays an important role: if the initial random population does not create any solution in the vicinity of the global optimum, the probability that the population converges to the true optimum can be very low.

In contrast, if initial solutions are built near the globally optimal position, the probability that the population converges to the optimal location is very high; consequently, population initialization plays a vital role. Ideally, initialization should use an importance sampling method, such as the techniques applied in Monte Carlo methods, to sample solutions for a given objective landscape. This, however, requires sufficient knowledge of the problem, which is not available for most algorithms.

Similar to choosing the initial population, choosing the best solution in the original population to play the role of the Sun, toward which all other planets move, is important. This selection determines the convergence speed as well as the future accuracy of the algorithm.

Therefore, the algorithm's first step is to find an effective solution to play the role of the best solution, in order to increase the convergence and accuracy of the search in the first iterations.
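Stage 1 can be sketched as follows; the uniform random sampling and the function names are illustrative assumptions, since the text only requires that the best initial solution play the role of the Sun:

```python
import random

def initialize(obj, n_planets, low, up):
    """Create a random initial population and select the Sun (Stage 1)."""
    dim = len(low)
    planets = [[random.uniform(low[k], up[k]) for k in range(dim)]
               for _ in range(n_planets)]
    # The planet with the best (smallest) objective value acts as the Sun.
    sun = min(planets, key=obj)
    return planets, sun
```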

Stage 2: M factor

$$M = \left| {\overrightarrow {F} } \right| R_{{{\text{ij}}}} = G\frac{{m_{i} m_{j} }}{{R_{{{\text{ij}}}}^{2} }} \times R_{{{\text{ij}}}}$$
(3)

In Eq. (3), the following parameters are defined:

  • The mass of the planets:

    $$m_{i} ,m_{j} = \frac{1}{{a^{{obj_{i,j} /\alpha }} }}$$
    (4)

    where \(a = 2\) is a constant parameter, and \(\alpha = \left| {\max (obj) - obj_{sun} } \right|\). This means that the smaller the objective function value of a planet, the larger its mass. \(obj_{i,j}\), \(\max (obj)\), and \(obj_{sun}\) are the objective function values of the ith or jth planet, the worst planet, and the Sun, respectively.

  • The distance between any two objects i and j, with "Dim" dimensions, is the Cartesian distance calculated by Eq. (5):

    $$R_{{{\text{ij}}}} = \left\| {X_{i}^{t} - X_{j}^{t} } \right\| = \sqrt {\sum\limits_{k = 1}^{Dim} {\left( {X_{i,k}^{t} - X_{j,k}^{t} } \right)^{2} } }$$
    (5)
  • G is a parameter, set equal to unity in this algorithm.
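The quantities of Eqs. (3)–(5) can be sketched as below, assuming a minimization problem; the function names are ours:

```python
import math

def mass(obj_val, obj_worst, obj_sun, a=2.0):
    """Planet mass from Eq. (4): the smaller the objective, the larger the mass."""
    alpha = abs(obj_worst - obj_sun)
    return 1.0 / a ** (obj_val / alpha)

def distance(xi, xj):
    """Cartesian distance between planets i and j, Eq. (5)."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(xi, xj)))

def moment(mi, mj, Rij, G=1.0):
    """M factor of Eq. (3): M = |F| * Rij = G * mi * mj / Rij."""
    return G * mi * mj / Rij
```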

Stage 3: Global search

From the above, the formula built to simulate global search is given by Eq. (6):

$$\overrightarrow {{X_{i}^{t + 1} }} = \overrightarrow {{X_{i}^{t} }} + b \times \beta \times r_{1} \times \left( {\overrightarrow {{X_{Sun}^{t} }} - \overrightarrow {{X_{i}^{t} }} } \right)$$
(6)

The left-hand side of the formula is the position of the ith planet at iteration (t + 1), while the right-hand side consists of the following main elements:

  • \(\overrightarrow {{X_{i}^{t} }}\) is the current position of the ith planet at iteration t.

  • \(\beta = M_{i}^{t} /M_{{_{\max } }}^{t}\), \(r_{1} = rand(0,1)\), and \(b\) is a constant parameter.

  • \(\overrightarrow {{X_{Sun}^{t} }}\) is the current position of the Sun at iteration t.

where \(\beta\) is a coefficient that depends on M, as shown in Eq. (3): \(M_{i}^{t}\) is the gravitational moment of the Sun on the ith planet at iteration t, and \(M_{\max }^{t}\) is \(\max (M_{i}^{t} )\) at iteration t. Therefore, the \(\beta\) coefficient takes values in the interval (0, 1].
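Eq. (6) can be written as the following update; b is the constant parameter of the equation (its value here is an assumption):

```python
import random

def global_step(x, x_sun, M_i, M_max, b=1.0):
    """Global search move of the ith planet toward the Sun, Eq. (6)."""
    beta = M_i / M_max        # moment ratio, in (0, 1]
    r1 = random.random()      # r1 = rand(0, 1)
    return [xi + b * beta * r1 * (si - xi) for xi, si in zip(x, x_sun)]
```

Since b * beta * r1 stays below 1 when b = 1, each move lands between the planet's current position and the Sun, so a planet with a small moment (small beta) drifts only slightly, preserving its freedom to explore.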

Stage 4: Local search

In the search process, the true optimum location is always the desired target. However, this goal is difficult or easy to achieve depending on the complexity of the problem. In most cases, it is only possible to find an approximate value that fits the original requirement. That is to say, the true location of the Sun may still lie in the space between the solutions found so far.

Interestingly, although Jupiter is the most massive planet in the solar system, Mercury is the planet whose location is closest to the Sun. By analogy, the best solution at iteration t may not be closer to the true location of the Sun than some other solutions are.

When the distance between the Sun and a planet is small, the local search process is run. As mentioned above, the planet with the biggest mass operates as the Sun; in the analogy above, that is Jupiter. Planets near the Sun move toward the location of the Sun; more precisely, each planet moves a small fraction of the distance between itself and the Sun at iteration t instead of going straight to the Sun. The aim of this step is to increase accuracy in a narrow area of the search space. Eq. (7) describes the local search process:

$$\overrightarrow {{X_{i}^{t + 1} }} = \overrightarrow {{X_{i}^{t} }} + c \times r_{1} \times \left( {r_{2} \times \overrightarrow {{X_{sun}^{t} }} - \overrightarrow {{X_{i}^{t} }} } \right)$$
(7)

where \(c = c_{0} - t/T\), t is the current iteration, T is the maximum number of iterations, and \(c_{0} = 2\). \(r_{2}\) is drawn from the Gaussian distribution given by Eq. (8).

$$f(x,\mu ,\sigma ) = \frac{1}{{\sigma \sqrt {2\pi } }}\exp \left( { - \frac{{\left( {x - \mu } \right)^{2} }}{{2\sigma^{2} }}} \right)$$
(8)

Many evolutionary algorithms are also randomized using common stochastic distributions such as the power-law and Lévy distributions. However, the Gaussian (normal) distribution is the most popular, since a large number of physical variables (see Fig. 4), including light intensity, errors/uncertainty in measurements, and many other processes, obey this distribution.

Figure 4
figure 4

Gauss distribution.

The coefficient \(r_{2}\) follows the Gaussian distribution with mean \(\mu = 0.5\) and standard deviation \(\sigma = 0.2\). This means that 68.2% of the values of \(r_{2}\) lie in zone 1, from \((\mu - \sigma ) = 0.3\) to \((\mu + \sigma ) = 0.7\), and 27.2% lie in zone 2, between \((\mu \pm \sigma )\) and \((\mu \pm 2\sigma )\). In other words, POA moves planets to the vicinity of the Sun without ignoring potential solutions during local search.
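The local search of Eq. (7), with r2 drawn from the Gaussian of Eq. (8) (mu = 0.5, sigma = 0.2), can be sketched as follows (the function name and structure are ours):

```python
import random

def local_step(x, x_sun, t, T, c0=2.0):
    """Local search move around the Sun, Eq. (7)."""
    c = c0 - t / T                 # decreases linearly over the iterations
    r1 = random.random()
    r2 = random.gauss(0.5, 0.2)    # Gaussian coefficient of Eq. (8)
    return [xi + c * r1 * (r2 * si - xi) for xi, si in zip(x, x_sun)]
```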

Exploitation employs information obtained from the problem of interest to create new solutions that are better than existing ones. This process and its information (for instance, gradients) are normally local, so this search procedure is local. It typically yields high convergence rates, which is the strong point of exploitation (or local search). Nevertheless, the weakness of local search is that it normally gets stuck in a local optimum.

In contrast, exploration explores the search space effectively and typically creates many diverse solutions far from the current ones. Thus, exploration (or global search) normally acts on a global scale. The great strength of global search is that it rarely gets stuck in a local region. Its weakness, however, is a slow convergence rate. Besides, in many cases it wastes effort and time, since many new solutions can be far from the global solution.

Figure 5 shows the operation of the algorithm, in which the local and global search processes are governed by the distance parameter Rmin. A planet far away from the Sun moves according to Newton's law, whereas for planets very close to the Sun the Newtonian force is so great that they only move around the Sun. A planet close to the Sun supports the Sun in exploring the local search space, as shown in Eq. (7), while the motion of planets distant from the Sun is less affected by this star; they therefore have a chance to find new potential stars. Local and global search run simultaneously, which enhances the accuracy of the search process while ensuring the algorithm does not miss potential locations.

Figure 5
figure 5

Flow chart of the proposed POA.

The parameter Rmin must satisfy the following two conditions:

  • If Rmin is too large, the algorithm focuses on local search in the first iterations. Therefore, finding a potential location far away from the present one is difficult.

  • In contrast, if Rmin is too small, the algorithm focuses on global search. In other words, the exploration of POA in the zone around the Sun is not thorough. Consequently, the best value found by the search process may not satisfy the accuracy requirement.

    $$R_{\min } = \left( {\sum\limits_{i = 1}^{Dim} {(up_{i} - low_{i} )^{2} } } \right)/R_{0}$$
    (9)

In this study, Rmin is chosen by dividing the search space into R0 = 1000 zones, where 'low' and 'up' are the lower and upper bounds of each problem, respectively. With an explicit structure consisting of the two local and global search processes, POA satisfies the above two requirements and promises to be effective and time-saving in solving complex problems.
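Putting the stages together, the flow of Fig. 5 can be sketched as below. This is our minimal reading of the text, not the authors' reference implementation; for brevity the mass-based beta weighting of Eq. (6) is folded into the random coefficient, and clipping planets to the bounds is an added assumption:

```python
import math
import random

def poa(obj, low, up, n_planets=30, T=500, b=1.0, c0=2.0, R0=1000.0):
    """Minimal POA sketch: global moves toward the Sun, local moves near it."""
    # Eq. (9): distance threshold separating local from global search.
    R_min = sum((u - l) ** 2 for l, u in zip(low, up)) / R0
    X = [[random.uniform(l, u) for l, u in zip(low, up)]
         for _ in range(n_planets)]
    for t in range(T):
        fit = [obj(x) for x in X]
        sun = X[min(range(n_planets), key=fit.__getitem__)]  # best planet
        for i in range(n_planets):
            R = math.sqrt(sum((a - s) ** 2 for a, s in zip(X[i], sun)))
            if R < R_min:
                # Local search around the Sun, Eq. (7).
                c = c0 - t / T
                r1, r2 = random.random(), random.gauss(0.5, 0.2)
                X[i] = [xi + c * r1 * (r2 * si - xi)
                        for xi, si in zip(X[i], sun)]
            else:
                # Global move toward the Sun, simplified Eq. (6).
                r1 = random.random()
                X[i] = [xi + b * r1 * (si - xi) for xi, si in zip(X[i], sun)]
            # Keep each planet inside the search bounds (our assumption).
            X[i] = [min(max(v, l), u) for v, l, u in zip(X[i], low, up)]
    return min(X, key=obj)
```

On a simple convex function such as the sphere, this sketch drives the population toward the optimum within a few hundred iterations.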

Results and discussion

In this section, POA is compared with a series of algorithms using well-known problems. The investigations run on Windows with an 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00 GHz (1.80 GHz base) processor and 16.0 GB of RAM.

Experimental results using classical benchmark functions

In this subsection, POA is employed to handle a wide range of benchmark problems. A set of mathematical functions with known global optima is commonly employed to validate the effectiveness of algorithms. The same process is followed here, and a set of 23 benchmark functions from the literature is employed as the test bed for this comparison24,25,26. These test functions consist of three groups, namely unimodal (F1–F7), multimodal (F8–F13), and fixed-dimension multimodal (F14–F23) benchmark functions. POA is compared with seven algorithms, namely PSO1, GWO5, GSA27, FA2, ASO28, HHO11, and HGS29, on this set of 23 benchmark functions, as shown in Table 1.

Table 1 Classical benchmark functions.

Numerical examples with Dim ≤ 30

Each benchmark function is run 30 times by the POA algorithm. A POA population of 30 planets is selected to perform 500 iterations. The statistical results (average, Ave, and standard deviation, Std) are summarized in Tables 2, 3 and 4.

Table 2 Results of unimodal benchmark functions.
Table 3 Results of multimodal benchmark functions.
Table 4 Results of fixed-dimension multimodal benchmark functions.

For F5 and F6, the results obtained by POA are quite good compared with the others. For F1 to F4 and F7, the accuracy of the POA algorithm is superior to that of all the other algorithms.

In contrast to the seven unimodal functions, most of the multimodal functions contain many local optima, whose number increases exponentially with the dimension. This makes them good test beds for evaluating the exploratory ability of a meta-heuristic optimization algorithm.

Table 3 indicates that POA outperforms in F10 and F11 functions, and is quite competitive with the rest.

Similar to the unimodal functions, once again the multimodal and fixed-dimension multimodal functions prove the competitiveness of POA with other algorithms, and show that the obtained results from F14 to F23 are promising.

Figure 6 illustrates the convergence of POA after 100 iterations with 100 planets. The first two metrics are qualitative and illustrate the history of the planets through the course of generations. During the whole optimization process, the planets are represented by red points, as shown in Fig. 6. The planets tend to explore potential zones of the search space and exploit the region of the global optimum quite accurately. These investigations demonstrate that POA achieves high effectiveness in approximating the global optimum of optimization problems.

Figure 6
figure 6

Level of convergence of POA after 100 iterations.

The third metric presents the movement of the first planet in the first dimension during optimization. This metric helps us monitor whether the first planet, which represents all planets, makes sudden movements in the initial generations and becomes more stable in the final generations. Such movement guarantees the exploration of the search region. Toward the end, the movements of the planets are very short, which corresponds to exploitation of the search region. Clearly, POA is an algorithm that meets the requirements of accuracy as well as a high degree of convergence.

The final quantitative metric is the convergence level of the POA algorithm. The best value over all planets in each generation is stored, and the convergence curves are shown in Fig. 6. The decrease of fitness over the generations demonstrates the convergence of the POA algorithm.

Numerical examples with high-dimensional optimization problems

To validate the performance of POA on high-dimensional optimization problems, the first 13 classical benchmark functions of the above-mentioned set with Dim = 1000 are employed to investigate POA. For a fair comparison, seven of the above-mentioned meta-heuristic optimization algorithms and POA with population size N = 30 are independently run 30 times. Additionally, the maximum number of iterations is fixed at 500 for all test functions.

The tests discussed in this subsection demonstrate that POA is promising for dealing with the 13 classical benchmark problems. Among the 23 tested benchmark functions, these 13 were evaluated with Dim = 1000, as presented in Table 5 and Fig. 7. This subsection confirms the ability of POA to deal with high-dimensional problems, as the dimension of those 13 classical benchmarks has been increased from 30 to 1000.

Table 5 Results of the first 13 benchmark functions with Dim = 1000.
Figure 7
figure 7

Performance comparison of algorithms.

Wall-clock time analysis

In this experiment, the wall-clock time of POA is compared with that of the other seven algorithms on the 13 functions. Each benchmark function is independently run 30 times by each algorithm, and the resulting 30-run times are recorded in Table 6. For Dim = 30, POA not only takes less time than algorithms such as GSA, ASO, and FA, but is also sometimes far superior to GWO and even to PSO. For Dim = 1000, POA always ranks first in computational time. These results show that POA has merit for high-dimensional optimization problems.

Table 6 Wall-clock time costs (second) on benchmarks of POA and other participants.

Experimental results using CEC functions

To further clarify the efficiency of the proposed algorithm, POA is tested on complex challenges, namely the CEC 201730 and CEC 201931 evaluation suites. Its results are compared with those of well-known and modern meta-heuristic algorithms: DA, WOA, and the Arithmetic Optimization Algorithm (AOA)32. These algorithms are selected for the following reasons:

  • All of them, like POA, are based on the same population-based principle as PSO.

  • All algorithms are well cited in the literature, and AOA is a recently published study.

  • These algorithms have been shown to deliver superior performance on both benchmark test functions and real-world problems.

  • They are publicly provided by their authors.

As with the 23 classical benchmark functions, each function of the CEC benchmark suites is run 30 times, and each algorithm is allowed to search the landscape for 500 iterations using 30 agents.

CEC 2017 problems

In this subsection, the IEEE CEC 2017 problems are employed to test the performance of POA. The CEC'17 standard set consists of 28 challenging benchmark problems: the first function is unimodal, functions 2–7 are multimodal, the next ten are hybrid functions, and the remaining ten are composition functions. Table 7 presents a brief description of CEC 2017.

Table 7 CEC 2017 problems.

As shown in Table 8, POA is highly efficient: compared with WOA, DA, and AOA, it outperforms all of them on 21 of the 28 functions of the CEC 2017 standard set. In addition, the Wilcoxon signed-rank test at the α = 0.05 significance level is reported in Table 9 to analyze the significant differences between the results of POA and the other algorithms. These results prove that POA provides great performance in terms of solution quality when handling the CEC 2017 functions.

Table 8 Results of CEC 2017 problems.
Table 9 Results of Wilcoxon sign-rank test for CEC 2017 problems with α = 0.05.

CEC 2019 problems

Table 10 presents a brief description of CEC 2019. It can be seen from Table 11 that POA outperforms the other optimization algorithms on all CEC 2019 functions. Indeed, the results on many test functions (e.g. F52, F53, F56) show that POA is more powerful than the others not only in the average value over 30 runs, but also in the other statistics, such as the best, worst, and Std values. Once again, the Wilcoxon signed-rank test (shown in Table 12) demonstrates the superior performance of POA in solving the CEC 2019 problems.

Table 10 CEC 2019 problems.
Table 11 Results of CEC 2019 problems.
Table 12 Results of Wilcoxon sign-rank test for CEC 2019 problems with α = 0.05.

In the next section, some classical engineering design problems are employed to further evaluate the performance of the POA. Besides, POA is also compared with other well-known techniques to confirm its results.

Engineering design problems

In this study, three constrained engineering design problems, namely the tension/compression spring, welded beam, and pressure vessel designs, are used to investigate the applicability of POA. These problems have several equality and inequality constraints, so POA is equipped with a constraint-handling technique to optimize them. It should be noted that the population size and the number of iterations are set to 30 and 500, respectively, over 50 runs for all problems in this section.

Tension/compression spring

The main aim of this problem is to minimize the weight of a tension/compression spring. The design problem is subject to three constraints, namely surge frequency, shear stress, and minimum deflection. This problem consists of three variables: Wire diameter (d), mean coil diameter (D), and the number of active coils (N).

The tension/compression spring design problem has been solved by both mathematical and heuristic techniques. Some researchers have employed several methods for minimizing the weight of a tension/compression spring (Ha and Wang: PSO33; Coello and Montes: the Evolution Strategy (ES)34 and GA35; Mahdavi et al.: Harmony Search (HS)36; Belegundu: mathematical optimization37; Arora: constraint correction38; Huang et al.: Differential Evolution (DE)39). Additionally, the GWO5 and HHO11 algorithms have also been employed as heuristic optimizers for this problem. The comparison of the results of these methods and POA is shown in Table 13.

Table 13 Comparison of results for tension/compression spring.

Welded beam

The aim of the welded beam design problem is to minimize its fabrication cost. The constraints of the problem are shear stress \((\tau )\), bending stress in the beam \((\theta )\), buckling load of the bar \((P_{c} )\), end deflection of the beam \((\delta )\), and side constraints. The welded beam design problem has four variables, namely the thickness of the weld \((h)\), the length of the attached part of the bar \((l)\), the height of the bar \((t)\), and the thickness of the bar \((b)\). This problem is described in the literature5,40,41.

Lee and Geem40 employed HS to deal with this problem, while Deb42,43 and Coello44 used GA. Mirjalili applied GWO5 to solve this problem. Richardson's random approach, Davidon–Fletcher–Powell, the Simplex technique, and Griffith and Stewart's successive linear approximation are the mathematical methods adopted by Ragsdell and Philips41 for this problem. More recently, Heidari et al.11 and Yang et al.29 have used HHO and HGS, respectively, to solve the problem. Table 14 shows a comparison between the different methods. The results indicate that POA reaches a design with the minimum cost compared to the other optimizers. The best cost obtained by POA is 1.72564.

Table 14 Comparison of results for welded beam design problem.

Pressure vessel

The pressure vessel design problem is well known: the total fabrication cost, consisting of the material, forming, and welding of a cylindrical vessel, should be minimized. There are four variables, namely the thickness of the shell \((T_{s} )\), the thickness of the head \((T_{h} )\), the inner radius \((R)\), and the length of the cylindrical section without the head \((L)\), together with four constraints.

The pressure vessel design problem has also been popular in optimization studies. Several heuristic techniques, namely DE39, PSO33, GA35,45,46, ACO47, ES [59], GWO5, MFO48, HHO11 and SMA10, have been adopted for this problem. The mathematical approaches employed are the augmented Lagrangian multiplier49 and branch-and-bound50 methods. As shown in Table 15, POA is again able to find the design with the minimum cost.

Table 15 Comparison of results for pressure vessel design problem.

Conclusions

In this paper, a meta-heuristic algorithm inspired by Newton's law of gravitation is proposed. POA's search process consists of two phases that aim for a proper balance between exploration and exploitation. Its outperforming features are shown through the accuracy achieved on 23 classical benchmark functions and 38 IEEE CEC test functions (CEC 2017, CEC 2019). On many functions, POA produced results many times more accurate than those of the other algorithms.

In the final evaluation section, a set of well-known test cases, including three engineering problems, is thoroughly investigated to examine the operation of POA in practice. Each problem comes from a distinct type of engineering with a very diverse search space, so these problems test POA thoroughly. The obtained results demonstrate that POA can effectively solve challenging real-world problems with unknown search spaces and large numbers of constraints. The comparisons with GSA, GWO, PSO, DE, ACO, MFO, SOS, CS, HHO, SMA, HGS, etc., suggest that POA is superior.

The structure of POA is simple, explicit, very effective, and fast; experiments revealed short computational times for handling complex optimization problems. Therefore, we believe that POA is a powerful algorithm for solving optimization problems.