Article

Hybridization of Manta-Ray Foraging Optimization Algorithm with Pseudo Parameter-Based Genetic Algorithm for Dealing Optimization Problems and Unit Commitment Problem

by Mohammed A. El-Shorbagy 1,2,*, Hala A. Omar 2 and Tamer Fetouh 3

1 Department of Mathematics, College of Science and Humanities in Al-Kharj, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
2 Department of Basic Engineering Science, Faculty of Engineering, Menoufia University, Shebin El-Kom 32511, Egypt
3 Electrical Engineering Department, Faculty of Engineering, Menoufia University, Shebin El-Kom 32511, Egypt
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(13), 2179; https://doi.org/10.3390/math10132179
Submission received: 13 May 2022 / Revised: 12 June 2022 / Accepted: 16 June 2022 / Published: 22 June 2022
(This article belongs to the Special Issue Recent Advances in Computational Intelligence and Its Applications)

Abstract: The manta ray foraging optimization algorithm (MRFO) is one of the promising meta-heuristic optimization algorithms. However, it can get stuck in a local minimum, consuming iterations without reaching the optimum solution. So, this paper proposes a hybridization between MRFO and the genetic algorithm (GA) based on a pseudo parameter, where the GA helps MRFO escape from local minima. The resulting algorithm is called the pseudo genetic algorithm with manta-ray foraging optimization (PGA-MRFO). The proposed algorithm is not a classical hybridization of MRFO and GA: a classical hybridization consumes time in the search process, as each algorithm is applied to all system variables, and it results in an extended search, especially in systems with many variables. Instead, PGA-MRFO hybridizes the pseudo-parameter-based GA and the MRFO algorithm to produce a more efficient algorithm that combines the advantages of both without getting stuck in a local minimum or spending a long time on calculations. The pseudo parameter allows the GA to be applied to a specific number of variables rather than all system variables, reducing the computation time and burden. The proposed algorithm also uses an approximation of the gradient of the objective function, which dispenses with derivative calculations. In addition, PGA-MRFO relies on the pseudo-inverse of non-square matrices, which saves calculation time and gives the algorithm the flexibility to deal with both square and non-square systems. The proposed algorithm is tested on test functions for which the original MRFO failed to find the optimum solution, to prove its capability and efficiency. It is also applied to the unit commitment (UC) problem, one of the vital power system problems, to show its validity in practical applications. Finally, several analyses are applied to the proposed algorithm to illustrate its effectiveness and reliability.

1. Introduction

An optimization problem consists of finding the optimal set of control variables, whether continuous or discrete. Accordingly, optimization problems can be classified as discrete or continuous; constrained and multimodal problems are further examples of optimization problem classes [1,2].
Optimization problems are highly significant from both manufacturing and scientific perspectives. Optimization is a vital and challenging area, especially in engineering design, where efficient and accurate solutions are required and the feasible region may be a narrow subset of the search domain. According to the presence or absence of equality or inequality constraints, optimization problems are also categorized into constrained and unconstrained problems. Traditionally, techniques for unconstrained optimization problems were classified into direct search and gradient-based methods, while algorithms for constrained optimization problems are classified into indirect and direct methods. These traditional optimization techniques are insufficiently robust in discontinuous, vast multimodal, and noisy search spaces [3,4].
Due to the shortcomings of traditional optimization approaches, meta-heuristic techniques have been introduced for handling optimization problems. Meta-heuristic algorithms are the most exemplary optimization algorithms due to their robustness, performance reliability, simplicity, and ease of implementation, among other benefits. There are various types of meta-heuristic algorithms, including:
(1) Evolutionary algorithms: These algorithms, such as the genetic algorithm (GA) [5,6,7,8], the differential evolution algorithm (DEA) [9], and the evolutionary strategy algorithm (ESA) [10], are based on evolutionary theory. Evolutionary algorithms search by simulating the natural processes of selection and reproduction to find the optimum solution. They are global optimization methods that scale well to higher-dimensional problems and are robust to noisy evaluation functions. However, they may not find the global optimum solution, can get stuck in a local minimum and consume iterations without improvement, and require relatively high computational resources.
(2) Swarm-based algorithms: Swarm-based algorithms are considered the most basic forms of meta-heuristic algorithms. These algorithms model the behavior and characteristics of swarm systems, which led to the coining of the term “swarm intelligence” (SI) by Gerardo Beni and Jing Wang in 1989 [11]. Swarm intelligence algorithms (SIAs) are linked to the study of swarms, or colonies of social animals, where research into the social behavior of swarms of organisms encouraged the invention of several practical optimization algorithms. These algorithms simulate the social behavior and decision-making of various social groupings; examples include the particle swarm optimization algorithm (PSOA), the ant colony optimization algorithm (ACOA), the grasshopper optimization algorithm (GOA), and the manta-ray foraging optimization algorithm (MRFOA) [12,13,14,15,16,17]. Swarm-based algorithms have several advantages: they respond to internal disturbances and external challenges and complete operations even when some agents fail. They are also self-organized, meaning that solutions are emergent rather than pre-defined. Furthermore, the swarm system is updated based on predetermined and new stimuli, and the agents' operations are implemented in parallel. However, they have some drawbacks: the swarm's behavior cannot be predicted from the agents' functions, it is difficult to predict the behavior from the individual rules, and they are sensitive to small changes in their rules, where any change could lead to unexpected group-level behavior.
(3) Human social behavior-based algorithms: These algorithms are based on human social behavior, such as the group teaching optimization algorithm (GTOA), the imperialist competitive algorithm (ICA), and the teaching-learning-based optimization algorithm [18,19,20,21,22]. These algorithms treat social phenomena as a valuable source of inspiration, and their efficiency and effectiveness may exceed those of swarm intelligence mechanisms inspired by animal groups. However, they share the same drawbacks as swarm-based algorithms.
(4) Physics-based algorithms: Natural physics laws have been utilized to develop physics-based algorithms such as the gravitational search algorithm (GSA) [23], the magnetic optimization algorithm (MOA) [24], and simulated annealing (SA) [25]. In the GSA, according to Newton's universal law of gravitation, larger particles have more attraction power than smaller ones, so smaller particles are attracted to larger ones and, consequently, all smaller particles are drawn toward the largest particle, which can represent the global optimum solution in the case of optimization. In the MOA, the candidate solutions are magnetic particles distributed throughout the search space. Each magnetic particle has a mass and a magnetic field measurement according to its fitness; particles with a larger magnetic field and mass are fitter. These particles live in a lattice-like environment and attract their neighbors with a force of attraction. The proposed cellular structure allows for greater exploitation of local neighborhoods before they progress to the global best, increasing population variety. SA [25] is another popular physics-based algorithm: it is a random-search technique that simulates how a metal cools and freezes into a minimum-energy crystalline structure (the annealing process) to search for a minimum of a system. These algorithms can deal with arbitrary systems and cost functions, can statistically guarantee finding an optimal solution, and are relatively easy to code, even for complex problems. However, they require many design choices to turn them into actual algorithms, and they may become very slow, especially if the cost function is expensive to compute. The precision of the numbers used in implementing these algorithms can also significantly affect the speed and the ability to reach the solution.
The MRFO algorithm [17] is a bio-inspired swarm intelligence optimizer that simulates the food-search proclivities of the manta ray. The manta ray searches for food according to three inclinations: chain foraging, cyclone foraging, and somersault foraging. In chain foraging, the manta rays line up in an organized manner to capture prey missed or undetected by the manta rays ahead of them in the chain. This cooperative interaction between competing manta rays reduces the chance of prey escaping their sight and increases food rewards. Cyclone foraging occurs when the density of prey is high.
However, MRFO is not good at fine-tuning solutions around optima due to its stochastic nature and slow convergence speed. Also, it may get stuck in a local minimum, consuming iterations without reaching the optimal solution. Due to these drawbacks, several modifications have been applied to MRFO to enhance its performance. A combination of the SA and MRFO algorithms is presented in [26], where the SA algorithm is used to define the initial population of the MRFO algorithm, which speeds up convergence significantly; however, falling into a local minimum remains a critical issue in that algorithm. In [27], an elegant approach is proposed based on MRFO integrated with a gradient-based optimizer (GBO). The proposed MRFO-GBO aims to decrease the risk of the original MRFO being stuck in local optima while also speeding up the solution process. It depends on a local escaping operator (LEO) to update the current solution; however, the LEO operator generates a new solution using several solutions (the best and other randomly selected solutions), which does not guarantee a better solution. In [26], the opposition-based learning (OBL) methodology was combined with MRFO to obtain an appropriate structure for optimization problems. The hybrid algorithm aimed to benefit from OBL to produce a more performant algorithm; however, it depends on computing the opposite solutions of the whole population, and consequently, the computational load increases.
The unit commitment (UC) problem in power systems aims to determine the startup and shutdown schedules of generating units to meet forecasted demand over a definite period. The objective is to minimize the total production cost while satisfying all problem constraints. The global optimum solution can be obtained by exploring all possible solutions and choosing the one with the minimum objective value; however, in realistic power systems, this is not practical. Therefore, researchers have used different algorithms to find a satisfactory solution that achieves the constraints and objectives within a reasonable time [28].
Various approaches and several mathematical techniques have been proposed for solving the UC problem. Several inspired algorithms were applied to solve the UC problem due to their capability to handle nonlinear, mixed-integer, and large-scale problems. Besides, they can manage non-convex fuel cost functions with non-linear constraints. Some of the most commonly used algorithms are artificial neural network [29], simulated annealing [30], grey wolf optimizer [31], genetic algorithm, and particle swarm optimization technique [32,33]. Also, hybrid meta-heuristic optimization techniques have been applied to the UC problem considering the advantages and features of each optimization technique [34,35].
To improve the standard MRFO performance, this paper presents a hybridization of the manta-ray foraging optimization (MRFO) algorithm with a pseudo-parameter-based genetic algorithm (GA), abbreviated as PGA-MRFO. The proposed algorithm applies MRFO with its classical procedure until it gets stuck in a local minimum. Then, the pseudo-parameter assumption is used, reducing the number of input variables in the GA, and the GA is applied to get out of the local minimum. The search process then continues using the classical MRFO until it falls into a local minimum again. These processes continue until the termination criterion is satisfied, at which point the best manta ray is reported as the final solution. The main contributions of the proposed algorithm can be summarized as follows:
  • The manta-ray foraging optimization (MRFO) technique is proposed to be combined with a pseudo-parameter-based genetic algorithm (GA) to overcome and improve the poor performance of standard MRFO.
  • The GA’s objective function depends only on three variables, whatever the number of independent variables in the problem. The dimension of the optimization problem will not affect the number of the GA’s variables leading to less computational burden.
  • The suggested approach uses the GA to move the manta-ray algorithm out of any local minimum to a better one until an optimal solution is found.
The rest of the paper is organized as follows: Section 2 represents an overview of the MRFO algorithm. Section 3 illustrates the proposed algorithm with its flowchart and pseudo-code. Section 4 shows the results and comparisons based on five test functions. A practical application of the proposed algorithm to solve one of the most critical issues in the electrical power systems (unit commitment) problem is introduced in Section 5. Section 6 presents the proposed algorithm’s uncertainty, robustness, and computational time analysis. Finally, Section 7 introduces the conclusion.

2. MRFO Overview

Manta rays are fascinating animals, although they appear fearsome. An adult manta ray can eat about 5 kg of plankton daily, and manta rays have developed an assortment of unique and clever foraging strategies. The primary foraging strategy is chain foraging: when about 50 manta rays begin foraging, they line up, one behind another, forming an orderly line. Smaller male manta rays piggyback on female ones and swim on top of their backs to match the beats of the females' pectoral fins. The second foraging strategy is cyclone foraging.
Many manta rays gather when the concentration of plankton is very high. Their tails link up with heads in a spiral to create a spiraling vortex; in the eye of the cyclone, the filtered water rises toward the surface, which pulls the plankton into their open mouths. The final foraging strategy is somersault foraging, whose behavior is one of the most splendid sights in nature. When manta rays find a food source, they perform a series of backward somersaults, circling the plankton to draw it toward them. The somersault is a random, frequent, local, and cyclical movement that helps manta rays optimize their food intake.
In MRFO, manta rays can observe the position of plankton and swim towards it: the higher the concentration of plankton in a place, the better the position is. Although the best solution is unknown, MRFO assumes the best solution found so far is the plankton with the highest concentration, which the manta rays want to approach and eat. Manta rays line up head-to-tail and form a foraging chain, and each individual except the first moves towards both the food and the one in front of it; in each iteration, the individual is updated by the best solution and the solution in front of it. Chain foraging can be modeled as follows:
$$x_i^d(t+1) = \begin{cases} x_i^d(t) + r \cdot \big( x_{best}^d(t) - x_i^d(t) \big) + \alpha_1 \big( x_{best}^d(t) - x_i^d(t) \big), & i = 1 \\ x_i^d(t) + r \cdot \big( x_{i-1}^d(t) - x_i^d(t) \big) + \alpha_1 \big( x_{best}^d(t) - x_i^d(t) \big), & i = 2, 3, \ldots, N \end{cases}$$

$$\alpha_1 = 2 r \sqrt{|\log r|}$$
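As a concrete illustration, the following Python sketch implements the chain foraging update for one individual. The function name and array layout are our own: it assumes a population matrix X of shape (N, d), one row per manta ray, in a minimization setting.

```python
import numpy as np

def chain_step(X, x_best, i):
    """Chain foraging update (Eqs. (1)-(2)): individual i moves toward the
    individual ahead of it (or the food, if i == 0) plus a pull toward the
    best-known plankton position x_best."""
    r = np.random.rand(X.shape[1])                  # random vector in [0, 1)
    alpha1 = 2.0 * r * np.sqrt(np.abs(np.log(r)))   # weight coefficient alpha_1
    leader = x_best if i == 0 else X[i - 1]         # the first individual follows the food
    return X[i] + r * (leader - X[i]) + alpha1 * (x_best - X[i])
```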
The position update of the ith individual is determined by the position $x_{i-1}^d(t)$ of the (i−1)th individual and the position $x_{best}^d(t)$ of the food. When a school of manta rays recognizes a patch of plankton in deep water, they form a long foraging chain and swim towards the food along a spiral. A similar spiral foraging strategy can be found in the whale optimization algorithm (WOA). In the cyclone foraging strategy, each manta ray swims toward the one in front of it, and the manta ray swarm, lined up in a spiral, performs foraging. Each individual not only follows the one in front of it but also moves toward the food along a spiral path. This motion behavior can be extended to an n-D space. For simplicity, the mathematical model of cyclone foraging can be defined as:
$$x_i^d(t+1) = \begin{cases} x_{best}^d(t) + r \cdot \big( x_{best}^d(t) - x_i^d(t) \big) + \beta \big( x_{best}^d(t) - x_i^d(t) \big), & i = 1 \\ x_{best}^d(t) + r \cdot \big( x_{i-1}^d(t) - x_i^d(t) \big) + \beta \big( x_{best}^d(t) - x_i^d(t) \big), & i = 2, 3, \ldots, N \end{cases}$$

$$\beta = 2 e^{r_1 \frac{T - t + 1}{T}} \sin(2 \pi r_1)$$
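A matching sketch of the cyclone foraging update follows. The signature is our own, and the exploration branch (a random reference position when t/T is small) reflects the exploration/exploitation switching described later in this section.

```python
import numpy as np

def cyclone_step(X, x_best, i, t, T, lb, ub):
    """Cyclone foraging update (Eqs. (3)-(4)): spiral around a reference
    position, which is random early in the run (exploration) and the best
    solution later (exploitation)."""
    d = X.shape[1]
    r, r1 = np.random.rand(d), np.random.rand(d)
    beta = 2.0 * np.exp(r1 * (T - t + 1) / T) * np.sin(2.0 * np.pi * r1)
    ref = lb + np.random.rand(d) * (ub - lb) if t / T < np.random.rand() else x_best
    leader = ref if i == 0 else X[i - 1]
    return ref + r * (leader - X[i]) + beta * (ref - X[i])
```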
In somersault foraging behavior, the position of the food is viewed as a pivot. Each individual swims to and fro around the pivot and somersaults to a new position; thus, the individuals constantly update their positions around the best position. The mathematical model can be written as follows:
$$x_i^d(t+1) = x_i^d(t) + S \cdot \big( r_2 \, x_{best}^d(t) - r_3 \, x_i^d(t) \big)$$
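The somersault update in the same sketch style follows; S = 2 is the somersault factor used in the original MRFO paper [17], and the function name is ours.

```python
import numpy as np

def somersault_step(X, x_best, i, S=2.0):
    """Somersault foraging update (Eq. (5)): each manta ray tumbles to a new
    position around the food pivot; S is the somersault factor."""
    d = X.shape[1]
    r2, r3 = np.random.rand(d), np.random.rand(d)
    return X[i] + S * (r2 * x_best - r3 * X[i])
```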
MRFO starts by generating a random population in the problem domain, as with other meta-heuristic optimizers. Each individual updates its position according to the one in front of it and a reference position at each iteration. The value of the ratio $t/T$ increases from $1/T$ to $1$, shifting the search from exploration to exploitation. When $t/T <$ a random number, a random position in the search space is selected as the reference position for exploration; when $t/T >$ the random number, the current best solution is chosen as the reference for exploitation.
Meanwhile, MRFO can switch between the chain foraging behavior and the cyclone foraging behavior according to a random number. Then, individuals update their positions with respect to the best position found so far by somersault foraging. All the updates and calculations are iteratively performed until the stop criterion is met. Eventually, the best individual's position and fitness value are returned. Algorithm 1 illustrates the pseudo-code of the MRFO algorithm.
Algorithm 1. Pseudo-code of MRFO algorithm.
[Pseudo-code image in the original article.]
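Since Algorithm 1 appears only as an image in the web version, a minimal Python sketch of the MRFO main loop is given below. It reuses the chain_step, cyclone_step, and somersault_step functions sketched above; the greedy best-solution update and the 0.5 mode-switching threshold follow the common reading of MRFO, not the authors' code.

```python
import numpy as np

def mrfo(f, lb, ub, n_pop=50, T=1000):
    """Minimize f over the box [lb, ub] following the MRFO loop of Algorithm 1."""
    X = lb + np.random.rand(n_pop, lb.size) * (ub - lb)   # random initial population
    fit = np.array([f(x) for x in X])
    b = fit.argmin()
    x_best, f_best = X[b].copy(), fit[b]
    for t in range(1, T + 1):
        for i in range(n_pop):                            # chain or cyclone foraging
            if np.random.rand() < 0.5:
                X[i] = np.clip(cyclone_step(X, x_best, i, t, T, lb, ub), lb, ub)
            else:
                X[i] = np.clip(chain_step(X, x_best, i), lb, ub)
            fit[i] = f(X[i])
            if fit[i] < f_best:                           # best is updated greedily
                x_best, f_best = X[i].copy(), fit[i]
        for i in range(n_pop):                            # somersault foraging
            X[i] = np.clip(somersault_step(X, x_best, i), lb, ub)
            fit[i] = f(X[i])
            if fit[i] < f_best:
                x_best, f_best = X[i].copy(), fit[i]
    return x_best, f_best

# e.g., mrfo(lambda x: (x**2).sum(), np.full(10, -100.0), np.full(10, 100.0))
```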
According to the MRFO algorithm procedure, the best solution is not updated if the current fitness is not better than that of the previous iteration. Consequently, the algorithm gets stuck at the local point it has reached, and an additional procedure should be applied to get it out of this local minimum point.

3. The Proposed Algorithm (PGA-MRFO)

3.1. Flow Chart and General Description of the Algorithm

Figure 1 shows the flow chart of the proposed algorithm. The MRFO is applied until it falls into a local minimum point. Then, the genetic algorithm (GA) is used according to specific variables and the objective function until a better solution is obtained. The resulting solution and corresponding objective value are entered into the MRFO as the best solution and fitness function to complete the optimization process. The fundamental steps are repeated until the termination criterion is reached.

3.2. Mathematical Formulation of the Genetic Objective Function

Each variable in the system can be represented as a function of a parameterization parameter $s$; that is, each variable can be written as:

$$x_i = g_i(s), \quad \text{for } i = 1, \ldots, N$$

Also, the objective function can be stated as a function of the parameterization variable. With

$$X = [x_1 \;\, x_2 \;\, \cdots \;\, x_N],$$

it can be written as:

$$f = f(X) = f\big( g_1(s), g_2(s), \ldots, g_N(s) \big).$$
Let

$$x_i \in [a_i, b_i] \quad \text{for } i = 1, 2, \ldots, N.$$

Then, the domain of the parameterization parameter can be estimated as:

$$s \in [s_{min},\, s_{max}]$$

where

$$s_{min} = \min\{a_1, a_2, \ldots, a_N\}, \qquad s_{max} = \max\{b_1, b_2, \ldots, b_N\}.$$

Consequently,

$$\Delta s \in [s_{min},\, s_{max}].$$

Equation (13) states that the parameterization parameter domain is bounded by the minimum and maximum values of the lower and upper limits of the system variables, respectively.
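For illustration, with assumed values: for a two-variable system with $x_1 \in [-2, 3]$ and $x_2 \in [0, 5]$,

$$s_{min} = \min\{-2, 0\} = -2, \qquad s_{max} = \max\{3, 5\} = 5, \qquad s \in [-2, 5].$$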
Theorem 1 (First-Order Necessary Condition for (Local) Optimality) [2]. If $X$ is an unconstrained local minimizer of a differentiable function $f: \mathbb{R}^N \to \mathbb{R}$, then:

$$\nabla f(X) = 0$$
The derivative of the fitness function with respect to the parameterization parameter can be written as:

$$\frac{df}{ds} = \frac{df}{dg_1} \frac{dg_1}{ds} + \frac{df}{dg_2} \frac{dg_2}{ds} + \cdots + \frac{df}{dg_N} \frac{dg_N}{ds}$$

According to Equation (6), Equation (15) can be written as:

$$\frac{df}{ds} = \frac{df}{dx_1} \frac{dx_1}{ds} + \frac{df}{dx_2} \frac{dx_2}{ds} + \cdots + \frac{df}{dx_N} \frac{dx_N}{ds}$$

In vector form, Equation (16) can be written as:

$$\frac{df}{ds} = \left[ \frac{df}{dx_1} \;\; \frac{df}{dx_2} \;\; \cdots \;\; \frac{df}{dx_N} \right] \times \left[ \frac{dx_1}{ds} \;\; \frac{dx_2}{ds} \;\; \cdots \;\; \frac{dx_N}{ds} \right]^T$$

The first vector is the gradient of the objective function with respect to the system variables, so Equation (17) can be written as:

$$\frac{df}{ds} = \nabla f \times \left[ \frac{dx_1}{ds} \;\; \frac{dx_2}{ds} \;\; \cdots \;\; \frac{dx_N}{ds} \right]^T$$
where

$$\nabla f = \left[ \frac{df}{dx_1} \;\; \frac{df}{dx_2} \;\; \cdots \;\; \frac{df}{dx_N} \right]$$

According to the previous theorem, the gradient in Equation (19) can be approximated as:

$$\nabla f = \alpha \, [c_1 \;\; c_2 \;\; \cdots \;\; c_N]$$

where

$$c_i \in [0, 1], \qquad \alpha \in [-10^6,\, 10^6].$$
According to Equation (18), the gradient of the variables with respect to the parameterization parameter, and from it $\Delta X$, can be calculated as:

$$\frac{\Delta X}{\Delta s} = \left[ \frac{dx_1}{ds} \;\; \frac{dx_2}{ds} \;\; \cdots \;\; \frac{dx_N}{ds} \right]^T = \frac{df}{ds} \times (\nabla f)^{+} = \frac{\Delta f}{\Delta s} \times (\nabla f)^{+}, \qquad \Delta X = \frac{\Delta X}{\Delta s} \times \Delta s$$

where

$$\frac{df}{ds} \approx \frac{\Delta f}{\Delta s}$$
where $(\nabla f)^{+}$ is the pseudo-inverse (Moore-Penrose) of the approximated gradient. The objective function resulting from the GA should be less than the entered one, which can be written as:

$$\Delta f = \begin{cases} \delta f - f & \text{if } f \geq 0 \\ \dfrac{f}{\delta} - f & \text{if } f < 0 \end{cases} \qquad \text{where } \delta \in (0, 1)$$
After calculating $\Delta X$, the updated values of the variables can be evaluated as:

$$X_n = X + \Delta X^T = [x_1 \;\, x_2 \;\, \cdots \;\, x_N] + \left[ \frac{dx_1}{ds} \;\, \frac{dx_2}{ds} \;\, \cdots \;\, \frac{dx_N}{ds} \right] \Delta s$$

$$X_{new} = correction(X_n)$$
Then, the fitness function of the new individual can be calculated as:

$$f_{new} = f(X_{new})$$

so that the variables of the GA are $\{\delta, \Delta s, \alpha\}$.

The GA aims to reduce the absolute difference between the calculated fitness of the new solution, $f_{new}$, and the presumed enhanced fitness, $\delta f$ or $f/\delta$. The fitness function in the GA can be written as follows:

$$fitness(X) = \begin{cases} |f_{new} - \delta f| & \text{if } f \geq 0 \\ \left| f_{new} - \dfrac{f}{\delta} \right| & \text{if } f < 0 \end{cases}$$

The whole optimization problem presented to the GA is as follows:

$$\begin{cases} \min\limits_{\alpha \in [-10^6, 10^6],\ \delta \in (0,1),\ \Delta s \in [\tan(-\pi/2),\ \tan(\pi/2)]} |f_{new} - \delta f| & \text{if } f \geq 0 \\[2mm] \min\limits_{\alpha \in [-10^6, 10^6],\ \delta \in (0,1),\ \Delta s \in [\tan(-\pi/2),\ \tan(\pi/2)]} \left| f_{new} - \dfrac{f}{\delta} \right| & \text{if } f < 0 \end{cases}$$
The GA is applied based on the objective function in Equation (31) over the defined domain. If an enhanced solution is obtained, the genetic operations are terminated, and the resulting solution with its objective function is reintroduced to the MRFO algorithm; otherwise, the GA continues from the resulting solution until an enhanced solution is obtained and the search gets out of the local minimum point.
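To make the construction concrete, the following Python sketch evaluates one GA chromosome (α, δ, Δs) against a trapped solution, following Equations (20)-(30). The function name, the use of NumPy's pinv, and the interpretation of the correction step as clipping to the variable bounds are our assumptions, not the paper's code.

```python
import numpy as np

def pga_fitness(genes, X, f_val, f, lb, ub):
    """Evaluate one GA chromosome genes = (alpha, delta, ds) for a solution X
    trapped with objective value f_val. Returns (GA fitness, candidate X_new)."""
    alpha, delta, ds = genes
    grad = alpha * np.random.rand(X.size)            # Eq. (20): approximated gradient
    target = delta * f_val if f_val >= 0 else f_val / delta   # presumed better objective
    df = target - f_val                              # Eq. (25): presumed improvement
    # Eq. (23): Moore-Penrose pseudo-inverse of the 1 x N gradient row vector
    # (note: ds = 0 would need guarding; ignored in this sketch)
    dX_ds = (df / ds) * np.linalg.pinv(grad[None, :]).ravel()
    X_new = np.clip(X + dX_ds * ds, lb, ub)          # Eqs. (27)-(28); correction taken as clipping
    return abs(f(X_new) - target), X_new             # Eqs. (29)-(30)
```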

3.3. Detailed Steps of the Proposed Algorithm (PGA-MRFO)

In this section, the detailed steps of the proposed algorithm are explained.
Step 1: The MRFO is applied with its standard procedure until the termination condition is met or it can no longer update the best solution of the current iteration, which means it has fallen into a local minimum.
Step 2: The GA is initialized with the best solution resulting from MRFO, the number of populations and generations, the domain of the genetic variables ( α ,   δ , Δ s ) , and other genetic parameters. Then, the initial population of the GA is created.
Step 3: For each individual, the new solution is calculated according to Equations (27) and (28). Then, its objective function is calculated according to Equation (29), and the corresponding fitness function of the GA is calculated according to Equation (30).
Step 4: The rest of the GA's operations continue until the maximum number of generations is reached. The best solution yielded by the GA, $(X_{new}, f_{new})$, will satisfy one of the following three cases.
(a) It may satisfy the problem's termination (stopping) criterion; then, it is accepted as the optimum solution, and the search process is terminated.
(b) If it is better than the trapped solution of MRFO, it replaces the MRFO's trapped solution, and the MRFO algorithm is reapplied.
(c) If the resulting solution is not better than the trapped solution of MRFO, it is entered into the GA again to be improved according to the genetic objective function (Step 2).
Step 5: Steps 1–4 are repeated until the termination (stopping) criterion is reached.
For further illustrations of the PGA-MRFO, the pseudo-code is detailed in Algorithm 2.
Algorithm 2. Pseudo-code of PGA-MRFO.
[Pseudo-code images in the original article.]
Step 6: This step is added to avoid cycling if the GA fails to enhance the existing solution. A limit on the number of GA trials is defined in the PGA-MRFO algorithm. If the GA fails to improve the solution in the current trial, its last population is used as the initial population for the subsequent trial. If the trial number exceeds the GA trials' limit, the algorithm terminates, and the PGA-MRFO returns the best-calculated solution and its corresponding objective value as the local optimum solution to the problem. The GA trials' limit is 5 in the proposed algorithm.
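Building on the pga_fitness sketch above, the following stand-in shows how Steps 2-5 (and the trial limit of Step 6) fit together. For brevity, the GA itself is replaced here by random sampling of the three-gene chromosome over the domains of Equation (31), so this is an illustration of the control flow rather than the authors' implementation.

```python
import numpy as np

def ga_escape(X, f_val, f, lb, ub, n_samples=5000):
    """Try to pull a trapped solution X (objective f_val) to a better one by
    sampling chromosomes (alpha, delta, ds); returns (X, f, escaped?)."""
    best_fit, best_X = np.inf, X
    for _ in range(n_samples):
        genes = (np.random.uniform(-1e6, 1e6),                        # alpha
                 np.random.uniform(1e-9, 1.0),                        # delta in (0, 1)
                 np.tan(np.random.uniform(-np.pi/2 + 1e-3, np.pi/2 - 1e-3)))  # ds
        fit, X_new = pga_fitness(genes, X, f_val, f, lb, ub)
        if f(X_new) < f_val:               # case (b): escaped, hand back to MRFO
            return X_new, f(X_new), True
        if fit < best_fit:                 # keep the best attempt for case (c)
            best_fit, best_X = fit, X_new
    return best_X, f(best_X), False        # the caller counts failures (Step 6 limit)
```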

4. Testing Performance of PGA-MRFO Algorithm

The proposed algorithm was coded in MATLAB 2020a and run on an Intel(R) Core(TM) i7 CPU with 3.2 GHz and 16 GB RAM. For the computational studies, a population size of 50 and 1000 generations were used, the crossover fraction was 0.8, and all other parameters and functions were left at their defaults as in the GA tool. The termination criterion for both algorithms (MRFO and PGA-MRFO) is defined as $|F_{optimum} - F_l| \leq Tolerance = 1 \times 10^{-6}$, where $F_{optimum}$ is the optimal objective of the test function and $F_l$ is the objective calculated by the algorithm.
According to the MRFO test functions in [17], ten test functions (F5, F7, F8, F13, F14, F15, F20, F21, F22, and F23) fall into a local minimum. So, both the classical MRFO and the PGA-MRFO algorithms are applied to these ten test functions to verify the validity and reliability of the proposed (PGA-MRFO) algorithm. The results of each algorithm are compared to the optimum solution based on the stopping criterion. Table 1 compares the two algorithms according to the number of consumed iterations. In all test functions, the PGA-MRFO outperformed the conventional MRFO by achieving a lower value of Fbest in fewer iterations, except for F8, where it reached a better value of Fbest in the same number of iterations, as shown in Table 1. The results illustrate that the PGA-MRFO converged faster and was more reliable than the original MRFO. The last column of Table 1 indicates the suggested algorithm's improvement as a percentage, determined as follows:
$$\text{Percentage improvement} = \frac{\left| \text{PGA-MRFO iterations} - \text{MRFO iterations} \right|}{\text{MRFO iterations}} \times 100$$
On the other hand, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11 show the convergence curves toward the optimum solution for the classical MRFO and the proposed PGA-MRFO algorithm on all test functions. From the table and figures, we can see that:
  • For F5 (Figure 2), the PGA-MRFO algorithm converges to the optimum solution in 438 iterations, whereas the MRFO algorithm consumed 1000 iterations and could not reach the optimum solution.
  • For F7 (Figure 3), the PGA-MRFO algorithm converges to the optimum solution in fewer iterations than the MRFO algorithm.
  • For F8 (Figure 4), both algorithms failed to reach the optimum solution within 1000 iterations. However, PGA-MRFO converged to a better solution than the MRFO algorithm in the same number of iterations.
  • For F13 (Figure 5), the PGA-MRFO algorithm reached the optimum solution in 181 iterations; the MRFO algorithm consumed 1000 iterations and failed to converge to the optimum solution.
  • For F14 (Figure 6), the PGA-MRFO reached the optimum solution in fewer iterations than the MRFO algorithm. Note that the fluctuations in the PGA-MRFO results represent the algorithm's attempts to get out of the local minimum and move to a better solution.
  • For F15 (Figure 7), the PGA-MRFO reached the optimum solution in 169 iterations, while the MRFO algorithm could not even get a near-optimal solution after 1000 iterations.
  • For F20 (Figure 8), the PGA-MRFO algorithm converges to the optimum solution in 35 iterations, whereas the MRFO algorithm spent 500 iterations and could not reach the optimum solution due to trapping in a local minimum.
  • For F21 (Figure 9), the PGA-MRFO algorithm obtained the optimum solution in 109 iterations; the MRFO algorithm consumed 1000 iterations without converging to the optimum solution due to falling into a local minimum.
  • For F22 (Figure 10), the PGA-MRFO algorithm obtained the optimum solution in 169 iterations, whereas the MRFO algorithm needed 325 iterations to reach the optimum solution; the additional iterations were wasted trapped in a local minimum.
  • For F23 (Figure 11), the PGA-MRFO algorithm obtained the optimum solution in 87 iterations. In contrast, the MRFO algorithm spent 1000 iterations and could not converge to the optimum solution: the figure shows that it consumed more than 800 iterations with almost the same solution due to trapping in a local minimum, while the PGA-MRFO algorithm fluctuated until it got out of the local minimum and reached the optimum solution.
To further clarify, both the classical MRFO and the PGA-MRFO are applied to the 13 test functions whose dimensions can be increased [17]; the dimensions of these problems are increased to 100. In addition, the GA is applied to the same functions for a fuller comparison. The maximum number of generations is 50 and the population size is 20 for the three algorithms, to accelerate the calculations for these high-dimensional cases. Table 2 illustrates the results of the three algorithms and the related percentage improvement according to Equation (32). As shown in the table, for high-dimensional problems, the suggested approach can reach the desired tolerance in fewer iterations.

5. Application of PGA-MRFO Algorithm to Unit Commitment Problem

5.1. Description of the Problem

The unit commitment (UC) problem is a nonlinear, mixed-integer combinatorial optimization problem. It involves scheduling a power system's generating units into on or off modes over a given period to meet a specific objective. The UC is performed centrally by the utility control center based on a load forecast over a particular time horizon. The aim is to minimize the total operation costs subject to different operating conditions; the operation costs include all generating units' production and startup costs over the entire study time.

5.2. Problem Formulation

The UC problem can be formulated mathematically by presenting the objective function and the operating constraints. The objective function, representing the total operating cost of the system, can be written as follows:

$$\min_{P_i,\, U_i} \sum_{k=1}^{T} \sum_{i=1}^{N} U_i^k \cdot \left[ F(P_i^k) + S_i^k \left( 1 - U_i^{k-1} \right) \right]$$

$$F(P_i^k) = a_i + b_i P_i^k + c_i \left( P_i^k \right)^2 \quad \text{or} \quad F(P_i^k) = F_i^0 + \lambda_i \cdot H_i \cdot P_i^k$$
The following system constraints must be fulfilled (a cost-evaluation sketch follows the constraint list):
1. Power balance constraint: The total generation must cover the whole load; system losses can be neglected.

$$\sum_{i=1}^{N_g} U_i^k \cdot P_i^k = TLD^k$$
2. Generation limits: Generation units must be dispatched between the minimum and maximum limits of generation.

$$P_i^{min} \leq P_i^k \leq P_i^{max}$$
3. Minimum up time: A generation unit must be kept running for a specific time before turning it off, which can be formulated as follows:

$$UT_i^k = \left( UT_i^{k-1} + 1 \right) \cdot u_i^k$$

$$\left( u_i^{k-1} - u_i^k \right) \cdot \left( UT_i^{k-1} - T_i^{up\,min} \right) \geq 0$$
4. Spinning reserve: The total capacity of all committed generation units must cover the whole load plus a specified reserve. It can be formulated as follows:

$$\sum_{i=1}^{N} U_i^k \cdot P_i^{max} \geq TLD^k + R^k$$
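For orientation, the sketch below evaluates a candidate commitment/dispatch against the objective and constraints above, using a quadratic fuel cost and a simple penalty for constraint violations. The penalty handling, array layout, and the omission of the minimum up-time check are our simplifications for illustration, not the paper's formulation.

```python
import numpy as np

def uc_cost(U, P, a, b, c, S, load, reserve, Pmin, Pmax, penalty_w=1e6):
    """Total UC cost for schedules U (on/off) and P (dispatch), both (T, N)
    arrays over T periods and N units; a, b, c, S, Pmin, Pmax are length-N
    vectors, load and reserve are length-T. Violations are penalized."""
    T, N = U.shape
    fuel = U * (a + b * P + c * P**2)                 # quadratic fuel cost per unit/period
    prev = np.vstack([np.zeros(N), U[:-1]])           # U^{k-1}; units assumed OFF before k = 1
    startup = S * U * (1 - prev)                      # pay S_i when unit i turns on
    viol = 0.0
    for k in range(T):
        viol += abs((U[k] * P[k]).sum() - load[k])                      # power balance
        viol += np.maximum(U[k] * (P[k] - Pmax), 0).sum()               # upper generation limits
        viol += np.maximum(U[k] * (Pmin - P[k]), 0).sum()               # lower generation limits
        viol += max(load[k] + reserve[k] - (U[k] * Pmax).sum(), 0.0)    # spinning reserve
    return fuel.sum() + startup.sum() + penalty_w * viol
```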

5.3. Test System Cases

The proposed algorithm is applied to solve two cases of the UC problem.

5.3.1. Case 1: Four Units Test Power System

The data of the test system are given in Table 3. The load demand changes over eight intervals of 3 h each, as shown in Figure 12.

5.3.2. Case 2: Ten Units Test Power System

The data of the test system are given in Table 4. The load demand changes over eight intervals of 3 h each, as shown in Figure 13.

5.4. Modifying the PGA-MRFO to Solve the UC Problem

The data in this problem are discrete. As a result, the concept of a continuous pseudo parameter cannot be applied here; the new individual varies over a countable set of values. In addition, there is no need for the α parameter defined by Equation (22), since no approximation of the Jacobian has to be calculated in this discrete problem. The number of GA parameters therefore decreases from three to two: the Δs parameter domain becomes [0, 1], and the parameter δ keeps the same domain. The reason is explained in the detailed steps of the modified part of the algorithm. The minimization problem to be solved using the GA is defined as:
$$\begin{cases} \min\limits_{\delta \in (0,1),\ \Delta s \in [0,1]} |f_{new} - \delta f| & \text{if } f \geq 0 \\[2mm] \min\limits_{\delta \in (0,1),\ \Delta s \in [0,1]} \left| f_{new} - \dfrac{f}{\delta} \right| & \text{if } f < 0 \end{cases}$$
The basic steps of the PGA-MRFO algorithm are applied as explained before, except for the part related to updating the individual. In the continuous case, $\Delta X$ was calculated according to Equation (23); in the discrete case, it is approximated according to the following steps.
1. Let $X = [x_1 \;\, x_2 \;\, x_3 \ldots x_n]$ be the best individual, which is stuck in a local minimum.
2. The limits of change in each variable $x_i \in X$ are calculated as:

$$span(x_i) = \left[ x_i^{low} - x_i,\; x_i^{up} - x_i \right]$$

3. Each variable's available set of discrete values comprises the integer values between the lower and upper limits of the span calculated in step 2.

For example, let $X = [3 \;\, 2 \;\, 0]$, and let the lower and upper boundaries of the problem's variables be $low = [0 \;\, 1 \;\, 0]$ and $up = [5 \;\, 7 \;\, 4]$. According to Equation (40), the span of each variable in $X$ is:

$$span(x_1) = [-3,\, 2], \quad span(x_2) = [-1,\, 5], \quad span(x_3) = [0,\, 4]$$

Consequently, the available set of cases for each variable, $VC(i)$, is:

$$VC(1) = \{-3, -2, -1, 0, 1, 2\}, \quad VC(2) = \{-1, 0, 1, 2, 3, 4, 5\}, \quad VC(3) = \{0, 1, 2, 3, 4\}$$
4. The two GA parameters are used to calculate the index of the proposed change in each variable. This index is approximated as:

$$dx_{index}(x_i) = \text{Integer}\big( \Delta s \cdot \text{length}(VC(i)) \big), \qquad dx_{index} = \left[ dx_{index}(x_1) \;\; dx_{index}(x_2) \ldots dx_{index}(x_n) \right]$$

Since the index cannot be less than one or more than the length of the set $VC(i)$, the domain of the parameter $\Delta s$ was replaced by [0, 1], as mentioned before. In addition, if $dx_{index}$ is calculated as 0, it is replaced by 1.
5. After identifying the index, the new individual based on the genetic parameters is defined as:

$$dx_i = VC(i)\big( dx_{index}(x_i) \big), \qquad dx = [dx_1 \;\, dx_2 \ldots dx_n], \qquad X_{new} = X + dx$$

For example, if $dx_{index} = (1, 3, 2)$, then $dx = [VC(1)(1) \;\; VC(2)(3) \;\; VC(3)(2)] = [-3 \;\, 1 \;\, 1]$. Finally, the new individual is $X_{new} = [3 \;\, 2 \;\, 0] + [-3 \;\, 1 \;\, 1] = [0 \;\, 3 \;\, 1]$.
6. For each individual in the genetic population, the new solution is calculated according to these five steps. Then, its objective function is calculated according to Equation (29), and the corresponding fitness function of the GA is calculated according to Equation (30).
These steps (1 to 6) replace steps 2 and 3 of the PGA-MRFO for continuous-domain problems (a sketch of the discrete update follows). In addition, due to the modified domain of the Δs parameter and the elimination of the Jacobian calculations, there is no need to apply the correction step indicated by Equation (28); the new variables will always be in the feasible region. As mentioned before, the rest of the algorithm proceeds until the stopping criterion is achieved. Dynamic programming (DP), particle swarm optimization (PSO), the classical MRFO algorithm, and the PGA-MRFO algorithm will be applied to this problem.
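A compact Python sketch of steps 1-5 follows; the function name is illustrative, and it assumes integer NumPy arrays for X and its bounds.

```python
import numpy as np

def discrete_update(X, low, up, delta_s):
    """Discrete pseudo-parameter move: for each variable, build the integer
    candidate set VC(i) from its span (Eq. (40)), select the candidate with
    1-based index Integer(delta_s * len(VC(i))) (at least 1), and shift X."""
    X_new = X.copy()
    for i in range(X.size):
        vc = np.arange(low[i] - X[i], up[i] - X[i] + 1)  # span -> integer candidates VC(i)
        idx = max(int(delta_s * len(vc)), 1)             # index clamped to [1, len(VC(i))]
        X_new[i] = X[i] + vc[idx - 1]
    return X_new

# With the paper's example X = [3, 2, 0], low = [0, 1, 0], up = [5, 7, 4],
# delta_s = 0.4 gives indices (2, 2, 2) and X_new = [1, 2, 1].
```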

5.4.1. Results of UC Problem—Case 1

Table 5 shows the results of the four algorithms in three different scenarios. The population consists of 100 individuals in all scenarios, but the maximum number of generations is 100 in scenario 1, 300 in scenario 2, and 500 in scenario 3; these numbers were chosen arbitrarily to examine the performance of the algorithms over a broader range of iterations. The optimum solution for this case is 73,274, so it is used as the tolerance indicator: if an algorithm reaches this solution, it terminates, and the number of consumed iterations is recorded; otherwise, the search process continues until reaching the maximum number of generations. The DP algorithm does not depend on iterations as the other algorithms do, so its results are included only in the tables. In all three scenarios, the proposed algorithm discovers the optimum solution in a few iterations, whereas the other algorithms (DP, PSO, and standard MRFO) used all their iterations and failed to locate the optimal solution, as shown in Table 5.
In addition, Figure 14, Figure 15 and Figure 16 show the convergence curves toward the optimum solution obtained by DP, PSO, the classical MRFO, and the proposed PGA-MRFO algorithm for all three scenarios. The proposed PGA-MRFO algorithm found the optimum solution in a very small number of iterations, as shown in the figures, while the other algorithms, PSO and MRFO, consumed the whole number of iterations without reaching the optimal solution, even when the population size grew larger.

5.4.2. Results of UC Problem—Case 2

The same procedures as in Case 1 are applied here: the four algorithms, DP, PSO, classical MRFO, and the proposed PGA-MRFO, are used to solve this case. Table 6 summarizes the results for a population size of 100 and three scenarios for the number of generations (100, 300, 500). Since the optimum solution of UC Case 1 was known, it was employed as a tolerance for the termination criterion; in UC Case 2, however, it is unknown, so all iterative algorithms run until the maximum number of generations is reached, and the best solution is then recorded. In each of the three scenarios, the PGA-MRFO method outperforms the DP, PSO, and standard MRFO algorithms and obtains the best solution. Finally, the results of this section illustrate the ability of the proposed algorithm to solve both continuous and discrete problems, overcome trapping in local minima, and reach the optimum solution in fewer iterations.

6. Analysis of the Proposed Algorithm

Several analyses will be applied to the proposed algorithm in this section. These analyses aim to illustrate the effectiveness and reliability of the proposed algorithm. In addition, additional comparisons will be added to show the improvements suggested by the proposed algorithm.

6.1. Uncertainty Analysis of the Proposed Algorithm

Uncertainty analysis aims to quantify the variability of the output due to the variability of the input. The quantification is most often performed by estimating statistical measurements such as mean and standard deviation. The load disturbance and fuel cost in the UC problem are considered examples of uncertainty parameters in this research.
(1) Load disturbance: A total of 20 different cases of load disturbance were considered to clarify the effect of uncertainty on the calculations of the optimum solution. The load disturbances were randomly calculated by increasing or decreasing as a percentage of the original loads. The load uncertainty cases are taken from 1% to 20% of the original case. The population size is 20, and the maximum number of generations is 50 for the four algorithms, DP, PSO, MRFO, and PGA-MRFO, in all cases. These cases are applied to the model of the UC problem—Case 1 illustrated in the previous section.
Table 7 shows the best solution obtained by the four algorithms for the original load data and the uncertain load data, along with the mean and the standard deviation over the different instances of load uncertainty. The results illustrate the influence of uncertainty on the proposed algorithm compared to the other algorithms: the PGA-MRFO has the mean value closest to the original case among all the algorithms, despite the uncertainty percentage reaching 20%. In addition, the proposed algorithm's standard deviation is lower than that of the other algorithms, which means the calculated solution in each case lies close to that mean.
(2) Fuel cost: Since the fuel price changes continuously, it can be considered a source of uncertainty in the calculations of the UC problem. So, 20 cases of disturbance or uncertainty of fuel cost are considered. The uncertainty is considered from 1% to 20% of the original price indicated in the model of the UC problem—Case 1 illustrated in the previous section. The DP, PSO, MRFO, and PGA-MRFO results are recorded in Table 8.
Table 8 illustrates that the suggested algorithm's mean over the uncertainty cases is the closest to the solution obtained with the original data. In addition, the results show that the dispersion of the results around the mean is smaller for the proposed algorithm than for the other algorithms. Finally, the tables and results clarify the effectiveness of the proposed algorithm under uncertainty in the examined UC problem.

6.2. Robustness Analysis of the Proposed Algorithm

The robustness of an algorithm can be expressed as its ability to reach a good solution in most of the runs in which it is applied; the ability to provide good results on other problems of the same type is another measure of robustness. The test functions have already illustrated the proposed algorithm's robustness across various problems, so here the repeated application of the proposed algorithm is explored to verify its robustness from the first point of view. Three cases of operation and recurrence are applied: first, the algorithm is applied with a constant population size and a varying number of generations; second, the maximum number of generations remains constant while the population size varies; finally, one of the algorithm's parameters is varied. The UC problem, Case 1, is used for these analyses.
(Robustness analysis case-1) The PGA-MRFO algorithm is applied with a constant population size (20) while the maximum generations number varies from 10 to 200. Figure 17 shows the results of this case in detail, where the resulting optimum solution variation is calculated by the PGA-MRFO algorithm versus the size of the generations.
(Robustness analysis case-2) In this case, The PGA-MRFO algorithm is applied with a constant number of generations (50) while the population size varies from 10 to 200. Figure 18 shows the relationship between the calculated optimum solution and population size.
(Robustness analysis case-3) The PGA-MRFO algorithm is a hybrid of the MRFO algorithm and the GA, so it is affected by their parameters. The MRFO algorithm depends on random parameters that vary continuously across individuals and generations; the parameter $r_1$ is one of them. The robustness of the proposed algorithm versus this parameter is explored here. $r_1$ was chosen for investigation since it is a common parameter in MRFO, whereas the other parameters depend on the number of system variables. Twenty constant values in (0, 1) of this parameter were considered and tested with the proposed algorithm. In the basic algorithm, this parameter is changed randomly at each iteration, so in this examination, each value of $r_1$ remains constant until the termination of the optimization process, and the resulting solution is recorded. Figure 19 shows the variation of the optimum solution obtained from the proposed algorithm versus the parameter $r_1$.
Figure 17, Figure 18 and Figure 19 show how the PGA-MRFO algorithm often reaches the same solution under different parameter values, indicating its robustness and reliability. For further analysis, the mean and standard deviation for each case are calculated, as well as the percentage error between the mean and the original best solution, to evaluate the deviation of that mean from the optimum solution. The optimum solution is taken as calculated in Section 5, which equals 73,274. This percentage is calculated as:
$$Error = \frac{mean - optimum\ solution}{optimum\ solution} \times 100$$
Table 9 shows the mean and standard deviation values of each case of robustness examination mentioned before. The table illustrates that, despite different changes in the corresponding parameters, the PGA-MRFO algorithm could detect the optimum or near optimum solution with minimum error, showing its robustness and availability.

6.3. Time Complexity and Computational Time Analysis of PGA-MRFO

In the following, three strategies are introduced to analyze the time complexity and computational time of PGA-MRFO:
  • The time complexity may be considered the amount of time an algorithm takes to execute as a function of the input size; here, the input size indicates the number of operations performed by the algorithm. The PGA-MRFO algorithm's time complexity may be considered the combination of the classical MRFO's time complexity and the genetic algorithm's time complexity. The genetic algorithm's time complexity is $O(D \cdot G \cdot P)$, where $D$ is the problem dimension, $G$ is the maximum number of generations, and $P$ is the population size. Since in the proposed algorithm the dimension in the genetic loop is a constant (3), its time complexity reduces to $O(3 \cdot G \cdot P)$, or after approximation, $O(G \cdot P)$. So, the time complexity of the PGA-MRFO algorithm is less than that of a classical hybridization of the GA and MRFO.
  • The actual computation time of the PGA-MRFO algorithm is calculated over different cases of a problem; then, curve fitting is used to express this time as a function of specific parameters. Here, the parameters are the dimension of the problem and the maximum of the number of generations and the population size. Since the time complexity depends on the problem dimension and other parameters, it is essential to be able to raise the dimension easily to facilitate the investigation, which is not always possible in the UC problem. So, the time complexity analysis of the proposed algorithm is discussed for the following test function:
$$F = \sum_{i=1}^{N} x_i^2, \qquad x_i \in [-100, 100]$$
The number of variables, taking values from 10 to 500, is considered the first parameter for fitting the computation time, while both the number of generations and the population size are created randomly in the range [10, 200]; the maximum of the two is considered the second parameter. The PGA-MRFO is applied to 50 cases within the determined ranges of generations and population size, and the time in minutes is recorded. The fitting toolbox in MATLAB is used to identify the best-fitting surface for the available data and deduce an approximate formula for the computation time; a 5th-degree polynomial is selected. The resulting approximation of the time is:
$$\begin{aligned} \text{Approximated time} = {} & p_{00} + p_{10} x + p_{01} y + p_{20} x^2 + p_{11} x y + p_{02} y^2 + p_{30} x^3 + p_{21} x^2 y + p_{12} x y^2 + p_{03} y^3 \\ & + p_{40} x^4 + p_{31} x^3 y + p_{22} x^2 y^2 + p_{13} x y^3 + p_{04} y^4 + p_{50} x^5 + p_{41} x^4 y + p_{32} x^3 y^2 \\ & + p_{23} x^2 y^3 + p_{14} x y^4 + p_{05} y^5 \end{aligned}$$
The values of the approximated coefficients are shown in Table 10, while the surface fitting the calculation time versus the dimension and the maximum of the population size and number of generations is shown in Figure 20.
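For readers who want to reproduce this kind of fit outside MATLAB, a small sketch using NumPy least squares over the same monomial family is shown below; the function name and this NumPy route are our assumptions for illustration, not the paper's tooling.

```python
import numpy as np

def fit_time_surface(x, y, t, deg=5):
    """Least-squares fit of measured times t against dimension x and
    max(generations, population size) y, over all monomials x^i * y^j
    with i + j <= deg, mirroring the polynomial above."""
    terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1) if i + j <= deg]
    A = np.column_stack([x**i * y**j for i, j in terms])  # design matrix
    coeffs, *_ = np.linalg.lstsq(A, t, rcond=None)
    return dict(zip(terms, coeffs))                       # {(i, j): p_ij}
```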
Indeed, other variables, such as algorithm parameters and the objective function’s computation time, affect the computation time. The latter depends on the objective function and its equations and not only on the dimensions of the problem. More analysis of the time complexity of the proposed algorithm can be investigated in future work.
  • Another strategy to illustrate the improvement resulting from the proposed PGA-MRFO algorithm is to apply it and the MRFO algorithm to the objective function of Equation (49). The problem dimension varies from 10 to 500, yielding 50 cases, while the maximum number of generations and the population size are fixed at (100, 50). Figure 21 compares the PGA-MRFO and MRFO calculation times.
Figure 21 shows that in most cases, despite combining two algorithms, the proposed algorithm reaches the desired tolerance faster than the original MRFO, especially for high dimensions. The proposed algorithm saves time that would otherwise be wasted by the MRFO when it is stuck in a local minimum. So, from that point of view, PGA-MRFO has less computation time and consequently less burden than the original MRFO. In addition, Table 11 shows the mean value and standard deviation of the computation time for both algorithms over the 50 cases mentioned before.
In most test cases, the proposed algorithm takes less time, as shown in Table 11. Although the MRFO takes less time than the PGA-MRFO in some cases, the proposed algorithm's mean time is less than that of the MRFO. Finally, the tables and figures demonstrate the applicability and benefits of the proposed algorithm.

7. Conclusions

This paper introduced a hybrid optimization algorithm based on the classical MRFO and a pseudo-parameter-based genetic algorithm. The proposed algorithm aims to solve the problem of the MRFO algorithm falling into local minima, which consumes iterations without improvement. The PGA is a genetic algorithm based on a pseudo parameter that reduces the number of variables in the GA. A sub-optimization problem was derived based on the pseudo parameter and the optimality condition; after solving the sub-problem, the best solution is updated, and the MRFO passes the local minimum. MRFO is then applied again until it falls into another local minimum, and the search continues until the termination criterion is reached. The PGA-MRFO was applied to the test functions that got stuck in local minima in the original MRFO paper [17]. The results show how the proposed hybridization enhanced the performance of the MRFO algorithm and enabled it to reach the optimum solution. In addition, the algorithm was applied to the 13 test functions in [17] whose dimensions can be increased. Besides, it was applied to the unit commitment (UC) problem to illustrate its validity and efficiency; some modifications were added to the PGA-MRFO algorithm to fit this type of problem. The results of the PGA-MRFO algorithm were compared to the DP, PSO, and classical MRFO algorithms. Uncertainty, robustness, and time analyses were conducted for the proposed algorithm to illustrate its properties. All of the results show that the proposed algorithm is superior to the other algorithms in terms of reaching the optimal solution in a few iterations, whether the population size or the maximum number of generations changes.
In our future work, three directions will be prioritized: (i) adding further PGA-MRFO adjustments and evaluating their impact on optimization results; (ii) using PGA-MRFO to tackle optimization problems in various fields; and (iii) using PGA-MRFO to improve existing metaheuristic algorithms, such as the moth search (MS) algorithm, the slime mould algorithm (SMA), the hunger games search (HGS), the Runge Kutta optimizer (RUN), the colony predation algorithm, and Harris hawks optimization (HHO).

Author Contributions

Conceptualization, M.A.E.-S.; Data curation, T.F.; Formal analysis, H.A.O. and T.F.; Investigation, M.A.E.-S. and H.A.O.; Methodology, H.A.O.; Resources, T.F.; Software, H.A.O.; Validation, M.A.E.-S.; Writing—original draft, H.A.O. and T.F.; Writing—review & editing, M.A.E.-S. All authors have read and agreed to the published version of the manuscript.

Funding

The authors received no specific funding for this work.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

All data used to support the findings of this study are included in the article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

N: Number of variables
$x_i^d(t)$: The position of the ith individual at time t in the dth dimension
r: Random vector within the range [0, 1]
$\alpha_1$: Weight coefficient
$x_{best}^d(t)$: The plankton with the highest concentration (best solution found so far)
$\beta$: Weight coefficient
T: Maximum number of iterations
S: Somersault factor
$r_1, r_2, r_3$: Random numbers in [0, 1]
t: The current iteration
s: The pseudo parameter
$\alpha$: Weight coefficient within the range $[-10^6, 10^6]$
$\Delta s$: Pseudo parameter step
$x_i$: Problem variable i
$x_i^{low}$: The lower limit of variable i
$x_i^{up}$: The upper limit of variable i
$[a_i, b_i]$: The domain of variable i
$[s_{min}, s_{max}]$: Pseudo parameter domain
f: The fitness function
$g_i$: The pseudo function
$\nabla f$: The gradient of the fitness function
$(\nabla f)^{+}$: The pseudo-inverse (Moore-Penrose) of the approximated gradient
$\delta$: The presumed improving ratio of the objective function, within the range (0, 1)
X: Vector of system variables
$X_{new}$: New calculated vector X
$X_{best}$: The best solution
$f_{best}$: The best fitness value
$P_i^k$: The output power of generation unit i at period k
$U_i^k$: The commitment state of unit i at period k
$F(P_i^k)$: The generation fuel cost of unit i at period k
$S_i^k$: The startup cost of unit i at period k
$N_g$: Number of generating units
$T_1$: The total number of scheduling periods
$F_i^0$: The no-load cost of generator i
$\lambda_i$: The incremental fuel cost of generator i
$H_i$: The heat rate of generator i
$UT_i^k$: The number of ON-hours of unit i
$T_i^{up\,min}$: The minimum up-time of unit i
$R^k$: The spinning reserve requirement
$TLD^k$: The total load demand at period k

References

1. Bartholomew-Biggs, M. Nonlinear Optimization with Engineering Applications; Springer Optimization and Its Applications; Springer: Berlin/Heidelberg, Germany, 2008.
2. Rao, S.S. Engineering Optimization: Theory and Practice, 4th ed.; Wiley: Hoboken, NJ, USA, 2009.
3. Michalewicz, Z. Evolutionary Computation Techniques for Nonlinear Programming Problems. Int. Trans. Oper. Res. 1994, 1, 223–240.
4. Onwubolu, G.C.; Babu, B.V. New Optimization Techniques in Engineering; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2004; Volume 141.
5. Holland, J.H. Genetic Algorithms. Sci. Am. 1992, 267, 66–73.
6. Sastry, K.; Goldberg, D.; Kendall, G. Genetic Algorithms. In Search Methodologies; Burke, E.K., Kendall, G., Eds.; Springer: Boston, MA, USA, 2005.
7. El-Desoky, I.M.; El-Shorbagy, M.A.; Nasr, S.M.; Hendawy, Z.M.; Mousa, A.A. A hybrid genetic algorithm for job shop scheduling problems. Int. J. Adv. Eng. Technol. Comput. Sci. 2016, 1, 6–17.
8. El-Shorbagy, M.A.; Ayoub, A.Y.; El-Desoky, I.M.; Mousa, A.A. A Novel Genetic Algorithm Based K-Means Algorithm for Cluster Analysis. In International Conference on Advanced Machine Learning Technologies and Applications; Springer: Cham, Switzerland, 2018; pp. 92–101.
9. Storn, R.; Price, K. Differential evolution: A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
10. Simon, D. Evolutionary Optimization Algorithms; John Wiley & Sons: Hoboken, NJ, USA, 2013.
11. Beni, G.; Wang, J. Swarm Intelligence in Cellular Robotic Systems. In Proceedings of the NATO Advanced Workshop on Robots and Biological Systems, Tuscany, Italy, 26–30 June 1989; Springer: Berlin/Heidelberg, Germany, 1989; pp. 703–712.
12. El-Shorbagy, M.A.; Hassanien, A.E. Particle swarm optimization from theory to applications. Int. J. Rough Sets Data Anal. 2018, 5, 1–24.
13. El-Shorbagy, M.A. Weighted Method Based Trust Region-Particle Swarm Optimization for Multi-Objective Optimization. Am. J. Appl. Math. 2015, 3, 81.
14. Allah, A.M.A.; El-Shorbagy, M.A. Enhanced Particle Swarm Optimization Based Local Search for Reactive Power Compensation Problem; Scientific Research Publishing: Wuhan, China, 2012.
15. Dorigo, M.; Stützle, T. Ant Colony Optimization; MIT Press: Cambridge, MA, USA, 2004.
16. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper Optimisation Algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47.
17. Zhao, W.; Zhang, Z.; Wang, L. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 2020, 87, 103300.
18. Zhang, Y.; Jin, Z. Group teaching optimization algorithm: A novel metaheuristic method for solving global optimization problems. Expert Syst. Appl. 2020, 148, 113246.
19. Atashpaz-Gargari, E.; Lucas, C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 4661–4667.
20. Rao, R.; Savsani, V.; Vakharia, D. Teaching–Learning-Based Optimization: An optimization method for continuous non-linear large scale problems. Inf. Sci. 2012, 183, 1–15.
21. Kashan, A.H. An efficient algorithm for constrained global optimization and application to mechanical engineering design: League championship algorithm (LCA). Comput.-Aided Des. 2011, 43, 1769–1792.
22. Askari, Q.; Younas, I.; Saeed, M. Political Optimizer: A novel socio-inspired meta-heuristic for global optimization. Knowl.-Based Syst. 2020, 195, 105709.
23. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248.
24. Tayarani-N, M.H.; Akbarzadeh-T, M.R. Magnetic Optimization Algorithms: A New Synthesis. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–6 June 2008; pp. 2659–2664.
25. Van Laarhoven, P.J.M.; Aarts, E.H.L. Simulated Annealing. In Simulated Annealing: Theory and Applications; Springer: Dordrecht, The Netherlands, 1987; pp. 7–15.
26. Izci, D.; Ekinci, S.; Eker, E.; Kayri, M. Improved manta ray foraging optimization using opposition-based learning for optimization problems. In Proceedings of the 2020 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), Ankara, Turkey, 26–27 June 2020; pp. 1–6.
27. Hassan, M.H.; Houssein, E.H.; Mahdy, M.A.; Kamel, S. An improved Manta ray foraging optimizer for cost-effective emission dispatch problems. Eng. Appl. Artif. Intell. 2021, 100, 1–20.
28. Muralikrishnan, N.; Jebaraj, L.; Rajan, C.C.A. A Comprehensive Review on Evolutionary Optimization Techniques Applied for Unit Commitment Problem. IEEE Access 2020, 8, 132980–133014.
29. Shadaksharappa, N.M. Optimum Generation Scheduling for Thermal Power Plants using Artificial Neural Network. Int. J. Electr. Comput. Eng. 2011, 1, 134–139.
30. Dudek, G. Adaptive simulated annealing schedule to the unit commitment problem. Electr. Power Syst. Res. 2010, 80, 465–472.
31. Sakthi, S.S.; Santhi, R.; Krishnan, N.M.; Ganesan, S.; Subramanian, S. Wind Integrated Thermal Unit Commitment Solution Using Grey Wolf Optimizer. Int. J. Electr. Comput. Eng. 2017, 7, 2309.
32. Ponciroli, R.; Stauff, N.E.; Ramsey, J.; Ganda, F.; Vilim, R.B. An Improved Genetic Algorithm Approach to the Unit Commitment/Economic Dispatch Problem. IEEE Trans. Power Syst. 2020, 35, 4005–4013.
33. Zhai, Y.; Liao, X.; Mu, N.; Le, J. A two-layer algorithm based on PSO for solving unit commitment problem. Soft Comput. 2020, 24, 9161–9178.
34. Kamboj, V.K. A novel hybrid PSO–GWO approach for unit commitment problem. Neural Comput. Appl. 2016, 27, 1643–1655.
35. Khunkitti, S.; Watson, N.R.; Chatthaworn, R.; Premrudeepreechacharn, S.; Siritaratiwat, A. An Improved DA-PSO Optimization Approach for Unit Commitment Problem. Energies 2019, 12, 2335.
Figure 1. Flow chart of the proposed algorithm.
Figure 2. Convergence curves for F5.
Figure 3. Convergence curves for F7.
Figure 4. Convergence curves for F8.
Figure 5. Convergence curves for F13.
Figure 6. Convergence curves for F14.
Figure 7. Convergence curves for F15.
Figure 8. Convergence curves for F20.
Figure 9. Convergence curves for F21.
Figure 10. Convergence curves for F22.
Figure 11. Convergence curves for F23.
Figure 12. Load curve for the Case 1 test system.
Figure 13. Load curve for the Case 2 test system.
Figure 14. Convergence curves for the UC problem, Case 1, at max. generations = 100.
Figure 15. Convergence curves for the UC problem, Case 1, at max. generations = 300.
Figure 16. Convergence curves for the UC problem, Case 1, at max. generations = 500.
Figure 17. Variation of the optimum solution versus the maximum number of generations.
Figure 18. Variation of the optimum solution versus population size.
Figure 19. Variation of the optimum solution versus the parameter r1.
Figure 20. The fitted surface of the calculation time.
Figure 21. Calculation times of MRFO and PGA-MRFO.
Table 1. Comparison between the two algorithms according to the consumed iterations.

Test Function | Function No. | D | Domain | No. of Iterations (MRFO) | No. of Iterations (PGA-MRFO) | Percentage Improvement
Rosenbrock | F5 | 30 | [−30, 30] | 1000 | 438 | 56.2%
Quartic | F7 | 30 | [−1.28, 1.28] | 1000 | 419 | 58.1%
Schwefel | F8 | 30 | [−500, 500] | 1000 | 520 | 48%
Penalized2 | F13 | 30 | [−50, 50] | 1000 | 181 | 81.9%
Foxholes | F14 | 2 | [−65.536, 65.536] | 62 | 42 | 32.2%
Kowalik | F15 | 4 | [−5, 5] | 1000 | 169 | 83.1%
Hartman 6 | F20 | 6 | [0, 1] | 500 | 35 | 93%
Shekel 5 | F21 | 4 | [0, 10] | 1000 | 109 | 89.1%
Shekel 7 | F22 | 4 | [0, 10] | 325 | 165 | 49.2%
Shekel 10 | F23 | 4 | [0, 10] | 1000 | 87 | 91.3%
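The Percentage Improvement column is consistent with the relative reduction in consumed iterations; this reading is inferred from the tabulated values rather than a formula stated in this section:

$$\text{Improvement} = \frac{N_{\text{MRFO}} - N_{\text{PGA-MRFO}}}{N_{\text{MRFO}}} \times 100\%, \qquad \text{e.g., F5: } \frac{1000 - 438}{1000} \times 100\% = 56.2\%.$$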
Table 2. Comparison between the three algorithms according to consumed iterations.

Test Function | Function | D | Domain | Iterations (MRFO) | Iterations (GA) | Iterations (PGA-MRFO) | Improvement vs. MRFO | Improvement vs. GA
Sphere | F1 | 100 | [−100, 100] | 50 | 50 | 7 | 86% | 86%
Schwefel 2.22 | F2 | 100 | [−10, 10] | 50 | 50 | 3 | 94% | 94%
Schwefel 1.2 | F3 | 100 | [−100, 100] | 50 | 50 | 5 | 90% | 90%
Schwefel 2.21 | F4 | 100 | [−100, 100] | 50 | 50 | 6 | 88% | 88%
Rosenbrock | F5 | 100 | [−30, 30] | 50 | 50 | 4 | 92% | 92%
Step | F6 | 100 | [−100, 100] | 50 | 50 | 5 | 90% | 90%
Quartic | F7 | 100 | [−1.28, 1.28] | 50 | 50 | 22 | 56% | 56%
Schwefel | F8 | 100 | [−500, 500] | 50 | 30 | 30 | 44% | 0%
Rastrigin | F9 | 100 | [−5.12, 5.12] | 50 | 50 | 3 | 94% | 94%
Ackley | F10 | 100 | [−32, 32] | 50 | 50 | 7 | 86% | 86%
Griewank | F11 | 100 | [−600, 600] | 50 | 50 | 7 | 86% | 86%
Penalized | F12 | 100 | [−50, 50] | 50 | 50 | 17 | 66% | 66%
Penalized2 | F13 | 100 | [−50, 50] | 50 | 50 | 10 | 80% | 80%
Table 3. Case 1 test system data.

Parameter | Unit 1 | Unit 2 | Unit 3 | Unit 4
Pgmin [MW] | 25 | 60 | 75 | 20
Pgmax [MW] | 80 | 250 | 300 | 60
Inc. heat rate [BTU/kWh] | 10,440 | 9000 | 8730 | 11,900
Fuel cost [£/MBTU] | 2 | 2 | 2 | 2
No-load cost [£/h] | 213 | 585.62 | 684.74 | 252
Min up time [h] | 4 | 5 | 5 | 1
Min down time [h] | 2 | 3 | 4 | 1
Startup cost [£] | 150 | 170 | 500 | 0
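As a worked reading of Table 3, the incremental fuel cost λ_i of a unit follows from its incremental heat rate and fuel price; this unit conversion is inferred from the column units rather than computed in this section. For Unit 1:

$$\lambda_1 = H_1 \times \text{fuel cost} = 10{,}440\ \tfrac{\text{BTU}}{\text{kWh}} \times 2\ \tfrac{\pounds}{\text{MBTU}} = 20.88\ \tfrac{\pounds}{\text{MWh}},
\qquad F(P_{1k}) \approx 213 + 20.88\, P_{1k}\ \ [\pounds/\text{h}].$$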
Table 4. Case 2 test system data.

Parameter | Unit 1 | Unit 2 | Unit 3 | Unit 4 | Unit 5 | Unit 6 | Unit 7 | Unit 8 | Unit 9 | Unit 10
Pgmin [MW] | 30 | 130 | 165 | 130 | 225 | 50 | 250 | 110 | 275 | 75
Pgmax [MW] | 100 | 400 | 600 | 420 | 700 | 200 | 750 | 375 | 850 | 250
Inc. heat rate [BTU/kWh] | 9000 | 10,000 | 1100 | 1120 | 1800 | 340 | 520 | 60 | 60 | 60
Fuel cost [£/MBTU] | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2
No-load cost [£/h] | 1000 | 970 | 700 | 680 | 450 | 370 | 480 | 660 | 665 | 670
Min up time [h] | 5 | 3 | 2 | 1 | 4 | 2 | 3 | 1 | 4 | 2
Min down time [h] | 4 | 2 | 4 | 3 | 5 | 2 | 4 | 3 | 3 | 1
Startup cost [£] | 2050 | 1460 | 2100 | 1480 | 2100 | 1360 | 2300 | 1370 | 2200 | 1180
Table 5. Results of the four algorithms for the UC problem, Case 1, with a constant population size of 100.

Max. Generations = 100
Algorithm | DP | PSO | MRFO | PGA-MRFO
Total cost | 74,110 | 75,006 | 74,643 | 73,274
Consumed iterations | - | 100 | 100 | 6

Max. Generations = 300
Algorithm | DP | PSO | MRFO | PGA-MRFO
Total cost | 74,110 | 75,006 | 73,559 | 73,274
Consumed iterations | - | 300 | 300 | 7

Max. Generations = 500
Algorithm | DP | PSO | MRFO | PGA-MRFO
Total cost | 74,110 | 75,006 | 74,027 | 73,274
Consumed iterations | - | 500 | 500 | 10
Table 6. Results of the four algorithms for the UC problem, Case 2, with a constant population size of 100.

Max. Generations = 100
Algorithm | DP | PSO | MRFO | PGA-MRFO
Total cost | 38,271 | 91,152 | 42,514 | 37,765

Max. Generations = 300
Algorithm | DP | PSO | MRFO | PGA-MRFO
Total cost | 38,271 | 91,152 | 49,993 | 36,304

Max. Generations = 500
Algorithm | DP | PSO | MRFO | PGA-MRFO
Total cost | 38,271 | 91,152 | 49,993 | 36,303
Table 7. Algorithms' results due to load uncertainty.

Uncertainty Load Percentage | DP Optimum Cost | PSO Optimum Cost | MRFO Optimum Cost | PGA-MRFO Optimum Cost
Original case (0%) | 74,110 | 74,721 | 74,207 | 73,274
Original case ± 1% | 74,070 | 73,961 | 74,589 | 74,059
Original case ± 2% | 74,031 | 73,885 | 74,531 | 73,982
Original case ± 3% | 75,020 | 74,246 | 74,524 | 74,344
Original case ± 4% | 72,972 | 75,641 | 75,075 | 74,141
Original case ± 5% | 75,176 | 74,054 | 73,190 | 72,256
Original case ± 6% | 72,538 | 76,414 | 75,914 | 74,981
Original case ± 7% | 75,766 | 74,583 | 73,968 | 73,034
Original case ± 8% | 73,416 | 74,404 | 74,157 | 73,224
Original case ± 9% | 76,778 | 75,441 | 75,742 | 74,865
Original case ± 10% | 74,734 | 77,390 | 76,969 | 75,765
Original case ± 11% | 73,902 | 75,001.9 | 74,865.9 | 74,065.1
Original case ± 12% | 76,283 | 74,629 | 74,280 | 72,712
Original case ± 13% | 76,290 | 77,857 | 77,406 | 76,253
Original case ± 14% | 75,773 | 77,185 | 76,468 | 75,535
Original case ± 15% | 74,099 | 76,839 | 77,237 | 75,842
Original case ± 16% | 74,099 | 71,984 | 71,366 | 70,446
Original case ± 17% | 77,743 | 80,247 | 79,536 | 78,603
Original case ± 18% | 76,210 | 77,568 | 76,980 | 75,650
Original case ± 19% | 71,211 | 69,903 | 69,195 | 67,618
Original case ± 20% | 76,803 | 76,699 | 75,563 | 74,629
Mean | 74,845.7 | 75,396.6 | 75,077.795 | 74,100.205
Standard deviation | 2288.401536 | 2271.454 | 2247.9908 | 1641.46196
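A note on the summary rows: the Mean and Standard deviation entries appear to be computed over the twenty uncertainty rows only, excluding the 0% base case. The following check reproduces the tabulated DP and PGA-MRFO means under that assumption, which is inferred from the numbers rather than stated in the table:

```python
import numpy as np

# DP and PGA-MRFO optimum costs for the ±1% ... ±20% rows of Table 7
# (the 0% base case is excluded).
dp = np.array([74070, 74031, 75020, 72972, 75176, 72538, 75766, 73416,
               76778, 74734, 73902, 76283, 76290, 75773, 74099, 74099,
               77743, 76210, 71211, 76803])
pga = np.array([74059, 73982, 74344, 74141, 72256, 74981, 73034, 73224,
                74865, 75765, 74065.1, 72712, 76253, 75535, 75842, 70446,
                78603, 75650, 67618, 74629])
print(dp.mean())   # 74845.7   -> matches the tabulated 74,845.7
print(pga.mean())  # 74100.205 -> matches the tabulated 74,100.205
```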
Table 8. Algorithms' results due to fuel cost uncertainty.

Uncertainty Fuel Cost Percentage | DP Optimum Cost | PSO Optimum Cost | MRFO Optimum Cost | PGA-MRFO Optimum Cost
Original case (0%) | 74,110 | 74,721 | 74,207 | 73,274
Original case ± 1% | 73,866 | 74,499 | 73,989 | 73,051
Original case ± 2% | 75,104 | 75,696 | 75,188 | 74,288
Original case ± 3% | 73,672 | 74,423 | 73,944 | 72,974
Original case ± 4% | 74,527 | 75,073 | 74,872 | 73,647
Original case ± 5% | 74,080 | 74,601 | 74,343 | 73,074
Original case ± 6% | 74,372 | 75,056 | 74,622 | 73,672
Original case ± 7% | 72,515 | 73,216 | 72,688 | 71,659
Original case ± 8% | 74,140 | 75,054 | 74,681 | 73,737
Original case ± 9% | 75,986 | 76,919 | 76,621 | 76,323
Original case ± 10% | 74,621 | 75,853 | 74,956 | 75,115
Original case ± 11% | 74,907 | 75,572 | 75,069 | 74,113
Original case ± 12% | 72,092 | 72,655 | 72,304 | 70,981
Original case ± 13% | 72,012 | 73,304 | 73,099 | 72,724
Original case ± 14% | 73,760 | 74,866 | 74,777 | 73,881
Original case ± 15% | 81,207 | 81,960 | 81,648 | 80,780
Original case ± 16% | 82,712 | 84,993 | 84,088 | 82,424
Original case ± 17% | 74,027 | 76,146 | 74,147 | 72,636
Original case ± 18% | 75,351 | 75,882 | 76,222 | 74,607
Original case ± 19% | 77,183 | 80,019 | 78,332 | 77,228
Original case ± 20% | 65,573 | 66,482 | 66,288 | 65,425
Mean | 74,596.5 | 75,670.78 | 75,150.06 | 74,166.67
Standard deviation | 3471.103 | 3686.144 | 3536.355 | 3408.60549
Table 9. The statistical measures and error for robustness analysis.

Parameter | Mean | Standard Deviation | Error
Maximum number of generations | 73,447.75 | 367.4222 | 0.002371
Population size | 73,373.55 | 408.031 | 0.001358
Parameter r1 | 73,377.1 | 329.5517 | 0.001407
Table 10. The values of the approximated coefficients.

Coefficient | Value | Coefficient | Value | Coefficient | Value
p00 | 2.292 | p21 | −1.22 | p04 | 0.670
p10 | 0.3527 | p12 | 0.466 | p50 | 0.07944
p01 | 1.301 | p03 | −0.5679 | p41 | 0.03898
p20 | −0.3958 | p40 | 0.2399 | p32 | −0.2443
p11 | −0.5647 | p31 | −0.3286 | p23 | 0.4315
p02 | −0.7977 | p22 | −0.06093 | p14 | 0.16
p30 | 0.1382 | p13 | 0.8375 | p05 | 0.3069
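The 21 coefficients p_ij above match the term count of a bivariate quintic polynomial, so the fitted calculation-time surface of Figure 20 presumably has the standard form below; the subscript naming suggests a poly55-type fit, but this form is inferred rather than stated in this section:

$$t(x, y) \approx \sum_{i + j \le 5} p_{ij}\, x^{i} y^{j} = p_{00} + p_{10}x + p_{01}y + p_{20}x^{2} + p_{11}xy + p_{02}y^{2} + \dots + p_{05}y^{5}.$$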
Table 11. The mean and standard deviations of the calculated time for the algorithms based on 50 cases.

Metric | MRFO | PGA-MRFO
Mean value | 0.050418 | 0.043794
Standard deviation | 0.02474438 | 0.021480797
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
