
Sine-Cosine Algorithm to Enhance Simulated Annealing for Unrelated Parallel Machine Scheduling with Setup Times

1 School of Automation, Wuhan University of Technology, Wuhan 430070, China
2 School of Computer Science, Wuhan University, Wuhan 430072, China
3 Department of Mathematics, Faculty of Science, Zagazig University, Zagazig 44519, Egypt
4 Department of e-Systems, University of Bisha, Bisha 61922, Saudi Arabia
5 Department of Computer, Damietta University, Damietta 34511, Egypt
6 Mathematics Department, Faculty of Science, Damanhour University, Beheira 22516, Egypt
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(11), 1120; https://doi.org/10.3390/math7111120
Submission received: 24 October 2019 / Revised: 10 November 2019 / Accepted: 13 November 2019 / Published: 16 November 2019

Abstract:
This paper presents a hybrid of the Simulated Annealing (SA) algorithm and the Sine Cosine Algorithm (SCA) to solve unrelated parallel machine scheduling problems (UPMSPs) with sequence-dependent and machine-dependent setup times. The proposed method, called SASCA, aims to improve the SA algorithm by using the SCA as a local search method. The SCA gives the SA a good tool for avoiding getting stuck in a local optimum and for improving convergence to an efficient solution. The SASCA algorithm is used to solve UPMSPs by minimizing the makespan. To evaluate the performance of SASCA, a set of experiments was performed using 30 tests for 4 problems. Moreover, the performance of the proposed method was compared with other meta-heuristic algorithms. The comparison results showed the superiority of SASCA over the other methods in terms of the considered performance measures.

1. Introduction

In recent years, parallel machine scheduling problems (PMSPs) have attracted significant attention because they appear in different industrial applications and are considered key factors for sustainability at the operational level [1,2,3]. This kind of problem aims at assigning a set of jobs to a number of parallel machines while satisfying the requirements of the customers [4]. In general, there are three classes of PMSPs, namely the uniform, identical, and unrelated parallel machine scheduling problems (UPMSPs). The uniform and identical cases can be considered special cases of the UPMSP, in which different machines with different capabilities are used to perform the same function. In particular, if the processing times of the jobs depend on the machine to which the jobs are assigned, the machines are called unrelated machines.
The UPMSPs arise in different applications, such as mass production lines that use banks of machines with different capabilities and ages to perform production tasks, drilling operations in a printed circuit board factory [5], and scheduling jobs on a printed wiring board manufacturing line [6]. In addition, they appear in the textile industry, where models have been tested as in [7], and in the dicing stage of semiconductor wafer manufacturing [8]. There are several other applications, including multiprocessor computers and docking systems for ships [9].
In general, a UPMSP consists of a set of $N$ jobs, each of which must be executed on exactly one machine from a set of $M$ unrelated parallel machines ($R_M$), with the objective of minimizing the makespan ($C_{max}$); each job consists of a single task that demands a given processing time. In addition, the sequence-dependent setup times ($S_{ijk}$) between the jobs are studied, since they are a very common issue in industry. This means that the setup time required between two consecutive jobs (i and j) on machine $k$, $k = 1, \ldots, M$, may differ from the setup time of the reversed pair (i.e., the setup time on machine k between jobs j and i). Also, the setup time between jobs i and j on machine k differs from the setup time of the same jobs on another machine $k_1 \neq k$ (i.e., there exists a unique $N \times N$ setup matrix for each machine) [10]. According to these definitions, this problem can be represented as $R_M / S_{ijk} / C_{max}$.
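To make the objective concrete, the following minimal Python sketch (an illustration of the definitions above, not code from the paper) evaluates $C_{max}$ for a candidate schedule, assuming the processing times p and the sequence-dependent setup times S are given as arrays and that the job order on each machine is known; the initial setup $S_{0jk}$ is ignored here for brevity:

```python
import numpy as np

def makespan(sequences, p, S):
    """C_max of a candidate schedule.
    sequences: one job sequence per machine (0-indexed jobs).
    p[j][k]:   processing time of job j on machine k.
    S[i][j][k]: setup time to run job j directly after job i on machine k."""
    completion = []
    for k, seq in enumerate(sequences):
        t, prev = 0.0, None
        for j in seq:
            t += (0.0 if prev is None else S[prev][j][k]) + p[j][k]
            prev = j
        completion.append(t)
    return max(completion, default=0.0)

# Toy instance: 4 jobs on 2 machines with random (hypothetical) times.
rng = np.random.default_rng(0)
p = rng.integers(1, 10, size=(4, 2))
S = rng.integers(1, 5, size=(4, 4, 2))
print(makespan([[0, 2], [1, 3]], p, S))
```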
The UPMSP is an NP-hard problem, and solving it is an extremely important requirement in practice [11]. Consequently, traditional exact algorithms can find the optimal solution only for instances with a small number of jobs; for large instances this becomes computationally intractable. Therefore, several methods have been proposed to solve UPMSPs with setup times [12], and they provide good results. Examples of these methods are the simulated annealing (SA) algorithm [13,14], the Tabu Search (TS) algorithm [15], and the firefly algorithm [16].
In this paper, an alternative method is proposed to solve UPMSPs, which is a hybrid between the SA algorithm and the Sine Cosine Algorithm (SCA). The SCA is used to improve the exploitation ability of the SA by serving as an additional local search step. The proposed algorithm, called SASCA, starts by generating a random integer solution that represents a solution of the UPMSP. This solution has a dimension equal to the number of jobs, and each of its values refers to the index of the machine on which the corresponding job must be performed. The next step of the proposed method is to select a new solution from the neighborhood of the current solution, compare the quality of the two solutions, and keep the better one as the current solution. However, to ensure the ability of the SA to avoid getting stuck in a local point, the operators of the SCA, which use the sine and cosine functions, are applied to improve the current solution. The previous steps are repeated until the stop conditions are met.
The main contributions of this paper can be summarized as follows:
  • A newly proposed method combines the SA and SCA so that the solutions are updated using the properties of both algorithms, which increases the convergence toward the optimal solution.
  • The proposed method aims at minimizing the makespan in solving the unrelated parallel machine scheduling problem (UPMSP) with sequence-dependent and machine-dependent setup times.
  • A comparison is provided between the proposed method and other meta-heuristics algorithms.
The rest of this paper is organized as follows: Section 2 reviews related works on recent UPMSP methods. Section 3 presents preliminaries on the Mixed Integer Programming formulation of the UPMSP, the Simulated Annealing algorithm, and the Sine Cosine Algorithm. In Section 4, the proposed method, based on the hybrid of the SA and SCA, is introduced to solve the UPMSP. Section 5 presents the results and discussion of the proposed method against the other algorithms. The conclusion and future works are presented in Section 6.

2. Literature Review

Several meta-heuristic algorithms have been used to solve UPMSPs in the literature [16]. For example, Logendran et al. [17] proposed six different TS-based algorithms to solve the UPMSP with sequence-dependent setups. Bozorgirad and Logendran [18] used the TS algorithm to solve the sequence-dependent group-scheduling problem on unrelated parallel machines, where the execution time of every job differs across machines. Another Tabu search method based on a hybrid concept was proposed in [19], which combined the properties of the variable neighborhood descent approach with the TS algorithm. This algorithm was used to solve the UPMSP with sequence-dependent setup times and unequal ready times, where the objective function was the weighted number of tardy jobs.
In addition, Eva and Rubén [20] provided a method based on an improved Genetic Algorithm (GA) to solve the same problem with sequence-dependent setup times. In this modified version of the GA, a fast local search and a local-search-enhanced crossover operator are used. Duygu et al. [21] proposed a hybrid GA with a local search method aimed at the same problem, with a modification that minimizes the relocation of random key numbers for genes. Also, a GA was proposed in [22] to minimize the total completion time and the number of tardy jobs for UPMSPs with sequence-dependent and machine-dependent setup times, due times, and ready times.
The Ant Colony Optimization (ACO) algorithm was used to solve unrelated PMSPs in [23], where it provided better results than the TS algorithm and a heuristic algorithm called the Partitioning Heuristic (PH) proposed in [24]. The authors of [23] extended this study in [25] and highlighted the differences between the two works: in [23], the performance of ACO was evaluated only on preliminary tests in which the ACO parameters were selected by trial and error, a small number of instances and limited problem structures were used, and the ACO results were compared only with PH and TS, whereas in [25] the results were also compared with the Metaheuristic for Randomized Priority Search (MetaRaPS) [26]. Moreover, another enhancement of ACO was introduced by the same authors to solve the same problem in [27], which tuned ACO parameters such as the pheromone. The authors concluded that ACO-II performed better than ACO-I, MetaRaPS, and Simulated Annealing (SA). Lin and Hsieh [28] used a modified version of the ACO algorithm to solve unrelated PMSPs with setup times and ready times to minimize the total weighted tardiness. They proposed a heuristic and an iterated hybrid metaheuristic (IHM) to solve the problem; according to the evaluation results, the IHM was better than the ACO and TS.
Sheremetov et al. [29] presented a two-stage GA to solve UPMSPs for the steam generators used in cyclic steam stimulation of oil wells. They modeled this petroleum problem as uniform parallel machine scheduling and addressed the makespan and job tardiness.
Nguyen et al. [2] proposed a mixed-integer linear programming scheme to address the scheduling problem of identical machines with an additional resource. Ezugwu [30] presented a solution method for non-pre-emptive UPMSP problems to minimize the makespan; three methods were proposed, namely SA, the Symbiotic Organisms Search (SOS) algorithm, and a hybrid of SOS with SA. As the author described, these algorithms outperform existing methods in the case of 120 jobs with 12 machines. In addition, a modified differential evolution algorithm was proposed to address the energy-consumption problem for UPMSPs [31]; the developed method characterized each job by determining speed vectors. In [15], the TS algorithm was applied to large-scale UPMSPs using a multiple-jump strategy. Bektur and Saraç [12] improved the performance of SA and TS by combining them with a mixed-integer linear programming scheme as an alternative UPMSP method, aiming to minimize the total weighted tardiness.
In the same context, the SA algorithm was used to solve the unrelated PMSP with machine-independent sequence-dependent setup times, where the total tardiness was the objective function, as in [32]. However, the SA suffers from some limitations, similar to other single-solution meta-heuristic algorithms: as the number of jobs increases, the number of solutions generated from the neighborhood grows enormously, so determining an efficient solution needs a large computation time, and there is a high probability that the SA gets stuck in a local point. All of these points motivated us to provide an alternative method to solve the UPMSP by improving the SA algorithm using the Sine Cosine Algorithm (SCA).
The SCA is a meta-heuristic algorithm proposed in [33] to solve global optimization problems; its solutions are updated using either the sine or the cosine function. The SCA has a small number of parameters, and its ability to find the optimal solution compares well with other metaheuristic (MH) algorithms; therefore, the SCA has been used in many fields. For example, Elaziz et al. [34] applied the SCA to solve the feature selection problem, whereas in [35] the SCA was used to select the relevant features to enhance the performance of classifying galaxy images. The authors of [36] efficiently applied the SCA to train feed-forward neural networks. Moreover, the SCA was used to improve data clustering by determining the cluster centers [37]. Ramanaiah and Reddy [38] solved the Unified Power Quality Conditioner (UPQC) problem using the SCA. Also, the SCA was applied to estimate the kernel parameters of Support Vector Regression (SVR) [39].

3. Preliminaries

3.1. Mixed Integer Programming Mathematical Model

The basic concepts of the Mixed Integer Programming (MIP) formulation for the UPMSP with sequence-dependent setup times are discussed in this section. Following [24,26], the MIP formulation is given as
$\min C_{max}$    (1)

Subject to

$\sum_{i=0, i \neq j}^{N} \sum_{k=1}^{M} x_{ijk} = 1, \quad j = 1, \ldots, N$    (2)

$\sum_{i=0, i \neq h}^{N} x_{ihk} - \sum_{j=0, j \neq h}^{N} x_{hjk} = 0, \quad h = 1, \ldots, N, \; k = 1, \ldots, M$    (3)

$C_j \geq C_i + \sum_{k=1}^{M} x_{ijk} \left( S_{ijk} + p_{jk} \right) + V \left( \sum_{k=1}^{M} x_{ijk} - 1 \right), \quad i = 0, \ldots, N, \; j = 1, \ldots, N, \; i \neq j$    (4)

$\sum_{j=1}^{N} x_{0jk} = 1, \quad k = 1, \ldots, M$    (5)

$C_j \leq C_{max}, \quad j = 1, \ldots, N$    (6)

$x_{ijk} \in \{0, 1\}, \quad i = 0, \ldots, N, \; j = 1, \ldots, N, \; k = 1, \ldots, M$    (7)

$C_0 = 0$    (8)

$C_j \geq 0, \quad j = 0, \ldots, N$    (9)
where $C_{max}$, $C_j$, $p_{jk}$, and $S_{ijk}$ represent the maximum completion time (makespan), the completion time of job j, the processing time of job j on machine k, and the sequence-dependent setup time to process job j after job i on machine k, respectively. The binary variable $x_{ijk}$ is 1 if job j is processed directly after job i on machine k and 0 otherwise; $x_{j0k}$ is 1 if job j is the last job processed on machine k and 0 otherwise; and $x_{0jk}$ is 1 if job j is the first job processed on machine k and 0 otherwise. $S_{0jk}$ represents the setup time to process job j first on machine k. Finally, N, M, and V represent the number of jobs, the number of machines, and a large positive number, respectively.
Equation (1) represents the objective function, which minimizes the makespan. To ensure that every job is assigned to exactly one machine and is scheduled only once, the constraint set in Equation (2) is used. Meanwhile, constraint (3) ensures that each job has exactly one preceding job and one succeeding job. The constraint set (4) is used to compute the completion times of the jobs on the machines and to guarantee that no job can both succeed and precede the same job. This is ensured through the sufficiently large positive number V: if job j is scheduled right after job i, then $\sum_{k=1}^{M} x_{ijk} = 1$, and therefore $V(\sum_{k=1}^{M} x_{ijk} - 1) = 0$ and $C_j \geq C_i + S_{ijk} + p_{jk}$. Otherwise, if job j is not scheduled right after job i, then $\sum_{k=1}^{M} x_{ijk} = 0$, and therefore $V(\sum_{k=1}^{M} x_{ijk} - 1) = -V$, which deactivates the constraint.
Constraint set (5) ensures that exactly one job is scheduled first on every machine. Furthermore, to ensure that $C_{max}$ is at least the completion time of every job, constraint (6) is used. Also, the decision variables x are binary over the whole search space, as stated in constraint (7). Constraints (8) and (9) state that the completion time of job 0 is set to zero and that completion times are non-negative, respectively.
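As a concrete, hedged illustration of how constraints (1)–(9) fit together, the sketch below builds the model with the PuLP library; the solver interface and the finite big-M value V are our assumptions, not the authors' implementation, and p and S must be indexable as p[j][k] and S[i][j][k] for i, j = 0, …, N and k = 1, …, M:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

def build_mip(N, M, p, S, V=10_000):
    """MIP of Section 3.1: jobs 1..N plus dummy job 0, machines 1..M."""
    prob = LpProblem("UPMSP", LpMinimize)
    jobs, jobs0, machines = range(1, N + 1), range(0, N + 1), range(1, M + 1)
    # (7) binary variables; C variables carry (9) via lowBound=0.
    x = {(i, j, k): LpVariable(f"x_{i}_{j}_{k}", cat=LpBinary)
         for i in jobs0 for j in jobs0 for k in machines if i != j}
    C = {j: LpVariable(f"C_{j}", lowBound=0) for j in jobs0}
    Cmax = LpVariable("Cmax", lowBound=0)
    prob += Cmax                                              # (1) objective
    for j in jobs:                                            # (2)
        prob += lpSum(x[i, j, k] for i in jobs0 if i != j
                      for k in machines) == 1
    for h in jobs:                                            # (3)
        for k in machines:
            prob += (lpSum(x[i, h, k] for i in jobs0 if i != h)
                     == lpSum(x[h, j, k] for j in jobs0 if j != h))
    for i in jobs0:                                           # (4)
        for j in jobs:
            if i == j:
                continue
            prob += C[j] >= C[i] \
                + lpSum(x[i, j, k] * (S[i][j][k] + p[j][k]) for k in machines) \
                + V * (lpSum(x[i, j, k] for k in machines) - 1)
    for k in machines:                                        # (5)
        prob += lpSum(x[0, j, k] for j in jobs) == 1
    for j in jobs:                                            # (6)
        prob += C[j] <= Cmax
    prob += C[0] == 0                                         # (8)
    return prob, x, C, Cmax
```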

3.2. Simulated Annealing Algorithm

In this section, the basic concepts of the Simulated Annealing (SA) algorithm are introduced. The SA algorithm is classified as a single-solution method that simulates the annealing process in metallurgy [40]. This process heats and then cools a metal, which increases the size of the crystals and generates uniform crystals with fewer defects. The SA algorithm starts by generating a random solution X and then selects another solution Y from the neighborhood of X. The fitness function of the two solutions is computed, and if $f(Y)$ is better than $f(X)$, the solution X is replaced by Y. Otherwise, X can still be replaced by Y with a probability that decreases as the difference between the fitness values of the two solutions (i.e., $\Delta E = f(Y) - f(X)$) increases. This probability is defined as:
$Prob = e^{-\Delta E / (kT)}$    (10)
where k is the Boltzmann constant and T is the current temperature. If $Prob$ is greater than a random number, then $X = Y$; otherwise, X is not changed. After that, the SA algorithm reduces the current temperature (T) using the following equation:
$T = \beta T$    (11)
where $\beta \in (0, 1)$ is the temperature reduction rate.
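The acceptance rule of Equation (10) and the cooling step of Equation (11) can be summarized in a few lines of Python; this is a generic SA sketch in which, as is customary, the Boltzmann constant k is folded into T and β is treated as a fixed cooling rate (the value 0.97 of Table 1):

```python
import math
import random

def sa_accept(f_x, f_y, T):
    """Metropolis acceptance rule of Equation (10), with k folded into T."""
    if f_y <= f_x:                       # improving move: always accept
        return True
    return random.random() < math.exp(-(f_y - f_x) / T)

# Geometric cooling of Equation (11) with a fixed rate beta = 0.97.
T, beta = 10.0, 0.97
for step in range(100):
    # ... generate a neighbor, evaluate it, call sa_accept(f_x, f_y, T) ...
    T = beta * T
```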

3.3. Sine Cosine Algorithm

The Sine Cosine Algorithm (SCA) was proposed by Mirjalili [33] as a population-based metaheuristic algorithm that uses the sine and cosine functions to search for the optimal solution. Similar to other MH algorithms, the SCA starts by generating a set of N solutions, called X, using the following equation:
$x_i = l_i + rand \times (u_i - l_i), \quad i = 1, \ldots, N$    (12)
where $u_i$ and $l_i$ represent the upper and lower boundaries of the search space, respectively. The next step in the SCA is to evaluate the quality of each solution $x_i \in X$ by computing its fitness function. After that, each solution is updated using either the sine or the cosine function, based on the random variable $r_1 \in [0, 1]$, as in the following equation:
$x_i^{t+1} = \begin{cases} x_i^t + r_2 \times \sin(r_3) \times \left| r_4 x_b^t - x_i^t \right|, & r_1 > 0.5 \\ x_i^t + r_2 \times \cos(r_3) \times \left| r_4 x_b^t - x_i^t \right|, & r_1 \leq 0.5 \end{cases}$    (13)
where $x_b$ represents the best solution obtained so far, and $r_3$ and $r_4$ are random numbers. The aim of $r_2$ is to determine the region of the updated solution, which may lie in the area between the current solution and the best solution or outside it. It is also used to balance exploration and exploitation by updating its value as follows [33]:
$r_2 = a - t \dfrac{a}{t_{max}}$    (14)
where a, t, and $t_{max}$ are a constant, the current iteration, and the maximum number of iterations, respectively. The aim of $r_3$ is to determine whether the current solution moves in the direction of the best solution $x_b$ or away from it, while the aim of $r_4$ is to give $x_b$ a random weight that stochastically emphasizes ($r_4 > 1$) or de-emphasizes ($r_4 < 1$) the effect of the destination in defining the distance.
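A minimal sketch of one SCA generation, following Equations (13) and (14); the ranges $r_3 \in [0, 2\pi]$ and $r_4 \in [0, 2]$ are the usual choices from [33] and are an assumption here:

```python
import numpy as np

def sca_step(X, x_best, t, t_max, a=2.0, rng=None):
    """One SCA generation per Equations (13) and (14).
    X: (N, dim) population; x_best: (dim,) best solution found so far."""
    rng = np.random.default_rng() if rng is None else rng
    r2 = a - t * (a / t_max)                  # Eq. (14): shrinks from a to 0
    for i in range(len(X)):
        r1 = rng.random()                     # chooses the sine/cosine branch
        r3 = rng.uniform(0.0, 2 * np.pi)      # movement direction (assumed range)
        r4 = rng.uniform(0.0, 2.0)            # destination weight (assumed range)
        step = r2 * np.abs(r4 * x_best - X[i])
        X[i] += (np.sin(r3) if r1 > 0.5 else np.cos(r3)) * step  # Eq. (13)
    return X
```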

4. Proposed Method

In this section, the proposed method to solve the unrelated parallel machine scheduling problem with setup times is introduced (as in Figure 1). This proposed method is called SASCA, where the SCA is used to enhance the local search ability of the SA.
In general, the proposed SASCA starts by generating a random integer solution that represents a solution of the UPMSP. Then the SA generates a new solution Y from the neighborhood $N(X)$ of the current solution X. The objective function (which aims to minimize the makespan) is computed for both solutions, and if $f(Y) < f(X)$ then $X = Y$. Otherwise, the new solution can still replace X when $Prob > \alpha$, where $\alpha \in [0, 1]$ is a random number and Prob is computed based on the difference between the objective values of the two solutions (X and Y). Thereafter, the SCA is used to enhance X through its update strategy; if the new solution is better than the old one, it replaces it. The temperature T is then decreased after performing $I_{iter}$ inner iterations. The previous steps are discussed in more detail in the following.

4.1. Initial Solution

The proposed SASCA starts by determining the initial value of each parameter, such as the current temperature $T = T_0$. It then generates a random integer solution X with dimension $N_J$ (the number of jobs), where each element takes a value from the interval $[1, N_m]$. For example, suppose there are 15 jobs and 3 machines; then the solution X can be represented as $[x_1, x_2, \ldots, x_{N_J}] = [1, 2, 3, 2, 3, 1, 3, 3, 1, 1, 2, 2, 3, 1, 2]$. This means that jobs 1, 6, 9, 10, and 14 will be performed on machine one; jobs 2, 4, 11, 12, and 15 on machine two; and jobs 3, 5, 7, 8, and 13 on machine three.
The next step in this stage is to compute the fitness function of the solution X using Equation (1) (which represents $C_{max}$) and to select the best solution.
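The snippet below reproduces the encoding just described for the 15-job, 3-machine example; the RNG seed and variable names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)         # seed chosen arbitrarily
N_J, N_m = 15, 3                        # 15 jobs, 3 machines
X = rng.integers(1, N_m + 1, size=N_J)  # X[j-1] = machine assigned to job j

# Decode the vector into the job list of each machine (jobs are 1-indexed).
assignment = {k: [j + 1 for j in range(N_J) if X[j] == k]
              for k in range(1, N_m + 1)}
print(X)
print(assignment)                       # e.g., {1: [...], 2: [...], 3: [...]}
```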

4.2. Updating Solution

The updating of the solution starts by selecting a solution Y from the neighborhood $N(X)$ of the solution X and computing its fitness function $f(Y)$. The difference between $f(X)$ and $f(Y)$ is computed (denoted by $\Delta E$). If $f(Y) \leq f(X)$, then the solution X is replaced by Y. Meanwhile, if this condition is not satisfied, Y can still replace X with the probability defined in Equation (10): if $Prob > \alpha$, then $X = Y$. Thereafter, the operators of the SCA algorithm are used to improve the exploitation ability of the SA algorithm as follows: first, the value of the parameter $r_2$ is updated using Equation (14), and the values of the parameters $r_1$, $r_3$, and $r_4$ are updated as well. Then, based on the value of $r_1$, the current solution X is updated using either the sine or the cosine function as in Equation (13). The next step is to update the best solution $X_b$ and to reduce the temperature as in Equation (11) after running $I_{iter}$ inner iterations since the previous decrease of T. The algorithm stops when the termination criteria are met.
The entire steps of the proposed method are illustrated in Algorithm 1.
Algorithm 1 The steps of the proposed method
1: Input: initial temperature $T_0$, population size N, solution dimension $N_J$, and total number of generations $t_{max}$.
2: Output: the best solution $x_b$.
3: Set the initial values of the N solutions with dimension $N_J$.
4: Evaluate the quality of each $X_i$ by computing its fitness value $F_i$, and update the number of fitness evaluations.
5: Find the best solution $X_b$.
6: Set $t = 1$.
7: repeat
8:  for $i = 1:N$ do
9:   $X_i^{New}$ = determine a neighbor solution of $X_i$.
10:   Compute the fitness value $f(X_i^{New})$ of $X_i^{New}$.
11:   if $f(X_i^{New}) < f(X_i)$ then
12:    $X_i = X_i^{New}$.
13:   else
14:    $\delta = f(X_i) - f(X_i^{New})$.
15:    if $\exp(\delta / T) \geq r_5$ then
16:     $X_i = X_i^{New}$.
17:    end if
18:   end if
19:   Update the temperature T using Equation (11).
20:  end for
21:  for $i = 1:N$ do
22:   Update the parameters $r_1$, $r_2$, $r_3$, and $r_4$.
23:   Update $X_i$ using Equation (13).
24:   Evaluate the quality of the updated $X_i$ by computing its fitness value $F_i$.
25:  end for
26:  Find the best solution $X_b$.
27:  Set $t = t + 1$.
28: until $t > t_{max}$
Here, $r_5 \in [0, 1]$ denotes a uniform random number.
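For readers who prefer code, the following compact Python transcription of Algorithm 1 is a sketch under two assumptions the paper leaves implicit: fitness is a user-supplied function returning the makespan of an integer assignment vector, and the continuous SCA update is rounded and clipped back to valid machine indices. Following the prose of Section 4, the SCA result replaces $X_i$ only when it improves the fitness:

```python
import numpy as np

def sasca(fitness, N, N_J, N_m, t_max, T0=10.0, beta=0.97, a=2.0, seed=0):
    """Sketch of Algorithm 1 (SASCA); fitness(x) -> makespan of x,
    where x[j] is the machine index (1..N_m) assigned to job j+1."""
    rng = np.random.default_rng(seed)
    X = rng.integers(1, N_m + 1, size=(N, N_J)).astype(float)
    F = np.array([fitness(x) for x in X])
    best, best_f = X[F.argmin()].copy(), F.min()
    T = T0
    for t in range(1, t_max + 1):
        for i in range(N):                           # SA phase (lines 8-20)
            Y = X[i].copy()
            Y[rng.integers(N_J)] = rng.integers(1, N_m + 1)  # neighbor move
            fY = fitness(Y)
            delta = F[i] - fY                        # line 14
            if fY < F[i] or np.exp(delta / T) >= rng.random():
                X[i], F[i] = Y, fY
            T = max(beta * T, 1e-12)                 # Eq. (11), floored
        r2 = a - t * (a / t_max)                     # SCA phase (lines 21-25)
        for i in range(N):
            r1 = rng.random()
            r3, r4 = rng.uniform(0, 2 * np.pi), rng.uniform(0, 2)
            Z = X[i] + (np.sin(r3) if r1 > 0.5 else np.cos(r3)) \
                * r2 * np.abs(r4 * best - X[i])
            Z = np.clip(np.rint(Z), 1, N_m)          # back to valid indices
            fZ = fitness(Z)
            if fZ < F[i]:                            # keep only improving moves
                X[i], F[i] = Z, fZ
        if F.min() < best_f:                         # line 26
            best, best_f = X[F.argmin()].copy(), F.min()
    return best, best_f
```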

5. Experiments and Results

In this section, the dataset description, the experimental settings, and the discussion of the results are presented. The experiments are divided into two parts. The first contains the results of the proposed algorithm and other metaheuristic algorithms, namely Grey Wolf Optimization (GWO), Particle Swarm Optimization (PSO), and the Genetic Algorithm (GA), in addition to the traditional SA. The second compares the results of the proposed SASCA with other state-of-the-art methods. Then, the average percent deviations are provided, followed by an analysis of the influence of the (β) parameter on the proposed SASCA.

5.1. Dataset Description

We conducted 30 tests for 4 problems; each problem has its own numbers of machines and jobs. The first problem has 2 machines with 11 job sizes (i.e., 6, 7, 8, 9, 10, 11, 40, 60, 80, 100, and 120 jobs). The second problem has 4 machines with 10 job sizes (i.e., 6, 7, 8, 9, 10, 11, 60, 80, 100, and 120 jobs). The third problem has 6 machines with 8, 9, 10, 11, 100, and 120 jobs, whereas the last problem has 8 machines with 10, 11, and 120 jobs. These job sizes were selected as in [24,25] to evaluate the proposed method on both small and large instances. In this manner, Equation (15) is applied to select the large job sizes. For instance, 100/8 = 12.5, so a size of 100 jobs is excluded for 8 machines while 120 is selected.
Select the job size if $(Jobs / Machines) \geq 15$    (15)
More information about the dataset used in this paper is available at [41].

5.2. Experiment Settings

The experiments were performed on Windows 10 with a Core 2 Duo CPU and 4 GB of RAM. Each job size, in all problems, was evaluated over 15 different problem instances, and the average value of $C_{max}$ was calculated. The proposed method used a stop condition of 25 iterations for the small problems and 10,000 iterations for the large problems to record the best obtained fitness value ($C_{max}$). For a fair comparison, the number of iterations was chosen to match the settings in the references. The parameter settings of the proposed method are listed in Table 1. In general, these parameters were selected based on experiments; besides, they showed good performance in our previous works, such as [42,43,44,45].

5.3. Comparison with Metaheuristic Methods

In these experiments, the performance of the SASCA is compared with four other MH methods, as given in Table 2 and Table 3. This comparison is performed using a set of different job sizes (i.e., 6, 7, 8, 9, 10, 11) and numbers of machines (2, 4, 6, 8). According to the average $C_{max}$ results, the proposed SASCA has a high ability to find the smallest $C_{max}$ across all tested numbers of machines and jobs. Meanwhile, the SA obtains better $C_{max}$ than the remaining methods at small numbers of jobs, especially at 6, 7, and 8 jobs. However, when the number of machines becomes 6 and the number of jobs becomes 8, the GWO gives better results than the SA. Comparing the performance of the four MH methods (i.e., GWO, PSO, SA, and GA) at 8 machines with 10 and 11 jobs shows that the GA and GWO, respectively, provided smaller results than the other two methods (i.e., SA and PSO).
Moreover, by analyzing the computational time results, it can be observed that the SA is the fastest algorithm over all tested problems except for the case of 2 machines and 6 jobs, where the PSO has the smallest CPU time(s). In addition, it can be noticed that the SASCA requires less CPU time than the remaining methods.

5.4. Comparison with the State-of-the-Art Methods

In this section, we compare the performance of the SASCA with other methods, namely Tabu (T9) and Tabu (T8) [24], Ant Colony Optimization (ACO) [25], Partitioning Heuristic (PH) [25], Tabu Search (TS) [24], and Meta-RaPS (MRPS) [26]. These experiments are performed on two datasets (i.e., small and large), as given in the following subsections.

5.4.1. Small Problems

Table 4 illustrates the results of the SASCA and other methods; the values of $C_{max}$ and the computation time are listed in this table. The SASCA is compared with the SA, Tabu (T9), and Tabu (T8). The results of Tabu (T9) and Tabu (T8) are taken from [24] because that study used the same problems (i.e., the same numbers of machines and jobs).
From this table, we can see that the proposed method (SASCA) outperforms the other methods in all problems in terms of the $C_{max}$ value, followed by Tabu (T9), Tabu (T8), and SA. In terms of computation time, the proposed method ranked second after the SA, followed by Tabu (T8) and Tabu (T9). The SASCA was close to the SA but was outperformed by it in computational time, as expected, since the SCA operators in general consume additional time on top of the SA.

5.4.2. Statistical Test for the Small Problems

The performance of the SASCA is evaluated using Wilcoxon's rank-sum test to check whether there is a significant difference between the SASCA and the other methods on the small problems [46,47,48]. In addition, the Friedman test is applied to rank these methods. The results are given in Table 5 and Table 6. It can be seen from the Wilcoxon test that the p-value is less than 0.05 for all methods except Tabu (T9) in terms of $C_{max}$, which indicates a significant difference between the proposed method and those methods. In addition, Table 6 shows that the SASCA has the smallest average rank in terms of $C_{max}$ and achieves the second rank in CPU time(s). Therefore, it can be concluded that the proposed method outperforms the other methods in the case of the small datasets.

5.4.3. Large Problems

Table 7 displays the results of the proposed method for the large problems versus the other methods. The calculated values of $C_{max}$ and the standard deviation (STD) for the SASCA are listed in this table along with the results obtained from the state-of-the-art methods, namely Ant Colony Optimization (ACO) [25], Partitioning Heuristic (PH) [25], Tabu Search (TS) [24], and Meta-RaPS (MRPS) [26]. In addition, the lower bound (LB) is listed in the last column as a reference value.
From this table, we can conclude that, in terms of $C_{max}$, the proposed method outperformed the other methods in 7 out of 12 problems, and its results are closer to the reference values. In terms of STD, the proposed algorithm performed better in 6 out of 12 problems, followed by MRPS and ACO, respectively. These results are illustrated in Figure 2 to show the variation of $C_{max}$ among these algorithms; the values in this figure are normalized by the following equation:
$Normalized\ value = \dfrac{C_{max}^{method} - C_{max}^{reference}}{C_{max}^{reference}}$

5.4.4. Statistical Test for the Large Problems

In this section, the two statistical tests (i.e., Wilcoxon's rank-sum and the Friedman test) are used to further analyze the results of the proposed SASCA on the large problems. Table 8 shows that there is no significant difference between the SASCA and the other methods in terms of $C_{max}$. Meanwhile, there are significant differences between the SASCA and the TS and PH methods in terms of CPU time(s). A similar observation follows from Table 9, where the SASCA and ACO have the same average rank, followed by MRPS, TS, and PH, respectively, in terms of $C_{max}$. Moreover, in terms of CPU time(s), the MRPS occupies the first rank, followed by the SASCA in the second rank, the ACO in the third, and then the TS and PH, respectively.

5.5. Average Percent Deviations

The average percent deviation (apd) values for the small and large problems are provided in Table 10 and Table 11 to assess the superiority of the proposed method against the other methods. The apd of $C_{max}^{method}$ is recorded for each method, where apd is calculated as follows:
$apd = \dfrac{C_{max}^{method} - C_{max}^{SASCA}}{C_{max}^{SASCA}}$
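For example, reading this formula in code with numbers taken from Table 4 (M = 2, J = 6) reproduces the corresponding entry of Table 10:

```python
def apd(cmax_method, cmax_sasca):
    """Average percent deviation of a method's C_max from SASCA's."""
    return (cmax_method - cmax_sasca) / cmax_sasca

print(round(apd(362.60, 357.33), 3))  # SA on M = 2, J = 6 -> 0.015
```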
Table 10 shows that the SASCA outperformed all algorithms for all numbers of machines and jobs. Table 11 illustrates that the SASCA outperformed the PH and TS in all large problems and surpassed MRPS in 10 out of 12 problems, while the SASCA works better than ACO in 7 out of 12 problems. In general, the SASCA shows a good ability to handle both small and large problems.

5.6. Parameters Sensitivity

5.6.1. Influence of the (β) Value on the SASCA

In this subsection, the influence of the (β) parameter on the performance of the SASCA is evaluated. In this test, two machines are used with five job sizes (i.e., 40, 60, 80, 100, and 120). Table 12 shows the $C_{max}$ and STD values. It can be observed that the best value of β is 0.95, which has the smallest $C_{max}$ value, followed by β = 0.5. Meanwhile, in the case of β = 0.5, the proposed SASCA is more stable than with the other two values. In addition, β = 0.95 is more stable than β = 0.1.

5.6.2. Influence of the Parameters Setting in the Algorithms

In this section, we study the influence of the parameter settings on the performance of the MH methods. The values of the parameters for each algorithm are given in Table 13, with the same population size and number of iterations as in the previous experiments. Moreover, the number of machines is two, and the number of jobs varies from 6 to 11. The comparison results are given in Table 14 and Figure 3. From Table 14, it can be noticed that the SASCA has the smallest $C_{max}$ over all tested problems except when the number of jobs is 10, where the PSO is the best algorithm. Figure 3 depicts the comparison between the average $C_{max}$ under the parameter settings of Table 1 and the current ones (i.e., Table 13). It can be noticed that the performance of the MH methods based on the values in Table 1 is better than their performance based on Table 13.
From the previous analysis, the high performance of the SASCA method can be observed; however, there are some limitations. For example, the computational time of the SASCA needs further improvement, since it updates the solutions using the SA operators followed by the operators of the SCA. Besides, the diversity of the solutions needs to be enhanced, which can be achieved using disruption operators.

6. Conclusions

Recently, unrelated parallel machine scheduling problems (UPMSPs) have received more attention due to their wide applications in various domains. To solve UPMSPs, the Simulated Annealing (SA) algorithm provides suitable results compared to other meta-heuristic (MH) methods, but its performance still requires improvement. Therefore, in this paper, an alternative method was proposed for determining the optimal solution of UPMSPs by minimizing the makespan value. The proposed method, called SASCA, combined the SA algorithm with the Sine-Cosine Algorithm (SCA). SASCA works in sequential order: in the first stage, the optimization process starts by using the SA to evaluate and improve the problem solution; the output solution is then fed to the SCA to continue the optimization process, and the final solution is evaluated by the objective function. The performance of the proposed method was compared with several methods, including ACO, MRPS, TS, and PH, in terms of makespan values and standard deviation. In general, SASCA has the ability to solve small and large unrelated parallel machine scheduling problems. In the future, the proposed method will be evaluated on different kinds of problems, such as image segmentation, task scheduling in cloud computing, and other optimization problems.

Author Contributions

Conceptualization, H.J., D.L., and M.A.A.A.-q.; methodology, H.J., M.A.E., and A.A.E.; software, H.J., M.A.E., and A.A.E.; validation, H.J., M.A.A.A.-q., and O.F.; formal analysis, M.A.E. and A.A.E.; investigation, D.L. and A.A.E.; writing—original draft, H.J.; writing—review and editing, H.J., M.A.E., A.A.E., and O.F.; supervision, D.L.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shim, S.O.; Park, K. Technology for production scheduling of jobs for open innovation and sustainability with fixed processing property on parallel machines. Sustainability 2016, 8, 904. [Google Scholar] [CrossRef]
  2. Nguyen, N.Q.; Yalaoui, F.; Amodeo, L.; Chehade, H.; Toggenburger, P. Total completion time minimization for machine scheduling problem under time windows constraints with jobs’ linear processing rate function. Comput. Oper. Res. 2018, 90, 110–124. [Google Scholar] [CrossRef]
  3. Gafarov, E.; Werner, F. Two-Machine Job-Shop Scheduling with Equal Processing Times on Each Machine. Mathematics 2019, 7, 301. [Google Scholar] [CrossRef]
  4. Expósito-Izquierdo, C.; Angel-Bello, F.; Melián-Batista, B.; Alvarez, A.; Báez, S. A metaheuristic algorithm and simulation to study the effect of learning or tiredness on sequence-dependent setup times in a parallel machine scheduling problem. Expert Syst. Appl. 2019, 117, 62–74. [Google Scholar] [CrossRef]
  5. Hsieh, J.C.; Chang, P.C.; Hsu, L.C. Scheduling of Drilling Operations in Printed Circuit Board Factory. Comput. Ind. Eng. 2003, 44, 461–473. [Google Scholar] [CrossRef]
  6. Bilyk, A.; Mönch, L. A Variable Neighborhood Search Approach for Planning and Scheduling of Jobs on Unrelated Parallel Machines. J. Intell. Manuf. 2012, 23, 1621–1635. [Google Scholar] [CrossRef]
  7. Silva, C.; Magalhaes, J.M. Heuristic Lot Size Scheduling on Unrelated Parallel Machines with Applications in the Textile Industry. Comput. Ind. Eng. 2006, 50, 76–89. [Google Scholar] [CrossRef]
  8. Kim, D.W.; Na, D.G.; Chen, F.F. Unrelated Parallel Machine Scheduling with Setup times and a Total Weighted Tardiness Objective. Robot. Comput.-Integr. Manuf. 2003, 19, 179–181. [Google Scholar] [CrossRef]
  9. Fanjul-Peyro, L.; Ruiz, R. Iterated greedy local search methods for unrelated parallel machine scheduling. Eur. J. Oper. Res. 2010, 207, 55–69. [Google Scholar] [CrossRef]
  10. Pinedo, M.L. Scheduling: Theory, Algorithms, and Systems; Springer: Berlin, Germany, 2016. [Google Scholar]
  11. Yalaoui, F.; Chu, C. An Efficient Heuristic Approach for Parallel Machine Scheduling with Job Splitting and Sequence-dependent Setup Times. IIE Trans. 2003, 35, 183–190. [Google Scholar] [CrossRef]
  12. Bektur, G.; Saraç, T. A mathematical model and heuristic algorithms for an unrelated parallel machine scheduling problem with sequence-dependent setup times, machine eligibility restrictions and a common server. Comput. Oper. Res. 2019, 103, 46–63. [Google Scholar] [CrossRef]
  13. Hamzadayi, A.; Yildiz, G. Event driven strategy based complete rescheduling approaches for dynamic m identical parallel machines scheduling problem with a common server. Comput. Ind. Eng. 2016, 91, 66–84. [Google Scholar] [CrossRef]
  14. Hamzadayi, A.; Yildiz, G. Hybrid strategy based complete rescheduling approaches for dynamic m identical parallel machines scheduling problem with a common server. Simul. Model. Pract. Theory 2016, 63, 104–132. [Google Scholar] [CrossRef]
  15. Wang, H.; Alidaee, B. Effective heuristic for large-scale unrelated parallel machines scheduling problems. Omega 2019, 83, 261–274. [Google Scholar] [CrossRef]
  16. Ezugwu, A.E.; Akutsah, F. An Improved Firefly Algorithm for the Unrelated Parallel Machines Scheduling Problem With Sequence-Dependent Setup Times. IEEE Access 2018, 6, 54459–54478. [Google Scholar] [CrossRef]
  17. Logendran, R.; McDonellb, B.; Smuckera, B. Scheduling unrelated parallel machines with sequence-dependent setups. Comput. Oper. Res. 2007, 34, 3420–3438. [Google Scholar] [CrossRef]
  18. Bozorgirad, M.A.; Logendran, R. Sequence-dependent group scheduling problem on unrelated-parallel machines. Expert Syst. Appl. 2012, 39, 9021–9030. [Google Scholar] [CrossRef]
  19. Chen, C.L. Iterated hybrid metaheuristic algorithms for unrelated parallel machines problem with unequal ready times and sequence-dependent setup times. Int. J. Adv. Manuf. Technol. 2012, 60, 693–705. [Google Scholar] [CrossRef]
  20. Eva, V.; Rubén, R. A genetic algorithm for the unrelated parallel machine scheduling problem with sequence dependent setup times. Eur. J. Oper. Res. 2011, 211, 612–622. [Google Scholar]
  21. Duygu, Y.E.; Ozmutlu, H.C.; Seda, O. Genetic algorithm with local search for the unrelated parallel machine scheduling problem with sequence-dependent set-up times. Int. J. Prod. Res. 2014, 52, 5841–5856. [Google Scholar]
  22. Tavakkoli-Moghaddam, R.; Taheri, F.; Bazzazi, M.; Izadi, M.; Sassani, F. Design of a genetic algorithm for bi-objective unrelated parallel machines scheduling with sequence-dependent setup times and precedence constraints. Comput. Oper. Res. 2009, 36, 3224–3230. [Google Scholar] [CrossRef]
  23. Arnaout, J.P.; Musa, R.; Rabadi, G. Ant colony optimization algorithm to parallel machine scheduling problem with setups. In Proceedings of the 2008 IEEE International Conference on Automation Science and Engineering, Arlington, VA, USA, 23–26 August 2008; pp. 578–582. [Google Scholar]
  24. Helal, M.; Rabadi, G.; Al-Salem, A. A tabu search algorithm to minimize the makespan for the unrelated parallel machines scheduling problem with setup times. Int. J. Oper. Res. 2006, 3, 182–192. [Google Scholar]
  25. Arnaout, J.P.; Rabadi, G.; Musa, R. A two-stage ant colony optimization algorithm to minimize the makespan on unrelated parallel machines with sequence-dependent setup times. J. Intell. Manuf. 2010, 21, 693–701. [Google Scholar] [CrossRef]
  26. Rabadi, G.; Moraga, R.J.; Al-Salem, A. Heuristics for the unrelated parallel machine scheduling problem with setup times. J. Intell. Manuf. 2006, 17, 85–97. [Google Scholar] [CrossRef]
  27. Arnaout, J.P.; Musa, R.; Rabadi, G. A two-stage ant colony optimization algorithm to minimize the makespan on unrelated parallel machines—Part II: Enhancements and experimentations. J. Intell. Manuf. 2014, 25, 43–53. [Google Scholar] [CrossRef]
  28. Lin, Y.K.; Hsieh, F.U. Unrelated Parallel Machine Scheduling with Setup times and Ready times. Int. J. Prod. Res. 2014, 52, 1200–1214. [Google Scholar] [CrossRef]
  29. Sheremetov, L.; Martínez-Muñoz, J.; Chi-Chim, M. Two-stage genetic algorithm for parallel machines scheduling problem: Cyclic steam stimulation of high viscosity oil reservoirs. Appl. Soft Comput. 2018, 64, 317–330. [Google Scholar] [CrossRef]
  30. Ezugwu, A.E. Enhanced symbiotic organisms search algorithm for unrelated parallel machines manufacturing scheduling with setup times. Knowl.-Based Syst. 2019, 172, 15–32. [Google Scholar] [CrossRef]
  31. Wu, X.; Che, A. A memetic differential evolution algorithm for energy-efficient parallel machine scheduling. Omega 2019, 82, 155–165. [Google Scholar] [CrossRef]
  32. Kim, D.W.; Kim, K.H.; Jang, W.; Chen, F.F. Unrelated parallel machine scheduling with setup times using simulated annealing. Robot. Comput. Integr. Manuf. 2002, 18, 223–231. [Google Scholar] [CrossRef]
  33. Mirjalili, S. SCA: a sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  34. Elaziz, M.E.A.; Ewees, A.A.; Oliva, D.; Duan, P.; Xiong, S. A Hybrid Method of Sine Cosine Algorithm and Differential Evolution for Feature Selection. In International Conference on Neural Information Processing; Springer: Cham, Switzerland, 2017; pp. 145–155. [Google Scholar]
  35. Abd Elaziz, M.; Selim, I.M.; Xiong, S. Automatic Detection of Galaxy Type from Datasets of Galaxies Image Based on Image Retrieval Approach. Sci. Rep. 2017, 7, 4463. [Google Scholar] [CrossRef] [PubMed]
  36. Sahlol, A.T.; Ewees, A.A.; Hemdan, A.M.; Hassanien, A.E. Training feedforward neural networks using Sine-Cosine algorithm to improve the prediction of liver enzymes on fish farmed on nano-selenite. In Proceedings of the 2016 12th International Computer Engineering Conference (ICENCO), Cairo, Egypt, 28–29 December 2016; pp. 35–40. [Google Scholar]
  37. Kumar, V.; Kumar, D. Data clustering using sine cosine algorithm: Data clustering using SCA. In Handbook of Research on Machine Learning Innovations and Trends; IGI Global: Hershey, PA, USA, 2017; pp. 715–726. [Google Scholar]
  38. Ramanaiah, M.L.; Reddy, M.D. Sine Cosine Algorithm for Loss Reduction in Distribution System with Unified Power Quality Conditioner. i-Manag. J. Power Syst. Eng. 2017, 5, 10. [Google Scholar]
  39. Li, S.; Fang, H.; Liu, X. Parameter optimization of support vector regression based on sine cosine algorithm. Expert Syst. Appl. 2018, 91, 63–77. [Google Scholar] [CrossRef]
  40. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  41. WebSite, D. Scheduling Research Dataset. 2018. Available online: http://www.schedulingresearch.com (accessed on 1 April 2018).
  42. Ewees, A.A.; Elaziz, M.A.; Houssein, E.H. Improved grasshopper optimization algorithm using opposition-based learning. Expert Syst. Appl. 2018, 112, 156–172. [Google Scholar] [CrossRef]
  43. Ibrahim, R.A.; Elaziz, M.A.; Ewees, A.A.; Selim, I.M.; Lu, S. Galaxy images classification using hybrid brain storm optimization with moth flame optimization. J. Astron. Telesc. Instrum. Syst. 2018, 4, 038001. [Google Scholar] [CrossRef]
  44. Al-qaness, M.A.; Abd Elaziz, M.; Ewees, A.A.; Cui, X. A Modified Adaptive Neuro-Fuzzy Inference System Using Multi-Verse Optimizer Algorithm for Oil Consumption Forecasting. Electronics 2019, 8, 1071. [Google Scholar] [CrossRef]
  45. Ewees, A.A.; El Aziz, M.A.; Hassanien, A.E. Chaotic multi-verse optimizer-based feature selection. Neural Comput. Appl. 2019, 31, 991–1006. [Google Scholar] [CrossRef]
  46. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  47. Črepinšek, M.; Liu, S.H.; Mernik, M. Replication and comparison of computational experiments in applied evolutionary computing: Common pitfalls and guidelines to avoid them. Appl. Soft Comput. 2014, 19, 161–170. [Google Scholar] [CrossRef]
  48. Črepinšek, M.; Liu, S.H.; Mernik, L. A note on teaching–learning-based optimization algorithm. Inf. Sci. 2012, 212, 79–93. [Google Scholar] [CrossRef]
Figure 1. The entire phases of the proposed method.
Figure 2. Normalized $C_{max}$ to show the performances of all methods.
Figure 3. Average $C_{max}$ of the 6 jobs on 2 machines, showing the difference between the original parameters (Case 1) and the new parameters (Case 2) for all methods.
Table 1. The parameter settings of the proposed method.

Algorithm   Parameter Settings
SASCA       initial temperature ($T_0$) = 10, temperature reduction rate (β) = 0.97, parameter (a) = 2, parameter ($r_1$) ∈ [2, 0]
SA          initial temperature = 10, temperature reduction rate = 0.97, local step = 1
GWO         parameter (a) ∈ [2, 0]
PSO         inertia weight (w) = 1, inertia weight damping ratio (wDamp) = 0.99, personal learning coefficient ($c_1$) = 1, global learning coefficient ($c_2$) = 2
GA          crossover probability ($p_c$) = 0.8, extra range factor for crossover (γ) = 0.2, mutation percentage ($p_m$) = 0.3, mutation probability (mu) = 0.02, selection pressure (sp) = 8
Table 2. Average of $C_{max}$ values of the small problems for each algorithm (best results are in boldface).

M   J    SASCA    SA       GWO      PSO      GA
2   6    357.33   362.60   378.33   372.56   375.38
2   7    453.00   470.20   504.80   505.80   572.67
2   8    512.00   513.67   530.88   531.25   530.88
2   9    566.67   624.47   598.33   592.20   617.25
2   10   639.33   692.20   681.67   677.56   664.86
2   11   716.00   788.40   744.50   752.25   743.75
4   6    224.67   247.27   276.80   280.30   276.80
4   7    212.33   266.60   256.00   292.17   291.00
4   8    251.00   327.13   344.40   344.40   333.00
4   9    342.00   376.73   364.00   367.33   360.00
4   10   340.33   398.47   358.00   367.75   367.75
4   11   368.33   434.80   413.00   442.00   450.00
6   8    214.67   244.67   221.25   223.00   219.33
6   9    242.33   267.60   242.90   307.50   334.00
6   10   228.67   295.20   312.00   332.00   344.67
6   11   258.00   319.67   337.00   341.00   345.00
8   10   221.67   252.00   231.33   228.67   227.00
8   11   221.33   327.67   288.00   296.00   323.00
Table 3. Average computational time (s) of the small problems for each algorithm (best results are in boldface).

M   J    SASCA    SA       GWO      PSO      GA
2   6    0.0490   0.0545   0.0377   0.0337   0.0457
2   7    0.0541   0.0387   0.0560   0.0554   0.0597
2   8    0.0582   0.0378   0.0583   0.0584   0.0645
2   9    0.0586   0.0474   0.0594   0.0591   0.0768
2   10   0.0629   0.0445   0.0647   0.0600   0.0794
2   11   0.0590   0.0330   0.0732   0.0708   0.0914
4   6    0.0581   0.0287   0.0515   0.0468   0.0665
4   7    0.0601   0.0237   0.0592   0.0570   0.0834
4   8    0.0585   0.0295   0.0637   0.0616   0.0813
4   9    0.0613   0.0300   0.0718   0.0666   0.0883
4   10   0.0637   0.0308   0.0857   0.0805   0.1075
4   11   0.0646   0.0299   0.0892   0.0854   0.1066
6   8    0.0620   0.0283   0.0803   0.0740   0.1021
6   9    0.0652   0.0287   0.0834   0.0817   0.1052
6   10   0.0653   0.0322   0.0991   0.0899   0.1121
6   11   0.0822   0.0312   0.1134   0.1018   0.1314
8   10   0.0702   0.0412   0.1138   0.1045   0.1318
8   11   0.0723   0.0668   0.1189   0.1102   0.1324
8110.07230.06680.11890.11020.1324
Table 4. Average of $C_{max}$ values and computation times of the small problems for each algorithm (best results are in boldface).

           SASCA             SA                 Tabu (T9) [24]    Tabu (T8) [24]
M   Jobs   C_max    Time     C_max    Time      C_max    Time     C_max    Time
2   6      357.33   0.0490   362.60   0.05450   397.20   0.440    395.27   0.150
2   7      453.00   0.0541   470.20   0.03869   502.00   0.210    494.73   0.200
2   8      512.00   0.0582   513.67   0.03778   522.07   0.260    521.20   0.080
2   9      566.67   0.0586   624.47   0.04742   614.53   0.310    607.33   0.290
2   10     639.33   0.0629   692.20   0.04455   649.60   0.370    645.33   0.340
2   11     716.00   0.0590   788.40   0.03299   724.47   0.440    722.53   0.440
4   6      224.67   0.0581   247.27   0.02869   249.07   0.010    251.27   0.020
4   7      212.33   0.0601   266.60   0.02369   259.53   0.024    264.27   0.260
4   8      251.00   0.0585   327.13   0.02947   268.93   0.070    270.47   0.034
4   9      342.00   0.0613   376.73   0.03004   347.73   0.930    346.47   0.860
4   10     340.33   0.0637   398.47   0.03082   363.27   0.950    360.33   0.980
4   11     368.33   0.0646   434.80   0.02994   375.80   0.960    376.30   0.990
6   8      214.67   0.0620   244.67   0.02832   235.47   0.034    240.27   0.290
6   9      242.33   0.0652   267.60   0.02866   244.33   0.060    249.27   0.050
6   10     228.67   0.0653   295.20   0.03221   254.67   0.060    259.13   0.080
6   11     258.00   0.0822   319.67   0.03118   265.87   0.040    273.80   0.040
8   10     221.67   0.0702   252.00   0.04123   230.07   0.090    232.00   0.080
8   11     221.33   0.0723   327.67   0.06682   232.87   0.110    235.20   0.120
Table 5. Results of the Wilcoxon test (p-values) for the small problems (best results are in boldface).

Measure   SA      Tabu (T9)   Tabu (T8)
C_max     0.013   0.060       0.047
Time      0.000   0.041       0.006
Table 6. Results of the Friedman test (average ranks) for the small problems (best results are in boldface).

          SASCA   SA     Tabu (T9)   Tabu (T8)
C_max     1       3.56   2.67        2.78
Time      2.5     2.06   2.72        2.72
Table 7. Average of $C_{max}$ values of the large problems for each algorithm.

           SASCA            ACO [25]         MRPS [26]        TS [24]          PH [25]          LB
M   Jobs   C_max     STD    C_max     STD    C_max     STD    C_max     STD    C_max     STD    C_max     STD
2   40     2398.40   39.45  2404.33   36.88  2422      35.44  2486.53   39.54  2521.47   57.28  2344.7    36.31
2   60     3574.70   22.15  3575.2    33.41  3617.93   35.61  3736.47   55.61  3733.33   50.17  3510.17   36.03
2   80     4737.00   11.19  4741.8    60.28  4803.27   57.62  4942.27   70.36  4926.93   74.11  4664.83   58.43
2   100    5986.73   81.02  5897.6    60.68  5988.6    58.05  6180.87   73.49  6128.07   63.96  5819.23   59.80
2   120    7234.87   56.93  7082.6    64.64  7196.47   71.98  7447.6    80.89  7336.53   73.79  7008.03   69.27
4   60     1715.27   28.06  1736.6    21.5   1752.4    17.71  1785.53   25.19  1817.87   26.79  1650.73   15.79
4   80     2294.53   22.33  2307.8    18.68  2334.07   15.57  2370.13   22.26  2396.67   25.97  2201.48   15.94
4   100    2855.00   15.42  2849.47   31.88  2867.27   20.53  2934.13   35.24  2959.93   46.61  2740.7    20.46
4   120    3432.50   16.36  3404.53   25.66  3432.93   17.45  3515.13   33.15  3537.8    36.92  3291.2    16.71
6   100    1890.57   9.57   1891.07   11.38  1892.67   6.68   1940.6    14.98  1973.47   21.21  1783.03   6.12
6   120    2295.80   17.33  2249.2    15.69  2252.6    14.58  2313.07   25.93  2353.67   38.37  2137.6    11.17
8   120    1684.14   7.10   1685.4    13.77  1706.8    8.93   1739.73   15.01  1778.13   48.31  1580.23   7.43
Table 8. Results of the Wilcoxon test (p-values) for the large problems (best results are in boldface).

        ACO     MRPS    TS      PH
C_max   0.924   0.689   0.420   0.420
Time    0.140   0.687   0.021   0.001
Table 9. Results of the Friedman test (average ranks) for the large problems (best results are in boldface).

        SASCA   ACO    MRPS   TS     PH
C_max   1.58    1.58   2.83   4.33   4.67
Time    2.33    2.42   1.67   4.00   4.58
Table 10. The apd values for the small problems between the hybrid method of the Simulated Annealing algorithm and Sine Cosine Algorithm (SASCA) and the other methods.

M   Jobs   SA      Tabu (T9)   Tabu (T8)
2   6      0.015   0.112       0.106
2   7      0.038   0.108       0.092
2   8      0.003   0.020       0.018
2   9      0.102   0.084       0.072
2   10     0.083   0.016       0.009
2   11     0.101   0.012       0.009
4   6      0.101   0.109       0.118
4   7      0.256   0.222       0.245
4   8      0.303   0.071       0.078
4   9      0.102   0.017       0.013
4   10     0.171   0.067       0.059
4   11     0.180   0.020       0.022
6   8      0.140   0.097       0.119
6   9      0.104   0.008       0.029
6   10     0.291   0.114       0.133
6   11     0.239   0.031       0.061
8   10     0.137   0.038       0.047
8   11     0.480   0.052       0.063
Table 11. The apd values for the large problems between SASCA and the other methods.

M   Jobs   ACO      MRPS     TS      PH
2   40     0.002    0.010    0.037   0.051
2   60     0.000    0.012    0.045   0.044
2   80     0.001    0.014    0.043   0.040
2   100    −0.015   0.000    0.032   0.024
2   120    −0.021   −0.005   0.029   0.014
4   60     0.012    0.022    0.041   0.060
4   80     0.006    0.017    0.033   0.045
4   100    −0.002   0.004    0.028   0.037
4   120    −0.008   0.000    0.024   0.031
6   100    0.000    0.001    0.026   0.044
6   120    −0.020   −0.019   0.008   0.025
8   120    0.001    0.013    0.033   0.056
Table 12. Influence of β on the performance of the SASCA.

      β = 0.95          β = 0.5           β = 0.1
Job   C_max     STD     C_max     STD     C_max     STD
40    2398.40   39.45   2416.15   31.46   2401.17   30.14
60    3574.70   22.15   3598.00   19.75   3616.00   30.61
80    4737.00   11.19   4857.13   19.71   4823.00   27.00
100   5986.73   81.02   6054.25   75.78   6083.50   74.72
120   7234.87   56.93   7337.50   49.50   7350.60   54.76
Table 13. New parameter settings for testing the sensitivity of the parameters.

Algorithm   Parameter Settings
SASCA       $T_0$ = 5, β = 0.97, a = 3, $r_1$ ∈ [3, 0]
SA          $T_0$ = 5, β = 0.97, local step = 1
GWO         a ∈ [3, 0]
PSO         w = 0.9, wDamp = 0.2, $c_1$ = 2, $c_2$ = 2
GA          $p_c$ = 0.6, mu = 0.05, γ = 0.4, $p_m$ = 0.5, sp = 5
Table 14. Average of $C_{max}$ of the small problems on 2 machines for testing the sensitivity of the parameters (best results are in boldface).

J    SASCA    SA       GWO      PSO      GA
6    372.33   391.12   390.25   388.75   389.00
7    457.50   488.11   496.60   479.40   519.67
8    512.80   536.36   535.80   534.33   539.25
9    568.00   623.50   605.75   605.00   629.60
10   696.50   700.02   689.40   681.00   688.40
11   729.67   775.45   749.75   751.50   749.75
