Hybrid metaheuristics with evolutionary algorithms specializing in intensification and diversification: Overview and progress report☆
Introduction
In recent years, a new family of search and optimization algorithms has arisen, based on extending basic heuristic methods by embedding them in an iterative framework that augments their exploration capabilities. This group of advanced approximate algorithms has received the name metaheuristics (MHs) [28]; an overview of existing methods is found in [11]. MHs have proven highly useful for approximately solving difficult optimization problems in practice because they may obtain good solutions in a reduced amount of time. Simulated annealing, tabu search, evolutionary algorithms (EAs), ant colony optimization, estimation of distribution algorithms, scatter search, path relinking, greedy randomized adaptive search procedure (GRASP), multi-start and iterated local search (ILS), guided local search, and variable neighborhood search (VNS) are, among others, often listed as examples of classical MHs. They have individual historical backgrounds and follow different paradigms and philosophies.
Over recent years, a large number of search algorithms have been reported that do not purely follow the concepts of a single classical MH, but instead attempt to obtain the best from a set of MHs (and even other kinds of optimization methods) that work together and complement each other, producing a profitable synergy from their combination. These approaches are commonly referred to as hybrid MHs [83], [93].
Intensification and diversification (I&D) are two major issues when designing a global search method [11]. Diversification generally refers to the ability to visit many different regions of the search space, whereas intensification refers to the ability to obtain high-quality solutions within those regions. A search algorithm should strike a tactical balance between these two sometimes-conflicting goals. Most classical MHs have several components for intensification and diversification. Blum and Roli [11] define an I&D component as any algorithmic or functional component that has an intensification and/or diversification effect on the search process. Examples are genetic operators, perturbations of probability distributions, the use of tabu lists, or changes in the objective function. Thus, I&D components are operators, actions, or strategies of MHs.
In general, providing an adequate balance between the I&D components of an MH is a very complicated task [94]. In fact, although most classical MHs attempt to achieve this objective in their own way, some of them show a clear trend toward intensification and others toward diversification, i.e., they show a certain specialization in intensification or diversification. An alternative to forcing a single MH to take on both responsibilities is to design hybrid MHs with search algorithms specializing in I&D, which combine algorithms of this type so that they compensate for each other and pool their complementary behaviors (the exploration and exploitation of the search space).
EAs [7], [8], [19] are stochastic search methods that mimic natural biological evolution. EAs rely on the concept of a population of individuals (representing search points in the space of potential solutions to a given problem), which undergo probabilistic operators such as mutation, selection, and (sometimes) recombination to evolve toward increasingly better fitness values. A variety of slightly different EAs have been proposed that basically fall into four categories, developed independently of each other: evolution strategies [9], genetic algorithms (GAs) [29], genetic programming [55], and evolutionary programming [23]. EAs have recently received increased interest because they offer practical advantages to researchers facing difficult optimization problems (they may locate high-performance regions of vast and complex search spaces). Other advantages include the simplicity of the approach, their flexibility, and their robust response to changing circumstances.
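The population-based loop described above can be sketched in a few lines. This is a minimal illustrative sketch, not the algorithm of any specific EA variant cited here: the binary encoding, binary tournament selection, one-point crossover, and bit-flip mutation are all assumed choices for the example.

```python
import random

def evolve(fitness, n_bits=20, pop_size=30, generations=100, p_mut=0.05):
    """Minimal generational EA on bit strings (illustrative sketch)."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Selection: binary tournament on fitness.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = select(), select()
            # Recombination: one-point crossover of the two parents.
            cut = random.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]
            # Mutation: independent bit flips with probability p_mut.
            child = [1 - g if random.random() < p_mut else g for g in child]
            offspring.append(child)
        pop = offspring
    return max(pop, key=fitness)

# Usage: maximize the number of ones (the classical OneMax problem).
best = evolve(sum)
```

Each generation replaces the whole population, so selection pressure (here, tournaments) is what drives intensification, while mutation keeps injecting diversity.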
Precisely, the flexibility offered by the EA paradigm allows specialized models to be obtained with the aim of providing intensification and/or diversification, i.e., EAs specializing in I&D. On the one hand, beneficial diversification properties are inherent to EAs, because they manage populations of solutions, providing a natural and intrinsic way of exploring the search space. Moreover, many techniques have been presented in the literature that favor diversity in the EA population with the aim of consolidating the diversification associated with these algorithms [3], [13], [21], [29], [54], [62]. Specializing EAs in diversification is thus readily viable. On the other hand, some components of EAs may be specifically designed, and their strategy parameters tuned, in order to provide an effective refinement. In fact, several EAs specializing in intensification have been presented with this aim [49], [61], [73].
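As an example of the diversity-favoring techniques mentioned above, a replacement strategy can make a new solution compete against the most similar member of a sample rather than the worst member of the population, so that near-duplicates displace each other and diverse individuals survive. The sketch below is a hypothetical crowding-style illustration, not the specific strategy of any cited work; `hamming` and the `window` parameter are assumptions of the example.

```python
import random

def hamming(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def crowding_replace(pop, child, fitness, window=5):
    """Diversity-preserving steady-state replacement (crowding-style sketch):
    the child competes only against the most similar individual in a
    random sample, so it tends to replace near-duplicates of itself."""
    sample = random.sample(range(len(pop)), window)
    closest = min(sample, key=lambda i: hamming(pop[i], child))
    if fitness(child) >= fitness(pop[closest]):
        pop[closest] = child
```

Compared with plain replace-worst, this keeps distinct niches of the population alive longer, which is exactly the diversification effect the text attributes to such techniques.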
The outstanding role currently played by EAs, along with the great interest raised by their hybridizations with other algorithms [33], [82], endorses the choice of these specialist approaches as suitable ingredients for building hybrid MHs with search algorithms specializing in I&D. In fact, the design of hybrid MHs with EAs specializing in I&D is an innovative line of research with a promising future as a way of obtaining search algorithms that may achieve accurate and reliable solutions to hard real-world problems.
The goal of this article is twofold. Firstly, we attempt to paint a more complete picture of hybrid MHs with EAs specializing in I&D than before. To do so, we overview existing design principles for these algorithms and align them to arrive at an insightful line of research, citing the existing literature whenever relevant. From the literature reviewed, we have identified three lines of research in designing these hybrids. The first two, collaborative and integrative hybrids, derive from a well-known classification for hybrid MHs and, at present, have a consolidated background of knowledge. The third one, less explored, concerns a strategy by which EAs specializing in I&D may help classical MHs to improve their behavior. In particular, it involves replacing some I&D components of MHs by customized EAs (evolutionary I&D components) that perform the same work more effectively. Along this line, our second objective is to present an instance of this novel approach in order to complement the overview and provide additional results and insights. In particular, we propose an evolutionary perturbation technique for ILS, which is a micro-EA that effectively explores the neighborhood of particular solutions.
The remainder of this article is organized as follows. In Section 2, we give an overview of the existing research on hybrid MHs with EAs specializing in I&D. In Section 3, we propose an ILS model with an evolutionary perturbation technique that allows us to illustrate the way new instances may be built by embedding evolutionary I&D components in MHs; in addition, the benefits of the proposal in comparison with other ILS algorithms proposed in the literature for binary optimization problems are shown experimentally. Finally, in Section 4, we provide the main conclusions of this work and examine future research lines. In Appendix A, we describe the features of the test suite used for the experiments; in Appendix B, we explain the statistical test used for the experimental study; and in Appendix C, we enclose a table with the results of the algorithms.
Section snippets
Review of hybrid MHs with EAs specializing in I&D
Nowadays, different authors have emphasized the need for hybridization of EAs with other optimization algorithms, machine learning techniques, MHs, etc. [12], [33], [82], [90]. Some of the possible reasons for hybridization are [33], [90]: (1) to improve the performance of EAs, (2) to improve the quality of the solutions obtained by EAs, and (3) to incorporate the EA as part of a larger system. This paper concerns mainly the last point and, in particular, those hybrid MHs that include a…
ILS with evolutionary perturbation technique
ILS [45], [63] belongs to the group of MHs that extend classical LS methods by adding diversification capabilities. The essential idea of ILS is to perform a biased, randomized walk in the space of locally optimal solutions instead of sampling the space of all possible candidate solutions. This walk is built by iteratively applying first a perturbation to a locally optimal solution, then applying an LS algorithm, and finally using an acceptance criterion which determines to which locally…
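The perturb / local-search / accept cycle just described can be sketched as follows. This is a generic illustrative skeleton, not the algorithm proposed in this paper: the first-improvement bit-flip local search, the random k-bit-flip perturbation, and the better-or-equal acceptance criterion are all assumed placeholder choices. The perturbation step marked below is precisely where an evolutionary perturbation technique (a micro-EA) could be plugged in.

```python
import random

def local_search(s, fitness):
    """First-improvement bit-flip local search to a local optimum."""
    improved = True
    while improved:
        improved = False
        for i in range(len(s)):
            t = s[:i] + [1 - s[i]] + s[i + 1:]
            if fitness(t) > fitness(s):
                s, improved = t, True
                break
    return s

def ils(fitness, n_bits=20, iters=50, strength=3):
    """Iterated local search skeleton (illustrative sketch)."""
    s = local_search([random.randint(0, 1) for _ in range(n_bits)], fitness)
    for _ in range(iters):
        # Perturbation: flip `strength` random bits of the current local
        # optimum (the step an evolutionary technique could replace).
        p = s[:]
        for i in random.sample(range(n_bits), strength):
            p[i] = 1 - p[i]
        p = local_search(p, fitness)
        # Acceptance criterion: continue from the better local optimum.
        if fitness(p) >= fitness(s):
            s = p
    return s
```

The perturbation strength governs the I&D balance: too weak and the LS falls back into the same optimum, too strong and the walk degenerates into random restarts.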
Conclusions
In this paper, we provided an overview of the different ways EAs specializing in I&D may be combined with other MHs, and even with other kinds of search algorithms, to obtain effective hybrid MHs. We have identified three lines of research in this topic: collaborative hybrids, integrative hybrids, and MHs with evolutionary I&D components. With the aim of complementing the review, we have taken an important next step along the less investigated approach by contributing an ILS algorithm with an…
References (106)
- et al., Parallel heterogeneous genetic algorithms for continuous optimization, Parallel Comput (2004)
- et al., Genetic and Nelder–Mead algorithms hybridized for a more accurate global optimization of continuous multiminima functions, Eur J Oper Res (2003)
- et al., An iterated local search heuristic for the logistics network design problem with single assignment, Int J Prod Econ (2008)
- The CHC adaptive search algorithm: how to have safe search when engaging in non-traditional genetic recombination
- et al., Relative building block fitness and the building block hypothesis
- et al., Global and local real-coded genetic algorithms based on parent-centric crossover operators, Eur J Oper Res (2008)
- et al., Replacement strategies to maintain useful diversity in steady-state genetic algorithms, Inf Sci (2008)
- et al., A GA-based fuzzy modeling approach for generating TSK models, Fuzzy Sets Syst (2002)
- et al., Towards hybrid evolutionary algorithms, Int Trans Oper Res (1999)
- Iterated local search for the quadratic assignment problem, Eur J Oper Res (2006)
- An iterative local-search framework for solving constraint satisfaction problem, Appl Soft Comput
- Parallelism and evolutionary algorithms, IEEE Trans Evol Comput
- The exploration/exploitation tradeoff in dynamic cellular genetic algorithms, IEEE Trans Evol Comput
- Performance evaluation of an advanced local search evolutionary algorithm
- A restart CMA evolution strategy with increasing population size
- Evolutionary algorithms in theory and practice
- Handbook of evolutionary computation
- Evolution strategies: a comprehensive introduction, Nat Comput
- ACO applied to group shop scheduling: a case study on intensification and diversification
- Metaheuristics in combinatorial optimization: overview and conceptual comparison, ACM Comput Surv
- Evolutionary algorithms + domain knowledge = real-world evolutionary computation, IEEE Trans Evol Comput
- Effects of diversity control in single-objective and multi-objective genetic algorithms, J Heuristics
- A modified PSO structure resulting in high exploration ability with convergence guaranteed, IEEE Trans Syst Man Cybern B
- Image registration with iterated local search, J Heuristics
- Ant colony optimization
- Introduction to evolutionary computing
- A taxonomy of cooperative search algorithms
- A study on non-random mating and varying population size in genetic algorithms using a royal road function
- Evolutionary computation: toward a new philosophy of machine intelligence
- Application of genetic recombination to genetic local search in TSP, Int J Inf Technol
- Local search based on genetic algorithms
- Genetic algorithms in search, optimization, and machine learning
- Messy genetic algorithms: motivation, analysis, and first results, Complex Syst
- Massive multimodality, deception, and genetic algorithms
- Adaptive niching via coevolutionary sharing
- Hybrid evolutionary algorithms: methodologies, architectures, and reviews
- Variable neighborhood search
- Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES), Evol Comput
- Completely derandomized self-adaptation in evolution strategies, Evol Comput
- Evaluating the CMA evolution strategy on multimodal test functions
- Editorial introduction special issue on memetic algorithms, Evol Comput
- Gradual distributed real-coded genetic algorithms, IEEE Trans Evol Comput
- A particle swarm optimization method with enhanced global search ability for design optimizations of electromagnetic devices, IEEE Trans Magn
- Stochastic local search: foundations and applications
☆ This work was supported by Project TIN2005-08386-C05-01.