Hybrid metaheuristics with evolutionary algorithms specializing in intensification and diversification: Overview and progress report

https://doi.org/10.1016/j.cor.2009.02.010

Abstract

A promising way to obtain hybrid metaheuristics is to combine several search algorithms that are strongly specialized in intensification and/or diversification. The flexible architecture of evolutionary algorithms allows specialized models to be obtained with the aim of providing intensification and/or diversification. The outstanding role played by evolutionary algorithms at present justifies the choice of their specialist approaches as suitable ingredients for building hybrid metaheuristics.

This paper focuses on hybrid metaheuristics with evolutionary algorithms specializing in intensification and diversification. We first give an overview of the existing research on this topic, describing several instances grouped into three categories that were identified after reviewing the specialized literature. Then, with the aim of complementing the overview and providing additional results and insights on this line of research, we present an instance that consists of an iterated local search algorithm with an evolutionary perturbation technique. The benefits of the proposal in comparison with other iterated local search algorithms proposed in the literature for binary optimization problems are experimentally shown. The good performance of the reviewed approaches and the suitable results shown by our instance support an important conclusion: the use of evolutionary algorithms specializing in intensification and diversification to build hybrid metaheuristics is a promising line of research for obtaining effective search algorithms.

Introduction

In the last few years, a new family of search and optimization algorithms has arisen, based on extending basic heuristic methods by embedding them in an iterative framework that augments their exploration capabilities. This group of advanced approximate algorithms has received the name metaheuristics (MHs) [28], and an overview of various existing methods is found in [11]. MHs have proven highly useful for approximately solving difficult optimization problems in practice because they may obtain good solutions in a reduced amount of time. Simulated annealing, tabu search, evolutionary algorithms (EAs), ant colony optimization, estimation of distribution algorithms, scatter search, path relinking, greedy randomized adaptive search procedure (GRASP), multi-start and iterated local search (ILS), guided local search, and variable neighborhood search (VNS) are, among others, often listed as examples of classical MHs. They have individual historical backgrounds and follow different paradigms and philosophies.

In recent years, a large number of search algorithms have been reported that do not purely follow the concepts of one single classical MH, but instead attempt to obtain the best from a set of MHs (and even other kinds of optimization methods) that perform together and complement each other to produce a profitable synergy. These approaches are commonly referred to as hybrid MHs [83], [93].

Intensification and diversification (I&D) are two major issues when designing a global search method [11]. Diversification generally refers to the ability to visit many and different regions of the search space, whereas intensification refers to the ability to obtain high quality solutions within those regions. A search algorithm should strike a tactical balance between these two sometimes-conflicting goals. Most classical MHs have several components for intensification and diversification. Blum and Roli [11] define an I&D component as any algorithmic or functional component that has intensification and/or diversification effect on the search process. Examples are genetic operators, perturbations of probability distributions, the use of tabu lists, or changes in the objective function. Thus, I&D components are operators, actions, or strategies of MHs.

In general, providing an adequate balance between the I&D components of an MH is a very complicated task [94]. In fact, although most classical MHs attempt to achieve this objective in their own way, some of them show a clear trend toward intensification and others toward diversification, i.e., they show a certain specialization in intensification or diversification. An alternative to forcing a single MH to take responsibility for both I&D is to design hybrid MHs from search algorithms specializing in I&D, combining algorithms of this type so that they compensate for each other and bring together their complementary behaviors (the exploration and exploitation of the search space).

EAs [7], [8], [19] are stochastic search methods inspired by natural biological evolution. EAs rely on the concept of a population of individuals (representing search points in the space of potential solutions to a given problem), which undergo probabilistic operators such as mutation, selection, and (sometimes) recombination to evolve toward increasingly better fitness values. A variety of slightly different EAs has been proposed; they basically fall into four categories, which were developed independently of each other: evolution strategies [9], genetic algorithms (GAs) [29], genetic programming [55], and evolutionary programming [23]. EAs have recently received increased interest because they offer practical advantages to researchers facing difficult optimization problems (they may locate high-performance regions of vast and complex search spaces). Other advantages include the simplicity of the approach, their flexibility, and their robust response to changing circumstances.
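The evolutionary loop described above can be sketched as follows. This is a minimal, generic GA-style illustration (toy OneMax fitness, illustrative parameter values), not any particular EA from the four categories listed:

```python
import random

def evolutionary_algorithm(fitness, n_bits=20, pop_size=30, generations=100,
                           p_cross=0.9, p_mut=0.05, seed=0):
    """Minimal generational EA: tournament selection, one-point crossover,
    and bit-flip mutation, maximizing `fitness` over fixed-length bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament(k=2):
        # Binary tournament: the fitter of k random individuals survives.
        return max(rng.sample(pop, k), key=fitness)

    for _ in range(generations):
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < p_cross:               # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1 = p1[:cut] + p2[cut:]
            # Bit-flip mutation: each bit flips with probability p_mut.
            offspring.append([b ^ (rng.random() < p_mut) for b in p1])
        pop = offspring
    return max(pop, key=fitness)

# Example: OneMax (count of ones) as the fitness function.
best = evolutionary_algorithm(sum)
```

A generational replacement scheme without elitism is assumed here purely for brevity; real EAs vary widely in their selection and replacement strategies.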

Indeed, the flexibility offered by the EA paradigm allows specialized models to be obtained with the aim of providing intensification and/or diversification, i.e., EAs specializing in I&D (EAI&D). On the one hand, beneficial diversification properties are inherent to EAs, because they manage populations of solutions, providing a natural and intrinsic way of exploring the search space. Moreover, many techniques have been presented in the literature that favor diversity in the EA population with the aim of consolidating the diversification associated with these algorithms [3], [13], [21], [29], [54], [62]. Thus, specialization of EAs in diversification (EAD) is entirely viable. On the other hand, some components of EAs may be specifically designed, and their strategy parameters tuned, to provide an effective refinement. In fact, several EAs specializing in intensification (EAI) have been presented with this aim [49], [61], [73].

The outstanding role played by EAs at present, along with the great interest raised by their hybridizations with other algorithms [33], [82], endorses the choice of their specialist approaches as suitable ingredients for building hybrid MHs with search algorithms specializing in I&D. In fact, the design of hybrid MHs with EAI&D (HMH-EAI&D) is an innovative line of research with a promising future as a way of obtaining search algorithms that may achieve accurate and reliable solutions to hard real-world problems.

The goal of this article is twofold. Firstly, we attempt to paint a more complete picture of HMH-EAI&D than is currently available. To do so, we survey existing design principles for these algorithms and align them to arrive at an insightful line of research, citing the existing literature whenever relevant. From the literature reviewed, we have identified three lines of research in designing HMH-EAI&D. The first two, collaborative HMH-EAI&D and integrative HMH-EAI&D, derive from a well-known classification of hybrid MHs and, at present, have a consolidated background of knowledge. The third one, less explored, concerns a strategy by which EAI&D may help classical MHs to improve their behavior. In particular, it involves replacing some I&D components of MHs with customized EAI&D (evolutionary I&D components) that perform the same task more effectively. In this line, our second objective is to present an instance of this novel approach in order to complement the overview and provide additional results and insights on the study of HMH-EAI&D. In particular, we propose an evolutionary perturbation technique for ILS, which is a micro-EA that effectively explores the neighborhood of particular solutions.
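To make the general idea of an evolutionary perturbation concrete, the following is a hypothetical sketch of a micro-EA that briefly explores the neighborhood of an incumbent binary solution and returns its best individual as the perturbed point. The population size, generation count, and operators here are illustrative assumptions for exposition, not the specific design proposed in this paper:

```python
import random

def micro_ea_perturbation(incumbent, fitness, rng, pop_size=5,
                          generations=4, p_mut=0.1):
    """Hypothetical evolutionary perturbation sketch: seed a tiny population
    with mutated copies of the incumbent, evolve it for a few generations,
    and return the best individual found (assumed parameter values)."""
    n = len(incumbent)
    # Seed a tiny population with randomized copies of the incumbent.
    pop = [[b ^ (rng.random() < p_mut) for b in incumbent]
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # Binary tournament selection for both parents.
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            # Uniform crossover followed by light bit-flip mutation.
            child = [rng.choice(pair) for pair in zip(p1, p2)]
            child = [b ^ (rng.random() < 1.0 / n) for b in child]
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Usage: perturb an all-zeros incumbent, with OneMax as a stand-in fitness.
rng = random.Random(0)
perturbed = micro_ea_perturbation([0] * 16, sum, rng)
```

Because the population stays close to the incumbent, such a micro-EA biases the perturbation toward promising nearby solutions rather than purely random moves.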

The remainder of this article is organized as follows. In Section 2, we give an overview of the existing research on HMH-EAI&D. In Section 3, we propose an ILS model with an evolutionary perturbation technique that allows us to illustrate the way new HMH-EAI&D instances may be built by embedding evolutionary I&D components in MHs. In addition, the benefits of the proposal in comparison with other ILS algorithms proposed in the literature for binary optimization problems are experimentally shown. Finally, in Section 4, we provide the main conclusions of this work and examine future research lines. In Appendix A, we describe the features of the test suite used for the experiments; in Appendix B, we explain the statistical test used for the experimental study; and in Appendix C, we enclose a table with the results of the algorithms.

Section snippets

Review of HMH-EAI&D

Different authors have emphasized the need for hybridization of EAs with other optimization algorithms, machine learning techniques, MHs, etc. [12], [33], [82], [90]. Some of the possible reasons for hybridization are [33], [90]: (1) to improve the performance of EAs, (2) to improve the quality of the solutions obtained by EAs, and (3) to incorporate the EA as part of a larger system. This paper mainly concerns the last point and, in particular, those hybrid MHs that include a

ILS with evolutionary perturbation technique

ILS [45], [63] belongs to the group of MHs that extend classical LS methods by adding diversification capabilities. The essential idea of ILS is to perform a biased, randomized walk in the space of locally optimal solutions instead of sampling the space of all possible candidate solutions. This walk is built by iteratively applying first a perturbation to a locally optimal solution, then applying an LS algorithm, and finally using an acceptance criterion which determines to which locally
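The ILS template just outlined can be sketched generically; `local_search`, `perturb`, and the acceptance criterion are the pluggable components. This is the textbook scheme rather than the specific instance proposed in this paper, demonstrated on a toy multimodal function with a better-or-equal acceptance rule assumed for illustration:

```python
import math
import random

def iterated_local_search(f, x0, local_search, perturb, iters=200, seed=1):
    """Generic ILS: walk over locally optimal solutions by perturbing the
    incumbent, re-running local search, and applying an acceptance criterion
    (here: accept the new local optimum if it is at least as good)."""
    rng = random.Random(seed)
    current = local_search(x0)
    best = current
    for _ in range(iters):
        candidate = local_search(perturb(current, rng))
        if f(candidate) >= f(current):      # acceptance criterion (maximize)
            current = candidate
            if f(current) > f(best):
                best = current
    return best

# Demo: maximize the multimodal f(n) = n*sin(n) over the integers 0..100.
f = lambda n: n * math.sin(n)

def hill_climb(n):
    """±1 hill climb to the nearest local optimum within [0, 100]."""
    while True:
        nxt = max((m for m in (n - 1, n + 1) if 0 <= m <= 100), key=f)
        if f(nxt) <= f(n):
            return n
        n = nxt

# Perturbation: random jump of up to ±10, clamped to the domain.
perturb = lambda n, rng: min(100, max(0, n + rng.randint(-10, 10)))

best = iterated_local_search(f, 0, hill_climb, perturb)
```

The perturbation strength is the key design knob: too small and the walk stays trapped in one basin of attraction, too large and the method degenerates into random restarts.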

Conclusions

In this paper, we provided an overview of the different ways EAI&D may be combined with other MHs, and even with other kinds of search algorithms, to obtain effective hybrid MHs. We have identified three lines of research in this topic: collaborative HMH-EAI&D, integrative HMH-EAI&D, and MHs with evolutionary I&D components. With the aim of complementing the review, we have taken an important next step along the less investigated HMH-EAI&D approach by contributing an ILS algorithm with an

References (106)

  • M. Tounsi et al. An iterative local-search framework for solving constraint satisfaction problem. Appl Soft Comput (2008)
  • E. Alba et al. Parallelism and evolutionary algorithms. IEEE Trans Evol Comput (2002)
  • E. Alba et al. The exploration/exploitation tradeoff in dynamic cellular genetic algorithms. IEEE Trans Evol Comput (2005)
  • Alba E, editor. Parallel metaheuristics: a new class of algorithms. New York: Wiley;...
  • A. Auger et al. Performance evaluation of an advanced local search evolutionary algorithm
  • A. Auger et al. A restart CMA evolution strategy with increasing population size
  • T. Bäck. Evolutionary algorithms in theory and practice (1996)
  • T. Bäck et al. Handbook of evolutionary computation (1997)
  • H.-G. Beyer et al. Evolution strategies: a comprehensive introduction. Nat Comput (2002)
  • C. Blum. ACO applied to group shop scheduling: a case study on intensification and diversification
  • C. Blum et al. Metaheuristics in combinatorial optimization: overview and conceptual comparison. ACM Comput Surv (2003)
  • P.P. Bonissone et al. Evolutionary algorithms + domain knowledge = real-world evolutionary computation. IEEE Trans Evol Comput (2006)
  • N. Chaiyaratana et al. Effects of diversity control in single-objective and multi-objective genetic algorithms. J Heuristics (2007)
  • X. Chen et al. A modified PSO structure resulting in high exploration ability with convergence guaranteed. IEEE Trans Syst Man Cybern B (2007)
  • O. Cordón et al. Image registration with iterated local search. J Heuristics (2006)
  • M. Dorigo et al. Ant colony optimization (2004)
  • A.E. Eiben et al. Introduction to evolutionary computing (2003)
  • M. El-Abd et al. A taxonomy of cooperative search algorithms
  • C. Fernandes et al. A study on non-random mating and varying population size in genetic algorithms using a royal road function
  • D.B. Fogel. Evolutionary computation: toward a new philosophy of machine intelligence (1995)
  • P. Gang et al. Application of genetic recombination to genetic local search in TSP. Int J Inf Technol (2007)
  • C. García-Martínez et al. Local search based on genetic algorithms
  • Glover F, Kochenberger G, editors. Handbook of metaheuristics. Massachusetts: Kluwer Academic Publishers,...
  • D.E. Goldberg. Genetic algorithms in search, optimization, and machine learning (1989)
  • D.E. Goldberg et al. Messy genetic algorithms: motivation, analysis, and first results. Complex Syst (1989)
  • D.E. Goldberg et al. Massive multimodality, deception, and genetic algorithms
  • D.E. Goldberg et al. Adaptive niching via coevolutionary sharing
  • C. Grosan et al. Hybrid evolutionary algorithms: methodologies, architectures, and reviews
  • P. Hansen et al. Variable neighborhood search
  • N. Hansen et al. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evol Comput (2003)
  • N. Hansen et al. Completely derandomized self-adaptation in evolution strategies. Evol Comput (2001)
  • Hansen N. Compilation of results on the CEC benchmark function set. Technical Report, Institute of Computational...
  • N. Hansen et al. Evaluating the CMA evolution strategy on multimodal test functions
  • Hart WE. Adaptive global optimization with local search. Ph.D. thesis, University of California, San Diego, California;...
  • Hart WE, Krasnogor N, Smith JE, editors. Recent advances in memetic algorithms. Studies in fuzziness and soft...
  • W.E. Hart et al. Editorial introduction: special issue on memetic algorithms. Evol Comput (2004)
  • Heitkötter J. SAC-94 suite of 0/1-multiple-knapsack problem instances, 2001. Available at...
  • F. Herrera et al. Gradual distributed real-coded genetic algorithms. IEEE Trans Evol Comput (2000)
  • S.L. Ho et al. A particle swarm optimization method with enhanced global search ability for design optimizations of electromagnetic devices. IEEE Trans Magn (2006)
  • H.H. Hoos et al. Stochastic local search: foundations and applications (2004)

    This work was supported by Project TIN2005-08386-C05-01.
