Information Sciences

Volume 237, 10 July 2013, Pages 82-117

A survey on optimization metaheuristics

https://doi.org/10.1016/j.ins.2013.02.041

Abstract

Metaheuristics are widely recognized as efficient approaches for many hard optimization problems. This paper provides a survey of some of the main metaheuristics. It outlines the components and concepts that are used in various metaheuristics in order to analyze their similarities and differences. The classification adopted in this paper differentiates between single-solution based metaheuristics and population-based metaheuristics. The literature survey is accompanied by references for further details, including applications. Recent trends are also briefly discussed.

Introduction

We roughly define hard optimization problems as problems that cannot be solved to optimality, or to any guaranteed bound, by any exact (deterministic) method within a “reasonable” time limit. These problems can be divided into several categories depending on whether they are continuous or discrete, constrained or unconstrained, mono- or multi-objective, static or dynamic. In order to find satisfactory solutions for these problems, metaheuristics can be used. A metaheuristic is an algorithm designed to approximately solve a wide range of hard optimization problems without having to be deeply adapted to each of them. Indeed, the Greek prefix “meta”, present in the name, indicates that these algorithms are “higher level” heuristics, in contrast with problem-specific heuristics. Metaheuristics are generally applied to problems for which no satisfactory problem-specific algorithm is available. They are widely used to solve complex problems in industry and services, in areas ranging from finance to production management and engineering.

Almost all metaheuristics share the following characteristics: they are nature-inspired (based on some principles from physics, biology or ethology); they make use of stochastic components (involving random variables); they do not use the gradient or Hessian matrix of the objective function; they have several parameters that need to be fitted to the problem at hand.
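
To make these shared traits concrete, the following Python sketch implements a minimal derivative-free stochastic search. It is only an illustrative template (the function and parameter names are ours), but it exhibits the typical ingredients: random perturbations instead of gradient information, and a handful of parameters (step size, iteration budget) that must be tuned to the problem at hand.

    import random

    def stochastic_search(objective, x0, step_size=0.1, max_iters=1000, seed=None):
        # Minimal derivative-free stochastic search: perturb the current point
        # at random and keep the perturbation whenever it improves the objective.
        rng = random.Random(seed)
        best_x, best_f = list(x0), objective(x0)
        for _ in range(max_iters):
            # Random Gaussian perturbation: no gradient or Hessian is ever computed.
            candidate = [xi + rng.gauss(0.0, step_size) for xi in best_x]
            f = objective(candidate)
            if f < best_f:  # keep only improving moves
                best_x, best_f = candidate, f
        return best_x, best_f

    # Example: minimize the sphere function in five dimensions.
    sphere = lambda x: sum(xi * xi for xi in x)
    x_best, f_best = stochastic_search(sphere, [1.0] * 5, step_size=0.2, max_iters=5000, seed=42)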

In the last thirty years, a great deal of interest has been devoted to metaheuristics. We can point out some of the steps that have marked their history. One pioneering contribution is the proposal of the simulated annealing method by Kirkpatrick et al. in 1983 [150]. In 1986, tabu search was proposed by Glover [104], and the artificial immune system was proposed by Farmer et al. [83]. In 1988, Koza registered his first patent on genetic programming, later published in 1992 [154]. In 1989, Goldberg published a well-known book on genetic algorithms [110]. In 1992, Dorigo completed his PhD thesis, in which he describes his innovative work on ant colony optimization [69]. In 1993, the first algorithm based on bee colonies was proposed by Walker et al. [277]. Another significant advance was the development of particle swarm optimization by Kennedy and Eberhart in 1995 [145]. The same year, Hansen and Ostermeier proposed CMA-ES [121]. In 1996, Mühlenbein and Paaß proposed the estimation of distribution algorithm [190]. In 1997, Storn and Price proposed differential evolution [253]. In 2002, Passino introduced an optimization algorithm based on bacterial foraging [200]. Then, Simon proposed a biogeography-based optimization algorithm in 2008 [247].

The considerable development of metaheuristics can be explained by the significant increase in the processing power of computers, and by the development of massively parallel architectures. These hardware improvements make the typically high CPU-time cost of metaheuristics much less of an obstacle.

A metaheuristic will be successful on a given optimization problem if it can provide a balance between exploration (diversification) and exploitation (intensification). Exploration is needed to identify the parts of the search space that contain high-quality solutions, whereas exploitation intensifies the search in the promising areas identified by the accumulated search experience. The main differences between existing metaheuristics concern the particular way in which they try to achieve this balance [28]. Many classification criteria may be applied to metaheuristics: for instance, they can be classified according to the search path they follow, their use of memory, the kind of neighborhood exploration they perform, or the number of current solutions carried from one iteration to the next. For a more formal classification of metaheuristics we refer the reader to [28], [258]. The classification that differentiates between single-solution based metaheuristics and population-based metaheuristics is often taken to be a fundamental distinction in the literature. Roughly speaking, basic single-solution based metaheuristics are more exploitation oriented, whereas basic population-based metaheuristics are more exploration oriented.
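
A simple way to see how this balance can be controlled in practice is a Metropolis-style acceptance rule of the kind used in simulated annealing. The short Python sketch below is only an illustration (the parameter names are of our choosing): it always accepts improving moves and accepts worsening moves with a probability governed by a temperature parameter, so that high temperatures favour exploration and low temperatures favour exploitation.

    import math
    import random

    def accept(delta, temperature, rng=random):
        # Metropolis-style acceptance: always accept improving moves (delta <= 0),
        # accept worsening moves with probability exp(-delta / temperature).
        if delta <= 0:
            return True          # exploitation: keep any improvement
        if temperature <= 0:
            return False         # frozen system: no more exploratory moves
        return rng.random() < math.exp(-delta / temperature)  # occasional exploration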

The purpose of this paper is to present a global overview of the main metaheuristics and their principles. The survey is organized as follows. Section 2 briefly presents the class of single-solution based metaheuristics and the main algorithms that belong to it, i.e. the simulated annealing method, the tabu search, the GRASP method, the variable neighborhood search, the guided local search, the iterated local search, and their variants. Section 3 describes population-based metaheuristics, which manipulate a collection of solutions rather than a single solution at each stage. Section 3.1 describes the field of evolutionary computation and outlines the common search components of this family of algorithms (e.g., selection, variation, and replacement); the focus in this subsection is on evolutionary algorithms such as genetic algorithms, evolution strategies, evolutionary programming, and genetic programming. Section 3.2 presents other evolutionary algorithms, such as estimation of distribution algorithms, differential evolution, coevolutionary algorithms, cultural algorithms, and scatter search with path relinking. Section 3.3 contains an overview of the family of nature-inspired algorithms related to swarm intelligence; the main algorithms in this field are ant colony optimization, particle swarm optimization, bacterial foraging, bee colonies, artificial immune systems, and biogeography-based optimization. Finally, a discussion of the current research status and of the most promising paths for future research is presented in Section 4.

Section snippets

Single-solution based metaheuristics

In this section, we outline single-solution based metaheuristics, also called trajectory methods. Unlike population-based metaheuristics, they start from a single initial solution and move away from it, describing a trajectory in the search space. Some of them can be seen as “intelligent” extensions of local search algorithms. Trajectory methods mainly encompass the simulated annealing method, the tabu search, the GRASP method, the variable neighborhood search, the guided local search, the iterated local search, and their variants.
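
As an illustration of this family, the sketch below outlines an iterated local search in Python. The helper functions (hill_climb, perturb) are simple placeholders of our own, standing in for whatever problem-specific local search and perturbation would be used in practice.

    import random

    def iterated_local_search(objective, initial, local_search, perturb, max_iters=100, seed=None):
        # Generic iterated local search: repeatedly perturb the incumbent local
        # optimum, re-apply local search, and keep the candidate when it is at
        # least as good as the incumbent.
        rng = random.Random(seed)
        current = local_search(initial, rng)
        best = current
        for _ in range(max_iters):
            candidate = local_search(perturb(current, rng), rng)
            if objective(candidate) <= objective(current):  # acceptance criterion
                current = candidate
            if objective(current) < objective(best):
                best = current
        return best

    # Toy usage on the sphere function, with a naive hill climber as local search.
    def hill_climb(x, rng, step=0.05, tries=200):
        x, fx = list(x), sum(v * v for v in x)
        for _ in range(tries):
            y = [v + rng.gauss(0.0, step) for v in x]
            fy = sum(v * v for v in y)
            if fy < fx:
                x, fx = y, fy
        return x

    perturb = lambda x, rng: [v + rng.gauss(0.0, 0.5) for v in x]  # random "kick"
    best = iterated_local_search(lambda x: sum(v * v for v in x), [2.0] * 3,
                                 hill_climb, perturb, max_iters=50, seed=1)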

Population-based metaheuristics

Population-based metaheuristics deal with a set (i.e. a population) of solutions rather than with a single solution. The most studied population-based methods are related to Evolutionary Computation (EC) and Swarm Intelligence (SI). EC algorithms are inspired by Darwin’s evolutionary theory, where a population of individuals is modified through recombination and mutation operators. In SI, the idea is to produce computational intelligence by exploiting simple analogs of social interaction rather than purely individual cognitive abilities.
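
As a concrete example of the evolutionary branch, the following Python sketch implements a very small real-coded genetic algorithm with tournament selection, uniform crossover, Gaussian mutation, and generational replacement with elitism. It is a minimal illustration rather than any particular algorithm from the literature, and all parameter values are arbitrary choices.

    import random

    def genetic_algorithm(fitness, dim, pop_size=30, generations=100, mutation_rate=0.1, seed=None):
        # Minimal real-coded genetic algorithm (minimization): tournament selection,
        # uniform crossover, Gaussian mutation, generational replacement with elitism.
        rng = random.Random(seed)
        pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop_size)]

        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) < fitness(b) else b

        for _ in range(generations):
            elite = min(pop, key=fitness)          # keep the best individual
            offspring = [elite]
            while len(offspring) < pop_size:
                p1, p2 = tournament(), tournament()
                child = [x if rng.random() < 0.5 else y for x, y in zip(p1, p2)]  # uniform crossover
                child = [x + rng.gauss(0.0, 0.3) if rng.random() < mutation_rate else x
                         for x in child]                                          # Gaussian mutation
                offspring.append(child)
            pop = offspring                        # generational replacement
        return min(pop, key=fitness)

    # Example: minimize the sphere function in ten dimensions.
    best = genetic_algorithm(lambda x: sum(v * v for v in x), dim=10, seed=7)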

Discussion and conclusions

This work surveyed several important metaheuristic methods as they are described in the literature. Some of them are single-solution based and others are population-based; although they rest on different philosophies, a number of these metaheuristics are implemented in increasingly similar ways. A unified presentation of these methods has been proposed under the name of adaptive memory programming (AMP) [256]. An important principle behind AMP is that a memory containing elements of the search history (typically a set of elite solutions or solution fragments) is used to construct new provisional solutions, which are improved by a local search procedure and then used, in turn, to update the memory.
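
The fragment below sketches this common template in Python, following our reading of the AMP principle [256]; the function names (construct, improve) are placeholders for problem-specific components rather than part of any published implementation.

    import random

    def adaptive_memory_search(objective, construct, improve, initial_pool,
                               memory_size=10, iterations=100, seed=None):
        # Loose sketch of the adaptive memory programming template: keep an elite
        # memory of solutions, build provisional solutions from that memory,
        # intensify them with local search, and feed the results back into memory.
        rng = random.Random(seed)
        memory = sorted(initial_pool, key=objective)[:memory_size]
        for _ in range(iterations):
            provisional = construct(memory, rng)   # recombine elements stored in memory
            refined = improve(provisional, rng)    # local search / intensification
            memory.append(refined)                 # memory update
            memory = sorted(memory, key=objective)[:memory_size]
        return memory[0]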

References (287)

  • M. Dorigo et al.

    Ant colony optimization theory: a survey

    Theoretical Computer Science

    (2005)
  • G. Dueck et al.

    Threshold accepting: a general purpose optimization algorithm appearing superior to simulated annealing

    Journal of Computational Physics

    (1990)
  • J.D. Farmer et al.

    The immune system, adaptation, and machine learning

    Physica D

    (1986)
  • T.A. Feo et al.

    A probabilistic heuristic for a computationally difficult set covering problem

    Operations Research Letters

    (1989)
  • H.A. Abbass, MBO: marriage in honey bees optimisation: a haplometrosis polygynous swarming approach, in: CEC’2001...
  • A. Abraham et al.

    Swarm Intelligence in Data Mining

    (2006)
  • U. Aickelin et al.

    Danger theory: the link between AIS and IDS?

    Artificial Immune Systems

  • U. Aickelin, S. Cayzer, The danger theory and its application to artificial immune systems, in: Proceedings of the 1st...
  • E. Alba

    Parallel Metaheuristics: A New Class of Algorithms

    (2005)
  • E. Alba et al.

    A survey of parallel distributed genetic algorithms

    Complexity

    (1999)
  • P. Angeline

    Evolutionary optimization versus particle swarm optimization: philosophy and performance differences

  • D. Angus et al.

    Multiple objective ant colony optimisation

    Swarm Intelligence

    (2009)
  • A. Auger, N. Hansen, A restart CMA evolution strategy with increasing population size, in: B. McKay, et al. (Eds.), The...
  • T. Bäck

    Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms

    (1996)
  • T. Bäck, F. Hoffmeister, H.P. Schwefel, A survey of evolution strategies, in: Proceedings of the Fourth International...
  • T. Bäck, G. Rudolph, H.P. Schwefel, Evolutionary programming and evolution strategies: Similarities and differences,...
  • T. Bäck et al.

    An overview of evolutionary algorithms for parameter optimization

    Evolutionary Computation

    (1993)
  • S. Baluja, Population-Based Incremental Learning: A Method for Integrating Genetic Search Based Function Optimization...
  • S. Baluja et al.

    Using optimal dependency-trees for combinatorial optimization: learning the structure of the search space

  • A. Banks et al.

    A review of particle swarm optimization. Part I: background and development

    Natural Computing

    (2007)
  • A. Banks et al.

    A review of particle swarm optimization. Part II: hybridisation, combinatorial, multicriteria and constrained optimization, and indicative applications

    Natural Computing

    (2008)
  • R. Battiti et al.

    Reactive Search and Intelligent Optimization

    (2008)
  • R. Battiti et al.

    The reactive tabu search

    ORSA Journal on Computing

    (1994)
  • E.B. Baum, Towards practical “neural” computation for combinatorial optimization problems, in: AIP Conference...
  • J. Baxter

    Local optima avoidance in depot location

    Journal of the Operational Research Society

    (1981)
  • D. Beasley et al.

    An overview of genetic algorithms. Part I: fundamentals

    University Computing

    (1993)
  • D. Beasley et al.

    An overview of genetic algorithms. Part II: research topics

    University Computing

    (1993)
  • R.L. Becerra, C.A.C. Coello, A cultural algorithm with differential evolution to solve constrained optimization...
  • H.G. Beyer et al.

    Evolution strategies – a comprehensive introduction

    Natural Computing

    (2002)
  • L. Bianchi et al.

    A survey on metaheuristics for stochastic combinatorial optimization

    Natural Computing

    (2009)
  • M. Birattari, Tuning Metaheuristics: A Machine Learning Perspective, Springer Publishing Company, Incorporated, first...
  • M. Birattari, L. Paquete, T. Stützle, K. Varrentrapp, Classification of Metaheuristics and Design of Experiments for...
  • T. Blackwell

    Particle swarm optimization in dynamic environments

  • T. Blickle et al.

    A comparison of selection schemes used in genetic algorithms

    Evolutionary Computation

    (1995)
  • C. Blum et al.

    Metaheuristics in combinatorial optimization: overview and conceptual comparison

    ACM Computing Surveys

    (2003)
  • J. Brest et al.

    Self-adaptive differential evolution algorithm using population size reduction and three strategies

    Soft Computing – A Fusion of Foundations, Methodologies and Applications

    (2011)
  • J. Brownlee, Clever Algorithms: Nature-Inspired Programming Recipes, ...
  • L.N. de Castro

    Artificial Immune Systems: A New Computational Intelligence Approach

    (2002)
  • L.N. de Castro et al.

    aiNet: an artificial immune network for data analysis

  • L.N. de Castro et al.

    Learning and optimization using the clonal selection principle

    IEEE Transactions on Evolutionary Computation

    (2002)