Theoretical basis of parameter tuning for finding optima near the boundaries of search spaces in real-coded genetic algorithms

  • Original Paper, published in Soft Computing

Abstract

Studies on parameter tuning in evolutionary algorithms are essential for achieving efficient adaptive searches. This paper provides a theoretical treatment of parameter tuning for real-valued crossover operators. The analysis is devoted to improving the robustness of real-coded genetic algorithms (RCGAs) in finding optima near the boundaries of bounded search spaces, a situation that arises in most real-world applications. The proposed crossover-parameter-tuning technique is expressed mathematically and thus allows the dispersion of the child distribution to be controlled quantitatively. Its universal applicability and effect have been confirmed theoretically and verified empirically with five crossover operators. Statistical properties of several practical RCGAs are also investigated numerically. Performance comparisons with various parameter values have been conducted on test functions whose optima are placed not only at the center but also in a corner of the search space. Although the parameter-tuning technique is fairly simple, the experimental results demonstrate its great effectiveness.



Notes

  1. Or preservation of statistics.

  2. This equation differs from the relationship presented in Kimura et al. (2000). According to private communication with the first author, Kimura, they assumed that parent vectors are independent random variables satisfying \(E({{{\user2{x}_{\theta}^{(i)}}{\user2{x}_{\theta}^{(j\neq i){\rm T}}}}})={\user2{g}_{\theta}\user2{g}_{\theta}}^{{\rm T}}\) under the assumption of an infinite parent population, unlike in this paper.

  3. This equation is different from the relationship presented in Higuchi et al. (2000). In their paper, each parent was assumed to be an independent sample from an infinite parent population satisfying \(E(({\user2{x}_{\theta}^{(i)}}-\user2{g}_{\theta}) ({\user2{x}_{\theta}^{({j\neq i})}}-\user2{g}_{\theta})^{{\rm T}})=0,\) unlike in this paper.

  4. BNDX had been called NDX until 1997.

  5. Experiments with BNDX and TMX have not been conducted because they seem somewhat obsolete and are used only with other search operators in hybrid RCGAs.

Abbreviations

\(\star\) :

Arbitrary variable or a set of variables

\({\mathbb{M}}[\star]\) :

Mean operator

\({\mathbb{V}}[\star]\) :

Variance–covariance matrix operator

\(E(\star)\) :

Mathematical expectation operator

\(\langle\langle \star \rangle\rangle\) :

Estimator operator

\(\psi(\star)\) :

Probability density function, or the infinite population

\(m(t), v(t)\) :

Scalar functions for the first- and second-order statistics

\({{\fancyscript{F}}(\user2{x})}\) :

Objective function

\(\user2{x}, \user2{x}^{{\rm T}}\) :

Solution vector and its transpose

\(\user2{x}_{\theta}^{(\star)}, \user2{x}_{{\lambda}}\) :

Parent vector and child vector

\({{\fancyscript{P}}_{\mu}, \user2{g}_{\mu}}\) :

Whole population of individuals and the mean vector

\({{\fancyscript{P}}_{\theta}, \user2{g}_{\theta}}\) :

Set of individuals selected for a single crossover operation and the mean vector

\({{\fancyscript{P}}_{\lambda}, {\fancyscript{P}}_{\wedge}}\) :

Local and whole child populations

\({\varvec{\phi}}\) :

Supplement vector (the zero vector in this paper)

\(\mu\) :

Population size

θ :

Number of parents for a single crossover operation

\({\lambda}\) :

Child-population size in a single crossover operation

ω :

Number of crossover operations per generation

t :

Generation counter

\({\varkappa}\) :

Expansion or contraction ratio of \({\mathbb{V}}[\fancyscript{P}_{\mu}]\)

z :

Local expansion rate

\(\varepsilon\) :

Expansion rate parameter of SPX

δ :

Experimental parameter determining the initialization domain

n :

Dimension of the search space

References

  • Akimoto Y, Sakuma J, Ono I, Kobayashi S (2009) Adaptation of expansion rate for real-coded crossovers. In: Proceedings of the genetic and evolutionary computation conference (GECCO-2009), ACM SIGEVO, Montréal, Canada, pp 739–746

  • Beyer HG (1999) On the dynamics of EAs without selection. In: Banzhaf W, Reeves C (eds) Foundations of genetic algorithms 5 (FOGA-98), Morgan Kaufmann Publishers, Inc., San Francisco, pp 5–26

  • Beyer HG, Deb K (2001) On self-adaptive features in real-parameter evolutionary algorithms. IEEE Trans Evol Comput 5(3):250–270

  • Coello CAC (2002) Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art. Comput Methods Appl Mech Eng 191(11–12):1245–1287. doi:10.1016/S0045-7825(01)00323-1

  • Deb K (2000) An efficient constraint handling method for genetic algorithms. Comput Methods Appl Mech Eng 186:311–338. doi:10.1016/S0045-7825(99)00389-8

  • Eiben A, Michalewicz Z, Schoenauer M, Smith J (2007) Parameter control in evolutionary algorithms. In: Lobo FG, Lima CF, Michalewicz Z (eds) Parameter setting in evolutionary algorithms, studies in computational intelligence, vol 54. Springer, Berlin, pp 19–46. doi:10.1007/978-3-540-69432-8_2

  • Eshelman LJ, Mathias KE, Schaffer JD (1997) Crossover operator biases: exploiting the population distribution. In: Proceedings of the 7th international conference on genetic algorithms (ICGA’97), Morgan Kaufmann, San Francisco, CA, USA, pp 354–361

  • Goldberg DE (1989) Genetic algorithms in search, optimization, and machine learning. Addison Wesley, Reading

  • Herrera F, Lozano M (2005) Special issue on real coded genetic algorithms: operators, models and foundations. Soft Comput 9(4):223–323

  • Herrera F, Lozano M, Verdegay JL (1998) Tackling real-coded genetic algorithms: operators and tools for behavioural analysis. Artif Intell Rev 12(4):265–319

  • Herrera F, Lozano M, Sánchez AM (2003) A taxonomy for the crossover operator for real-coded genetic algorithms: an experimental study. Int J Intell Syst 18(3):309–338

  • Higuchi T, Tsutsui S, Yamamura M (2000) Theoretical analysis of simplex crossover for real-coded genetic algorithms. In: Schoenauer M, Deb K, Rudolph G, Yao X, Lutton E, Merelo JJ, Schwefel HP (eds) Proceedings of the sixth international conference on parallel problem solving from nature—PPSN VI. Lecture Notes in Computer Science, vol 1917. Springer, Berlin, pp 365–374

  • Hoel PG (1976) Elementary statistics, 4th edn. Wiley, New York

  • Kimura S, Ono I, Kita H, Kobayashi S (2000) An extension of UNDX based on guidelines for designing crossover operators: proposition and evaluation of ENDX. Trans Soc Instrum Control Eng 36(12):1162–1171 (in Japanese with English abstract)

  • Kirkpatrick S, Gelatt C, Vecchi M (1983) Optimization by simulated annealing. Science 220(4598):671–680

  • Kita H, Yamamura M (1999) A functional specialization hypothesis for designing genetic algorithms. In: Proceedings of IEEE international conference on systems, man and cybernetics (SMC’99), IEEE, Tokyo, Japan, pp III–579–584

  • Kita H, Ono I, Kobayashi S (1998) Theoretical analysis of the unimodal normal distribution crossover for real-coded genetic algorithms. In: Proceedings of 1998 international conference on evolutionary computation, pp 529–534

  • Kita H, Ono I, Kobayashi S (1999) Multi-parental extension of the unimodal normal distribution crossover for real-coded genetic algorithms. In: Proceedings of IEEE congress on evolutionary computation (CEC 1999), pp 1581–1587

  • Larrañaga P, Lozano JA (eds) (2002) Estimation of distribution algorithms. Kluwer, Norwell

  • Michalewicz Z, Janikow CZ (1991) Handling constraints in genetic algorithms. In: Belew RK, Booker LB (eds) Proceedings of the fourth international conference on genetic algorithms (ICGA-91), Morgan Kaufmann, San Diego, CA, USA, pp 151–157

  • Mühlenbein H, Mahnig T, Rodriguez AO (1999) Schemata, distributions and graphical models in evolutionary optimization. J Heuristics 5:215–247

  • Ono I, Kobayashi S (1997) A real-coded genetic algorithm for function optimization using unimodal normal distribution crossover. In: Proceedings of the 7th international conference on genetic algorithms (ICGA’97), Morgan Kaufmann, San Francisco, CA, USA, pp 246–253

  • Ono I, Yamamura M, Kobayashi S (1996) A genetic algorithm with characteristic preservation for function optimization. In: Proceedings of the 4th international conference on soft computing—IIZUKA’96 methodologies for the conception, design, and application of intelligent systems, Iizuka, Fukuoka, Japan, pp 511–514

  • Ono I, Kita H, Kobayashi S (1999) A robust real-coded genetic algorithm using unimodal normal distribution crossover augmented by uniform crossover: effects of self-adaptation of crossover probabilities. In: Banzhaf W, Daida JM, Eiben AE, Garzon MH, Honavar V, Jakiela MJ, Smith RE (eds) Proceedings of the genetic and evolutionary computation conference (GECCO’99), Orlando, Florida, USA, pp 496–503

  • Sakuma J, Kobayashi S (2001) Extrapolation-directed crossover for real-coded GA: overcoming deceptive phenomena by extrapolative search. In: Proceedings of IEEE congress on evolutionary computation (CEC 2001), Seoul, Korea, pp 655–662

  • Sakuma J, Kobayashi S (2002) Extrapolation-directed crossover considering sampling bias in real-coded genetic algorithm. Trans Jpn Soc Artif Intell 17(6):699–707 (in Japanese with English abstract)

  • Satoh H, Yamamura M, Kobayashi S (1996) Minimal generation gap model for GAs considering both exploration and exploitation. In: Proceedings of the 4th international conference on soft computing—IIZUKA’96 methodologies for the conception, design, and application of intelligent systems, Iizuka, Fukuoka, Japan, pp 494–497

  • Someya H (2007) Promising search regions of crossover operators for function optimization. In: Proceedings of the 20th international conference on industrial, engineering & other applications of applied intelligent systems: IEA/AIE 2007. Lecture Notes in Artificial Intelligence, vol 4570: New Trends in Applied Artificial Intelligence, pp 434–443

  • Someya H (2008a) Parameter tuning of real-valued crossover operators for statistics preservation. In: Proceedings of the seventh international conference on simulated evolution and learning: SEAL 2008. Lecture Notes in Computer Science, vol 5361: Simulated Evolution and Learning. Melbourne, Australia, pp 269–278. doi:10.1007/978-3-540-89694-4_28

  • Someya H (2008b) Theoretical parameter value for appropriate population variance of the distribution of children in real-coded GA. In: Proceedings of IEEE congress on evolutionary computation (CEC 2008) as part of the IEEE world congress on computational intelligence (WCCI 2008), IEEE, Hong Kong, pp 2722–2729

  • Someya H, Yamamura M (2001) Genetic algorithm with search area adaptation for the function optimization and its experimental analysis. In: Proceedings of IEEE congress on evolutionary computation (CEC 2001), Seoul, Korea, pp 933–940

  • Someya H, Yamamura M (2002) Robust evolutionary algorithms with toroidal search space conversion for function optimization. In: Proceedings of the genetic and evolutionary computation conference 2002, pp 553–560

  • Someya H, Yamamura M (2005) A robust real-coded evolutionary algorithm with toroidal search space conversion. In: Herrera and Lozano (2005), pp 254–269

  • Tang K, Yao X, Suganthan PN, MacNish C, Chen YP, Chen CM, Yang Z (2008) Benchmark functions for the CEC’2008 special session and competition on large scale global optimization. Tech. rep., IEEE congress on evolutionary computation: CEC 2008 (Proceedings of the IEEE world congress on computational intelligence: WCCI 2008), Hong Kong

  • Tsutsui S (1998) Multi-parent recombination in genetic algorithms with search space boundary extension by mirroring. In: Proceedings of the fifth international conference on parallel problem solving from nature (PPSN V), pp 428–437

  • Tsutsui S (2000) Sampling bias and search space boundary extension in real coded genetic algorithms. In: Whitley LD, Goldberg DE, Cantú-Paz E, Spector L, Parmee IC, Beyer HG (eds) Proceedings of the genetic and evolutionary computation conference (GECCO’00), Las Vegas, Nevada, USA, pp 211–218

  • Tsutsui S, Goldberg DE (2001) Search space boundary extension method in real-coded genetic algorithms. Inform Sci 133(3–4):229–247

  • Tsutsui S, Yamamura M, Higuchi T (1999) Multi-parent recombination with simplex crossover in real coded genetic algorithms. In: Banzhaf W, Daida JM, Eiben AE, Garzon MH, Honavar V, Jakiela MJ, Smith RE (eds) Proceedings of the genetic and evolutionary computation conference (GECCO’99), Orlando, Florida, USA, pp 657–664

Acknowledgments

The author thanks Prof. H. Kita (Kyoto University) and Assoc. Prof. S. Kimura (Tottori University) for their helpful advice on the variance–covariance matrices of UNDX-m and ENDX, respectively. This work was supported in part by the Grant-in-Aid for Scientific Research (No. 22700243). Part of the work in this paper appeared previously in the conference papers Someya (2008a, b).

Author information

Corresponding author

Correspondence to Hiroshi Someya.

Appendices

Appendix 1: Definition of CEC 2008 benchmark functions

The CEC 2008 benchmark functions are described in Table 4. The function value at the optimum of every function except \({{\fancyscript{F}}}_{7}\) is zero. Since the optimum of \({{\fancyscript{F}}}_{7}\) is unknown, the best function value found in the CEC 2008 competition, −1548, was treated as the offset in Sect. 5.2.

Table 4 Description of the CEC 2008 benchmark functions

Appendix 2: Theoretical curves of sampling bias with SPX

The crossover operation using SPX with \(\theta=2\) within a one-dimensional search space [0, 1] is considered. The parental vectors and their centroid are thus regarded as scalar values: \(x_{\theta}^{{(1)}}, x_{\theta}^{(2)},\) and \(g_{\theta},\) where \(0 \le x_{\theta}^{(1)} < x_{\theta }^{(2)} \le 1.\) The proof can be regarded as a generalized version of that for BLX-α with \(\alpha=0.5\) presented in Someya and Yamamura (2002).

From a pair of the parents, a child \(x_{\lambda}\) is generated within the range

$$ g_{\theta}-\frac{w}{2}<x_{\lambda}<g_{\theta}+\frac{w}{2}, $$
(59)

where w represents the width of the child distribution in a single crossover operation. Substituting \(g_{\theta}= \frac{{x_{\theta}^{{(1)}}+ x_{\theta}^{{(2)}}}}{2}\) and \(w=\varepsilon({x_{\theta}^{{(2)}}- x_{\theta}^{{(1)}}})\) into the above inequality leads to

$$ -1<\frac{2x_{\lambda}-({x_{\theta}^{(1)}+x_{\theta }^{(2)}})}{\varepsilon({x_{\theta}^{(2)}- x_{\theta}^{{(1)}}})}< 1. $$
(60)
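As a numerical sanity check, the sampling rule of (59) is easy to sketch in code. The following Python snippet (an illustrative sketch, not from the paper; the function name `spx_child_1d` is my own) samples children by one-dimensional SPX with \(\theta=2\) and confirms that every child falls within the range of (59):

```python
import random

random.seed(0)

def spx_child_1d(x1, x2, eps):
    """Sample one child by SPX with theta = 2 parents in one dimension.

    The child is drawn uniformly from an interval of width
    w = eps * (x2 - x1) centred on the parents' centroid g,
    which is exactly the range stated in Eq. (59).
    """
    g = (x1 + x2) / 2.0   # centroid of the two parents
    w = eps * (x2 - x1)   # width of the child distribution
    return random.uniform(g - w / 2.0, g + w / 2.0)

# Every child must satisfy Eq. (59): g - w/2 <= x_child <= g + w/2.
x1, x2, eps = 0.2, 0.6, 3.0
g, w = (x1 + x2) / 2.0, eps * (x2 - x1)
children = [spx_child_1d(x1, x2, eps) for _ in range(10_000)]
assert all(g - w / 2.0 <= c <= g + w / 2.0 for c in children)
```

Note that with \(\varepsilon>1\) the child interval extends beyond the segment between the two parents, which is why children can be generated outside the bounded search space near its boundaries.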

Inequality (60) can be divided into the following two inequalities:

$$ \frac{(\varepsilon+1)x_{\theta}^{(1)}-2x_{\lambda}}{\varepsilon-1}< x_{\theta}^{(2)}, $$
(61)
$$ \frac{(\varepsilon-1)x_{\theta}^{(1)}+2x_{\lambda}}{\varepsilon+1}< x_{\theta}^{(2)}. $$
(62)

These show that the p.d.f. of children \(p(x_{\lambda})\) can be expressed with a normalization constant K as follows:

$$ p(x_{\lambda})=K\{{p_{A}(x_{\lambda})+p_{B}(x_{\lambda})+ p_{C} (x_{\lambda})}\}, $$
(63)
$$ \left\{\begin{array}{ll} p_{A}(x_{\lambda})=\displaystyle\int\limits_{x_{\lambda}}^{\gimel}\int\limits_{\frac{(\varepsilon+1)x_{\theta}^{(1)}-2x_{\lambda}}{\varepsilon-1}}^{1}w^{-1}\,{\hbox{d}}x_{\theta}^{(2)}\,{\hbox{d}}x_{\theta}^{(1)} & : x_{\lambda}\le x_{\theta}^{(1)},\\[2ex] p_{B}(x_{\lambda})=\displaystyle\int\limits_{0}^{x_{\lambda}}\int\limits_{x_{\lambda}}^{1}w^{-1}\,{\hbox{d}}x_{\theta}^{(2)}\,{\hbox{d}}x_{\theta}^{(1)} & : x_{\theta}^{(1)}\le x_{\lambda}\le x_{\theta}^{(2)},\\[2ex] p_{C}(x_{\lambda})=\displaystyle\int\limits_{0}^{x_{\lambda}}\int\limits_{\frac{(\varepsilon-1)x_{\theta}^{(1)}+2x_{\lambda}}{\varepsilon+1}}^{x_{\lambda}}w^{-1}\,{\hbox{d}}x_{\theta}^{(2)}\,{\hbox{d}}x_{\theta}^{(1)} & : x_{\theta}^{(2)}\le x_{\lambda}, \end{array}\right. $$
(64)

where \(\gimel\) is the right-hand side of \(x_{\theta}^{(1)}< \frac{2(x_{\lambda}-1)}{\varepsilon+1}+1,\) derived by substituting \(x_{\theta}^{(2)}=1\) into (61). By calculating the inner integrals of the three above cases, the following can be obtained:

$$ \left\{\begin{array}{ll} \varepsilon\, p_{A}(x_{\lambda})=\displaystyle\int\limits_{x_{\lambda}}^{\gimel}\left\{\ln\left|1-x_{\theta}^{(1)}\right|-\ln\left|\frac{2\left(x_{\theta}^{(1)}-x_{\lambda}\right)}{\varepsilon-1}\right|\right\}{\hbox{d}}x_{\theta}^{(1)} & : x_{\lambda}\le x_{\theta}^{(1)},\\[2ex] \varepsilon\, p_{B}(x_{\lambda})=\displaystyle\int\limits_{0}^{x_{\lambda}}\left\{\ln\left|1-x_{\theta}^{(1)}\right|-\ln\left|x_{\lambda}-x_{\theta}^{(1)}\right|\right\}{\hbox{d}}x_{\theta}^{(1)} & : x_{\theta}^{(1)}\le x_{\lambda}\le x_{\theta}^{(2)},\\[2ex] \varepsilon\, p_{C}(x_{\lambda})=\displaystyle\int\limits_{0}^{x_{\lambda}}\left\{\ln\left|x_{\lambda}-x_{\theta}^{(1)}\right|-\ln\left|\frac{-2\left(x_{\theta}^{(1)}-x_{\lambda}\right)}{\varepsilon+1}\right|\right\}{\hbox{d}}x_{\theta}^{(1)} & : x_{\theta}^{(2)}\le x_{\lambda}, \end{array}\right. $$
(65)

where \(\left|{\star}\right|\) is the absolute value. Integrating gives

$$ \begin{aligned} \frac{\varepsilon}{K} \cdot p(x_{\lambda}) & = \int\limits_{0}^{\gimel} {\{ {\ln ({1 - x_{\theta }^{{(1)}} } ) - \ln 2} \}\,{\hbox{d}}x_{\theta }^{{(1)}}}\\ &\quad-\int\limits_{{x_{\lambda}}}^{\gimel}{\{{\ln ( {x_{\theta}^{{(1)}}-x_{\lambda}})-\ln(\varepsilon-1)} \}\,{\hbox{d}}x_{\theta}^{{(1)}}}\\ &\quad- \int\limits_{0}^{{x_{\lambda}}}{\{{\ln({- x_{\theta }^{{(1)}} + x_{\lambda}})-\ln(\varepsilon+1)} \}\,{\hbox{d}}x_{\theta}^{{(1)}}}.\\ \end{aligned} $$
(66)

By calculating each integral, this can be simplified to

$$ p(x_{\lambda})=\frac{K}{\varepsilon}\{\ln({\varepsilon+1}) -\ln{2}-(1-x_{\lambda})\ln{(1-x_{\lambda})}-x_{\lambda} \ln x_{\lambda}\}, $$
(67)

where

$$ K = \frac{2\varepsilon}{2\{\ln({\varepsilon+1})-\ln{2}\}+1}, $$
(68)

which is determined so that

$$ \int\limits_{0}^{1}p(x_{\lambda})\,{\hbox{d}}x_{\lambda}=\frac{K}{\varepsilon}\left\{\ln({\varepsilon+1})-\ln{2}+\frac{1}{2}\right\}=1. $$
(69)
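The derived density (67) with the normalization constant (68) can also be checked numerically. The sketch below (illustrative Python, not from the paper; the function name `p_child` is my own) integrates \(p(x_{\lambda})\) over [0, 1] with a midpoint rule and confirms the normalization condition (69):

```python
import math

def p_child(x, eps):
    """Theoretical child p.d.f. of Eq. (67) with K from Eq. (68)."""
    K = 2.0 * eps / (2.0 * (math.log(eps + 1.0) - math.log(2.0)) + 1.0)
    # x*ln(x) -> 0 as x -> 0, so guard both endpoints of [0, 1].
    xlnx = x * math.log(x) if x > 0.0 else 0.0
    ylny = (1.0 - x) * math.log(1.0 - x) if x < 1.0 else 0.0
    return (K / eps) * (math.log(eps + 1.0) - math.log(2.0) - ylny - xlnx)

eps = 3.0
# Check Eq. (69): the density integrates to one over [0, 1]
# (midpoint rule on a fine grid).
n = 100_000
integral = sum(p_child((i + 0.5) / n, eps) for i in range(n)) / n
assert abs(integral - 1.0) < 1e-6
```

The density is symmetric about \(x_{\lambda}=0.5\) and, notably, is strictly positive at the boundaries \(x_{\lambda}\in\{0,1\}\), which is the sampling-bias property the appendix quantifies.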

Cite this article

Someya, H. Theoretical basis of parameter tuning for finding optima near the boundaries of search spaces in real-coded genetic algorithms. Soft Comput 16, 23–45 (2012). https://doi.org/10.1007/s00500-011-0732-1