
Particle Swarm Optimization with Mutation for High Dimensional Problems


Part of the book series: Studies in Computational Intelligence (SCI, volume 82)

Summary

Particle Swarm Optimization (PSO) is an effective algorithm for solving many global optimization problems. Much of the current research, however, focuses on optimizing functions of relatively few dimensions (typically around 20 to 30, and generally fewer than 100).

Recently, PSO has been applied to a different class of problems, such as neural network training, that can involve hundreds or thousands of variables. High-dimensional problems such as these pose their own unique challenges for any optimization algorithm.

This chapter investigates the use of PSO for very high-dimensional problems. Combining PSO with concepts from evolutionary algorithms, such as a mutation operator, is found in many cases to significantly improve the performance of PSO on very high-dimensional problems. Further improvements are obtained with the addition of a random constriction coefficient.
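The idea described above can be illustrated with a minimal sketch: standard constriction-coefficient PSO, with two additions in the spirit of the chapter's proposal. This is not the chapter's implementation; the Gaussian mutation scale, the mutation probability, and the range from which the random constriction coefficient is drawn are all illustrative assumptions.

```python
import numpy as np

def pso_with_mutation(f, dim, n_particles=30, iters=200,
                      c1=2.05, c2=2.05, mut_prob=0.01, seed=0):
    """Minimal PSO sketch with Gaussian mutation and a randomized
    constriction coefficient. Parameter values are illustrative only."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                               # per-particle best positions
    pbest_val = np.array([f(p) for p in pos])
    g = pbest[np.argmin(pbest_val)].copy()           # global best position
    g_val = pbest_val.min()

    # Clerc-Kennedy constriction coefficient for phi = c1 + c2 > 4
    phi = c1 + c2
    chi_base = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))

    for _ in range(iters):
        # Randomized constriction: scale chi by a random factor each
        # iteration (the [0.8, 1.0] range is an assumption, not the chapter's).
        chi = chi_base * rng.uniform(0.8, 1.0)
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = chi * (vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos))
        pos = pos + vel

        # Gaussian mutation on a small fraction of position components,
        # borrowed from evolutionary algorithms to maintain diversity.
        mask = rng.random(pos.shape) < mut_prob
        pos = pos + mask * rng.normal(0.0, 0.1, pos.shape)

        # Update personal and global bests
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        if vals.min() < g_val:
            g_val = vals.min()
            g = pos[np.argmin(vals)].copy()
    return g, g_val

# Example: a 50-dimensional sphere function as a stand-in objective
best, val = pso_with_mutation(lambda x: float(np.sum(x**2)), dim=50)
```

The mutation step only perturbs roughly `mut_prob` of the coordinates per iteration, so it injects diversity without destroying the swarm's convergence; this matters more as the dimension grows and individual coordinates stagnate.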




Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Achtnig, J. (2008). Particle Swarm Optimization with Mutation for High Dimensional Problems. In: Abraham, A., Grosan, C., Pedrycz, W. (eds) Engineering Evolutionary Intelligent Systems. Studies in Computational Intelligence, vol 82. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-75396-4_15


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-75395-7

  • Online ISBN: 978-3-540-75396-4

  • eBook Packages: Engineering, Engineering (R0)
