Summary
Evolutionary computation is known to require long computation times for large problems. This chapter examines the possibility of improving the evolution process by incorporating domain-specific knowledge into evolutionary computation through lifetime learning. Different approaches to combining lifetime learning and evolution are compared. While the Lamarckian approach is able to speed up the evolution process and improve solution quality, the Baldwinian approach is found to be inefficient. Through empirical analysis, it is conjectured that the inefficiency of the Baldwinian approach stems from the difficulty genetic operations have in producing genotypic changes that match the phenotypic changes obtained by learning. This suggests that indiscriminately combining evolutionary computation with whatever learning method is available is not a proper way to construct hybrid algorithms; rather, the correlation between the genetic operations and learning should be carefully considered.
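The two hybrid schemes compared in the chapter can be contrasted in a minimal sketch. This is not the chapter's actual experimental setup (which involves recurrent neural networks); the toy objective, population size, and hill-climbing "learning" rule below are illustrative assumptions. The only difference between the two modes is one line: the Lamarckian variant writes the learned phenotype back into the genotype, while the Baldwinian variant only lets the learned fitness guide selection.

```python
import random

# Toy problem: maximise f(x) = -sum(x_i^2) over real vectors.
# "Lifetime learning" is a few steps of local hill-climbing.
DIM, POP, GENS, LEARN_STEPS = 5, 20, 30, 5

def fitness(x):
    return -sum(v * v for v in x)

def learn(x):
    # Local search: try small perturbations, keep improvements.
    x = list(x)
    for _ in range(LEARN_STEPS):
        cand = [v + random.gauss(0, 0.1) for v in x]
        if fitness(cand) > fitness(x):
            x = cand
    return x

def evolve(mode):
    pop = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(POP)]
    for _ in range(GENS):
        scored = []
        for geno in pop:
            pheno = learn(geno)                     # lifetime learning
            if mode == "lamarckian":
                geno = pheno                        # learned result inherited
            scored.append((fitness(pheno), geno))   # learned fitness drives selection
        scored.sort(reverse=True)
        parents = [g for _, g in scored[:POP // 2]]
        # Mutation is the only genetic operation in this sketch.
        pop = [[v + random.gauss(0, 0.05) for v in random.choice(parents)]
               for _ in range(POP)]
    return max(fitness(learn(g)) for g in pop)
```

In the Baldwinian mode, selection sees the improved (learned) fitness but the genotype itself is unchanged, so the mutation operator must rediscover by chance the changes that learning found; the chapter's conjecture is that this mismatch between genetic operations and learning is the source of its inefficiency.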
© 2005 Springer-Verlag Berlin Heidelberg
Cite this chapter
Ku, K.W.C., Mak, M.W. (2005). Knowledge Incorporation Through Lifetime Learning. In: Jin, Y. (eds) Knowledge Incorporation in Evolutionary Computation. Studies in Fuzziness and Soft Computing, vol 167. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-44511-1_17
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-06174-5
Online ISBN: 978-3-540-44511-1
eBook Packages: Engineering (R0)