Innovation Engines: Automated Creativity and Improved Stochastic Optimization via Deep Learning

ABSTRACT
The Achilles' heel of stochastic optimization algorithms is getting trapped in local optima. Novelty Search avoids this problem by encouraging a search in all interesting directions: it replaces a performance objective with a reward for novel behaviors, as defined by a human-crafted, and often simple, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and herons instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, where novelty means interesting novelty. For example, a novelty pressure in image space does not explore the low-level pixel space, but instead creates a pressure to produce new types of images (e.g., churches, mosques, obelisks, etc.). Here we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm's key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: e.g., producing intelligent software, robot controllers, optimized physical components, and art.
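The core mechanism the abstract describes, rewarding behavioral novelty rather than performance, can be sketched in a few lines. The following is a minimal toy illustration, not the paper's implementation: candidates are 2-D points, and `behavior_of` is a placeholder for where an Innovation Engine would plug in a DNN embedding of the phenotype (e.g., the feature or class-probability vector a trained image classifier assigns to an evolved image). All names and parameters here are hypothetical.

```python
import math
import random

def behavior_of(candidate):
    # Placeholder: an Innovation Engine would return a DNN's abstract
    # characterization of the phenotype; here the raw point stands in.
    return candidate

def distance(a, b):
    return math.dist(a, b)

def novelty(candidate, archive, k=5):
    """Novelty = mean distance to the k nearest behaviors in the archive."""
    if not archive:
        return float("inf")
    dists = sorted(distance(behavior_of(candidate), b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(generations=50, pop_size=20, threshold=0.5, seed=0):
    rng = random.Random(seed)
    archive = []
    pop = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by novelty, not by a performance objective.
        ranked = sorted(pop, key=lambda c: novelty(c, archive), reverse=True)
        # Archive behaviors that are sufficiently novel.
        for c in ranked:
            if novelty(c, archive) > threshold:
                archive.append(behavior_of(c))
        # Mutate the most novel half to form the next population.
        parents = ranked[: pop_size // 2]
        pop = [(p[0] + rng.gauss(0, 0.2), p[1] + rng.gauss(0, 0.2))
               for p in parents for _ in range(2)]
    return archive

archive = novelty_search()
```

With a crude hand-crafted distance like this, novelty pressure explores the low-level space directly, which is exactly the failure mode the abstract describes for images; swapping `behavior_of` for a DNN embedding is what turns raw novelty into "interesting" novelty.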