Research article · DOI: 10.1145/2739480.2754703

Innovation Engines: Automated Creativity and Improved Stochastic Optimization via Deep Learning

Published: 11 July 2015

ABSTRACT

The Achilles' heel of stochastic optimization algorithms is getting trapped in local optima. Novelty Search avoids this problem by encouraging search in all interesting directions. It does so by replacing a performance objective with a reward for novel behaviors, as defined by a human-crafted, and often simple, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and heroes instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a novelty pressure in image space does not explore low-level pixel space, but instead creates a pressure to produce new types of images (e.g., churches, mosques, obelisks). Here we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of its key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: e.g., producing intelligent software, robot controllers, optimized physical components, and art.
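To make the core idea concrete, here is a minimal, hedged sketch of a novelty-search loop where novelty is measured in an abstract descriptor space rather than raw phenotype (pixel) space. Everything here is a stand-in assumption: the paper's Innovation Engine uses a trained DNN's high-level representations (or class predictions) as the descriptor, whereas this sketch substitutes a fixed random linear projection purely so the example is self-contained and runnable. The names `describe`, `novelty`, and `mutate` are illustrative, not from the paper.

```python
import random
import math

random.seed(0)

# Stand-in for a DNN feature extractor: a fixed random projection of a
# flat phenotype vector into a low-dimensional "abstract" descriptor.
DIM_RAW, DIM_FEAT = 64, 8
PROJ = [[random.gauss(0, 1) for _ in range(DIM_RAW)] for _ in range(DIM_FEAT)]

def describe(phenotype):
    """Map a phenotype to an abstract behavior descriptor."""
    return tuple(sum(w * x for w, x in zip(row, phenotype)) for row in PROJ)

def novelty(descriptor, archive, k=5):
    """Mean distance to the k nearest descriptors already in the archive."""
    if not archive:
        return float('inf')
    dists = sorted(math.dist(descriptor, a) for a in archive)
    return sum(dists[:k]) / len(dists[:k])

def mutate(phenotype, sigma=0.1):
    return [x + random.gauss(0, sigma) for x in phenotype]

# Core novelty-search loop: individuals are rewarded for being far from
# previously archived behaviors in descriptor space, not in raw space.
archive = []
population = [[random.random() for _ in range(DIM_RAW)] for _ in range(20)]
for generation in range(50):
    scored = [(novelty(describe(p), archive), p) for p in population]
    scored.sort(key=lambda s: s[0], reverse=True)
    # Archive the most novel individual of this generation.
    archive.append(describe(scored[0][1]))
    # Reproduce from the most novel half of the population.
    parents = [p for _, p in scored[:10]]
    population = [mutate(random.choice(parents)) for _ in range(20)]

print(len(archive))
```

The design point the sketch illustrates: swapping the descriptor function changes what "novel" means. With an identity descriptor the pressure rewards pixel static; with an abstract (DNN-like) descriptor it rewards differences at the level the descriptor captures, which is the paper's motivation for using deep features.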


Published in

GECCO '15: Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation
July 2015, 1496 pages
ISBN: 9781450334723
DOI: 10.1145/2739480
Copyright © 2015 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

GECCO '15 paper acceptance rate: 182 of 505 submissions (36%). Overall acceptance rate: 1,669 of 4,410 submissions (38%).
