Reinforcement Learning Transfer via Common Subspaces

  • Conference paper
Adaptive and Learning Agents (ALA 2011)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 7113)

Abstract

Agents in reinforcement learning tasks may learn slowly in large or complex tasks; transfer learning is one technique to speed up learning by providing an informative prior. How best to enable transfer between tasks with different state representations and/or actions is currently an open question. This paper introduces the concept of a common task subspace, which is used to autonomously learn how two tasks are related. Experiments in two different nonlinear domains empirically show that a learned inter-state mapping can successfully be used by fitted value iteration to (1) improve the performance of a policy learned with a fixed number of samples, and (2) reduce the time required to converge to a (near-)optimal policy with unlimited samples.
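
The common-subspace idea above admits a compact illustration. The following is a minimal Python sketch, not the paper's algorithm: it assumes hand-coded projection functions phi_src and phi_tgt into a shared subspace, pairs source and target states by nearest neighbor in that subspace, and fits a linear least-squares map between the full state spaces. All function names, the pairing rule, and the linear form of the mapping are illustrative assumptions.

```python
import numpy as np

# Illustrative projections into a hypothetical common task subspace.
# Here both tasks are assumed to share their first two state variables
# (e.g., an angle and an angular velocity); the real projections would
# be chosen per domain.
def phi_src(s):
    return s[:2]

def phi_tgt(s):
    return s[:2]

def learn_inter_state_mapping(src_states, tgt_states):
    """Fit a linear map chi: source state -> target state.

    States are paired by nearest neighbor in the common subspace,
    then a least-squares regression is solved over the pairs."""
    proj_src = np.array([phi_src(s) for s in src_states])
    proj_tgt = np.array([phi_tgt(s) for s in tgt_states])

    # Pair each source state with the target state whose subspace
    # projection is closest.
    paired_tgt = np.array([
        tgt_states[np.argmin(np.linalg.norm(proj_tgt - p, axis=1))]
        for p in proj_src
    ])

    # Linear least-squares mapping with a bias term.
    X = np.hstack([src_states, np.ones((len(src_states), 1))])
    W, *_ = np.linalg.lstsq(X, paired_tgt, rcond=None)
    return lambda s: np.append(s, 1.0) @ W

# Toy usage: a 3-D source task and a 4-D target task.
rng = np.random.default_rng(0)
src_states = rng.normal(size=(200, 3))
tgt_states = rng.normal(size=(500, 4))
chi = learn_inter_state_mapping(src_states, tgt_states)
print(chi(src_states[0]))  # a source state expressed in the target space
```

A mapping learned this way could translate samples gathered in the source task into the target task's state space, where they provide an informative prior for a learner such as fitted value iteration.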

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ammar, H.B., Taylor, M.E. (2012). Reinforcement Learning Transfer via Common Subspaces. In: Vrancx, P., Knudson, M., Grześ, M. (eds) Adaptive and Learning Agents. ALA 2011. Lecture Notes in Computer Science (LNAI), vol. 7113. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-28499-1_2

  • DOI: https://doi.org/10.1007/978-3-642-28499-1_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-28498-4

  • Online ISBN: 978-3-642-28499-1

  • eBook Packages: Computer Science, Computer Science (R0)
