Adaptive Learning of Linguistic Hierarchy in a Multiple Timescale Recurrent Neural Network

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7552)

Abstract

Recent research has revealed that hierarchical linguistic structures can emerge in a recurrent neural network with a sufficient number of delayed context layers. As a representative of this type of network, the Multiple Timescale Recurrent Neural Network (MTRNN) has been proposed for recognising and generating known as well as unknown linguistic utterances. However, the training on utterances, as performed in other approaches, demands a high training effort. In this paper we propose a robust mechanism for adaptive learning rates and internal states to speed up the training process substantially. In addition, we compare the generalisation of the network under the adaptive mechanism with that under standard fixed learning rates, finding at least equal capabilities.
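
The adaptive mechanism itself is detailed in the full paper; as a rough illustration of the general idea, the sketch below shows a Rprop-style per-weight learning-rate adaptation (cf. Riedmiller and Braun, 1993) in Python. All factors, bounds, and names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of per-weight adaptive learning rates in the spirit of Rprop.
# This is NOT the paper's exact update rule for the MTRNN -- the grow/shrink
# factors, clipping bounds, and the toy loss below are assumptions.
import numpy as np

ETA_PLUS, ETA_MINUS = 1.2, 0.5      # grow/shrink factors (assumed values)
RATE_MIN, RATE_MAX = 1e-6, 1e-1     # clipping bounds for the learning rates


def adaptive_update(weights, grad, prev_grad, rates):
    """Update `weights` in place using per-weight adaptive learning rates.

    A rate grows while the gradient sign stays stable and shrinks when the
    sign flips, i.e. when the previous step has overshot a local minimum.
    """
    sign_change = grad * prev_grad
    rates = np.where(sign_change > 0, np.minimum(rates * ETA_PLUS, RATE_MAX), rates)
    rates = np.where(sign_change < 0, np.maximum(rates * ETA_MINUS, RATE_MIN), rates)
    weights -= rates * np.sign(grad)   # step against the gradient direction
    return weights, rates


# Usage: one toy weight matrix trained on a dummy quadratic loss ||W - 1||^2.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
rates = np.full_like(W, 1e-3)
prev_grad = np.zeros_like(W)
for _ in range(100):
    grad = 2.0 * (W - 1.0)            # stand-in for the BPTT gradient
    W, rates = adaptive_update(W, grad, prev_grad, rates)
    prev_grad = grad
```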

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Heinrich, S., Weber, C., Wermter, S. (2012). Adaptive Learning of Linguistic Hierarchy in a Multiple Timescale Recurrent Neural Network. In: Villa, A.E.P., Duch, W., Érdi, P., Masulli, F., Palm, G. (eds) Artificial Neural Networks and Machine Learning – ICANN 2012. ICANN 2012. Lecture Notes in Computer Science, vol 7552. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33269-2_70

  • DOI: https://doi.org/10.1007/978-3-642-33269-2_70

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-33268-5

  • Online ISBN: 978-3-642-33269-2

  • eBook Packages: Computer Science, Computer Science (R0)
