
On the Gradient-Based Sequential Tuning of the Echo State Network Reservoir Parameters

  • Conference paper
  • PRICAI 2016: Trends in Artificial Intelligence (PRICAI 2016)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9810)


Abstract

In this paper, the derivatives of the reservoir state with respect to the input scaling and spectral radius parameters of the Echo State Network reservoir are derived. This is achieved by rewriting the reservoir state update equation in terms of template matrices whose eigenvalues can be pre-calculated, so that the two parameters appear in the update equation as simple multiplications, which are differentiable. The paper then derives the two derivatives and discusses why applying them directly in gradient descent to optimize reservoirs sequentially would be ineffective, owing to the nature of the error surface and the problem of large eigenvalue spread in the reservoir state matrix. Finally, it is suggested how the derivatives obtained here can be applied to jointly optimize the reservoir and the readout at the same time.
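The template-matrix idea described in the abstract can be sketched as follows. This is a minimal illustration under assumed settings (reservoir size, input dimension, sparsity, and all variable names are choices of this sketch, not the paper's): the random reservoir and input matrices are generated once as fixed templates, the template's spectral radius is pre-calculated, and the two tunable parameters then enter the state update only as scalar multipliers.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 100, 1  # reservoir size and input dimension (assumed)

# Template matrices: fixed random structure, generated once.
W0 = rng.uniform(-1, 1, (N, N)) * (rng.random((N, N)) < 0.1)  # sparse reservoir template
W_in0 = rng.uniform(-1, 1, (N, K))                            # input-weight template

# Pre-calculate the template's spectral radius once and normalize,
# so the effective spectral radius enters only as a scalar factor.
lam_max = max(abs(np.linalg.eigvals(W0)))
W0_unit = W0 / lam_max  # spectral radius exactly 1

def step(x, u, s, rho):
    """One reservoir update: s (input scaling) and rho (spectral
    radius) appear as plain multiplications, so the state is
    differentiable with respect to both scalars."""
    return np.tanh(rho * (W0_unit @ x) + s * (W_in0 @ u))

x = np.zeros(N)
x = step(x, np.array([0.5]), s=0.8, rho=0.9)
```

With the templates fixed, tuning reduces to adjusting the two scalars \( s \) and \( \tilde{\rho} \), which is what makes gradient-based updates conceivable in the first place.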


Notes

  1. To avoid confusion, we shall refer to the process of choosing good reservoir parameters as “tuning” the ESN, while “training” means adapting the weights of the readout layer, given some reservoir.

  2. The spectral radius of a matrix is the maximum of the magnitudes of its eigenvalues.

  3. If the sparseness is above 90 %, eigenvalue calculation for \( \mathbf {W} \) may sometimes fail due to numerical problems (using Python’s SciPy package).

  4. The desired response is needed for supervised sequential learning. It is defined by the filter configuration in which the ESN is to be operated. For details, see [1].

  5. It is disjoint because, for each point (\( s, \tilde{\rho } \)), \( \mathbf {w}_{\text {out}} \) is solved for by the method of least squares. Since least squares involves a matrix inverse, a slightly different point \( s+\varDelta s,\tilde{\rho } + \varDelta \tilde{\rho } \) may produce a very different \( \mathbf {w}_{\text {out}} \).
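The least-squares readout mentioned in note 5 can be sketched as follows. This is a hypothetical setup with synthetic data (the dimensions, noise level, and variable names are assumptions of this sketch): the readout weights minimize the squared error between the state matrix and the desired response, and the eigenvalue spread of \( \mathbf{X}^\top \mathbf{X} \) governs how sensitive the solution is to small changes in the states.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 200, 50                   # time steps and reservoir size (assumed)
X = rng.standard_normal((T, N))  # stand-in for the collected reservoir states
d = X @ rng.standard_normal(N) + 0.01 * rng.standard_normal(T)  # synthetic desired response

# Batch least-squares readout: minimizes ||X w_out - d||^2.
w_out, *_ = np.linalg.lstsq(X, d, rcond=None)

# The condition number of X^T X (the eigenvalue spread of the state
# matrix) measures how strongly w_out reacts to small changes in X,
# which is why nearby (s, rho) points can yield very different readouts.
spread = np.linalg.cond(X.T @ X)
```

When the spread is large, even a tiny perturbation of the reservoir parameters changes the states enough to produce an entirely different readout, making the error surface effectively disjoint from point to point.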

References

  1. Haykin, S.S.: Adaptive Filter Theory, 4th edn. Pearson Education India, New Delhi (2005)

  2. Jaeger, H.: The “echo state” approach to analysing and training recurrent neural networks. Technical report GMD Report 148, German National Research Center for Information Technology (2001)

  3. Jaeger, H.: Tutorial on training recurrent neural networks, covering BPPT, RTRL, EKF and the “echo state network” approach. GMD-Forschungszentrum Informationstechnik (2002)

  4. Jaeger, H.: Adaptive nonlinear system identification with echo state networks. Networks 8, 9 (2003)

  5. Jaeger, H.: Reservoir riddle: suggestions for echo state network research. In: Proceedings of International Joint Conference on Neural Networks, pp. 1460–1462 (2005)

  6. Jaeger, H., Haas, H.: Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science 304(5667), 78–80 (2004)

  7. Jiang, F., Berry, H., Schoenauer, M.: Supervised and evolutionary learning of echo state networks. In: Rudolph, G., Jansen, T., Lucas, S., Poloni, C., Beume, N. (eds.) PPSN 2008. LNCS, vol. 5199, pp. 215–224. Springer, Heidelberg (2008)

  8. Küçükemre, A.U.: Echo state networks for adaptive filtering. Ph.D. thesis, University of Applied Sciences (2006)

  9. Lukoševičius, M.: A practical guide to applying echo state networks. In: Montavon, G., Orr, G.B., Müller, K.-R. (eds.) Neural Networks: Tricks of the Trade, 2nd edn. LNCS, vol. 7700, pp. 659–686. Springer, Heidelberg (2012)

  10. Lukoševičius, M., Jaeger, H.: Reservoir computing approaches to recurrent neural network training. Comput. Sci. Rev. 3(3), 127–149 (2009)

  11. Petersen, K.B., Pedersen, M.S., et al.: The matrix cookbook. Technical University of Denmark, vol. 7, p. 15 (2008)

  12. Schrauwen, B., Verstraeten, D., Van Campenhout, J.: An overview of reservoir computing: theory, applications and implementations. In: Proceedings of the 15th European Symposium on Artificial Neural Networks, pp. 471–482 (2007)

  13. Williams, R.J., Zipser, D.: A learning algorithm for continually running fully recurrent neural networks. Neural Comput. 1(2), 270–280 (1989)

  14. Xia, Y., Jelfs, B., Van Hulle, M.M., Príncipe, J.C., Mandic, D.P.: An augmented echo state network for nonlinear adaptive filtering of complex noncircular signals. IEEE Trans. Neural Netw. 22(1), 74–83 (2011)

  15. Yuenyong, S.: Fast and effective tuning of echo state network reservoir parameters using evolutionary algorithms and template matrices. In: 19th International Computer Science and Engineering Conference (ICSEC), November 2015

  16. Yuenyong, S., Nishihara, A.: Evolutionary pre-training for CRJ-type reservoir of echo state networks. Neurocomputing 149, 1324–1329 (2015)


Author information


Correspondence to Sumeth Yuenyong.



Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Yuenyong, S. (2016). On the Gradient-Based Sequential Tuning of the Echo State Network Reservoir Parameters. In: Booth, R., Zhang, M.L. (eds.) PRICAI 2016: Trends in Artificial Intelligence. PRICAI 2016. Lecture Notes in Computer Science, vol. 9810. Springer, Cham. https://doi.org/10.1007/978-3-319-42911-3_54


  • DOI: https://doi.org/10.1007/978-3-319-42911-3_54

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-42910-6

  • Online ISBN: 978-3-319-42911-3

  • eBook Packages: Computer Science, Computer Science (R0)
