Exploring weight initialization, diversity of solutions, and degradation in recurrent neural networks trained for temporal and decision-making tasks

  • RESEARCH
  • Published in: Journal of Computational Neuroscience

Abstract

Recurrent Neural Networks (RNNs) are frequently used to model aspects of brain function and structure. In this work, we trained small fully-connected RNNs to perform temporal and flow control tasks with time-varying stimuli. Our results show that different RNNs can solve the same task by converging to different underlying dynamics, and also that performance degrades gracefully as network size is decreased, interval duration is increased, or connectivity damage is induced. For the considered tasks, we explored how robust the trained networks are to changes in task parameterization. In the process, we developed a framework that can be used to parameterize other tasks of interest in computational neuroscience. Our results help quantify different aspects of these models, which are normally used as black boxes but must be understood in order to model the biological response of cerebral cortex areas.
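
As a rough illustration of the kind of model studied here (a minimal sketch, not the authors' exact setup: the toy task, network size, and hyperparameters below are assumptions), a small fully-connected RNN can be trained on a time-varying stimulus with Keras:

    import numpy as np
    from tensorflow import keras

    N_REC = 50     # number of recurrent units (illustrative; the paper varies network size)
    T_STEPS = 100  # time steps per trial

    def make_trials(n_trials, rng=np.random.default_rng(0)):
        """Toy temporal task: sustain an output after a brief input pulse."""
        x = np.zeros((n_trials, T_STEPS, 1))
        y = np.zeros((n_trials, T_STEPS, 1))
        for i in range(n_trials):
            onset = rng.integers(10, 40)
            x[i, onset:onset + 5, 0] = 1.0  # brief stimulus pulse
            y[i, onset + 5:, 0] = 1.0       # desired sustained response
        return x, y

    model = keras.Sequential([
        keras.layers.SimpleRNN(N_REC, activation="tanh",
                               return_sequences=True,
                               input_shape=(T_STEPS, 1)),
        keras.layers.Dense(1),  # linear readout at every time step
    ])
    model.compile(optimizer="adam", loss="mse")

    x_train, y_train = make_trials(2000)
    model.fit(x_train, y_train, epochs=20, batch_size=64)

Varying N_REC, the pulse and interval durations, or the trained recurrent weights then gives handles on the size, duration, and damage analyses described above.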

Data availability statement

Code, simulations, and additional figures for this analysis are available in the following GitHub repository: https://github.com/katejarne/RNN_study_with_keras.
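
For instance, the connectivity-damage analyses amount to perturbing a trained network's recurrent weight matrix and re-measuring task performance. A minimal sketch of one such perturbation (assumed for illustration; see the repository for the actual analysis code, and note that `model` and the 10% damage fraction are hypothetical):

    import numpy as np

    def damage_recurrent_weights(model, fraction, seed=0):
        """Zero a random fraction of recurrent connections in the first RNN layer."""
        rng = np.random.default_rng(seed)
        rnn = model.layers[0]  # assumes a SimpleRNN as the first layer
        kernel, rec_kernel, bias = rnn.get_weights()
        mask = rng.random(rec_kernel.shape) >= fraction  # keep each weight with prob 1 - fraction
        rnn.set_weights([kernel, rec_kernel * mask, bias])

    # e.g. silence 10% of recurrent connections, then re-evaluate the task:
    # damage_recurrent_weights(model, 0.10)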

Acknowledgements

The present work was supported by CONICET and UNQ. C. Jarne acknowledges support from PICT 2020-01413. We also want to thank the anonymous reviewers for their careful reading of the manuscript and their insightful comments and suggestions.

Funding

This work was supported by CONICET and UNQ. C. Jarne acknowledges support from PICT 2020-01413.

Author information

Contributions

C.J. designed the original version of the research, developed the code, performed the simulations, analyzed the data, and wrote the manuscript. R.L. supervised the research, suggested the PC analysis of neural trajectories, the network-size study, and the damage-study visualization, and edited the manuscript.

Corresponding author

Correspondence to Cecilia Jarne.

Ethics declarations

Ethical approval

Not applicable.

Competing interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Action editor: Nicolas Brunel

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (PDF 1905 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Jarne, C., Laje, R. Exploring weight initialization, diversity of solutions, and degradation in recurrent neural networks trained for temporal and decision-making tasks. J Comput Neurosci 51, 407–431 (2023). https://doi.org/10.1007/s10827-023-00857-9

Download citation

  • Received:

  • Revised:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s10827-023-00857-9

Keywords

Navigation