Dynamics of recurrent neural networks with piecewise linear activation function in the context-dependent decision-making task

Abstract

Purpose. This paper aims to elucidate the dynamic mechanism by which recurrent neural networks trained through reinforcement learning solve the context-dependent two-alternative decision-making task, and to develop a methodology for analyzing such models based on dynamical systems theory. Methods. An ensemble of recurrent neural networks with piecewise linear activation functions was constructed, and the models were optimized using the proximal policy optimization method. Because the trial structure keeps the stimuli constant over extended periods, the inputs can be treated as system parameters and the network can be regarded as an autonomous system on finite time intervals. Results. The dynamic mechanism of two-alternative decision-making was uncovered and described in terms of the attractors of these autonomous systems. The possible types of attractors in the model were characterized, and their distribution across the ensemble of models was studied as a function of the cognitive task parameters. A stable division into functional populations was observed across the ensemble of models, and the evolution of the composition of these populations was examined. Conclusion. The proposed approach enables a qualitative description of the problem-solving mechanism in terms of attractors, facilitating the study of the models' functional dynamics and the identification of the populations underlying these dynamical objects. The methodology also makes it possible to track the evolution of the system's attractors and of the corresponding populations during learning. Furthermore, building on this understanding, a two-dimensional network was developed that solves a simplified context-free two-alternative decision problem.
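To make the frozen-input analysis concrete, the following minimal Python sketch illustrates the approach described above: a discrete-time rate network with a piecewise linear activation receives a stimulus vector u that is held constant, so the update map can be treated as an autonomous system whose equilibria are found by direct iteration and whose stability is read off the Jacobian. This is not the authors' implementation; the network size, the choice of ReLU as the piecewise linear nonlinearity, the four-channel input, and all parameter values are illustrative assumptions.

import numpy as np

N = 64                                          # network size (assumed)
rng = np.random.default_rng(0)
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # recurrent weights (assumed scale)
B = rng.normal(0.0, 1.0, (N, 4))                # input weights, 4 task channels (assumed)

def phi(x):
    # Piecewise linear activation; ReLU is used as a stand-in here.
    return np.maximum(x, 0.0)

def step(x, u):
    # One discrete-time update of the network; u is the constant stimulus.
    return phi(W @ x + B @ u)

def find_equilibrium(u, n_iter=5000, tol=1e-10):
    # Iterate the autonomous map x -> step(x, u) for frozen input u.
    # Convergence yields a candidate stable equilibrium x* = step(x*, u).
    x = np.zeros(N)
    for _ in range(n_iter):
        x_new = step(x, u)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return None  # no convergence: the attractor may be a cycle rather than a point

def jacobian(x, u):
    # Jacobian of the frozen-input map: with a piecewise linear unit,
    # the rows of W belonging to inactive units are zeroed out.
    active = (W @ x + B @ u) > 0.0
    return W * active[:, None]

u_ctx = np.array([1.0, 0.0, 0.5, -0.5])         # hypothetical context + evidence signals
x_star = find_equilibrium(u_ctx)
if x_star is not None:
    rho = np.max(np.abs(np.linalg.eigvals(jacobian(x_star, u_ctx))))
    print(f"equilibrium found; Jacobian spectral radius {rho:.3f} (< 1 implies stability)")
else:
    print("no equilibrium reached from this initial state")

Repeating this procedure from many initial states and sweeping u over the stimulus configurations of the task is, in spirit, how the types of attractors and their distribution across a trained ensemble can be charted; in a trained network the weights W and B would come from the optimization rather than from a random draw.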

About the authors

Roman Kononov

Institute of Applied Physics of the Russian Academy of Sciences; Lobachevsky State University of Nizhny Novgorod

ORCID iD: 0009-0008-0441-1559
SPIN code: 8925-5441
Scopus Author ID: 57212471765
ul. Ul'yanova, 46, Nizhny Novgorod, 603950, Russia

O. Maslennikov

Institute of Applied Physics of the Russian Academy of Sciences; Lobachevsky State University of Nizhny Novgorod

ORCID iD: 0000-0002-8909-321X
Scopus Author ID: 56370370000
Researcher ID: D-4789-2013
ul. Ul'yanova, 46, Nizhny Novgorod, 603950, Russia

Vladimir Nekorkin

Institute of Applied Physics of the Russian Academy of Sciences; Lobachevsky State University of Nizhny Novgorod

ORCID iD: 0000-0003-0173-587X
Scopus Author ID: 7004468484
Researcher ID: H-4014-2016
ul. Ul'yanova, 46, Nizhny Novgorod, 603950, Russia

