Attractor Dynamics in Networks with Learning Rules Inferred from In Vivo Data.

Date

2018-07

Repository Usage Stats

60 views, 23 downloads

Abstract

The attractor neural network scenario is a popular framework for memory storage in association cortex, but a large gap remains between models based on this scenario and experimental data. We study a recurrent network model in which both the learning rules and the distribution of stored patterns are inferred from the distributions of visual responses to novel and familiar images in the inferior temporal cortex (ITC). Unlike classical attractor neural network models, our model exhibits graded activity in retrieval states, with firing-rate distributions that are close to lognormal. The inferred learning rules are close to maximizing the number of stored patterns within a family of unsupervised Hebbian learning rules, suggesting that learning rules in ITC are optimized to store a large number of attractor states. Finally, we show that there exist two types of retrieval states: one in which firing rates are constant in time and another in which firing rates fluctuate chaotically.
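To make the scenario in the abstract concrete, here is a minimal sketch of attractor retrieval in a rate-based recurrent network with a separable Hebbian rule of the form W_ij = (1/N) Σ_μ f(ξ_i^μ) g(ξ_j^μ) and lognormally distributed stored patterns. All parameters and the functional forms of f, g, and the transfer function phi are placeholder assumptions for illustration, not the rules inferred in the paper.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters, chosen only for this demo (not from the paper)
N, P = 1000, 10                  # neurons, stored patterns
tau, dt, n_steps = 20.0, 1.0, 400  # time constant (ms), step, iterations

# Stored patterns with a lognormal rate distribution, as observed in ITC
patterns = rng.lognormal(mean=0.0, sigma=1.0, size=(P, N))

# Separable Hebbian rule: W_ij = (1/N) * sum_mu f(xi_i^mu) * g(xi_j^mu).
# Here f = g = mean-centered rates, a stand-in for the inferred rules.
def f(x):
    return x - x.mean(axis=-1, keepdims=True)

F = f(patterns)                  # postsynaptic factor, shape (P, N)
G = f(patterns)                  # presynaptic factor
W = (F.T @ G) / N
np.fill_diagonal(W, 0.0)

# Rate dynamics tau * dr/dt = -r + phi(W r). A saturating threshold-linear
# phi keeps activity bounded; the paper instead fits phi to data, which is
# what yields graded, lognormal-like retrieval rates.
r_max = 50.0
def phi(h):
    return np.clip(h, 0.0, r_max)

r = patterns[0] + 0.5 * rng.standard_normal(N)   # noisy cue for pattern 0
for _ in range(n_steps):
    r += (dt / tau) * (-r + phi(W @ r))

# Retrieval: the final state should correlate with the cued pattern
# far more strongly than with any other stored pattern
overlaps = [np.corrcoef(r, p)[0, 1] for p in patterns]
print(f"cued pattern overlap:  {overlaps[0]:.2f}")
print(f"largest other overlap: {max(overlaps[1:]):.2f}")

With these placeholder choices the retrieved state is close to a clipped version of the cued pattern; the exact overlap values depend on the assumed f, g, and phi.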

Published Version (Please cite this version)

10.1016/j.neuron.2018.05.038

Publication Info

Pereira, Ulises, and Nicolas Brunel (2018). Attractor Dynamics in Networks with Learning Rules Inferred from In Vivo Data. Neuron, 99(1), pp. 227–238.e4. doi: 10.1016/j.neuron.2018.05.038. Retrieved from https://hdl.handle.net/10161/23348.

This citation is constructed from limited available data and may be imprecise. To cite this article, please review and use the official citation provided by the journal.

Scholars@Duke

Nicolas Brunel

Adjunct Professor of Neurobiology

We use theoretical models of brain systems to investigate how they process and learn information from their inputs. Our current work focuses on the mechanisms of learning and memory, from the synapse to the network level, in collaboration with various experimental groups. Using methods from statistical physics, we have shown recently that the synaptic connectivity of a network that maximizes storage capacity reproduces two key experimentally observed features: low connection probability and strong overrepresentation of bidirectionally connected pairs of neurons. We have also inferred 'synaptic plasticity rules' (a mathematical description of how synaptic strength depends on the activity of pre- and postsynaptic neurons) from data, and shown that networks endowed with a plasticity rule inferred from data have a storage capacity that is close to the optimal bound.
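As a rough illustration of what such a plasticity rule looks like in the separable setting described above, a single stimulus presentation could update the weights as below. The thresholded-linear forms of f and g and all parameter values are placeholders for this sketch, not the rules inferred from data.

import numpy as np

def plasticity_update(W, r_pre, r_post, eta=1e-3, theta_f=1.0, theta_g=1.0):
    """One Hebbian update: Delta W_ij = eta * f(r_post_i) * g(r_pre_j).

    f and g are simple thresholded rates here, so activity below the
    threshold depresses a synapse and activity above it potentiates it.
    The actual inferred rules are nonlinear functions fit to ITC data.
    """
    f = r_post - theta_f
    g = r_pre - theta_g
    return W + eta * np.outer(f, g)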



Unless otherwise indicated, scholarly articles published by Duke faculty members are made available here with a CC-BY-NC (Creative Commons Attribution Non-Commercial) license, as enabled by the Duke Open Access Policy. If you wish to use the materials in ways not already permitted under CC-BY-NC, please consult the copyright owner. Other materials are made available here through the author’s grant of a non-exclusive license to make their work openly accessible.