Storing structured sparse memories in a multi-modular cortical network model.


Date

2016-04


Repository Usage Stats

43 views, 18 downloads

Abstract

We study the memory performance of a class of modular attractor neural networks, in which modules are potentially fully connected networks linked to each other via diluted long-range connections. On this anatomical architecture we store memory patterns of activity using a Willshaw-type learning rule. The P patterns are split into categories, such that patterns of the same category activate the same set of modules. We first compute the maximal storage capacity of these networks. We then investigate their error-correction properties through an exhaustive exploration of parameter space, and identify regions where the networks behave as an associative memory device. The crucial parameters that control the retrieval abilities of the network are (1) the ratio between the numbers of synaptic contacts of long- and short-range origin, (2) the number of categories in which a module is activated, and (3) the amount of local inhibition. We discuss the relationship between our model and the networks of cortical patches that have been observed in different cortical areas.
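The Willshaw-type learning rule named in the abstract can be illustrated with a minimal single-module sketch (the parameters N, P, and f below are illustrative choices of ours, not values from the paper): a binary synapse is potentiated whenever its pre- and postsynaptic neurons are co-active in at least one stored pattern, and a degraded cue is completed by thresholding the resulting synaptic inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200   # neurons in the module (illustrative)
P = 30    # number of stored patterns (illustrative)
f = 0.05  # coding level: fraction of active neurons per pattern

# Sparse binary memory patterns.
patterns = (rng.random((P, N)) < f).astype(int)

# Willshaw-type learning: a binary synapse is set to 1 if its pre- and
# postsynaptic neurons are both active in at least one stored pattern.
W = np.zeros((N, N), dtype=int)
for xi in patterns:
    W = np.maximum(W, np.outer(xi, xi))
np.fill_diagonal(W, 0)  # no self-connections

def retrieve(cue, W):
    """One retrieval step: threshold the input each neuron receives
    from the active neurons in the cue."""
    h = W @ cue
    # Strict Willshaw threshold (minus one because self-connections
    # were removed); spurious activations are rare at this sparseness.
    theta = cue.sum() - 1
    return (h >= theta).astype(int)

# Cue: pattern 0 with half of its active neurons silenced.
cue = patterns[0].copy()
active = np.flatnonzero(cue)
cue[active[: len(active) // 2]] = 0

recalled = retrieve(cue, W)
# Fraction of pattern-0 neurons recovered; the strict threshold
# reactivates all of them in this regime.
overlap = (recalled * patterns[0]).sum() / patterns[0].sum()
```

Every neuron of pattern 0 is connected to every cue-active neuron, so its input reaches the threshold and the full pattern is recovered; the paper's model adds the modular structure, diluted long-range connections, and local inhibition on top of this basic mechanism.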

Citation

Published Version (Please cite this version)

10.1007/s10827-016-0590-z

Publication Info

Dubreuil, Alexis M., and Nicolas Brunel (2016). Storing structured sparse memories in a multi-modular cortical network model. Journal of Computational Neuroscience, 40(2), pp. 157–175. 10.1007/s10827-016-0590-z Retrieved from https://hdl.handle.net/10161/23354.

This is constructed from limited available data and may be imprecise. To cite this article, please review & use the official citation provided by the journal.

Scholars@Duke

Nicolas Brunel

Adjunct Professor of Neurobiology

We use theoretical models of brain systems to investigate how they process and learn information from their inputs. Our current work focuses on the mechanisms of learning and memory, from the synapse to the network level, in collaboration with various experimental groups. Using methods from statistical physics, we have shown recently that the synaptic connectivity of a network that maximizes storage capacity reproduces two key experimentally observed features: low connection probability and strong overrepresentation of bidirectionally connected pairs of neurons. We have also inferred 'synaptic plasticity rules' (a mathematical description of how synaptic strength depends on the activity of pre- and postsynaptic neurons) from data, and shown that networks endowed with a plasticity rule inferred from data have a storage capacity that is close to the optimal bound.



Unless otherwise indicated, scholarly articles published by Duke faculty members are made available here with a CC-BY-NC (Creative Commons Attribution Non-Commercial) license, as enabled by the Duke Open Access Policy. If you wish to use the materials in ways not already permitted under CC-BY-NC, please consult the copyright owner. Other materials are made available here through the author’s grant of a non-exclusive license to make their work openly accessible.