Unsupervised Learning of Persistent and Sequential Activity.

dc.contributor.author

Pereira, Ulises

dc.contributor.author

Brunel, Nicolas

dc.date.accessioned

2021-06-06T15:52:38Z

dc.date.available

2021-06-06T15:52:38Z

dc.date.issued

2019-01

dc.date.updated

2021-06-06T15:52:14Z

dc.description.abstract

Two strikingly distinct types of activity have been observed in various brain structures during delay periods of delayed-response tasks: persistent activity (PA), in which a sub-population of neurons maintains an elevated firing rate throughout an entire delay period; and sequential activity (SA), in which sub-populations of neurons are activated sequentially in time. It has been hypothesized that both types of dynamics can be "learned" by the relevant networks from the statistics of their inputs, thanks to mechanisms of synaptic plasticity. However, the conditions under which a synaptic plasticity rule and input statistics lead to stable learning of these two types of dynamics are still unclear. In particular, it is unclear whether a single learning rule can learn both types of activity patterns, depending on the statistics of the inputs driving the network. Here, we first characterize the complete bifurcation diagram of a firing rate model of multiple excitatory populations with an inhibitory mechanism, as a function of the parameters characterizing its connectivity. We then investigate how an unsupervised, temporally asymmetric Hebbian plasticity rule shapes the dynamics of the network. Consistent with previous studies, we find that stable learning of PA and SA requires an additional stabilization mechanism. We show that a generalized version of the standard multiplicative homeostatic plasticity (Renart et al., 2003; Toyoizumi et al., 2014) stabilizes learning by effectively masking excitatory connections during stimulation and unmasking them during retrieval. Using the bifurcation diagram derived for fixed connectivity, we study analytically the temporal evolution and the steady state of the learned recurrent architecture as a function of the parameters characterizing the external inputs. Slowly changing stimuli lead to PA, while rapidly changing stimuli lead to SA. Our model thus shows how a network with plastic synapses can stably and flexibly learn PA and SA in an unsupervised manner.
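
The mechanism described in the abstract can be sketched in a few lines of code. The following is a minimal, illustrative simulation, not the authors' model: it assumes a discrete-time rate network of N excitatory populations with global inhibition, a temporally asymmetric Hebbian update (postsynaptic rate times a low-pass trace of presynaptic rates), and a multiplicative normalization of each population's incoming weights standing in for the generalized homeostatic rule. The transfer function, parameter values, and stimulation protocol are hypothetical choices; only the qualitative contrast between slowly and rapidly changing stimuli follows the abstract.

```python
import numpy as np

N = 10           # number of excitatory populations
dt = 1.0         # integration time step (arbitrary units)
tau = 10.0       # rate and trace time constant
eta = 0.05       # Hebbian learning rate
w_max = 1.0      # target total incoming weight for the homeostatic rescaling
g_inh = 0.5      # strength of global inhibition

def phi(x):
    """Sigmoidal transfer function (illustrative choice, not from the paper)."""
    return 1.0 / (1.0 + np.exp(-4.0 * (x - 0.5)))

def learn(stim_duration, n_steps=4000):
    """Stimulate the populations one after another, each for `stim_duration`
    steps, while the weights evolve under the asymmetric Hebbian rule plus a
    multiplicative normalization of each population's incoming weights."""
    W = np.zeros((N, N))     # recurrent weights, indexed W[post, pre]
    r = np.zeros(N)          # current population rates
    trace = np.zeros(N)      # low-pass filtered (recent past) presynaptic rates
    for t in range(n_steps):
        ext = np.zeros(N)
        ext[(t // stim_duration) % N] = 1.0          # sequential external drive
        total_input = W @ r - g_inh * r.sum() + ext  # recurrent exc. + global inh.
        r = r + (dt / tau) * (-r + phi(total_input))
        # temporally asymmetric Hebbian term: post (now) x pre (recent past)
        W += eta * np.outer(r, trace)
        # multiplicative homeostasis: rescale rows whose summed input exceeds w_max
        row_sums = W.sum(axis=1, keepdims=True)
        W *= np.where(row_sums > w_max, w_max / np.maximum(row_sums, 1e-12), 1.0)
        trace += (dt / tau) * (-trace + r)
    return W

# Slowly changing stimuli are expected to favor self-connections (PA-like),
# rapidly changing stimuli to favor feed-forward connections (SA-like).
W_slow = learn(stim_duration=200)
W_fast = learn(stim_duration=20)
for label, W in [("slow stimuli", W_slow), ("fast stimuli", W_fast)]:
    print(f"{label}: mean self-weight {np.diag(W).mean():.3f}, "
          f"mean feed-forward weight {np.diag(W, k=-1).mean():.3f}")
```

In this sketch the only quantity varied between the two runs is the stimulus duration, so any difference between self-weights and feed-forward weights reflects the interaction of the input timescale with the asymmetric Hebbian trace, which is the dependence on input statistics highlighted in the abstract.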

dc.identifier.issn

1662-5188

dc.identifier.uri

https://hdl.handle.net/10161/23346

dc.language

eng

dc.publisher

Frontiers Media SA

dc.relation.ispartof

Frontiers in computational neuroscience

dc.relation.isversionof

10.3389/fncom.2019.00097

dc.subject

Hebbian plasticity

dc.subject

homeostatic plasticity

dc.subject

persistent activity

dc.subject

sequential activity

dc.subject

synaptic plasticity

dc.subject

unsupervised learning

dc.title

Unsupervised Learning of Persistent and Sequential Activity.

dc.type

Journal article

pubs.begin-page

97

pubs.organisational-group

School of Medicine

pubs.organisational-group

Physics

pubs.organisational-group

Neurobiology

pubs.organisational-group

Duke Institute for Brain Sciences

pubs.organisational-group

Center for Cognitive Neuroscience

pubs.organisational-group

Duke

pubs.organisational-group

Trinity College of Arts & Sciences

pubs.organisational-group

Basic Science Departments

pubs.organisational-group

University Institutes and Centers

pubs.organisational-group

Institutes and Provost's Academic Units

pubs.publication-status

Published

pubs.volume

13

Files

Original bundle

Name:
Unsupervised Learning of Persistent and Sequential Activity.pdf
Size:
7.9 MB
Format:
Adobe Portable Document Format