Unsupervised Learning of Persistent and Sequential Activity.
Abstract
Two strikingly distinct types of activity have been observed in various brain structures
during delay periods of delayed response tasks: Persistent activity (PA), in which
a sub-population of neurons maintains an elevated firing rate throughout an entire
delay period; and Sequential activity (SA), in which sub-populations of neurons are
activated sequentially in time. It has been hypothesized that both types of dynamics
can be "learned" by the relevant networks from the statistics of their inputs, thanks
to mechanisms of synaptic plasticity. However, the necessary conditions for a synaptic
plasticity rule and input statistics to learn these two types of dynamics in a stable
fashion are still unclear. In particular, it is unclear whether a single learning
rule is able to learn both types of activity patterns, depending on the statistics
of the inputs driving the network. Here, we first characterize the complete bifurcation
diagram of a firing rate model of multiple excitatory populations with an inhibitory
mechanism, as a function of the parameters characterizing its connectivity. We then
investigate how an unsupervised temporally asymmetric Hebbian plasticity rule shapes
the dynamics of the network. Consistent with previous studies, we find that for stable
learning of PA and SA, an additional stabilization mechanism is necessary. We show
that a generalized version of the standard multiplicative homeostatic plasticity (Renart
et al., 2003; Toyoizumi et al., 2014) stabilizes learning by effectively masking excitatory
connections during stimulation and unmasking those connections during retrieval. Using
the bifurcation diagram derived for fixed connectivity, we study analytically the
temporal evolution and the steady state of the learned recurrent architecture as a
function of parameters characterizing the external inputs. Slowly changing stimuli lead
to PA, while fast-changing stimuli lead to SA. Our model thus shows how a network
with plastic synapses can stably and flexibly learn PA and SA in an unsupervised manner.
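The ingredients described in the abstract can be illustrated with a toy simulation. The sketch below is not the authors' published model: the transfer function, all parameter values, the trace-based form of the temporally asymmetric Hebbian update, and the simple row-normalization stand-in for the generalized multiplicative homeostatic rule are assumptions chosen for demonstration only. It shows how, with fast-changing stimuli, a temporally asymmetric rule builds asymmetric (feedforward) weight structure of the kind that supports sequential activity.

```python
import numpy as np

# Illustrative sketch (assumed parameters, NOT the authors' published model):
# N excitatory rate units with global inhibition, a temporally asymmetric
# Hebbian rule (postsynaptic rate x low-pass-filtered presynaptic rate),
# and a simple multiplicative cap on total incoming weight as a stand-in
# for homeostatic normalization.

N = 40
dt, tau_r, tau_trace = 1e-3, 20e-3, 50e-3   # seconds
eta = 0.5        # Hebbian learning rate
g_inh = 1.0      # strength of global (mean-rate) inhibition
w_cap = 2.0      # maximum total incoming excitatory weight per unit

W = np.zeros((N, N))      # recurrent excitatory weights, W[post, pre]
r = np.zeros(N)           # firing rates
trace = np.zeros(N)       # low-pass-filtered presynaptic rates

def phi(x):
    """Saturating threshold-linear transfer function (an assumption)."""
    return np.clip(x, 0.0, 1.0)

def step(ext):
    """One Euler step of rates, asymmetric Hebbian plasticity, homeostasis."""
    global r, trace, W
    drive = W @ r + ext - g_inh * r.mean()
    r = r + dt / tau_r * (-r + phi(drive))
    trace = trace + dt / tau_trace * (-trace + r)
    # temporal asymmetry: pre active shortly BEFORE post potentiates W[post, pre]
    W = W + eta * dt * np.outer(r, trace)
    np.fill_diagonal(W, 0.0)
    # multiplicative homeostasis: scale down rows exceeding the cap
    row = W.sum(axis=1, keepdims=True)
    W = W * np.minimum(1.0, w_cap / np.maximum(row, 1e-12))

# four non-overlapping patterns presented in a fixed order ("fast" stimuli,
# 50 ms each, comparable to the trace time constant)
patterns = [np.zeros(N) for _ in range(4)]
for k, p in enumerate(patterns):
    p[k * 10:(k + 1) * 10] = 1.0

for epoch in range(5):
    for p in patterns:
        for _ in range(int(0.05 / dt)):
            step(2.0 * p)

# with fast stimuli the learned matrix is asymmetric: units of pattern k+1
# receive stronger weights from pattern k than vice versa
fwd = W[10:20, 0:10].mean()   # pattern 0 -> pattern 1
bwd = W[0:10, 10:20].mean()   # pattern 1 -> pattern 0
```

In this sketch, lengthening each presentation well beyond `tau_trace` would instead make within-pattern (roughly symmetric) weights dominate, the regime associated with persistent rather than sequential activity in the abstract.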
Type: Journal article
Subject: Hebbian plasticity; homeostatic plasticity; persistent activity; sequential activity; synaptic plasticity; unsupervised learning
Permalink: https://hdl.handle.net/10161/23346
Published Version (Please cite this version): 10.3389/fncom.2019.00097
Publication Info: Pereira, Ulises, & Brunel, Nicolas (2019). Unsupervised Learning of Persistent and Sequential Activity. Frontiers in Computational Neuroscience, 13, 97. doi:10.3389/fncom.2019.00097. Retrieved from https://hdl.handle.net/10161/23346. This is constructed from limited available data and may be imprecise. To cite this article, please review and use the official citation provided by the journal.
Scholars@Duke
Nicolas Brunel
Duke School of Medicine Distinguished Professor in Neuroscience
We use theoretical models of brain systems to investigate how they process and learn
information from their inputs. Our current work focuses on the mechanisms of learning
and memory, from the synapse to the network level, in collaboration with various experimental
groups. Using methods from statistical physics, we have shown recently that the synaptic
connectivity of a network that maximizes storage capacity reproduces two key experimentally
observed features: low connection probability

Articles written by Duke faculty are made available through the campus open access policy. For more information, see the Duke Open Access Policy.
Rights for Collection: Scholarly Articles
Works are deposited here by their authors and represent their research and opinions, not those of Duke University. Some materials and descriptions may include offensive content.