Neural Manifolds, Just in Time: Real-time Dimensionality Reduction for Neural Populations
Date
2024
Abstract
In the last decade, systems neuroscience has experienced an explosion in the volume and complexity of recorded data, reinforcing a theoretical interest in the coding properties of populations of neurons. The emerging "neural population doctrine" holds that, rather than single neurons, the fundamental coding unit in the brain is the neural population, and that the dynamics of populations reveal the structure of neural computations. Indeed, given that tens of thousands of neurons can now be recorded simultaneously (a number that is growing super-exponentially), it is difficult to imagine tractable scientific progress if analyses aim to describe every individual neuron in a population. Critical, then, to scalable analyses of neuronal activity is the use of dimensionality reduction, which includes a broad range of algorithms designed to produce compressed representations of input data. That is, if many neurons can be accurately represented through a smaller number of factors due to shared variability among the neurons, then dimensionality reduction methods may allow theoretical analyses to keep pace with the development of experimental technologies.
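To make this intuition concrete, here is a minimal Python sketch (NumPy only, with illustrative sizes chosen for this example) of how shared variability lets a few factors summarize many neurons:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a population of 100 neurons driven by 3 shared latent factors,
# plus independent noise. (Sizes are arbitrary, chosen for illustration.)
n_neurons, n_timepoints, n_latents = 100, 1000, 3
latents = rng.standard_normal((n_timepoints, n_latents))
loading = rng.standard_normal((n_latents, n_neurons))
activity = latents @ loading + 0.1 * rng.standard_normal((n_timepoints, n_neurons))

# PCA via the SVD of the mean-centered data: a few components capture
# almost all of the shared variability across the population.
centered = activity - activity.mean(axis=0)
s = np.linalg.svd(centered, compute_uv=False)
explained = np.cumsum(s**2) / np.sum(s**2)
print(f"variance explained by 3 components: {explained[2]:.3f}")
```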
In this dissertation I present streaming algorithms which learn compressed representations of neural population activity by observing a dataset in small portions. I focus on streaming methods not only because traditional methods scale poorly to large datasets, but also because neuroscience studies increasingly employ complex stimuli and task designs, resulting in loosely constrained behaviors and less robust neural dynamics; streaming algorithms allow for highly flexible learning of signals in such settings. Briefly, Chapter 2 of this dissertation presents novel streaming dimensionality reduction methods to address challenges in the real-time inference and prediction of low-dimensional neural dynamics; Chapters 3 and 4 motivate and present novel models for inferring temporally sparse dynamics, exploring the idea of dimensionality in a framework of low-dimensional neural manifolds which vary in time; and Chapter 5 extends this model to integer spike count data.
First, Chapter 2 addresses computational challenges in performing dimensionality reduction during adaptive or closed-loop experimental paradigms, which involve the delivery of an intervention contingent on a brain state. Such experiments are necessary to establish a causal link between low-dimensional dynamics and neural codes. Streaming dimensionality reduction methods are necessary for real-time inference of low-dimensional dynamics, but existing streaming methods suffer from two key issues: an inability to scale to real-time speeds, and poor stability of the inferred low-dimensional spaces. This chapter presents a novel streaming method which is both highly scalable and highly stable, enabling real-time inference and prediction of future low-dimensional neural trajectories. Publication: Draelos et al. (2021).
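As an illustration of the streaming setting only, and not of the method developed in this chapter, the sketch below uses scikit-learn's generic IncrementalPCA to update a low-dimensional subspace batch by batch; acquire_batch is a hypothetical stand-in for a real-time data source:

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(1)

# Hypothetical stand-in for a real-time acquisition loop: each "batch" is a
# short window of population activity arriving during the experiment.
def acquire_batch(n_samples=50, n_neurons=100):
    return rng.standard_normal((n_samples, n_neurons))

# Generic streaming baseline, shown only to illustrate the closed-loop
# setting; it is not the method developed in Chapter 2.
ipca = IncrementalPCA(n_components=5)
for _ in range(20):                      # e.g., 20 windows of data
    batch = acquire_batch()
    ipca.partial_fit(batch)              # update the subspace incrementally
    trajectory = ipca.transform(batch)   # low-dimensional state, available
                                         # for contingent interventions
```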
Next, in Chapter 3 I turn to recent findings about the dimensionality of neural population activity; motivated by these findings, I hypothesize that dimensionality may be overestimated in neural population recordings. In essence, I argue that the notion of a neural manifold with a static dimensionality depends on dense, smooth latent dynamics. That is, if low-dimensional dynamics are sparse and non-smooth, then the dimensionality of the resulting neural manifold can change drastically over short periods of time. In other words, are neural manifolds better characterized as having a static dimensionality or as having a dimensionality which changes over time? I address this question by first showing in synthetic data that PCA overestimates dimensionality in temporally sparse data. Iterating through several approaches, I show that methods which penalize matrix norms to encourage sparse transitions perform poorly. Chapter 4 then presents a hidden Markov model (HMM) based approach for estimating time-varying manifolds, which finds more success than the matrix factorization approaches.
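The overestimation argument can be reproduced with a small synthetic example. In the sketch below (sizes and noise levels are arbitrary choices for illustration), the latents are constructed so that only one dimension is active at any moment, so the instantaneous dimensionality is 1 throughout, yet global PCA on the full recording reports roughly six dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Temporally sparse latents: 6 latent dimensions, but only one is active in
# any given segment, so the instantaneous dimensionality is always 1.
n_latents, seg_len, n_neurons = 6, 200, 100
latents = np.zeros((n_latents * seg_len, n_latents))
for k in range(n_latents):
    latents[k * seg_len:(k + 1) * seg_len, k] = rng.standard_normal(seg_len)

activity = latents @ rng.standard_normal((n_latents, n_neurons))
activity += 0.05 * rng.standard_normal(activity.shape)

# Global PCA needs ~6 components to explain the recording, even though no
# single moment in time ever uses more than one.
centered = activity - activity.mean(axis=0)
s = np.linalg.svd(centered, compute_uv=False)
frac = s**2 / np.sum(s**2)
print("components needed for 90% variance:",
      int(np.searchsorted(np.cumsum(frac), 0.9)) + 1)
```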
I finally address the challenge of modeling low-dimensional dynamics in datasets where neurons have low firing rates in Chapter 5. Many existing latent dynamics methods for spiking neural populations assume that the observed data are smoothed or trial-averaged. However, this preprocessing may not be appropriate for datasets with low spike counts. Methods which assume non-smooth observations often still assume smooth low-dimensional dynamics, but such methods produce uninterpretable dynamics for low-firing-rate datasets because smooth dynamics provide a poor basis for summarizing neurons with few spikes. With this reasoning, I extend the model from Chapter 4 to infer temporally sparse latent dynamics, i.e., dynamics which are non-smooth due to a binary structure of activation, from integer-count observed data. I show that this model, termed SPLAT, better captures the properties of low-spike-count datasets than models with smooth observations or smooth latent dynamics.
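To illustrate the kind of data Chapter 5 targets (not the SPLAT model itself), the sketch below draws integer spike counts from a Poisson observation model whose low-dimensional latents are gated by sparse binary activations; all parameter values are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(3)

# Generative sketch in the spirit of Chapter 5 (not the actual SPLAT model):
# binary on/off states gate the latent dimensions, and observed data are
# integer spike counts drawn from a Poisson observation model.
n_latents, n_timepoints, n_neurons = 4, 500, 60
on_off = rng.random((n_timepoints, n_latents)) < 0.1      # sparse activations
latents = on_off * rng.standard_normal((n_timepoints, n_latents))

weights = 0.5 * rng.standard_normal((n_latents, n_neurons))
baseline = -2.5                                           # low firing rates
rates = np.exp(latents @ weights + baseline)              # per-bin Poisson rates
counts = rng.poisson(rates)                               # integer spike counts
print(f"mean spikes per bin: {counts.mean():.3f}")        # sparse, low-count data
```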
Citation
Gupta, Pranjal (2024). Neural Manifolds, Just in Time: Real-time Dimensionality Reduction for Neural Populations. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/30874.