Browsing by Author "Pearson, John"
Item (Open Access): Confidence and gradation in causal judgment (Cognition, 2022-06)
O'Neill, Kevin; Henne, Paul; Bello, Paul; Pearson, John; De Brigard, Felipe

When comparing the roles of the lightning strike and the dry climate in causing the forest fire, one might think that the lightning strike is more of a cause than the dry climate, or one might think that the lightning strike completely caused the fire while the dry conditions did not cause it at all. Psychologists and philosophers have long debated whether such causal judgments are graded; that is, whether people treat some causes as stronger than others. To address this debate, we first reanalyzed data from four recent studies. We found that causal judgments were actually multimodal: although most causal judgments made on a continuous scale were categorical, there was also some gradation. We then tested two competing explanations for this gradation: the confidence explanation, which states that people make graded causal judgments because they have varying degrees of belief in causal relations, and the strength explanation, which states that people make graded causal judgments because they believe that causation itself is graded. Experiment 1 tested the confidence explanation and showed that gradation in causal judgments was indeed moderated by confidence: people tended to make graded causal judgments when they were unconfident, but more categorical causal judgments when they were confident. Experiment 2 tested the causal strength explanation and showed that although confidence still explained variation in causal judgments, it did not explain away the effects of normality, causal structure, or the number of candidate causes. Overall, we found that causal judgments were multimodal and that people make graded judgments both when they think a cause is weak and when they are uncertain about its causal role.

Item (Open Access): Measuring and Modeling Confidence in Human Causal Judgment (2021-10-25)
O'Neill, Kevin; Henne, Paul; Pearson, John; De Brigard, Felipe

The human capacity for causal judgment has long been thought to depend on an ability to consider counterfactual alternatives: the lightning strike caused the forest fire because, had it not struck, the forest fire would not have ensued. To accommodate psychological effects on causal judgment, a range of recent accounts of causal judgment have proposed that people probabilistically sample counterfactual alternatives from which they compute a graded index of causal strength. While such models have had success in describing the influence of probability on causal judgments, among other effects, we show that these models make further untested predictions: probability should also influence people's metacognitive confidence in their causal judgments. In a large sample of participants (N = 3020) in a causal judgment task, we found evidence that normality indeed influences people's confidence in their causal judgments and that these influences were predicted by a counterfactual sampling model. We take this result as supporting evidence for existing Bayesian accounts of causal judgment.
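The counterfactual sampling idea described above can be illustrated with a minimal, hypothetical Monte Carlo sketch (not the exact model tested in these papers): in a disjunctive scenario where the effect E occurs if either the candidate cause C or an alternative cause A occurs, the causal strength of C is estimated as the fraction of sampled counterfactual worlds in which imagining C absent changes E.

```python
import random

random.seed(0)

def causal_strength(p_alternative, n_samples=100_000):
    """Estimate P(E changes when C is imagined absent) by sampling."""
    changed = 0
    for _ in range(n_samples):
        a = random.random() < p_alternative  # sample the alternative cause A
        e_actual = True                      # C occurred, so E occurred (E = C or A)
        e_counterfactual = a                 # with C imagined absent, E requires A
        changed += e_actual != e_counterfactual
    return changed / n_samples

# A rare (abnormal) alternative cause makes C change the outcome in most
# sampled worlds; a common (normal) alternative rarely does. This is one
# way sampling accounts capture normality effects on causal judgment.
print(round(causal_strength(0.1), 2))  # ~0.9: C judged a strong cause
print(round(causal_strength(0.9), 2))  # ~0.1: C judged a weak cause
```

On this toy account, the spread of the sampled estimate also supplies a natural notion of confidence, which is the kind of metacognitive prediction the second paper tests.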
Item (Embargo): Neural Manifolds, Just in Time: Real-time Dimensionality Reduction for Neural Populations (2024)
Gupta, Pranjal

In the last decade, systems neuroscience has experienced an explosion in the volume and complexity of recorded data, reinforcing a theoretical interest in the coding properties of populations of neurons. The emerging "neural population doctrine" holds that the fundamental coding unit in the brain is the neural population rather than the single neuron, and that the dynamics of populations reveal the structure of neural computations. Indeed, given that tens of thousands of neurons can now be recorded simultaneously (a number that is growing super-exponentially), it is difficult to imagine tractable scientific progress if analyses aim to describe every individual neuron in a population. Critical to scalable analyses of neuronal activity, then, is dimensionality reduction: a broad range of algorithms designed to produce compressed representations of input data. That is, if many neurons can be accurately represented by a smaller number of factors due to shared variability among the neurons, then dimensionality reduction methods may allow theoretical analyses to keep pace with the development of experimental technologies.
In this dissertation I present streaming algorithms that learn compressed representations of neural population activity by observing small portions of a dataset at a time. I focus on streaming methods not only because traditional methods scale poorly to large datasets, but also because neuroscience studies increasingly employ complex stimuli and task designs, resulting in loosely constrained behaviors and less robust neural dynamics; streaming algorithms allow for highly flexible learning of signals in such settings. Briefly, Chapter 2 presents novel streaming dimensionality reduction methods that address challenges in the real-time inference and prediction of low-dimensional neural dynamics; Chapters 3 and 4 motivate and present novel models for inferring temporally sparse dynamics, exploring the idea of dimensionality in a framework of low-dimensional neural manifolds that vary in time; and Chapter 5 extends this model to integer spike count data.
First, Chapter 2 addresses computational challenges in performing dimensionality reduction during adaptive or closed-loop experimental paradigms, in which an intervention is delivered contingent on a brain state. Such experiments are necessary to establish a causal link between low-dimensional dynamics and neural codes. Streaming dimensionality reduction methods are necessary for real-time inference of low-dimensional dynamics, but existing streaming methods suffer from two key issues: they do not scale to real-time speeds, and the low-dimensional spaces they infer are unstable. This chapter presents a novel streaming method that is both highly scalable and highly stable, enabling real-time inference and prediction of future low-dimensional neural trajectories. Publication: Draelos et al. (2021).
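Streaming subspace estimation of the kind this chapter concerns can be illustrated with Oja's subspace rule, a classic streaming PCA algorithm. This is a generic textbook sketch, not the dissertation's method, and the population size, learning rate, and noise level are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated neural population whose activity lies near a 2-D subspace.
n_neurons, n_latent, n_steps = 50, 2, 5000
W_true = rng.standard_normal((n_neurons, n_latent))

# Oja's subspace rule: a streaming estimator of the top principal
# subspace. Each incoming sample updates the estimate in O(neurons x
# latents) time; the full dataset is never stored.
W = 0.1 * rng.standard_normal((n_neurons, n_latent))
lr = 1e-3
for _ in range(n_steps):
    z = rng.standard_normal(n_latent)                       # latent state
    x = W_true @ z + 0.05 * rng.standard_normal(n_neurons)  # one observation
    y = W.T @ x                                             # project onto estimate
    W += lr * (np.outer(x, y) - W @ np.outer(y, y))         # Oja update

# The learned subspace should align with the true one: all singular
# values of Q_true^T Q_est near 1 indicate full subspace overlap.
Q_true = np.linalg.qr(W_true)[0]
Q_est = np.linalg.qr(W)[0]
alignment = np.linalg.svd(Q_true.T @ Q_est, compute_uv=False)
print(alignment.min())
```

Per-sample updates of this kind are what make real-time, closed-loop use feasible, though stability of the inferred space across time is exactly the issue the chapter's method targets.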
Next, in Chapter 3 I turn to recent findings about the dimensionality of neural population activity, and motivated by these findings I hypothesize that dimensionality may be overestimated in neural population recordings. In essence, I argue that the notion of a neural manifold with a static dimensionality depends on dense, smooth latent dynamics: if low-dimensional dynamics are instead sparse and non-smooth, then the dimensionality of the resulting neural manifold can change drastically over short periods of time. In other words, are neural manifolds better characterized as having a static dimensionality or a dimensionality that changes over time? I address this question by first showing in synthetic data that PCA overestimates dimensionality in temporally sparse data. I then show that matrix factorization methods which penalize matrix norms to encourage sparse transitions perform poorly. Chapter 4 presents a hidden Markov model-based approach for estimating time-varying manifolds, which proves more successful than the matrix factorization approaches.
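The overestimation argument can be reproduced in a toy simulation (an illustrative sketch with arbitrary sizes, not the dissertation's analysis): when latents activate one at a time, the data are one-dimensional at any instant, yet global PCA spreads variance over the full latent count.

```python
import numpy as np

rng = np.random.default_rng(1)

# Temporally sparse latents: 5 latent directions, but only ONE is
# active during any given time segment.
n_neurons, n_latent, seg_len = 40, 5, 200
loadings = np.linalg.qr(rng.standard_normal((n_neurons, n_latent)))[0]

segments = [np.outer(rng.standard_normal(seg_len), loadings[:, k])
            for k in range(n_latent)]
X = np.concatenate(segments)
X += 0.01 * rng.standard_normal(X.shape)  # small observation noise

def n_components_90(data):
    """Number of principal components capturing 90% of the variance."""
    evals = np.linalg.eigvalsh(np.cov(data.T))[::-1]
    return int(np.searchsorted(np.cumsum(evals / evals.sum()), 0.9)) + 1

# Globally, PCA reports all 5 directions, even though any single
# segment of the recording is one-dimensional.
print(n_components_90(X))            # global estimate: ~5
print(n_components_90(X[:seg_len]))  # within one segment: 1
```

The gap between the global and within-segment estimates is the sense in which a static dimensionality can overstate the instantaneous structure of the manifold.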
Finally, in Chapter 5 I address the challenge of modeling low-dimensional dynamics in datasets where neurons have low firing rates. Many existing latent dynamics methods for spiking neural populations assume that the observed data are smooth or trial-averaged, but this preprocessing may not be appropriate for datasets with low spike counts. Methods that allow non-smooth observations often still assume smooth low-dimensional dynamics, yet such methods produce uninterpretable dynamics for low-firing-rate datasets because smooth dynamics provide a poor basis for summarizing neurons with few spikes. With this reasoning, we extend the model from Chapter 4 to infer temporally sparse latent dynamics (dynamics which are non-smooth due to a binary structure of activation) from integer count observations. I show that this model, termed SPLAT, better captures the properties of low-spiking datasets than models with smooth observations or smooth latent dynamics.
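The data regime described here can be illustrated with a hypothetical generative sketch (not the SPLAT model itself; all sizes and rates are assumptions): binary latent activations drive integer spike counts at low firing rates.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical generative sketch: latents switch on and off in a binary
# fashion, and neurons emit integer spike counts at low rates.
n_neurons, n_latent, n_bins = 30, 3, 600
loadings = 0.5 * np.abs(rng.standard_normal((n_neurons, n_latent)))

# Binary activation: each latent is on only during one block of time.
gates = np.zeros((n_bins, n_latent))
for k in range(n_latent):
    gates[200 * k: 200 * (k + 1), k] = 1.0

base_rate = 0.1                          # low baseline (spikes per bin)
rates = base_rate + gates @ loadings.T   # (n_bins, n_neurons) Poisson rates
counts = rng.poisson(rates)              # integer spike counts

# Most bins contain zero spikes for any given neuron, which is why
# smoothing or Gaussian observation models fit such data poorly.
print((counts == 0).mean() > 0.5)
```

A model in this regime has to explain mostly-zero integer observations directly, rather than a smoothed rate estimate, which is the motivation for pairing count observations with temporally sparse latents.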
Item (Open Access): The Modal and Metacognitive Nature of Causal Judgment (2024)
O'Neill, Kevin Guy

Why did the car accident occur? How do we stop the recent rise in inflation? Which player is responsible for the team winning the game? In daily life, we are constantly presented with questions such as these about the causes of events. Given its prevalence and importance, we should hope to understand how people make causal judgments. But driven by a longstanding debate in philosophy, the psychology of causal judgment is fragmented between two concepts of causation. Productive concepts follow the intuition that causes interact with their effects through a chain of transmissions of quantities like force and energy. Dependence concepts, on the other hand, assume that causes make a difference to their effects: if the cause had been different, the effect would also have been different. In this dissertation, I present six experiments demonstrating that causal judgment has a modal and a metacognitive character, and I argue that dependence concepts alone can explain both. Specifically, in Chapters 2 and 3, I find that people make causal judgments in ways consistent with the idea that they do so by imagining alternative possibilities, and that they even move their eyes to visually imagine these possibilities. In Chapters 4 and 5, I find that people qualify their causal judgments by their confidence in these judgments in systematic ways. Throughout these six experiments, the patterns in causal judgments are well described by a particular family of dependence models known as counterfactual sampling models; productive concepts of causation are unable to make similar predictions. I conclude by suggesting that people make causal judgments by imagining alternative possibilities and by discussing the implications of this result for psychology and philosophy.