Contributions of Bayesian and Discriminative Models to Active Visual Perception across Saccades
Date: 2022
Abstract
The brain must interpret sensory inputs to guide movement and behavior, but movements themselves disrupt sensory inputs. Maintaining perceptual continuity through these disruptions requires one to resolve whether sensory inputs were externally generated or caused by one’s own movements. Understanding the sensory world while moving through it constitutes active perception. Saccadic eye movements in primates are a good model system for studying active perception. Eye movements displace the image sensed by the eye, yet the visual system can distinguish movement-induced displacement from external object displacement. How does the brain resolve this uncertainty?
One way is to discriminate directly between sensory states and map them onto percepts in a bottom-up manner. Alternatively, the system could develop an internal model of the world and use it to generate predictions about its sensory inputs. When the input is ambiguous, the system can then rely more heavily on those predictions for perception. Bayes' rule formalizes how internally generated predictions can compensate for sensory uncertainty. The goal of this dissertation is to investigate the relative contributions of Discriminative and Bayesian processes to active visual perception across saccades.
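The Bayesian idea above can be illustrated with a minimal sketch, not taken from the dissertation: combining a Gaussian prior with a Gaussian sensory likelihood, where all variable names and numeric values are illustrative assumptions. The point it shows is that the weight on the prior grows with sensory variance, so under high uncertainty the estimate defaults toward the internal prediction.

```python
def posterior_mean(prior_mean, prior_var, measurement, sensory_var):
    """Combine a Gaussian prior with a Gaussian likelihood via Bayes' rule.

    The prior's weight grows as sensory variance grows, so under high
    sensory uncertainty the estimate is pulled toward the prediction.
    """
    w_prior = sensory_var / (sensory_var + prior_var)  # weight on the prior
    return w_prior * prior_mean + (1 - w_prior) * measurement


# Hypothetical scenario: the prior predicts no displacement (0 degrees),
# while a noisy measurement suggests a displacement of 2 degrees.
low_noise = posterior_mean(0.0, 1.0, 2.0, sensory_var=0.5)   # trusts the data
high_noise = posterior_mean(0.0, 1.0, 2.0, sensory_var=5.0)  # leans on the prior
assert high_noise < low_noise  # more sensory noise -> estimate nearer the prior
```

Under these assumed numbers, the low-noise estimate stays close to the measurement while the high-noise estimate is drawn back toward zero, which is the pattern described as Bayesian behavior in the continuous-estimation task.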
We performed a series of psychophysical, computational, and neural recording experiments grounded in variations of a task known as "saccadic suppression of displacement," in which subjects report whether a visual object moved while they made a saccade. First, we found that when humans provided continuous estimates of where an object landed across a saccade, their behavior followed a Bayesian model: they used internally generated predictions, or priors, to compensate for sensory uncertainty. However, when asked to provide a categorical report ("Did the object move, yes or no?") in the same task, they were Anti-Bayesian: they relied on their priors less as uncertainty increased. Further investigation in another primate species, rhesus macaques, showed that in the categorical task, priors were used more to compensate for motor-induced uncertainty generated by the saccade. When visual noise was added to the viewed object, however, prior use was Anti-Bayesian, consistent with the results from human participants. This decreasing prior use was instead explained by a Discriminative neural network model. In the macaques, we then recorded single-neuron activity during the categorical tasks in a brain region known to signal object displacement across saccades, the Frontal Eye Field (FEF). We compared FEF activity to Bayesian and Discriminative behavior in the motor- and image-noise tasks, respectively. The results showed a clear distinction: the activity of FEF neurons predicted Discriminative but not Bayesian behavior.
In summary, we show that the selection of Bayesian vs. Discriminative models depends on both task requirements and the source of uncertainty. Further, a neural pathway that includes FEF selectively predicts behavior consistent with the use of a Discriminative model, implying that the Bayesian model is implemented in a different circuit. These results demonstrate a dissociation between Bayesian and Discriminative models at the computational and neural levels and set the stage for understanding how they interact for perception across saccades.
Citation
Subramanian, Divya (2022). Contributions of Bayesian and Discriminative Models to Active Visual Perception across Saccades. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/25250.
Duke's student scholarship is made available to the public under a Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) license.