Browsing by Author "Cogan, Gregory B"
Now showing 1-5 of 5
Item Open Access
A mutual information analysis of neural coding of speech by low-frequency MEG phase information. (J Neurophysiol, 2011-08) Cogan, Gregory B; Poeppel, David
Recent work has implicated low-frequency (<20 Hz) neuronal phase information as important for both auditory (<10 Hz) and speech [theta (∼4-8 Hz)] perception. Activity on the timescale of theta corresponds linguistically to the average length of a syllable, suggesting that information within this range has consequences for segmentation of meaningful units of speech. Longer timescales that correspond to lower frequencies [delta (1-3 Hz)] also reflect important linguistic features (prosodic/suprasegmental), but it is unknown whether the patterns of activity in this range are similar to theta. We investigate low-frequency activity with magnetoencephalography (MEG) and mutual information (MI), an analysis that has not yet been applied to noninvasive electrophysiological recordings. We find that during speech perception each frequency subband examined [delta (1-3 Hz), theta-low (3-5 Hz), theta-high (5-7 Hz)] processes independent information from the speech stream. This contrasts with hypotheses that either delta and theta reflect their corresponding linguistic levels of analysis or each band is part of a single holistic onset response that tracks global acoustic transitions in the speech stream. Single-trial template-based classifier results further validate this finding: information from each subband can be used to classify individual sentences, and classifiers that utilize the combination of frequency bands perform better than single bands alone. Our results suggest that during speech perception the low-frequency phase of the MEG signal corresponds to neither abstract linguistic units nor holistic evoked potentials but rather tracks different aspects of the input signal. This study also validates a new method of analysis for noninvasive electrophysiological recordings that can be used to formally characterize the information content of neural responses and the interactions between these responses. Furthermore, it bridges results from different levels of neurophysiological study: small-scale multiunit recordings and local field potentials, and macroscopic magneto-/electrophysiological noninvasive recordings.
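To make the subband-independence analysis concrete, here is a minimal sketch of a histogram ("plug-in") MI estimate between the phases of two of the subbands named above. This is an illustration under stated assumptions, not the authors' pipeline: the sampling rate, filter order, and bin count are hypothetical choices, and a synthetic signal stands in for real MEG data.

```python
# Minimal sketch (not the authors' pipeline): estimate mutual information
# between the phases of two low-frequency subbands of one synthetic channel.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_phase(x, lo, hi, fs):
    """Bandpass-filter x to [lo, hi] Hz and return its instantaneous phase."""
    b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x)))

def mutual_information(p1, p2, n_bins=8):
    """Plug-in (histogram) MI estimate, in bits, between two phase series."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    joint, _, _ = np.histogram2d(p1, p2, bins=[edges, edges])
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / np.outer(px, py)[nz])))

fs = 250.0                                 # assumed sampling rate (Hz)
meg = np.random.default_rng(0).standard_normal(int(60 * fs))  # stand-in channel
delta = band_phase(meg, 1.0, 3.0, fs)      # delta (1-3 Hz) phase
theta_low = band_phase(meg, 3.0, 5.0, fs)  # theta-low (3-5 Hz) phase
print(f"MI(delta; theta-low) = {mutual_information(delta, theta_low):.4f} bits")
```

Plug-in estimators are biased upward for finite data, so analyses of this kind are typically compared against a shuffled (surrogate) null distribution rather than read as raw bit values.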
Item Open Access
I see what you are saying. (Elife, 2016-06-09) Cogan, Gregory B
The motor cortex in the brain tracks lip movements to help with speech perception.

Item Open Access
Manipulating stored phonological input during verbal working memory. (Nat Neurosci, 2017-02) Cogan, Gregory B; Iyer, Asha; Melloni, Lucia; Thesen, Thomas; Friedman, Daniel; Doyle, Werner; Devinsky, Orrin; Pesaran, Bijan
Verbal working memory (vWM) involves storing and manipulating information in phonological sensory input. An influential theory of vWM proposes that manipulation is carried out by a central executive while storage is performed by two interacting systems: a phonological input buffer that captures sound-based information and an articulatory rehearsal system that controls speech motor output. Whether, when and how neural activity in the brain encodes these components remains unknown. Here we read out the contents of vWM from neural activity in human subjects as they manipulated stored speech sounds. As predicted, we identified storage systems that contained both phonological sensory and articulatory motor representations. Unexpectedly, however, we found that manipulation did not involve a single central executive but rather two systems with distinct contributions to successful manipulation. We propose, therefore, that multiple subsystems comprise the central executive needed to manipulate stored phonological input for articulatory motor output in vWM.
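"Reading out the contents of vWM" here means single-trial decoding of the stored sound from neural activity. As a toy illustration of one simple approach (the same template-matching idea used for sentence classification in the first item above), the sketch below averages training trials into a per-class template and assigns a held-out trial to the best-correlated template. All data, shapes, and class labels are hypothetical.

```python
# Toy sketch of template-based single-trial decoding (illustrative only):
# a test trial is assigned to the class whose mean training template it
# correlates with most strongly.
import numpy as np

def fit_templates(trials, labels):
    """trials: (n_trials, n_samples); labels: (n_trials,). Class -> mean template."""
    return {c: trials[labels == c].mean(axis=0) for c in np.unique(labels)}

def decode(trial, templates):
    """Return the class whose template has the highest correlation with trial."""
    scores = {c: np.corrcoef(trial, t)[0, 1] for c, t in templates.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
n_trials, n_samples = 40, 500
labels = rng.integers(0, 2, n_trials)             # two hypothetical speech sounds
signatures = rng.standard_normal((2, n_samples))  # class-specific response shapes
trials = signatures[labels] + 0.5 * rng.standard_normal((n_trials, n_samples))

templates = fit_templates(trials[:-1], labels[:-1])  # hold out the last trial
print("decoded:", decode(trials[-1], templates), "actual:", labels[-1])
```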
Item Open Access
Sensory-motor transformations for speech occur bilaterally. (Nature, 2014-03-06) Cogan, Gregory B; Thesen, Thomas; Carlson, Chad; Doyle, Werner; Devinsky, Orrin; Pesaran, Bijan
Historically, the study of speech processing has emphasized a strong link between auditory perceptual input and motor production output. A kind of 'parity' is essential, as both perception- and production-based representations must form a unified interface to facilitate access to higher-order language processes such as syntax and semantics, believed to be computed in the dominant, typically left, hemisphere. Although various theories have been proposed to unite perception and production, the underlying neural mechanisms are unclear. Early models of speech and language processing proposed that perceptual processing occurred in the left posterior superior temporal gyrus (Wernicke's area) and motor production processes occurred in the left inferior frontal gyrus (Broca's area). Sensory activity was proposed to link to production activity through connecting fibre tracts, forming the left-lateralized speech sensory-motor system. Although recent evidence indicates that speech perception occurs bilaterally, prevailing models maintain that the speech sensory-motor system is left-lateralized and facilitates the transformation from sensory-based auditory representations to motor-based production representations. However, evidence for the lateralized computation of sensory-motor speech transformations is indirect and comes primarily from stroke patients who have speech repetition deficits (conduction aphasia) and from studies using covert speech and haemodynamic functional imaging. Whether the speech sensory-motor system is lateralized, like higher-order language processes, or bilateral, like speech perception, is controversial. Here we use direct neural recordings in subjects performing sensory-motor tasks involving overt speech production to show that sensory-motor transformations occur bilaterally. We demonstrate that electrodes over bilateral inferior frontal, inferior parietal, superior temporal, premotor and somatosensory cortices exhibit robust sensory-motor neural responses during both perception and production in an overt word-repetition task. Using a non-word transformation task, we show that bilateral sensory-motor responses can perform transformations between speech-perception- and speech-production-based representations. These results establish a bilateral sublexical speech sensory-motor system.

Item Open Access
Visual input enhances selective speech envelope tracking in auditory cortex at a "Cocktail Party". (Journal of Neuroscience, 2013-01-23) Zion Golumbic, Elana; Cogan, Gregory B; Schroeder, Charles E; Poeppel, David
Our ability to selectively attend to one auditory signal amid competing input streams, epitomized by the "Cocktail Party" problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared with responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input from an attended speaker enhances cortical selectivity in auditory cortex, leading to a diminished representation of ignored stimuli. We recorded magnetoencephalographic signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker's face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a Cocktail Party setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Because visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to the points in time when to-be-attended acoustic input is expected to arrive. © 2013 the authors.
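The "speech envelope tracking" quantified in this last study can be sketched in a few lines: extract the slow amplitude envelope of each speaker's audio and correlate it with a cortical signal; selective tracking shows up as a stronger correlation with the attended speaker's envelope than with the ignored one. The sketch below is a simplified stand-in for the paper's two complementary methods; the sampling rates, the 10 Hz low-pass cutoff, and the synthetic signals are all assumptions.

```python
# Minimal sketch of selective speech-envelope tracking (illustrative only):
# correlate the Hilbert envelope of each speaker's audio with a neural signal.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, resample

def envelope(audio, fs_audio, fs_neural):
    """Amplitude envelope, low-passed at 10 Hz, resampled to the neural rate."""
    env = np.abs(hilbert(audio))                       # broadband envelope
    b, a = butter(3, 10.0 / (fs_audio / 2), btype="low")
    env = filtfilt(b, a, env)                          # keep slow modulations
    return resample(env, int(len(env) * fs_neural / fs_audio))

fs_audio, fs_neural = 16_000, 200                      # assumed sampling rates (Hz)
rng = np.random.default_rng(1)
attended = rng.standard_normal(fs_audio * 10)          # stand-ins for two speakers
ignored = rng.standard_normal(fs_audio * 10)

env_att = envelope(attended, fs_audio, fs_neural)
env_ign = envelope(ignored, fs_audio, fs_neural)
# Synthetic "auditory cortex" signal that preferentially tracks the attended envelope.
neural = env_att + 0.3 * env_ign + 0.5 * rng.standard_normal(len(env_att))

print(f"r(attended) = {np.corrcoef(neural, env_att)[0, 1]:.2f}")
print(f"r(ignored)  = {np.corrcoef(neural, env_ign)[0, 1]:.2f}")
```

A real analysis would use the recorded MEG signal, and typically temporal response functions or phase-based measures rather than a single correlation, but the attended-versus-ignored contrast is the quantity of interest.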