A mutual information analysis of neural coding of speech by low-frequency MEG phase information.
Abstract
Recent work has implicated low-frequency (<20 Hz) neuronal phase information as important
for both auditory (<10 Hz) and speech [theta (∼4-8 Hz)] perception. Activity on the
timescale of theta corresponds linguistically to the average length of a syllable,
suggesting that information within this range has consequences for segmentation of
meaningful units of speech. Longer timescales that correspond to lower frequencies
[delta (1-3 Hz)] also reflect important linguistic features (prosodic/suprasegmental), but
it is unknown whether the patterns of activity in this range are similar to theta.
We investigate low-frequency activity with magnetoencephalography (MEG) and mutual
information (MI), an analysis that has not yet been applied to noninvasive electrophysiological
recordings. We find that during speech perception each frequency subband examined
[delta (1-3 Hz), theta-low (3-5 Hz), theta-high (5-7 Hz)] processes independent
information from the speech stream. This contrasts with hypotheses that either delta
and theta reflect their corresponding linguistic levels of analysis or each band is
part of a single holistic onset response that tracks global acoustic transitions in
the speech stream. Single-trial template-based classifier results further validate
this finding: information from each subband can be used to classify individual sentences,
and classifiers that utilize the combination of frequency bands provide better
results than single bands alone. Our results suggest that during speech perception
low-frequency phase of the MEG signal corresponds to neither abstract linguistic units
nor holistic evoked potentials but rather tracks different aspects of the input signal.
This study also validates a new method of analysis for noninvasive electrophysiological
recordings that can be used to formally characterize information content of neural
responses and interactions between these responses. Furthermore, it bridges results
from different levels of neurophysiological study: small-scale multiunit recordings
and local field potentials, and macroscopic noninvasive magneto-/electrophysiological
recordings.
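The abstract does not give implementation details, but the core measure it describes, mutual information between the band-limited phase of the speech signal and of the MEG response, can be sketched as follows. This is a minimal illustration and not the authors' published pipeline: the sampling rate, filter design, bin count, MI estimator, and the surrogate signals are all assumptions made for the example.

# Minimal sketch (not the authors' pipeline): band-limit two signals, take their
# instantaneous phase, and estimate mutual information between the binned phase
# distributions. Sampling rate, filter design, bin count, and the surrogate data
# below are illustrative assumptions.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

FS = 200.0  # assumed sampling rate (Hz)

BANDS = {                      # subbands named in the abstract
    "delta":      (1.0, 3.0),
    "theta_low":  (3.0, 5.0),
    "theta_high": (5.0, 7.0),
}

def band_phase(signal, low, high, fs=FS, order=4):
    """Band-pass filter a 1-D signal and return its instantaneous (Hilbert) phase."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return np.angle(hilbert(sosfiltfilt(sos, signal)))

def phase_mi(x_phase, y_phase, n_bins=8):
    """Plug-in mutual information estimate (bits) between two phase series via a 2-D histogram."""
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    joint, _, _ = np.histogram2d(x_phase, y_phase, bins=[edges, edges])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over x bins
    py = pxy.sum(axis=0, keepdims=True)   # marginal over y bins
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Surrogate example: a 4 Hz component shared (with a phase lag and noise) between
# a stand-in "speech envelope" and a stand-in "MEG channel".
rng = np.random.default_rng(0)
t = np.arange(0, 30, 1 / FS)
speech_envelope = np.sin(2 * np.pi * 4 * t) + 0.3 * rng.standard_normal(t.size)
meg_channel = np.sin(2 * np.pi * 4 * t + 0.5) + 0.5 * rng.standard_normal(t.size)

for name, (lo, hi) in BANDS.items():
    mi = phase_mi(band_phase(speech_envelope, lo, hi),
                  band_phase(meg_channel, lo, hi))
    print(f"{name}: {mi:.3f} bits")

In this toy example only the subband containing the shared 4 Hz component should show appreciable mutual information. The abstract's template-based sentence classification would additionally compare single-trial phase patterns like these against per-sentence phase templates, but that step is not sketched here.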
Type
Journal article
Subject
Acoustic Stimulation
Adult
Auditory Pathways
Female
Humans
Magnetoencephalography
Male
Neurons
Speech
Speech Perception
Permalink
https://hdl.handle.net/10161/13995
Published Version (Please cite this version)
10.1152/jn.00075.2011
Publication Info
Cogan, Gregory B., & Poeppel, David (2011). A mutual information analysis of neural coding of speech by low-frequency MEG phase information. J Neurophysiol, 106(2), 554-563. 10.1152/jn.00075.2011. Retrieved from https://hdl.handle.net/10161/13995.
This citation is constructed from limited available data and may be imprecise. To cite this article, please review and use the official citation provided by the journal.
Scholars@Duke
Gregory Cogan
Assistant Professor in Neurology
Dr. Cogan's research focuses on speech, language, and cognition. This research uses
a variety of analytic techniques (e.g., neural power analysis, connectivity measures,
decoding algorithms) and focuses mainly on invasive human recordings (electrocorticography,
or ECoG), but also uses noninvasive methods such as EEG, MEG, and fMRI. Dr. Cogan is
also interested in studying cognitive systems in the context of disease models to
aid recovery and treatment programs.
