Browsing by Subject "Magnetoencephalography"
Now showing 1 - 3 of 3
Item (Open Access): A mutual information analysis of neural coding of speech by low-frequency MEG phase information. (J Neurophysiol, 2011-08) Cogan, Gregory B; Poeppel, David

Recent work has implicated low-frequency (<20 Hz) neuronal phase information as important for both auditory (<10 Hz) and speech [theta (∼4-8 Hz)] perception. Activity on the timescale of theta corresponds linguistically to the average length of a syllable, suggesting that information within this range has consequences for segmentation of meaningful units of speech. Longer timescales that correspond to lower frequencies [delta (1-3 Hz)] also reflect important linguistic features (prosodic/suprasegmental), but it is unknown whether the patterns of activity in this range are similar to theta. We investigate low-frequency activity with magnetoencephalography (MEG) and mutual information (MI), an analysis that has not yet been applied to noninvasive electrophysiological recordings. We find that during speech perception each frequency subband examined [delta (1-3 Hz), theta-low (3-5 Hz), theta-high (5-7 Hz)] processes independent information from the speech stream. This contrasts with hypotheses that either delta and theta reflect their corresponding linguistic levels of analysis or each band is part of a single holistic onset response that tracks global acoustic transitions in the speech stream. Single-trial template-based classifier results further validate this finding: information from each subband can be used to classify individual sentences, and classifier results that utilize the combination of frequency bands provide better results than single bands alone. Our results suggest that during speech perception the low-frequency phase of the MEG signal corresponds to neither abstract linguistic units nor holistic evoked potentials but rather tracks different aspects of the input signal.
This study also validates a new method of analysis for noninvasive electrophysiological recordings that can be used to formally characterize the information content of neural responses and the interactions between these responses. Furthermore, it bridges results from different levels of neurophysiological study: small-scale multiunit recordings and local field potentials, and macroscopic magneto-/electrophysiological noninvasive recordings.

Item (Open Access): A neurophysiological study into the foundations of tonal harmony. (Neuroreport, 2009-02-18) Bergelson, Elika; Idsardi, William J

Our findings provide magnetoencephalographic evidence that the mismatch-negativity response to two-note chords (dyads) is modulated by a combination of abstract cognitive differences and lower-level differences in the auditory signal. Participants were presented with series of simple-ratio sinusoidal dyads (perfect fourths and perfect fifths) in which the difference between the standard and deviant dyad exhibited an interval change, a shift in pitch space, or both. In addition, the standard-deviant pair of dyads either shared one note or both notes were changed. Only the condition that featured both abstract changes (interval change and pitch-space shift) and two novel notes showed a significantly larger magnetoencephalographic mismatch-negativity response than the other conditions in the right hemisphere. Implications for music and language processing are discussed.

Item (Open Access): Differences in mismatch responses to vowels and musical intervals: MEG evidence. (PLoS One, 2013) Bergelson, Elika; Shvartsman, Michael; Idsardi, William J

We investigated the electrophysiological response to matched two-formant vowels and two-note musical intervals, with the goal of examining whether music is processed differently from language in early cortical responses.
Using magnetoencephalography (MEG), we compared the mismatch response (MMN/MMF, an early, pre-attentive difference-detector occurring approximately 200 ms post-onset) to musical intervals and vowels composed of matched frequencies. Participants heard blocks of two stimuli in a passive oddball paradigm in one of three conditions: sine waves, piano tones, and vowels. In each condition, participants heard two-formant vowels or musical intervals whose frequencies were 11, 12, or 24 semitones apart. In music, 12 semitones and 24 semitones are perceived as highly similar intervals (one and two octaves, respectively), while in speech, formant separations of 12 semitones and 11 semitones are perceived as highly similar (both variants of the vowel in 'cut'). Our results indicate that the MMN response mirrors the perceptual one: larger MMNs were elicited for the 12-11 pairing in the music conditions than in the language condition; conversely, larger MMNs were elicited for the 12-24 pairing in the language condition than in the music conditions. This suggests that within 250 ms of hearing complex auditory stimuli, the neural computation of similarity, just like the behavioral one, differs significantly depending on whether the context is music or speech.