Browsing by Author "Idsardi, William J"
Now showing 1 - 2 of 2
Item (Open Access)
A neurophysiological study into the foundations of tonal harmony. (Neuroreport, 2009-02-18)
Bergelson, Elika; Idsardi, William J

Our findings provide magnetoencephalographic evidence that the mismatch-negativity response to two-note chords (dyads) is modulated by a combination of abstract cognitive differences and lower-level differences in the auditory signal. Participants were presented with a series of simple-ratio sinusoidal dyads (perfect fourths and perfect fifths) in which the difference between the standard and the deviant dyad involved an interval change, a shift in pitch space, or both. In addition, the standard and deviant dyads either shared one note or had both notes changed. Only the condition featuring both abstract changes (an interval change and a pitch-space shift) and two novel notes elicited a significantly larger magnetoencephalographic mismatch-negativity response in the right hemisphere than the other conditions. Implications for music and language processing are discussed.

Item (Open Access)
Differences in mismatch responses to vowels and musical intervals: MEG evidence. (PLoS One, 2013)
Bergelson, Elika; Shvartsman, Michael; Idsardi, William J

We investigated the electrophysiological response to matched two-formant vowels and two-note musical intervals, with the goal of examining whether music is processed differently from language in early cortical responses. Using magnetoencephalography (MEG), we compared the mismatch response (MMN/MMF, an early, pre-attentive difference detector occurring approximately 200 ms post-onset) to musical intervals and vowels composed of matched frequencies. Participants heard blocks of two stimuli in a passive oddball paradigm in one of three conditions: sine waves, piano tones, and vowels. In each condition, participants heard two-formant vowels or musical intervals whose component frequencies were 11, 12, or 24 semitones apart. In music, separations of 12 and 24 semitones are perceived as highly similar intervals (one and two octaves, respectively), while in speech, formant separations of 11 and 12 semitones are perceived as highly similar (both heard as variants of the vowel in 'cut'). Our results indicate that the MMN response mirrors the perceptual one: larger MMNs were elicited for the 12-11 pairing in the music conditions than in the language condition; conversely, larger MMNs were elicited for the 12-24 pairing in the language condition than in the music conditions. This suggests that within 250 ms of hearing complex auditory stimuli, the neural computation of similarity, like the behavioral one, differs significantly depending on whether the context is music or speech.
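Neither abstract includes stimulus or analysis code, but the interval arithmetic both studies turn on is easy to make concrete. The sketch below (Python with NumPy) is a minimal illustration, not material from either paper; the function names, base frequency, duration, and sample rate are assumptions made for the example.

    import numpy as np

    # Equal temperament: an interval of n semitones corresponds to a
    # frequency ratio of 2**(n/12), so 12 semitones doubles the frequency.
    def semitone_ratio(n):
        return 2.0 ** (n / 12.0)

    # A sinusoidal dyad (as in the 2009 study) is two simultaneous pure
    # tones; the upper tone sits `semitones` above the base frequency f0.
    def sinusoidal_dyad(f0, semitones, dur=0.5, sr=44100):
        t = np.arange(int(dur * sr)) / sr
        f1 = f0 * semitone_ratio(semitones)
        return np.sin(2 * np.pi * f0 * t) + np.sin(2 * np.pi * f1 * t)

    # Separations used in the 2013 study:
    print(semitone_ratio(11))  # ~1.888 (a major seventh)
    print(semitone_ratio(12))  # 2.0    (one octave)
    print(semitone_ratio(24))  # 4.0    (two octaves)

    # Just ratios behind the 2009 dyads: a perfect fourth is 4:3 (~1.333)
    # and a perfect fifth is 3:2 (1.5).

Note how close the 11-semitone ratio (about 1.89) is to the octave's 2.0: in raw frequency terms the 11-12 pair is far more similar than the 12-24 pair, yet musical perception groups 12 with 24 while vowel perception groups 11 with 12, which is the asymmetry the MMN results track.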