Browsing by Subject "Auditory Perception"
Now showing 1 - 16 of 16
Item Open Access: A pilot investigation of audiovisual processing and multisensory integration in patients with inherited retinal dystrophies. (BMC Ophthalmology, 2017-12-07) Myers, Mark H; Iannaccone, Alessandro; Bidelman, Gavin M.
In this study, we examined audiovisual (AV) processing in normal and visually impaired individuals who exhibit partial loss of vision due to inherited retinal dystrophies (IRDs). Two groups were analyzed for this pilot study: Group 1 was composed of IRD participants: two with autosomal dominant retinitis pigmentosa (RP), two with autosomal recessive cone-rod dystrophy (CORD), and two with the related complex disorder, Bardet-Biedl syndrome (BBS); Group 2 was composed of 15 non-IRD participants (controls). Audiovisual looming and receding stimuli (conveying perceptual motion) were used to assess the cortical processing and integration of unimodal (A or V) and multimodal (AV) sensory cues. Electroencephalography (EEG) was used to simultaneously resolve the temporal and spatial characteristics of AV processing and to assess differences in neural responses between groups. AV integration was measured by quantifying the EEG's spectral power and event-related brain potentials (ERPs). Results show that IRD individuals exhibit reduced AV integration for concurrent AV stimuli but increased brain activity during the unimodal A (but not V) presentation. This was corroborated in behavioral responses, where IRD patients showed slower and less accurate judgments of AV and V stimuli but more accurate responses in the A-alone condition. Collectively, our findings imply a neural compensation from auditory sensory brain areas due to visual deprivation.

Item Open Access: Computational inference of neural information flow networks. (PLoS Comput Biol, 2006-11-24) Smith, V Anne; Yu, Jing; Smulders, Tom V; Hartemink, Alexander J; Jarvis, Erich D.
Determining how information flows along anatomical brain pathways is a fundamental requirement for understanding how animals perceive their environments, learn, and behave. Attempts to reveal such neural information flow have been made using linear computational methods, but neural interactions are known to be nonlinear. Here, we demonstrate that a dynamic Bayesian network (DBN) inference algorithm we originally developed to infer nonlinear transcriptional regulatory networks from gene expression data collected with microarrays is also successful at inferring nonlinear neural information flow networks from electrophysiology data collected with microelectrode arrays. The inferred networks we recover from the songbird auditory pathway are correctly restricted to a subset of known anatomical paths, are consistent with the timing of the system, and reveal both the importance of reciprocal feedback in auditory processing and greater information flow to higher-order auditory areas when birds hear natural as opposed to synthetic sounds. A linear method applied to the same data incorrectly produces networks with information flow to non-neural tissue and over paths known not to exist. To our knowledge, this study represents the first biologically validated demonstration of an algorithm that successfully infers neural information flow networks.
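The DBN idea above can be illustrated with a toy first-order sketch: for each recording site, ask which other site's activity one time bin earlier best predicts its current activity under a penalized likelihood score. This is an illustrative sketch only, with invented channel names and synthetic data; the published algorithm uses a full Bayesian network score and structure search.

    # Toy first-order DBN edge scoring for neural "information flow":
    # how well does each candidate source's activity at time t-1 predict a
    # target's activity at time t, under a BIC-penalized log-likelihood over
    # discretized firing rates? Channels and data are invented.
    import numpy as np

    rng = np.random.default_rng(1)
    T, K = 5000, 3                        # time bins, discretization levels

    # Synthetic channels: "L2" mostly follows "L1" at a one-bin lag; "NCM" is noise.
    L1 = rng.integers(0, K, T)
    L2 = np.where(rng.random(T) < 0.8, np.roll(L1, 1), rng.integers(0, K, T))
    NCM = rng.integers(0, K, T)
    channels = {"L1": L1, "L2": L2, "NCM": NCM}

    def edge_score(parent, child):
        """BIC-penalized log-likelihood of child[t] given parent[t-1]."""
        x, y = parent[:-1], child[1:]
        counts = np.zeros((K, K))
        np.add.at(counts, (x, y), 1)                    # joint transition counts
        probs = (counts + 1) / (counts + 1).sum(axis=1, keepdims=True)
        loglik = np.log(probs[x, y]).sum()
        return loglik - 0.5 * K * (K - 1) * np.log(len(y))  # penalize K*(K-1) params

    for child_name, child in channels.items():
        scores = {name: edge_score(p, child)
                  for name, p in channels.items() if name != child_name}
        best = max(scores, key=scores.get)
        print(f"best inferred parent of {child_name}: {best} ({scores[best]:.0f})")

On such synthetic data the score correctly prefers L1 as the parent of L2; the real method additionally searches over parent sets and validates inferred edges against known anatomy.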
Item Open Access: Cross-modal stimulus conflict: the behavioral effects of stimulus input timing in a visual-auditory Stroop task. (PLoS One, 2013) Donohue, Sarah E; Appelbaum, Lawrence G; Park, Christina J; Roberts, Kenneth C; Woldorff, Marty G.
Cross-modal processing depends strongly on the compatibility between different sensory inputs, the relative timing of their arrival to brain processing components, and how attention is allocated. In this behavioral study, we employed a cross-modal audio-visual Stroop task in which we manipulated the within-trial stimulus-onset-asynchronies (SOAs) of the stimulus-component inputs, the grouping of the SOAs (blocked vs. random), the attended modality (auditory or visual), and the congruency of the Stroop color-word stimuli (congruent, incongruent, neutral) to assess how these factors interact within a multisensory context. One main result was that visual distractors produced larger incongruency effects on auditory targets than vice versa. Moreover, as revealed by both overall shorter response times (RTs) and relative shifts in the psychometric incongruency-effect functions, visual information was processed faster and produced stronger and longer-lasting incongruency effects than auditory information. When attending to either modality, stimulus incongruency from the other modality interacted with SOA, yielding larger effects when the irrelevant distractor occurred prior to the attended target, but there was no interaction with SOA grouping. Finally, relative to neutral stimuli, and across the wide range of SOAs employed, congruency led to substantially more behavioral facilitation than incongruency led to interference, in contrast to findings that within-modality stimulus-compatibility effects tend to be more evenly split between facilitation and interference. In sum, the present findings reveal several key characteristics of how we process the stimulus compatibility of cross-modal sensory inputs, reflecting stimulus-processing patterns that are critical for successfully navigating our complex multisensory world.

Item Open Access: Differences in mismatch responses to vowels and musical intervals: MEG evidence. (PLoS One, 2013) Bergelson, Elika; Shvartsman, Michael; Idsardi, William J.
We investigated the electrophysiological response to matched two-formant vowels and two-note musical intervals, with the goal of examining whether music is processed differently from language in early cortical responses. Using magnetoencephalography (MEG), we compared the mismatch response (MMN/MMF, an early, pre-attentive difference detector occurring approximately 200 ms post-onset) to musical intervals and vowels composed of matched frequencies. Participants heard blocks of two stimuli in a passive oddball paradigm in one of three conditions: sine waves, piano tones, and vowels. In each condition, participants heard two-formant vowels or musical intervals whose frequencies were 11, 12, or 24 semitones apart. In music, separations of 12 and 24 semitones are perceived as highly similar intervals (one and two octaves, respectively), while in speech, formant separations of 12 and 11 semitones are perceived as highly similar (both variants of the vowel in 'cut'). Our results indicate that the MMN response mirrors the perceptual one: larger MMNs were elicited for the 12-11 pairing in the music conditions than in the language condition; conversely, larger MMNs were elicited for the 12-24 pairing in the language condition than in the music conditions. This suggests that within 250 ms of hearing complex auditory stimuli, the neural computation of similarity, like the behavioral one, differs significantly depending on whether the context is music or speech.
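The interval arithmetic behind these pairings is simple: n semitones correspond to a frequency ratio of 2^(n/12), so 12 and 24 semitones are exact octaves (ratios 2 and 4), while 11 semitones falls about 6% short of an octave. A quick illustrative check:

    # n semitones = frequency ratio 2**(n/12)
    for n in (11, 12, 24):
        print(f"{n:>2} semitones -> frequency ratio {2 ** (n / 12):.3f}")
    # 11 semitones -> frequency ratio 1.888
    # 12 semitones -> frequency ratio 2.000
    # 24 semitones -> frequency ratio 4.000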
Item Open Access: Different mechanisms are responsible for dishabituation of electrophysiological auditory responses to a change in acoustic identity than to a change in stimulus location. (Neurobiol Learn Mem, 2013-11) Smulders, Tom V; Jarvis, Erich D.
Repeated exposure to an auditory stimulus leads to habituation of the electrophysiological and immediate-early-gene (IEG) expression response in the auditory system. A novel auditory stimulus reinstates this response in a form of dishabituation. This has been interpreted as the start of new memory formation for the novel stimulus. Changes in the location of an otherwise identical auditory stimulus can also dishabituate the IEG expression response. This has been interpreted as an integration of stimulus identity and stimulus location into a single auditory object, encoded in the firing patterns of the auditory system. In this study, we further tested this hypothesis. Using chronic multi-electrode arrays to record multi-unit activity from the auditory system of awake and behaving zebra finches, we found that habituation occurs with repeated exposure to the same song and dishabituation with a novel song, similar to that described in head-fixed, restrained animals. A large proportion of recording sites also showed dishabituation when the same auditory stimulus was moved to a novel location. However, when the song was randomly moved among 8 interleaved locations, habituation occurred independently of the continuous changes in location. In contrast, when 8 different auditory stimuli were interleaved, all from the same location, a separate habituation occurred to each stimulus. This result suggests that neuronal memories of acoustic identity and spatial location are different, and that the allocentric location of a stimulus is not encoded as part of the memory for an auditory object, while its acoustic properties are. We speculate that, instead, the dishabituation that occurs with a change from a stable sound location is due to the unexpectedness of the location change, and might arise from different underlying mechanisms than the dishabituation and separate habituations associated with different acoustic stimuli.

Item Open Access: Early onset of deafening-induced song deterioration and differential requirements of the pallial-basal ganglia vocal pathway. (Eur J Neurosci, 2008-12) Horita, Haruhito; Wada, Kazuhiro; Jarvis, Erich D.
Similar to humans, songbirds rely on auditory feedback to maintain the acoustic and sequence structure of adult learned vocalizations. When songbirds are deafened, the learned features of song, such as syllable structure and sequencing, eventually deteriorate. However, the time course and initial phases of song deterioration have not been well studied, particularly in the most commonly studied songbird, the zebra finch. Here, we observed previously uncharacterized subtle but significant changes to learned song within a few days of deafening. Syllable structure became detectably noisier, and silent intervals between song motifs increased. Although song motif sequences remained stable at 2 weeks, as previously reported, pronounced changes occurred in longer stretches of song bout sequences. These included deletions of syllables between song motifs, changes in the frequency at which specific chunks of song were produced, and stuttering in birds that had produced some syllable repetitions before deafening. Changes in syllable structure and song bout sequence occurred at different rates, indicating different mechanisms for their deterioration. The changes in syllable structure required an intact lateral, but not medial, part of the pallial-basal ganglia vocal pathway, whereas changes in the song bout sequence required neither the lateral nor the medial portion of the pathway. These findings indicate that deafening-induced song changes in zebra finches can be detected rapidly after deafening, that acoustic and sequence changes can occur independently, and that, within this time period, the pallial-basal ganglia vocal pathway controls the acoustic structure changes but not the song bout sequence changes.
Item Open Access: Evidence for independent peripheral and central age-related hearing impairment. (Journal of Neuroscience Research, 2020-09) Bao, Jianxin; Yu, Yan; Li, Hui; Hawks, John; Szatkowski, Grace; Dade, Bethany; Wang, Hao; Liu, Peng; Brutnell, Thomas; Spehar, Brent; Tye-Murray, Nancy.
Deleterious age-related changes in the central auditory nervous system have been referred to as central age-related hearing impairment (ARHI) or central presbycusis. Central ARHI is often assumed to be the consequence of peripheral ARHI. However, it is possible that certain aspects of central ARHI are independent from peripheral ARHI. A confirmation of this possibility could lead to significant improvements in current rehabilitation practices. The major difficulty in addressing this issue arises from confounding factors, such as other age-related changes in both the cochlea and central non-auditory brain structures. Because gap detection is a common measure of central auditory temporal processing, and gap detection thresholds are less influenced by changes in other brain functions such as learning and memory, we investigated the potential relationship between age-related peripheral hearing loss (i.e., audiograms) and age-related changes in gap detection. Consistent with previous studies, a significant difference was found in gap detection thresholds between young and older adults. However, among older adults, no significant associations were observed between gap detection ability and several other independent variables, including the pure-tone audiogram average, the Wechsler Adult Intelligence Scale-Vocabulary score, gender, and age. Statistical analyses showed little or no contribution from these independent variables to gap detection thresholds. Thus, our data indicate that age-related decline in central temporal processing is largely independent of peripheral ARHI.
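The association test described above, whether audiogram average, vocabulary score, gender, or age predicts gap detection thresholds, has the shape of an ordinary multiple regression. A minimal sketch with invented data and column names (not the authors' dataset or code):

    # Do peripheral and demographic variables predict gap detection thresholds
    # among older adults? Synthetic data for illustration only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n = 60
    df = pd.DataFrame({
        "gap_threshold_ms": rng.normal(8, 2, n),   # gap detection threshold
        "pta_db": rng.normal(35, 10, n),           # pure-tone audiogram average
        "wais_vocab": rng.normal(45, 8, n),        # vocabulary score
        "age": rng.integers(65, 90, n),
        "gender": rng.choice(["F", "M"], n),
    })

    model = smf.ols("gap_threshold_ms ~ pta_db + wais_vocab + age + C(gender)",
                    data=df).fit()
    print(model.summary().tables[1])  # near-zero slopes here would mirror the
                                      # reported independence of central timing
                                      # from peripheral hearing loss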
Item Open Access: Imagery and retrieval of auditory and visual information: neural correlates of successful and unsuccessful performance. (Neuropsychologia, 2011-06) Huijbers, Willem; Pennartz, Cyriel MA; Rubin, David C; Daselaar, Sander M.
Remembering past events, or episodic retrieval, consists of several components. There is evidence that mental imagery plays an important role in retrieval and that the brain regions supporting imagery overlap with those supporting retrieval. An open issue is to what extent these regions support successful vs. unsuccessful imagery and retrieval processes. Previous studies that examined regional overlap between imagery and retrieval used uncontrolled memory conditions, such as autobiographical memory tasks, that cannot distinguish between successful and unsuccessful retrieval. A second issue is that fMRI studies comparing imagery and retrieval have used modality-nonspecific cues that are likely to activate auditory and visual processing regions simultaneously. Thus, it is not clear to what extent the identified brain regions support modality-specific or modality-independent imagery and retrieval processes. In the current fMRI study, we addressed these issues by comparing imagery to retrieval under controlled memory conditions in both auditory and visual modalities. We also obtained subjective measures of imagery quality, allowing us to dissociate regions contributing to successful vs. unsuccessful imagery. Results indicated that auditory and visual regions contribute to both imagery and retrieval in a modality-specific fashion. In addition, we identified four sets of brain regions with distinct patterns of activity that contributed to imagery and retrieval in a modality-independent fashion. The first set of regions, including hippocampus, posterior cingulate cortex, medial prefrontal cortex, and angular gyrus, showed a pattern common to imagery and retrieval and consistent with successful performance regardless of task. The second set of regions, including dorsal precuneus, anterior cingulate, and dorsolateral prefrontal cortex, also showed a pattern common to imagery and retrieval, but consistent with unsuccessful performance during both tasks. Third, left ventrolateral prefrontal cortex showed an interaction between task and performance and was associated with successful imagery but unsuccessful retrieval. Finally, the fourth set of regions, including ventral precuneus, midcingulate cortex, and supramarginal gyrus, showed the opposite interaction, supporting unsuccessful imagery but successful retrieval performance. Results are discussed in relation to reconstructive, attentional, semantic memory, and working memory processes. This is the first study to separate the neural correlates of successful and unsuccessful performance for both imagery and retrieval and for both auditory and visual modalities.

Item Open Access: Lateral symmetry of auditory attention in hemispherectomized patients. (Neuropsychologia, 1981-01) Nebes, RD; Madden, DJ; Berg, WD.
Single digits were monaurally presented in random order to the right and left ears of hemispherectomized patients. Vocal identification time was found to be equivalent for the two ears. This result does not support the existence of a massive lateral shift of attention in these patients. It is thus unlikely that the large ear difference typically found with dichotic presentation in hemispherectomized patients is due to an asymmetrical distribution of auditory attention.

Item Open Access: Myosin VIIA, important for human auditory function, is necessary for Drosophila auditory organ development. (PLoS One, 2008-05-07) Todi, Sokol V; Sivan-Loukianova, Elena; Jacobs, Julie S; Kiehart, Daniel P; Eberl, Daniel F.
BACKGROUND: Myosin VIIA (MyoVIIA) is an unconventional myosin necessary for vertebrate audition [1]-[5]. Human auditory transduction occurs in sensory hair cells with a staircase-like arrangement of apical protrusions called stereocilia. In these hair cells, MyoVIIA maintains stereocilia organization [6]. Severe mutations in the Drosophila MyoVIIA orthologue, crinkled (ck), are semi-lethal [7] and lead to deafness by disrupting the organization of the antennal auditory organ (Johnston's Organ, JO) [8]. ck/MyoVIIA mutations result in apical detachment of auditory transduction units (scolopidia) from the cuticle that transmits antennal vibrations as mechanical stimuli to JO. PRINCIPAL FINDINGS: Using flies expressing GFP-tagged NompA, a protein required for auditory organ organization in Drosophila, we examined the role of ck/MyoVIIA in JO development and maintenance through confocal microscopy and extracellular electrophysiology. Here we show that ck/MyoVIIA is necessary early in the developing antenna for initial apical attachment of the scolopidia to the articulating joint. ck/MyoVIIA is also necessary to maintain scolopidial attachment throughout adulthood. Moreover, in the adult JO, ck/MyoVIIA genetically interacts with non-muscle myosin II (through its regulatory light chain protein and the myosin-binding subunit of myosin II phosphatase). Such genetic interactions have not previously been observed in scolopidia. These factors are therefore candidates for modulating MyoVIIA activity in vertebrates. CONCLUSIONS: Our findings indicate that MyoVIIA plays evolutionarily conserved roles in auditory organ development and maintenance in invertebrates and vertebrates, enhancing our understanding of auditory organ development and function, as well as providing significant clues for future research.
Item Open Access: Neural correlates of categorical perception in learned vocal communication. (Nat Neurosci, 2009-02) Prather, JF; Nowicki, S; Anderson, RC; Peters, S; Mooney, RA.
The division of continuously variable acoustic signals into discrete perceptual categories is a fundamental feature of vocal communication, including human speech. Despite the importance of categorical perception to learned vocal communication, the neural correlates underlying this phenomenon await identification. We found that individual sensorimotor neurons in freely behaving swamp sparrows expressed categorical auditory responses to changes in note duration, a learned feature of their songs, and that the neural response boundary accurately predicted the categorical perceptual boundary measured in field studies of the same sparrow population. Furthermore, swamp sparrow populations that learned different song dialects showed different categorical perceptual boundaries that were consistent with the boundary being learned. Our results extend the analysis of the neural basis of perceptual categorization into the realm of vocal communication and advance the learned vocalizations of songbirds as a model for investigating how experience shapes categorical perception and the activity of categorically responsive neurons.
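Locating a categorical boundary from graded responses, as described above, is commonly done by fitting a sigmoid to response probability as a function of the varied feature and reading off the inflection point. A minimal sketch with invented numbers (not the authors' data or fitting procedure):

    # Fit a logistic function to "long note" response rates vs. note duration;
    # the fitted midpoint x0 estimates the categorical boundary.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, x0, k):
        return 1.0 / (1.0 + np.exp(-k * (x - x0)))

    note_duration_ms = np.array([10, 14, 18, 22, 26, 30])
    p_long_response = np.array([0.05, 0.10, 0.35, 0.75, 0.92, 0.97])

    (x0, k), _ = curve_fit(logistic, note_duration_ms, p_long_response, p0=[20, 0.5])
    print(f"estimated categorical boundary: {x0:.1f} ms")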
Item Restricted: One sound or two? Object-related negativity indexes echo perception. (Percept Psychophys, 2008-11) Sanders, LD; Joh, Amy S; Keen, RE; Freyman, RL.
The ability to isolate a single sound source among concurrent sources and reverberant energy is necessary for understanding the auditory world. The precedence effect describes a related experimental finding: when presented with identical sounds from two locations with a short onset asynchrony (on the order of milliseconds), listeners report a single source with a location dominated by the lead sound. Single-cell recordings in multiple animal models have indicated that there are low-level mechanisms that may contribute to the precedence effect, yet psychophysical studies in humans have provided evidence that top-down cognitive processes have a great deal of influence on the perception of simulated echoes. In the present study, event-related potentials evoked by click pairs at and around listeners' echo thresholds indicate that perception of the lead and lag sounds as individual sources elicits a negativity between 100 and 250 msec, previously termed the object-related negativity (ORN). Even for physically identical stimuli, the ORN is evident when listeners report hearing, as compared with not hearing, a second sound source. These results define a neural mechanism related to the conscious perception of multiple auditory objects.

Item Open Access: Single neurons may encode simultaneous stimuli by switching between activity patterns. (Nature Communications, 2018-07-13) Caruso, Valeria C; Mohl, Jeff T; Glynn, Christopher; Lee, Jungah; Willett, Shawn M; Zaman, Azeem; Ebihara, Akinori F; Estrada, Rolando; Freiwald, Winrich A; Tokdar, Surya T; Groh, Jennifer M.
How the brain preserves information about multiple simultaneous items is poorly understood. We report that single neurons can represent multiple stimuli by interleaving signals across time. We record single units in an auditory region, the inferior colliculus, while monkeys localize 1 or 2 simultaneous sounds. During dual-sound trials, we find that some neurons fluctuate between the firing rates observed for each single sound, either on a whole-trial or on a sub-trial timescale. These fluctuations are correlated in pairs of neurons, can be predicted by the state of local field potentials prior to sound onset, and, in one monkey, can predict which sound will be reported first. We find corroborating evidence of fluctuating activity patterns in a separate dataset involving responses of inferotemporal cortex neurons to multiple visual stimuli. Alternation between activity patterns corresponding to each of multiple items may therefore be a general strategy for enhancing the brain's processing capacity, potentially linking such disparate phenomena as variable neural firing, neural oscillations, and limits in attentional/memory capacity.
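The whole-trial version of this switching claim can be illustrated with a toy model comparison: do dual-sound spike counts look like draws from one intermediate Poisson rate, or from a mixture of the two single-sound rates? A sketch with simulated data; the published analysis is a fuller Bayesian model comparison, and all numbers here are invented:

    # Compare likelihoods of dual-sound spike counts under (a) one intermediate
    # Poisson rate and (b) a 50/50 mixture of the single-sound rates A and B.
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(3)
    rate_A, rate_B, n_trials = 20.0, 5.0, 200

    # Simulate a "switching" neuron: each dual-sound trial uses rate A or rate B.
    dual = np.where(rng.random(n_trials) < 0.5,
                    rng.poisson(rate_A, n_trials),
                    rng.poisson(rate_B, n_trials))

    ll_intermediate = poisson.logpmf(dual, dual.mean()).sum()
    ll_mixture = np.log(0.5 * poisson.pmf(dual, rate_A)
                        + 0.5 * poisson.pmf(dual, rate_B)).sum()
    print(f"log-likelihood, one intermediate rate: {ll_intermediate:.1f}")
    print(f"log-likelihood, mixture of A and B:    {ll_mixture:.1f}")
    # For a switching neuron the mixture wins decisively.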
Item Open Access: The dusp1 immediate early gene is regulated by natural stimuli predominantly in sensory input neurons. (J Comp Neurol, 2010-07-15) Horita, Haruhito; Wada, Kazuhiro; Rivas, Miriam V; Hara, Erina; Jarvis, Erich D.
Many immediate early genes (IEGs) show activity-dependent induction in a subset of brain subdivisions or neuron types. However, none have been reported with regulation specific to thalamic-recipient sensory neurons of the telencephalon or to the thalamic sensory input neurons themselves. Here, we report the first such gene, dual specificity phosphatase 1 (dusp1). Dusp1 is an inactivator of mitogen-activated protein kinase (MAPK), and MAPK activates expression of egr1, one of the most commonly studied IEGs, as determined in cultured cells. We found that in the brain of naturally behaving songbirds and other avian species, hearing song, seeing visual stimuli, or performing motor behavior caused high dusp1 upregulation, respectively, in the auditory, visual, and somatosensory input cell populations of the thalamus and in thalamic-recipient sensory neurons of the telencephalic pallium, whereas high egr1 upregulation occurred only in the subsequently connected secondary and tertiary sensory neuronal populations of these same pathways. Motor behavior did not induce high levels of dusp1 expression in the motor-associated areas adjacent to song nuclei, where egr1 is upregulated in response to movement. Our analysis of dusp1 expression in mouse brain suggests similar regulation in the sensory input neurons of the thalamus and in thalamic-recipient layer IV and VI neurons of the cortex. These findings suggest that dusp1 has specialized regulation in sensory input neurons of the thalamus and telencephalon; they further suggest that this regulation may serve to attenuate stimulus-induced expression of egr1 and other IEGs, leading to unique molecular properties of forebrain sensory input neurons.

Item Open Access: The frequency of voluntary and involuntary autobiographical memories across the life span. (Mem Cognit, 2009-07) Rubin, David C; Berntsen, Dorthe.
In the present study, ratings of the memory of an important event from the previous week, covering the frequency of voluntary and involuntary retrieval, belief in its accuracy, visual imagery, auditory imagery, setting, emotional intensity, valence, narrative coherence, and centrality to the life story, were obtained from 988 adults whose ages ranged from 15 to over 90. Another 992 adults provided the same ratings for a memory from their confirmation day, when they were about 14 years old. The frequencies of involuntary and voluntary retrieval were similar, and both were predicted by emotional intensity and centrality to the life story. The results from the present study, the first to measure the frequency of voluntary and involuntary retrieval for the same events, run counter to both cognitive and clinical theories, which consistently claim that involuntary memories are infrequent compared with voluntary memories. Age and gender differences are noted.
Item Open Access: The genome of a songbird. (Nature, 2010-04-01) Warren, Wesley C; Clayton, David F; Ellegren, Hans; Arnold, Arthur P; Hillier, Ladeana W; Künstner, Axel; Searle, Steve; White, Simon; Vilella, Albert J; Fairley, Susan; Heger, Andreas; Kong, Lesheng; Ponting, Chris P; Jarvis, Erich D; Mello, Claudio V; Minx, Pat; Lovell, Peter; Velho, Tarciso AF; Ferris, Margaret; Balakrishnan, Christopher N; Sinha, Saurabh; Blatti, Charles; London, Sarah E; Li, Yun; Lin, Ya-Chi; George, Julia; Sweedler, Jonathan; Southey, Bruce; Gunaratne, Preethi; Watson, Michael; Nam, Kiwoong; Backström, Niclas; Smeds, Linnea; Nabholz, Benoit; Itoh, Yuichiro; Whitney, Osceola; Pfenning, Andreas R; Howard, Jason; Völker, Martin; Skinner, Benjamin M; Griffin, Darren K; Ye, Liang; McLaren, William M; Flicek, Paul; Quesada, Victor; Velasco, Gloria; Lopez-Otin, Carlos; Puente, Xose S; Olender, Tsviya; Lancet, Doron; Smit, Arian FA; Hubley, Robert; Konkel, Miriam K; Walker, Jerilyn A; Batzer, Mark A; Gu, Wanjun; Pollock, David D; Chen, Lin; Cheng, Ze; Eichler, Evan E; Stapley, Jessica; Slate, Jon; Ekblom, Robert; Birkhead, Tim; Burke, Terry; Burt, David; Scharff, Constance; Adam, Iris; Richard, Hugues; Sultan, Marc; Soldatov, Alexey; Lehrach, Hans; Edwards, Scott V; Yang, Shiaw-Pyng; Li, Xiaoching; Graves, Tina; Fulton, Lucinda; Nelson, Joanne; Chinwalla, Asif; Hou, Shunfeng; Mardis, Elaine R; Wilson, Richard K.
The zebra finch is an important model organism in several fields, with unique relevance to human neuroscience. Like other songbirds, the zebra finch communicates through learned vocalizations, an ability otherwise documented only in humans and a few other animals and lacking in the chicken, the only bird with a sequenced genome until now. Here we present a structural, functional, and comparative analysis of the genome sequence of the zebra finch (Taeniopygia guttata), a songbird belonging to the large avian order Passeriformes. We find that the overall structures of the zebra finch and chicken genomes are similar, but they differ in many intrachromosomal rearrangements, lineage-specific gene family expansions, the number of long-terminal-repeat-based retrotransposons, and mechanisms of sex chromosome dosage compensation. We show that song behaviour engages gene regulatory networks in the zebra finch brain, altering the expression of long non-coding RNAs, microRNAs, transcription factors, and their targets. We also show evidence for rapid molecular evolution in the songbird lineage of genes that are regulated during song experience. These results indicate an active involvement of the genome in neural processes underlying vocal communication and identify potential genetic substrates for the evolution and regulation of this behaviour.