Browsing by Subject "Pattern Recognition, Visual"
Now showing 1 - 10 of 10
Item Open Access
Age-related differences in the neural bases of phonological and semantic processes in the context of task-irrelevant information. (Cognitive, Affective & Behavioral Neuroscience, 2019-08) Diaz, Michele T; Johnson, Micah A; Burke, Deborah M; Truong, Trong-Kha; Madden, David J
As we age we have increasing difficulty with phonological aspects of language production. Yet semantic processes are largely stable across the life span. This suggests a fundamental difference in the cognitive and potentially neural architecture supporting these systems. Moreover, language processes such as these interact with other cognitive processes that also show age-related decline, such as executive function and inhibition. The present study examined phonological and semantic processes in the presence of task-irrelevant information to examine the influence of such material on language production. Older and younger adults made phonological and semantic decisions about pictures in the presence of either phonologically or semantically related words, which were unrelated to the task. fMRI activation during the semantic condition showed that all adults engaged typical left-hemisphere language regions, and that this activation was positively correlated with efficiency across all adults. In contrast, the phonological condition elicited activation in bilateral precuneus and cingulate, with no clear brain-behavior relationship. Similarly, older adults exhibited greater activation than younger adults in several regions that were unrelated to behavioral performance. Our results suggest that as we age, brain-behavior relations decline, and there is an increased reliance on both language-specific and domain-general brain regions that are seen most prominently during phonological processing. In contrast, the core semantic system continues to be engaged throughout the life span, even in the presence of task-irrelevant information.

Item Open Access
Age-related slowing in the retrieval of information from long-term memory. (Journal of Gerontology, 1985-03) Madden, DJ
The present experiment investigated adult age differences in the retrieval of information from long-term memory. Each trial required a decision regarding the synonymy of two visually presented words. On the yes-response trials, the two words were either identical, differed only in case, or were synonyms that differed in case. Age differences in absolute decision time were greater for the synonyms than for the other word pairs, but the proportional slowing of decision time exhibited by the older adults was constant across word-pair type. A generalized age-related slowing in the speed of information processing can currently account for age differences in the retrieval of letter-identity and semantic information from long-term memory.

Item Open Access
At 6-9 months, human infants know the meanings of many common nouns. (Proc Natl Acad Sci U S A, 2012-02-28) Bergelson, Elika; Swingley, Daniel
It is widely accepted that infants begin learning their native language not by learning words, but by discovering features of the speech signal: consonants, vowels, and combinations of these sounds. Learning to understand words, as opposed to just perceiving their sounds, is said to come later, between 9 and 15 mo of age, when infants develop a capacity for interpreting others' goals and intentions.
Here, we demonstrate that this consensus about the developmental sequence of human language learning is flawed: in fact, infants already know the meanings of several common words from the age of 6 mo onward. We presented 6- to 9-mo-old infants with sets of pictures to view while their parent named a picture in each set. Over this entire age range, infants directed their gaze to the named pictures, indicating their understanding of spoken words. Because the words were not trained in the laboratory, the results show that even young infants learn ordinary words through daily experience with language. This surprising accomplishment indicates that, contrary to prevailing beliefs, either infants can already grasp the referential intentions of adults at 6 mo or infants can learn words before this ability emerges. The precocious discovery of word meanings suggests a perspective in which learning vocabulary and learning the sound structure of spoken language go hand in hand as language acquisition begins.

Item Open Access
Demographic, maltreatment, and neurobiological correlates of PTSD symptoms in children and adolescents. (J Pediatr Psychol, 2010-06) De Bellis, Michael D; Hooper, Stephen R; Woolley, Donald P; Shenk, Chad E
OBJECTIVE: To examine the relationships of demographic, maltreatment, neurostructural and neuropsychological measures with total posttraumatic stress disorder (PTSD) symptoms. METHODS: Participants included 216 children with maltreatment histories (N = 49), maltreatment and PTSD (N = 49), or no maltreatment (N = 118). Participants received diagnostic interviews, brain imaging, and neuropsychological evaluations. RESULTS: We examined a hierarchical regression model comprised of independent variables including demographics, trauma and maltreatment-related variables, and hippocampal volumes and neuropsychological measures to model PTSD symptoms. Important independent contributors to this model were SES, and General Maltreatment and Sexual Abuse Factors. Although hippocampal volumes were not significant, Visual Memory was a significant contributor to this model. CONCLUSIONS: Similar to adult PTSD, pediatric PTSD symptoms are associated with lower Visual Memory performance. It is an important correlate of PTSD beyond established predictors of PTSD symptoms. These results support models of developmental traumatology and suggest that treatments which enhance visual memory may decrease symptoms of PTSD.

Item Open Access
Differential Mnemonic Contributions of Cortical Representations during Encoding and Retrieval. (Journal of Cognitive Neuroscience, 2024-10) Howard, Cortney M; Huang, Shenyang; Hovhannisyan, Mariam; Cabeza, Roberto; Davis, Simon W
Several recent fMRI studies of episodic and working memory representations converge on the finding that visual information is most strongly represented in occipito-temporal cortex during the encoding phase but in parietal regions during the retrieval phase. It has been suggested that this location shift reflects a change in the content of representations, from predominantly visual during encoding to primarily semantic during retrieval. Yet, direct evidence on the nature of encoding and retrieval representations is lacking. It is also unclear how the representations mediating the encoding-retrieval shift contribute to memory performance. To investigate these two issues, in the current fMRI study, participants encoded pictures (e.g., picture of a cardinal) and later performed a word recognition test (e.g., word "cardinal").
Representational similarity analyses examined how visual (e.g., red color) and semantic representations (e.g., what cardinals eat) support successful encoding and retrieval. These analyses revealed two novel findings. First, successful memory was associated with representational changes in cortical location (from occipito-temporal at encoding to parietal at retrieval) but not with changes in representational content (visual vs. semantic). Thus, the representational encoding-retrieval shift cannot be easily attributed to a change in the nature of representations. Second, in parietal regions, stronger representations predicted encoding failure but retrieval success. This encoding-retrieval "flip" in representations mimics the one previously reported in univariate activation studies. In summary, by answering important questions regarding the content and contributions to the performance of the representations mediating the encoding-retrieval shift, our findings clarify the neural mechanisms of this intriguing phenomenon.

Item Open Access
Hemispheric differences in memory search. (Neuropsychologia, 1980-01) Madden, DJ; Nebes, RD
Recent evidence suggests that memory demands contribute to visual field (VF) differences in tachistoscopic recognition. The present experiment examined VF differences in a memory-search paradigm using verbal stimuli (digits). The results demonstrated a significant advantage to right VF-left hemisphere presentation that was associated with the memory comparison stage of the task, but not with the perceptual encoding and response stages. These data are more consistent with a relative efficiency model of hemispheric specialization than with a functional localization model.

Item Open Access
Influence of encoding difficulty, word frequency, and phonological regularity on age differences in word naming. (Experimental Aging Research, 2011-05) Allen, Philip A; Bucur, Barbara; Grabbe, Jeremy; Work, Tammy; Madden, David J
It is presently unclear why older adults take longer than younger adults to recognize visually presented words. To examine this issue in more detail, the authors conducted two word-naming studies (Experiment 1: 20 older adults and 20 younger adults; Experiment 2: 60 older adults and 60 younger adults) to determine the relative effects of orthographic encoding (case type), lexical access (word frequency), and phonological regularity (regular vs. irregular phonology). The hypothesis was that older adults attempt to compensate for sensory and motor slowing by using progressively larger perceptual units (holistic encoding). However, if forced to use smaller perceptual units (e.g., by using mixed-case presentation), it was predicted that older adults would be particularly challenged. Older adults did show larger case-mixing effects than younger adults (suggesting that older adults' performances were especially poor when they were forced to use smaller perceptual units), but there were no age differences in word frequency or phonological regularity even though both age groups showed main effects for these variables.
These results suggest that lexical access skill remains stable in the addressed (orthographic/semantic) and assembled (phonological) routes over the life span, but that older adults slow down in recognizing words because it takes them longer to normalize (perceptually "clean up") noisier sensory information.

Item Restricted
Neural mechanisms of context effects on face recognition: automatic binding and context shift decrements. (J Cogn Neurosci, 2010-11) Hayes, Scott M; Baena, Elsa; Truong, Trong-Kha; Cabeza, Roberto
Although people do not normally try to remember associations between faces and physical contexts, these associations are established automatically, as indicated by the difficulty of recognizing familiar faces in different contexts ("butcher-on-the-bus" phenomenon). The present fMRI study investigated the automatic binding of faces and scenes. In the face-face (F-F) condition, faces were presented alone during both encoding and retrieval, whereas in the face/scene-face (FS-F) condition, they were presented overlaid on scenes during encoding but alone during retrieval (context change). Although participants were instructed to focus only on the faces during both encoding and retrieval, recognition performance was worse in the FS-F than in the F-F condition ("context shift decrement" [CSD]), confirming automatic face-scene binding during encoding. This binding was mediated by the hippocampus as indicated by greater subsequent memory effects (remembered > forgotten) in this region for the FS-F than the F-F condition. Scene memory was mediated by right parahippocampal cortex, which was reactivated during successful retrieval when the faces were associated with a scene during encoding (FS-F condition). Analyses using the CSD as a regressor yielded a clear hemispheric asymmetry in medial temporal lobe activity during encoding: Left hippocampal and parahippocampal activity was associated with a smaller CSD, indicating more flexible memory representations immune to context changes, whereas right hippocampal/rhinal activity was associated with a larger CSD, indicating less flexible representations sensitive to context change. Taken together, the results clarify the neural mechanisms of context effects on face recognition.

Item Open Access
Object files can be purely episodic. (Perception, 2007) Mitroff, Stephen R; Scholl, Brian J; Noles, Nicholaus S
Our ability to track an object as the same persisting entity over time and motion may primarily rely on spatiotemporal representations which encode some, but not all, of an object's features. Previous researchers using the 'object reviewing' paradigm have demonstrated that such representations can store featural information of well-learned stimuli such as letters and words at a highly abstract level. However, it is unknown whether these representations can also store purely episodic information (i.e. information obtained from a single, novel encounter) that does not correspond to pre-existing type-representations in long-term memory. Here, in an object-reviewing experiment with novel face images as stimuli, observers still produced reliable object-specific preview benefits in dynamic displays: a preview of a novel face on a specific object speeded the recognition of that particular face at a later point when it appeared again on the same object compared to when it reappeared on a different object (beyond display-wide priming), even when all objects moved to new positions in the intervening delay.
This case study demonstrates that the mid-level visual representations which keep track of persisting identity over time (e.g., 'object files', in one popular framework) can store not only abstract types from long-term memory, but also specific tokens from online visual experience.

Item Open Access
The spatial relationship between scanning saccades and express saccades. (Vision Res, 1997-10) Sommer, MA
When monkeys interrupt their saccadic scanning of a visual scene to look at a suddenly appearing target, saccades to the target are made after an "express" latency or after a longer "regular" latency. The purpose of this study was to analyze the spatial patterns of scanning, express, and regular saccades. Scanning patterns were spatially biased. Express saccade patterns were biased, too, and were directly correlated with scanning patterns. Regular saccade patterns were more uniform and were not directly correlated with scanning patterns. Express saccades, but not regular saccades, seemed to be facilitated by preparation to scan. This study contributes to a general understanding of how monkeys examine scenes containing both unchanging and suddenly appearing stimuli.