Browsing by Author "Cristia, Alejandrina"
Now showing 1 - 4 of 4
Item Open Access
A Collaborative Approach to Infant Research: Promoting Reproducibility, Best Practices, and Theory-Building. (Infancy: The Official Journal of the International Society on Infant Studies, 2017-07)
Frank, Michael C; Bergelson, Elika; Bergmann, Christina; Cristia, Alejandrina; Floccia, Caroline; Gervain, Judit; Hamlin, J Kiley; Hannon, Erin E; Kline, Melissa; Levelt, Claartje; Lew-Williams, Casey; Nazzi, Thierry; Panneton, Robin; Rabagliati, Hugh; Soderstrom, Melanie; Sullivan, Jessica; Waxman, Sandra; Yurovsky, Daniel
The ideal of scientific progress is that we accumulate measurements and integrate these into theory, but recent discussion of replicability issues has cast doubt on whether psychological research conforms to this model. Developmental research, especially with infant participants, also has discipline-specific replicability challenges, including small samples and limited measurement methods. Inspired by collaborative replication efforts in cognitive and social psychology, we describe a proposal for assessing and promoting replicability in infancy research: large-scale, multi-laboratory replication efforts aiming for a more precise understanding of key developmental phenomena. The ManyBabies project, our instantiation of this proposal, will not only help us estimate how robust and replicable these phenomena are, but also gain new theoretical insights into how they vary across ages, linguistic communities, and measurement methods.
This project has the potential for a variety of positive outcomes, including less-biased estimates of theoretically important effects, estimates of variability that can be used for later study planning, and a series of best-practices blueprints for future infancy research.

Item Open Access
Accuracy of the Language Environment Analysis System Segmentation and Metrics: A Systematic Review. (Journal of Speech, Language, and Hearing Research: JSLHR, 2020-04-17)
Cristia, Alejandrina; Bulgarelli, Federica; Bergelson, Elika
Purpose: The Language Environment Analysis (LENA) system provides automated measures facilitating clinical and nonclinical research and interventions on language development, but there are only a few, scattered independent reports of these measures' validity. The objectives of the current systematic review were to (a) discover studies comparing LENA output with manual annotation, namely, accuracy of talker labels, as well as adult word counts (AWCs), conversational turn counts (CTCs), and child vocalization counts (CVCs); (b) describe them qualitatively; (c) quantitatively integrate them to assess central tendencies; and (d) quantitatively integrate them to assess potential moderators.
Method: Searches on Google Scholar, PubMed, Scopus, and PsycInfo were combined with expert knowledge and interarticle citations, resulting in 238 records screened and 73 records whose full text was inspected. To be included, studies had to target children under the age of 18 years and report on the accuracy of LENA labels (e.g., precision and/or recall) and/or on AWC, CTC, or CVC (correlations and/or error metrics).
Results: A total of 33 studies, in 28 articles, were discovered. A qualitative review revealed that most validation studies had not been peer reviewed as such and failed to report key methodology and results.
Quantitative integration of the results was possible for a broad definition of recall and precision (M = 59% and 68%, respectively; N = 12-13), for AWC (mean r = .79, N = 13), CVC (mean r = .77, N = 5), and CTC (mean r = .36, N = 6). Publication bias and moderators could not be assessed meta-analytically.
Conclusion: Further research and improved reporting are needed in studies evaluating LENA segmentation and quantification accuracy, with work investigating CTC being particularly urgent.
Supplemental Material: https://osf.io/4nhms/

Item Open Access
HomeBank: An Online Repository of Daylong Child-Centered Audio Recordings. (Semin Speech Lang, 2016-05)
VanDam, Mark; Warlaumont, Anne S; Bergelson, Elika; Cristia, Alejandrina; Soderstrom, Melanie; De Palma, Paul; MacWhinney, Brian
HomeBank is introduced here. It is a public, permanent, extensible, online database of daylong audio recorded in naturalistic environments. HomeBank serves two primary purposes. First, it is a repository for raw audio and associated files: one database requires special permissions, and another, redacted database allows unrestricted public access. Associated files include metadata such as participant demographics and clinical diagnostics, automated annotations, and human-generated transcriptions and annotations. Many recordings use the child-perspective LENA recorders (LENA Research Foundation, Boulder, Colorado, United States), but various recordings and metadata can be accommodated. The HomeBank database can have both vetted and unvetted recordings, with different levels of accessibility. Second, HomeBank is an open repository for processing and analysis tools for HomeBank or similar data sets. HomeBank is flexible for users and contributors, making primary data available to researchers, especially those in child development, linguistics, and audio engineering.
HomeBank facilitates researchers' access to large-scale data and tools, linking the acoustic, auditory, and linguistic characteristics of children's environments with a variety of variables, including socioeconomic status, family characteristics, language trajectories, and disorders. Automated processing applied to daylong home audio recordings is now becoming widely used in early intervention initiatives, helping parents provide richer speech input to at-risk children.

Item Open Access
Vocal development in a large-scale crosslinguistic corpus. (Developmental Science, 2021-01-26)
Cychosz, Margaret; Cristia, Alejandrina; Bergelson, Elika; Casillas, Marisa; Baudet, Gladys; Warlaumont, Anne S; Scaff, Camila; Yankowitz, Lisa; Seidl, Amanda
This study evaluates whether early vocalizations develop in similar ways in children across diverse cultural contexts. We analyze data from daylong audio recordings of 49 children (1-36 months) from five different language/cultural backgrounds. Citizen scientists annotated these recordings to determine whether child vocalizations contained canonical transitions or not (e.g., "ba" versus "ee"). Results revealed that the proportion of clips reported to contain canonical transitions increased with age. Further, this proportion exceeded 0.15 by around 7 months, replicating and extending previous findings on canonical vocalization development, but using data from the natural environments of a culturally and linguistically diverse sample. This work explores how crowdsourcing can be used to annotate corpora, helping establish developmental milestones relevant to multiple languages and cultures. Lower inter-annotator reliability on the crowdsourcing platform, relative to more traditional in-lab expert annotators, means that a larger number of unique annotators and/or annotations is required and that crowdsourcing may not be a suitable method for more fine-grained annotation decisions.
Audio clips used for this project are compiled into a large-scale infant vocalization corpus that is available for other researchers to use in future work.