Browsing by Subject "National Institutes of Health (U.S.)"
Now showing 1 - 7 of 7
Item Open Access Conceptualizing trust in community-academic research partnerships using concept mapping approach: A multi-CTSA study. (Evaluation and program planning, 2018-02) Dave, Gaurav; Frerichs, Leah; Jones, Jennifer; Kim, Mimi; Schaal, Jennifer; Vassar, Stefanie; Varma, Deepthi; Striley, Catherine; Ruktanonchai, Corrine; Black, Adina; Hankins, Jennifer; Lovelady, Nakita; Cene, Crystal; Green, Melissa; Young, Tiffany; Tiwari, Shristi; Cheney, Ann; Cottler, Linda; Sullivan, Greer; Brown, Arleen; Burke, Jessica; Corbie-Smith, Giselle

Objectives:
Collaborations between communities, healthcare practices, and academic institutions are a strategy to address health disparities. Trust is critical to developing and maintaining effective collaborations. The aim of this pilot study was to engage stakeholders in defining the determinants of trust in community-academic research partnerships and to develop a framework for measuring trust.

Methods:
The study was conducted by five collaborating National Institutes of Health (NIH) Clinical and Translational Science Awardees. We used concept mapping to engage three stakeholder groups: community members, healthcare providers, and academicians. We conducted hierarchical cluster analysis to assess the determinants of trust in community-academic research partnerships.

Results:
A total of 186 participants provided input, generating 2,172 items that were consolidated into 125 unique items. A five-cluster solution was defined: authentic, effective, and transparent communication; mutually respectful and reciprocal relationships; sustainability; committed partnerships; and communication, credibility, and methodology to anticipate and resolve problems.

Conclusion:
Results from this study contribute to a growing body of empirical work aimed at better understanding and improving the underlying factors that build and sustain trust in community-academic research partnerships.

Item Open Access Developing Treatment Guidelines During a Pandemic Health Crisis: Lessons Learned From COVID-19. (Annals of internal medicine, 2021-08) Kuriakose, Safia; Singh, Kanal; Pau, Alice K; Daar, Eric; Gandhi, Rajesh; Tebas, Pablo; Evans, Laura; Gulick, Roy M; Lane, H Clifford; Masur, Henry; NIH COVID-19 Treatment Guidelines Panel; Aberg, Judith A; Adimora, Adaora A; Baker, Jason; Kreuziger, Lisa Baumann; Bedimo, Roger; Belperio, Pamela S; Cantrill, Stephen V; Coopersmith, Craig M; Davis, Susan L; Dzierba, Amy L; Gallagher, John J; Glidden, David V; Grund, Birgit; Hardy, Erica J; Hinkson, Carl; Hughes, Brenna L; Johnson, Steven; Keller, Marla J; Kim, Arthur Y; Lennox, Jeffrey L; Levy, Mitchell M; Li, Jonathan Z; Martin, Greg S; Naggie, Susanna; Pavia, Andrew T; Seam, Nitin; Simpson, Steven Q; Swindells, Susan; Tien, Phyllis; Waghmare, Alpana A; Wilson, Kevin C; Yazdany, Jinoos; Zachariah, Philip; Campbell, Danielle M; Harrison, Carly; Burgess, Timothy; Francis, Joseph; Sheikh, Virginia; Uyeki, Timothy M; Walker, Robert; Brooks, John T; Ortiz, Laura Bosque; Davey, Richard T; Doepel, Laurie K; Eisinger, Robert W; Han, Alison; Higgs, Elizabeth S; Nason, Martha C; Crew, Page; Lerner, Andrea M; Lund, Claire; Worthington, Christopher

The development of the National Institutes of Health (NIH) COVID-19 Treatment Guidelines began in March 2020 in response to a request from the White House Coronavirus Task Force. Within 4 days of the request, the NIH COVID-19 Treatment Guidelines Panel was established and the first meeting took place (virtually, as did subsequent meetings).
The Panel comprises 57 individuals representing 6 governmental agencies, 11 professional societies, and 33 medical centers, plus 2 community members, who have worked together to create and frequently update the guidelines on the basis of evidence from the most recent clinical studies available. The initial version of the guidelines was completed within 2 weeks and posted online on 21 April 2020. Initially, sparse evidence was available to guide COVID-19 treatment recommendations. However, treatment data rapidly accrued based on results from clinical studies that used various study designs and evaluated different therapeutic agents and approaches. Data have continued to evolve at a rapid pace, leading to 24 revisions and updates of the guidelines in the first year. This process has provided important lessons for responding to an unprecedented public health emergency: Providers and stakeholders are eager to access credible, current treatment guidelines; governmental agencies, professional societies, and health care leaders can work together effectively and expeditiously; panelists from various disciplines, including biostatistics, are important for quickly developing well-informed recommendations; well-powered randomized clinical trials continue to provide the most compelling evidence to guide treatment recommendations; treatment recommendations need to be developed in a confidential setting free from external pressures; development of a user-friendly, web-based format for communicating with health care providers requires substantial administrative support; and frequent updates are necessary as clinical evidence rapidly emerges.

Item Open Access Improving the Emergency Care Research Investigator Pipeline: SAEM/ACEP Recommendations. (Academic emergency medicine : official journal of the Society for Academic Emergency Medicine, 2015-07) Ranney, Megan L; Limkakeng, Alexander T; Carr, Brendan; Zink, Brian; Kaji, Amy H; ACEP SAEM Research Committees (approved by ACEP-SAEM
Board of Directors)

Item Restricted Quantifying data quality for clinical trials using electronic data capture. (PLoS One, 2008-08-25) Nahm, Meredith L; Pieper, Carl F; Cunningham, Maureen M

BACKGROUND: Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as the paper CRFs typically leveraged for quality measurement are not used in EDC processes.

METHODS AND PRINCIPAL FINDINGS: The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated a methodology for holistically assessing data quality in EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to the absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions.

CONCLUSIONS: Historically, medical record abstraction is the most significant source of error by an order of magnitude, and it should be measured and managed during the course of clinical trials.
Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record; these dependencies should be considered when developing data quality benchmarks.

Item Open Access Race disparity in grants: check the citations. (Science (New York, N.Y.), 2011-11) Erickson, Harold P

Item Open Access Representation of Female Faculty at US Medical Schools and Success in Obtaining National Institutes of Health Funding, 2008-2018. (JAMA network open, 2021-03) Malinzak, Elizabeth Burney; Weikel, Daniel; Swaminathan, Madhav

Item Open Access Research Inclusion Across the Lifespan: A Good Start, but There Is More Work to Be Done. (Journal of general internal medicine, 2023-06) Bowling, C Barrett; Thomas, Jennifer; Gierisch, Jennifer M; Bosworth, Hayden B; Plantinga, Laura

While older adults account for a disproportionate amount of healthcare spending, they are often underrepresented in the clinical research needed to guide their care. The purpose of this perspective is to make readers aware of new data on age at enrollment for participants included in National Institutes of Health (NIH)-funded clinical research. We highlight key findings of relevance to general internal medicine and suggest ways readers could support the inclusion of older adults in clinical research. Data from the NIH Research Inclusion Statistics Report show that 881,385 participants were enrolled in all NIH-funded clinical research in 2021, of whom 170,110 (19%) were 65 years and older. However, on average, studies included a far lower percentage of older adults. Additionally, there were many conditions for which overall enrollment rates for older adults were lower than would be expected. For example, while 10% of participants in studies related to diabetes were ≥ 65 years old, older individuals account for 43% of all prevalent diabetes cases in the USA.
Researchers should work with clinicians to advocate for older adults and ensure their participation in clinical research. Best practices and resources for overcoming common barriers to the inclusion of older adults in research could also be disseminated.
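The representation gap the Research Inclusion abstract describes can be verified with simple arithmetic from the figures it quotes. A minimal Python sketch (function and variable names are illustrative, not from the NIH report):

```python
# Enrollment-representation arithmetic using figures quoted in the abstract
# (2021 NIH Research Inclusion Statistics Report).

def share(part: int, whole: int) -> float:
    """Fraction of `whole` accounted for by `part`."""
    return part / whole

# All NIH-funded clinical research enrollment, 2021.
total_enrolled = 881_385
enrolled_65_plus = 170_110

overall_share = share(enrolled_65_plus, total_enrolled)
print(f"Overall share aged 65+: {overall_share:.0%}")  # 19%

# Disparity example cited for diabetes-related studies: enrollment share
# of older adults vs. their share of prevalent US diabetes.
diabetes_enrollment_share = 0.10  # participants >= 65 in diabetes studies
diabetes_prevalence_share = 0.43  # share of prevalent US diabetes in 65+

# A ratio of 1.0 would mean enrollment mirrors disease burden.
representation_ratio = diabetes_enrollment_share / diabetes_prevalence_share
print(f"Diabetes representation ratio: {representation_ratio:.2f}")  # 0.23
```

On these figures, older adults in diabetes studies are enrolled at less than a quarter of the rate their disease burden would suggest, which is the shortfall the authors highlight.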