dc.contributor.author Nahm, ML
dc.contributor.author Pieper, CF
dc.contributor.author Cunningham, MM
dc.coverage.spatial United States
dc.date.accessioned 2011-06-21T17:31:24Z
dc.date.issued 2008-08-25
dc.identifier.citation PLoS One, 2008, 3 (8), pp. e3049
dc.description.abstract BACKGROUND: Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. METHODS AND PRINCIPAL FINDINGS: The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. CONCLUSIONS: Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks.
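The error rate reported in the abstract is obtained by comparing each database field against its source document and counting discrepancies per 10,000 fields inspected. The Python sketch below illustrates that arithmetic only; the record layout and function name are hypothetical and are not taken from the paper or its audit tooling.

    # Illustrative sketch: source-to-database error rate per 10,000 fields.
    # The record layout (one dict of field values per subject) is an assumption.
    def errors_per_10000(source_records, database_records):
        fields_inspected = 0
        discrepancies = 0
        for src, db in zip(source_records, database_records):
            for field, src_value in src.items():
                fields_inspected += 1
                if db.get(field) != src_value:
                    discrepancies += 1
        return 10000 * discrepancies / fields_inspected if fields_inspected else 0.0

    # Example: 2 discrepancies across 1,400 inspected fields gives roughly
    # 14.3 errors per 10,000 fields, the average rate reported in the abstract.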
dc.format.extent e3049 - e3049
dc.language ENG
dc.language.iso en_US en_US
dc.relation.ispartof PLoS One
dc.relation.isversionof 10.1371/journal.pone.0003049
dc.subject Automatic Data Processing
dc.subject Clinical Audit
dc.subject Clinical Trials as Topic
dc.subject Commission on Professional and Hospital Activities
dc.subject Humans
dc.subject National Institute on Drug Abuse (U.S.)
dc.subject National Institutes of Health (U.S.)
dc.subject Organizational Case Studies
dc.subject Research Design
dc.subject United States
dc.title Quantifying data quality for clinical trials using electronic data capture.
dc.type Journal Article
dc.description.version Version of Record en_US
duke.description.endpage e3049 en_US
duke.description.issue 8 en_US
duke.description.startpage e3049 en_US
duke.description.volume 3 en_US
dc.relation.journal PLoS One en_US
pubs.issue 8
pubs.organisational-group /Duke
pubs.organisational-group /Duke/Faculty
pubs.organisational-group /Duke/School of Medicine
pubs.organisational-group /Duke/School of Medicine/Basic Science Departments
pubs.organisational-group /Duke/School of Medicine/Basic Science Departments/Biostatistics & Bioinformatics
pubs.publication-status Published online
pubs.volume 3
dc.identifier.eissn 1932-6203
