Quantifying data quality for clinical trials using electronic data capture.

Date

2008-08-25

Repository Usage Stats

394 views, 8 downloads

Abstract

BACKGROUND: Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes.

METHODS AND PRINCIPAL FINDINGS: The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions.

CONCLUSIONS: Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks.
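
The headline figure (14.3 errors per 10,000 fields) is a discrepancy count scaled to a fixed denominator. A minimal sketch of that arithmetic follows; the counts are hypothetical, chosen purely for illustration, and are not the study's code or data.

```python
# Minimal sketch of the error-rate arithmetic only; the counts below are
# hypothetical and are not taken from the study.

def errors_per_10k(discrepant_fields: int, fields_inspected: int) -> float:
    """Return the error rate expressed as errors per 10,000 fields inspected."""
    return 10_000 * discrepant_fields / fields_inspected

# Example: 50 discrepancies found while comparing 35,000 source fields
# against the database corresponds to roughly 14.3 errors per 10,000 fields.
print(round(errors_per_10k(50, 35_000), 1))  # 14.3
```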

Citation

Published Version (Please cite this version)

10.1371/journal.pone.0003049

Publication Info

Nahm, Meredith L., Carl F. Pieper and Maureen M. Cunningham (2008). Quantifying data quality for clinical trials using electronic data capture. PLoS One, 3(8), e3049. doi:10.1371/journal.pone.0003049. Retrieved from https://hdl.handle.net/10161/4503.

This citation is constructed from limited available data and may be imprecise. To cite this article, please review and use the official citation provided by the journal.

Scholars@Duke

Carl F. Pieper

Professor of Biostatistics & Bioinformatics

Analytic Interests:

1) Issues in the Design of Medical Experiments: I explore the use of reliability/generalizability models in experimental design. In addition to incorporating reliability, I study the powering of longitudinal trials with multiple outcomes and substantial missing data using mixed models.

2) Issues in the Analysis of Repeated Measures Designs and Longitudinal Data: Use of hierarchical linear models (HLM), or mixed models, in modeling trajectories of multiple variables over time (e.g., physical and cognitive functioning and blood pressure). My current work involves methodology for the simultaneous estimation of trajectories of multiple variables within and between domains, modeling co-occurring change (see the illustrative sketch below).

Areas of substantive interest: (1) experimental design and analysis in gerontology, geriatrics, and psychiatry; (2) multivariate repeated measures designs.
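
As a rough illustration of the trajectory-modeling approach described above, the sketch below fits a linear mixed model with a random intercept and slope per subject to simulated data. The column names, simulated values, and choice of statsmodels are assumptions made for this example, not drawn from Dr. Pieper's work.

```python
# Illustrative sketch with simulated data; not actual analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subjects, n_visits = 40, 5

# Long-format data: one row per subject per visit (hypothetical columns).
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_visits),
    "time": np.tile(np.arange(n_visits), n_subjects),
})

# Simulate a declining outcome with subject-specific intercepts and slopes.
u0 = rng.normal(0.0, 5.0, n_subjects)[df["subject"]]
u1 = rng.normal(0.0, 0.5, n_subjects)[df["subject"]]
df["outcome"] = 120.0 + u0 + (-1.0 + u1) * df["time"] + rng.normal(0.0, 2.0, len(df))

# Mixed model: fixed effect of time, random intercept and slope by subject.
model = smf.mixedlm("outcome ~ time", df, groups=df["subject"], re_formula="~time")
result = model.fit()
print(result.summary())  # the fixed 'time' coefficient estimates the average trajectory slope
```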


Unless otherwise indicated, scholarly articles published by Duke faculty members are made available here with a CC-BY-NC (Creative Commons Attribution Non-Commercial) license, as enabled by the Duke Open Access Policy. If you wish to use the materials in ways not already permitted under CC-BY-NC, please consult the copyright owner. Other materials are made available here through the author’s grant of a non-exclusive license to make their work openly accessible.