An Empirical Comparison of Multiple Imputation Methods for Categorical Data
© 2017 American Statistical Association. Multiple imputation is a common approach for dealing with missing values in statistical databases. The imputer fills in missing values with draws from predictive models estimated from the observed data, resulting in multiple, completed versions of the database. Researchers have developed a variety of default routines to implement multiple imputation; however, there has been limited research comparing the performance of these methods, particularly for categorical data. We use simulation studies to compare repeated sampling properties of three default multiple imputation methods for categorical data, including chained equations using generalized linear models, chained equations using classification and regression trees, and a fully Bayesian joint distribution based on Dirichlet process mixture models. We base the simulations on categorical data from the American Community Survey. In the circumstances of this study, the results suggest that default chained equations approaches based on generalized linear models are dominated by the default regression tree and Bayesian mixture model approaches. They also suggest competing advantages for the regression tree and Bayesian mixture model approaches, making both reasonable default engines for multiple imputation of categorical data. Supplementary material for this article is available online.
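The abstract's description of multiple imputation — fill missing values with draws from predictive models estimated on the observed data, repeated to yield several completed datasets — can be sketched as follows. This is a deliberately simplified illustration: the toy records, column names, and the marginal frequency-draw "model" are assumptions made for brevity, standing in for the paper's default engines (chained GLMs, CART, or Dirichlet process mixtures).

```python
import random

# Toy categorical records with missing values coded as None.
data = [
    {"sex": "F", "employed": "yes"},
    {"sex": "M", "employed": None},
    {"sex": "F", "employed": "yes"},
    {"sex": None, "employed": "no"},
    {"sex": "M", "employed": "no"},
]

def draw_from_observed(records, col):
    """Draw a value from the observed empirical distribution of `col`."""
    observed = [r[col] for r in records if r[col] is not None]
    return random.choice(observed)

def impute_once(records):
    """Fill each missing cell with a draw from a predictive model estimated
    on the observed data (here reduced to a marginal frequency draw)."""
    completed = [dict(r) for r in records]
    for i, r in enumerate(records):
        for col, val in r.items():
            if val is None:
                completed[i][col] = draw_from_observed(records, col)
    return completed

# Multiple imputation: m completed versions of the database. Each completed
# dataset would be analyzed separately and the estimates pooled downstream
# with Rubin's combining rules.
m = 3
imputations = [impute_once(data) for _ in range(m)]
```

In the methods compared by the paper, `impute_once` would instead cycle through the incomplete variables, regressing each on all the others (chained equations with GLMs or CART) or drawing from a fully Bayesian joint model; the marginal draw above only shows the overall fill-and-repeat structure.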
Subjects: Science & Technology; Statistics & Probability; Fully Conditional Specification
Published Version (please cite this version): 10.1080/00031305.2016.1277158
Publication Info: Akande, Olanrewaju; Li, Fan; & Reiter, Jerome (2017). An Empirical Comparison of Multiple Imputation Methods for Categorical Data. The American Statistician, 71(2), pp. 162-170. 10.1080/00031305.2016.1277158. Retrieved from https://hdl.handle.net/10161/17536.
This citation is constructed from limited available data and may be imprecise. To cite this article, please review and use the official citation provided by the journal.
Olanrewaju Akande, Instructor in the Social Science Research Institute
I work on developing statistical methodology for handling missing and faulty data, with particular emphasis on applications that intersect with the social sciences. I am especially motivated to develop methods that can be readily applied by statistical agencies and data analysts. I completed my PhD in statistical science at Duke in 2019, under the supervision of Jerry Reiter. I obtained an MSc in Statistical and Economic Modeling from Duke in 2015, and a BSc in Mathematics and Statistics from
Fan Li, Associate Professor of Statistical Science
Jerome Reiter, Professor of Statistical Science
My primary areas of research include methods for preserving data confidentiality, for handling missing values, for integrating information across multiple sources, and for the analysis of surveys and causal studies. I enjoy collaborating on data analyses with researchers who are not statisticians, particularly in the social sciences and public policy.