An Empirical Comparison of Multiple Imputation Methods for Categorical Data
© 2017 American Statistical Association. Multiple imputation is a common approach for dealing with missing values in statistical databases. The imputer fills in missing values with draws from predictive models estimated from the observed data, resulting in multiple, completed versions of the database. Researchers have developed a variety of default routines to implement multiple imputation; however, there has been limited research comparing the performance of these methods, particularly for categorical data. We use simulation studies to compare repeated sampling properties of three default multiple imputation methods for categorical data, including chained equations using generalized linear models, chained equations using classification and regression trees, and a fully Bayesian joint distribution based on Dirichlet process mixture models. We base the simulations on categorical data from the American Community Survey. In the circumstances of this study, the results suggest that default chained equations approaches based on generalized linear models are dominated by the default regression tree and Bayesian mixture model approaches. They also suggest competing advantages for the regression tree and Bayesian mixture model approaches, making both reasonable default engines for multiple imputation of categorical data. Supplementary material for this article is available online.
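The core idea described in the abstract, drawing imputations from predictive models to produce several completed copies of the data, can be sketched in a few lines of Python. This is a minimal illustrative toy, not the paper's method: real default routines (e.g., chained equations with GLMs or CART) fit a per-variable model conditional on the other variables, whereas this sketch simply draws from each variable's observed marginal distribution to show the multiple-completed-datasets mechanic. All function and variable names here are hypothetical.

```python
import random
from collections import Counter

def impute_once(records, var, rng):
    """Fill missing values of `var` by drawing from the observed
    distribution of that variable (a stand-in for a fitted
    predictive model in a real chained-equations routine)."""
    observed = [r[var] for r in records if r[var] is not None]
    values, weights = zip(*Counter(observed).items())
    return [dict(r, **{var: rng.choices(values, weights)[0]})
            if r[var] is None else dict(r)
            for r in records]

def multiple_impute(records, variables, m=5, seed=0):
    """Produce m completed copies of the data set; analyses are then
    run on each copy and combined with standard MI rules."""
    completed = []
    for i in range(m):
        rng = random.Random(seed + i)  # independent draws per copy
        filled = records
        for var in variables:
            filled = impute_once(filled, var, rng)
        completed.append(filled)
    return completed
```

Because each of the `m` copies uses independent random draws, the variation across completed datasets reflects imputation uncertainty, which is what the repeated-sampling comparisons in the paper evaluate for the GLM, CART, and Bayesian mixture engines.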
Published Version (Please cite this version)
Akande, O., Li, F., and Reiter, J. (2017). An Empirical Comparison of Multiple Imputation Methods for Categorical Data. The American Statistician, 71(2), 162–170. doi:10.1080/00031305.2016.1277158. Retrieved from https://hdl.handle.net/10161/17536.
This citation is constructed from limited available data and may be imprecise. To cite this article, please review and use the official citation provided by the journal.
I work on developing statistical methodology for handling missing and faulty data, with particular emphasis on applications that intersect with the social sciences. I am especially motivated to develop methods that can be readily applied by statistical agencies and data analysts. I completed my PhD in statistical science at Duke in 2019, under the supervision of Jerry Reiter. I obtained an MSc in Statistical and Economic Modeling from Duke in 2015, and a BSc in Mathematics and Statistics from the University of Lagos, Nigeria, in 2010. Prior to coming to Duke, I worked as an analyst at KPMG Professional Services, Nigeria, between 2011 and 2012.
My main research interest is causal inference and its applications to health, policy, and social science. I also work on the interface between causal inference and machine learning. I have developed methods for propensity scores, clinical trials, randomized experiments (e.g., A/B testing), difference-in-differences, regression discontinuity designs, and representation learning. I also work on Bayesian analysis and statistical methods for missing data. I serve as the editor for social science, biostatistics, and policy for the journal Annals of Applied Statistics.
My primary areas of research include methods for preserving data confidentiality, for handling missing values, for integrating information across multiple sources, and for the analysis of surveys and causal studies. I enjoy collaborating on data analyses with researchers who are not statisticians, particularly in the social sciences and public policy.
Unless otherwise indicated, scholarly articles published by Duke faculty members are made available here with a CC-BY-NC (Creative Commons Attribution Non-Commercial) license, as enabled by the Duke Open Access Policy. If you wish to use the materials in ways not already permitted under CC-BY-NC, please consult the copyright owner. Other materials are made available here through the author’s grant of a non-exclusive license to make their work openly accessible.