CAUSAL INFERENCE FOR HIGH-STAKES DECISIONS
Causal inference methods are widely used across domains to support high-stakes decision-making. The validity of a causal study, however, often rests on strong assumptions that may be unrealistic in high-stakes settings, and inferences based on incorrect assumptions frequently lead to sub-optimal decisions with high penalties and long-term consequences. Unlike prediction or machine learning methods, causal methods are particularly difficult to evaluate from the observed data alone because the ground-truth causal effects are missing for every unit. My research presents frameworks that enable validation of causal inference methods in one of three ways: (i) auditing of the estimation procedure by a domain expert, (ii) studying performance using synthetic data, and (iii) using placebo tests to identify biases. This work enables decision-makers to reason about the validity of the estimation procedure by thinking carefully about the underlying assumptions.

Our Learning-to-Match framework is an auditable and accurate approach that learns an optimal distance metric for estimating heterogeneous treatment effects. We augment the Learning-to-Match framework with pharmacological mechanistic knowledge to study the long-term effects of untreated seizure-like brain activity in critically ill patients; here, the auditability of the estimator allowed neurologists to qualitatively validate the analysis via chart review.

We also propose Credence, a synthetic-data-based framework for validating causal inference methods. Credence simulates data that are stochastically indistinguishable from the observed data while allowing for user-designed treatment effects and selection biases. We demonstrate Credence's ability to accurately assess the relative performance of causal estimation techniques in an extensive simulation study and two real-world data applications.

Finally, we discuss an approach that combines experimental and observational studies.
This approach provides a principled way to test for violations of the no-unobserved-confounding assumption and to estimate treatment effects under such violations.
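To make the matching idea concrete, the following is a minimal sketch of nearest-neighbor treatment-effect estimation under a weighted distance metric. The per-feature weights stand in for the metric that a framework like Learning-to-Match would learn from data; all function names and the user-supplied weights here are illustrative assumptions, not the framework's actual implementation.

```python
import numpy as np

def matching_ate(X, t, y, weights):
    """Estimate the average treatment effect by 1-nearest-neighbor
    matching under a weighted Euclidean distance.

    X: (n, d) covariates; t: (n,) binary treatment; y: (n,) outcomes.
    weights: (d,) nonnegative per-feature importances (a learned
    metric in Learning-to-Match; user-supplied here for illustration).
    """
    Xw = X * np.sqrt(weights)              # rescale features by the metric
    treated, control = Xw[t == 1], Xw[t == 0]
    yt, yc = y[t == 1], y[t == 0]

    def nn_outcome(queries, pool, pool_y):
        # For each query unit, take the outcome of its nearest
        # neighbor in the opposite treatment group.
        d = ((queries[:, None, :] - pool[None, :, :]) ** 2).sum(-1)
        return pool_y[d.argmin(axis=1)]

    effect_t = yt - nn_outcome(treated, control, yc)   # effect on treated
    effect_c = nn_outcome(control, treated, yt) - yc   # effect on controls
    return np.concatenate([effect_t, effect_c]).mean()
```

Because each counterfactual is imputed from a concrete matched unit, a domain expert can audit individual matches (as in the chart review described above) rather than inspecting an opaque model.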
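The placebo-test idea in (iii) can be sketched as follows: apply the causal estimator to a "placebo" outcome that the treatment cannot plausibly affect, such as a measurement taken before treatment; an estimate far from zero flags confounding or other bias in the procedure. The difference-in-means estimator and the tolerance threshold below are illustrative assumptions, not part of the original work.

```python
import numpy as np

def diff_in_means(t, y):
    """Naive estimator: difference in mean outcomes between groups."""
    return y[t == 1].mean() - y[t == 0].mean()

def placebo_check(estimator, t, y_placebo, tol=0.15):
    """Run `estimator` on an outcome the treatment cannot affect
    (e.g. a pre-treatment measurement). A result far from zero
    signals bias; `tol` is an illustrative cutoff, not a
    principled one."""
    est = estimator(t, y_placebo)
    return abs(est) <= tol, est
```

For example, when treatment assignment depends on an unobserved variable that also drives the pre-treatment outcome, the check fails, revealing the confounding that would also bias the estimate for the real outcome.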
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.
Rights for Collection: Duke Dissertations