Browsing by Subject "Causal inference"
Item Open Access A Theory of Statistical Inference for Ensuring the Robustness of Scientific Results (2018) Coker, Beau. Inference is the process of using facts we know to learn about facts we do not know. A theory of inference gives the assumptions necessary to get from the former to the latter, along with a definition for and summary of the resulting uncertainty. Any one theory of inference is neither right nor wrong, but merely an axiom that may or may not be useful. Each of the many diverse theories of inference can be valuable for certain applications. However, no existing theory of inference addresses the tendency to choose, from the range of plausible data analysis specifications consistent with prior evidence, those that inadvertently favor one's own hypotheses. Since the biases from these choices are a growing concern across scientific fields, and in a sense the reason the scientific community was invented in the first place, we introduce a new theory of inference designed to address this critical problem. From this theory, we derive "hacking intervals," which are the range of summary statistics one may obtain given a class of possible endogenous manipulations of the data. They make no appeal to hypothetical data sets drawn from imaginary superpopulations. A scientific result with a small hacking interval is more robust to researcher manipulation than one with a larger interval, and is often easier to interpret than a classic confidence interval. Hacking intervals turn out to be equivalent to classical confidence intervals under the linear regression model, and are equivalent to profile likelihood confidence intervals under certain other conditions, which means they may sometimes provide a more intuitive and potentially more useful interpretation of classical intervals.
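A minimal sketch of the hacking-interval idea, under simplifying assumptions (this is an illustration, not the authors' algorithm; the manipulation class here is assumed to be "choice of which control covariates to include" in an OLS regression):

    from itertools import combinations
    import numpy as np

    def ols_coef(X, y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta

    def hacking_interval(treat, controls, y):
        # range of the treatment coefficient over all control subsets
        n, p = len(y), controls.shape[1]
        estimates = []
        for k in range(p + 1):
            for subset in combinations(range(p), k):
                X = np.column_stack([np.ones(n), treat, controls[:, list(subset)]])
                estimates.append(ols_coef(X, y)[1])  # coefficient on treatment
        return min(estimates), max(estimates)

    rng = np.random.default_rng(0)
    Z = rng.normal(size=(200, 3))          # candidate controls
    t = rng.binomial(1, 0.5, size=200)     # randomized treatment indicator
    y = 1.0 * t + Z @ np.array([0.5, -0.3, 0.0]) + rng.normal(size=200)
    print(hacking_interval(t, Z, y))       # a small range suggests a robust result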
Item Open Access Advancements in Probabilistic Machine Learning and Causal Inference for Personalized Medicine (2019) Lorenzi, Elizabeth Catherine. In this dissertation, we present four novel contributions to the field of statistics with the shared goal of personalizing medicine to individual patients. These methods are developed to directly address problems in health care through two subfields of statistics: probabilistic machine learning and causal inference. These projects include improving predictions of adverse events after surgeries and learning the effectiveness of treatments for specific subgroups and for individuals. We begin the dissertation in Chapter 1 with a discussion of personalized medicine, the use of electronic health record (EHR) data, and a brief discussion of learning heterogeneous treatment effects. In Chapter 2, we present a novel algorithm, Predictive Hierarchical Clustering (PHC), for agglomerative hierarchical clustering of current procedural terminology (CPT) codes. PHC clusters subgroups found within our data, rather than individual observations, such that the discovered clusters yield optimal performance of a classification model, specifically for predicting surgical complications. In Chapter 3, we develop a hierarchical infinite latent factor model (HIFM) to appropriately account for the covariance structure across subpopulations in data. We propose a novel Hierarchical Dirichlet Process shrinkage prior on the loadings matrix that flexibly captures the underlying structure of our data across subpopulations while sharing information to improve inference and prediction. We apply this work to the problem of predicting surgical complications using electronic health record data for geriatric patients at Duke University Health System (DUHS). The last chapters of the dissertation address personalized medicine from a causal perspective, where the goal is to understand how interventions affect individuals rather than full populations. In Chapter 4, we address heterogeneous treatment effects across subgroups, where guidance for observational comparisons within subgroups is lacking, as is a connection to classic design principles for propensity score (PS) analyses. We address these shortcomings by proposing a novel propensity score method for subgroup analysis (SGA) that seeks to balance existing strategies in an automatic and efficient way. With the use of overlap weights, we prove that an over-specified propensity model including interactions between subgroups and all covariates results in exact covariate balance within subgroups. This is paired with variable selection approaches to adjust for a possibly overspecified propensity score model. Finally, Chapter 5 discusses our final contribution, a longitudinal matching algorithm aiming to predict individual treatment effects of a medication change for diabetes patients. This project develops a novel and generalizable causal inference framework for learning heterogeneous treatment effects from EHR data. The key methodological innovation is to cast the sparse and irregularly spaced EHR time series into functional data analysis in the design stage to adjust for confounding that changes over time. We conclude the dissertation and discuss future work in Chapter 6, outlining many directions for continued research on these topics.
Item Open Access Advances in Bayesian Hierarchical Modeling with Tree-based Methods (2020) Mao, Jialiang. Developing flexible tools that apply to datasets with large size and complex structure while providing interpretable outputs is a major goal of modern statistical modeling. A family of models that is especially suitable for this task is the Pólya tree type models. Following a divide-and-conquer strategy, these tree-based methods transform the original task into a series of tasks that are smaller in size and easier to solve, while their nonparametric nature guarantees the modeling flexibility to cope with datasets with a complex structure. In this work, we develop three novel tree-based methods that tackle different challenges in Bayesian hierarchical modeling. Our first two methods are designed specifically for microbiome sequencing data, which consist of high-dimensional counts with a complex, domain-specific covariate structure and exhibit large cross-sample variations. These features limit the performance of generic statistical tools and require special modeling considerations. Both methods inherit the flexibility and computational efficiency of general tree-based methods and directly utilize domain knowledge to help infer the complex dependency structure among different microbiome categories by bringing the phylogenetic tree into the modeling framework. An important task in microbiome research is to compare the composition of the microbial community across groups of subjects. We first propose a model for this classic two-sample problem in the microbiome context by transforming the original problem into a multiple testing problem, with a series of tests defined at the internal nodes of the phylogenetic tree. To improve the power of the test, we use a graphical model to allow information sharing among the tests. A regression-type adjustment is also considered to reduce the chance of false discovery. Next, we introduce a model-based clustering method for microbiome count data with a Dirichlet process mixture setup. The phylogenetic tree is used for constructing the mixture kernels to offer a flexible covariate structure. To improve the ability to detect clusters determined by more than just the dominant microbiome categories, a subroutine is introduced in the clustering procedure that selects a subset of internal nodes of the tree which are relevant for clustering. This subroutine is also important in avoiding potential overfitting. Our third contribution is a framework for causal inference through Bayesian recursive partitioning that allows joint modeling of the covariate balance and the potential outcome. With a retrospective perspective, we model the covariates and the outcome conditional on the treatment assignment status. For the challenging multivariate covariate modeling, we adopt a flexible nonparametric prior that focuses on the relation of the covariate distributions under the two treatment groups, while integrating out other aspects of these distributions that are irrelevant for estimating the causal effect.
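A minimal sketch of the node-wise testing idea, much simplified relative to the graphical-model version described above (the tree is supplied as explicit left/right leaf index sets, and the rank test plus Benjamini-Hochberg correction are assumptions of this sketch):

    import numpy as np
    from scipy.stats import mannwhitneyu

    def node_pvalues(counts_a, counts_b, nodes):
        # counts_*: (samples x leaves) count matrices for the two groups;
        # nodes: list of (left_leaves, right_leaves) index lists, one per internal node
        def left_fraction(counts, left, right):
            l = counts[:, left].sum(axis=1)
            r = counts[:, right].sum(axis=1)
            return l / np.maximum(l + r, 1)   # per-sample split proportion at the node
        return np.array([
            mannwhitneyu(left_fraction(counts_a, l, r),
                         left_fraction(counts_b, l, r),
                         alternative="two-sided").pvalue
            for l, r in nodes])

    def benjamini_hochberg(pvals, alpha=0.05):
        # standard BH step-up rule; returns a boolean rejection mask
        order = np.argsort(pvals)
        thresh = alpha * np.arange(1, len(pvals) + 1) / len(pvals)
        passed = pvals[order] <= thresh
        k = passed.nonzero()[0].max() + 1 if passed.any() else 0
        reject = np.zeros(len(pvals), dtype=bool)
        reject[order[:k]] = True
        return reject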
Item Open Access An Investigation into the Bias and Variance of Almost Matching Exactly Methods (2021) Morucci, Marco. The development of interpretable causal estimation methods is a fundamental problem for high-stakes decision settings in which results must be explainable. Matching methods are highly explainable, but often lack the accuracy of black-box nonparametric models for causal effects. In this work, we investigate theoretically the statistical bias and variance of Almost Matching Exactly (AME) methods for causal effect estimation. These methods aim to overcome the inaccuracy of matching by learning, on a separate training dataset, an optimal metric on which to match units. While these methods are both powerful and interpretable, we currently lack an understanding of their statistical properties. We present a theoretical characterization of the finite-sample and asymptotic properties of AME. We show that AME with discrete data has bounded bias in finite samples, and is asymptotically normal and consistent at a root-n rate. Additionally, we show that AME methods for matching on networked data also have bounded bias and variance in finite samples, and achieve asymptotic consistency in sufficiently sparse graphs. Our results can be used to motivate the construction of approximate confidence intervals around AME causal estimates, providing a way to quantify their uncertainty.
Item Open Access Bayesian Estimation and Sensitivity Analysis for Causal Inference (2019) Zaidi, Abbas M. This dissertation aims to explore Bayesian methods for causal inference. In Chapter 1, we present an overview of fundamental ideas from causal inference along with an outline of the methodological developments that we hope to tackle. In Chapter 2, we develop a Gaussian-process mixture model for heterogeneous treatment effect estimation that leverages transformed outcomes. The approach we present attempts to improve point estimation and uncertainty quantification relative to past work that has used transformed-variable methods as well as traditional outcome modeling. Earlier work on modeling treatment effect heterogeneity using transformed outcomes has relied on tree-based methods such as single regression trees and random forests. Under the umbrella of nonparametric models, outcome modeling has been performed using Bayesian additive regression trees and various flavors of weighted single trees. These approaches work well when large samples are available, but suffer in smaller samples where results are more sensitive to model misspecification; our method attempts to garner improvements in inference quality via a correctly specified model rooted in Bayesian nonparametrics. Furthermore, while we begin with a model that assumes that the treatment assignment mechanism is known, an extension where it is learned from the data is presented for applications to observational studies. Our approach is applied to simulated and real data to demonstrate the theorized improvements in inference with respect to two causal estimands: the conditional average treatment effect and the average treatment effect. By leveraging our correctly specified model, we are able to more accurately estimate the treatment effects while reducing their variance. In Chapter 3, we parametrically and hierarchically estimate the average causal effects of different lengths of stay in the Udayan Ghar Program under the assumption that selection into different lengths is based on a set of observed covariates. This program was piloted in New Delhi, India as a means of providing a residential surrogate to vulnerable and at-risk children with the hope of improving their psychological development. We find that the estimated effects on the psychological constructs of self-concept and ego resilience (measured by the standardized Piers-Harris score) increase with the length of time spent in the program. We are also able to conclude that measurable differences exist between male and female children who spend time in the program. In Chapter 4, we supplement the hierarchical dose-response function estimation by introducing a novel sensitivity analysis and summarization strategy for assessing the robustness of our results to violations of the assumption of unconfoundedness. Finally, in Chapter 5, we summarize what this dissertation has achieved, and briefly outline important areas where our work warrants further development.
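For context, the transformed-outcome device referenced above has a compact standard form (stated here for orientation; it is the usual identity behind such estimators, not a formula quoted from the dissertation). With binary treatment T, outcome Y, and propensity score e(X) = P(T = 1 | X), define

\[ Y^{*} \;=\; \frac{T - e(X)}{e(X)\,\{1 - e(X)\}}\, Y . \]

Under unconfoundedness, \( \mathbb{E}[\,Y^{*} \mid X = x\,] = \tau(x) \), the conditional average treatment effect, so a flexible regression of \( Y^{*} \) on covariates (here, a Gaussian-process mixture) targets treatment effect heterogeneity directly.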
Item Open Access Bayesian Mixture Modeling Approaches for Intermediate Variables and Causal Inference (2010) Schwartz, Scott Lee. This thesis examines causal inference topics involving intermediate variables, and uses Bayesian methodologies to advance analysis capabilities in these areas. First, joint modeling of outcome variables with intermediate variables is considered in the context of birthweight and censored gestational age analyses. The proposed methodology provides improved inference capabilities for birthweight and gestational age, avoids post-treatment selection bias problems associated with conditional-on-gestational-age analyses, and appropriately assesses the uncertainty associated with censored gestational age. Second, principal stratification methodology for settings where causal inference analysis requires appropriate adjustment of intermediate variables is extended to observational settings with binary treatments and binary intermediate variables. This is done by uncovering the structural pathways of unmeasured confounding affecting principal stratification analysis and directly incorporating them into a model-based sensitivity analysis methodology. Demonstration focuses on a study of the efficacy of influenza vaccination in elderly populations. Third, the flexibility, interpretability, and capability of principal stratification analyses for continuous intermediate variables are improved by replacing the current fully parametric methodologies with semiparametric Bayesian alternatives. This presentation is one of the first uses of nonparametric techniques in causal inference analysis, and opens a connection between these two fields. Demonstration focuses on two studies, one involving a cholesterol reduction drug, and one examining the effect of physical activity on cardiovascular disease as it relates to body mass index.
Item Open Access CAUSAL INFERENCE FOR HIGH-STAKES DECISIONS (2023) Parikh, Harsh J. Causal inference methods are commonly used across domains to aid high-stakes decision-making. The validity of causal studies often relies on strong assumptions that might not be realistic in high-stakes scenarios. Inferences based on incorrect assumptions frequently result in sub-optimal decisions with high penalties and long-term consequences. Unlike prediction or machine learning methods, it is particularly challenging to evaluate the performance of causal methods using just the observed data, because the ground truth causal effects are missing for all units. My research presents frameworks that enable validation of causal inference methods in one of three ways: (i) auditing the estimation procedure by a domain expert, (ii) studying performance using synthetic data, and (iii) using placebo tests to identify biases. This work enables decision-makers to reason about the validity of the estimation procedure by thinking carefully about the underlying assumptions. Our Learning-to-Match framework is an auditable-and-accurate approach that learns an optimal distance metric for estimating heterogeneous treatment effects. We augment the Learning-to-Match framework with pharmacological mechanistic knowledge to study the long-term effects of untreated seizure-like brain activity in critically ill patients. Here, the auditability of the estimator allowed neurologists to qualitatively validate the analysis via a chart review. We also propose Credence, a synthetic-data-based framework for validating causal inference methods. Credence simulates data that is stochastically indistinguishable from the observed data while allowing for user-designed treatment effects and selection biases. We demonstrate Credence's ability to accurately assess the relative performance of causal estimation techniques in an extensive simulation study and two real-world data applications. Finally, we discuss an approach that combines experimental and observational studies, providing a principled way to test for violations of the no-unobserved-confounder assumption and to estimate treatment effects when it is violated.
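A minimal sketch of the placebo-test idea in (iii), as a generic illustration rather than the dissertation's procedure: estimate the "effect" of treatment on an outcome measured before treatment, whose true causal effect is zero by construction; an estimate far from zero flags bias such as unobserved confounding.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def ipw_estimate(X, t, y):
        # Hajek-style inverse-probability-weighted difference in means
        e = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
        w1, w0 = t / e, (1 - t) / (1 - e)
        return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)

    def placebo_test(X, t, y_pre):
        # y_pre is a pre-treatment outcome, so the true effect on it is zero;
        # estimates far from zero (relative to, say, a bootstrap spread)
        # indicate that the adjustment strategy is biased
        return ipw_estimate(X, t, y_pre)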
Item Open Access Causal Inference for Natural Language Data and Multivariate Time Series (2023) Tierney, Graham. The central theme of this dissertation is causal inference for complex data, highlighting how, for certain estimation problems, collecting more data has limited benefit. The central application areas are natural language data and multivariate time series. For text, large language models are trained on predictive tasks that are not necessarily well-suited for causal inference. Moreover, documents that vary in some treatment feature will often also vary systematically in other, unknown ways that prohibit attribution of causal effects to the feature of interest. Multivariate time series, even with high-quality contemporaneous predictors, still exhibit positive dependencies, such that even with many treated and control units, the amount of information available to estimate causal quantities is quite low.
Chapter 2 builds a model for short text, as is typically found on social media platforms. Chapter 3 analyzes a randomized experiment that paired Democrats and Republicans to have a conversation about politics, then develops a sensitivity procedure to test for mediation effects attributable to the politeness of the conversation. Chapter 4 expands on the limitations of observational, model-based methods for causal inference with text and designs an experiment to validate how significant those limitations are. Chapter 5 covers experimentation with multivariate time series.
The general conclusion from these chapters is that causal inference always requires untestable assumptions. A researcher trying to make causal conclusions needs to understand the underlying structure of the problem they are studying to validate whether those assumptions hold. The work here shows how to still conduct causal analysis when commonly made assumptions are violated.
Item Open Access Communities in Social Networks: Detection, Heterogeneity and Experimentation (2022) Mathews, Heather. The study of network data in the social and health sciences frequently concentrates on understanding how and why connections form. In particular, the task of determining latent mechanisms driving connection has received a lot of attention across statistics, machine learning, and information theory. In social networks, this mechanism often manifests as community structure. As a result, this work provides methods for discovering and leveraging these communities to better understand networks and the data they generate.
We provide three main contributions. First, we present methodology for performing community detection in challenging regimes. Existing literature has focused on modeling the spectral embedding of a network using Gaussian mixture models (GMMs) in scaling regimes where the ability to detect community memberships improves with the size of the network. However, these regimes are not very realistic. As such, we provide tractable methodology motivated by new theoretical results for networks with non-vanishing noise by using GMMs that incorporate truncation and shrinkage effects.
Further, when covariate information is available, often we want to understand how covariates impact connections. It is likely that the effects of covariates on edge formation differ between communities (e.g. age might play a different role in friendship formation in communities across a city). To address this issue, we introduce a latent space network model where coefficients associated with certain covariates can depend on latent community membership of the nodes. We show that ignoring such structure can lead to either over- or under-estimation of covariate importance to edge formation and propose a Markov Chain Monte Carlo approach for simultaneously learning the latent community structure and the community specific coefficients.
Finally, we consider how community structure can impact experimentation. Communities can act in different ways, and it is natural that this propagates into experimental design. This observation motivates our development of community-informed experimental design, sketched below. This design recognizes that information between individuals likely flows along within-community edges rather than across-community edges. We demonstrate that this design improves estimation of the global average treatment effect, even when the community structure of the graph must be estimated.
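A minimal sketch of community-informed (cluster-level) randomization, under simplifying assumptions: the communities are estimated with an off-the-shelf modularity method, not the dissertation's procedure.

    import numpy as np
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def community_informed_assignment(G, p=0.5, seed=0):
        # detect communities, then flip one treatment coin per community,
        # so most interference (within-community edges) stays within arms
        rng = np.random.default_rng(seed)
        assign = {}
        for com in greedy_modularity_communities(G):
            z = int(rng.binomial(1, p))
            for node in com:
                assign[node] = z
        return assign

    G = nx.planted_partition_graph(4, 25, 0.2, 0.01, seed=1)
    z = community_informed_assignment(G)
    # a difference in mean observed outcomes between arms then estimates the
    # global average treatment effect with less interference bias than
    # node-level Bernoulli randomization on the same graph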
Item Open Access Essays on Propensity Score Methods for Causal Inference in Observational Studies (2018) Nguyen, Nghi Le Phuong. In this dissertation, I present three essays from three different research projects, each involving a different use of propensity scores in drawing causal inferences in observational studies.
Chapter 1 discusses the general idea of causal inference as well as the concepts of randomized experiments and observational studies. It introduces the three projects and their contributions to the literature.
Chapter 2 gives a critical review and an extensive discussion of several commonly used propensity score methods when the data have a multilevel structure, including matching, weighting, stratification, and methods that combine these with regression. The usage of these methods is illustrated using a data set about endoscopic vein-graft harvesting in coronary artery bypass graft (CABG) surgeries. We discuss important aspects of the implementation of these methods, such as model specification and standard error calculations. Based on the comparison, we provide general guidelines for using propensity score methods with multilevel data in practice. We also provide the relevant code in the form of an R package, available on GitHub.
In observational studies, subjects are no longer assigned to treatment at random as in randomized experiments, and thus the association between the treatment and the outcome can be due to some unmeasured variable that affects both. Chapter 3 focuses on conducting sensitivity analysis to assess the robustness of the estimated quantity when the unconfoundedness assumption is violated. Two approaches to sensitivity analysis are presented, both extending previous work to accommodate a count outcome. One is based on the subclassification estimator and relies on maximum likelihood estimation; the second is more flexible in its estimation method and is based on simulation. We illustrate both methods using a data set from a traffic safety research study about the safety effectiveness (measured as a reduction in crash counts) of the combined application of center line rumble strips and shoulder rumble strips on two-lane rural roads in Pennsylvania.
Chapter 4 proposes a method for estimating heterogeneous causal effects in observational studies by augmenting additive-interactive Gaussian process regression with the propensity scores, yielding a flexible yet robust way to predict the potential outcome surface from which conditional treatment effects can be calculated. We show that our method works well even in the presence of strong confounding, and illustrate this by comparing with commonly used methods in different settings using simulated data.
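A minimal sketch of the augmentation idea, simplified relative to the chapter's additive-interactive formulation (separate per-arm GP fits and a plain RBF kernel are assumptions of this sketch):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel
    from sklearn.linear_model import LogisticRegression

    def ps_augmented_gp_cate(X, t, y):
        # estimate the propensity score and append it to the covariates
        e = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
        Xa = np.column_stack([X, e])
        kernel = RBF() + WhiteKernel()
        # fit one potential-outcome surface per treatment arm
        gp1 = GaussianProcessRegressor(kernel=kernel).fit(Xa[t == 1], y[t == 1])
        gp0 = GaussianProcessRegressor(kernel=kernel).fit(Xa[t == 0], y[t == 0])
        return gp1.predict(Xa) - gp0.predict(Xa)   # per-unit effect estimates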
Finally, Chapter 5 concludes this dissertation and discusses possible future work for each of the projects.
Item Open Access Essays on Theoretical Methods for Environmental and Developmental Economics Policy Analysis (2020) Mallampalli, Varun. This dissertation contributes to the fields of environmental, natural resource, and development economics. It contains three essays, each tackling related but different sets of questions by developing theoretical, analytical, and econometric methods for policy-relevant analysis. In the first essay, I develop theoretical models to discuss how fossil fuel firms may respond to anticipated climate-friendly policies by intensifying resource extraction from existing reserve bases (the green paradox) and/or by reducing investments in expanding the pool of extractable reserves. In the second essay, I construct theoretical models to discuss the design of institutions for regulating novel climate-altering geoengineering technologies, first exploring the dangers of a lack of carbon policy commitment and then suggesting institutional solutions that draw from the monetary policy literature. Finally, in the third essay, I consider the design of a multiple-cutoff regression discontinuity design and show how it can be used to answer policy-relevant questions in development economics in situations involving multiple treatments and treatment conditions. Collectively, the studies involve theoretical ideas and concepts that help in understanding the impact of policy uncertainty, in designing institutions for policy governance, and in estimating the impacts of previously implemented policies.
Item Open Access Interpretable Almost-Matching Exactly with Instrumental Variables (2019) Liu, Yameng. We aim to create the highest possible quality of treatment-control matches for categorical data in the potential outcomes framework.
The method proposed in this work matches units on a weighted Hamming distance, taking into account the relative importance of the covariates. To match units on as many relevant variables as possible, the algorithm creates a hierarchy of covariate combinations on which to match (similar to downward closure), in the process solving an optimization problem for each unit in order to construct the optimal matches. The algorithm uses a single dynamic program to solve all of the units' optimization problems simultaneously. Notable advantages of our method over existing matching procedures are its high-quality interpretable matches, versatility in handling different data distributions that may have irrelevant variables, and ability to handle missing data by matching on as many available covariates as possible. We also adapt the matching framework, using instrumental variables (IV), to the presence of observed categorical confounding that breaks the randomness assumption, and propose an approximate algorithm that speedily generates high-quality interpretable solutions. We show that our algorithms construct better matches than other existing methods on simulated datasets, and produce interesting results in applications to crime intervention and political canvassing.
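A minimal sketch of matching on a weighted Hamming distance; the learned importance weights and the dynamic-program machinery described above are omitted, and the weights here are assumed given:

    import numpy as np

    def weighted_hamming_match(X_treat, X_ctrl, w):
        # for each treated unit, find the control minimizing the weighted
        # Hamming distance sum_j w_j * 1{x_j != x'_j} over categorical covariates
        matches = []
        for x in X_treat:
            d = (X_ctrl != x) @ w          # weighted count of mismatches
            matches.append(int(np.argmin(d)))
        return matches

    X_treat = np.array([[0, 1, 2], [1, 1, 0]])
    X_ctrl = np.array([[0, 1, 0], [0, 1, 2], [2, 0, 1]])
    w = np.array([0.6, 0.3, 0.1])          # assumed covariate importance weights
    print(weighted_hamming_match(X_treat, X_ctrl, w))  # -> [1, 0]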
Item Open Access Machine Learning for Uncertainty with Application to Causal Inference (2022) Zhou, Tianhui. Effective decision making requires understanding the uncertainty inherent in a problem. This covers a wide scope in statistics, from deriving an estimator to training a predictive model. In this thesis, I spend three chapters discussing new uncertainty methods developed for solving individual- and population-level inference problems, with their theory and applications in causal inference. I also detail the limitations of existing approaches and why my proposed methods lead to better performance.
In the first chapter, I introduce a novel approach, Collaborating Networks (CN), to capture predictive distributions in regression. It defines two neural networks with two distinct loss functions to approximate the cumulative distribution function and its inverse, respectively and collectively. This gives CN extra flexibility by bypassing the need to assume an explicit distribution family, such as the Gaussian. Empirically, CN generates sharp intervals with reliable coverage.
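A minimal sketch of the CDF-approximating half of such a scheme; CN's paired inverse network and its particular losses are omitted, and this sketch leans only on the generic fact that F(y|x) = P(Y <= y | x) can be fit with a cross-entropy objective on indicators 1{y_i <= y}:

    import torch
    import torch.nn as nn

    class CDFNet(nn.Module):
        # g(x, y) in (0, 1) approximates the conditional CDF F(y | x)
        def __init__(self, x_dim, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(x_dim + 1, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1), nn.Sigmoid())

        def forward(self, x, y):
            return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)

    def train_step(model, opt, x, y_obs):
        # sample query levels y, label with 1{y_obs <= y}, fit by cross-entropy;
        # the conditional probability E[1{Y <= y} | x] = F(y | x) minimizes this loss
        y = y_obs[torch.randperm(len(y_obs))].unsqueeze(-1)
        labels = (y_obs.unsqueeze(-1) <= y).float().squeeze(-1)
        loss = nn.functional.binary_cross_entropy(model(x, y), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()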
In the second chapter, I extend CN to estimate individual treatment effects in observational studies. It is augmented by a new adjustment scheme developed through representation learning, which is shown to effectively alleviate the imbalance between treatment groups. Moreover, a new evaluation criterion is suggested that combines the estimated uncertainty with variation in utility functions (e.g., variability in risk tolerance) for more comprehensive decision making, whereas traditional approaches only study an individual's outcome change due to a potential treatment.
In the last chapter, I present an analysis pipeline for causal inference with propensity score weighting. Compared to other pipelines for similar purposes, this package comprises a wider range of functionalities, providing an exhaustive design and analysis platform that enables users to construct different estimators and assess their uncertainties. It offers six major advantages: it incorporates (i) visualization and diagnostic tools for checking covariate overlap and balance, (ii) a general class of balancing weights, (iii) comparisons for multiple treatments, (iv) simple and augmented (doubly robust) weighting estimators, (v) nuisance-adjusted sandwich variances, and (vi) ratio estimands for binary and count outcomes.
Item Open Access Modeling and Methodological Advances in Causal Inference (2021) Zeng, Shuxi. This thesis presents several novel modeling and methodological advances in causal inference. First, we investigate the use of propensity score weighting in randomized trials for covariate adjustment. We introduce the class of balancing weights and study its theoretical properties. We demonstrate that it is asymptotically equivalent to the analysis of covariance (ANCOVA) and derive the closed-form variance estimator. We further recommend the overlap weighting estimator based on its semiparametric efficiency and good finite-sample performance. Next, we focus on comparative effectiveness studies with survival outcomes. As opposed to approaches that couple weighting with a Cox proportional hazards model, we follow a 'once for all' approach and construct pseudo-observations of the censored outcomes. We study the theoretical properties of the propensity score weighting estimator based on pseudo-observations and provide closed-form variance estimators. The third contribution lies in the domain of causal mediation analysis, which studies how much of the treatment effect is mediated or explained through a given intermediate variable. Existing approaches are not directly applicable to scenarios where both the mediator and the outcome are measured on sparse and irregular time grids. We propose a causal mediation framework that treats the sparse and irregular data as realizations of smooth processes and provide the assumptions for nonparametric identification. We also provide a functional principal component analysis (FPCA) approach for estimation and carry out inference within a Bayesian paradigm. Furthermore, we study how to achieve double robustness with machine learning approaches. We develop a new algorithm that learns double-robust representations in observational studies, learning the low-dimensional representations and the balancing weights simultaneously. Lastly, we study how to build a robust prediction model by exploiting causal relationships. From a causal perspective, we argue that robust models should capture stable causal relationships rather than spurious correlations. We propose a causal transfer random forest method that learns the stable causal relationships efficiently from a large amount of observational data and a small amount of randomized data. We provide theoretical justifications and validate the algorithm empirically with synthetic experiments and real-world prediction tasks.
In summary, this thesis contributes to three major areas of causal inference: (i) propensity score weighting methods for randomized experiments and observational studies, covering (a) randomized controlled trials (Chapter 2) and (b) survival outcomes (Chapter 3); (ii) causal mediation analysis with sparse and irregular longitudinal data (Chapter 4); and (iii) machine learning methods for causal inference, covering (a) double robustness (Chapter 5) and (b) the causal transfer random forest (Chapter 6).
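A minimal sketch of the pseudo-observation construction used in the survival part (jackknife form for the restricted mean survival time; the bare-bones Kaplan-Meier and RMST code below is a stand-in, not the thesis's implementation):

    import numpy as np

    def km_rmst(time, event, tau):
        # restricted mean survival time up to tau from a Kaplan-Meier fit
        order = np.argsort(time)
        t, d = time[order], event[order]
        n = len(t)
        at_risk = n - np.arange(n)
        surv = np.cumprod(1 - d / at_risk)           # S(t) at each observed time
        grid = np.concatenate([[0.0], np.minimum(t, tau)])
        s = np.concatenate([[1.0], surv])
        return np.sum(np.diff(grid) * s[:-1])        # area under S up to tau

    def pseudo_observations(time, event, tau):
        # theta_i = n * theta_hat - (n - 1) * theta_hat(-i); these behave like
        # uncensored outcomes and can be plugged into weighting estimators
        n = len(time)
        full = km_rmst(time, event, tau)
        keep = np.ones(n, dtype=bool)
        pseudo = np.empty(n)
        for i in range(n):
            keep[i] = False
            pseudo[i] = n * full - (n - 1) * km_rmst(time[keep], event[keep], tau)
            keep[i] = True
        return pseudo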
Item Open Access Multisensory Integration, Segregation, and Causal Inference in the Superior Colliculus (2020) Mohl, Jeffrey Thomas. The environment is sampled by multiple senses, which are woven together to produce a unified perceptual state. However, unifying these senses requires assigning particular signals to the same or different underlying objects or events. Sensory signals originating from the same source should be integrated together, while signals originating from separate sources should be segregated from one another. Each of these computations is associated with different neural encoding strategies, and it is unknown how these strategies interact. Here, we begin to characterize how this problem is solved in the primate brain. First, we developed a behavioral paradigm and applied a computational modeling approach to demonstrate that monkeys, like humans, implement a form of Bayesian causal inference to decide whether two stimuli (one auditory and one visual) originated from the same source. We then recorded single-unit neural activity from a representative multisensory brain region, the superior colliculus (SC), while monkeys performed this task. We found that SC neurons encoded either segregated unisensory or integrated multisensory target representations in separate sub-populations of neurons. These responses were well described by a weighted linear combination of unisensory responses that did not account for spatial separation between targets, suggesting that SC sensory responses did not immediately discriminate between common-cause and separate-cause conditions as predicted by Bayesian causal inference. These responses became less linear as the trial progressed, hinting that such a causal inference may evolve over time. Finally, we implemented a single-trial analysis method to determine whether the observed linearity was indicative of true weighted combinations on each trial, or whether this observation was an artifact of pooling data across trials. We found that initial sensory responses (0-150 ms) were well described by linear models even at the single-trial level, but that later sustained (150-600 ms) and saccade-period responses were instead better described as fluctuating between encoding either the auditory or the visual stimulus alone. We also found that these fluctuations were correlated with behavior, suggesting that they may reflect a convergence from the SC encoding all potential targets to preferentially encoding only a specific target on a given trial. Together, these results demonstrate that non-human primates (like humans) perform an idealized version of Bayesian causal inference, that this inference may depend on separate sub-populations of neurons maintaining either integrated or segregated stimulus representations, and that these responses then evolve over time to reflect more complex encoding rules.
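For orientation, the Bayesian causal inference computation referenced here is usually formalized as follows (a standard textbook form, not taken from the dissertation). Given noisy auditory and visual measurements \( x_A, x_V \) of possibly common source locations, an observer computes the posterior probability of a common cause \( C = 1 \):

\[ P(C=1 \mid x_A, x_V) \;=\; \frac{P(x_A, x_V \mid C=1)\,\pi}{P(x_A, x_V \mid C=1)\,\pi + P(x_A, x_V \mid C=2)\,(1-\pi)}, \]

where \( \pi \) is the prior probability of a common cause and the likelihoods integrate Gaussian measurement noise over the unknown source location(s). Reported locations then combine the integrated (common-cause) and segregated (separate-cause) estimates in proportion to this posterior.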
Item Open Access Principled Deep Learning for Healthcare Applications (2023) Assaad, Serge. Healthcare stands to benefit from the advent of deep learning on account of (i) the massive amounts of data generated by the health system and (ii) the ability of deep models to make predictions from complex inputs. This dissertation centers on two applications of deep learning to challenging problems in healthcare.
First, we discuss deep learning for treatment effect/counterfactual estimation in the observational setting, i.e., where the treatment assignment is not randomized (Chapters 2 and 3). For example, we may want to know the causal effect of a drug on a patient's blood pressure. We combine deep learning with classical weighting techniques to estimate average and conditional average treatment effects from observational data. We show theoretical properties of our method, including guarantees about when "balance" can be achieved between treatment groups. We then weaken the typical "ignorability" assumption and generate treatment effect intervals (instead of point estimates).
Second, we explore the use of deep learning applied to a difficult problem in medical imaging: classifying malignancy from thyroid cytopathology slides (Chapters 4, 5, and 6). The difficulty of this problem arises from the image size, which is typically on the order of tens of gigabytes (i.e., around 3 to 4 orders of magnitude larger than image sizes in popular deep learning architectures). Our approach is a two-step process: (i) automatically finding image regions containing follicular cell groups, (ii) classifying each region and aggregating the predictions. We show that our system works well for mobile phone images of thyroid biopsy slides, and that our system compares favorably with state-of-the-art genetic testing for malignancy.
Finally, after my Ph.D. I plan to enter a career in autonomous driving. As an "epilogue" of this dissertation (Chapter 7), we present a method for making deep point-cloud models for autonomous driving invariant (or equivariant) to rotations. Intuitively, this is an important requirement: a rotated bicycle should still be classified as a bicycle, and driving behavior should be independent of the direction of travel. However, most deep learning models used in autonomous driving today do not satisfy these properties exactly. We propose a practical model (based on the Transformer architecture) to address this pitfall, and we showcase its performance on point-cloud classification and trajectory forecasting tasks.
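A toy illustration of rotation invariance for point clouds, far simpler than the Transformer-based model described above: any feature built from norms of centered points is unchanged by a rotation R, since ||Rp|| = ||p||.

    import numpy as np

    def invariant_signature(points):
        # sorted norms of centered points: identical (up to float error)
        # for `points` and `points @ R.T` for any rotation matrix R
        centered = points - points.mean(axis=0)
        return np.sort(np.linalg.norm(centered, axis=1))

    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(500, 3))
    theta = 0.7
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0, 0.0, 1.0]])
    assert np.allclose(invariant_signature(cloud), invariant_signature(cloud @ R.T))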
Item Open Access Propensity Score Methods For Causal Subgroup Analysis (2022) Yang, Siyun. Subgroup analyses are frequently conducted in comparative effectiveness research and randomized clinical trials to assess evidence of heterogeneous treatment effects across patient subpopulations. Though such analyses are widely used in medical research, causal inference methods for conducting them on a range of pre-specified subpopulations remain underdeveloped, particularly in observational studies. This dissertation develops and extends propensity score methods for causal subgroup analysis.
In Chapter 2, we develop a suite of analytical methods and visualization tools for causal subgroup analysis. First, we introduce the estimand of the subgroup weighted average treatment effect and provide the corresponding propensity score weighting estimator. We show that balancing covariates within a subgroup bounds the bias of the estimator of subgroup causal effects. Second, we propose to use the overlap weighting method to achieve exact balance within subgroups. We further propose a method that combines overlap weighting and LASSO to balance the bias-variance tradeoff in subgroup analysis. Finally, we design a new diagnostic plot, the Connect-S plot, for visualizing subgroup covariate balance. Extensive simulation studies are presented to compare the proposed method with several existing methods. We apply the proposed methods to the observational COMPARE-UF study to evaluate the causal effects of myomectomy versus hysterectomy on the relief of symptoms and quality of life (a continuous outcome) in a number of pre-specified subgroups of patients with uterine fibroids.
In Chapter 3, we investigate propensity score weighting for causal subgroup analysis with time-to-event outcomes. We introduce two causal estimands, the subgroup marginal hazard ratio and the subgroup restricted average causal effect, and provide corresponding propensity score weighting estimators. We analytically establish that the bias of the subgroup restricted average causal effect is determined by subgroup covariate balance. Using extensive simulations, we compare the performance of various combinations of propensity score models (logistic regression, random forests, LASSO, and generalized boosted models) and weighting schemes (inverse probability weighting and overlap weighting) for estimating the survival causal estimands. We find that the logistic model with subgroup-covariate interactions selected by LASSO consistently outperforms other propensity score models. Also, overlap weighting generally outperforms inverse probability weighting in terms of balance, bias, and variance, and the advantage is particularly pronounced in small subgroups and/or in the presence of poor overlap. Again, we apply the methods to the COMPARE-UF study with a time-to-event outcome, the time to disease recurrence after receiving a procedure.
In Chapter 4, we extend the propensity score weighting methodology for covariate adjustment to improve the precision and power of subgroup analyses in randomized clinical trials. We fit a logistic propensity model with pre-specified covariate-subgroup interactions. We show that, by construction, overlap weighting exactly balances the covariates with interaction terms in each subgroup. Extensive simulations are performed to compare the operating characteristics of the unadjusted estimator, several propensity score weighting estimators, and the analysis of covariance estimator. We apply these methods to the HF-ACTION trial to evaluate the effect of exercise training on the 6-minute walk test in several pre-specified subgroups.
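A minimal sketch of this construction, under simplifying assumptions (one binary subgroup variable; binary treatment, for which the overlap weights are 1 - e(x) for treated units and e(x) for controls; an unpenalized logistic fit, which is what makes the balance exact):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def subgroup_overlap_weights(X, s, t):
        # propensity model with a subgroup main effect and subgroup-by-covariate
        # interactions, mirroring the over-specified model described above;
        # penalty=None (scikit-learn >= 1.2) gives the unpenalized MLE
        design = np.column_stack([X, s, X * s[:, None]])
        fit = LogisticRegression(penalty=None, max_iter=5000).fit(design, t)
        e = fit.predict_proba(design)[:, 1]
        return np.where(t == 1, 1 - e, e)

    def check_subgroup_balance(X, s, t, w):
        # weighted covariate means should agree between arms within each subgroup
        for g in (0, 1):
            i1, i0 = (s == g) & (t == 1), (s == g) & (t == 0)
            mu1 = np.average(X[i1], axis=0, weights=w[i1])
            mu0 = np.average(X[i0], axis=0, weights=w[i0])
            print(f"subgroup {g}: max abs mean diff = {np.abs(mu1 - mu0).max():.2e}")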
Item Open Access Rethinking Nonlinear Instrumental Variables (2019) Li, Chunxiao. Instrumental variable (IV) models are widely used in the social and health sciences in situations where a researcher would like to measure a causal effect but cannot perform an experiment. Formally checking the assumptions of an IV model with a given dataset is impossible, leading many researchers to take as given a linear functional form and a two-stage least squares fitting procedure. In this paper, we propose a method for evaluating the validity of IV models using observed data and show that, in some cases, a more flexible nonlinear model can address violations of the IV conditions. We also develop a test that detects violations in the instrument that are present in the observed data. We introduce a new version of the validity check that is suitable for machine learning, and provide optimization-based techniques to answer these questions. We demonstrate the method using both simulated data and a real-world dataset.
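For reference, the linear two-stage least squares procedure that the paper takes as its point of departure (a generic textbook sketch, not the paper's proposed nonlinear method):

    import numpy as np

    def two_stage_least_squares(z, x, y):
        # stage 1: project the endogenous regressor x on the instrument z;
        # stage 2: regress y on the fitted x_hat; returns the causal slope
        Z = np.column_stack([np.ones_like(z), z])
        x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        Xh = np.column_stack([np.ones_like(x_hat), x_hat])
        return np.linalg.lstsq(Xh, y, rcond=None)[0][1]

    rng = np.random.default_rng(0)
    n = 5000
    u = rng.normal(size=n)                 # unobserved confounder
    z = rng.normal(size=n)                 # instrument: affects x, not y directly
    x = z + u + rng.normal(size=n)
    y = 2.0 * x + u + rng.normal(size=n)   # true causal effect of x on y is 2
    print(two_stage_least_squares(z, x, y))  # close to 2; naive OLS is biased upward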
Item Open Access Topics and Applications of Weighting Methods in Case-Control and Observational Studies (2019) Li, Fan. Weighting methods have been widely used in statistics and related applications. For example, inverse probability weighting is a standard approach to correct for survey non-response. The case-control design, frequently seen in epidemiologic or genetic studies, can be regarded as a special type of survey design; analogous inverse probability weighting approaches have been explored both when the interest is the association between exposures and the disease (primary analysis) and when the interest is the association among exposures (secondary analysis). Meanwhile, in observational comparative effectiveness research, inverse probability weighting has been suggested as a valid approach to correct for confounding bias. This dissertation develops and extends weighting methods for case-control and observational studies.
The first part of this dissertation extends the inverse probability weighting approach for secondary analysis of case-control data. We revisit an inverse probability weighting estimator to offer new insights and extensions. Specifically, we construct its more general form by generalized least squares (GLS). Such a construction allows us to connect the GLS estimator with the generalized method of moments and motivates a new specification test designed to assess the adequacy of the inverse probability weights. The specification test statistic measures the weighted discrepancy between the case and control subsample estimators, and asymptotically follows a chi-squared distribution under correct model specification. We illustrate the GLS estimator and specification test using a case-control sample of peripheral arterial disease, and use simulations to shed light on the operating characteristics of the specification test.
The second part develops a robust difference-in-differences (DID) estimator for estimating causal effects with observational before-after data. Within the DID framework, two common estimation strategies are outcome regression and propensity score weighting. Motivated by a real application in traffic safety research, we propose a new doubly robust DID estimator that hybridizes outcome regression and propensity score weighting. We show that the proposed estimator possesses a desirable large-sample robustness property: consistency requires only one of the outcome model or the propensity score model to be correctly specified. We illustrate the new estimator in a study of the causal effect of rumble strips in reducing vehicle crashes, and conduct a simulation study to examine its finite-sample performance.
The third part discusses a unified framework, the balancing weights, for estimating causal effects in observational studies with multiple treatments. These weights incorporate the generalized propensity scores to balance the weighted covariate distribution of each treatment group, all weighted toward a common pre-specified target population. Within this framework, we further develop the generalized overlap weights, constructed as the product of the inverse probability weights and the harmonic mean of the generalized propensity scores. The generalized overlap weights correspond to the target population with the most overlap in covariates between treatments, similar to the population in equipoise in clinical trials. We show that the generalized overlap weights minimize the total asymptotic variance of the nonparametric estimators for the pairwise contrasts within the class of balancing weights. We apply the new weighting method to study racial disparities in medical expenditure and further examine its operating characteristics by simulations.
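In symbols (standard definitions consistent with the description above): with generalized propensity scores \( e_j(x) = P(Z = j \mid X = x) \) for treatments \( j = 1, \dots, J \), a balancing weight for group \( j \) takes the form \( w_j(x) \propto h(x)/e_j(x) \) for a tilting function \( h(x) \) defining the target population, and the generalized overlap weights use the harmonic mean

\[ h(x) \;=\; \left( \sum_{k=1}^{J} \frac{1}{e_k(x)} \right)^{-1}, \qquad w_j(x) \;\propto\; \frac{h(x)}{e_j(x)}, \]

which for binary treatment reduces to \( h(x) = e(x)\{1 - e(x)\} \), i.e., weighting treated units by \( 1 - e(x) \) and controls by \( e(x) \).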
The first part of this dissertation extends the inverse probability weighting approach for secondary analysis of case-control data. We revisit an inverse probability weighting estimator to offer new insights and extensions. Specifically, we construct its more general form by generalized least squares (GLS). Such a construction allows us to connect the GLS estimator with the generalized method of moments and motivates a new specification test designed to assess the adequacy of the inverse probability weights. The specification test statistic measures the weighted discrepancy between the case and control subsample estimators, and asymptotically follows a Chi-squared distribution under correct model specification. We illustrate the GLS estimator and specification test using a case-control sample of peripheral arterial disease, and use simulations to shed light on the operating characteristics of the specification test. The second part develops a robust difference-in-differences (DID) estimator for estimating causal effect with observational before-after data. Within the DID framework, two common estimation strategies are outcome regression and propensity score weighting. Motivated by a real application in traffic safety research, we propose a new double-robust DID estimator that hybridizes outcome regression and propensity score weighting. We show that the proposed estimator possesses the desirable large-sample robustness property, namely the consistency only requires either one of the outcome model or the propensity score model to be correctly specified. We illustrate the new estimator to study the causal effect of rumble strips in reducing vehicle crashes, and conduct a simulation study to examine its finite-sample performance. The third part discusses a unified framework, the balancing weights, for estimating causal effects in observational studies with multiple treatments. These weights incorporate the generalized propensity scores to balance the weighted covariate distribution of each treatment group, all weighted toward a common pre-specified target population. Within this framework, we further develop the generalized overlap weights, constructed as the product of the inverse probability weights and the harmonic mean of the generalized propensity scores. The generalized overlap weights corresponds to the target population with the most overlap in covariates between treatments, similar to the population in equipoise in clinical trials. We show that the generalized overlap weights minimize the total asymptotic variance of the nonparametric estimators for the pairwise contrasts within the class of balancing weights. We apply the new weighting method to study the racial disparities in medical expenditure and further examine its operating characteristics by simulations.