Browsing by Author "Rudin, Cynthia"
Item Open Access Adaptive Hyper-box Matching for Interpretable Individualized Treatment Effect Estimation. (CoRR, 2020) Morucci, Marco; Orlandi, Vittorio; Rudin, Cynthia; Roy, Sudeepa; Volfovsky, Alexander
We propose a matching method for observational data that matches units with others in unit-specific, hyper-box-shaped regions of the covariate space. These regions are large enough that many matches are created for each unit and small enough that the treatment effect is roughly constant throughout. The regions are found either as the solution to a mixed integer program or via a (fast) approximation algorithm. The result is an interpretable and tailored estimate of a causal effect for each unit.
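A minimal, illustrative sketch of the hyper-box matching idea, assuming synthetic data and a fixed, user-chosen box half-width per covariate (the paper instead learns unit-specific boxes via a mixed integer program or an approximation algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observational data: covariates X, treatment T, outcome Y.
n, p = 500, 3
X = rng.normal(size=(n, p))
T = rng.binomial(1, 0.5, size=n)
Y = X[:, 0] + 2.0 * T + rng.normal(scale=0.5, size=n)  # true effect = 2

def hyperbox_cate(i, X, T, Y, half_width=0.5):
    """Estimate the treatment effect for unit i using all units that fall
    inside an axis-aligned box of the given half-width centered at X[i]."""
    inside = np.all(np.abs(X - X[i]) <= half_width, axis=1)
    treated = inside & (T == 1)
    control = inside & (T == 0)
    if treated.sum() == 0 or control.sum() == 0:
        return np.nan  # no valid matches inside this box
    return Y[treated].mean() - Y[control].mean()

estimates = np.array([hyperbox_cate(i, X, T, Y) for i in range(n)])
print("mean estimated effect:", np.nanmean(estimates))
```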
Item Open Access CAUSAL INFERENCE FOR HIGH-STAKES DECISIONS (2023) Parikh, Harsh J
Causal inference methods are commonly used across domains to aid high-stakes decision-making. The validity of causal studies often relies on strong assumptions that might not be realistic in high-stakes scenarios. Inferences based on incorrect assumptions frequently result in sub-optimal decisions with high penalties and long-term consequences. Unlike prediction or machine learning methods, it is particularly challenging to evaluate the performance of causal methods using just the observed data, because the ground-truth causal effects are missing for all units. My research presents frameworks that enable validation of causal inference methods in one of the following three ways: (i) auditing the estimation procedure by a domain expert, (ii) studying performance using synthetic data, and (iii) using placebo tests to identify biases. This work enables decision-makers to reason about the validity of the estimation procedure by thinking carefully about the underlying assumptions. Our Learning-to-Match framework is an auditable and accurate approach that learns an optimal distance metric for estimating heterogeneous treatment effects. We augment the Learning-to-Match framework with pharmacological mechanistic knowledge to study the long-term effects of untreated seizure-like brain activity in critically ill patients. Here, the auditability of the estimator allowed neurologists to qualitatively validate the analysis via a chart review. We also propose Credence, a synthetic-data-based framework for validating causal inference methods. Credence simulates data that is stochastically indistinguishable from the observed data while allowing for user-designed treatment effects and selection biases. We demonstrate Credence's ability to accurately assess the relative performance of causal estimation techniques in an extensive simulation study and two real-world data applications. We also discuss an approach that combines experimental and observational studies, providing a principled way to test for violations of the no-unobserved-confounder assumption and to estimate treatment effects under such violations.
Item Open Access dame-flame: A Python Library Providing Fast Interpretable Matching for Causal Inference. (CoRR, 2021) Gupta, Neha R; Orlandi, Vittorio; Chang, Chia-Rui; Wang, Tianyu; Morucci, Marco; Dey, Pritam; Howell, Thomas J; Sun, Xian; Ghosal, Angikar; Roy, Sudeepa; Rudin, Cynthia; Volfovsky, Alexander
dame-flame is a Python package for performing matching for observational causal inference on datasets containing discrete covariates. This package implements the Dynamic Almost Matching Exactly (DAME) and Fast Large-Scale Almost Matching Exactly (FLAME) algorithms, which match treatment and control units on subsets of the covariates. The resulting matched groups are interpretable, because the matches are made on covariates (rather than, for instance, propensity scores), and high-quality, because machine learning is used to determine which covariates are important to match on. DAME solves an optimization problem that matches units on as many covariates as possible, prioritizing matches on important covariates. FLAME approximates the solution found by DAME via a much faster backward feature selection procedure. The package provides several adjustable parameters to adapt the algorithms to specific applications and can calculate treatment effects after matching. Descriptions of these parameters, details on estimating treatment effects, and further examples can be found in the documentation at https://almost-matching-exactly.github.io/DAME-FLAME-Python-Package/
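A simplified, illustrative sketch of the exact-matching-on-subsets idea underlying DAME/FLAME (not the package's API or algorithm; FLAME additionally uses a holdout set and machine learning to decide which covariate to drop, whereas this toy version drops covariates in a fixed order):

```python
import pandas as pd

# Toy dataset with discrete covariates, a treatment indicator, and an outcome.
df = pd.DataFrame({
    "age_bin": [1, 1, 2, 2, 1, 2, 1, 2],
    "smoker":  [0, 0, 0, 1, 1, 1, 0, 1],
    "region":  [0, 1, 0, 1, 0, 1, 1, 0],
    "treated": [1, 0, 1, 0, 1, 0, 0, 1],
    "outcome": [5.0, 3.1, 6.2, 4.0, 5.5, 4.2, 3.0, 6.0],
})
covariates = ["age_bin", "smoker", "region"]

def match_on(data, covs):
    """Group units that agree exactly on `covs`; keep only groups that
    contain at least one treated and one control unit."""
    return [g for _, g in data.groupby(covs) if g["treated"].nunique() == 2]

# Backward elimination: try matching on all covariates, then on progressively
# smaller subsets (a real implementation chooses which covariate to drop
# using predictive quality on a holdout set).
unmatched = df.copy()
for k in range(len(covariates), 0, -1):
    groups = match_on(unmatched, covariates[:k])
    matched_idx = [i for g in groups for i in g.index]
    for g in groups:
        effect = (g.loc[g.treated == 1, "outcome"].mean()
                  - g.loc[g.treated == 0, "outcome"].mean())
        print(f"matched on {covariates[:k]}: effect estimate {effect:.2f}")
    unmatched = unmatched.drop(index=matched_idx)
```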
Item Embargo Discrete and Continuous Optimization for Interpretable Machine Learning in High Stakes Decision Making (2024) Liu, Jiachang
In high-stakes decision-making and scientific knowledge discovery, the demand for interpretable machine learning models is paramount. These models must not only exhibit high predictive capabilities but also provide domain experts with a clear understanding of the underlying decision-making processes. This transparency empowers professionals to leverage their domain expertise to evaluate the validity and relevance of the model's predictions.
At the heart of developing such interpretable models lies a complex combinatorial problem. Conventional methods, like $\ell_1$ regularization, often compromise solution quality. In contrast, $\ell_0$-based methods, while precise, face challenges due to their NP-hard nature, rendering them unscalable for large-scale datasets.
This Ph.D. dissertation is dedicated to bridging this statistical and computational gap. In the following sections, we present four interrelated works, covering statistical models such as generalized additive models, scoring systems, linear regression, and Cox proportional hazards models. These works are unified by the use of discrete and continuous optimization techniques to develop simple yet highly accurate models. Given the combinatorial nature of the optimization problems, we develop novel algorithms from three perspectives: (1) searching for optimal solutions, (2) certifying the optimality of solutions, and (3) decomposing and exploiting hidden structures.
The algorithms developed in this dissertation efficiently produce high-quality solutions across fields such as healthcare, criminal justice, finance, and scientific discovery. On real-world datasets with thousands of features and observations, our algorithms can generate models with only 10-20 parameters in seconds to minutes, with predictive performance on par with black-box models. By leveraging the synergy between machine learning and optimization, this research contributes towards making AI systems more trustworthy and human-centered.
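To make the statistical-computational gap concrete, here is a small illustrative comparison (not from the dissertation): an exhaustive $\ell_0$-style best-subset search for linear regression, whose cost grows combinatorially with the number of features. The data and sparsity level are made up.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression problem: only 3 of 12 features truly matter.
n, p, k_true = 200, 12, 3
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:k_true] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(scale=0.5, size=n)

def best_subset(X, y, k):
    """Exhaustive l0 search: fit least squares on every size-k subset of
    features and keep the one with the smallest residual sum of squares.
    The number of subsets, C(p, k), grows combinatorially with p."""
    best_rss, best_support = np.inf, None
    for support in itertools.combinations(range(X.shape[1]), k):
        Xs = X[:, support]
        coef, rss, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = rss[0] if rss.size else np.sum((y - Xs @ coef) ** 2)
        if rss < best_rss:
            best_rss, best_support = rss, support
    return best_support, best_rss

support, rss = best_subset(X, y, k_true)
print("selected features:", support)  # ideally (0, 1, 2)
print("subsets searched:", len(list(itertools.combinations(range(p), k_true))))
```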
Item Open Access Generalized and Scalable Optimal Sparse Decision Trees (2020) Zhong, Chudi
Decision tree optimization is notoriously difficult from a computational perspective but essential for the field of interpretable machine learning. Despite efforts over the past 40 years, only recently have optimization breakthroughs been made that have allowed practical algorithms to find optimal decision trees. These new techniques have the potential to trigger a paradigm shift where it is possible to construct sparse decision trees that efficiently optimize a variety of objective functions without relying on greedy splitting and pruning heuristics that often lead to suboptimal solutions. The contribution of this work is a general framework for decision tree optimization that addresses two significant open problems in the area: the treatment of imbalanced data and full optimization over continuous variables. We present techniques that produce optimal decision trees over a variety of objectives, including F-score, AUC, and partial area under the ROC convex hull. We also introduce a scalable algorithm that produces provably optimal results in the presence of continuous variables and speeds up decision tree construction by several orders of magnitude relative to the state of the art.
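An illustrative toy version of optimizing a non-standard objective exactly rather than greedily (not the paper's algorithm, which relies on analytical bounds that prune the search space): exhaustively search all depth-1 trees (stumps) over binary features and pick the one maximizing F1, on made-up imbalanced data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary dataset: 6 binary features, imbalanced labels.
n, p = 300, 6
X = rng.binomial(1, 0.5, size=(n, p))
y = ((X[:, 0] & X[:, 2]) | (rng.random(n) < 0.05)).astype(int)

def f1(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return 0.0 if tp == 0 else 2 * tp / (2 * tp + fp + fn)

# Exhaustive search over all depth-1 trees: split on one feature and choose
# a label for each branch, scoring the whole tree by F1 (not accuracy).
best = (-1.0, None)
for j in range(p):
    for left_label in (0, 1):
        for right_label in (0, 1):
            y_pred = np.where(X[:, j] == 0, left_label, right_label)
            score = f1(y, y_pred)
            if score > best[0]:
                best = (score, (j, left_label, right_label))

print("best stump (feature, label if 0, label if 1):", best[1])
print("F1:", round(best[0], 3))
```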
Item Open Access Human-in-the-loop Machine Learning System via Model Interpretability (2023) Chen, Zhi
The interpretability of a machine learning system is crucial in situations where it involves human-model interaction or affects the well-being of society. By making the decision process understandable to humans, interpretability makes it easier to troubleshoot, acquire knowledge from, and interact with machine learning models. However, designing an interpretable machine learning system that maximizes the human-in-the-loop experience can be challenging. My thesis aims to address the major challenges in interpretable machine learning and lay the foundations for a more interactive machine learning system.
In this thesis, I first tackle the challenge of building machine learning models with interpretability constraints, particularly in applications with unstructured data such as computer vision and materials science. I propose interpretable models that effectively capture the underlying patterns in the data and allow users to understand the model's decision-making process. Furthermore, this thesis studies the exploration and approximation of the set of all near-optimal models for interpretable model classes, enabling users to visualize, select, and modify multiple well-performing models. Lastly, I demonstrate how interpretable models can provide insights into the data, detecting common dataset flaws such as poorly imputed missing values, confounding, and biases.
Item Open Access In Pursuit of Simplicity: The Role of the Rashomon Effect for Informed Decision Making (2024) Semenova, Lesia
For high-stakes decision domains, such as healthcare, lending, and criminal justice, the predictions of deployed models can have a huge impact on human lives. Understanding why models make specific predictions is as crucial as the good performance of these models. Interpretable models, constrained to explain the reasoning behind their decisions, play a key role in enabling users' trust. They can also assist in troubleshooting and identifying errors or data biases. However, there has been a longstanding belief in the community that a trade-off exists between accuracy and interpretability. We formally show that such a trade-off does not exist for many datasets in high-stakes decision domains and that simpler models often perform as well as black-box models.
To establish a theoretical foundation explaining the existence of simple-yet-accurate models, we leverage the Rashomon set (a set of equally well-performing models). If the Rashomon set is large, it contains numerous accurate models, and perhaps at least one of them is the simple model we desire. We formally present the Rashomon ratio as a new gauge of simplicity for a learning problem, where the Rashomon ratio is the fraction of all models in a given hypothesis space that lie in the Rashomon set. Insight from studying the Rashomon ratio provides an easy way to check whether a simpler model might exist for a problem before finding it. In that sense, the Rashomon ratio is a powerful tool for understanding when an accurate-yet-simple model might exist. We further propose and study a mechanism of the data generation process, coupled with choices usually made by the analyst during the learning process, that determines the size of the Rashomon ratio. Specifically, we demonstrate that noisier datasets lead to larger Rashomon ratios through the way practitioners train models. Our results explain a key aspect of why simpler models often tend to perform as well as black-box models on complex, noisier datasets.
Given that optimizing for interpretable models is known to be NP-hard and can require significant domain expertise, our foundation can help machine learning practitioners assess the feasibility of finding simple-yet-accurate models before attempting to optimize for them. We illustrate how larger Rashomon sets and noise in the data-generation process explain the natural gravitation towards simpler models on a complex biological dataset. We further highlight how simplicity is useful for informed decision-making by introducing sparse density trees and lists, an accurate approach to density estimation that optimizes for sparsity.
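A toy illustration (not from the dissertation) of estimating a Rashomon ratio by sampling: draw random models from a simple hypothesis space and compute the fraction whose loss is within epsilon of the best sampled loss. The data, hypothesis space, and epsilon are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy binary classification data in 2D.
n = 400
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=n) > 0).astype(int)

def zero_one_loss(w, X, y):
    pred = (X @ w > 0).astype(int)
    return np.mean(pred != y)

# Sample random linear classifiers (unit-norm weight vectors) as a crude
# stand-in for "all models in a hypothesis space".
n_models = 5000
W = rng.normal(size=(n_models, 2))
W /= np.linalg.norm(W, axis=1, keepdims=True)
losses = np.array([zero_one_loss(w, X, y) for w in W])

epsilon = 0.02
best = losses.min()
in_rashomon_set = losses <= best + epsilon
print("best sampled loss:", round(best, 3))
print("estimated Rashomon ratio:", round(in_rashomon_set.mean(), 3))
```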
Item Open Access Interpretability and Multiplicity: a Path to Trustworthy Machine Learning (2024) Zhong, Chudi
Machine learning has been increasingly deployed for myriad high-stakes decisions that deeply impact people's lives. This is concerning, because not every model can be trusted. Interpretability is crucial for making machine learning models trustworthy. It provides human-understandable reasons for each prediction. This, in turn, enables easier troubleshooting, responsible decision-making, and knowledge acquisition. However, there are two major challenges in using interpretable machine learning for high-stakes problems: (1) interpretable model optimization is often NP-hard, and (2) an inefficient feedback loop is present in the standard machine learning paradigm. My dissertation addresses these challenges and proposes a new paradigm for machine learning to advance trustworthy AI.
I first tackle the challenge of finding interpretable-yet-accurate models by developing efficient optimization algorithms. Models obtained from these algorithms are inherently interpretable while maintaining accuracy comparable to that of black-box counterparts. I then discuss the interaction bottleneck in the standard machine learning paradigm and propose a new paradigm, called learning Rashomon sets, which finds and stores all machine learning models with loss within epsilon of the optimal loss. This gives users an unprecedented ability to explore and interact with all well-performing models, enabling them to choose and modify the models best suited for the application.
Item Open Access Interpretability by Design: New Interpretable Machine Learning Models and Methods (2020) Chen, Chaofan
As machine learning models play increasingly important roles in many real-life scenarios, interpretability has become a key issue in whether we can trust the predictions made by these models, especially when we are making high-stakes decisions. Lack of transparency has long been a concern for predictive models in criminal justice and in healthcare. There have been growing calls for building interpretable, human-understandable machine learning models, and "opening the black box" has become a debated issue in the media. My dissertation research addresses precisely this demand for interpretability and transparency in machine learning models. The key question of this dissertation is: can we build machine learning models that are both accurate and interpretable?
To address this problem, I will discuss the notion of interpretability as it relates to machine learning, and present several new interpretable machine learning models and methods I developed during my dissertation research. In Chapter 1, I will discuss two types of model interpretability -- predicate-based and case-based interpretability. In Chapters 2 and 3, I will present novel predicate-based interpretable models and methods, and their applications to understanding low-dimensional structured data. In particular, Chapter 2 presents falling rule lists, which extend regular decision lists by requiring the probabilities of the desired outcome to be monotonically decreasing down the list; Chapter 3 presents two-layer additive models, which are hybrids of predicate-based additive scoring models and small neural networks. In Chapter 4, I will present case-based interpretable deep models, and their applications to computer vision. Given the empirical evidence, I conclude in Chapter 5 that, by designing novel model architectures or regularization techniques, we can build machine learning models that are both accurate and interpretable.
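A minimal illustrative sketch of the falling-rule-list structure described above: an ordered list of (condition, risk probability) pairs whose probabilities are non-increasing down the list, with prediction given by the first condition a case satisfies. The rules and probabilities below are invented for illustration; real falling rule lists are learned from data.

```python
# Each rule: (human-readable condition, predicate over a feature dict,
# probability of the outcome). Probabilities must be non-increasing.
falling_rule_list = [
    ("prior_arrests >= 3 and age < 25",
     lambda x: x["prior_arrests"] >= 3 and x["age"] < 25, 0.62),
    ("prior_arrests >= 3", lambda x: x["prior_arrests"] >= 3, 0.38),
    ("age < 25",           lambda x: x["age"] < 25,           0.25),
    ("otherwise",          lambda x: True,                     0.11),
]

# Sanity check the defining property: probabilities fall down the list.
probs = [p for _, _, p in falling_rule_list]
assert all(a >= b for a, b in zip(probs, probs[1:]))

def predict(x):
    """Return the probability and the rule that fired first."""
    for name, condition, prob in falling_rule_list:
        if condition(x):
            return prob, name
    raise ValueError("no rule matched")  # unreachable: last rule is a catch-all

print(predict({"prior_arrests": 4, "age": 22}))  # highest-risk rule fires
print(predict({"prior_arrests": 0, "age": 40}))  # falls through to "otherwise"
```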
Item Open Access Interpretable Almost-Matching Exactly with Instrumental Variables (2019) Liu, Yameng
We aim to create the highest possible quality of treatment-control matches for categorical data in the potential outcomes framework.
The method proposed in this work matches units on a weighted Hamming distance, taking into account the relative importance of the covariates. To match units on as many relevant variables as possible, the algorithm creates a hierarchy of covariate combinations on which to match (similar to downward closure), solving an optimization problem for each unit in order to construct the optimal matches. The algorithm uses a single dynamic program to solve all of the units' optimization problems simultaneously. Notable advantages of our method over existing matching procedures are its high-quality interpretable matches, its versatility in handling data distributions that may contain irrelevant variables, and its ability to handle missing data by matching on as many available covariates as possible. We also adapt the matching framework, using instrumental variables (IV), to the presence of observed categorical confounding that breaks the randomness assumptions, and propose an approximate algorithm that speedily generates high-quality interpretable solutions. We show that our algorithms construct better matches than other existing methods on simulated datasets and produce interesting results in applications to crime intervention and political canvassing.
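An illustrative sketch of matching on a weighted Hamming distance over categorical covariates (a toy nearest-neighbor version, not the paper's hierarchy-based dynamic program); the weights and data here are made up, whereas the paper learns covariate importance:

```python
import numpy as np

# Categorical covariates (integer-coded), treatment indicator, and outcome.
X = np.array([[0, 1, 2],
              [0, 1, 0],
              [1, 1, 2],
              [0, 2, 2],
              [1, 0, 1]])
T = np.array([1, 0, 0, 0, 1])
Y = np.array([7.0, 4.0, 5.5, 4.5, 6.0])

# Covariate importance weights (assumed given here).
w = np.array([2.0, 1.0, 0.5])

def weighted_hamming(a, b, w):
    """Sum of weights over covariates on which a and b disagree."""
    return np.sum(w * (a != b))

# Match each treated unit to its closest control unit(s).
controls = np.where(T == 0)[0]
for i in np.where(T == 1)[0]:
    dists = np.array([weighted_hamming(X[i], X[j], w) for j in controls])
    best = controls[dists == dists.min()]
    effect = Y[i] - Y[best].mean()
    print(f"unit {i}: matched controls {best.tolist()}, effect estimate {effect:.2f}")
```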
Item Open Access Interpretable Machine Learning With Medical Applications (2023) Barnett, Alina Jade
Machine learning algorithms are being adopted for clinical use, assisting with difficult medical tasks previously limited to highly-skilled professionals. AI (artificial intelligence) performance on isolated tasks regularly exceeds that of human clinicians, spurring excitement about AI's potential to radically change modern healthcare. However, there remain major concerns about the uninterpretable (i.e., "black box") nature of commonly-used models. Black box models are difficult to troubleshoot, cannot provide reasoning for their predictions, and lack accountability in real-world applications, leading to a lack of trust and low rate of adoption by clinicians. As a result, the European Union (through the General Data Protection Regulation) and the US Food & Drug Administration have published new requirements and guidelines calling for interpretability and explainability in AI used for medical applications.
My thesis addresses these issues by creating interpretable models for the key clinical decisions of lesion analysis in mammography (Chapters 2 and 3) and pattern identification in EEG monitoring (Chapter 4). To create models with comparable discriminative performance to their uninterpretable counterparts, I constrain neural network models using novel neural network architectures, objective functions, and training regimes. The resultant models are inherently interpretable, providing explanations for each prediction that faithfully represent the underlying decision-making of the model. These models are more than just decision makers; they are decision aids capable of explaining their predictions in a way medical practitioners can readily comprehend. This human-centered approach allows a clinician to inspect the reasoning of an AI model, empowering users to better calibrate their trust in its predictions and overrule it when necessary.
Item Open Access New Directions in Bandit Learning: Singularities and Random Walk Feedback (2021) Wang, Tianyu
My thesis focuses on new directions in bandit learning problems. In Chapter 1, I give an overview of the bandit learning literature, which lays the framework for the studies in Chapters 2 and 3. In Chapter 2, I study bandit learning in metric measure spaces. I start with the multi-armed bandit problem with Lipschitz rewards and propose a practical algorithm that can utilize greedy tree training methods and adapts to the landscape of the reward function. In particular, the study provides a Bayesian perspective on this problem. I also study bandit learning for Bounded Mean Oscillation (BMO) functions, where the goal is to "maximize" a function that may go to infinity in parts of the space. For an unknown BMO function, I present algorithms that efficiently find regions with high function values. To handle possible singularities and unboundedness in BMO functions, I introduce the new notion of $\delta$-regret: the difference between the function values along the trajectory and a point that is optimal after removing a $\delta$-sized portion of the space. I show that my algorithm has $ \mathcal{O} \left( \frac{\kappa \log T}{T} \right) $ average $T$-step $\delta$-regret, where $\kappa$ depends on $\delta$ and adapts to the landscape of the underlying reward function. In Chapter 3, I study bandit learning with random walk trajectories as feedback. In domains including online advertisement and social networks, user behaviors can be modeled as a random walk over a network. To this end, we study a novel bandit learning problem where each arm is the starting node of a random walk in a network and the reward is the length of the walk. We provide a comprehensive understanding of this formulation by studying both the stochastic and the adversarial setting. In the stochastic setting, we observe that there exists a difficult problem instance on which the following two seemingly conflicting facts simultaneously hold: (1) no algorithm can achieve a regret bound independent of problem intrinsics, information-theoretically; and (2) there exists an algorithm whose performance is independent of problem intrinsics in terms of the tail of mistakes. This reveals an intriguing phenomenon in general semi-bandit feedback learning problems. In the adversarial setting, we establish novel algorithms that achieve a regret bound of order $\widetilde{\mathcal{O}} \left( \sqrt{ \kappa T}\right) $, where $\kappa$ is a constant that depends on the structure of the graph rather than on the number of arms (nodes). This bound significantly improves on regular bandit algorithms, whose complexity depends on the number of arms (nodes).
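A toy simulation (not from the thesis) of the Chapter 3 setup in the stochastic setting: arms are starting nodes of a random walk on a small graph with an absorbing node, the reward of a pull is the length of the resulting walk, and a standard UCB1 strategy is used purely for illustration.

```python
import math
import random

random.seed(0)

# A small directed graph; node 3 is absorbing (the walk stops there).
neighbors = {0: [1, 2], 1: [2, 3], 2: [3, 0], 3: []}
ABSORBING = 3

def walk_length(start, max_steps=100):
    """Simulate a random walk from `start`; the reward is the number of steps."""
    node, steps = start, 0
    while node != ABSORBING and steps < max_steps:
        node = random.choice(neighbors[node])
        steps += 1
    return steps

arms = [0, 1, 2]          # candidate starting nodes
counts = {a: 0 for a in arms}
totals = {a: 0.0 for a in arms}

T = 2000
for t in range(1, T + 1):
    # UCB1: pull each arm once, then pick the arm with the largest index.
    untried = [a for a in arms if counts[a] == 0]
    if untried:
        arm = untried[0]
    else:
        arm = max(arms, key=lambda a: totals[a] / counts[a]
                  + math.sqrt(2 * math.log(t) / counts[a]))
    reward = walk_length(arm)
    counts[arm] += 1
    totals[arm] += reward

for a in arms:
    print(f"arm {a}: pulls {counts[a]}, mean walk length {totals[a] / counts[a]:.2f}")
```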
Item Open Access Optimal Sparse Decision Trees (2019) Hu, Xiyang
Decision tree algorithms have been among the most popular algorithms for interpretable (transparent) machine learning since the early 1980s. The problem that has plagued decision tree algorithms since their inception is their lack of optimality, or lack of guarantees of closeness to optimality: decision tree algorithms are often greedy or myopic, and sometimes produce unquestionably suboptimal models. Hardness of decision tree optimization is both a theoretical and practical obstacle, and even careful mathematical programming approaches have not been able to solve these problems efficiently. This work introduces the first practical algorithm for optimal decision trees for binary variables. The algorithm is a co-design of analytical bounds that reduce the search space and modern systems techniques, including data structures and a custom bit-vector library. We highlight possible steps toward improving the scalability and speed of future generations of this algorithm, based on insights from our theory and experiments.
Item Open Access Rethinking Nonlinear Instrumental Variables (2019) Li, Chunxiao
Instrumental variable (IV) models are widely used in the social and health sciences in situations where a researcher would like to measure a causal effect but cannot perform an experiment. Formally checking the assumptions of an IV model with a given dataset is impossible, leading many researchers to take as given a linear functional form and a two-stage least squares fitting procedure. In this paper, we propose a method for evaluating the validity of IV models using observed data and show that, in some cases, a more flexible nonlinear model can address violations of the IV conditions. We also develop a test that detects violations in the instrument that are present in the observed data. We introduce a new version of the validity check that is suitable for machine learning and provide optimization-based techniques to answer these questions. We demonstrate the method using both simulated data and a real-world dataset.
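For reference, a minimal sketch of the standard linear two-stage least squares (2SLS) baseline mentioned above (this is the conventional procedure, not the paper's nonlinear method), on synthetic data with a single instrument and a confounded treatment:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Structural model: instrument Z affects X; confounder U affects both X and Y.
Z = rng.normal(size=n)
U = rng.normal(size=n)
X = 1.0 * Z + 1.0 * U + rng.normal(size=n)
Y = 2.0 * X + 2.0 * U + rng.normal(size=n)   # true causal effect of X on Y is 2

def ols(features, target):
    """Ordinary least squares with an intercept; returns [intercept, slope]."""
    A = np.column_stack([np.ones(len(target)), features])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coef

# Naive OLS of Y on X is biased upward by the confounder U.
print("OLS estimate:", round(ols(X, Y)[1], 3))

# 2SLS: stage 1 regresses X on Z; stage 2 regresses Y on the fitted X.
stage1 = ols(Z, X)
X_hat = stage1[0] + stage1[1] * Z
stage2 = ols(X_hat, Y)
print("2SLS estimate:", round(stage2[1], 3))  # close to the true effect 2
```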
Item Open Access Single Image Super Resolution: Perceptual Quality & Test-time Optimization (2019) Chen, Lei
Image super resolution is the task of recovering a high-resolution image from a low-resolution input. It has a wide range of applications in modern digital image processing, producing better results in areas including satellite imaging, medical imaging, microscopy, astronomical imaging, and surveillance. However, image super resolution is an ill-posed problem, since many high-resolution images are consistent with a given low-resolution input, making it difficult to find the optimal solution.
In this work, various research directions in the area of single image super resolution are thoroughly studied. The achievements and limitations of each proposed method, including computational efficiency and perceptual performance limits, are compared. The main contributions of this work include implementing a perceptual score predictor and integrating it into the objective function of the upsampler algorithm. In addition, a test-time optimization algorithm is proposed, aiming to further enhance the quality of the super-resolution image obtained from any upsampler. The proposed methods are implemented and tested using PyTorch. Results are compared on standard benchmark datasets including Set5, Set14, Urban100, and DIV2K.
The perceptual score predictor was evaluated on both the PSNR precision index and a perceptual index, which is a combination of the perceptual evaluation Ma score and the NIQE score. With the new objective function, the upsampler was able to move along the trade-off curve between precision and perception. The test-time optimization algorithm achieved slight improvements in both the precision and perception indices. Note that the proposed test-time optimization does not require training a new neural network and is therefore computationally efficient.
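For concreteness, a small sketch of the PSNR precision index referred to above (a standard definition, computed here on made-up images; the perceptual Ma and NIQE scores require their own learned or statistical models and are not reproduced here):

```python
import numpy as np

def psnr(reference, reconstruction, max_value=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    diff = reference.astype(np.float64) - reconstruction.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)

rng = np.random.default_rng(0)
hr = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)         # "ground truth"
sr = np.clip(hr + rng.normal(scale=5.0, size=hr.shape), 0, 255)  # noisy reconstruction
print(f"PSNR: {psnr(hr, sr):.2f} dB")
```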
Item Embargo Sparse and Faithful Explanations Without Sparse Models (2024) Sun, Yiyang
Even if a model is not globally sparse, it is possible for decisions made by that model to be accurately and faithfully described by a small number of features. For example, an application for a large loan might be denied to someone because they have no credit history, which overwhelms any evidence of their creditworthiness. In this paper, we introduce the Sparse Explanation Value (SEV), a new way to measure sparsity in machine learning models. In the loan-denial example above, the SEV is 1 because only one factor is needed to explain why the loan was denied. SEV is a measure of decision sparsity rather than overall model sparsity, and we show that many machine learning models, even if they are not sparse, actually have low decision sparsity as measured by SEV. SEV is defined using moves over a hypercube with a predefined population commons (reference), allowing SEV to be defined consistently across model classes, with movement restrictions that reflect real-world constraints. Moreover, by allowing flexibility in this reference, and by considering how distances along the hypercube translate into distances in feature space, we can derive sparse and meaningful explanations for different types of function classes, and we propose three possible approaches: cluster-based SEV, SEV with flexible references, and tree-based SEV. Ultimately, we propose algorithms aimed at reducing SEV without compromising model accuracy, thereby offering sparse yet fully faithful explanations, even in the absence of globally sparse models.
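A simplified illustration of the decision-sparsity idea (not the paper's full definition, which includes movement restrictions and flexible references): for a linear model and a query predicted "deny", find the smallest number of query features that, copied onto a reference point, reproduce the query's prediction. All weights and data are invented.

```python
import itertools
import numpy as np

# A non-sparse linear scoring model: deny the loan if the score exceeds 0.
w = np.array([3.0, 0.4, 0.3, 0.2, 0.1])  # "no credit history" dominates
b = -1.0

def predict(x):
    return int(x @ w + b > 0)  # 1 = deny, 0 = approve

reference = np.zeros(5)                       # a population reference point
query = np.array([1.0, 1.0, 1.0, 1.0, 1.0])   # the applicant; predicted "deny"

def sev(query, reference, predict):
    """Smallest number of query features that, copied onto the reference,
    reproduce the query's prediction (a simplified SEV-style computation)."""
    target = predict(query)
    p = len(query)
    for k in range(1, p + 1):
        for subset in itertools.combinations(range(p), k):
            x = reference.copy()
            x[list(subset)] = query[list(subset)]
            if predict(x) == target:
                return k, subset
    return p, tuple(range(p))

k, subset = sev(query, reference, predict)
print(f"SEV = {k}, explaining features: {subset}")  # SEV = 1: feature 0 alone suffices
```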