Browsing by Subject "Interpretable machine learning"
Item (Open Access): Human-in-the-loop Machine Learning System via Model Interpretability (2023). Chen, Zhi.
The interpretability of a machine learning system is crucial in situations where it involves human-model interaction or affects the well-being of society. By making the decision process understandable to humans, interpretability makes it easier to troubleshoot, acquire knowledge from, and interact with machine learning models. However, designing an interpretable machine learning system that maximizes the human-in-the-loop experience can be challenging. My thesis aims to address the major challenges in interpretable machine learning and lay the foundations for a more interactive machine learning system.
In this thesis, I first tackle the challenge of building machine learning models with interpretability constraints, particularly in applications with unstructured data such as computer vision and materials science. I propose interpretable models that effectively capture the underlying patterns in the data and allow users to understand the model's decision-making process. Furthermore, this thesis studies the exploration and approximation of the set of all near-optimal models for interpretable model classes, enabling users to visualize, select, and modify multiple well-performing models. Lastly, I demonstrate how interpretable models can provide insights into the data, detecting common dataset flaws such as poorly imputed missing values, confounders, and biases.
Item (Open Access): Interpretable Almost-Matching Exactly with Instrumental Variables (2019). Liu, Yameng.
We aim to create the highest possible quality of treatment-control matches for categorical data in the potential outcomes framework.
The method proposed in this work matches units on a weighted Hamming distance that accounts for the relative importance of the covariates. To match units on as many relevant variables as possible, the algorithm creates a hierarchy of covariate combinations on which to match (similar to downward closure), solving an optimization problem for each unit in order to construct the optimal matches. A single dynamic program solves all of the units' optimization problems simultaneously. Notable advantages of our method over existing matching procedures are its high-quality interpretable matches, its versatility in handling data distributions that may contain irrelevant variables, and its ability to handle missing data by matching on as many available covariates as possible. We also adapt the matching framework to settings with instrumental variables (IV) in the presence of observed categorical confounding that breaks the randomness assumptions, and propose an approximate algorithm that quickly generates high-quality interpretable solutions. We show that our algorithms construct better matches than existing methods on simulated datasets and produce interesting results in applications to crime intervention and political canvassing.
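To make the distance criterion in the abstract above concrete, the following is a minimal sketch of matching on a weighted Hamming distance over categorical covariates. The function names (`weighted_hamming`, `best_match`) and the brute-force per-unit search are illustrative assumptions, not the thesis's actual algorithm, which solves all units' optimization problems jointly with a single dynamic program.

```python
def weighted_hamming(a, b, weights):
    # Sum the weights of the covariates on which the two units disagree;
    # larger weights encode more important covariates.
    return sum(w for x, y, w in zip(a, b, weights) if x != y)

def best_match(unit, pool, weights):
    # Illustrative brute-force nearest neighbor under the weighted
    # Hamming distance (one independent search per unit, unlike the
    # joint dynamic program described in the abstract).
    dists = [weighted_hamming(unit, cand, weights) for cand in pool]
    i = min(range(len(pool)), key=dists.__getitem__)
    return i, dists[i]
```

For example, a treated unit with covariates `[1, 0]` matched against controls `[[1, 1], [0, 0], [1, 0]]` under equal weights selects the exact match at index 2 with distance 0.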