Human-in-the-loop Machine Learning System via Model Interpretability
dc.contributor.advisor | Rudin, Cynthia
dc.contributor.author | Chen, Zhi
dc.date.accessioned | 2023-06-08T18:21:07Z
dc.date.available | 2023-06-08T18:21:07Z
dc.date.issued | 2023
dc.department | Computer Science
dc.description.abstract | The interpretability of a machine learning system is crucial in settings that involve human-model interaction or affect the well-being of society. By making the decision process understandable to humans, interpretability makes it easier to troubleshoot, acquire knowledge from, and interact with machine learning models. However, designing an interpretable machine learning system that supports effective human-in-the-loop interaction can be challenging. My thesis aims to address the major challenges in interpretable machine learning and lay the foundations for a more interactive machine learning system. In this thesis, I first tackle the challenge of building machine learning models under interpretability constraints, particularly in applications with unstructured data such as computer vision and materials science. I propose interpretable models that effectively capture the underlying patterns in the data and allow users to understand the model's decision-making process. Furthermore, this thesis studies the exploration and approximation of the set of all near-optimal models (the Rashomon set) for interpretable model classes, enabling users to visualize, select, and modify multiple well-performing models. Lastly, I demonstrate how interpretable models can provide insights into the data, detecting common dataset flaws such as poorly imputed missing values, confounding, and biases.
dc.identifier.uri |
dc.subject | Computer science
dc.subject | Artificial intelligence
dc.subject | Additive models
dc.subject | AI for science
dc.subject | Decision trees
dc.subject | Human-model interaction
dc.subject | Interpretable machine learning
dc.subject | Rashomon set
dc.title | Human-in-the-loop Machine Learning System via Model Interpretability
dc.type | Dissertation |