Human-in-the-loop Machine Learning System via Model Interpretability


Date

2023



Abstract

The interpretability of a machine learning system is crucial in settings that involve human-model interaction or affect the well-being of society. By making the decision process understandable to humans, interpretability makes it easier to troubleshoot, acquire knowledge from, and interact with machine learning models. However, designing an interpretable machine learning system that maximizes the human-in-the-loop experience can be challenging. My thesis aims to address the major challenges in interpretable machine learning and lay the foundations for a more interactive machine learning system.

In this thesis, I first tackle the challenge of building machine learning models with interpretability constraints, particularly in applications with unstructured data such as computer vision and materials science. I propose interpretable models that effectively capture the underlying patterns in the data and allow users to understand the model's decision-making process. Furthermore, I study the exploration and approximation of the set of all near-optimal models for interpretable model classes, enabling users to visualize, select, and modify multiple well-performing models. Lastly, I demonstrate how interpretable models can provide insights into the data, detecting common dataset flaws such as poorly imputed missing values, confounders, and biases.


Citation


Chen, Zhi (2023). Human-in-the-loop Machine Learning System via Model Interpretability. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/27646.


Except where otherwise noted, student scholarship that was shared on DukeSpace after 2009 is made available to the public under a Creative Commons Attribution / Non-commercial / No derivatives (CC-BY-NC-ND) license. All rights in student work shared on DukeSpace before 2009 remain with the author and/or their designee, whose permission may be required for reuse.