Interpretable Machine Learning With Medical Applications
Date
2023
Abstract
Machine learning algorithms are being adopted for clinical use, assisting with difficult medical tasks previously limited to highly skilled professionals. AI (artificial intelligence) performance on isolated tasks regularly exceeds that of human clinicians, spurring excitement about AI's potential to radically change modern healthcare. However, major concerns remain about the uninterpretable (i.e., "black box") nature of commonly used models. Black box models are difficult to troubleshoot, cannot provide reasoning for their predictions, and lack accountability in real-world applications, leading to a lack of trust and a low rate of adoption by clinicians. As a result, the European Union (through the General Data Protection Regulation) and the US Food & Drug Administration have published new requirements and guidelines calling for interpretability and explainability in AI used for medical applications.
My thesis addresses these issues by creating interpretable models for the key clinical decisions of lesion analysis in mammography (Chapters 2 and 3) and pattern identification in EEG monitoring (Chapter 4). To create models with discriminative performance comparable to their uninterpretable counterparts, I constrain neural network models using novel neural network architectures, objective functions, and training regimes. The resultant models are inherently interpretable, providing explanations for each prediction that faithfully represent the underlying decision-making of the model. These models are more than just decision makers; they are decision aids capable of explaining their predictions in a way medical practitioners can readily comprehend. This human-centered approach allows a clinician to inspect the reasoning of an AI model, empowering users to better calibrate their trust in its predictions and overrule it when necessary.
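One common route to the kind of inherently interpretable neural network the abstract describes is case-based "prototype" reasoning: the model scores a new case's similarity to a small set of learned prototype cases, and the prediction is a transparent linear function of those scores, so the explanation ("this case resembles prototype 2, a known malignant example") is the decision process itself. The sketch below is purely illustrative; all names, dimensions, and the similarity function are assumptions for demonstration, not the dissertation's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned quantities: each prototype is a feature vector tied
# to a real training case that a clinician could inspect (e.g., an image
# patch of a lesion). In practice these are learned end to end.
prototypes = rng.normal(size=(4, 8))        # 4 prototypes, 8-dim features
class_weights = np.array([[1.0, 0.0],       # prototypes 0-1 -> class 0 ("benign")
                          [1.0, 0.0],
                          [0.0, 1.0],       # prototypes 2-3 -> class 1 ("malignant")
                          [0.0, 1.0]])

def explain_and_predict(features):
    """Score similarity to each prototype, then predict from those scores.

    The logits are a linear function of the similarity scores, so the
    per-prototype contributions ARE the model's explanation.
    """
    dists = np.linalg.norm(prototypes - features, axis=1)
    # A similarity that grows as distance shrinks (bounded, illustrative).
    sims = np.log((dists**2 + 1.0) / (dists**2 + 1e-4))
    logits = sims @ class_weights
    contributions = sims[:, None] * class_weights   # evidence per prototype
    return logits, contributions

# A case whose features sit near prototype 2 should be classified as
# class 1, with prototype 2 reported as the dominant evidence.
case = prototypes[2] + 0.01 * rng.normal(size=8)
logits, contrib = explain_and_predict(case)
pred = int(np.argmax(logits))
top_proto = int(np.argmax(contrib[:, pred]))
print("predicted class:", pred)             # -> 1
print("most similar prototype:", top_proto) # -> 2
```

The design point this illustrates is that interpretability here is not a post-hoc overlay: removing the explanation (the similarity scores) would also remove the prediction, which is what distinguishes inherently interpretable models from black boxes explained after the fact.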
Citation
Barnett, Alina Jade (2023). Interpretable Machine Learning With Medical Applications. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/29171.
Duke's student scholarship is made available to the public using a Creative Commons Attribution / Non-commercial / No derivative (CC-BY-NC-ND) license.