Interpretable Machine Learning With Medical Applications

dc.contributor.advisor

Rudin, Cynthia

dc.contributor.author

Barnett, Alina Jade

dc.date.accessioned

2023-10-03T13:36:21Z

dc.date.available

2023-10-03T13:36:21Z

dc.date.issued

2023

dc.department

Computer Science

dc.description.abstract

Machine learning algorithms are being adopted for clinical use, assisting with difficult medical tasks previously limited to highly skilled professionals. AI (artificial intelligence) performance on isolated tasks regularly exceeds that of human clinicians, spurring excitement about AI's potential to radically change modern healthcare. However, major concerns remain about the uninterpretable (i.e., "black box") nature of commonly used models. Black box models are difficult to troubleshoot, cannot provide reasoning for their predictions, and lack accountability in real-world applications, leading to a lack of trust and a low rate of adoption by clinicians. As a result, the European Union (through the General Data Protection Regulation) and the US Food and Drug Administration have published new requirements and guidelines calling for interpretability and explainability in AI used for medical applications.

My thesis addresses these issues by creating interpretable models for the key clinical decisions of lesion analysis in mammography (Chapters 2 and 3) and pattern identification in EEG monitoring (Chapter 4). To create models with discriminative performance comparable to that of their uninterpretable counterparts, I constrain neural networks using novel architectures, objective functions, and training regimes. The resulting models are inherently interpretable, providing explanations for each prediction that faithfully represent the underlying decision-making of the model. These models are more than just decision makers; they are decision aids capable of explaining their predictions in a way medical practitioners can readily comprehend. This human-centered approach allows a clinician to inspect the reasoning of an AI model, empowering users to better calibrate their trust in its predictions and overrule it when necessary.
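The abstract does not specify the architectures themselves, but one widely used way to make an image classifier inherently interpretable is a prototype-similarity layer in the style of case-based reasoning: class scores are computed from similarities to learned prototype vectors, so each prediction can be read off as "this case resembles prototype k." The sketch below is a minimal illustration of that idea, not the dissertation's code; all identifiers are hypothetical.

# Minimal sketch (PyTorch): a prototype layer whose per-prototype
# similarities double as the explanation for each prediction.
# Hypothetical illustration; not the dissertation's actual architecture.
import torch
import torch.nn as nn

class PrototypeLayer(nn.Module):
    def __init__(self, n_prototypes: int, feat_dim: int, n_classes: int):
        super().__init__()
        # Learned prototype vectors in the backbone's feature space.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, feat_dim))
        # Non-negative prototype-to-class weights keep the reasoning
        # additive: prototypes only contribute evidence *for* a class.
        self.class_weights = nn.Parameter(torch.rand(n_prototypes, n_classes))

    def forward(self, features: torch.Tensor):
        # features: (batch, feat_dim) pooled embedding from a CNN backbone.
        dists = torch.cdist(features, self.prototypes)   # (batch, n_prototypes)
        sims = torch.log((dists + 1) / (dists + 1e-4))   # large when close
        logits = sims @ self.class_weights.clamp(min=0)  # (batch, n_classes)
        return logits, sims  # sims are the per-prototype explanation

layer = PrototypeLayer(n_prototypes=10, feat_dim=128, n_classes=2)
feats = torch.randn(1, 128)
logits, sims = layer(feats)
print("predicted class:", logits.argmax(dim=1).item())
print("most similar prototypes:", sims[0].topk(3).indices.tolist())

Because the class score is a fixed, monotone function of the prototype similarities, the explanation ("these prototypes fired") is faithful to the model's actual computation rather than a post-hoc approximation.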

dc.identifier.uri

https://hdl.handle.net/10161/29171

dc.subject

Computer science

dc.subject

Medical imaging

dc.subject

Biostatistics

dc.subject

Human-computer interaction

dc.subject

Interpretability

dc.subject

Machine learning

dc.subject

Mammography

dc.subject

Neural network

dc.subject

Seizure

dc.title

Interpretable Machine Learning With Medical Applications

dc.type

Dissertation

Files

Original bundle

Name: Barnett_duke_0066D_17558.pdf
Size: 28.77 MB
Format: Adobe Portable Document Format
