Deep Learning for Breast Image Analysis: Current Progress and Future Work


Date

2025

Abstract

Deep learning has revolutionized medical image analysis, yet its application to breast imaging faces unique challenges stemming from high anatomical variability, subtle disease features, limited labeled data, and domain shift across imaging acquisition parameters. This dissertation addresses these challenges through three interconnected research directions focused on anomaly detection and domain adaptation for breast imaging, with applications to Digital Breast Tomosynthesis (DBT) and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI).

First, we address the problem of anomaly detection in high-resolution DBT images, where cancer prevalence is extremely low ($<1\%$) and acquiring labeled pathological cases is expensive. We introduce PICARD (Pluralistic Image Completion for Anomalous Representation Detection), a novel unsupervised anomaly localization method that learns exclusively from normal tissue. PICARD uses test-time spatial dropout layers applied to an image completion network to rapidly sample multiple plausible normal completions of image regions, then compares these to the ground truth using our proposed minimum completion distance (MCD) metric. We provide theoretical analysis demonstrating that MCD approaches perfect classification performance as the number of sampled completions increases. On the challenging BCS-DBT dataset, PICARD achieves 0.875 pixel-level AUC for tumor localization, substantially outperforming state-of-the-art methods such as PatchSVDD (0.777) and CutPaste (0.737) by at least 10\%, while being over 64 times faster than PatchSVDD.
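The scoring step described above can be sketched compactly. Below is a minimal NumPy illustration of a minimum completion distance: given K dropout-sampled normal completions of a masked region, the anomaly score is the smallest distance between the observed patch and any completion. The Euclidean distance used here is an illustrative assumption; the dissertation's exact distance function and sampling machinery may differ.

```python
import numpy as np

def minimum_completion_distance(completions, ground_truth):
    """Anomaly score for one image region: the smallest distance between
    the observed patch and any of the K sampled normal completions.

    Normal tissue should be well explained by at least one completion
    (low score), while a tumor should be far from all of them (high score).
    """
    completions = np.asarray(completions, dtype=float)    # (K, H, W)
    ground_truth = np.asarray(ground_truth, dtype=float)  # (H, W)
    dists = np.linalg.norm(
        (completions - ground_truth).reshape(len(completions), -1), axis=1
    )
    return float(dists.min())
```

In PICARD the completions come from test-time spatial dropout applied to an image completion network; here they are simply arrays passed in by the caller.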

Second, we investigate the fundamental problem of domain shift in breast MRI caused by variability in image acquisition parameters (IAPs) such as scanner model and manufacturer, repetition and echo time, and other factors. We develop a multi-task learning approach to reverse-engineer these parameters directly from images, demonstrating that a single ResNet-18 model can predict 12 different categorical and continuous IAPs with high accuracy. Our model achieves $>97\%$ top-1 accuracy for predicting six out of ten categorical IAPs and relative errors of $<1\%$ for continuous parameters. We demonstrate a practical application where our IAP prediction model is used to sort unlabeled MRI data into different domains for appropriate cancer classification model selection, improving classification accuracy from 62.66\% to 76.95\%.
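A shared backbone with one head per parameter is the core of this multi-task design. The toy sketch below substitutes a fixed random projection for the ResNet-18 backbone; the head names, class counts, and dimensions are hypothetical placeholders, not the dissertation's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

class MultiTaskIAPModel:
    """Toy multi-task IAP predictor: a shared feature extractor (a fixed
    random projection standing in for ResNet-18) feeds one classification
    head per categorical IAP and one regression head per continuous IAP."""

    def __init__(self, in_dim, feat_dim, categorical, continuous):
        self.backbone = rng.normal(size=(in_dim, feat_dim))
        self.cls_heads = {name: rng.normal(size=(feat_dim, n))
                          for name, n in categorical.items()}
        self.reg_heads = {name: rng.normal(size=feat_dim)
                          for name in continuous}

    def predict(self, x):
        feats = np.tanh(x @ self.backbone)  # shared representation
        preds = {name: int(np.argmax(feats @ W))
                 for name, W in self.cls_heads.items()}
        preds.update({name: float(feats @ w)
                      for name, w in self.reg_heads.items()})
        return preds

# One model, many acquisition parameters predicted from one input.
model = MultiTaskIAPModel(
    in_dim=256, feat_dim=64,
    categorical={"manufacturer": 3, "scanner_model": 5},
    continuous=["repetition_time", "echo_time"],
)
preds = model.predict(rng.normal(size=256))
```

The domain-sorting application follows directly: a predicted categorical IAP such as manufacturer can route an unlabeled scan to the cancer classifier trained on the matching domain.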

Third, building upon this understanding of domain shift, we develop advanced generative methods for harmonization and introduce novel metrics for evaluating such approaches. We present SegGuidedDiff, a segmentation-guided diffusion model that enables precise, anatomically controllable generation of breast MRI by conditioning on anatomical segmentation masks. Unlike existing approaches that adapt large pretrained text-to-image models or rely on GANs, SegGuidedDiff uses image-space diffusion models trained from scratch on medical images, yielding superior spatial control and image quality and paving the way for anatomically constrained harmonization methods. Critically, we address the fundamental challenge of evaluating harmonization and generation methods in medical imaging by introducing the Fréchet Radiomic Distance (FRD), a metric based on 464 interpretable radiomic features extracted from images and their wavelet-filtered representations. FRD demonstrates multiple advantages over learned-feature metrics such as FID: superior alignment with downstream medical imaging tasks, improved stability and computational efficiency on small datasets, clinical interpretability through pre-defined features, and better sensitivity to image corruptions and quality variations. We validate FRD across diverse applications, including out-of-domain detection, image-to-image translation evaluation, and unconditional generation assessment, demonstrating its suitability for medical image distribution comparison. Finally, we present StyleMapper, an early approach to breast MRI harmonization that uses disentangled representation learning to extract separate style and anatomical content codes. A novel training strategy based on randomly sampled image transformations and cross-domain reconstruction losses enables arbitrary style transfer to unseen scanner types from limited training data.
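As its name suggests, FRD applies the Fréchet (2-Wasserstein) distance between Gaussian fits of two feature distributions, the same formula underlying FID, to radiomic rather than Inception features. A minimal NumPy sketch of that distance follows; the radiomic feature extraction itself is omitted and assumed to have been done upstream.

```python
import numpy as np

def frechet_distance(feats_a, feats_b):
    """Frechet distance between Gaussian fits of two feature matrices
    (rows = samples, columns = features, e.g. radiomic features):

        d^2 = ||mu_a - mu_b||^2 + Tr(S_a + S_b - 2 (S_a S_b)^{1/2})
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Tr((S_a S_b)^{1/2}) equals the sum of square roots of the
    # eigenvalues of S_a S_b, which are real and nonnegative for
    # covariance matrices (clipping guards against numerical noise).
    eigs = np.linalg.eigvals(cov_a @ cov_b)
    tr_sqrt = np.sqrt(np.clip(eigs.real, 0.0, None)).sum()
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b) - 2.0 * tr_sqrt)
```

Two samples from the same distribution score near zero, while a mean shift or covariance mismatch grows the distance, which is what makes the metric usable for comparing generated or harmonized images against real ones.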

The methods developed in this dissertation demonstrate that effective deep learning for breast imaging requires careful consideration of data scarcity, anatomical complexity, and domain variability. By developing unsupervised approaches that learn from normal data, techniques to understand and quantify domain shift, and harmonization methods to mitigate cross-domain performance degradation, this work provides a comprehensive framework for advancing automated breast image analysis. These contributions lay important groundwork for the development of more robust, generalizable, and clinically applicable deep learning systems for breast cancer detection and diagnosis.

Subjects

Artificial intelligence, Computer science, anomaly detection, breast MRI, computer vision, deep learning, domain adaptation, generative models

Citation

Konz, Nicholas Clayton (2025). Deep Learning for Breast Image Analysis: Current Progress and Future Work. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/34121.

Except where otherwise noted, student scholarship that was shared on DukeSpace after 2009 is made available to the public under a Creative Commons Attribution / Non-commercial / No derivatives (CC-BY-NC-ND) license. All rights in student work shared on DukeSpace before 2009 remain with the author and/or their designee, whose permission may be required for reuse.