Zhang, Lei; Duan, Xiaoyu X.; Li, Xiang
2025-07-02
2025-07-02
2025
https://hdl.handle.net/10161/32934

Purpose: Breast cancer screening has significantly reduced mortality rates through early detection and treatment. Mammography is the most commonly used and widely accepted method for breast cancer screening. However, detecting abnormalities in mammographic images, together with the challenges posed by varying image quality, remains a significant difficulty. This study aims to evaluate and compare the performance of different deep learning models and training datasets to identify the most effective approach for breast cancer segmentation in mammography.

Methods: A total of 960 mammographic images of digital anthropomorphic breast phantoms, each containing simulated lesions, were generated using a ray-tracing technique. The U-Net model was initially pretrained on this simulated dataset to learn the general features of mammographic images with lesions. After pretraining, the model was fine-tuned using transfer learning on clinical images from the public Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM). The clinical images were chosen from the craniocaudal view of the dataset to maintain a consistent and standardized perspective of the breast tissue. For fine-tuning, three clinical subsets of 190, 344, and 507 images were selected. Both U-Net and U-Net++ architectures were evaluated. Each model was trained using a weighted combination of the Dice-Sørensen (Dice) Coefficient loss and Binary Cross-Entropy (BCE) loss, with two weighting ratios: 5:5 and 7:3. The pretrained and fine-tuned U-Net and U-Net++ networks were then applied to segment the clinical images, and segmentation performance was evaluated using standard metrics: Euclidean Distance (ED), Hausdorff Distance (HD), Structural Similarity (SSIM), Dice Coefficient, and successful prediction rate.

Results: The 6-layer architecture achieved the best overall performance, with the model trained on the combined BCE and Dice loss showing the highest accuracy. For simulated images, the mean values were: ED = 1.75 ± 1.36 pixels, HD = 12.93 ± 20.42 pixels, SSIM = 0.98 ± 0.32, and Dice = 0.80 ± 0.07. Clinical images showed accurate lesion localization with ED = 7.76 pixels and HD = 40.07 pixels. On the 190-image CBIS-DDSM dataset, U-Net's Dice score increased from 21.73 without transfer learning to 22.55 with transfer learning, while U-Net++'s Dice score was 19.78 without transfer learning and 19.57 with transfer learning. On the 507-image dataset, U-Net's Dice score improved from 26.78 to 29.4 with transfer learning, while U-Net++'s score went from 23.81 to 21.83. U-Net++ outperformed U-Net in all configurations across different dataset sizes. The 7:3 Dice-BCE ratio generally provided better segmentation accuracy. Training on pure clinical data consistently yielded higher successful prediction rates than training on mixed datasets. For instance, on the 190-image CBIS-DDSM dataset, U-Net trained on pure clinical data achieved a Dice score of 49% and a successful prediction rate of 43.75%, whereas the mixed dataset (960 simulated + 190 clinical images) yielded a higher Dice score of 73% but a significantly lower prediction rate of 21.88%.
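To make the loss formulation in Methods concrete, the following is a minimal PyTorch-style sketch of a weighted Dice + BCE objective. The class and argument names (DiceBCELoss, dice_weight, bce_weight) and the default 7:3 weighting are illustrative assumptions, not the implementation used in the thesis.

    # Illustrative sketch only; assumes PyTorch and a binary segmentation setup.
    import torch
    import torch.nn as nn

    class DiceBCELoss(nn.Module):
        """Weighted combination of Dice loss and binary cross-entropy.

        dice_weight:bce_weight corresponds to the 5:5 or 7:3 ratios
        evaluated in the study (e.g., 0.7 and 0.3 for the 7:3 ratio).
        """

        def __init__(self, dice_weight=0.7, bce_weight=0.3, smooth=1e-6):
            super().__init__()
            self.dice_weight = dice_weight
            self.bce_weight = bce_weight
            self.smooth = smooth
            self.bce = nn.BCEWithLogitsLoss()

        def forward(self, logits, target):
            # BCE is computed on raw logits for numerical stability.
            bce_loss = self.bce(logits, target)

            # Dice loss is computed on sigmoid probabilities, per image.
            probs = torch.sigmoid(logits).view(logits.size(0), -1)
            target_flat = target.view(target.size(0), -1)
            intersection = (probs * target_flat).sum(dim=1)
            dice = (2.0 * intersection + self.smooth) / (
                probs.sum(dim=1) + target_flat.sum(dim=1) + self.smooth)
            dice_loss = 1.0 - dice.mean()

            # Weighted sum of the two terms.
            return self.dice_weight * dice_loss + self.bce_weight * bce_loss

Setting dice_weight=0.5 and bce_weight=0.5 would correspond to the 5:5 ratio also evaluated in the study.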
Conclusion: This study demonstrates that deep learning models, particularly U-Net++, offer superior performance over U-Net for breast cancer segmentation in mammographic images. The U-Net++ architecture, with its more complex design, provided better results, especially when trained on mixed datasets. Transfer learning was effective in improving performance, particularly for larger datasets, while smaller datasets benefited less from this approach. The models segmented malignant lesions more effectively than benign ones, highlighting their potential for improving the accuracy of breast cancer detection in clinical applications. These results suggest that deep learning models, when optimized, can significantly enhance early breast cancer detection and aid in more accurate diagnosis.

https://creativecommons.org/licenses/by-nc-nd/4.0/
Physics
Performance Evaluation of Deep Learning Models for Lesion Detection in Mammography
Master's thesis