CBCT Reconstruction Using ResUNet: A Deep Learning Framework for Filter-Free Imaging and Enhanced Tumor Localization

dc.contributor.advisor

Yin, Fang-Fang

dc.contributor.author

Pu, Zhuqing

dc.date.accessioned

2025-07-02T19:08:08Z

dc.date.available

2025-07-02T19:08:08Z

dc.date.issued

2025

dc.department

DKU - Medical Physics Master of Science Program

dc.description.abstract

Purpose: Cone-beam computed tomography (CBCT) plays a critical role in radiotherapy by providing image guidance for accurate tumor localization. Current CBCT reconstruction methods rely on filtering techniques that introduce noise and artifacts. This study evaluates the effectiveness of a deep learning model, ResUNet, as an end-to-end solution for CBCT reconstruction, improving image quality and tumor localization accuracy.

Methods and Materials: This study used the TIGRE toolbox to simulate CBCT forward projections from a high-quality CT volume image (GroundTruth-CT). The CBCT projections were reconstructed into a CBCT volume image (Raw-CBCT) using backprojection without filtering, yielding a blurred, low-quality image. A supervised convolutional neural network (CNN) model, ResUNet, was trained to enhance Raw-CBCT by learning to predict the high-quality GroundTruth-CT: the network input was Raw-CBCT, the ground truth was GroundTruth-CT, and the model was optimized using a mean squared error (MSE) loss function. The final output of the model was an enhanced CBCT volume image (ResUNet-CBCT). For comparison, the standard Feldkamp, Davis, and Kress (FDK) reconstruction method was applied to the same CBCT projections, producing FDK-CBCT. Both ResUNet-CBCT and FDK-CBCT were compared to GroundTruth-CT using the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and MSE to evaluate image quality and reconstruction accuracy (Sara, Akter, & Uddin, 2019).

Results: The ResUNet model significantly outperformed the standard FDK reconstruction method. For the test patients (L333, L096), ResUNet achieved notably lower MSE, higher SSIM, and higher PSNR, indicating improved image quality. In particular, ResUNet demonstrated superior artifact suppression, reduced noise, and enhanced anatomical detail visibility compared to FDK reconstruction, especially in low-contrast regions.

Conclusions: This study demonstrates that the ResUNet model can effectively perform end-to-end CBCT reconstruction, replacing the filtering step and yielding superior image quality compared to standard FDK-CBCT. The approach may also reduce imaging dose and improve efficiency. Future research will focus on optimizing network architectures and validating performance on larger clinical datasets to further advance CBCT imaging in radiotherapy.
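To make the simulation pipeline described above concrete, a minimal sketch using TIGRE's Python bindings is shown here. The geometry, angle count, and volume contents are placeholder assumptions for illustration, not the settings used in the thesis.

import numpy as np
import tigre
import tigre.algorithms as algs

# Cone-beam geometry (TIGRE's demo default; the thesis-specific detector
# size and source-to-detector distance are assumptions here).
geo = tigre.geometry_default(high_resolution=False)

# Projection angles over a full rotation (angle count is a placeholder).
angles = np.linspace(0, 2 * np.pi, 360, dtype=np.float32)

# GroundTruth-CT: stand-in volume; in the study this is a high-quality
# patient CT volume.
ground_truth_ct = np.random.rand(*geo.nVoxel).astype(np.float32)

# Forward-project to simulate the CBCT projections.
projections = tigre.Ax(ground_truth_ct, geo, angles)

# Raw-CBCT: plain backprojection without filtering -> blurred, low quality.
raw_cbct = tigre.Atb(projections, geo, angles)

# FDK-CBCT: the standard filtered-backprojection baseline for comparison.
fdk_cbct = algs.fdk(projections, geo, angles)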
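The record does not include the network code itself; the following PyTorch sketch only illustrates the general idea named in the abstract, namely a residual block with a skip connection trained under an MSE loss on (Raw-CBCT, GroundTruth-CT) pairs. All layer sizes, channel counts, and the toy model are illustrative assumptions, not the actual ResUNet architecture used in the thesis.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv block with an identity skip connection, the core idea of ResUNet."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Residual (skip) connection: output = activation(F(x) + x).
        return self.relu(self.body(x) + x)

# Hypothetical stand-in for the full encoder-decoder ResUNet.
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),
    ResidualBlock(32),
    ResidualBlock(32),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()  # MSE loss, as stated in the abstract

# One training step on a dummy (Raw-CBCT, GroundTruth-CT) slice pair.
raw_slice = torch.rand(1, 1, 256, 256)  # Raw-CBCT input (placeholder)
gt_slice = torch.rand(1, 1, 256, 256)   # GroundTruth-CT target (placeholder)
loss = loss_fn(model(raw_slice), gt_slice)
optimizer.zero_grad()
loss.backward()
optimizer.step()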
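The three image-quality metrics cited in the abstract (MSE, PSNR, SSIM) are available in scikit-image. A sketch of the comparison of ResUNet-CBCT and FDK-CBCT against GroundTruth-CT follows; the array contents are placeholders, not the study's reconstructions.

import numpy as np
from skimage.metrics import (
    mean_squared_error,
    peak_signal_noise_ratio,
    structural_similarity,
)

def evaluate(reconstruction, ground_truth):
    """Return (MSE, PSNR, SSIM) of a reconstruction against GroundTruth-CT."""
    data_range = ground_truth.max() - ground_truth.min()
    return (
        mean_squared_error(ground_truth, reconstruction),
        peak_signal_noise_ratio(ground_truth, reconstruction, data_range=data_range),
        structural_similarity(ground_truth, reconstruction, data_range=data_range),
    )

# Placeholder volumes; in the study these are the reconstructed CBCT volumes.
gt = np.random.rand(64, 256, 256).astype(np.float32)
for name, recon in [("ResUNet-CBCT", gt * 0.98), ("FDK-CBCT", gt * 0.9)]:
    mse, psnr, ssim = evaluate(recon, gt)
    print(f"{name}: MSE={mse:.5f} PSNR={psnr:.2f} dB SSIM={ssim:.4f}")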

dc.identifier.uri

https://hdl.handle.net/10161/32950

dc.rights.uri

https://creativecommons.org/licenses/by-nc-nd/4.0/

dc.subject

Medical imaging

dc.subject

Artificial intelligence

dc.subject

Cone-beam computed tomography (CBCT)

dc.subject

Deep learning reconstruction

dc.subject

Image quality enhancement

dc.subject

Noise and artifact suppression

dc.subject

ResUNet Model

dc.title

CBCT Reconstruction Using ResUNet: A Deep Learning Framework for Filter-Free Imaging and Enhanced Tumor Localization

dc.type

Master's thesis

duke.embargo.months

23

duke.embargo.release

2027-05-19
