CBCT Reconstruction Using ResUNet: A Deep Learning Framework for Filter-Free Imaging and Enhanced Tumor Localization


Date

2025

Abstract

Purpose: Cone-beam computed tomography (CBCT) plays a critical role in radiotherapy by providing image guidance for accurate tumor localization. Current CBCT reconstruction methods rely on filtering techniques that introduce noise and artifacts. This study evaluates the effectiveness of a deep learning model, ResUNet, as an end-to-end solution for CBCT reconstruction, improving image quality and tumor localization accuracy.

Methods and Materials: This study used the TIGRE toolbox to simulate CBCT forward projections from a high-quality CT volume image (GroundTruth-CT). The CBCT projections were reconstructed into a CBCT volume image (Raw-CBCT) using backprojection without filtering, resulting in a blurred, low-quality image. A supervised convolutional neural network (CNN) model, ResUNet, was trained to enhance Raw-CBCT by learning to predict the high-quality GroundTruth-CT: the input to the network was Raw-CBCT, the ground truth was GroundTruth-CT, and the model was optimized using the mean squared error (MSE) loss function. The final output of the model was an enhanced CBCT volume image (ResUNet-CBCT). For comparison, the standard Feldkamp, Davis, and Kress (FDK) reconstruction method was applied to the same CBCT projections, producing FDK-CBCT. Both ResUNet-CBCT and FDK-CBCT were compared to GroundTruth-CT using the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and MSE to evaluate image quality and reconstruction accuracy (Sara, Akter, & Uddin, 2019).

Results: The ResUNet model significantly outperformed the standard FDK reconstruction method. For the test patients (L333, L096), ResUNet achieved notably lower MSE, higher SSIM, and higher PSNR, indicating improved image quality. Specifically, ResUNet demonstrated superior artifact suppression, reduced noise, and enhanced anatomical detail visibility compared to FDK reconstruction, especially in low-contrast regions.

Conclusions: This study demonstrates that the ResUNet model can effectively perform end-to-end CBCT reconstruction, replacing the filtering step and yielding superior image quality compared to standard FDK-CBCT. The approach may also reduce imaging dose and improve workflow efficiency. Future research will focus on optimizing network architectures and validating performance on larger clinical datasets to further advance CBCT imaging in radiotherapy.
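The MSE and PSNR comparisons described in the abstract can be sketched as below. This is a minimal NumPy illustration, not the thesis code; the volumes are hypothetical stand-ins for the reconstructions, normalized to a [0, 1] data range. In practice, SSIM would come from a library implementation such as scikit-image's `structural_similarity`.

```python
import numpy as np

def mse(pred, target):
    """Mean squared error between two image volumes."""
    return float(np.mean((pred - target) ** 2))

def psnr(pred, target, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher is better."""
    err = mse(pred, target)
    return float(10.0 * np.log10(data_range ** 2 / err))

# Hypothetical volumes standing in for a reconstruction vs. GroundTruth-CT
rng = np.random.default_rng(0)
gt = rng.random((8, 64, 64))                        # normalized ground-truth volume
recon = gt + 0.01 * rng.standard_normal(gt.shape)   # reconstruction with small error

print(f"MSE:  {mse(recon, gt):.6f}")
print(f"PSNR: {psnr(recon, gt):.2f} dB")
```

A lower MSE and a higher PSNR against GroundTruth-CT are the directions in which ResUNet-CBCT is reported to improve on FDK-CBCT.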

Subjects

Medical imaging, Artificial intelligence, Cone-beam computed tomography (CBCT), Deep learning reconstruction, Image quality enhancement, Noise and artifact suppression, ResUNet Model

Citation

Pu, Zhuqing (2025). CBCT Reconstruction Using ResUNet: A Deep Learning Framework for Filter-Free Imaging and Enhanced Tumor Localization. Master's thesis, Duke University. Retrieved from https://hdl.handle.net/10161/32950.

Except where otherwise noted, student scholarship that was shared on DukeSpace after 2009 is made available to the public under a Creative Commons Attribution / Non-commercial / No derivatives (CC-BY-NC-ND) license. All rights in student work shared on DukeSpace before 2009 remain with the author and/or their designee, whose permission may be required for reuse.