Deep Learning-Based Projection Extrapolation for Limited-Angle CBCT Reconstruction

dc.contributor.advisor

Yin, Fang-Fang

dc.contributor.author

Liu, Yukun

dc.date.accessioned

2025-07-02T19:08:09Z

dc.date.available

2025-07-02T19:08:09Z

dc.date.issued

2025

dc.department

Medical Physics DKU

dc.description.abstract

Purpose: Limited-angle CBCT reconstruction often suffers from incomplete projection data, resulting in severe wedge artifacts, image distortions, and reduced image quality. This study introduces a deep learning-based projection extrapolation filter that enables high-quality CBCT image reconstruction from limited-angle data, aiming to mitigate artifacts and improve clinical usability.

Methods: This study developed a deep convolutional network based on the ResUNet architecture to extrapolate missing projection data. The training data were projections generated using TIGRE (Tomographic Iterative GPU-based Reconstruction Toolbox). CBCT projections were simulated for 10 patients over 180 degrees plus the fan angle (full-fan geometry) and over 120 degrees (limited-angle geometry) to replicate real-world imaging conditions. The projections were then resampled along the angular dimension into a total of 7680 sinogram pairs (limited-angle and adequate-angle), which were randomly divided into training and validation sets in a 9:1 ratio, with the remaining data reserved for testing. A ResUNet model was trained to extrapolate each limited-angle sinogram to an adequate-angle sinogram. After the extrapolated data were resampled back into projections, the final reconstruction was performed using the Feldkamp-Davis-Kress (FDK) algorithm. Peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) were used to quantify image quality improvements, with attention to both reconstructed image quality and artifact reduction. A minimal simulation sketch follows.
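For context, the sketch below shows one way such adequate-angle and limited-angle projection pairs can be simulated with TIGRE's Python interface and reconstructed with FDK. The phantom volume, fan angle, detector geometry, and view counts are illustrative assumptions, not the study's actual settings.

import numpy as np
import tigre
from tigre.algorithms import fdk

# Default cone-beam geometry; the 256^3 voxel grid is an assumed value.
geo = tigre.geometry(mode='cone', nVoxel=np.array([256, 256, 256]), default=True)

# Placeholder cube standing in for a patient CT volume; the study instead
# forward-projected 10 patient datasets.
vol = np.zeros(geo.nVoxel, dtype=np.float32)
vol[64:192, 64:192, 64:192] = 1.0

# "Adequate-angle" scan: 180 degrees plus the fan angle (assumed ~20 degrees here).
fan_angle_deg = 20.0
angles_full = np.linspace(0, np.deg2rad(180 + fan_angle_deg), 400, dtype=np.float32)

# Limited-angle scan: 120 degrees.
angles_lim = np.linspace(0, np.deg2rad(120), 240, dtype=np.float32)

# Forward-project to obtain projection stacks (views x detector rows x columns).
proj_full = tigre.Ax(vol, geo, angles_full)
proj_lim = tigre.Ax(vol, geo, angles_lim)

# FDK reconstructions; the limited-angle volume shows the wedge artifacts
# that the learned extrapolation filter is intended to suppress.
rec_full = fdk(proj_full, geo, angles_full)
rec_lim = fdk(proj_lim, geo, angles_lim)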

Results: The proposed method effectively generates the extrapolated projections, yielding reconstructions with reduced image artifacts. The quantitative results showed a PSNR of 33.012 and a loss of 0.002 for the model, indicating superior performance. The reconstructed CBCT volumes demonstrate superior image quality compared with CBCT reconstructed from limited-angle data using conventional methods and significantly reduce image artifacts, supporting the potential for integration into real-time clinical workflows.
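As a reference point, the reported image-quality metrics can be computed on reconstructed slices as sketched below, assuming scikit-image is available; the arrays here are random stand-ins rather than study data (in practice, an adequate-angle FDK reconstruction would be compared against the reconstruction from extrapolated projections).

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_slice(reference, test):
    # PSNR and SSIM between a reference slice and a test slice,
    # using the reference intensity range as the data range.
    data_range = float(reference.max() - reference.min())
    psnr = peak_signal_noise_ratio(reference, test, data_range=data_range)
    ssim = structural_similarity(reference, test, data_range=data_range)
    return psnr, ssim

# Stand-in slices for illustration only.
ref = np.random.rand(256, 256).astype(np.float32)
test = ref + 0.01 * np.random.randn(256, 256).astype(np.float32)
print(evaluate_slice(ref, test))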

Conclusion: Our deep learning-based projection extrapolation filter enables artifact reduction in CBCT reconstruction from limited-angle data. The proposed method holds promise for improving CBCT imaging quality in applications such as image-guided radiotherapy. Future work includes using updated models to further improve extrapolated image quality; clinical evaluation of the proposed technique is also warranted.

dc.identifier.uri

https://hdl.handle.net/10161/32953

dc.rights.uri

https://creativecommons.org/licenses/by-nc-nd/4.0/

dc.subject

Medical imaging

dc.title

Deep Learning-Based Projection Extrapolation for Limited-Angle CBCT Reconstruction

dc.type

Master's thesis

duke.embargo.months

23

duke.embargo.release

2027-05-19
