Browsing by Author "Ren, Lei"
Item Open Access Assessing the feasibility of using deformable registration for on-board multi-modality based target localization in radiation therapy (2018) Ren, Ge
Purpose:
Cone beam computed tomography (CBCT) is typically used for on-board target localization in radiation therapy. However, CBCT has poor soft-tissue contrast, which makes it extremely challenging to localize tumors in soft tissue, as in liver, prostate, and breast cancers. This study explores the feasibility of using deformable image registration (DIR) to generate on-board multi-modality images that improve soft-tissue contrast for target localization in radiation therapy.
Methods:
Patient CT or MR images are acquired during the simulation stage and used as prior images. CBCT images are acquired on-board for clinical target localization. B-spline based deformable registration is used to register the MR or CT images with the CBCT images to generate synthetic on-board MR/CT images, which are then used for on-board target localization. Liver, prostate, and breast patient data were used to investigate the feasibility of the method. The evaluation comprised three aims: (1) assess whether the registration and margin design used in clinical practice are sufficient to ensure coverage of the on-board tumor volume: the synthetic on-board MR/CT images were used to verify target coverage based on the shifts determined by CT-CBCT registration in clinical practice; (2) evaluate the potential for margin optimization based on the synthetic multi-modality imaging technique: shifts were determined by rigid registration between the planning CT and the synthetic on-board MR/CT, and the optimized PTV margin was determined to ensure coverage of the deformed tumor volume; (3) evaluate the margin that can be tolerated for DIR uncertainty based on the deformed tumor contour in the planning CT: shifts were determined by rigid registration between the planning CT and the synthetic on-board MR/CT, and the tolerance margin was determined such that the expanded deformed tumor volume remained within the PTV.
Results:
In the DIR process, using CT images as the prior yielded better alignment than using MR images. The evaluation showed: (1) coverage was above 90% in 6 of 8 liver cases, above 90% in 6 of 7 breast cases, and above 94% in all prostate cases. Most of the tumor volumes defined by the on-board synthetic images were covered by the PTV based on the shifts applied in clinical practice. The three under-dosed cases were associated with large interfraction deviation over long treatment courses, small tumor volumes, and zero-PTV-margin designs. (2) For 6 of the liver cases, 5 of the prostate cases, and all of the breast cases, the synthetic images allowed reduction of the PTV margin, by up to 6 mm, 4 mm, and 1.5 mm, respectively. For cases with a reduced optimized margin, dose to the OARs and normal tissue can be spared, while for cases with an increased optimized margin the dose increase was not significant. (3) For cases with reduced margins, a tolerance margin for DIR uncertainty is available: 2-4 mm, 1-5 mm, and 2-3 mm for the liver, prostate, and breast cases, respectively.
Conclusion:
Our studies demonstrated the feasibility of using on-board synthetic multi-modality imaging to improve soft-tissue contrast for target localization in low-contrast regions. This new technique holds great promise for optimizing the PTV margin and improving treatment accuracy.
Item Open Access Building a patient-specific model using transfer learning for 4D-CBCT augmentation (2020) Sun, Leshan
Purpose: Four-dimensional cone beam computed tomography (4D-CBCT) has been developed to provide respiratory phase-resolved volumetric images in aid of image-guided radiation therapy (IGRT), especially SBRT, which requires highly accurate dose delivery. However, 4D-CBCT suffers from insufficient projection data in each phase bin, which leads to severe noise and artifacts. To address this problem, deep learning methods have been introduced to augment image quality. However, traditional deep learning methods tend to lose small details, such as lung textures, in the augmented CBCT images. In this study, a transfer learning method was proposed to further improve the image quality of deep-learning-augmented CBCT for a specific patient.
Methods: The network architecture used for transfer learning is a standard U-net. CBCT images were reconstructed using limited projections that were either simulated from ground-truth CT images or taken directly from the clinic. In the transfer learning training process, the network was first fed with different patients' data to learn a general restoration process for augmenting under-sampled CBCT images from any patient. The restoration pattern was then refined for one specific patient by re-feeding the network with that patient's data from prior days. The performance of transfer learning was evaluated by comparing the augmented CBCT images to those of the traditional deep learning method, both qualitatively and quantitatively, using the structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR).
To study the effectiveness and time efficiency of transfer learning, two transfer learning methods, whole-layer fine-tuning and layer freezing, were compared. Two training schemes, whole-data tuning and sequential tuning, were also employed to explore further improving transfer learning performance and reducing training time.
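The layer-freezing variant can be sketched in PyTorch as follows (a minimal illustration, assuming the early layers of a population-trained network are locked and only the remaining layers are fine-tuned on the new patient's data; the tiny model and layer names are hypothetical stand-ins for the actual U-net):

```python
import torch
import torch.nn as nn

class TinyUNetStub(nn.Module):
    """Stand-in for the augmentation U-net (layer names are illustrative)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(1, 8, 3, padding=1)   # early feature extractor
        self.dec = nn.Conv2d(8, 1, 3, padding=1)   # reconstruction head

def freeze_for_transfer(model, frozen_prefix="enc"):
    """Layer freezing: lock the population-learned early layers, so only the
    remaining layers are updated during patient-specific fine-tuning."""
    for name, p in model.named_parameters():
        if name.startswith(frozen_prefix):
            p.requires_grad = False
    # The optimizer only sees the still-trainable parameters
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.Adam(trainable, lr=1e-4), trainable
```

Freezing most of the network both shrinks the optimization problem and shortens fine-tuning, consistent with the short patient-specific training times reported below.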
Results: The comparison demonstrated that images augmented by the transfer learning method not only recovered more detailed information in the lung area but also had more uniform pixel values than basic U-net images when compared to the ground truth. In addition, comparing the two transfer learning methods (whole-layer fine-tuning and layer freezing) and the two training schemes (sequential training and whole-data training), whole-data training with layer freezing was found to be time-efficient, with training times as short as 10 minutes. In the study of the effect of projection number, transfer-learning-augmented CBCT images reconstructed from as few as 90 of 900 projections still showed improvement over U-net-augmented images.
Conclusion: Overall, the transfer-learning-based image augmentation method is efficient and effective at improving the quality of under-sampled 3D/4D-CBCT images relative to traditional deep learning methods. Given its relatively fast computation and strong performance, it can be highly valuable for 4D image-guided radiation therapy.
Item Embargo CBCT image enhancement for improving accuracy of radiomics analysis and soft tissue target localization (2023) Zhang, Zeyu
Cone-beam computed tomography (CBCT) is one of the most commonly used imaging modalities in radiation therapy. It provides valuable information for target localization and outcome prediction throughout treatment courses. However, CBCT images suffer from various artifacts caused by scattering, beam hardening, undersampling, system hardware instability, and patient motion, which severely degrade image quality. In addition, CBCT images have extremely poor soft-tissue contrast, making it almost impossible to accurately localize tumors in soft tissue, such as liver tumors.
This dissertation presents improvements in CBCT image quality for better outcome prediction and target localization, achieved by developing deep learning and finite element based image enhancement models.
A deep learning based CBCT image enhancement model was developed to improve radiomic feature accuracy. The model was trained on 4D-CBCT of ten patients and tested on three patients with different tumor sizes. The results show that 4D-CBCT image quality can substantially affect the accuracy of radiomic features, and that the degree of impact is feature-dependent. The deep learning model was able to enhance the anatomical details and edge information in the 4D-CBCT and to remove other image artifacts. This enhancement of image quality reduced the errors of most radiomic features. The average reductions of radiomics errors for the three patients were 20.0%, 31.4%, 36.7%, 50.0%, 33.6%, and 11.3% for histogram, GLCM, GLRLM, GLSZM, NGTDM, and wavelet features, respectively, and the error reduction was more significant for patients with larger tumors. To further improve the results, a patient-specific training model was developed. The model was trained on an augmented dataset of a single patient and tested on a different 4D-CBCT of the same patient. Compared with the group-based model, the patient-specific training model further improved the accuracy of the radiomic features, especially those with large errors in the group-based model. For example, the 3D whole-body and ROI loss-based patient-specific model reduced the errors of the first-order median feature by 83.67%, the wavelet LLL feature maximum by 91.98%, and the wavelet HLL skewness feature by 15.0% on average for the four patients tested.
In addition, a patient-specific deep learning model is proposed to generate synthetic magnetic resonance imaging (MRI) from CBCT to improve tumor localization. A key innovation is using patient-specific CBCT-MRI image pairs to train the model. Specifically, the patient planning CT was deformably registered to a prior MRI, and then used to simulate CBCT via simulated projections and Feldkamp-Davis-Kress reconstruction. These CBCT-MRI pairs were augmented with translations and rotations to generate enough patient-specific training data. A U-net-based deep learning model was developed and trained to generate synthetic MRI from CBCT in the liver, and then tested on a different CBCT dataset.
Synthetic MRIs were quantitatively evaluated against ground-truth MRI. On average, the synthetic MRI achieved 28.01, 0.025, and 0.929 for peak signal-to-noise ratio, mean square error, and structural similarity index, respectively, outperforming the CBCT images for the three patients tested. To further improve the robustness of synthetic MRI generation, we developed an organ-specific biomechanical model. This model registers the pre-treatment MRI to the onboard CBCT based on the organ contours, and combines the MRI organ with the CBCT body to generate a hybrid MRI/CBCT. Forty-eight registration cases were performed, comprising 18 Monte Carlo simulated cases and 30 real patient cases. We identified tumor landmarks in the hybrid MRI/CBCT, the onboard CBCT, and the planning CT, and calculated the landmark localization errors of the two CBCT images relative to the ground-truth planning CT. The results show that tumor landmark localization accuracy around the tumor improved by 54.2 ± 22.2%.
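The three evaluation metrics used here (PSNR, MSE, and a structural similarity index) can be sketched in NumPy; the SSIM below is the simplified single-window (global) form rather than the usual sliding-window version, which is adequate for a quick sanity check:

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def psnr(a, b, data_range=1.0):
    # Peak signal-to-noise ratio in dB for images scaled to [0, data_range]
    return float(10.0 * np.log10(data_range ** 2 / mse(a, b)))

def global_ssim(a, b, data_range=1.0):
    # Single-window SSIM: one mean/variance/covariance over the whole image
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float((2 * mu_a * mu_b + c1) * (2 * cov + c2) /
                 ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))
```

A production evaluation would typically use a library implementation of windowed SSIM; this sketch only makes the reported numbers concrete.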
Item Open Access Cone Beam Computed Tomography Image Quality Augmentation using Novel Deep Learning Networks (2019) Zhao, Yao
Purpose: Cone beam computed tomography (CBCT) plays an important role in image guidance for interventional radiology and radiation therapy by providing 3D volumetric images of the patient. However, CBCT suffers from relatively low image quality with severe image artifacts due to the nature of the image acquisition and reconstruction process. This work investigated the feasibility of using deep learning networks to substantially augment the image quality of CBCT by learning a direct mapping from the original CBCT images to their corresponding ground truth CT images. The possibility of using deep learning for scatter correction in CBCT projections was also investigated.
Methods: Two deep learning networks, a symmetric residual convolutional neural network (SR-CNN) and a U-net convolutional network, were trained to use the input CBCT images to produce high-quality CBCT images matching the corresponding ground truth CT images. Both clinical and Monte Carlo simulated datasets were included for model training. To eliminate misalignments between CBCT and the corresponding CT, rigid registration was applied to the clinical data. Binary masks obtained by Otsu auto-thresholding were applied to the Monte Carlo simulated data to avoid the negative impact of non-anatomical structures on the images. After model training, a new set of CBCT images was fed into the trained network to obtain augmented CBCT images, and the performances were evaluated and compared both qualitatively and quantitatively. The augmented CBCT images were quantitatively compared to CT using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM).
For the study of deep learning based scatter correction in CBCT, the scatter signal for each projection was obtained by Monte Carlo simulation. A U-net model was trained to predict the scatter signals from the original CBCT projections. The predicted scatter components were then subtracted from the original CBCT projections to obtain scatter-corrected projections. CBCT images reconstructed from the scatter-corrected projections were quantitatively compared with those reconstructed from the original projections.
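The subtraction step itself is simple; a sketch of the per-projection correction (with an assumed conversion to line integrals before reconstruction, and an illustrative clipping floor to keep the corrected intensities positive) could look like:

```python
import numpy as np

def scatter_correct(raw_projection, predicted_scatter, i0=1.0, eps=1e-6):
    """Subtract the network-predicted scatter estimate from a raw projection,
    clip to keep the primary signal physical, and convert to line integrals
    suitable for filtered backprojection."""
    primary = np.clip(raw_projection - predicted_scatter, eps, None)
    return -np.log(primary / i0)
```

Whether subtraction happens in intensity or line-integral space, and the exact normalization, depend on the acquisition pipeline; this sketch only fixes the order of operations described above.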
Results: The CBCT images augmented by both the SR-CNN and U-net models showed substantial improvement in image quality. Compared to the original CBCT, the augmented CBCT images also achieved much higher PSNR and SSIM in quantitative evaluation. U-net demonstrated better performance than SR-CNN in both quantitative evaluation and computational speed for CBCT image quality augmentation.
With the scatter correction in CBCT projections predicted by U-net, the scatter-corrected CBCT images demonstrated substantial improvement of the image contrast and anatomical details compared to the original CBCT images.
Conclusion: The proposed deep learning models can effectively augment CBCT image quality by correcting artifacts and reducing scatter. Given their relatively fast computational speeds and great performance, they can potentially become valuable tools to substantially enhance the quality of CBCT to improve its precision for target localization and adaptive radiotherapy.
Item Open Access Deep Learning-based CBCT Projection Interpolation, Reconstruction, and Post-processing for Radiation Therapy (2022) Lu, Ke
Cone-beam computed tomography (CBCT) is an X-ray-based imaging modality widely used in medical practice. Because of the ionizing imaging dose CBCT delivers, many studies have sought to reduce the number of projections (sparse sampling) to lower the imaging dose while maintaining good image quality and fast reconstruction speed. Conventionally, a CBCT volume is reconstructed analytically with the Feldkamp-Davis-Kress (FDK) algorithm, which backprojects filtered projections according to projection angle. However, the FDK algorithm requires a dense angular sampling that satisfies the Shannon-Nyquist theorem: it reconstructs CBCT quickly but requires a relatively high patient imaging dose. Iterative methods such as the algebraic reconstruction technique (ART) and compressed sensing (CS) methods have been investigated to reduce patient imaging dose. These methods update estimated images iteratively, with CS methods applying penalty terms to reward desired features. Yet they are limited by their iterative design, with substantially increased computation time and power consumption. Scholars have also researched bypassing the limit of the Shannon-Nyquist theorem by interpolating densely sampled CBCT projections from sparsely sampled ones. However, blurred structures in the reconstructed images remain a concern for analytical interpolation methods. As such, previous research indicates that it is hard to achieve the three goals of low patient imaging dose, good image quality, and fast reconstruction speed all at once.
As deep learning (DL) gained popularity in fields like computer vision and data science, scholars also applied DL techniques in medical image processing. Studies on DL-based CT image reconstruction have yielded encouraging results, but GPU memory limitation made it challenging to apply DL techniques on CBCT reconstruction.
In this dissertation, we hypothesize that the image quality of CBCT reconstructed from under-sampled projections (low-dose) using deep learning techniques can be comparable to that of CBCT reconstructed from fully sampled projections for treatment verification in radiation therapy. This dissertation proposes that by applying DL techniques in pre-processing, reconstruction, and post-processing stages, the challenge of improving CBCT image quality with low imaging dose and fast reconstruction speed can be mitigated.
This dissertation proposed a geometry-guided deep learning (GDL) technique, the first technique to perform end-to-end CBCT reconstruction from sparsely sampled projections, and demonstrated its feasibility on real patient projection data. In this study, we found that incorporating geometry information into the deep learning technique can effectively reduce the model size, mitigating the memory limitation in CBCT reconstruction. The novel GDL technique is composed of a GDL reconstruction module and a post-processing module. The reconstruction module learns and performs the projection-to-image domain transformation by replacing the traditional single fully connected layer with an array of small fully connected layers arranged according to the projection geometry. The deep learning post-processing module further improves image quality after reconstruction.
This dissertation further optimizes the number of beamlets used in the GDL technique through a geometry-guided multi-beamlet deep learning (GMDL) technique. In addition to connecting each pixel in the projection domain to beamlet points along the central beamlet in the image domain, as GDL does, the small fully connected layers in GMDL connect each pixel to beamlets peripheral to the central beamlet based on the CT projection geometry. Due to GPU memory limitations, the proposed technique is demonstrated through low-dose CT image reconstruction and compared with the GDL technique and a large fully connected layer-based reconstruction method.
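The memory argument behind the geometry-guided design can be illustrated with a toy module: each projection pixel gets its own small linear map onto samples along its ray, instead of one dense layer connecting every pixel to every voxel. This is an assumption-level sketch of the idea only; the real mapping follows the scanner geometry and the published network architecture, and all dimensions here are illustrative:

```python
import torch
import torch.nn as nn

class PerRayFC(nn.Module):
    """Array of per-pixel small linear maps: pixel i -> n_points samples
    along ray i. Parameter count grows as n_pixels * n_points, versus
    n_pixels * (n_pixels * n_points) weights for one dense layer mapping
    all pixels to all ray samples."""
    def __init__(self, n_pixels, n_points):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(n_pixels, n_points))
        self.bias = nn.Parameter(torch.zeros(n_pixels, n_points))

    def forward(self, proj):              # proj: (batch, n_pixels)
        # Broadcast each pixel value onto its own ray samples
        return proj.unsqueeze(-1) * self.weight + self.bias
```

Even in this toy form, the per-ray factorization removes the quadratic dependence on the number of projection pixels that makes a single dense projection-to-volume layer infeasible in GPU memory.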
In addition, the dissertation also investigates deep learning-based CBCT projection interpolation and proposes a patient-independent deep learning projection interpolation technique for CBCT reconstruction. Different from previous studies that interpolate phantom or simulated data, the proposed technique is demonstrated to work on real patient projection data with unevenly distributed projection angles. The proposed technique re-slices the stack of interpolated projections axially, and each acquired slice is processed by a deep residual U-Net (DRU) model to augment the slice’s image quality. The resulting slices are reassembled into a stack of densely-sampled projections to be reconstructed into a CBCT volume. A second DRU model further post-processes the reconstructed CBCT volume to improve the image quality.
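As a baseline for the projection-interpolation idea, angular interpolation of a sparsely sampled projection stack can be sketched with SciPy (a naive linear version, not the deep-learning interpolator described above; it does handle unevenly distributed angles):

```python
import numpy as np
from scipy.interpolate import interp1d

def interpolate_projections(sparse_projs, sparse_angles, dense_angles):
    """Per-pixel linear interpolation of a projection stack
    (n_views, H, W) from sparse (possibly uneven) gantry angles onto a
    denser set of angles, prior to reconstruction."""
    f = interp1d(sparse_angles, sparse_projs, axis=0, kind="linear")
    return f(dense_angles)
```

This is exactly the kind of analytical interpolation whose blurring motivates the learned DRU-based refinement: linearly blending neighboring views smears moving or high-frequency structures, which the deep model is trained to restore.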
In summary, a geometry-guided deep learning (GDL) technique was proposed as the first deep learning technique for end-to-end CBCT reconstruction from sparsely sampled real patient projection data. The geometry-guided multi-beamlet deep learning (GMDL) technique further optimizes the number of beamlets based on the GDL technique. A patient-independent deep learning projection interpolation technique was also proposed for the pre-processing and post-processing stage of CBCT reconstruction.
In conclusion, the work presented in this dissertation demonstrates the feasibility of improving CBCT image quality with low imaging dose and fast reconstruction speed. The techniques developed in this dissertation also have great potential for clinical applications to enhance CBCT imaging for radiation therapy.
Item Embargo Deep Learning-based Onboard Image Guidance and Dose Verification for Radiation Therapy (2024) Jiang, Zhuoran
Onboard image guidance and dose verification play important roles in radiation therapy, enabling precise targeting and accurate dose delivery. However, the clinical utility of these advanced techniques is limited by degraded image quality due to under-sampling. Specifically, four-dimensional cone-beam computed tomography (4D-CBCT) is a valuable tool that provides onboard respiration-resolved images for moving targets, but its image quality is degraded by intra-phase sparse sampling under clinical constraints on acquisition time and imaging dose. Radiation-induced acoustic (RA) imaging and prompt gamma (PG) imaging are two promising methods for noninvasively reconstructing 3D dose deposition in real time during treatment, but their images are severely distorted by single-view measurements. Essentially, reconstructing images from under-sampled acquisitions is an ill-conditioned inverse problem. Our previous studies have demonstrated the effectiveness of deep learning in restoring volumetric information from sparse and limited-angle measurements. In this project, we further explore the applications of deep learning (1) in providing high-quality and efficient onboard image guidance before dose delivery for target localization, and (2) in realizing precise quantitative 3D dosimetry during delivery for dose verification in radiotherapy. The first aim is achieved by reconstructing high-quality 4D-CBCT from fast (1-minute) free-breathing scans. We proposed a feature-compensated deformable convolutional network (FeaCo-DCN) to perform inter-phase compensation in the latent feature space, which had not been explored by previous studies. In FeaCo-DCN, encoding networks extract features from each phase; features of the other phases are then deformed to those of the target phase via deformable convolutional networks.
Finally, a decoding network combines and decodes features from all phases to yield high-quality images of the target phase. The proposed FeaCo-DCN was evaluated using lung cancer patient data. Key findings include (1) FeaCo-DCN generated high-quality images with accurate and clear structures for a fast 4D-CBCT scan; (2) 4D-CBCT images reconstructed by FeaCo-DCN achieved 3D tumor localization accuracy within 2.5 mm; (3) image reconstruction is nearly real-time; and (4) FeaCo-DCN achieved superior performance by all metrics compared to the top-ranked techniques in the AAPM SPARE Challenge. In conclusion, the proposed FeaCo-DCN is effective and efficient in reconstructing 4D-CBCT while reducing about 90% of the scanning time, which can be highly valuable for moving target localization in image-guided radiotherapy. The second aim is achieved by reconstructing accurate dose maps from multi-modality radiation-induced signals such as (a) acoustic waves and (b) prompt gammas. For protoacoustic (PA) imaging, we developed a deep learning-based method to address the limited-view issue in the PA reconstruction. A deep cascaded convolutional neural network (DC-CNN) was proposed to reconstruct 3D high-quality radiation-induced pressures using PA signals detected by a matrix array, and then derive precise 3D dosimetry from pressures for dose verification in proton therapy. To validate its performance, we collected 81 prostate cancer patients’ proton therapy treatment plans. The proton-acoustic simulation was performed using the open-source k-wave package. A matrix ultrasound array was simulated near the perineum to acquire radiofrequency (RF) signals during dose delivery. For realistic acoustic simulations, tissue heterogeneity and attenuation were considered, and Gaussian white noise was added to the acquired RF signals. The proposed DC-CNN was trained on 204 samples from 69 patients and tested on 26 samples from 12 other patients. 
Results demonstrated that the proposed method considerably improved the limited-view proton-acoustic image quality, reconstructing pressures with clear and accurate structures and deriving doses with a high agreement with the ground truth. Quantitatively, the pressure accuracy achieved an RMSE of 0.061, and the dose accuracy achieved an RMSE of 0.044, GI (3%/3mm) of 93.71%, and 90%-isodose line Dice of 0.922. The proposed method demonstrates the feasibility of achieving high-quality quantitative 3D dosimetry in proton-acoustic imaging using a matrix array, which potentially enables the online 3D dose verification for prostate proton therapy. Besides the limited-angle acquisition challenge in acoustic imaging, we also developed a general deep inception convolutional neural network (GDI-CNN) to address the low SNR challenge in the few-frame-averaged acoustic signals. The network employs convolutions with multiple dilations in each inception block, allowing it to encode and decode signal features with varying temporal characteristics. This design generalizes GDI-CNN to denoise acoustic signals resulting from different radiation sources. The performance of the proposed method was evaluated using experimental data of X-ray-induced acoustic and protoacoustic signals both qualitatively and quantitatively. Results demonstrated the effectiveness of GDI-CNN: it achieved X-ray-induced acoustic image quality comparable to 750-frame-averaged results using only 10-frame-averaged measurements, reducing the imaging dose of X-ray-acoustic computed tomography (XACT) by 98.7%; it realized proton range accuracy parallel to 1500-frame-averaged results using only 20-frame-averaged measurements, improving the range verification frequency in proton therapy from 0.5Hz to 37.5Hz. 
Compared to lowpass filter-based denoising, the proposed method demonstrated considerably lower mean-squared-errors, higher peak-SNR, and higher structural similarities with respect to the corresponding high-frame-averaged measurements. The proposed deep learning-based denoising framework is a generalized method for few-frame-averaged acoustic signal denoising, which significantly improves the RA imaging’s clinical utilities for low-dose imaging and real-time therapy monitoring. For prompt gamma imaging, we proposed a two-tier deep learning-based method with a novel weighted axis-projection loss to generate precise 3D PG images to achieve accurate proton range verification. The proposed method consists of two models: first, a localization model is trained to define a region-of-interest (ROI) in the distorted back-projected PG image that contains the proton pencil beam; second, an enhancement model is trained to restore the true PG emissions with additional attention on the ROI. In this study, we simulated 54 proton pencil beams delivered at clinical dose rates in a tissue-equivalent phantom using Monte-Carlo (MC). PG detection with a CC was simulated using the MC-Plus-Detector-Effects model. Images were reconstructed using the kernel-weighted-back-projection algorithm, and were then enhanced by the proposed method. The method effectively restored the 3D shape of the PG images with the proton pencil beam range clearly visible in all testing cases. Range errors were within 2 pixels (4 mm) in all directions in most cases at a higher dose level. The proposed method is fully automatic, and the enhancement takes only ~0.26 seconds. This preliminary study demonstrated the feasibility of the proposed method to generate accurate 3D PG images using a deep learning framework, providing a powerful tool for high-precision in vivo range verification of proton therapy. 
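The multi-scale temporal encoding that GDI-CNN relies on can be illustrated with a single inception-style block of parallel dilated 1-D convolutions (a sketch of the idea only, not the published architecture; the channel counts and dilation set are arbitrary):

```python
import torch
import torch.nn as nn

class DilatedInception1d(nn.Module):
    """Parallel 1-D convolutions with different dilations see the acoustic
    signal at several temporal scales; outputs are concatenated so later
    layers can mix features with varying temporal characteristics."""
    def __init__(self, ch_in, ch_out, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(ch_in, ch_out, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )

    def forward(self, x):                 # x: (batch, ch_in, time)
        return torch.cat([branch(x) for branch in self.branches], dim=1)
```

Setting the padding equal to the dilation keeps the temporal length unchanged for kernel size 3, so blocks of this form can be stacked into an encoder-decoder denoiser.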
These applications can significantly reduce the uncertainties in patient positioning and dose delivery in radiotherapy, which improves treatment precision and outcomes.
Item Open Access Development of Deep Learning Models for Deformable Image Registration (DIR) in the Head and Neck Region (2020) Amini, Ala
Deformable image registration (DIR) is the process of registering two or more images to a reference image by minimizing local differences across the entire image. DIR is conventionally performed using iterative optimization-based methods, which are time-consuming and require manual parameter tuning. Recent studies have shown that deep learning methods, most importantly convolutional neural networks (CNNs), can be employed to address the DIR problem. In this study, we propose two deep learning frameworks that perform CT-to-CT deformable registration of the head and neck region in an unsupervised manner. Given that head and neck cancer patients may undergo severe weight loss over the course of radiation therapy, DIR in this region is an important task. The first proposed framework contains two scales, both based on freeform deformation and trained by minimizing an intensity-based dissimilarity metric while encouraging smoothness of the deformation vector field (DVF). The two scales were first trained separately in a sequential manner, and then combined in a two-scale joint training framework for further optimization. We then developed a transfer learning technique to improve DIR accuracy by fine-tuning a pre-trained group-based model into a patient-specific model optimized for each individual patient. We showed that with as few as two prior CT scans of a patient, the performance of the pre-trained model can be improved, yielding more accurate DIR results for that patient. The second proposed framework, which also consists of two scales, is a hybrid DIR method combining B-spline deformation modeling and deep learning.
In the first scale, the deformations of the control points are learned by deep learning, and an initial DVF is estimated using B-spline interpolation to ensure smoothness of the initial estimate. The second-scale model is the same as in the first framework. The networks were trained and evaluated on the public TCIA HNSCC-3DCT dataset for the head and neck region. We showed that the DIR results of our proposed networks are comparable to those of conventional DIR methods while being several orders of magnitude faster (about 2 to 3 seconds), making them highly applicable in clinical settings.
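The unsupervised training objective described for these frameworks, an intensity dissimilarity term plus a DVF-smoothness penalty, is commonly realized as a finite-difference gradient penalty. A minimal 2-D version (the exact metric and weighting in the thesis are not specified here; MSE and the lambda value are illustrative) looks like:

```python
import torch

def dvf_smoothness(dvf):
    """Squared finite-difference penalty on a displacement field
    dvf: (batch, 2, H, W). Zero for a spatially constant field."""
    dy = dvf[:, :, 1:, :] - dvf[:, :, :-1, :]
    dx = dvf[:, :, :, 1:] - dvf[:, :, :, :-1]
    return (dy ** 2).mean() + (dx ** 2).mean()

def unsupervised_dir_loss(warped, fixed, dvf, lam=0.01):
    # Intensity dissimilarity + weighted smoothness; no ground-truth DVF
    # is needed, which is what makes the training unsupervised.
    return torch.mean((warped - fixed) ** 2) + lam * dvf_smoothness(dvf)
```

The smoothness weight trades registration accuracy against physically plausible, fold-free deformations, and is typically tuned per anatomical site.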
Item Open Access LOW DOSE CT ENHANCEMENT USING DEEP LEARNING METHOD (2021) Pan, Boyang
Purpose:
Deep learning has been widely applied in traditional medical imaging tasks such as segmentation and registration. Fundamental CNN-based deep learning methods have shown great potential in low-dose CT (LDCT) enhancement. This study applied a U-net++ model to enhance low-dose CT images and compared the performance of U-net++ and U-net quantitatively and qualitatively.
Method:
Thirty patient CT image sets were used as the ground truth in the training process. Under-sampled projections were simulated from the ground-truth volumes with a uniform distribution of projection angles. LDCT was then reconstructed from the under-sampled projections using the ASD-POCS TV algorithm with 40 iterations and treated as the model input. The U-net++ model improves on U-net by connecting the decoders, preserving denser features along the skip connections. Deep supervision (DS) was used to form a combined loss between each upper node and the ground truth, enhancing the model's capacity to preserve image features. U-net was used as the baseline model for comparison. L1 loss and structural similarity (SSIM) loss were used in different experiments. The generated images were compared quantitatively using SSIM and peak signal-to-noise ratio (PSNR).
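The deep-supervision combined loss can be sketched as a weighted sum of per-node losses (equal weights and an L1 node loss are assumptions for illustration; the abstract does not state the exact weighting):

```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(node_outputs, target, weights=None):
    """Combined loss over every supervised U-net++ decoder node.
    `node_outputs` is a list of same-shape predictions, one per upper node;
    each is compared against the ground truth and the losses are summed."""
    if weights is None:
        weights = [1.0 / len(node_outputs)] * len(node_outputs)
    return sum(w * F.l1_loss(out, target)
               for w, out in zip(weights, node_outputs))
```

Supervising every decoder node pushes even the shallow paths of the network toward the ground truth, which is the feature-preserving effect described above.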
Results:
Both models succeeded in improving the quality of the low-dose CT images. The U-net++ model trained with MSE loss had the best average PSNR of 17.8 on the test dataset and an average SSIM of 0.779 over the whole images, compared with an SSIM of 0.532 and a PSNR of 16.7 for the original under-sampled LDCT. The U-net model trained with L1 loss had the best average SSIM of 0.756 and an average PSNR of 17.5.
Conclusion:
Deep learning methods show potential for addressing the high dose delivered by modern CT techniques. The choice of CNN model can influence the quality of the generated images under different evaluation criteria.
Item Open Access Markerless Four-Dimensional-Cone Beam Computed Tomography Projection-Phase Sorting Using Prior Knowledge and Patient Motion Modeling: A Feasibility Study.(Cancer translational medicine, 2017-01) Zhang, Lei; Zhang, Yawei; Zhang, You; Harris, Wendy B; Yin, Fang-Fang; Cai, Jing; Ren, LeiDuring cancer radiotherapy treatment, on-board four-dimensional-cone beam computed tomography (4D-CBCT) provides important patient 4D volumetric information for tumor target verification. Reconstruction of 4D-CBCT images requires sorting of acquired projections into different respiratory phases. Traditional phase sorting methods are either based on external surrogates, which might miscorrelate with internal structures, or on 2D internal structures, which require specific organ presence or slow gantry rotations. The aim of this study is to investigate the feasibility of a 3D motion modeling-based method for markerless 4D-CBCT projection-phase sorting. Patient 4D-CT images acquired during simulation are used as prior images. Principal component analysis (PCA) is used to extract three major respiratory deformation patterns. The on-board patient image volume is considered as a deformation of the prior CT at the end-expiration phase. Coefficients of the principal deformation patterns are solved for each on-board projection by matching it with the digitally reconstructed radiograph (DRR) of the deformed prior CT. The primary PCA coefficients are used for the projection-phase sorting. PCA coefficients solved in nine digital phantoms (XCATs) showed the same pattern as the breathing motions in both the anteroposterior and superoinferior directions. The mean phase sorting differences were below 2% and percentages of phase difference < 10% were 100% for all the nine XCAT phantoms. Five lung cancer patient results showed mean phase difference ranging from 1.62% to 2.23%.
The percentage of projections within 10% phase difference ranged from 98.4% to 100% and those within 5% phase difference ranged from 88.9% to 99.8%. The study demonstrated the feasibility of using PCA coefficients for 4D-CBCT projection-phase sorting. High sorting accuracy in both digital phantoms and patient cases was achieved. This method provides an accurate and robust tool for automatic 4D-CBCT projection sorting using 3D motion modeling without the need for external surrogates or internal markers.
Item Open Access MR Susceptibility Mapping: Improved Quantification and Applications in Developmental Brain Imaging(2021) Zhang, LijiaThe white matter fibers of the human brain are primarily composed of myelinated axons, which connect different brain regions, transmit neural signals, and form efficient communication pathways that shape the neural systems responsible for higher-order functioning. The fatty myelin sheath protects and insulates the axons, facilitating electrical conduction along them, and is crucial to the transmission of nerve impulses. Human cognition, sensation and motor functions all rely on the efficient transmission of neural signals, and compromised myelin integrity may lead to severe neurological and physical disorders. Myelin abnormality can be a hallmark of numerous neurological disorders such as cerebral palsy, multiple sclerosis, and autism. Abnormal myelination can result from direct damage to the myelin sheath, from indirect causes such as neuro-inflammation affecting the oligodendrocytes that generate the myelin sheath, or from genetic disorders. To explore the pathology of these neurological disorders and potential therapeutic options, studies have been directed towards remyelination or repair of the myelin in the central nervous system (CNS). Previously, myelin in the CNS could only be reliably quantified by in vitro methods such as myelin staining and measuring myelin basic protein.
Magnetic Resonance Imaging (MRI), with its excellent soft tissue contrast and non-invasive nature, has revolutionized the ways to investigate white matter properties. Several methods have been developed to assess white matter, such as diffusion tensor imaging (DTI), which quantifies water diffusion in white matter and thus the connectivity of the brain. However, DTI-derived measurements, while sensitive to white matter microstructural changes, are difficult to interpret because multiple factors can alter water diffusion, including the axonal membrane, neural tubules, crossing fibers, and myelin. Either axonal or myelin alterations could impact the conductivity of the fibers and in turn affect the diffusion measures. Therefore, DTI lacks the specificity to single out the origins of the connectivity changes underlying neurodegenerative diseases or brain development. Prior studies using quantitative susceptibility mapping (QSM) have shown its unique sensitivity to myelin. However, because the cylindrical myelin sheaths wrap around axons, the magnetic susceptibility of white matter measured by QSM has been found to depend on the angular orientation of the white matter fibers. Susceptibility Tensor Imaging (STI) has been developed to address this orientation dependence of susceptibility values in white matter, but it requires images acquired from at least six non-collinear orientations to solve the susceptibility tensor and is therefore not practical in clinical settings. The goal of this dissertation work is thus to develop a clinically practical MR susceptibility mapping method to quantitatively assess the magnetic susceptibility anisotropy (MSA) of white matter, which will greatly help us understand the role of myelination in the treatment of neurological diseases and in normal brain development.
The work presented here includes the development of the methodology and two in vivo studies to prove its efficacy: (1) The magnetic susceptibility anisotropy in white matter was observed and measured by modeling the apparent tissue susceptibility as a function of the white matter fiber angle with respect to the applied magnetic field. A clinically practical solution for estimating the MSA of white matter fibers from QSM images acquired at a single orientation is proposed, using prior information obtained through DTI, namely DTI-guided QSM. (2) The DTI-guided QSM methodology was used to investigate the potential mechanism behind the motor function improvement of cerebral palsy (CP) patients who underwent autologous stem cell therapy. Results showed that this motor function improvement was correlated with a connectivity increase in the motor network, and was further traced to a focal increase of the magnetic susceptibility at the periventricular corticospinal tract (CST), which may indicate an increase in local myelin content after treatment. (3) The methodology was then applied to profile the myelin maturation pattern of the white matter fiber bundles in pediatric subjects. Results revealed a spatio-temporal myelination pattern of the corpus callosal fibers, which follows a posterior-to-anterior myelination trajectory with the peak developmental rate occurring at around 2-3 years of age. This result is consistent with previous studies using histological and relaxometry-based methods, with better specificity to myelin and improved consistency across subjects.
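For an axially symmetric susceptibility tensor, the apparent susceptibility of a fiber at angle α to the main field reduces to χ_app(α) = χ⊥ + (χ∥ − χ⊥)·cos²α, so the MSA (χ∥ − χ⊥) is the slope of a linear fit of χ_app against cos²α, with the fiber angles supplied here by DTI. A toy least-squares sketch (function name and numerical values are illustrative, not from the dissertation):

```python
import math

def fit_msa(angles_deg, chi_app):
    """Estimate (chi_perp, MSA) by least-squares fit of chi_app vs cos^2(alpha)."""
    x = [math.cos(math.radians(a)) ** 2 for a in angles_deg]
    n = len(x)
    mx = sum(x) / n
    my = sum(chi_app) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, chi_app))
    msa = sxy / sxx            # slope = chi_parallel - chi_perp
    chi_perp = my - msa * mx   # intercept
    return chi_perp, msa
```

Noiseless synthetic data generated from the same angle model are recovered exactly, which is the basic sanity check for such a fit.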
In conclusion, the proposed DTI-guided QSM has shown its ability to accurately quantify the magnetic susceptibility anisotropy of major fiber tracts with high spatial accuracy and minimal angle dependence, and has demonstrated its potential in delineating the underlying neural mechanisms in neurodevelopmental disorders such as CP, as well as in profiling the myelination pattern during normal brain development. It is anticipated that this quantitative approach may find broader applications in characterizing white matter properties in both healthy and diseased brains across the life span.
Item Open Access On-board Image Augmentation Using Prior Image and Deep Learning for Image-guided Radiation Therapy(2019) Chen, YingxuanCone-beam Computed Tomography (CBCT) has been widely used in image-guided radiation therapy for target localization. 3D CBCT has been developed for localizing static targets, while 4D CBCT has been developed for localizing moving targets. Although CBCT has been used as the gold standard in current clinical practice, it has several major limitations: (1). High imaging dose to the normal tissue from repeated 3D/4D CBCT scans. Low dose CBCT reconstruction is challenging because of streak artifacts caused by limited projections and increased noise from low exposure. Previous methods such as compressed sensing can successfully remove streak artifacts and reduce noise. However, the reconstructed images are blurred, especially at edge regions, due to the uniform image gradient penalty, which affects the accuracy of patient positioning for target localization. (2). Poor soft tissue contrast due to the inherent nature of x-ray based imaging as well as several CBCT artifacts such as scatter and beam hardening. As a result, the accuracy of using CBCT to localize tumors in the abdominal region is extremely limited. To address these limitations, we propose to use prior images and deep learning techniques to enhance the edge information and soft tissue contrast of 3D/4D CBCT images. The specific aims include: 1) Establish a novel prior contour based TV (PCTV) method to enhance the edge information in compressed sensing reconstruction for CBCT. 2) Establish hybrid PCTV and deep learning methods to achieve accurate and fast low dose CBCT reconstruction. 3) Establish deep learning methods to generate virtual on-board multi-modality images to enhance soft tissue contrast in CBCT.
The results from this research will be highly relevant to the clinical application of on-board imaging for head-and-neck, lung and liver patient treatment to improve target localization while reducing the radiation dose of 3D/4D CBCT scans.
To address the first limitation of current clinical CBCT, a novel prior contour based TV (PCTV) method was developed in this dissertation to enhance the edge information in compressed sensing reconstruction. Specifically, the edge information is extracted from the prior planning-CT via edge detection. The prior CT is first registered with the on-board CBCT reconstructed with the TV method through rigid or deformable registration. The edge contour in the prior-CT is then mapped to the CBCT and used as the weight map for TV regularization to enhance edge information in CBCT reconstruction. The proposed PCTV method was evaluated using the extended-cardiac-torso (XCAT) phantom, a physical CatPhan phantom and brain patient data. Results were compared with both the TV and edge-preserving TV (EPTV) methods, which are commonly used for limited-projection CBCT reconstruction. In the quantitative evaluation, relative error was used to measure pixel-value differences, and edge cross-correlation was defined as the similarity of edge information between the reconstructed images and the ground truth. Compared to TV and EPTV, PCTV enhanced the edge information of bone, lung vessels and tumor in the XCAT reconstruction and of complex bony structures in the brain patient CBCT. In the XCAT study using 45 half-fan CBCT projections, compared with ground truth, relative errors were 1.5%, 0.7% and 0.3% and edge cross-correlations were 0.66, 0.72 and 0.78 for TV, EPTV and PCTV, respectively. PCTV was also more robust to reductions in projection number. Although edge enhancement was reduced slightly with noisy projections, PCTV was still superior to the other methods. PCTV maintained resolution while reducing noise in the low-mAs CatPhan reconstruction. Low contrast edges were preserved better with PCTV compared with TV and EPTV.
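The core of PCTV is a spatially weighted TV penalty: voxels lying on the prior-derived edge map receive a reduced TV weight, so the regularizer suppresses streaks and noise without flattening known edges. A minimal 1D denoising sketch of that idea (identity forward operator instead of the CBCT projector; parameters and names are illustrative):

```python
def weighted_tv_denoise(b, weights, lam=0.3, step=0.02, iters=800, eps=1e-3):
    """Gradient descent on 0.5*||x - b||^2 + lam * sum_i w_i * |x[i+1] - x[i]|.

    weights[i] applies to the difference between voxels i and i+1;
    setting it near 0 at a known (prior-contour) edge preserves the jump.
    The absolute value is smoothed as sqrt(d^2 + eps) to keep it
    differentiable.
    """
    x = list(b)
    for _ in range(iters):
        grad = [xi - bi for xi, bi in zip(x, b)]  # data-fidelity term
        for i, w in enumerate(weights):
            d = x[i + 1] - x[i]
            g = lam * w * d / (d * d + eps) ** 0.5  # smoothed |.| derivative
            grad[i] -= g
            grad[i + 1] += g
        x = [xi - step * gi for xi, gi in zip(x, grad)]
    return x
```

With a uniform weight this is plain TV denoising; zeroing the weight at the prior edge location keeps the step intact while the flat regions are still smoothed.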
The first technique developed in this dissertation demonstrates that PCTV preserved edge information as well as reduced streak artifacts and noise in low dose CBCT reconstruction. PCTV is superior to TV and EPTV methods in edge enhancement, which can potentially improve the localization accuracy in radiation therapy. However, the accuracy of edge enhancement in PCTV is affected by the registration errors and anatomical changes from prior to on-board images, especially when deformation exists. The next section of the dissertation describes the development of the hybrid-PCTV to further improve the accuracy and robustness of PCTV. Similar to PCTV method, planning-CT is used as prior images and deformably registered with on-board CBCT reconstructed by the edge preserving TV (EPTV) method. Edges derived from planning CT are deformed based on the registered deformation vector fields to generate on-board edges for edge enhancement in PCTV reconstruction. Reference CBCT is reconstructed from the simulated projections of the deformed planning-CT. Image similarity map is then calculated between the reference and on-board CBCT using structural similarity index (SSIM) method to estimate local registration accuracy. The hybrid-PCTV method enhances the edge information based on a weighted edge map that combines edges from both PCTV and EPTV methods. Higher weighting is given to PCTV edges at regions with high registration accuracy and to EPTV edges at regions with low registration accuracy. The hybrid-PCTV method was evaluated using both digital extended-cardiac-torso (XCAT) phantom and lung patient data. In XCAT study, breathing amplitude change, tumor shrinkage and new tumor were simulated from CT to CBCT. In the patient study, both simulated and real projections of lung patients were used for reconstruction. Results were compared with both EPTV and PCTV methods. 
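The hybrid weighting described above is, per voxel, a convex combination of the two edge maps driven by the local registration-accuracy estimate; a minimal sketch (flat lists stand in for image volumes, and the SSIM map is assumed already computed and scaled to [0, 1]):

```python
def hybrid_edge_map(edge_pctv, edge_eptv, ssim_map):
    """Blend prior-based (PCTV) and image-based (EPTV) edge maps.

    Where local registration accuracy (approximated by the SSIM map)
    is high, trust the deformed prior edges; where it is low, fall
    back to edges derived from the on-board reconstruction itself.
    """
    return [s * ep + (1.0 - s) * ee
            for ep, ee, s in zip(edge_pctv, edge_eptv, ssim_map)]
```

At the extremes, an SSIM of 1 reproduces the PCTV edge and an SSIM of 0 reproduces the EPTV edge, matching the intended behavior.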
EPTV blurred bony structures due to missing edge information, and PCTV blurred tumor edges due to inaccurate edge information caused by errors in the deformable registration. In contrast, hybrid-PCTV enhanced the edges of both bone and tumor. In the XCAT study using 30 half-fan CBCT projections, compared with ground truth, relative errors were 1.3%, 1.1%, and 0.9% and edge cross-correlations were 0.66, 0.68 and 0.71 for EPTV, PCTV and hybrid-PCTV, respectively. Moreover, in the lung patient data, hybrid-PCTV avoided the wrong edge enhancement of the PCTV method while maintaining enhancement of the correct edges. Overall, hybrid-PCTV further improved the robustness and accuracy of PCTV by accounting for uncertainties in deformable registration and anatomical changes between prior and on-board images. The accurate edge enhancement of hybrid-PCTV will be valuable for target localization in radiation therapy.
In the next section, a technique for predicting daily on-board edge deformation using deep convolutional neural networks (CNN) is described to bypass deformable registration and improve the PCTV reconstruction efficiency. Edge deformation was predicted using both a supervised and an unsupervised CNN model. In the supervised model, deformation vector fields (DVFs) registered from CT to fully sampled CBCT, along with retrospectively under-sampled low-dose CBCT, were obtained on the first treatment day to train the model, which was then updated with the following days' data. In contrast, no ground-truth DVF was needed for the unsupervised model, and image pairs of planning CT and CBCT were used as inputs to fine-tune the model. The model predicts the DVF for the low-dose CBCT acquired on the following day to generate on-board contours for PCTV reconstruction. This method was evaluated using lung SBRT patient data. In the intra-patient evaluation study, the first n−1 days' CBCTs were used for CNN training to predict the nth day's edge information (n=2, 3, 4, 5). In addition, 10 lung SBRT patient datasets were obtained for the inter-patient study. The unsupervised model was trained on 9 of the 10 patients' data with transfer learning and used to predict the whole DVF for the remaining patient. 45 half-fan projections covering 360˚ from the nth day's CBCT were used for reconstruction, and results from edge-preserving TV (EPTV), PCTV and PCTV-CNN were compared. The cross-correlations between predicted and reference edge maps were about 0.74 in the intra-patient study using the supervised CNN model. When using the unsupervised CNN model, the cross-correlations of the predicted edge maps were about 0.88 for both the intra-patient and inter-patient studies. PCTV-CNN enhanced bone edges in CBCT compared to EPTV and achieved image quality comparable to PCTV while avoiding the user-dependent and time-consuming deformable registration process.
These results demonstrated the feasibility of using a CNN to predict the daily deformation of on-board edge information for PCTV-based low-dose CBCT reconstruction. Thus, PCTV-CNN has great potential for enhancing edge sharpness with high efficiency in low dose 3D or 4D CBCT to improve the precision of on-board target localization and adaptive radiotherapy.
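The unsupervised model trains without ground-truth DVFs: the loss compares the prior image warped by the predicted DVF against the on-board image, plus a regularizer on the DVF itself. A 1D pure-Python sketch of such a loss (the network producing the DVF is omitted; the exact similarity and smoothness terms used in the dissertation may differ):

```python
def warp1d(img, dvf):
    """Linearly interpolate img at positions i + dvf[i] (clamped at borders)."""
    n = len(img)
    out = []
    for i, d in enumerate(dvf):
        p = min(max(i + d, 0.0), n - 1.0)
        lo = int(p)
        hi = min(lo + 1, n - 1)
        frac = p - lo
        out.append((1 - frac) * img[lo] + frac * img[hi])
    return out

def unsupervised_loss(moving, fixed, dvf, lam=0.01):
    """Image-similarity (MSE) term plus a DVF smoothness regularizer."""
    warped = warp1d(moving, dvf)
    sim = sum((w - f) ** 2 for w, f in zip(warped, fixed)) / len(fixed)
    smooth = sum((dvf[i + 1] - dvf[i]) ** 2 for i in range(len(dvf) - 1))
    return sim + lam * smooth
```

A DVF that correctly maps the moving image onto the fixed image drives the similarity term to zero, which is exactly the signal the network is trained on.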
In the last part of this dissertation, prior images, such as the high-quality planning CT, were deformed to generate on-board virtual CT/CBCT images to improve the soft tissue contrast of CBCT. The whole deformation vector field (DVF) was predicted using the unsupervised CNN model fine-tuned on liver SBRT patient data. The on-board virtual CT in the liver region is obtained by deforming the prior planning CT using a finite element model (FEM) based on the deformation of the liver surfaces from planning CT to CBCT. The deformed CT is embedded in the liver region to improve soft tissue contrast for tumor localization in the liver, while on-board CBCT is used for the region outside the liver to verify the positions of healthy tissues close to the tumor. In the current study, we mainly investigated the feasibility of using deep learning to generate accurate liver contours in CBCT. The method was evaluated using 15 SBRT liver patients' data, including planning CT and daily CBCT. Image sets of 14 patients were used to train the model, while the remaining one was used for testing. Deformed CTs generated with DVFs from Velocity and with the predicted DVF were compared using image similarity metrics including mutual information (MI), cross correlation (CC) and the structural similarity index measure (SSIM). The MI, CC and SSIM between the predicted deformed CT and the first-day on-board CBCT were 1.28, 0.98 and 0.91, respectively, for a new patient. All similarity evaluation results demonstrated that the unsupervised CNN model can predict DVFs that deform the CT equivalently to Velocity. Therefore, it is feasible to apply a deep CNN deformation model for fast on-board virtual image generation to improve the precision of treatment for low-contrast soft tissue tumors.
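Of the similarity metrics used above, mutual information is the least obvious to compute; a histogram-based sketch (bin count and names illustrative, and real implementations typically add interpolation and normalization options):

```python
import math
from collections import Counter

def mutual_information(x, y, bins=8):
    """MI (in nats) between two equally sized images, via a joint histogram."""
    lo_x, hi_x = min(x), max(x)
    lo_y, hi_y = min(y), max(y)
    bx = [min(int((v - lo_x) / (hi_x - lo_x + 1e-12) * bins), bins - 1) for v in x]
    by = [min(int((v - lo_y) / (hi_y - lo_y + 1e-12) * bins), bins - 1) for v in y]
    n = len(x)
    pxy = Counter(zip(bx, by))  # joint intensity histogram
    px = Counter(bx)
    py = Counter(by)
    mi = 0.0
    for (i, j), c in pxy.items():
        p = c / n
        mi += p * math.log(p * n * n / (px[i] * py[j]))
    return mi
```

A useful sanity check is that the MI of an image with itself equals the entropy of its intensity histogram, the upper bound of the metric.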
In conclusion, the works presented in this dissertation aim to use prior images and deep learning to improve the image quality of on-board low dose 3D/4D CBCT by enhancing edge sharpness and soft tissue contrast. The goals of this dissertation research are to: 1) establish a novel prior contour based TV (PCTV) method to enhance the edge information in compressed sensing reconstruction for CBCT; 2) establish hybrid PCTV to improve the accuracy and robustness of PCTV when deformable registration is needed; 3) implement deep learning methods to bypass deformable registration to automate and accelerate low dose CBCT reconstruction; and 4) establish deep learning methods to generate virtual on-board multi-modality images to enhance soft tissue contrast in CBCT. Results demonstrated that 1) edge sharpness can be improved for low dose 3D/4D CBCT using the prior contour based TV method; 2) virtual images generated by fusing CBCT and deformed CT can improve the soft tissue contrast for liver patients; and 3) deep learning can be applied to improve the efficiency and automation of deformable image registration. Image augmentation, including high- and low-contrast improvement, using these techniques can improve the precision of dose delivery for image-guided radiation therapy, which may pave the way for clinical application to improve patient care.
Item Open Access Optimization and Clinical Evaluation of a Prior Knowledge-based 4D Cone Beam CT Estimation Technique for Lung Radiotherapy(2018) Liu, XiaoningPurpose: 4D cone-beam CT (CBCT) provides 4D localization and monitoring of moving targets for inter/intra-fraction target verification in lung radiotherapy. CBCT reconstruction with the Feldkamp-Davis-Kress (FDK) algorithm requires retrospectively sorted full-angle (360° for a half-fan scan / 180° plus the fan angle for a full-fan scan) cone-beam projections, leading to long acquisition time, high imaging dose and limited mechanical clearance. A prior knowledge-based 4D-CBCT estimation technique was developed to enable fast, low-dose target verification by estimating 4D-CBCT images using limited-angle on-board kV or MV projections and information from planning CT images. The purposes of this thesis are to (1) optimize the image acquisition parameters of this technique in reconstructing 4D-CBCT images using limited-angle kV projections; and (2) evaluate the clinical efficacy of this technique through patient studies.
Methods: A digital anthropomorphic phantom (XCAT) and real patient 4D-CT images were used to optimize and evaluate the prior knowledge-based 4D-CBCT estimation technique. To optimize the image acquisition schemes, phantom studies were conducted to simulate eight different treatment scenarios. The acquisition schemes were optimized by minimizing the scanning angle/time required for accurate image estimation. With the minimum scanning angle/time determined, the effects of scanning direction and imaging frame rate on estimation accuracy were also tested. To clinically evaluate this technique through patient studies, we employed patient data with multiple 4D-CT scans. For each patient, one 4D-CT scan was used as the planning CT images and another as the on-board ground-truth 4D images. Digitally reconstructed radiographs (DRRs) were generated from the second 4D-CT scan to simulate on-board 4D-CBCT projections over a limited angle. Each phase of the 4D-CBCT was generated by deforming the prior CT volume based on deformation field maps solved by motion modeling and free-form deformation in the data fidelity constraint. Patients with tumors at different locations were selected for evaluation. The estimated images (EIs) were quantitatively evaluated against the ground-truth images by calculating the Dice coefficient and center-of-mass shift (COMS) of the tumor volume. The minimal total scan angle/time was also determined for all patients.
Results: The phantom studies showed that accurate 4D-CBCT estimation requires 200 projections acquired over a 97.8° scan angle within a total scanning time of 20 seconds (with a gantry rotation speed of 6°/s, a respiratory period of 4 s and a frame rate of 10 frames/s). We found the technique was robust against different scanning directions, and the imaging frame rate was positively related to estimation accuracy for the same angular coverage. The scanning angle and time of the technique could be further reduced by increasing the projection number without changing the projection angle coverage. Results of the patient studies showed that the technique was able to accurately estimate patient 4D-CBCT using as few as 320 projections for 10 phases acquired in 32 seconds over a scan angle of 169.8° (with a gantry rotation speed of 6°/s and a frame rate of 10 frames/s for a breathing period of xxx). The estimation accuracy was affected by target location and the contrast between target and background.
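As a consistency check on acquisition parameters like these: under the idealized assumption of continuous gantry rotation at constant speed, scan time and swept angle follow directly from the projection budget, frame rate, and gantry speed (the optimized schemes above use smaller effective scan angles than this simple relation gives, so treat it as bookkeeping only; the function name is illustrative):

```python
def acquisition(n_projections, frame_rate_fps, gantry_speed_deg_s):
    """Scan time and swept gantry angle for a given projection budget,
    assuming projections are acquired continuously at the frame rate
    while the gantry rotates at constant speed."""
    scan_time_s = n_projections / frame_rate_fps
    swept_angle_deg = gantry_speed_deg_s * scan_time_s
    return scan_time_s, swept_angle_deg
```

For example, 320 projections at 10 frames/s take 32 s, during which a 6°/s gantry sweeps 192° if it never pauses.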
Conclusion: This technique estimates patient on-board 4D-CBCT with higher efficiency, reduced imaging dose and greater mechanical clearance compared with conventional reconstruction techniques. Clinical implementation of this technique can provide an efficient tool for fast, low-dose inter- and intra-fractional 4D localization to minimize treatment errors in lung radiotherapy, paving the way for further margin reduction and dose escalation.
Item Open Access Optimization of a limited-angle intrafraction verification (LIVE) system for target localization in radiation therapy of lung cancer(2017) Deng, XinchenPurpose:
A novel limited-angle intrafraction verification (LIVE) system has been developed recently for fast 4D intrafraction target verification in lung SBRT treatments. LIVE acquires orthogonal limited angle kV and MV cone beam projections simultaneously during an arc treatment delivery or in-between static beams. A prior knowledge based image reconstruction technique reconstructs 4D images by deforming prior images based on a deformation model. The main goal of the project is to optimize the acquisition parameters of the LIVE system for different patient and treatment scenarios so that LIVE can be implemented effectively for clinical usage in the future.
Methods:
The LIVE system was optimized with XCAT simulation and the preliminary evaluation was divided into XCAT simulation and CIRS phantom study. The evaluation metrics used were volume percentage difference (VPD) and center of mass shift (COMS). The acquisition parameters of LIVE, including scan angle, projection number, scan speed and scan direction, were optimized for different patient scenarios using XCAT simulation to improve the accuracy and efficiency of the system in localizing the lung tumor. The optimized LIVE system was further evaluated using the CIRS motion phantom. The corresponding imaging dose of the LIVE system was also evaluated.
Results:
Acquisition parameters, including kV/MV projection numbers and number of respiratory cycles, were optimized for different patient and treatment scenarios. A clinical reference table was developed as a guideline for the optimal parameters of the LIVE system for its future clinical implementation. The robustness of the LIVE system against tumor size and tumor location changes was also validated.
Conclusion:
The LIVE system has been preliminarily optimized based on simulation and phantom studies. Results demonstrated its potential for intrafraction verification under different clinical scenarios. Future patient studies are warranted to further evaluate the system for its clinical applications.
Item Open Access Optimization of Image Guided Radiation Therapy for Lung Cancer Using Limited-angle Projections(2015) Zhang, YouThe development of highly conformal and precise radiation therapy techniques heightens the need for more accurate treatment target localization and tracking. On-board imaging techniques, especially x-ray based techniques, have become widely popular for on-board target localization and tracking. With the objective of improving the accuracy of on-board imaging for lung cancer patients, this dissertation focuses on investigating the use of limited-angle on-board x-ray projections for image guidance. Limited-angle acquisition reduces scan time and imaging dose and improves the mechanical clearance of imaging.
First of all, the dissertation developed a phase-matched digital tomosynthesis (DTS) technique using limited-angle (≤30°) projections for lung tumor localization. This technique acquires the same traditional motion-blurred on-board DTS image as the 3D-DTS technique, but uses the planning 4D computed tomography (CT) to synthesize a phase-matched reference DTS to register with the on-board DTS for tumor localization. Of the 324 different scenarios simulated using the extended cardiac torso (XCAT) digital phantom, the phase-matched DTS technique localized the 3D target position with a localization error of 1.07 mm (± 0.57 mm) (average ± standard deviation (S.D.)). Similarly, for the 60 scenarios evaluated using the computerized imaging reference system (CIRS) 008A physical phantom, the phase-matched DTS technique localized the 3D target position with an average localization error of 1.24 mm (± 0.87 mm). In addition to the phantom studies, preliminary clinical cases were also studied using imaging data from three lung cancer patients. Using the localization results of 4D cone beam computed tomography (CBCT) as the `gold-standard', the phase-matched DTS technique localized the tumor with an average localization error of 1.5 mm (± 0.5 mm).
The phantom and patient study results show that the phase-matched DTS technique substantially improved the accuracy of moving lung target localization compared to the 3D-DTS technique. The phase-matched DTS technique can provide accurate lung target localization like 4D-DTS, but with much reduced imaging dose and scan time. The phase-matched DTS technique was also found to be more robust, being minimally affected by variations in respiratory cycle length, the fraction of the respiratory cycle contained within the DTS scan, and the scan direction, which potentially enables quasi-instantaneous (within a sub-breathing-cycle) moving target verification during radiation therapy, preferably arc therapy.
Though the phase-matched DTS technique can provide accurate target localization under normal scenarios, its accuracy is limited when the patient's on-board breathing exhibits large variations in motion amplitude. In addition, limited-angle acquisition leads to severe structural distortions in DTS images reconstructed by the current clinical gold-standard Feldkamp-Davis-Kress (FDK) reconstruction algorithm, which prohibit accurate target deformation tracking, delineation and dose calculation.
To solve the above issues, the dissertation further developed a prior knowledge based image estimation technique to fundamentally change the landscape of limited-angle based imaging. The developed motion modeling and free-form deformation (MM-FD) method estimates high quality on-board 4D-CBCT images by applying deformation field maps to existing prior planning 4D-CT images. The deformation field maps are solved in two steps: first, a principal component analysis based motion model is built using the planning 4D-CT (motion modeling), and the deformation field map is constructed as an optimized linear combination of the extracted motion modes. Second, with the coarse deformation field maps obtained from motion modeling, a fine-tuning process called free-form deformation is applied to correct the residual errors of motion modeling. Using the XCAT phantom, a lung patient with a 30 mm diameter tumor was simulated to have various anatomical and respiratory variations from the planning 4D-CT to on-board 4D-CBCTs, including respiration amplitude variations, tumor size variations, tumor average position variations, and phase shifts between tumor and body respiratory cycles. The tumors were contoured in both the estimated and the `ground-truth' on-board 4D-CBCTs for comparison. 3D volume percentage error (VPE) and center-of-mass error (COME) were calculated to evaluate the estimation accuracy of the MM-FD technique. For all simulated patient scenarios, the average (± S.D.) VPE / COME of the tumor in the prior image without image estimation was 136.11% (± 42.76%) / 15.5 mm (± 3.9 mm). Using an orthogonal-view 30° scan angle, the average VPE/COME of the tumors in the MM-FD estimated on-board images was substantially reduced to 5.22% (± 2.12%) / 0.5 mm (± 0.4 mm).
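The two-step estimation can be caricatured in 1D: a coarse DVF is found as a linear combination of a few motion modes (here by brute-force grid search rather than the dissertation's projection-domain optimization), then greedy per-voxel free-form corrections refine the residual. Everything below is an illustrative toy, not the actual MM-FD implementation:

```python
def warp1d(img, dvf):
    """Linear interpolation of img at positions i + dvf[i], clamped at borders."""
    n = len(img)
    out = []
    for i, d in enumerate(dvf):
        p = min(max(i + d, 0.0), n - 1.0)
        lo = int(p)
        hi = min(lo + 1, n - 1)
        f = p - lo
        out.append((1 - f) * img[lo] + f * img[hi])
    return out

def sse(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

def mm_fd(prior, onboard, modes, coeff_grid, ffd_iters=20, ffd_step=0.1):
    """Step 1 (motion modeling): grid search over coefficients of two
    motion modes.  Step 2 (free-form deformation): greedy per-voxel
    refinement of the residual DVF, accepting only improving moves."""
    n = len(prior)
    best_dvf = [0.0] * n
    best_cost = sse(warp1d(prior, best_dvf), onboard)
    for c1 in coeff_grid:              # two modes assumed in this sketch
        for c2 in coeff_grid:
            dvf = [c1 * m1 + c2 * m2 for m1, m2 in zip(modes[0], modes[1])]
            cost = sse(warp1d(prior, dvf), onboard)
            if cost < best_cost:
                best_dvf, best_cost = dvf, cost
    mm_cost = best_cost
    dvf = list(best_dvf)
    for _ in range(ffd_iters):
        for i in range(n):
            for delta in (ffd_step, -ffd_step):
                trial = list(dvf)
                trial[i] += delta
                cost = sse(warp1d(prior, trial), onboard)
                if cost < best_cost:
                    dvf, best_cost = trial, cost
    return dvf, mm_cost, best_cost
```

The division of labor mirrors the text: the mode coefficients capture the bulk respiratory deformation cheaply, and the free-form pass absorbs whatever the low-dimensional model cannot represent.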
In addition to XCAT simulation, CIRS phantom measurements and actual patient studies were also performed. For these clinical studies, we used the normalized cross-correlation (NCC) as a new similarity metric and developed an updated MMFD-NCC method to improve the robustness of the image estimation technique to the intensity mismatches between CT and CBCT imaging systems. Using 4D-CBCT reconstructed from fully-sampled on-board projections as the `gold-standard', for the CIRS phantom study, the average (± S.D.) VPE / COME of the tumor in the prior image and of the tumors in the MMFD-NCC estimated images was 257.1% (± 60.2%) / 10.1 mm (± 4.5 mm) and 7.7% (± 1.2%) / 1.2 mm (± 0.2 mm), respectively. For three patient cases, the average (± S.D.) VPE / COME of the tumors in the prior images and of the tumors in the MMFD-NCC estimated images was 55.6% (± 45.9%) / 3.8 mm (± 1.9 mm) and 9.6% (± 6.1%) / 1.1 mm (± 0.5 mm), respectively. With the combined benefits of motion modeling and free-form deformation, the MMFD-NCC method achieved highly accurate image estimation under different scenarios.
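NCC is the natural choice for the CT-vs-CBCT intensity mismatch because it is invariant to any positive linear rescaling of one image's intensities; a minimal sketch of the metric:

```python
import math

def ncc(x, y):
    """Normalized cross-correlation between two equally sized images.

    Subtracting the means and dividing by the standard deviations makes
    the score invariant to linear intensity offsets and positive scaling,
    which is why it tolerates CT/CBCT intensity mismatch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den
```

An image compared against any positively rescaled copy of itself scores exactly 1, and against a sign-flipped copy scores −1.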
Another potential benefit of on-board 4D-CBCT imaging is on-board dose calculation and verification. Since MMFD-NCC estimates the on-board 4D-CBCT by deforming prior 4D-CT images, the 4D-CBCT inherently has the same image quality and Hounsfield unit (HU) accuracy as the 4D-CT, and can therefore potentially improve the accuracy of on-board dose verification. Both XCAT and CIRS phantom studies were performed for the dosimetric study. Various inter-fractional variations featuring patient motion pattern changes, tumor size changes and tumor average position changes were simulated from the planning CT to the on-board images. The doses calculated on the on-board CBCTs estimated by MMFD-NCC (MMFD-NCC doses) were compared to the doses calculated on the `gold-standard' on-board images (gold-standard doses). The absolute deviations of the minimum dose (ΔDmin), maximum dose (ΔDmax), mean dose (ΔDmean) and prescription dose coverage (ΔV100%) of the planning target volume (PTV) were evaluated. In addition, 4D on-board treatment dose accumulations were performed using the 4D-CBCT images estimated by MMFD-NCC in the CIRS phantom study. The accumulated doses were compared to those measured using optically stimulated luminescence (OSL) detectors and radiochromic films.
The MMFD-NCC doses matched the gold-standard doses very well. For the XCAT phantom study, the average (± S.D.) ΔDmin, ΔDmax, ΔDmean and ΔV100% (normalized by the prescription dose or the total PTV volume) between the MMFD-NCC PTV doses and the gold-standard PTV doses were 0.3% (± 0.2%), 0.9% (± 0.6%), 0.6% (± 0.4%) and 1.0% (± 0.8%), respectively. Similarly, for the CIRS phantom study, the corresponding values were 0.4% (± 0.8%), 0.8% (± 1.0%), 0.5% (± 0.4%) and 0.8% (± 0.8%), respectively. For the 4D dose accumulation study, the average (± S.D.) absolute dose deviation (normalized by local doses) between the accumulated doses and the OSL-measured doses was 3.0% (± 2.4%), and the average gamma (3%/3 mm) passing rate between the accumulated doses and the radiochromic-film-measured doses was 96.1%. The MMFD-NCC estimated 4D-CBCT enables accurate on-board dose calculation and accumulation for lung radiation therapy under different scenarios, and can potentially be valuable for treatment quality assessment and adaptive radiation therapy.
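The gamma analysis reported above can be illustrated with a simplified brute-force global 2D gamma computation. This is a didactic sketch, not a clinical tool: the shift search uses wrap-around edges as a simplification, and no dose-grid interpolation is performed.

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing_mm=1.0, dose_pct=0.03, dta_mm=3.0):
    """Brute-force global 2D gamma (e.g. 3%/3 mm). For each pixel the
    gamma value is the minimum, over nearby spatial shifts, of
    sqrt((dose diff / dose criterion)^2 + (distance / DTA)^2);
    a pixel passes when gamma <= 1. Returns the passing fraction."""
    ref, ev = np.asarray(ref, float), np.asarray(ev, float)
    dose_crit = dose_pct * ref.max()           # global dose criterion
    r = int(np.ceil(dta_mm / spacing_mm))      # search radius in pixels
    gamma2 = np.full(ref.shape, np.inf)
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            dist = spacing_mm * np.hypot(di, dj)
            if dist > dta_mm:
                continue
            # np.roll wraps at the edges: a simplification for brevity.
            shifted = np.roll(np.roll(ev, di, axis=0), dj, axis=1)
            g2 = ((shifted - ref) / dose_crit) ** 2 + (dist / dta_mm) ** 2
            gamma2 = np.minimum(gamma2, g2)
    return float((np.sqrt(gamma2) <= 1.0).mean())
```

A uniform 2% dose offset passes a 3%/3 mm criterion everywhere, while a 10% offset fails everywhere, which matches the intended behavior of the metric.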
However, a major limitation of the 4D-CBCTs estimated above is that they capture only inter-fractional patient variations, since they are acquired prior to each treatment. Intra-treatment patient variations, which can also affect treatment accuracy, cannot be captured. To address this issue, an aggregated kilovoltage (kV) and megavoltage (MV) imaging scheme was developed to enable intra-treatment imaging. By using the kV and MV projections acquired simultaneously during treatment, the MMFD-NCC method enables 4D-CBCT estimation from combined kV and MV projections.
For all XCAT-simulated patient scenarios, the average (± S.D.) VPE / COME of the tumor in the prior image and of the tumors in the MMFD-NCC estimated images (using kV + open-field MV projections) was 136.11% (± 42.76%) / 15.5 mm (± 3.9 mm) and 4.5% (± 1.9%) / 0.3 mm (± 0.4 mm), respectively. In comparison, the MMFD-NCC estimation using kV + beam's eye view (BEV) MV projections yielded 4.3% (± 1.5%) / 0.3 mm (± 0.3 mm): the kV + BEV MV aggregation can estimate the target as accurately as the kV + open-field MV aggregation. The impact of this study is threefold: (1) the kV and MV projections can be acquired at the same time, cutting the imaging time in half compared with acquisitions using kV projections alone; (2) the kV-MV aggregation enables intra-treatment imaging and target tracking, since the MV projections can be by-products of the treatment beams (BEV MV); and (3) because the BEV MV projections originate from the treatment beams, there is no extra MV imaging dose to the patient.
The 4D-CBCT estimation techniques introduced above are all based on limited-angle acquisition. Although limited-angle acquisition substantially reduces scan time and dose compared with a full-angle scan, it is still not real-time and cannot provide `cine' imaging, i.e., instantaneous imaging with negligible scan time and imaging dose. Cine imaging is important in image-guided radiation therapy practice because respiratory variations can occur quickly and frequently during treatment; for instance, a patient may experience a breathing baseline shift after every respiratory cycle. The limited-angle 4D-CBCT approach still requires a scan time spanning multiple respiratory cycles, and therefore cannot capture such baseline shifts in a timely manner.
To address this issue, building on the previously developed MMFD-NCC method, an AI-FD-NCC method was further developed to enable quasi-cine CBCT imaging using extremely limited-angle (≤6°) projections. Using pre-treatment 4D-CBCTs acquired just before the treatment as prior information, AI-FD-NCC enforces an additional adaptive prior constraint to estimate high-quality `quasi-cine' CBCT images. Two on-board patient scenarios, tumor baseline shift and continuous motion amplitude change, were simulated with the XCAT phantom. Using orthogonal-view 6° projections, the average (± S.D.) VPE / COME of the tumors in the AI-FD-NCC estimated images was 1.3% (± 0.5%) / 0.4 mm (± 0.1 mm) for the baseline shift scenario and 1.9% (± 1.1%) / 0.5 mm (± 0.2 mm) for the amplitude variation scenario. The impact of this study is threefold: first, the quasi-cine CBCT technique enables true real-time volumetric tracking of the tumor and normal tissues; second, it enables real-time dose calculation and accumulation for the tumor and normal tissues; third, the high-quality volumetric images obtained can potentially be used for real-time adaptive radiation therapy.
In summary, this dissertation work uses limited-angle on-board x-ray projections to reconstruct or estimate volumetric images for lung tumor localization, delineation and dose calculation. Limited-angle acquisition reduces imaging dose and scan time and improves mechanical clearance for imaging. Using limited-angle projections enables continuous, sub-respiratory-cycle tumor localization, as validated in the phase-matched DTS study. The combination of prior information, motion modeling, free-form deformation and limited-angle on-board projections enables high-quality on-board 4D-CBCT estimation, as validated by the MM-FD and MMFD-NCC techniques. The high-quality 4D-CBCT can be applied not only for accurate target localization and delineation but also for accurate treatment dose verification, as validated in the dosimetric study. By aggregating kV and MV projections for image estimation, intra-treatment 4D-CBCT imaging was also proposed and its feasibility validated. Finally, the introduction of more accurate prior information and additional adaptive prior-knowledge constraints enables quasi-cine CBCT imaging using extremely limited-angle projections. The dissertation work contributes to on-board lung imaging through multiple approaches, which can benefit future lung image-guided radiation therapy practice.
Item Open Access OPTIMIZATION OF IMAGE GUIDED RADIATION THERAPY USING LIMITED ANGLE PROJECTIONS(2009) Ren, LeiDigital tomosynthesis (DTS) is a quasi-three-dimensional (3D) imaging technique which reconstructs images from a limited angle of cone-beam projections with shorter acquisition time, lower imaging dose, and less mechanical constraint than full cone-beam CT (CBCT). However, DTS images reconstructed by the conventional filtered back projection method have low plane-to-plane resolution, and they do not provide full volumetric information for target localization due to the limited angle of the DTS acquisition.
This dissertation presents the optimization and clinical implementation of image guided radiation therapy using limited-angle projections.
A hybrid multiresolution rigid-body registration technique was developed to automatically register reference DTS images with on-board DTS images to guide patient positioning in radiation therapy. This hybrid technique uses a faster but less accurate static method to achieve an initial registration, followed by a slower but more accurate adaptive method to fine-tune the registration. A multiresolution scheme is employed to further improve registration accuracy, robustness and efficiency. Normalized mutual information is selected as the similarity measure, and the downhill simplex method is used as the search engine. The technique was tested using image data from both an anthropomorphic chest phantom and head-and-neck cancer patients, and the effects of the scan angle and region-of-interest (ROI) size on registration accuracy and robustness were investigated. The average capture ranges in single-axis simulations with a 44° scan angle and a large ROI covering the entire DTS volume were between -31° and +34° for rotations and between -89 and +78 mm for translations in the phantom study, and between -38° and +38° for rotations and between -58 and +65 mm for translations in the patient study.
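The core of such a registration loop, normalized mutual information as the similarity measure driven by a downhill simplex search, can be sketched for a 2D translation-only case. This is an illustrative sketch using SciPy's Nelder-Mead implementation of the simplex method; the function names are hypothetical.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B), estimated from a joint histogram.
    Higher values indicate better alignment; NMI(a, a) = 2."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    return (hx + hy) / hxy

def register_translation(ref, moving, x0=(0.0, 0.0)):
    """Downhill-simplex (Nelder-Mead) search for the 2D translation
    that maximizes NMI between the reference and the shifted image."""
    cost = lambda t: -normalized_mutual_information(
        ref, nd_shift(moving, t, order=1))
    res = minimize(cost, x0, method="Nelder-Mead")
    return res.x
```

The multiresolution scheme of the dissertation would run this search on downsampled images first and use the result to initialize `x0` at the next finer level.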
Additionally, a novel limited-angle CBCT estimation method using a deformation field map was developed to optimally estimate volumetric information of organ deformation for soft tissue alignment in image guided radiation therapy. The deformation field map is solved using prior information, a deformation model, and new projection data. The patient's previous CBCT data serve as the prior information, and the new patient volume to be estimated is modeled as a deformation of the prior volume. The deformation field is solved by minimizing the bending energy while maintaining fidelity to the new projection data, using a nonlinear conjugate gradient method. The new patient CBCT volume is then obtained by deforming the prior CBCT volume according to the solved deformation field. The method was tested for different scan angles in 2D and 3D cases using simulated and real projections of a Shepp-Logan phantom and of liver, prostate and head-and-neck patient data. Hardware acceleration and a multiresolution scheme were used to accelerate the 3D estimation process. Estimation accuracy was evaluated by comparing organ volume, image similarity and pixel value differences between the limited-angle CBCT and full-rotation CBCT images. Results showed that the respiratory motion in the liver patient, the rectum volume change in the prostate patient, and the weight loss and airway volume change in the head-and-neck patient were all accurately estimated in the 60° CBCT images. This estimation method optimally recovers volumetric information from 60° projection images and is both technically and clinically feasible for image guidance in radiation therapy.
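The optimization described here, a data fidelity term plus a bending-energy penalty solved with a nonlinear conjugate gradient method, can be illustrated with a 1D toy analogue that matches a "prior" profile to "measured" data rather than to projections. All names are hypothetical and the setup is deliberately minimal.

```python
import numpy as np
from scipy.optimize import minimize

def bending_energy(dfm):
    """Second-derivative smoothness penalty on a 1D deformation field."""
    return np.sum(np.diff(dfm, n=2) ** 2)

def estimate_dfm(prior, measured, lam=0.1):
    """Toy 1D analogue of the limited-angle CBCT estimation: find the
    per-sample deformation that, applied to the prior profile, matches
    the measured data (fidelity term) while staying smooth (bending
    energy), solved with SciPy's nonlinear conjugate gradient."""
    x = np.arange(prior.size, dtype=float)
    def cost(dfm):
        warped = np.interp(x + dfm, x, prior)  # deform the prior
        return np.sum((warped - measured) ** 2) + lam * bending_energy(dfm)
    res = minimize(cost, np.zeros(prior.size), method="CG",
                   options={"maxiter": 100})
    return res.x
```

In the real 3D problem the fidelity term compares simulated projections of the deformed prior volume against the measured limited-angle projections, but the structure of the objective is the same.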
Item Open Access Predicting 3-D Deformation Field Maps (DFM) based on Volumetric Cine MRI (VC-MRI) and Artificial Neural Networks for On-board 4D Target Tracking(2019) Pham, JonathanOrgan and tumor positions are constantly subject to change due to involuntary movement of the gastrointestinal and respiratory systems. In radiation therapy, accurate and precise anatomical localization is critical for treatment planning and delivery. Localization, prior to and during treatment, is most significant in stereotactic body radiation therapy (SBRT), which aims to aggressively target tumors by delivering high fractional doses to tight planning target volumes (PTV). Inter-fraction uncertainties from therapy-induced anatomical change and/or patient positioning errors can be mitigated with adaptive therapy and on-board four-dimensional (4D) imaging. Intra-fraction uncertainties from involuntary movement, on the other hand, must be minimized using real-time imaging, which enables more advanced treatment delivery techniques such as respiratory gating and target tracking. Currently, no real-time 3-dimensional (3D) MRI tracking exists for on-board MRI-guided radiotherapy; present MRI-guided radiotherapy machines are only capable of on-board two-dimensional (2D) cine MRI. Extending this to real-time 3D MRI would provide plane-to-plane information and greatly improve target localization. The purpose of this thesis is to develop real-time 3D deformation field map (DFM) prediction using volumetric cine MRI (VC-MRI) and an adaptive boosting and multi-layer perceptron neural network (ADMLP-NN) for MRI-guided 4D target tracking.
On-board VC-MRI is modeled as a deformation of a prior 4D-MRI phase, MRIprior, obtained during patient simulation. The DFM that best estimates the VC-MRI is constructed as a weighted linear combination of three major respiratory deformation modes, extracted by principal component analysis (PCA) from the DFMs between MRIprior and the remaining phases. The PCA weighting coefficients are solved from a data fidelity constraint using the on-board 2D cine MRI. The optimized PCA coefficients are tracked over time and used to train the ADMLP-NN to estimate future PCA coefficients from previous ones. The ADMLP-NN uses several identical multi-layer perceptron neural networks with an adaptive boosting decision algorithm to avoid local minima. The predicted PCA coefficients are used to build 3D DFMs for VC-MRI prediction.
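The two algebraic pieces of this pipeline, solving PCA weights from the 2D cine data-fidelity constraint and predicting the next coefficients from their history, can be sketched as follows. The linear autoregressive predictor below is a simple stand-in for the ADMLP-NN, and all names are hypothetical.

```python
import numpy as np

def solve_pca_weights(modes_2d, mean_2d, cine_2d):
    """Solve the PCA weighting coefficients from the on-board 2D cine
    slice via least squares. `modes_2d` holds each motion mode restricted
    to the cine plane (shape (n_modes, n_pix)); the linear model is
    cine ≈ mean + w @ modes."""
    w, *_ = np.linalg.lstsq(modes_2d.T, cine_2d - mean_2d, rcond=None)
    return w

def predict_next_weights(history, order=2):
    """Predict the next PCA coefficient vector from past ones with a
    linear autoregressive fit; a simplified stand-in for the ADMLP-NN.
    `history` has shape (n_steps, n_modes)."""
    X = np.hstack([history[i:len(history) - order + i]
                   for i in range(order)])       # lagged inputs
    y = history[order:]                          # next-step targets
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.hstack(history[-order:]) @ coef    # one-step prediction
```

The real predictor boosts several MLPs rather than fitting a single linear model, but the input/output shapes (past coefficient vectors in, next coefficient vector out) are the same.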
This method was evaluated using 4D extended cardiac-torso (XCAT) digital phantom simulations of lung cancer patients. Motion was simulated in the anterior-posterior and superior-inferior directions based on patient-specific Real-time Position Management (RPM) curves. Predicted PCA coefficient accuracy was evaluated against the estimated PCA coefficients using normalized cross-correlation (NCC) and normalized root-mean-squared error (NRMSE). Predicted VC-MRIs were evaluated against ground-truth VC-MRIs using the volume percent difference (VPD), volume Dice coefficient (VDC), and center-of-mass shift (COMS). The effects of ADMLP-NN parameter variations (number of input neurons, number of hidden neurons, number of MLP-NNs, cost function threshold, and prediction step size) on VC-MRI prediction accuracy were evaluated, as were the effects of breathing pattern changes between the 4D MRI simulation and the on-board 2D cine MRI.
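The geometric evaluation metrics can be computed directly from binary tumor masks. The sketch below uses one common set of definitions; the exact VPD normalization used in the thesis may differ.

```python
import numpy as np

def volume_percent_difference(pred, gt):
    """VPD: non-overlapping volume as a percentage of the ground-truth
    volume (one common definition)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 100.0 * np.logical_xor(pred, gt).sum() / gt.sum()

def dice_coefficient(pred, gt):
    """Volume Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def center_of_mass_shift(pred, gt, spacing=1.0):
    """COMS: Euclidean distance between binary-mask centroids,
    scaled by the voxel spacing (mm)."""
    c1 = np.array(np.nonzero(pred)).mean(axis=1)
    c2 = np.array(np.nonzero(gt)).mean(axis=1)
    return float(np.linalg.norm((c1 - c2) * spacing))
```

For identical masks VPD is 0, Dice is 1, and COMS is 0; a one-voxel shift of the mask yields a COMS of one voxel spacing.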
Among all RPM signals examined, when no breathing pattern change occurred between the prior 4D MRI and the on-board 2D cine MRI, the average predicted VPD, VDC, and COMS were 17.50 ± 2.85%, 0.92 ± 0.02, and 1.08 ± 0.44 mm, respectively. Prediction accuracy decreased when the breathing amplitude increased, but remained the same or improved when the breathing amplitude decreased between the prior 4D MRI and the on-board 2D cines. These results demonstrate the feasibility and robustness of using the ADMLP-NN to predict deformation field maps for VC-MRI prediction for on-board target localization during radiotherapy treatments.
Item Open Access Real-time Target Tracking in Fluoroscopy Imaging using Unet with Convolutional LSTM(2020) Peng, TengyaTarget localization precision is crucial for the treatment outcome of radiation therapy. In lung stereotactic body radiation therapy (SBRT), verifying target motion in real-time 2D fluoroscopy images is often used as a vital tool to ensure adequate coverage of the target volume before treatment delivery starts. However, accurate target localization in 2D fluoroscopy images is very challenging due to the overlapping anatomical structures in the projection images. The localization is often performed visually by physicians and physicists, a subjective process that depends on the experience of the clinician. In this work, we developed a deep learning network for automatic target localization to improve the efficiency and robustness of the process. Specifically, the network adopts a Unet architecture with a coarse-to-fine structure. In addition, we innovatively incorporate convolutional Long Short-Term Memory (LSTM) layers into the network to exploit the temporal correlation between fluoroscopy frames. A generative adversarial training method was used to further improve localization accuracy, and a hybrid loss was used to improve feature learning during training. The model was tested on a large dataset generated with the digital XCAT phantom, in which various patient sizes, respiratory amplitudes, and tumor sizes and locations were simulated to test the accuracy and robustness of the method. On the full test set, the model achieved an IoU of 0.92 and center-of-mass differences of 0.16 cm and 0.07 cm in the vertical and horizontal directions; on a specific subset of samples, the IoU reached 0.98 with center-of-mass differences of 0.03 cm and 0.007 cm.
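The convolutional LSTM layer at the heart of the temporal modeling can be illustrated with a single-channel forward step in NumPy. This is a didactic sketch only; the actual network uses learned multi-channel kernels inside a Unet and is trained adversarially.

```python
import numpy as np
from scipy.signal import convolve2d

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h, c, Wx, Wh, b):
    """One forward step of a single-channel ConvLSTM cell. `Wx` and `Wh`
    each hold one 3x3 kernel per gate (input, forget, output, candidate);
    'same'-padded convolutions keep the hidden state at image resolution,
    so the cell carries spatially structured memory across frames."""
    gates = [convolve2d(x, Wx[k], mode="same")
             + convolve2d(h, Wh[k], mode="same") + b[k]
             for k in range(4)]
    i, f, o = sigmoid(gates[0]), sigmoid(gates[1]), sigmoid(gates[2])
    g = np.tanh(gates[3])          # candidate state
    c_new = f * c + i * g          # gated memory update
    h_new = o * np.tanh(c_new)     # new hidden state
    return h_new, c_new
```

Unlike a fully connected LSTM, every gate here is a convolution, so the recurrence preserves the spatial layout of the fluoroscopy frame while accumulating motion history.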
In summary, our results demonstrate the feasibility of using this deep learning network for real-time target tracking in fluoroscopy images, which will be crucial for target verification before or during lung SBRT treatments.
Item Open Access Respiratory motion prediction based on 4D-CT/CBCT using deep learning(2019) Teng, XinzhiPurpose: To investigate the feasibility of using a convolutional neural network (CNN) to register the phase-to-phase deformation vector field (DVF) of lung 4D computed tomography (CT) / cone-beam computed tomography (CBCT).
Methods: A convolutional neural network (CNN) based deep learning method was built to directly estimate the DVF between individual phase images of patient 4D-CT or 4D-CBCT, for 4D contouring, dose accumulation or target verification. The input consists of image patch pairs, and the output is the corresponding DVF that registers the pairs. The network consists of four convolutional layers, two average pooling layers and two fully connected layers, with half mean squared error as the loss function. The centers of the patch pairs are uniformly chosen across the lung, and the number of samples is chosen to cover the majority of the deformation vector range. The method was evaluated using 4D image volumes from 9 lung cancer patients, including 4D-CT, simulated 4D-CBCT reconstructed from digitally reconstructed radiographs (DRRs), and real 4D-CBCT reconstructed from real projections. In the intra-patient study, the image volumes were sorted into different combinations: (1) training and testing samples from the same 4D-CT image volume; (2) training and testing samples from two 4D-CT volumes; (3) training and testing samples from 4D-CBCT volumes simulated by DRR from 4D-CT volumes; (4) training from 4D-CT and testing from 4D-CBCT reconstructed from primary projections; and (5) training and testing samples from two 4D-CBCT volumes reconstructed from primary projections. In the inter-patient study, five 4D-CT volumes from five patients were used as the training set and a sixth patient's 4D-CT volume as the testing set, to test how well a trained network adapts to a new patient's anatomy. The correlation coefficients between the predicted and ground-truth DVFs were calculated, along with the cross-correlations between the target image and the ground-truth deformed image and between the target image and the predicted deformed image. The registered images from the predicted and ground-truth DVFs were reconstructed and compared.
The number of samples and the patch size were limited by the available physical memory.
Results: The ratio of the cross-correlation between the predicted deformed image and the target image to the cross-correlation between the ground-truth deformed image and the target image was 0.78 averaged over the intra-patient studies; for the inter-patient study, this ratio was 0.62. Comparing the predicted deformed images with the ground-truth images, major features such as the diaphragm, lumen and main vessels matched well.
Conclusion: The CNN-based regression model successfully learned the DVF from one image set, and the trained model can be transferred to another data set, provided the training sets have high image quality and both image sets share similar anatomic structures.
Item Open Access Robust 4D-MRI Sorting with Reduced Artifacts Based on Anatomic Feature Matching(2018) Yang, ZiPurpose: Motion artifacts induced by breathing variations are common in 4D-MRI images. This study aims to reduce motion artifacts by developing a novel, robust 4D-MRI sorting method based on anatomic feature matching, applicable to both cine and sequential acquisition.
Method: The proposed method uses the diaphragm as the anatomic feature to guide the sorting of 4D-MRI images. Initially, both abdominal 2D sagittal cine MRI images and axial MRI images (in both axial cine and sequential scanning modes) were acquired. The sagittal cine MRI images were divided into 10 phases as the ground truth. Next, the phase of each axial MRI image was determined by matching the diaphragm position in the intersection plane between the axial MRI and the ground-truth cine MRI. The phase-matched axial MRI images were then sorted into 10 phase bins identical to the ground-truth cine images, and 10-phase 4D-MRI images were reconstructed from the sorted axial images. The accuracy of the reconstructed 4D-MRI data was evaluated in a simulation study using the 4D extended cardiac-torso (XCAT) digital phantom with a spherical tumor in the liver. The effects of the breathing signal, both regular (cosine function) and irregular (patient data), on reconstruction accuracy were investigated by calculating the total relative error (TRE) of the 4D volumes, and the volume percent difference (VPD) and center-of-mass shift (COMS) of the simulated tumor between the reconstructed and ground-truth images.
Results: In both scanning modes, the reconstructed 4D-MRI images matched well with the ground truth except for minimal motion artifacts. The average TRE of the 4D volume, VPD and COMS of the EOE phase in both scanning modes were 0.32% / 1.20% / ±0.05 mm for regular breathing and 1.13% / 4.26% / ±0.21 mm for irregular patient breathing, respectively.
Conclusion: These preliminary results illustrate the robustness of the new 4D-MRI sorting method based on anatomic feature matching. The method improves image quality with reduced motion artifacts, and is applicable to axial MR images acquired in both cine and sequential scanning modes.
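The phase-assignment step, matching each axial image to the cine phase with the closest diaphragm position in the shared intersection plane, can be sketched as below. This is a simplified version: it ignores the inhale/exhale direction ambiguity, which the full method resolves using the temporal context of the cine series.

```python
import numpy as np

def assign_phases(axial_diaphragm, cine_phase_diaphragm):
    """Assign each axial MRI image to the respiratory phase bin whose
    ground-truth cine diaphragm position is closest to the diaphragm
    position measured in the axial image's intersection plane.
    Returns an integer phase index per axial image."""
    axial = np.asarray(axial_diaphragm, float)[:, None]
    phases = np.asarray(cine_phase_diaphragm, float)[None, :]
    # Nearest-neighbor match on diaphragm position (broadcasted).
    return np.argmin(np.abs(axial - phases), axis=1)
```

With the phase index in hand, axial images sharing a phase bin are stacked to form one volume per phase, yielding the 10-phase 4D-MRI.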
Item Open Access Scatter Correction for Dual-source Cone-beam CT Using the Pre-patient Grid(2014) Chen, YingxuanPurpose: A variety of cone-beam CT (CBCT) systems have been used in the clinic for image guidance in interventional radiology and radiation therapy. Compared with conventional single-source CBCT, dual-source CBCT has the potential for dual-energy imaging and faster scanning. However, it adds cross-scatter compared to a single-source CBCT system, which degrades image quality. Previously, we developed a synchronized moving grid (SMOG) system to reduce and correct scatter for a single-source CBCT system. The purpose of this work is to implement the SMOG system on a prototype dual-source CBCT system and to investigate its efficacy in scatter reduction and correction under various image acquisition settings.
Methods: A 1-D grid was attached to each x-ray source during dual-source CBCT imaging to acquire partially blocked projections. As the grid partially blocked the primary x-ray beam and divided it into multiple quasi-fan beams during the scan, it produced a physical scatter reduction effect in the projections. Phantom data were acquired in the unblocked areas, while the scatter signal was measured from the blocked areas of the projections. The scatter distribution was estimated from the measured scatter signals using cubic spline interpolation for post-scan scatter correction. Complementary partially blocked projections were acquired at each scan angle by positioning the grid at different locations, and were merged to obtain full projections for reconstruction. In this study, three sets of CBCT images were reconstructed from projections acquired (a) without the grid, (b) with the grid but without scatter correction, and (c) with the grid and with scatter correction, to evaluate the effects of scatter reduction and scatter correction on artifact reduction and on improvements in the contrast-to-noise ratio index (CNR') and CT number accuracy. The efficacy of the scatter reduction and correction method was evaluated for CATphan phantoms of different diameters (15 cm, 20 cm, and 30 cm), different grids (blocking ratios of 1:1 and 2:1), different acquisition modes (simultaneous: both tubes firing at the same time; interleaved: tubes firing alternately; sequential: only one tube firing per rotation) and different reconstruction algorithms (an iterative reconstruction method vs. the Feldkamp-Davis-Kress (FDK) back projection method).
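The scatter estimation step, interpolating the scatter measured under the blocked strips across the open areas, can be sketched for one detector row. This is an illustrative sketch with hypothetical names; `u` is the detector coordinate, and the blocked pixels are assumed to contain (nearly) pure scatter signal.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def estimate_scatter_profile(u, projection, blocked_mask):
    """Estimate the scatter distribution across a detector row: the
    signal in grid-blocked pixels is treated as pure scatter, a cubic
    spline is fitted through those samples, and the scatter under the
    open areas is obtained by interpolation. The corrected primary
    signal is then (measured - estimated scatter) in the open pixels."""
    spline = CubicSpline(u[blocked_mask], projection[blocked_mask])
    return spline(u)
```

Because scatter varies smoothly across the detector while the primary signal does not, a low-order smooth interpolant through the blocked-strip samples recovers the scatter background well.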
Results: The simultaneous scanning mode had the most severe scatter artifacts and the most degraded CNR' compared with either the interleaved or the sequential mode, due to the cross-scatter between the two x-ray sources in the simultaneous mode. Scatter artifacts were substantially reduced by scatter reduction and correction. With scatter reduction alone and a 1:1 grid, the CNR' of the different inserts in the CATphan was enhanced on average by 24%, 13%, and 33% for phantom sizes of 15 cm, 20 cm, and 30 cm, respectively; with both scatter reduction and correction, the corresponding enhancements were 34%, 18%, and 11%. However, CNR' may decrease with scatter correction alone for the larger phantom and low-contrast ROIs, because of the increase in noise after scatter correction. In addition, the reconstructed HU numbers were linearly correlated with the nominal HU numbers. A higher grid blocking ratio, i.e. a greater blocked area, resulted in better scatter artifact removal and CNR' improvement, at the cost of increased complexity and number of exposures. Iterative reconstruction with total variation regularization achieved better noise reduction and enhanced CNR' compared with the FDK method.
Conclusion: Our method with a pre-patient grid can effectively reduce scatter artifacts, enhance CNR', and modestly improve CT number linearity for the dual-source CBCT system. Settings such as the grid blocking ratio and acquisition mode can be optimized for the patient-specific condition to further improve image quality.